Sample records for model updating techniques

  1. Numerical model updating technique for structures using firefly algorithm

    NASA Astrophysics Data System (ADS)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique for updating numerical models of structures in civil, mechanical, automotive, marine, aerospace, and related engineering fields. The basic concept behind this technique is to update the numerical model until it closely matches experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as an optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model with the experimental results obtained. The variables for the updating can be material properties of the model, geometrical properties, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that the experimental and numerical models can be brought into close agreement.
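
    The abstract describes the workflow without algorithmic detail. The snippet below is a minimal, illustrative Python sketch (not the authors' code) of firefly-style updating for a one-parameter problem; the closed-form cantilever tip-deflection model and the numeric values of P, L, I and the measured deflection are hypothetical stand-ins.

    ```python
    import numpy as np

    def firefly_update(objective, bounds, n_fireflies=20, n_iter=100,
                       beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
        """Minimize `objective` over box `bounds` with a basic firefly algorithm."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        x = rng.uniform(lo, hi, size=(n_fireflies, len(lo)))
        f = np.apply_along_axis(objective, 1, x)
        for _ in range(n_iter):
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if f[j] < f[i]:  # firefly j is brighter (lower misfit)
                        r2 = np.sum((x[i] - x[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        x[i] += beta * (x[j] - x[i]) + alpha * rng.uniform(-0.5, 0.5, len(lo))
                        x[i] = np.clip(x[i], lo, hi)
                        f[i] = objective(x[i])
        best = np.argmin(f)
        return x[best], f[best]

    # Hypothetical use: update Young's modulus E so the numerical tip deflection
    # delta = P*L^3 / (3*E*I) matches a "measured" value (all numbers invented).
    P, L, I, measured = 100.0, 2.0, 8e-6, 0.004
    objective = lambda p: (P * L**3 / (3 * p[0] * I) - measured) ** 2
    E_best, misfit = firefly_update(objective, bounds=[(1e10, 3e11)])
    ```

    Each firefly is one candidate parameter set; brighter (lower-misfit) fireflies attract the rest, while the random term preserves exploration of the search space.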

  2. Imputation and Model-Based Updating Techniques for Annual Forest Inventories

    Treesearch

    Ronald E. McRoberts

    2001-01-01

    The USDA Forest Service is developing an annual inventory system to establish the capability of producing annual estimates of timber volume and related variables. The inventory system features measurement of an annual sample of field plots with options for updating data for plots measured in previous years. One imputation and two model-based updating techniques are...

  3. Comparison of two optimization algorithms for fuzzy finite element model updating for damage detection in a wind turbine blade

    NASA Astrophysics Data System (ADS)

    Turnbull, Heather; Omenzetter, Piotr

    2018-03-01

    Difficulties associated with current health monitoring and inspection practices, combined with the harsh, often remote, operational environments of wind turbines, highlight the requirement for a non-destructive evaluation system capable of remotely monitoring the current structural state of turbine blades. This research adopted a physics-based structural health monitoring methodology through calibration of a finite element model using inverse techniques. A 2.36 m blade from a 5 kW turbine was used as an experimental specimen, with operational modal analysis techniques utilised to obtain the modal properties of the system. Modelling the experimental responses as fuzzy numbers using the sub-level technique, uncertainty in the response parameters was propagated back through the model and into the updating parameters. Initially, experimental responses of the blade were obtained, and a numerical model of the blade was created and updated. Deterministic updating was carried out through formulation and minimisation of a deterministic objective function using both the firefly algorithm and the virus optimisation algorithm. Uncertainty in experimental responses was modelled using triangular membership functions, allowing membership functions of the updating parameters (Young's modulus and shear modulus) to be obtained. The firefly algorithm and virus optimisation algorithm were again utilised, this time in the solution of fuzzy objective functions. This enabled the uncertainty associated with the updating parameters to be quantified. Varying damage location and severity were simulated experimentally through the addition of small masses to the structure intended to cause a structural alteration. A damaged model was created, modelling four variable-magnitude nonstructural masses at predefined points, and updated to provide a deterministic damage prediction and information on the parameters' uncertainty via fuzzy updating.

  4. Numerical modeling and model updating for smart laminated structures with viscoelastic damping

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Zhan, Zhenfei; Liu, Xu; Wang, Pan

    2018-07-01

    This paper presents a numerical modeling method combined with model updating techniques for the analysis of smart laminated structures with viscoelastic damping. Starting with finite element formulation, the dynamics model with piezoelectric actuators is derived based on the constitutive law of the multilayer plate structure. The frequency-dependent characteristics of the viscoelastic core are represented utilizing the anelastic displacement fields (ADF) parametric model in the time domain. The analytical model is validated experimentally and used to analyze the influencing factors of kinetic parameters under parametric variations. Emphasis is placed upon model updating for smart laminated structures to improve the accuracy of the numerical model. Key design variables are selected through the smoothing spline ANOVA statistical technique to mitigate the computational cost. This updating strategy not only corrects the natural frequencies but also improves the accuracy of damping prediction. The effectiveness of the approach is examined through an application problem of a smart laminated plate. It is shown that a good consistency can be achieved between updated results and measurements. The proposed method is computationally efficient.

  5. Investigating the Impact on Modeled Ozone Concentrations Using Meteorological Fields From WRF With an Updated Four-Dimensional Data Assimilation Approach

    EPA Science Inventory

    The four-dimensional data assimilation (FDDA) technique in the Weather Research and Forecasting (WRF) meteorological model has recently undergone an important update from the original version. Previous evaluation results have demonstrated that the updated FDDA approach in WRF pr...

  6. DUAL STATE-PARAMETER UPDATING SCHEME ON A CONCEPTUAL HYDROLOGIC MODEL USING SEQUENTIAL MONTE CARLO FILTERS

    NASA Astrophysics Data System (ADS)

    Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin

    Data assimilation techniques have been widely applied to improve the predictability of hydrologic modeling. Among various data assimilation techniques, sequential Monte Carlo (SMC) filters, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated through the implementation of the storage function model on a medium-sized Japanese catchment. We also compare performance results of DUS combined with various SMC methods, such as SIR, ASIR and RPF.
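
    As a rough illustration of the dual state-parameter idea (a sketch under assumptions, not the paper's implementation), the snippet below runs one SIR particle-filter cycle with kernel-smoothed parameter evolution; the linear-reservoir stand-in f, the rating function h, and all noise levels are invented for the example.

    ```python
    import numpy as np

    def sir_dual_update(particles, params, obs, f, h, obs_std, a=0.98, rng=None):
        """One SIR filter cycle that updates states and parameters jointly.

        particles: (N,) state ensemble; params: (N,) parameter ensemble;
        f: state transition f(x, theta); h: observation operator h(x).
        Kernel smoothing (shrinkage `a`) keeps parameter spread from collapsing.
        """
        rng = rng or np.random.default_rng()
        # Kernel-smoothing step: shrink parameters toward their mean, add jitter.
        m, v = params.mean(), params.var()
        params = a * params + (1 - a) * m + rng.normal(0, np.sqrt((1 - a**2) * v), params.size)
        # Propagate states through the (hydrologic) model with process noise.
        particles = f(particles, params) + rng.normal(0, 0.05, particles.size)
        # Weight by observation likelihood and resample (SIR).
        w = np.exp(-0.5 * ((obs - h(particles)) / obs_std) ** 2)
        w += 1e-300                      # guard against total underflow
        w /= w.sum()
        idx = rng.choice(particles.size, particles.size, p=w)
        return particles[idx], params[idx]

    # Illustrative linear-reservoir stand-in for the storage function model:
    f = lambda s, k: s - k * s + 1.0     # storage update with unit inflow
    h = lambda s: 0.3 * s                # assumed rating: discharge from storage
    ```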

  7. Damage severity assessment in wind turbine blade laboratory model through fuzzy finite element model updating

    NASA Astrophysics Data System (ADS)

    Turnbull, Heather; Omenzetter, Piotr

    2017-04-01

    The recent shift towards development of clean, sustainable energy sources has provided a new challenge in terms of structural safety and reliability: with aging, manufacturing defects, harsh environmental and operational conditions, and extreme events such as lightning strikes, wind turbines can become damaged, resulting in production losses and environmental degradation. To monitor the current structural state of the turbine, structural health monitoring (SHM) techniques would be beneficial. Physics-based SHM in the form of calibration of a finite element model (FEM) by inverse techniques is adopted in this research. Fuzzy finite element model updating (FFEMU) techniques for damage severity assessment of a small-scale wind turbine blade are discussed and implemented. The main advantage is the ability of FFEMU to account in a simple way for uncertainty within the problem of model updating. Uncertainty quantification techniques, such as fuzzy sets, enable a convenient mathematical representation of the various uncertainties. Experimental frequencies obtained from modal analysis on a small-scale wind turbine blade were described by fuzzy numbers to model measurement uncertainty. During this investigation, damage severity estimation was investigated through the addition of small masses of varying magnitude to the trailing edge of the structure. This structural modification, intended to be in lieu of damage, enabled non-destructive experimental simulation of structural change. A numerical model was constructed with multiple variable additional masses simulated upon the blade's trailing edge and used as updating parameters. Objective functions for updating were constructed and minimized using both the particle swarm optimization algorithm and the firefly algorithm. FFEMU was able to obtain a prediction of the baseline material properties of the blade whilst also successfully predicting, with sufficient accuracy, a larger magnitude of structural alteration and its location.

  8. Sequential updating of multimodal hydrogeologic parameter fields using localization and clustering techniques

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta

    2009-07-01

    Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
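
    The core of any such scheme is the ensemble Kalman analysis step. Below is a minimal stochastic-EnKF sketch in Python; the paper's grid-based localization and GMM clustering are only noted in comments, and the observation map H, the noise model, and all shapes are illustrative assumptions.

    ```python
    import numpy as np

    def enkf_analysis(X, y, H, obs_std, rng=None):
        """Stochastic ensemble Kalman filter analysis step.

        X: (n_param, n_ens) ensemble of (e.g., log-conductivity) parameters;
        y: (n_obs,) observations; H: (n_obs, n_param) linearized observation map.
        The paper's localization and GMM clustering would wrap this core update;
        they are omitted here for brevity.
        """
        rng = rng or np.random.default_rng()
        n_obs, n_ens = y.size, X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)              # ensemble anomalies
        HA = H @ A
        P_yy = HA @ HA.T / (n_ens - 1) + obs_std**2 * np.eye(n_obs)
        P_xy = A @ HA.T / (n_ens - 1)
        K = P_xy @ np.linalg.solve(P_yy, np.eye(n_obs))    # Kalman gain
        Y = y[:, None] + rng.normal(0, obs_std, (n_obs, n_ens))  # perturbed obs
        return X + K @ (Y - H @ X)
    ```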

  9. Update to core reporting practices in structural equation modeling.

    PubMed

    Schreiber, James B

    This paper is a technical update to "Core Reporting Practices in Structural Equation Modeling" [1]. As such, the content covered in this paper includes sample size, missing data, specification and identification of models, estimation method choices, fit and residual concerns, nested, alternative, and equivalent models, and unique issues within the SEM family of techniques. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Identification of material parameters for plasticity models: A comparative study on the finite element model updating and the virtual fields method

    NASA Astrophysics Data System (ADS)

    Martins, J. M. P.; Thuillier, S.; Andrade-Campos, A.

    2018-05-01

    The identification of material parameters, for a given constitutive model, can be seen as the first step before any practical application. In recent years, the field of material parameter identification has received an important boost with the development of full-field measurement techniques, such as Digital Image Correlation. These techniques enable the use of heterogeneous displacement/strain fields, which contain more information than the classical homogeneous tests. Consequently, different techniques have been developed to extract material parameters from full-field measurements. In this study, two of these techniques are addressed: the Finite Element Model Updating (FEMU) and the Virtual Fields Method (VFM). The main idea behind FEMU is to update the parameters of a constitutive model implemented in a finite element model until numerical and experimental results match, whereas VFM makes use of the Principle of Virtual Work and does not require any finite element simulation. Though both techniques have proved their feasibility for linear and non-linear constitutive models, it is rather difficult to rank their robustness in plasticity. The purpose of this work is to perform a comparative study in the case of elasto-plastic models. Details concerning the implementation of each strategy are presented. Moreover, a dedicated code for VFM within a large strain framework is developed. The reconstruction of the stress field is performed through a user subroutine. A heterogeneous tensile test is considered to compare the FEMU and VFM strategies.
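
    To make the FEMU idea concrete, here is a schematic least-squares updating loop (an illustrative sketch, not the authors' code): run_fe_model is a hypothetical callable wrapping an FE solve, and the hardening-law example in the comments is an assumption.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def femu_identify(run_fe_model, measured_strains, p0, bounds):
        """Finite Element Model Updating: adjust constitutive parameters until
        simulated full-field strains match the DIC measurements.

        run_fe_model(p) stands in for a call to an FE code (e.g., a script
        driving an external solver) returning strains at the DIC points.
        """
        residual = lambda p: (run_fe_model(p) - measured_strains).ravel()
        sol = least_squares(residual, p0, bounds=bounds)
        return sol.x

    # Hypothetical use for a two-parameter hardening law sigma = K * eps**n:
    # p = [K, n]; run_fe_model re-runs the simulation at each iterate, so the
    # cost is dominated by FE solves, in contrast to VFM which avoids them.
    ```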

  11. Model updating strategy for structures with localised nonlinearities using frequency response measurements

    NASA Astrophysics Data System (ADS)

    Wang, Xing; Hill, Thomas L.; Neild, Simon A.; Shaw, Alexander D.; Haddad Khodaparast, Hamed; Friswell, Michael I.

    2018-02-01

    This paper proposes a model updating strategy for localised nonlinear structures. It utilises an initial finite-element (FE) model of the structure and primary harmonic response data taken from low and high amplitude excitations. The underlying linear part of the FE model is first updated using low-amplitude test data with established techniques. Then, using this linear FE model, the nonlinear elements are localised, characterised, and quantified with primary harmonic response data measured under stepped-sine or swept-sine excitations. Finally, the resulting model is validated by comparing the analytical predictions with both the measured responses used in the updating and with additional test data. The proposed strategy is applied to a clamped beam with a nonlinear mechanism and good agreements between the analytical predictions and measured responses are achieved. Discussions on issues of damping estimation and dealing with data from amplitude-varying force input in the updating process are also provided.

  12. Comparative Performance Evaluation of Rainfall-runoff Models, Six of Black-box Type and One of Conceptual Type, From The Galway Flow Forecasting System (gffs) Package, Applied On Two Irish Catchments

    NASA Astrophysics Data System (ADS)

    Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.

    The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software pack- age developed at the Department of Engineering Hydrology, of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and con- ceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall- runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Mois- ture Accounting and Routing (SMAR) Model. Comprised of the above suite of mod- els, the system enables the user to calibrate each model individually, initially without updating, and it is capable also of producing combined (i.e. consensus) forecasts us- ing the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of complexity of structure, with corresponding degrees of complication in objective func- tion evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.

  13. Estimation of beam material random field properties via sensitivity-based model updating using experimental frequency response functions

    NASA Astrophysics Data System (ADS)

    Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.

    2018-03-01

    Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
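
    The discretized Karhunen-Loeve step described above can be sketched in a few lines. The snippet below draws one realization of a correlated random field along the beam axis, assuming an exponential covariance (which the paper does not necessarily use); in updating, the xi coefficients become the parameters to estimate.

    ```python
    import numpy as np

    def kl_expand(x, corr_len, sigma, n_terms, rng=None):
        """Truncated Karhunen-Loeve expansion of a 1-D Gaussian random field.

        x: grid along the beam axis; exponential covariance assumed. Returns
        one realization; in model updating, the xi coefficients are unknowns.
        """
        rng = rng or np.random.default_rng()
        C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
        lam, phi = np.linalg.eigh(C)                  # ascending eigenvalues
        lam, phi = lam[::-1][:n_terms], phi[:, ::-1][:, :n_terms]
        xi = rng.standard_normal(n_terms)             # the updating parameters
        return phi @ (np.sqrt(np.clip(lam, 0, None)) * xi)

    # Example: bending rigidity EI(x) = EI_mean * (1 + field) along the beam.
    x = np.linspace(0.0, 1.0, 101)
    field = kl_expand(x, corr_len=0.3, sigma=0.1, n_terms=5)
    ```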

  14. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2014-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.

  15. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.

  16. Dynamic model updating based on strain mode shape and natural frequency using hybrid pattern search technique

    NASA Astrophysics Data System (ADS)

    Guo, Ning; Yang, Zhichun; Wang, Le; Ouyang, Yan; Zhang, Xinping

    2018-05-01

    Aiming at providing a precise dynamic structural finite element (FE) model for dynamic strength evaluation in addition to dynamic analysis, a dynamic FE model updating method is presented to correct the uncertain parameters of the FE model of a structure using strain mode shapes and natural frequencies. The strain mode shape, which is sensitive to local changes in the structure, is used instead of the displacement mode shape to enhance model updating. The coordinate strain modal assurance criterion is developed to evaluate the correlation level at each coordinate between the experimental and the analytical strain mode shapes. Moreover, the natural frequencies, which provide global information about the structure, are used to guarantee the accuracy of the modal properties of the global model. The weighted summation of the natural frequency residual and the coordinate strain modal assurance criterion residual is then used as the objective function in the proposed dynamic FE model updating procedure. A hybrid genetic/pattern-search optimization algorithm is adopted to perform the dynamic FE model updating procedure. A numerical simulation and a model updating experiment for a clamped-clamped beam are performed to validate the feasibility and effectiveness of the present method. The results show that the proposed method can be used to update the uncertain parameters with good robustness, and that the updated dynamic FE model of the beam structure, which correctly predicts both the natural frequencies and the local dynamic strains, is reliable for subsequent dynamic analysis and dynamic strength evaluation.
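
    The modal assurance criterion machinery referred to above can be written compactly. The following Python sketch gives the standard MAC, a coordinate-wise (COMAC-style) correlation, and a weighted objective of the general form described; the exact weighting and criterion used in the paper may differ.

    ```python
    import numpy as np

    def mac(phi_a, phi_e):
        """Modal assurance criterion between one analytical and one
        experimental mode shape vector."""
        num = np.abs(phi_a.conj() @ phi_e) ** 2
        return (num / ((phi_a.conj() @ phi_a) * (phi_e.conj() @ phi_e))).real

    def comac(Phi_a, Phi_e):
        """Coordinate MAC: per-DOF correlation across the strain modes.

        Phi_a, Phi_e: (n_dof, n_modes) analytical / experimental mode shapes.
        """
        num = np.abs(np.sum(Phi_a * Phi_e, axis=1)) ** 2
        den = np.sum(Phi_a**2, axis=1) * np.sum(Phi_e**2, axis=1)
        return num / den

    def objective(freq_a, freq_e, Phi_a, Phi_e, w_f=1.0, w_s=1.0):
        """Weighted sum of frequency residuals and (1 - COMAC) residuals,
        a stand-in for the paper's exact objective function."""
        r_f = np.sum(((freq_a - freq_e) / freq_e) ** 2)
        r_s = np.sum(1.0 - comac(Phi_a, Phi_e))
        return w_f * r_f + w_s * r_s
    ```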

  17. Model Updating of Complex Structures Using the Combination of Component Mode Synthesis and Kriging Predictor

    PubMed Central

    Li, Yan; Wang, Dejun; Zhang, Shaoyi

    2014-01-01

    Updating the structural model of complex structures is time-consuming due to the large size of the finite element model (FEM). Using conventional methods for these cases is computationally expensive or even impossible. A two-level method, which combines the Kriging predictor and the component mode synthesis (CMS) technique, was proposed to ensure the successful implementation of FEM updating for large-scale structures. In the first level, CMS was applied to build a reasonable condensed FEM of the complex structure. In the second level, the Kriging predictor, which serves as a surrogate FEM in structural dynamics, was generated based on the condensed FEM. Some key issues of the application of the metamodel (surrogate FEM) to FEM updating are also discussed. Finally, the effectiveness of the proposed method was demonstrated by updating the FEM of a real arch bridge with the measured modal parameters. PMID:24634612

  18. Dynamic analysis of I cross beam section dissimilar plate joined by TIG welding

    NASA Astrophysics Data System (ADS)

    Sani, M. S. M.; Nazri, N. A.; Rani, M. N. Abdul; Yunus, M. A.

    2018-04-01

    In this paper, a finite element (FE) joint modelling technique for prediction of the dynamic properties of sheet metal joined by tungsten inert gas (TIG) welding is presented. I cross-section dissimilar flat plates of two series of aluminium alloy, AA7075 and AA6061, joined by TIG welding are used. In order to find the most optimum set of TIG-welded dissimilar plates, finite element models with three types of joint modelling were engaged in this study: bar element (CBAR), beam element and spot weld element connector (CWELD). Experimental modal analysis (EMA) was carried out by impact hammer excitation on the dissimilar plates welded by the TIG method. Modal properties of the FE models with joints were compared and validated against the modal testing. The CWELD element was chosen to represent the weld model for TIG joints due to its accurate prediction of mode shapes and because it contains an updating parameter for weld modelling, in contrast to the other weld models. Model updating was performed to improve the correlation between EMA and FEA; before proceeding to updating, a sensitivity analysis was done to select the most sensitive updating parameters. After performing model updating, the average percentage error of the natural frequencies for the CWELD model improved significantly.

  19. Optimization Based Efficiencies in First Order Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Peck, Jeffrey A.; Mahadevan, Sankaran

    2003-01-01

    This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank one updating technique. In problems that use commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
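
    Broyden's rank-one update itself is a one-liner. The sketch below shows how it would refresh a gradient estimate between FORM iterations using only one new limit-state evaluation per step (illustrative, not the paper's exact formulation).

    ```python
    import numpy as np

    def broyden_update(B, dx, dg):
        """Broyden rank-one update of the Jacobian/gradient estimate B.

        B: current estimate, shape (m, n) (a 1-by-n row when approximating the
        gradient of a scalar limit-state function g); dx: step in the design
        variables, shape (n,); dg: observed change in g over that step.
        """
        dx = np.atleast_1d(dx)
        return B + np.outer(dg - B @ dx, dx) / (dx @ dx)

    # Inside a FORM iteration, only one fresh evaluation of g is needed:
    # B = broyden_update(B, x_new - x_old, g(x_new) - g(x_old))
    # versus n+1 evaluations for a one-sided finite-difference gradient.
    ```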

  20. Rapid Automated Aircraft Simulation Model Updating from Flight Data

    NASA Technical Reports Server (NTRS)

    Brian, Geoff; Morelli, Eugene A.

    2011-01-01

    Techniques to identify aircraft aerodynamic characteristics from flight measurements and compute corrections to an existing simulation model of a research aircraft were investigated. The purpose of the research was to develop a process enabling rapid automated updating of aircraft simulation models using flight data and apply this capability to all flight regimes, including flight envelope extremes. The process presented has the potential to improve the efficiency of envelope expansion flight testing, revision of control system properties, and the development of high-fidelity simulators for pilot training.

  1. A model-updating procedure to simulate piezoelectric transducers accurately.

    PubMed

    Piranda, B; Ballandras, S; Steichen, W; Hecart, B

    2001-09-01

    The use of numerical calculations based on finite element methods (FEM) has yielded significant improvements in the simulation and design of the piezoelectric transducers utilized in acoustic imaging. However, the ultimate precision of such models is directly controlled by the accuracy of material characterization. The present work is dedicated to the development of a model-updating technique adapted to the problem of piezoelectric transducers. The updating process is applied using the experimental admittance of a given structure for which a finite element analysis is performed. The mathematical developments are reported and then applied to update the entries of a FEM of a two-layer structure (a PbZrTi-PZT-ridge glued on a backing) for which measurements were available. The efficiency of the proposed approach is demonstrated, yielding the definition of a new set of constants well adapted to predict the structure response accurately. An improvement of the proposed approach, consisting of updating the material coefficients not only on the admittance but also on the impedance data, is finally discussed.

  2. DAMIT: a database of asteroid models

    NASA Astrophysics Data System (ADS)

    Durech, J.; Sidorin, V.; Kaasalainen, M.

    2010-04-01

    Context. Apart from a few targets that were directly imaged by spacecraft, remote sensing techniques are the main source of information about the basic physical properties of asteroids, such as the size, the spin state, or the spectral type. The most widely used observing technique - time-resolved photometry - provides us with data that can be used for deriving asteroid shapes and spin states. In the past decade, inversion of asteroid lightcurves has led to more than a hundred asteroid models. In the next decade, when data from all-sky surveys are available, the number of asteroid models will increase. Combining photometry with, e.g., adaptive optics data produces more detailed models. Aims: We created the Database of Asteroid Models from Inversion Techniques (DAMIT) with the aim of providing the astronomical community access to reliable and up-to-date physical models of asteroids - i.e., their shapes, rotation periods, and spin axis directions. Models from DAMIT can be used for further detailed studies of individual objects, as well as for statistical studies of the whole set. Methods: Most DAMIT models were derived from photometric data by the lightcurve inversion method. Some of them have been further refined or scaled using adaptive optics images, infrared observations, or occultation data. A substantial number of the models were derived also using sparse photometric data from astrometric databases. Results: At present, the database contains models of more than one hundred asteroids. For each asteroid, DAMIT provides the polyhedral shape model, the sidereal rotation period, the spin axis direction, and the photometric data used for the inversion. The database is updated when new models are available or when already published models are updated or refined. We have also released the C source code for the lightcurve inversion and for the direct problem (updates and extensions will follow).

  3. Automatic domain updating technique for improving computational efficiency of 2-D flood-inundation simulation

    NASA Astrophysics Data System (ADS)

    Tanaka, T.; Tachikawa, Y.; Ichikawa, Y.; Yorozu, K.

    2017-12-01

    Flooding is one of the most hazardous disasters and causes serious damage to people and property around the world. To prevent and mitigate flood damage through early warning systems and/or river management planning, numerical modelling of flood-inundation processes is essential. In the literature, flood-inundation models have been extensively developed and improved to achieve flood flow simulation with complex topography at high resolution. With increasing demands on flood-inundation modelling, its computational burden is now one of the key issues. Improvements to the computational efficiency of the full shallow water equations have been made from various perspectives, such as approximations of the momentum equations, parallelization techniques, and coarsening approaches. To complement these techniques and further improve the computational efficiency of flood-inundation simulations, this study proposes an Automatic Domain Updating (ADU) method for 2-D flood-inundation simulation. The ADU method traces the wet and dry interface and automatically updates the simulation domain in response to the progress and recession of flood propagation. The updating algorithm is as follows: first, register the simulation cells potentially flooded at the initial stage (such as floodplains near river channels); then, if a registered cell is flooded, register its surrounding cells. The time for this additional process is kept small by checking only cells at the wet and dry interface, and computation time is reduced by skipping the processing of non-flooded areas. This algorithm is easily applied to any type of 2-D flood-inundation model. The proposed ADU method is implemented with the 2-D local inertial equations for the Yodo River basin, Japan. Case studies for two flood events show that the simulation finishes in a factor of two to ten less computation time while producing the same results as the simulation without the ADU method.
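
    The registration logic described in the abstract can be sketched in a few lines of Python (illustrative only; the grid layout, wet threshold and four-neighbour stencil are assumptions, and the shallow-water solver itself is omitted).

    ```python
    import numpy as np

    def adu_step(active, depth, wet_threshold=1e-4):
        """Automatic Domain Updating: activate neighbours of wetted cells.

        active: boolean mask of cells currently in the computational domain;
        depth: water depth array of the same shape. Only cells at the wet/dry
        front grow the domain, so dry floodplain far from the flood costs
        nothing.
        """
        wet = active & (depth > wet_threshold)
        grow = np.zeros_like(active)
        # Register the four neighbours of every wet cell (von Neumann stencil).
        grow[1:, :] |= wet[:-1, :]
        grow[:-1, :] |= wet[1:, :]
        grow[:, 1:] |= wet[:, :-1]
        grow[:, :-1] |= wet[:, 1:]
        return active | grow

    # In the time loop, the shallow-water update is evaluated only where
    # `active` is True; cells are added as the flood front advances.
    ```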

  4. Maintaining tumor targeting accuracy in real-time motion compensation systems for respiration-induced tumor motion.

    PubMed

    Malinowski, Kathleen; McAvoy, Thomas J; George, Rohini; Dieterich, Sonja; D'Souza, Warren D

    2013-07-01

    To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥ 3 mm), and always (approximately once per minute). Radial tumor displacement prediction errors (mean ± standard deviation) for the four schemes described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than the errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization.
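
    A rough sketch of the two ingredients, the partial-least-squares surrogate model and an external-measurement-based update trigger, is given below; the Mahalanobis-distance check stands in for the paper's confidence-limit metrics, and the limit value is arbitrary.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def fit_tumor_model(marker_xyz, tumor_xyz, n_components=2):
        """Fit a PLS regression from external marker positions to tumor
        position, as in the first-six-measurements model build."""
        model = PLSRegression(n_components=n_components)
        model.fit(marker_xyz, tumor_xyz)
        return model

    def needs_update(markers_train, markers_new, limit=3.0):
        """Surrogate-based trigger (illustrative): flag an update when the new
        marker vector drifts outside the training cloud, measured by the
        Mahalanobis distance. The paper's actual metrics may differ."""
        mu = markers_train.mean(axis=0)
        cov = np.cov(markers_train, rowvar=False)
        cov += 1e-9 * np.eye(cov.shape[0])     # regularize small-sample cov
        d = markers_new - mu
        return float(d @ np.linalg.solve(cov, d)) ** 0.5 > limit
    ```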

  5. Issues concerning the updating of finite-element models from experimental data

    NASA Technical Reports Server (NTRS)

    Dunn, Shane A.

    1994-01-01

    Some issues concerning the updating of dynamic finite-element models by incorporation of experimental data are examined here. It is demonstrated how the number of unknowns can be greatly reduced if the physical nature of the model is maintained. The issue of uniqueness is also examined and it is shown that a number of previous workers have been mistaken in their attempts to define both sufficient and necessary measurement requirements for the updating problem to be solved uniquely. The relative merits of modal and frequency response function (frf) data are discussed and it is shown that for measurements at fewer degrees of freedom than are present in the model, frf data will be unlikely to converge easily to a solution. It is then examined how such problems may become more tractable by using new experimental techniques which would allow measurements at all degrees of freedom present in the mathematical model.

  6. Diameter Growth Models for Inventory Applications

    Treesearch

    Ronald E. McRoberts; Christopher W. Woodall; Veronica C. Lessard; Margaret R. Holdaway

    2002-01-01

    Distance-independent, individual-tree, diameter growth models were constructed to update information for forest inventory plots measured in previous years. The models are nonlinear in the parameters and were calibrated using weighted nonlinear least squares techniques and forest inventory plot data. Analyses of residuals indicated that model predictions compare favorably to...

  7. A response surface methodology based damage identification technique

    NASA Astrophysics Data System (ADS)

    Fang, S. E.; Perera, R.

    2009-06-01

    Response surface methodology (RSM) is a combination of statistical and mathematical techniques to represent the relationship between the inputs and outputs of a physical system by explicit functions. This methodology has been widely employed in many applications such as design optimization, response prediction and model validation. But so far the literature related to its application in structural damage identification (SDI) is scarce. Therefore this study attempts to present a systematic SDI procedure comprising four sequential steps of feature selection, parameter screening, primary response surface (RS) modeling and updating, and reference-state RS modeling with SDI realization using the factorial design (FD) and the central composite design (CCD). The last two steps imply the implementation of inverse problems by model updating, in which the RS models substitute for the FE models. The proposed method was verified against a numerical beam, a tested reinforced concrete (RC) frame and an experimental full-scale bridge, with modal frequencies as the output responses. It was found that the proposed RSM-based method performs well in predicting the damage of both numerical and experimental structures having single and multiple damage scenarios. The screening capacity of the FD can provide quantitative estimation of the significance levels of updating parameters. Meanwhile, the second-order polynomial model established by the CCD provides adequate accuracy in expressing the dynamic behavior of a physical system.

  8. Key science issues in the central and eastern United States for the next version of the USGS National Seismic Hazard Maps

    USGS Publications Warehouse

    Peterson, M.D.; Mueller, C.S.

    2011-01-01

    The USGS National Seismic Hazard Maps are updated about every six years by incorporating newly vetted science on earthquakes and ground motions. The 2008 hazard maps for the central and eastern United States region (CEUS) were updated by using revised New Madrid and Charleston source models, an updated seismicity catalog and an estimate of magnitude uncertainties, a distribution of maximum magnitudes, and several new ground-motion prediction equations. The new models resulted in significant ground-motion changes at 5 Hz and 1 Hz spectral acceleration with 5% damping compared to the 2002 version of the hazard maps. The 2008 maps have now been incorporated into the 2009 NEHRP Recommended Provisions, the 2010 ASCE-7 Standard, and the 2012 International Building Code. The USGS is now planning the next update of the seismic hazard maps, which will be provided to the code committees in December 2013. Science issues that will be considered for introduction into the CEUS maps include: 1) updated recurrence models for New Madrid sources, including new geodetic models and magnitude estimates; 2) new earthquake sources and techniques considered in the 2010 model developed by the nuclear industry; 3) new NGA-East ground-motion models (currently under development); and 4) updated earthquake catalogs. We will hold a regional workshop in late 2011 or early 2012 to discuss these and other issues that will affect the seismic hazard evaluation in the CEUS.

  9. Application of firefly algorithm to the dynamic model updating problem

    NASA Astrophysics Data System (ADS)

    Shabbir, Faisal; Omenzetter, Piotr

    2015-04-01

    Model updating can be considered as a branch of optimization problems in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with those of the FE predictions. The attainment of a global solution in a multidimensional search space is a challenging problem. Nature-inspired algorithms have gained increasing attention in the previous decade for solving such complex optimization problems. This study applies the novel Firefly Algorithm (FA), a global optimization search technique, to a dynamic model updating problem. This is, to the authors' best knowledge, the first time FA has been applied to model updating. The working of FA is inspired by the flashing characteristics of fireflies. Each firefly represents a randomly generated solution which is assigned a brightness according to the value of the objective function. The physical structure under consideration is a full-scale cable-stayed pedestrian bridge with a composite bridge deck. Data from dynamic testing of the bridge were used to correlate and update the initial model using FA. The algorithm aimed at minimizing the difference between the natural frequencies and mode shapes of the structure and those of the model. The performance of the algorithm in finding the optimal solution in a multidimensional search space is analyzed. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built original structure.

  10. Maintaining tumor targeting accuracy in real-time motion compensation systems for respiration-induced tumor motion

    PubMed Central

    Malinowski, Kathleen; McAvoy, Thomas J.; George, Rohini; Dieterich, Sonja; D’Souza, Warren D.

    2013-01-01

    Purpose: To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Methods: Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥3 mm), and always (approximately once per minute). Results: Radial tumor displacement prediction errors (mean ± standard deviation) for the four schemes described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than the errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. Conclusions: The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization. PMID:23822413

  11. Stochastic filtering for damage identification through nonlinear structural finite element model updating

    NASA Astrophysics Data System (ADS)

    Astroza, Rodrigo; Ebrahimian, Hamed; Conte, Joel P.

    2015-03-01

    This paper describes a novel framework that combines advanced mechanics-based nonlinear (hysteretic) finite element (FE) models and stochastic filtering techniques to estimate unknown time-invariant parameters of nonlinear inelastic material models used in the FE model. Using input-output data recorded during earthquake events, the proposed framework updates the nonlinear FE model of the structure. The updated FE model can be directly used for damage identification and further used for damage prognosis. To update the unknown time-invariant parameters of the FE model, two alternative stochastic filtering methods are used: the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). A three-dimensional, 5-story, 2-by-1 bay reinforced concrete (RC) frame is used to verify the proposed framework. The RC frame is modeled using fiber-section displacement-based beam-column elements with distributed plasticity and is subjected to the ground motion recorded at the Sylmar station during the 1994 Northridge earthquake. The results indicate that the proposed framework accurately estimates the unknown material parameters of the nonlinear FE model. The UKF outperforms the EKF when the relative root-mean-square errors of the recorded responses are compared. In addition, the results suggest that the convergence of the estimates of the modeling parameters is smoother and faster when the UKF is utilized.

  12. Documentation of the dynamic parameter, water-use, stream and lake flow routing, and two summary output modules and updates to surface-depression storage simulation and initial conditions specification options with the Precipitation-Runoff Modeling System (PRMS)

    USGS Publications Warehouse

    Regan, R. Steve; LaFontaine, Jacob H.

    2017-10-05

    This report documents seven enhancements to the U.S. Geological Survey (USGS) Precipitation-Runoff Modeling System (PRMS) hydrologic simulation code: two time-series input options, two new output options, and three updates of existing capabilities. The enhancements are (1) new dynamic parameter module, (2) new water-use module, (3) new Hydrologic Response Unit (HRU) summary output module, (4) new basin variables summary output module, (5) new stream and lake flow routing module, (6) update to surface-depression storage and flow simulation, and (7) update to the initial-conditions specification. This report relies heavily upon U.S. Geological Survey Techniques and Methods, book 6, chapter B7, which documents PRMS version 4 (PRMS-IV). A brief description of PRMS is included in this report.

  13. Structural Finite Element Model Updating Using Vibration Tests and Modal Analysis for NPL footbridge - SHM demonstrator

    NASA Astrophysics Data System (ADS)

    Barton, E.; Middleton, C.; Koo, K.; Crocker, L.; Brownjohn, J.

    2011-07-01

    This paper presents the results from collaboration between the National Physical Laboratory (NPL) and the University of Sheffield on an ongoing research project at NPL. A 50 year old reinforced concrete footbridge has been converted to a full scale structural health monitoring (SHM) demonstrator. The structure is monitored using a variety of techniques; however, interrelating results and converting data to knowledge are not possible without a reliable numerical model. During the first stage of the project, the work concentrated on static loading, and an FE model of the undamaged bridge was created and updated under specified static loading and temperature conditions. This model was found to accurately represent the response under static loading and it was used to identify locations for sensor installation. The next stage involves the evaluation of repair/strengthening patches under both static and dynamic loading. Therefore, before deliberately introducing significant damage, the first set of dynamic tests was conducted and modal properties were estimated. The measured modal properties did not match the modal analysis from the statically updated FE model; it was clear that the existing model required updating. This paper introduces the results of the dynamic testing and model updating. It is shown that the structure exhibits large non-linear, amplitude-dependent characteristics. This creates a difficult updating process, but we attempt to produce the best linear representation of the structure. A sensitivity analysis is performed to determine the most sensitive locations for planned damage/repair scenarios and is used to decide whether additional sensors will be necessary.

  14. Progress on Updating the 1961-1990 National Solar Radiation Database

    NASA Technical Reports Server (NTRS)

    Renne, D.; Wilcox, S.; Marion, B.; George, R.; Myers, D.

    2003-01-01

    The 1961-1990 National Solar Radiation Data Base (NSRDB) provides a 30-year climate summary and solar characterization of 239 locations throughout the United States. Over the past several years, the National Renewable Energy Laboratory (NREL) has received numerous inquiries from a range of constituents as to whether an update of the database to include the 1990s will be developed. However, there are formidable challenges to creating an update of the serially complete station-specific database for the 1971-2000 period. During the 1990s, the National Weather Service changed its observational procedures from a human-based to an automated system, resulting in the loss of important input variables to the model used to complete the 1961-1990 NSRDB. As a result, alternative techniques are required for an update that covers the 1990s. This paper examines several alternative approaches for creating this update and describes preliminary NREL plans for implementing the update.

  15. [Purity Detection Model Update of Maize Seeds Based on Active Learning].

    PubMed

    Tang, Jin-ya; Huang, Min; Zhu, Qi-bing

    2015-08-01

    Seed purity reflects the degree to which seed varieties show typical, consistent characteristics, so it is of great importance to improve the reliability and accuracy of seed purity detection to guarantee the quality of seeds. Hyperspectral imaging can reflect the internal and external characteristics of seeds at the same time, and it has been widely used in nondestructive detection of agricultural products. The essence of nondestructive detection of agricultural products using the hyperspectral imaging technique is to establish a mathematical model between the spectral information and the quality of the agricultural products. Since the spectral information is easily affected by the sample growth environment, the stability and generalization of a model weaken when the test samples are harvested from a different origin or year. An active learning algorithm was investigated to add representative samples that expand the sample space of the original model, so as to rapidly update the model. Random selection (RS) and the Kennard-Stone algorithm (KS) were performed to compare their model update effect with that of the active learning algorithm. The experimental results indicated that, for different divisions of the sample set (1:1, 3:1, 4:1), the updated purity detection model for maize seeds from the year 2010, to which 40 samples selected by the active learning algorithm from the year 2011 were added, increased the prediction accuracy for new 2011 samples from 47%, 33.75%, and 49% to 98.89%, 98.33%, and 98.33%, respectively. For the updated purity detection model of the year 2011, its prediction accuracy for new 2010 samples increased from 50.83%, 54.58%, and 53.75% to 94.57%, 94.02%, and 94.57% after adding 56 new samples from the year 2010. In both cases the effect of the model updated by the active learning algorithm was better than that of RS and KS. Therefore, updating the purity detection model of maize seeds by an active learning algorithm is feasible.
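
    The update strategy amounts to uncertainty sampling. The sketch below uses a generic classifier as a stand-in for the paper's purity detection model (the model family, the margin criterion and the n_add default are all assumptions for illustration).

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def active_update(X_old, y_old, X_new_pool, y_new_pool, n_add=40):
        """Select the new-harvest samples the current model is least sure
        about, add them to the training set, and refit (uncertainty sampling)."""
        model = LogisticRegression(max_iter=1000).fit(X_old, y_old)
        proba = np.sort(model.predict_proba(X_new_pool), axis=1)
        margin = proba[:, -1] - proba[:, -2]       # top-two class margin
        pick = np.argsort(margin)[:n_add]          # smallest margin = most uncertain
        X_aug = np.vstack([X_old, X_new_pool[pick]])
        y_aug = np.concatenate([y_old, y_new_pool[pick]])
        return LogisticRegression(max_iter=1000).fit(X_aug, y_aug), pick
    ```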

  16. Damage identification in beams using speckle shearography and an optimal spatial sampling

    NASA Astrophysics Data System (ADS)

    Mininni, M.; Gabriele, S.; Lopes, H.; Araújo dos Santos, J. V.

    2016-10-01

    Over the years, the derivatives of modal displacement and rotation fields have been used to localize damage in beams. Usually, the derivatives are computed by applying finite differences. The finite differences propagate and amplify the errors that exist in real measurements, and thus, it is necessary to minimize this problem in order to get reliable damage localizations. A way to decrease the propagation and amplification of the errors is to select an optimal spatial sampling. This paper presents a technique where an optimal spatial sampling of modal rotation fields is computed and used to obtain the modal curvatures. Experimental measurements of modal rotation fields of a beam with single and multiple damages are obtained with shearography, which is an optical technique allowing the measurement of full-fields. These measurements are used to test the validity of the optimal sampling technique for the improvement of damage localization in real structures. An investigation on the ability of a model updating technique to quantify the damage is also reported. The model updating technique is defined by the variations of measured natural frequencies and measured modal rotations and aims at calibrating the values of the second moment of area in the damaged areas, which were previously localized.

  17. A heuristic for efficient data distribution management in distributed simulation

    NASA Astrophysics Data System (ADS)

    Gupta, Pankaj; Guha, Ratan K.

    2005-05-01

    In this paper, we propose an algorithm for reducing the complexity of region matching and for efficient multicasting in the data distribution management component of the High Level Architecture (HLA) Run Time Infrastructure (RTI). Current data distribution management (DDM) techniques rely on computing the intersection between subscription and update regions. When a subscription region and an update region of different federates overlap, the RTI establishes communication between the publisher and the subscriber and subsequently routes the updates from the publisher to the subscriber. The proposed algorithm computes the update/subscription region matching for dynamic allocation of multicast groups. It provides new multicast routines that exploit the connectivity of the federation by communicating updates regarding interactions and routing information only to those federates that require them. The region-matching problem in DDM reduces to the clique-covering problem under a connection-graph abstraction in which the federates represent the vertices and the update/subscribe relations represent the edges. We develop an abstract model based on the connection graph for data distribution management. Using this abstract model, we propose a heuristic for solving the region-matching problem of DDM. We also provide a complexity analysis of the proposed heuristic.
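
    The basic operation the heuristic accelerates is overlap testing between update and subscription regions. A brute-force Python sketch follows (the rectangle encoding is an illustrative assumption; the paper's heuristic would group the resulting edges into multicast cliques instead of enumerating pairs).

    ```python
    def overlaps(a, b):
        """Axis-aligned overlap test between an update region and a
        subscription region, each given as (xmin, xmax, ymin, ymax)."""
        return a[0] <= b[1] and b[0] <= a[1] and a[2] <= b[3] and b[2] <= a[3]

    def match_regions(updates, subscriptions):
        """Brute-force DDM matching: route each publisher to every federate
        whose subscription region intersects its update region."""
        return [(i, j)
                for i, u in enumerate(updates)
                for j, s in enumerate(subscriptions)
                if overlaps(u, s)]
    ```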

  18. A technique for routinely updating the ITU-R database using radio occultation electron density profiles

    NASA Astrophysics Data System (ADS)

    Brunini, Claudio; Azpilicueta, Francisco; Nava, Bruno

    2013-09-01

    Well credited and widely used ionospheric models, such as the International Reference Ionosphere or NeQuick, describe the variation of the electron density with height by means of a piecewise profile tied to the F2-peak parameters: the electron density, NmF2, and the height, hmF2. Accurate values of these parameters are crucial for retrieving reliable electron density estimations from those models. When direct measurements of these parameters are not available, the models compute the parameters using the so-called ITU-R database, which was established in the early 1960s. This paper presents a technique aimed at routinely updating the ITU-R database using radio occultation electron density profiles derived from GPS measurements gathered from low Earth orbit satellites. Before being used, these radio occultation profiles are validated by fitting an electron density model to them. A re-weighted Least Squares algorithm is used for down-weighting unreliable measurements (occasionally, entire profiles) and for retrieving NmF2 and hmF2 values—together with their error estimates—from the profiles. These values are used to update the database monthly; the database consists of two sets of ITU-R-like coefficients that could easily be implemented in the IRI or NeQuick models. The technique was tested with radio occultation electron density profiles that are delivered to the community by the COSMIC/FORMOSAT-3 mission team. Tests were performed for solstice and equinox seasons in high and low solar activity conditions. The global mean error of the resulting maps—estimated by the Least Squares technique—is equivalent to approximately 7% of the value of the estimated parameter for the F2-peak electron density, and ranges from 2.0 to 5.6 km (around 2%) for the height.

  19. 77 FR 4808 - Conference on Air Quality Modeling

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-31

    ... update our available modeling tools with state-of-the-science techniques and for the public to offer new... C111, 109 T.W. Alexander Drive, Research Triangle Park, NC 27711. FOR FURTHER INFORMATION CONTACT... Quality Assessment Division, Mail Code C439-01, Research Triangle Park, NC 27711; telephone: (919) 541...

  20. Engineering Graphics Educational Outcomes for the Global Engineer: An Update

    ERIC Educational Resources Information Center

    Barr, R. E.

    2012-01-01

    This paper discusses the formulation of educational outcomes for engineering graphics that span the global enterprise. Results of two repeated faculty surveys indicate that new computer graphics tools and techniques are now the preferred mode of engineering graphical communication. Specifically, 3-D computer modeling, assembly modeling, and model…

  1. An Adaptive Pheromone Updation of the Ant-System using LMS Technique

    NASA Astrophysics Data System (ADS)

    Paul, Abhishek; Mukhopadhyay, Sumitra

    2010-10-01

    We propose a modified model of pheromone updating for the Ant System, called the Adaptive Ant System (AAS), using the properties of basic adaptive filters. Here, we exploit the properties of the Least Mean Square (LMS) algorithm for the pheromone update to find the minimum tour for the Travelling Salesman Problem (TSP). A TSP library has been used for the selection of benchmark problems, and the proposed AAS determines the minimum tour length for problems containing a large number of cities. Our algorithm shows effective results and yields the shortest tour length in most cases compared to other existing approaches.
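
    The abstract does not give the exact update rule, but an LMS-style pheromone update can be sketched as follows, treating each edge's pheromone as an adaptive-filter weight driven by an error term (entirely illustrative; the step size mu, the reinforcement target, and all names are assumptions):

    ```python
    import numpy as np

    def lms_pheromone_update(tau, tours, lengths, mu=0.05):
        """LMS-style pheromone update (illustrative, not the authors' rule).

        Each edge's pheromone acts as an adaptive-filter weight: the error
        between a reinforcement target (larger for shorter tours) and the
        current pheromone drives the update, w(n+1) = w(n) + mu * e(n).
        tau is a symmetric matrix of pheromone levels; tours are lists of
        city indices; lengths are the corresponding tour lengths.
        """
        best = min(lengths)
        for tour, length in zip(tours, lengths):
            target = best / length                   # 1.0 for the best tour
            for i, j in zip(tour, tour[1:] + tour[:1]):
                error = target - tau[i, j]           # LMS error term
                tau[i, j] += mu * error
                tau[j, i] = tau[i, j]                # symmetric TSP
        return tau
    ```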

  2. The Collaborative Seismic Earth Model: Generation 1

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner

    2018-05-01

    We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.

  3. Fast Updating National Geo-Spatial Databases with High Resolution Imagery: China's Methodology and Experience

    NASA Astrophysics Data System (ADS)

    Chen, J.; Wang, D.; Zhao, R. L.; Zhang, H.; Liao, A.; Jiu, J.

    2014-04-01

    Geospatial databases are an irreplaceable national treasure of immense importance. Their up-to-dateness, that is, their consistency with respect to the real world, plays a critical role in their value and applications. The continuous updating of map databases at the 1:50,000 scale is a massive and difficult task for large countries covering several million square kilometers. This paper presents the research and technological development supporting national map updating at the 1:50,000 scale in China, including the development of updating models and methods, production tools and systems for large-scale and rapid updating, and the design and implementation of the continuous updating workflow. Many data sources had to be used and integrated to form a high-accuracy, quality-checked product, which in turn required up-to-date techniques for image matching, semantic integration, generalization, database management, and conflict resolution. Specific software tools and packages were designed and developed to support large-scale updating production with high-resolution imagery and large-scale data generalization, such as map generalization, GIS-supported change interpretation from imagery, DEM interpolation, image-matching-based orthophoto generation, and data control at different levels. A national 1:50,000 database updating strategy and its production workflow were designed, including a full-coverage updating pattern characterized by all-element topographic data modeling, change detection in all related areas, and whole-process data quality control; a series of technical production specifications; and a network of updating production units in different geographic places across the country.

  4. Using Four Downscaling Techniques to Characterize Uncertainty in Updating Intensity-Duration-Frequency Curves Under Climate Change

    NASA Astrophysics Data System (ADS)

    Cook, L. M.; Samaras, C.; McGinnis, S. A.

    2017-12-01

    Intensity-duration-frequency (IDF) curves are a common input to urban drainage design, and are used to represent extreme rainfall in a region. As rainfall patterns shift into a non-stationary regime as a result of climate change, these curves will need to be updated with future projections of extreme precipitation. Many regions have begun to update these curves to reflect the trends from downscaled climate models; however, few studies have compared the methods for doing so or the uncertainty that results from the selection of the native grid scale and temporal resolution of the climate model. This study examines the variability in updated IDF curves for Pittsburgh using four different methods for adjusting gridded regional climate model (RCM) outputs into station-scale precipitation extremes: (1) a simple change factor applied to observed return levels, (2) a naïve adjustment of stationary and non-stationary Generalized Extreme Value (GEV) distribution parameters, (3) a transfer function of the GEV parameters from the annual maximum series, and (4) kernel density distribution mapping bias correction of the RCM time series. Return level estimates (rainfall intensities) and confidence intervals from these methods for the 1-hour to 48-hour durations are tested for sensitivity to the underlying spatial and temporal resolution of the climate ensemble from the NA-CORDEX project, as well as the future time period for updating. The first goal is to determine if uncertainty is highest for: (i) the downscaling method, (ii) the climate model resolution, (iii) the climate model simulation, (iv) the GEV parameters, or (v) the future time period examined. Initial results of the 6-hour, 10-year return level adjusted with the simple change factor method using four climate model simulations of two different spatial resolutions show that uncertainty is highest in the estimation of the GEV parameters. The second goal is to determine if complex downscaling methods and high-resolution climate models are necessary for updating, or if simpler methods and lower resolution climate models will suffice. The final results can be used to inform the most appropriate method and climate model resolution to use for updating IDF curves for urban drainage design.
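
    As a concrete illustration of method (1), the sketch below fits GEV distributions to annual-maximum series and scales an observed return level by a modeled change factor (illustrative only; the study's exact procedure and data handling are not reproduced):

    ```python
    import numpy as np
    from scipy.stats import genextreme

    def change_factor_return_level(obs_annmax, rcm_hist_annmax, rcm_fut_annmax, T=10):
        """Scale an observed T-year return level by the ratio of modeled
        future to modeled historical return levels (method 1 above).

        Inputs are annual-maximum series for one duration, e.g. 6-hour
        rainfall depths; T is the return period in years.
        """
        p = 1.0 - 1.0 / T                            # non-exceedance probability

        def return_level(annmax):
            shape, loc, scale = genextreme.fit(annmax)
            return genextreme.ppf(p, shape, loc=loc, scale=scale)

        cf = return_level(rcm_fut_annmax) / return_level(rcm_hist_annmax)
        return cf * return_level(obs_annmax)         # updated return level
    ```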

  5. Spatiotemporal access model based on reputation for the sensing layer of the IoT.

    PubMed

    Guo, Yunchuan; Yin, Lihua; Li, Chao; Qian, Junyan

    2014-01-01

    Access control is a key technology in providing security in the Internet of Things (IoT). The mainstream security approach proposed for the sensing layer of the IoT concentrates only on authentication while ignoring the more general models. Unreliable communications and resource constraints mean that traditional access control techniques can barely meet the requirements of the sensing layer of the IoT. In this paper, we propose a model that combines space and time with reputation to control access to the information within the sensing layer of the IoT. This model is called spatiotemporal access control based on reputation (STRAC). STRAC uses a lattice-based approach to decrease the size of policy bases. To solve the problem caused by unreliable communications, we propose both nondeterministic authorizations and stochastic authorizations. To more precisely manage the reputation of nodes, we propose two new mechanisms to update the reputation of nodes. These new approaches are the authority-based update mechanism (AUM) and the election-based update mechanism (EUM). We show how the model checker UPPAAL can be used to analyze the spatiotemporal access control model of an application. Finally, we also implement a prototype system to demonstrate the efficiency of our model.

  6. Knowledge structure representation and automated updates in intelligent information management systems

    NASA Technical Reports Server (NTRS)

    Corey, Stephen; Carnahan, Richard S., Jr.

    1990-01-01

    A continuing effort to apply rapid prototyping and Artificial Intelligence techniques to problems associated with projected Space Station-era information management systems is examined. In particular, timely updating of the various databases and knowledge structures within the proposed intelligent information management system (IIMS) is critical to support decision making processes. Because of the significantly large amounts of data entering the IIMS on a daily basis, information updates will need to be automatically performed with some systems requiring that data be incorporated and made available to users within a few hours. Meeting these demands depends first, on the design and implementation of information structures that are easily modified and expanded, and second, on the incorporation of intelligent automated update techniques that will allow meaningful information relationships to be established. Potential techniques are studied for developing such an automated update capability and IIMS update requirements are examined in light of results obtained from the IIMS prototyping effort.

  7. An Overview of NASA's Orbital Debris Environment Model

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2010-01-01

    Using updated measurement data, analysis tools, and modeling techniques, the NASA Orbital Debris Program Office has created a new Orbital Debris Environment Model. This model extends the coverage of orbital debris flux throughout the Earth-orbit environment and includes information on the mass density of the debris as well as the uncertainties in the model environment. This paper gives an overview of the model and its implications for spacecraft risk analysis.

  8. Efficient Ensemble State-Parameters Estimation Techniques in Ocean Ecosystem Models: Application to the North Atlantic

    NASA Astrophysics Data System (ADS)

    El Gharamti, M.; Bethke, I.; Tjiputra, J.; Bertino, L.

    2016-02-01

    Given the recent strong international focus on developing new data assimilation systems for biological models, we present in this comparative study the application of newly developed state-parameter estimation tools to an ocean ecosystem model. It is well known that the available physical models are still too simple compared to the complexity of ocean biology. Furthermore, various biological parameters remain poorly known, and wrong specifications of such parameters can lead to large model errors. The standard joint state-parameter augmentation technique using the ensemble Kalman filter (stochastic EnKF) has been extensively tested in many geophysical applications. Some of these assimilation studies reported that jointly updating the state and the parameters might introduce significant inconsistency, especially for strongly nonlinear models. This is usually the case for ecosystem models, particularly during the period of the spring bloom. A better handling of the estimation problem is often achieved by separating the update of the state and the parameters using the so-called dual EnKF. The dual filter is computationally more expensive than the joint EnKF but is expected to perform more accurately. Using a similar separation strategy, we propose a new EnKF estimation algorithm in which we apply a one-step-ahead smoothing to the state. The new state-parameter estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. Unlike the classical filtering path, the new scheme starts with an update step, after which a model propagation step is performed. We test the performance of the new smoothing-based schemes against the standard EnKF in a one-dimensional configuration of the Norwegian Earth System Model (NorESM) in the North Atlantic. We use nutrient profile data (down to 2000 m depth) and surface partial CO2 measurements from weather station Mike (66°N, 2°E) to estimate different biological parameters of phytoplankton and zooplankton. We analyze the performance of the filters in terms of complexity and of the accuracy of the state and parameter estimates.
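
    The joint (augmented-state) stochastic EnKF update that the study takes as its baseline can be sketched as follows (a minimal implementation, ignoring localization and inflation; names illustrative):

    ```python
    import numpy as np

    def joint_enkf_update(X, Theta, y, H, R, rng=np.random.default_rng()):
        """One stochastic-EnKF analysis step on the augmented state.

        X: (n, N) state ensemble; Theta: (p, N) parameter ensemble;
        y: (m,) observations; H: (m, n) observation operator;
        R: (m, m) observation-error covariance. Stacking state and
        parameters lets one Kalman gain update both at once.
        """
        Z = np.vstack([X, Theta])                    # augmented ensemble
        N = Z.shape[1]
        Hz = H @ X                                   # predicted observations
        A = Z - Z.mean(axis=1, keepdims=True)
        Ha = Hz - Hz.mean(axis=1, keepdims=True)
        Pzy = A @ Ha.T / (N - 1)                     # cross-covariance
        Pyy = Ha @ Ha.T / (N - 1) + R
        K = Pzy @ np.linalg.inv(Pyy)                 # Kalman gain
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
        Z = Z + K @ (Y - Hz)                         # perturbed-obs update
        n = X.shape[0]
        return Z[:n], Z[n:]                          # updated state, parameters
    ```

    The dual EnKF described above would instead run two such analysis steps in sequence, one for the parameters and one for the state.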

  9. Tracking with time-delayed data in multisensor systems

    NASA Astrophysics Data System (ADS)

    Hilton, Richard D.; Martin, David A.; Blair, William D.

    1993-08-01

    When techniques for target tracking are expanded to make use of multiple sensors in a multiplatform system, the possibility of time delayed data becomes a reality. When a discrete-time Kalman filter is applied and some of the data entering the filter are delayed, proper processing of these late data is a necessity for obtaining an optimal estimate of a target's state. If this problem is not given special care, the quality of the state estimates can be degraded relative to that quality provided by a single sensor. A negative-time update technique is developed using the criteria of minimum mean-square error (MMSE) under the constraint that only the results of the most recent update are saved. The performance of the MMSE technique is compared to that of the ad hoc approach employed in the Cooperative Engagement Capabilities (CEC) system for processing data from multiple platforms. It was discovered that the MMSE technique is a stable solution to the negative-time update problem, while the CEC technique was found to be less than desirable when used with filters designed for tracking highly maneuvering targets at relatively low data rates. The MMSE negative-time update technique was found to be a superior alternative to the existing CEC negative-time update technique.

  10. Systematic Review: A Reevaluation and Update of the Integrative (Trajectory) Model of Pediatric Medical Traumatic Stress.

    PubMed

    Price, Julia; Kassam-Adams, Nancy; Alderfer, Melissa A; Christofferson, Jennifer; Kazak, Anne E

    2016-01-01

    The objective of this systematic review is to reevaluate and update the Integrative Model of Pediatric Medical Traumatic Stress (PMTS; Kazak et al., 2006), which provides a conceptual framework for traumatic stress responses across pediatric illnesses and injuries. Using established systematic review guidelines, we searched PsycINFO, Cumulative Index to Nursing and Allied Health Literature, and PubMed (producing 216 PMTS papers published since 2005), extracted findings for review, and organized and interpreted findings within the Integrative Model framework. Recent PMTS research has included additional pediatric populations, used advanced longitudinal modeling techniques, clarified relations between parent and child PMTS, and considered effects of PMTS on health outcomes. Results support and extend the model's five assumptions, and suggest a sixth assumption related to health outcomes and PMTS. Based on new evidence, the renamed Integrative Trajectory Model includes phases corresponding with medical events, adds family-centered trajectories, reaffirms a competency-based framework, and suggests updated assessment and intervention implications. © The Author 2015. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. An integrated approach for updating cadastral maps in Pakistan using satellite remote sensing data

    NASA Astrophysics Data System (ADS)

    Ali, Zahir; Tuladhar, Arbind; Zevenbergen, Jaap

    2012-08-01

    Updating cadastral information is crucial for recording land ownership and property division changes in a timely manner. In most cases, the existing cadastral maps do not provide up-to-date information on land parcel boundaries. Such a situation demands that all the cadastral data and parcel boundary information in these maps be updated in a timely fashion. The existing techniques for acquiring cadastral information are discipline-oriented, rooted in fields such as geodesy, surveying, and photogrammetry. These techniques require substantial manpower, time, and cost when carried out separately, so there is a need to integrate them in order to update the existing cadastral data and (re)produce cadastral maps efficiently. To reduce the time and cost involved in cadastral data acquisition, this study develops an integrated approach combining global positioning system (GPS) data, remote sensing (RS) imagery, and existing cadastral maps. For this purpose, the panchromatic image with 0.6 m spatial resolution and the corresponding multi-spectral image with 2.4 m spatial resolution and 3 spectral bands from the QuickBird satellite were used. A digital elevation model (DEM) was extracted from SPOT-5 stereopairs, and some ground control points (GCPs) were also used for ortho-rectifying the QuickBird images. After ortho-rectifying these images and registering the multi-spectral image to the panchromatic image, the two were fused to obtain good-quality multi-spectral images of the study areas with 0.6 m spatial resolution. Cadastral parcel boundaries were then identified on the QuickBird images of the two study areas via visual interpretation using a participatory-GIS (PGIS) technique. The study regions are the urban and rural areas of Peshawar and Swabi districts in the Khyber Pakhtunkhwa province of Pakistan. The result is updated cadastral maps rich in cadastral information, which can be used to update the existing cadastral data with less time and cost.

  12. Finite element model updating using the shadow hybrid Monte Carlo technique

    NASA Astrophysics Data System (ADS)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

    Recent research in the field of finite element model updating (FEM updating) advocates the adoption of Bayesian analysis techniques for dealing with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). Hybrid Monte Carlo (HMC) offers a very important MCMC approach for dealing with higher-dimensional complex problems. HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation, we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm to the same structures.
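
    A minimal sketch of a single HMC move, of which SHMC is a modification (SHMC evaluates the Metropolis test with a shadow Hamiltonian, better conserved by the leapfrog integrator, instead of the Hamiltonian below; all names are illustrative):

    ```python
    import numpy as np

    def hmc_step(q, log_post, grad_log_post, eps=0.05, n_leapfrog=20):
        """One Hybrid Monte Carlo move: leapfrog dynamics plus a Metropolis
        test on the Hamiltonian H(q, p) = 0.5 p.p - log_post(q).
        """
        p = np.random.randn(*q.shape)                    # fresh momenta
        H0 = 0.5 * p @ p - log_post(q)
        q_new, p_new = q.copy(), p.copy()
        p_new += 0.5 * eps * grad_log_post(q_new)        # initial half kick
        for _ in range(n_leapfrog):
            q_new += eps * p_new                         # drift
            p_new += eps * grad_log_post(q_new)          # kick
        p_new -= 0.5 * eps * grad_log_post(q_new)        # trim to a half kick
        H1 = 0.5 * p_new @ p_new - log_post(q_new)
        if np.random.rand() < np.exp(min(0.0, H0 - H1)): # Metropolis accept
            return q_new, True
        return q, False
    ```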

  13. Numerical simulation of groundwater flow at Puget Sound Naval Shipyard, Naval Base Kitsap, Bremerton, Washington

    USGS Publications Warehouse

    Jones, Joseph L.; Johnson, Kenneth H.; Frans, Lonna M.

    2016-08-18

    Information about groundwater-flow paths and locations where groundwater discharges at and near Puget Sound Naval Shipyard is necessary for understanding the potential migration of subsurface contaminants by groundwater at the shipyard. The design of some remediation alternatives would be aided by knowledge of whether groundwater flowing at specific locations beneath the shipyard will eventually discharge directly to Sinclair Inlet of Puget Sound, or if it will discharge to the drainage system of one of the six dry docks located in the shipyard. A 1997 numerical (finite difference) groundwater-flow model of the shipyard and surrounding area was constructed to help evaluate the potential for groundwater discharge to Puget Sound. That steady-state, multilayer numerical model with homogeneous hydraulic characteristics indicated that groundwater flowing beneath nearly all of the shipyard discharges to the dry-dock drainage systems, and only shallow groundwater flowing beneath the western end of the shipyard discharges directly to Sinclair Inlet. Updated information from a 2016 regional groundwater-flow model constructed for the greater Kitsap Peninsula was used to update the 1997 groundwater model of the Puget Sound Naval Shipyard. That information included a new interpretation of the hydrogeologic units underlying the area, as well as improved recharge estimates. Other updates to the 1997 model included finer discretization of the finite-difference model grid into more layers, rows, and columns, all with reduced dimensions. This updated Puget Sound Naval Shipyard model was calibrated to 2001–2005 measured water levels, and hydraulic characteristics of the model layers representing different hydrogeologic units were estimated with the aid of state-of-the-art parameter optimization techniques. The flow directions and discharge locations predicted by this updated model generally match the 1997 model despite refinements and other changes. In the updated model, most groundwater recharged within the boundaries of the shipyard discharges to the dry docks; only at the western end of the shipyard does groundwater discharge directly to Puget Sound. Particle tracking for the existing long-term monitoring well network suggests that only a few wells intercept groundwater that originates as recharge within the shipyard boundary.

  14. Time-partitioning simulation models for calculation on parallel computers

    NASA Technical Reports Server (NTRS)

    Milner, Edward J.; Blech, Richard A.; Chima, Rodrick V.

    1987-01-01

    A technique allowing time-staggered solution of partial differential equations is presented in this report. Using this technique, called time-partitioning, simulation execution speedup is proportional to the number of processors used because all processors operate simultaneously, each updating the solution grid at a different time point. The technique is limited neither by the number of processors available nor by the dimension of the solution grid. Time-partitioning was used to obtain the flow pattern through a cascade of airfoils, modeled by the Euler partial differential equations. An execution speedup factor of 1.77 was achieved using a two-processor Cray X-MP/24 computer.

  15. A VAS-numerical model impact study using the Gal-Chen variational approach

    NASA Technical Reports Server (NTRS)

    Aune, Robert M.; Tuccillo, James J.; Uccellini, Louis W.; Petersen, Ralph A.

    1987-01-01

    A numerical study based on the use of a variational assimilation technique of Gal-Chen (1983, 1986) was conducted to assess the impact of incorporating temperature data from the VISSR Atmospheric Sounder (VAS) into a regional-scale numerical model. A comparison with the results of a control forecast using only conventional data indicated that the assimilation technique successfully combines actual VAS temperature observations with the dynamically balanced model fields without destabilizing the model during the assimilation cycle. Moreover, increasing the temporal frequency of VAS temperature insertions during the assimilation cycle was shown to enhance the impact on the model forecast through successively longer forecast periods. The incorporation of a nudging technique, whereby the model temperature field is constrained toward the VAS 'updated' values during the assimilation cycle, further enhances the impact of the VAS temperature data.
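
    The nudging term used here is simple enough to state in a few lines. A minimal sketch, assuming a single temperature field, a constant relaxation coefficient G = 1/tau, and an explicit time step (all names illustrative):

    ```python
    import numpy as np

    def nudge(T_model, T_vas, dt, tau=3600.0):
        """Newtonian relaxation: add the tendency G * (obs - model), with
        G = 1/tau, so the model temperature field is pulled toward the VAS
        values over the relaxation timescale tau (seconds).
        """
        G = 1.0 / tau
        return T_model + dt * G * (T_vas - T_model)

    # Applied once per model time step during the assimilation cycle:
    # T = nudge(T, T_vas_analysis, dt)
    ```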

  16. Updated Eastern Interconnect Wind Power Output and Forecasts for ERGIS: July 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pennock, K.

    AWS Truepower, LLC (AWST) was retained by the National Renewable Energy Laboratory (NREL) to update wind resource, plant output, and wind power forecasts originally produced by the Eastern Wind Integration and Transmission Study (EWITS). The new data set was to incorporate AWST's updated 200-m wind speed map, additional tall towers that were not included in the original study, and new turbine power curves. Additionally, a primary objective of this new study was to employ new data synthesis techniques developed for the PJM Renewable Integration Study (PRIS) to eliminate diurnal discontinuities resulting from the assimilation of observations into mesoscale model runs. The updated data set covers the same geographic area, 10-minute time resolution, and 2004–2006 study period for the same onshore and offshore (Great Lakes and Atlantic coast) sites as the original EWITS data set.

  17. Update and review of accuracy assessment techniques for remotely sensed data

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Heinen, J. T.; Oderwald, R. G.

    1983-01-01

    Research performed in the accuracy assessment of remotely sensed data is updated and reviewed. The use of discrete multivariate analysis techniques for the assessment of error matrices, the use of computer simulation for assessing various sampling strategies, and an investigation of spatial autocorrelation techniques are examined.
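
    The discrete multivariate analysis of error matrices mentioned above centers on agreement measures such as the KHAT (Cohen's kappa) statistic. A minimal sketch of computing kappa from an error matrix (the example matrix is hypothetical):

    ```python
    import numpy as np

    def kappa(error_matrix):
        """KHAT (Cohen's kappa) from an error matrix whose rows are the
        classified labels and whose columns are the reference labels."""
        M = np.asarray(error_matrix, dtype=float)
        n = M.sum()
        po = np.trace(M) / n                             # observed agreement
        pe = (M.sum(axis=1) @ M.sum(axis=0)) / n**2      # chance agreement
        return (po - pe) / (1 - pe)

    # Hypothetical 3-class error matrix:
    print(round(kappa([[50, 3, 2], [4, 45, 6], [1, 5, 40]]), 3))
    ```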

  18. Flight test derived heating math models for critical locations on the orbiter during reentry

    NASA Technical Reports Server (NTRS)

    Hertzler, E. K.; Phillips, P. W.

    1983-01-01

    An analysis technique was developed for expanding the aerothermodynamic envelope of the Space Shuttle without subjecting the vehicle to sustained flight at more stressing heating conditions. A transient analysis program was developed to take advantage of the transient maneuvers that were flown as part of this analysis technique. Heat rates were derived from flight test data for various locations on the orbiter. The flight derived heat rates were used to update heating models based on predicted data. Future missions were then analyzed based on these flight adjusted models. A technique for comparing flight and predicted heating rate data and the extrapolation of the data to predict the aerothermodynamic environment of future missions is presented.

  19. Nonlinear Pressurization and Modal Analysis Procedure for Dynamic Modeling of Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Smalley, Kurt B.; Tinker, Michael L.; Saxon, Jeff (Technical Monitor)

    2002-01-01

    An introduction and set of guidelines for finite element dynamic modeling of nonrigidized inflatable structures is provided. A two-step approach is presented, involving 1) nonlinear static pressurization of the structure and updating of the stiffness matrix and 2) linear normal modes analysis using the updated stiffness. Advantages of this approach are that it provides physical realism in modeling of pressure stiffening, and it maintains the analytical convenience of a standard linear eigensolution once the stiffness has been modified. Demonstration of the approach is accomplished through the creation and test verification of an inflated cylinder model using a large commercial finite element code. Good frequency and mode shape comparisons are obtained with test data and previous modeling efforts, verifying the accuracy of the technique. Problems encountered in the application of the approach, as well as their solutions, are discussed in detail.
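
    Step 2 of the approach, the linear eigensolution with the pressure-updated stiffness, can be sketched as follows (assuming the nonlinear static step has already produced a geometric stiffness K_geo; names illustrative):

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def pressurized_modes(K, K_geo, M, n_modes=5):
        """Linear normal modes with the pressure-updated stiffness: solve
        (K + K_geo) phi = w^2 M phi, where K_geo is the geometric stiffness
        produced by the nonlinear static pressurization step."""
        w2, phi = eigh(K + K_geo, M)                 # generalized eigenproblem
        f = np.sqrt(np.abs(w2[:n_modes])) / (2 * np.pi)
        return f, phi[:, :n_modes]                   # frequencies (Hz), shapes
    ```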

  20. Distributed Monitoring of the R^2 Statistic for Linear Regression

    NASA Technical Reports Server (NTRS)

    Bhaduri, Kanishka; Das, Kamalika; Giannella, Chris R.

    2011-01-01

    The problem of monitoring a multivariate linear regression model is relevant in studying the evolving relationship between a set of input variables (features) and one or more dependent target variables. This problem becomes challenging for large-scale data in a distributed computing environment when only a subset of instances is available at individual nodes and the local data changes frequently. Data centralization and periodic model recomputation can add high overhead to tasks like anomaly detection in such dynamic settings. Therefore, the goal is to develop techniques for monitoring and updating the model over the union of all nodes' data in a communication-efficient fashion. Correctness guarantees on such techniques are also often highly desirable, especially in safety-critical application scenarios. In this paper we develop DReMo, a distributed algorithm with very low resource overhead, for monitoring the quality of a regression model in terms of its coefficient of determination (R^2 statistic). When the nodes collectively determine that R^2 has dropped below a fixed threshold, the linear regression model is recomputed via a network-wide convergecast and the updated model is broadcast back to all nodes. We show empirically, using both synthetic and real data, that our proposed method is highly communication-efficient and scalable, and also provide theoretical guarantees on correctness.
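
    For reference, the monitored quantity and the local threshold test can be sketched as follows (a simplified, single-node view; the paper's distributed protocol and its correctness guarantees are not reproduced here):

    ```python
    import numpy as np

    def r_squared(X, y, beta):
        """Coefficient of determination of the linear model y ~ X @ beta."""
        resid = y - X @ beta
        ss_res = resid @ resid
        ss_tot = ((y - y.mean()) ** 2).sum()
        return 1.0 - ss_res / ss_tot

    def local_alert(X_local, y_local, beta, threshold=0.8):
        """A node raises an alert when the shared model's locally observed
        R^2 falls below the threshold; recomputation happens only if the
        nodes collectively agree."""
        return r_squared(X_local, y_local, beta) < threshold
    ```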

  1. Application of Phase-Field Techniques to Hydraulically- and Deformation-Induced Fracture.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Culp, David; Miller, Nathan; Schweizer, Laura

    Phase-field techniques provide an alternative approach to fracture problems that mitigates some of the computational expense associated with tracking the crack interface and the coalescence of individual fractures. The technique is extended to apply to hydraulically driven fracture such as would occur during fracking or CO2 sequestration. Additionally, the technique is applied to a stainless steel specimen used in the Sandia Fracture Challenge. It was found that the phase-field model performs very well, at least qualitatively, in both deformation-induced fracture and hydraulically-induced fracture, though spurious hourglassing modes were observed during coupled hydraulically-induced fracture. Future work would include performing additional quantitative benchmark tests and updating the model as needed.

  2. Spatiotemporal Access Model Based on Reputation for the Sensing Layer of the IoT

    PubMed Central

    Guo, Yunchuan; Yin, Lihua; Li, Chao

    2014-01-01

    Access control is a key technology in providing security in the Internet of Things (IoT). The mainstream security approach proposed for the sensing layer of the IoT concentrates only on authentication while ignoring the more general models. Unreliable communications and resource constraints mean that traditional access control techniques can barely meet the requirements of the sensing layer of the IoT. In this paper, we propose a model that combines space and time with reputation to control access to the information within the sensing layer of the IoT. This model is called spatiotemporal access control based on reputation (STRAC). STRAC uses a lattice-based approach to decrease the size of policy bases. To solve the problem caused by unreliable communications, we propose both nondeterministic authorizations and stochastic authorizations. To more precisely manage the reputation of nodes, we propose two new mechanisms to update the reputation of nodes. These new approaches are the authority-based update mechanism (AUM) and the election-based update mechanism (EUM). We show how the model checker UPPAAL can be used to analyze the spatiotemporal access control model of an application. Finally, we also implement a prototype system to demonstrate the efficiency of our model. PMID:25177731

  3. Foundations for a multiscale collaborative Earth model

    NASA Astrophysics Data System (ADS)

    Afanasiev, Michael; Peter, Daniel; Sager, Korbinian; Simutė, Saulė; Ermert, Laura; Krischer, Lion; Fichtner, Andreas

    2016-01-01

    We present a computational framework for the assimilation of local to global seismic data into a consistent model describing Earth structure on all seismically accessible scales. This Collaborative Seismic Earth Model (CSEM) is designed to meet the following requirements: (i) Flexible geometric parametrization, capable of capturing topography and bathymetry, as well as all aspects of potentially resolvable structure, including small-scale heterogeneities and deformations of internal discontinuities. (ii) Independence of any particular wave equation solver, in order to enable the combination of inversion techniques suitable for different types of seismic data. (iii) Physical parametrization that allows for full anisotropy and for variations in attenuation and density. While not all of these parameters are always resolvable, the assimilation of data that constrain any parameter subset should be possible. (iv) Ability to accommodate successive refinements through the incorporation of updates on any scale as new data or inversion techniques become available. (v) Support for collaborative Earth model construction. The structure of the initial CSEM is represented on a variable-resolution tetrahedral mesh. It is assembled from a long-wavelength 3-D global model into which several regional-scale tomographies are embedded. We illustrate the CSEM workflow of successive updating with two examples from Japan and the Western Mediterranean, where we constrain smaller scale structure using full-waveform inversion. Furthermore, we demonstrate the ability of the CSEM to act as a vehicle for the combination of different tomographic techniques with a joint full-waveform and traveltime ray tomography of Europe. This combination broadens the exploitable frequency range of the individual techniques, thereby improving resolution. We perform two iterations of a whole-Earth full-waveform inversion using a long-period reference data set from 225 globally recorded earthquakes. At this early stage of the CSEM development, the broad global updates mostly act to remove artefacts from the assembly of the initial CSEM. During the future evolution of the CSEM, the reference data set will be used to account for the influence of small-scale refinements on large-scale global structure. The CSEM as a computational framework is intended to help bridge the gap between local, regional and global tomography, and to contribute to the development of a global multiscale Earth model. While the current construction serves as a first proof of concept, future refinements and additions will require community involvement, which is welcome already at this stage.

  4. Model correlation and damage location for large space truss structures: Secant method development and evaluation

    NASA Technical Reports Server (NTRS)

    Smith, Suzanne Weaver; Beattie, Christopher A.

    1991-01-01

    On-orbit testing of a large space structure will be required to complete the certification of any mathematical model for the structure's dynamic response. The process of establishing a mathematical model that matches measured structure response is referred to as model correlation. Most model correlation approaches employ an identification technique to determine structural characteristics from measurements of the structure response. This problem is approached here with one particular class of identification techniques, matrix adjustment methods, which use measured data to produce an optimal update of the structure property matrix, often the stiffness matrix. New identification methods were developed to handle problems of the size and complexity expected for large space structures. Further development and refinement of these secant-method identification algorithms were undertaken. Also, evaluation of these techniques as an approach to model correlation and damage location was initiated.

  5. The National Health Educator Job Analysis 2010: Process and Outcomes

    ERIC Educational Resources Information Center

    Doyle, Eva I.; Caro, Carla M.; Lysoby, Linda; Auld, M. Elaine; Smith, Becky J.; Muenzen, Patricia M.

    2012-01-01

    The National Health Educator Job Analysis 2010 was conducted to update the competencies model for entry- and advanced-level health educators. Qualitative and quantitative methods were used. Structured interviews, focus groups, and a modified Delphi technique were implemented to engage 59 health educators from diverse work settings and experience…

  6. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.

    2017-09-01

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is done in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA over the whole of Australia. To evaluate the filters' performance and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which improve the model groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
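
    Of the two resampling approaches compared above, Systematic Resampling is the simpler to state. A minimal sketch (assuming normalized weights; names illustrative):

    ```python
    import numpy as np

    def systematic_resampling(weights, rng=np.random.default_rng()):
        """Systematic resampling: a single uniform draw places N evenly
        spaced pointers on the cumulative-weight axis, giving lower
        resampling variance than multinomial resampling."""
        N = len(weights)
        positions = (rng.random() + np.arange(N)) / N
        return np.searchsorted(np.cumsum(weights), positions)

    # Usage: idx = systematic_resampling(w / w.sum()); particles = particles[idx]
    ```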

  7. A Hybrid Model for Multiscale Laser Plasma Simulations with Detailed Collisional Physics

    DTIC Science & Technology

    2017-06-15

    [Briefing-chart excerpt; only fragments are recoverable:] Validation against experimental data. Nonequilibrium radiation transport: coupling with a collisional-radiative model. Inelastic collisions in a MF... Approved for Public Release; Distribution is Unlimited. PA# 17383. Collisional Radiative (CR) Overview Updates: investigated Quasi-Steady-State (QSS) techniques, which assume fast kinetics between states within an ion distribution and longer diffusion/decay times than...

  8. The B-dot Earth Average Magnetic Field

    NASA Technical Reports Server (NTRS)

    Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon

    2013-01-01

    The average Earth magnetic field is computed with complex mathematical models based on a mean-square integral. Depending on the selection of the Earth magnetic-field model, the average Earth magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic-field model, but does depend on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. Moreover, the solution from this new technique can be implemented so easily that the flight software can be updated during flight, giving the control system current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.
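
    The b-dot law at the heart of this technique is compact. A minimal sketch of the classic rate-damping rule m = -k dB/dt, with a placeholder gain and two magnetometer samples as inputs (not the paper's flight implementation):

    ```python
    import numpy as np

    def bdot_dipole(B_now, B_prev, dt, k=1e4):
        """Classic b-dot law: command a magnetic dipole opposing the rate of
        change of the measured body-frame field, m = -k * dB/dt. Damping
        follows from the torque m x B without any Earth-field model."""
        B_dot = (B_now - B_prev) / dt                # finite-difference rate
        return -k * B_dot                            # commanded dipole moment
    ```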

  9. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    NASA Astrophysics Data System (ADS)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  10. LITHO1.0: An Updated Crust and Lithosphere Model of the Earth

    DTIC Science & Technology

    2010-09-01

    We are uncertain what causes the remainder of the discrepancy. The measurement discrepancies are much smaller than the signal in the data, and the...short-period group velocity data measured with a new technique which are sensitive to lid properties as well as crustal thickness and average...most progress was made on surface-wave measurements. We use a cluster analysis technique to measure surface-wave group velocity from 10 mHz to 40 mHz.

  11. Continuous data assimilation experiments with the NMC eta model: A GALE IOP 1 case study. [NMC (National Meteorological Center); GALE IOP (Genesis of Atlantic Lows Experiment intensive observing period)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramamurthy, M.K.; Xu, T.Y.

    1993-11-01

    The current major expansion in observational capability of the National Weather Service is principally in the volume of asynchronous data rather than synchronous observations at the standard synoptic times. Hence, the National Meteorological Center is considering a continuous data assimilation system to replace at some time the intermittent system now used by its regional and global operational models. We describe this system, based on the Newtonian relaxation technique, as developed for the eta model. Experiments are performed for the first intensive observing period of the Genesis of Atlantic Lows Experiment (GALE) in January 1986, when strong upper-level cyclogenesis occurred, with a pronounced tropopause fold but only modest surface development. The GALE level IIIb dataset was used for initializing and updating the model. Issues addressed in the experiments include choice of update variable, number and length of update segments; need for updating moisture and surface pressure information; nudging along boundaries; and noise control. Assimilation of data from a single level was also studied. Use of a preforecast assimilation cycle was found to eliminate the spinup problem almost entirely. Multiple, shorter assimilation segments produced better forecasts than a single, longer cycle. Updating the mass field was less effective than nudging the wind field, but assimilating both was best. Assimilation of moisture data, surprisingly, affected the spinup adversely, but nudging the surface pressure information reduced the spurious pillow effect. Assimilation of single-level information was ineffective unless accompanied by increased vertical coupling, obtained from a control integration. 52 refs., 19 figs., 1 tab.

  12. Evaluation of Mesoscale Model Phenomenological Verification Techniques

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred

    2006-01-01

    Forecasters at the Spaceflight Meteorology Group, 45th Weather Squadron, and National Weather Service in Melbourne, FL use mesoscale numerical weather prediction model output in creating their operational forecasts. These models aid in forecasting weather phenomena that could compromise the safety of launch, landing, and daily ground operations, and they must produce reasonable weather forecasts in order for their output to be useful in operations. Considering the importance of model forecasts to operations, their accuracy in forecasting critical weather phenomena must be verified to determine their usefulness. The currently used traditional verification techniques involve an objective point-by-point comparison of model output and observations valid at the same time and location. The resulting statistics can unfairly penalize high-resolution models that make realistic forecasts of a certain phenomenon but are offset from the observations in small time and/or space increments. Manual subjective verification can provide a more valid representation of model performance, but is time-consuming and prone to personal biases. An objective technique that verifies specific meteorological phenomena, much in the way a human would in a subjective evaluation, would likely produce a more realistic assessment of model performance. Such techniques are being developed in the research community. The Applied Meteorology Unit (AMU) was tasked to conduct a literature search to identify phenomenological verification techniques being developed, determine if any are ready to use operationally, and outline the steps needed to implement any operationally-ready techniques into the Advanced Weather Information Processing System (AWIPS). The AMU conducted a search of all literature on the topic of phenomenological-based mesoscale model verification techniques and found 10 different techniques in various stages of development. Six of the techniques were developed to verify precipitation forecasts, one to verify sea breeze forecasts, and three were capable of verifying several phenomena. The AMU also determined the feasibility of transitioning each technique into operations and rated the operational capability of each technique on a subjective 1-10 scale: (1) 1 indicates that the technique is only in the initial stages of development, (2) 2-5 indicates that the technique is still undergoing modifications and is not ready for operations, (3) 6-8 indicates a higher probability of integrating the technique into AWIPS with code modifications, and (4) 9-10 indicates that the technique was created for AWIPS and is ready for implementation. Eight of the techniques were assigned a rating of 5 or below. The other two received ratings of 6 and 7, and none of the techniques received a rating of 9-10. At the current time, there are no phenomenological model verification techniques ready for operational use. However, several of the techniques described in this report may become viable in the future and should be monitored for updates in the literature. The desire to use a phenomenological verification technique is widespread in the modeling community, and it is likely that other techniques besides those described herein are being developed but not yet published. Therefore, the AMU recommends that the literature continue to be monitored for updates to the techniques described in this report and for new techniques being developed whose results have not yet been published.

  13. Ku-band system design study and TDRSS interface analysis

    NASA Technical Reports Server (NTRS)

    Lindsey, W. C.; Mckenzie, T. M.; Choi, H. J.; Tsang, C. S.; An, S. H.

    1983-01-01

    The capabilities of the Shuttle/TDRSS link simulation program (LinCsim) were expanded to account for radio frequency interference (RFI) effects on the Shuttle S-band links; the channel models were updated to reflect the RFI-related hardware changes; the ESTL hardware modeling of the TDRS communication payload was reviewed and evaluated; Shuttle/TDRSS signal acquisition was modeled in LinCsim; LinCsim was upgraded; and possible Shuttle on-orbit navigation techniques were evaluated.

  14. Adaptive System Modeling for Spacecraft Simulation

    NASA Technical Reports Server (NTRS)

    Thomas, Justin

    2011-01-01

    This invention introduces a methodology and associated software tools for automatically learning spacecraft system models without any assumptions regarding system behavior. Data stream mining techniques were used to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). Evaluation on historical ISS telemetry data shows that adaptive system modeling reduces simulation error anywhere from 50 to 90 percent over existing approaches. The purpose of the methodology is to outline how someone can create accurate system models from sensor (telemetry) data. The purpose of the software is to support the methodology. The software provides analysis tools to design the adaptive models. The software also provides the algorithms to initially build system models and continuously update them from the latest streaming sensor data. The main strengths are as follows: it creates accurate spacecraft system models without in-depth system knowledge or any assumptions about system behavior; it automatically updates and calibrates system models using the latest streaming sensor data; it creates device-specific models that capture the exact behavior of devices of the same type; it adapts to evolving systems; and it can reduce computational complexity (faster simulations).

  15. Tightly coupled low cost 3D RISS/GPS integration using a mixture particle filter for vehicular navigation.

    PubMed

    Georgy, Jacques; Noureldin, Aboelmagd

    2011-01-01

    Satellite navigation systems such as the global positioning system (GPS) are currently the most common technique used for land vehicle positioning. However, in GPS-denied environments, there is an interruption in the positioning information. Low-cost micro-electro mechanical system (MEMS)-based inertial sensors can be integrated with GPS and enhance the performance in denied GPS environments. The traditional technique for this integration problem is Kalman filtering (KF). Due to the inherent errors of low-cost MEMS inertial sensors and their large stochastic drifts, KF, with its linearized models, has limited capabilities in providing accurate positioning. Particle filtering (PF) was recently suggested as a nonlinear filtering technique to accommodate for arbitrary inertial sensor characteristics, motion dynamics and noise distributions. An enhanced version of PF called the Mixture PF is utilized in this study to perform tightly coupled integration of a three dimensional (3D) reduced inertial sensors system (RISS) with GPS. In this work, the RISS consists of one single-axis gyroscope and a two-axis accelerometer used together with the vehicle's odometer to obtain 3D navigation states. These sensors are then integrated with GPS in a tightly coupled scheme. In loosely-coupled integration, at least four satellites are needed to provide acceptable GPS position and velocity updates for the integration filter. The advantage of the tightly-coupled integration is that it can provide GPS measurement update(s) even when the number of visible satellites is three or lower, thereby improving the operation of the navigation system in environments with partial blockages by providing continuous aiding to the inertial sensors even during limited GPS satellite availability. To effectively exploit the capabilities of PF, advanced modeling for the stochastic drift of the vertically aligned gyroscope is used. In order to benefit from measurement updates for such drift, which are loosely-coupled updates, a hybrid loosely/tightly coupled solution is proposed. This solution is suitable for downtown environments because of the long natural outages or degradation of GPS. The performance of the proposed 3D Navigation solution using Mixture PF for 3D RISS/GPS integration is examined by road test trajectories in a land vehicle and compared to the KF counterpart.

  16. Tightly Coupled Low Cost 3D RISS/GPS Integration Using a Mixture Particle Filter for Vehicular Navigation

    PubMed Central

    Georgy, Jacques; Noureldin, Aboelmagd

    2011-01-01

    Satellite navigation systems such as the global positioning system (GPS) are currently the most common technique used for land vehicle positioning. However, in GPS-denied environments, there is an interruption in the positioning information. Low-cost micro-electro mechanical system (MEMS)-based inertial sensors can be integrated with GPS and enhance the performance in denied GPS environments. The traditional technique for this integration problem is Kalman filtering (KF). Due to the inherent errors of low-cost MEMS inertial sensors and their large stochastic drifts, KF, with its linearized models, has limited capabilities in providing accurate positioning. Particle filtering (PF) was recently suggested as a nonlinear filtering technique to accommodate for arbitrary inertial sensor characteristics, motion dynamics and noise distributions. An enhanced version of PF called the Mixture PF is utilized in this study to perform tightly coupled integration of a three dimensional (3D) reduced inertial sensors system (RISS) with GPS. In this work, the RISS consists of one single-axis gyroscope and a two-axis accelerometer used together with the vehicle’s odometer to obtain 3D navigation states. These sensors are then integrated with GPS in a tightly coupled scheme. In loosely-coupled integration, at least four satellites are needed to provide acceptable GPS position and velocity updates for the integration filter. The advantage of the tightly-coupled integration is that it can provide GPS measurement update(s) even when the number of visible satellites is three or lower, thereby improving the operation of the navigation system in environments with partial blockages by providing continuous aiding to the inertial sensors even during limited GPS satellite availability. To effectively exploit the capabilities of PF, advanced modeling for the stochastic drift of the vertically aligned gyroscope is used. In order to benefit from measurement updates for such drift, which are loosely-coupled updates, a hybrid loosely/tightly coupled solution is proposed. This solution is suitable for downtown environments because of the long natural outages or degradation of GPS. The performance of the proposed 3D Navigation solution using Mixture PF for 3D RISS/GPS integration is examined by road test trajectories in a land vehicle and compared to the KF counterpart. PMID:22163846

  17. Novel conformal technique to reduce staircasing artifacts at material boundaries for FDTD modeling of the bioheat equation.

    PubMed

    Neufeld, E; Chavannes, N; Samaras, T; Kuster, N

    2007-08-07

    The modeling of thermal effects, often based on the Pennes Bioheat Equation, is becoming increasingly popular. The FDTD technique commonly used in this context suffers considerably from staircasing errors at boundaries. A new conformal technique is proposed that can easily be integrated into existing implementations without requiring a special update scheme. It scales fluxes at interfaces with factors derived from the local surface normal. The new scheme is validated using an analytical solution, and an error analysis is performed to understand its behavior. The new scheme behaves considerably better than the standard scheme. Furthermore, in contrast to the standard scheme, it is possible to obtain with it more accurate solutions by increasing the grid resolution.
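
    For reference, a standard (non-conformal) explicit FDTD update of the Pennes bioheat equation in one dimension is sketched below; the proposed conformal scheme would additionally scale interface fluxes by factors derived from the local surface normal. All parameter names are illustrative:

    ```python
    import numpy as np

    def pennes_step_1d(T, dx, dt, k, rho_c, w_b, rho_c_b, T_art, Q=0.0):
        """One explicit 1-D FDTD update of the Pennes bioheat equation:
        rho*c dT/dt = k d2T/dx2 + w_b * (rho*c)_b * (T_art - T) + Q."""
        lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
        dTdt = (k * lap + w_b * rho_c_b * (T_art - T) + Q) / rho_c
        T_new = T + dt * dTdt
        T_new[0], T_new[-1] = T[0], T[-1]            # fixed boundary temperatures
        return T_new
    ```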

  18. Data Assimilation - Advances and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
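
    As a concrete instance of the ensemble Kalman filter update mentioned above, here is a minimal sketch of the stochastic (perturbed-observation) EnKF analysis step with a linear observation operator; all names and dimensions are illustrative, and this is generic textbook EnKF rather than the specific implementation discussed in the presentation.

        import numpy as np

        def enkf_update(X, H, y, R, rng):
            """Perturbed-observation EnKF analysis step.
            X: (n_state, n_ens) forecast ensemble; H: (n_obs, n_state);
            y: (n_obs,) observation; R: (n_obs, n_obs) obs-error covariance."""
            n_ens = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
            PfHt = A @ (H @ A).T / (n_ens - 1)           # Pf H^T from anomalies
            K = PfHt @ np.linalg.inv(H @ PfHt + R)       # Kalman gain
            Y = y[:, None] + rng.multivariate_normal(
                np.zeros(len(y)), R, size=n_ens).T       # perturbed observations
            return X + K @ (Y - H @ X)                   # analysis ensemble

        rng = np.random.default_rng(1)
        X = rng.normal(size=(10, 40))                    # 10 states, 40 members
        H = np.zeros((2, 10)); H[0, 0] = H[1, 5] = 1.0   # observe states 0 and 5
        Xa = enkf_update(X, H, np.array([0.3, -0.1]), 0.1 * np.eye(2), rng)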

  19. Modelling and Prediction of Spark-ignition Engine Power Performance Using Incremental Least Squares Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Wong, Pak-kin; Vong, Chi-man; Wong, Hang-cheong; Li, Ke

    2010-05-01

    Modern automotive spark-ignition (SI) engine power performance usually refers to output power and torque, which are significantly affected by the setup of control parameters in the engine management system (EMS). EMS calibration is done empirically through tests on the dynamometer (dyno) because no exact mathematical engine model is yet available. With the emerging nonlinear function estimation technique of least squares support vector machines (LS-SVM), an approximate power performance model of an SI engine can be determined by training on sample data acquired from the dyno. A novel incremental algorithm based on the typical LS-SVM is also proposed in this paper, so that the power performance models built from the incremental LS-SVM can be updated whenever new training data arrive, continuously increasing model accuracy. The predicted results using the estimated models from the incremental LS-SVM are in good agreement with the actual test results, with almost the same average accuracy as retraining the models from scratch, but the incremental algorithm significantly shortens the model construction time when new training data arrive.
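
    The incremental idea can be sketched as follows. In its simplest (bias-free) form, an LS-SVM reduces to the linear system (K + I/gamma) alpha = y in the kernel matrix K, so when a new dyno sample arrives the system grows by one row and column and the stored inverse can be updated by block (bordering) inversion instead of being recomputed from scratch. The RBF kernel and the omission of the bias term below are our simplifications, not the paper's exact formulation.

        import numpy as np

        def rbf(X1, X2, s=1.0):
            """Gaussian RBF kernel matrix between the rows of X1 and X2."""
            d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * s * s))

        def border_inverse(Ainv, b, c):
            """Inverse of [[A, b], [b^T, c]] given Ainv (bordering formula)."""
            u = Ainv @ b
            s = c - b @ u                            # Schur complement (scalar)
            new = np.empty((len(b) + 1, len(b) + 1))
            new[:-1, :-1] = Ainv + np.outer(u, u) / s
            new[:-1, -1] = new[-1, :-1] = -u / s
            new[-1, -1] = 1.0 / s
            return new

        gamma = 10.0
        X, y = np.random.randn(50, 4), np.random.randn(50)  # stand-in dyno data
        Ainv = np.linalg.inv(rbf(X, X) + np.eye(len(X)) / gamma)
        x_new, y_new = np.random.randn(1, 4), 0.3           # one new sample
        Ainv = border_inverse(Ainv, rbf(X, x_new).ravel(), 1.0 + 1.0 / gamma)
        X, y = np.vstack([X, x_new]), np.append(y, y_new)
        alpha = Ainv @ y              # updated weights; predict: rbf(Xq, X) @ alpha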

  20. A comparison between EDA-EnVar and ETKF-EnVar data assimilation techniques using radar observations at convective scales through a case study of Hurricane Ike (2008)

    NASA Astrophysics Data System (ADS)

    Shen, Feifei; Xu, Dongmei; Xue, Ming; Min, Jinzhong

    2017-07-01

    This study examines the impacts of assimilating radar radial velocity (Vr) data for the simulation of Hurricane Ike (2008) with two different ensemble generation techniques in the framework of the hybrid ensemble-variational (EnVar) data assimilation system of the Weather Research and Forecasting model. For the generation of ensemble perturbations we apply two techniques, the ensemble transform Kalman filter (ETKF) and the ensemble of data assimilation (EDA). For the ETKF-EnVar, the forecast ensemble perturbations are updated by the ETKF, while for the EDA-EnVar, the hybrid is employed to update each ensemble member with perturbed observations. The ensemble mean is analyzed by the hybrid method with flow-dependent ensemble covariance for both EnVar schemes. The sensitivity of analyses and forecasts to the two ensemble generation techniques is investigated. It is found that the EnVar system is rather stable with different ensemble update techniques in terms of its skill in improving the analyses and forecasts. The EDA-EnVar-based ensemble perturbations are likely to include slightly less organized spatial structures than those in ETKF-EnVar, and the perturbations of the latter are constructed more dynamically. Detailed diagnostics reveal that both EnVar schemes not only produce positive temperature increments around the hurricane center but also systematically adjust the hurricane location with the hurricane-specific error covariance. On average, the analyses and forecasts from the ETKF-EnVar have slightly smaller errors than those from the EDA-EnVar in terms of track, intensity, and precipitation. Moreover, ETKF-EnVar yields better forecasts when verified against conventional observations.

  1. Graphical user interface for intraoperative neuroimage updating

    NASA Astrophysics Data System (ADS)

    Rick, Kyle R.; Hartov, Alex; Roberts, David W.; Lunn, Karen E.; Sun, Hai; Paulsen, Keith D.

    2003-05-01

    Image-guided neurosurgery typically relies on preoperative imaging information that is subject to errors resulting from brain shift and deformation in the OR. A graphical user interface (GUI) has been developed to facilitate the flow of data from the OR to the image volume in order to provide the neurosurgeon with updated views concurrent with surgery. Upon acquisition of registration data for patient position in the OR (using fiducial markers), the Matlab GUI displays ultrasound image overlays on patient-specific preoperative MR images. Registration matrices are also applied to patient-specific anatomical models used for image updating. After the re-oriented brain model is displayed in OR coordinates and the edge of the craniotomy is digitized, gravitational sagging of the brain is simulated using the finite element method. Based on this model, interpolation to the resolution of the preoperative images is performed and the result is re-displayed to the surgeon during the procedure. These steps were completed within reasonable time limits and the interface was relatively easy to use after a brief training period. The techniques described had been developed and used retrospectively prior to this study. Based on the work described here, these steps can now be accomplished in the operating room and provide near real-time feedback to the surgeon.

  2. Update in Infectious Diseases 2017.

    PubMed

    Candel, F J; Peñuelas, M; Lejárraga, C; Emilov, T; Rico, C; Díaz, I; Lázaro, C; Viñuela-Prieto, J M; Matesanz, M

    2017-09-01

    Antimicrobial resistance in complex models of continuous infection is a current issue. The 2017 update course addresses microbiological, epidemiological and clinical aspects useful for a current approach to infectious disease. During the last year, nosocomial pneumonia management guidelines, recommendations for the management of yeast and filamentous fungal infections, review papers on the empirical approach to peritonitis, and extensive guidelines on stewardship have been published. HIV infection is being treated earlier and more intensively. The addition of molecular biology, spectrometry and immunology to the traditional techniques of staining and culture achieves a better and faster microbiological diagnosis. Finally, the management of infection is increasingly integrated, assessing non-antibiotic aspects of treatment.

  3. Fast secant methods for the iterative solution of large nonsymmetric linear systems

    NASA Technical Reports Server (NTRS)

    Deuflhard, Peter; Freund, Roland; Walter, Artur

    1990-01-01

    A family of secant methods based on general rank-1 updates was revisited in view of the construction of iterative solvers for large non-Hermitian linear systems. As it turns out, both Broyden's good and bad update techniques play a special role, but should be associated with two different line search principles. For Broyden's bad update technique, a minimum residual principle is natural, making it theoretically comparable with a series of well-known algorithms such as GMRES. Broyden's good update technique, however, is shown to be naturally linked with a minimum next correction principle, which asymptotically mimics a minimum error principle. The two minimization principles differ significantly for sufficiently large system dimension. Numerical experiments on discretized partial differential equations of convection-diffusion type in 2-D with internal layers give a first impression of the possible power of the derived good Broyden variant.
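
    For reference, Broyden's 'bad' (second) update maintains an approximation H of the inverse Jacobian and updates it from the secant pair s = x_new - x, y = F(x_new) - F(x) as H <- H + ((s - H y) y^T) / (y^T y). A minimal sketch applied to a linear residual F(x) = Ax - b follows; the line search principles discussed in the paper are replaced by a plain unit step for brevity.

        import numpy as np

        def broyden_bad(F, x, n_iter=100, tol=1e-10):
            """Broyden's second ('bad') method: rank-1 updates of H ~ J^{-1}."""
            H = np.eye(len(x))                     # initial inverse-Jacobian guess
            f = F(x)
            for _ in range(n_iter):
                s = -H @ f                         # quasi-Newton step (unit length)
                x_new = x + s
                f_new = F(x_new)
                if np.linalg.norm(f_new) < tol:
                    return x_new
                y = f_new - f
                H += np.outer(s - H @ y, y) / (y @ y)   # 'bad' Broyden update
                x, f = x_new, f_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        x = broyden_bad(lambda v: A @ v - b, np.zeros(2))   # solves A x = b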

  4. LITHO1.0: An Updated Crust and Lithosphere Model of the Earth

    NASA Astrophysics Data System (ADS)

    Masters, G.; Ma, Z.; Laske, G.; Pasyanos, M. E.

    2011-12-01

    We are developing LITHO1.0: an updated crust and lithosphere model of the Earth. The overall plan is to take the popular CRUST2.0 model - a global model of crustal structure with a relatively poor representation of the uppermost mantle - and improve its nominal resolution to 1 degree and extend the model to include lithospheric structure. The new model, LITHO1.0, will be constrained by many different datasets including extremely large new datasets of relatively short period group velocity data. Other data sets include (but are not limited to) compilations of receiver function constraints and active source studies. To date, we have completed the compilation of extremely large global datasets of group velocity for Rayleigh and Love waves from 10 mHz to 40 mHz using a cluster analysis technique. We have also extended the method to measure phase velocity and are complementing the group velocity with global data sets of longer period phase data that help to constrain deep lithosphere properties. To model these data, we require a starting model for the crust at a nominal resolution of 1 degree. This has been developed by constructing a map of crustal thickness using data from receiver function and active source experiments where available, and by using CRUST2.0 where other constraints are not available. Particular care has been taken to make sure that the locations of sharp changes in crustal thickness are accurately represented. This map is then used as a template to extend CRUST2.0 to 1 degree nominal resolution and to develop starting maps of all crustal properties. We are currently modeling the data using two techniques. The first is a linearized inversion about the 3D crustal starting model. Note that it is important to use local eigenfunctions to compute Frechet derivatives due to the extreme variations in crustal structure. Another technique uses a targeted grid search method. A preliminary model for the crustal part of the model will be presented.

  5. 10 Steps in Writing the Research Paper. Fourth Edition.

    ERIC Educational Resources Information Center

    Markman, Roberta H.; And Others

    Retaining the compact format of earlier editions, this updated book presents techniques and models for high school and college students to write successful research papers. After a preface and an introduction to research, the book discusses the 10 steps in writing a research paper: (1) find a subject; (2) read a general article; (3) formulate a…

  6. Validation of New Wind Resource Maps

    NASA Astrophysics Data System (ADS)

    Elliott, D.; Schwartz, M.

    2002-05-01

    The National Renewable Energy Laboratory (NREL) recently led a project to validate updated state wind resource maps for the northwestern United States produced by a private U.S. company, TrueWind Solutions (TWS). The independent validation project was a cooperative activity among NREL, TWS, and meteorological consultants. The independent validation concept originated at a May 2001 technical workshop held at NREL to discuss updating the Wind Energy Resource Atlas of the United States. Part of the workshop, which included more than 20 attendees from the wind resource mapping and consulting community, was dedicated to reviewing the latest techniques for wind resource assessment. It became clear that using a numerical modeling approach for wind resource mapping was rapidly gaining ground as a preferred technique and, if the trend continues, it will soon become the most widely used technique around the world. The numerical modeling approach is relatively fast compared to older mapping methods and, in theory, should be quite accurate because it directly estimates the magnitude of boundary-layer processes that affect the wind resource of a particular location. Numerical modeling output combined with high-resolution terrain data can produce useful wind resource information at a resolution of 1 km or lower. However, because the use of the numerical modeling approach is new (last 3-5 years) and relatively unproven, meteorological consultants have questioned the accuracy of the approach. It was clear that new state or regional wind maps produced by this method would have to undergo independent validation before the results would be accepted by the wind energy community and developers.

  7. A VAS-numerical model impact study using the Gal-Chen variational approach. [Visible Infrared Spin-Scan Radiometer Atmospheric Sounder (VAS)

    NASA Technical Reports Server (NTRS)

    Aune, Robert M.; Uccellini, Louis W.; Peterson, Ralph A.; Tuccillo, James J.

    1987-01-01

    Numerical experiments were conducted to assess the impact of incorporating temperature data from the VISSR Atmospheric Sounder (VAS) using the assimilation technique developed by Gal-Chen (1986), modified for use in the Mesoscale Atmospheric Simulation System (MASS) model. The scheme is designed to utilize the high temporal and horizontal resolution of satellite retrievals while maintaining the fine vertical structure generated by the model. This is accomplished by adjusting the model lapse rates to reflect thicknesses retrieved from VAS and applying a three-dimensional variational adjustment that preserves the distribution of the geopotential fields in the model. A nudging technique, whereby the model temperature fields are gradually adjusted toward the updated temperature fields during model integration, is also tested. An adiabatic version of MASS is used in all experiments to better isolate mass-momentum imbalances. The method has a sustained impact over an 18 hr model simulation.
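
    The nudging step mentioned above amounts to Newtonian relaxation: at each time step the model temperature is pulled toward the retrieval-adjusted field with some relaxation timescale. The sketch below is this generic relaxation with an assumed timescale tau, not the MASS implementation.

        import numpy as np

        def nudge(T_model, T_target, dt, tau):
            """Relax model temperatures toward the VAS-adjusted target field
            with e-folding timescale tau (applied every model time step)."""
            return T_model + (dt / tau) * (T_target - T_model)

        T = np.array([285.0, 280.0, 272.0])        # model layer temperatures (K)
        T_vas = np.array([286.5, 279.0, 271.0])    # retrieval-adjusted targets (K)
        for _ in range(12):                        # one hour of 300 s steps
            T = nudge(T, T_vas, dt=300.0, tau=3600.0)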

  8. Nonlinear and Digital Man-machine Control Systems Modeling

    NASA Technical Reports Server (NTRS)

    Mekel, R.

    1972-01-01

    An adaptive modeling technique is examined by which controllers can be synthesized to provide corrective dynamics to a human operator's mathematical model in closed-loop control systems. The technique utilizes a class of Liapunov functions formulated for this purpose, Liapunov's stability criterion, and a model-reference system configuration. The Liapunov function is formulated to possess variable characteristics so as to take the identification dynamics into consideration. The time derivative of the Liapunov function generates the identification and control laws for the mathematical model system. These laws permit the realization of a controller which updates the human operator's mathematical model parameters so that model and human operator produce the same response when subjected to the same stimulus. A very useful feature is the development of a digital computer program which is easily implemented and modified concurrently with experimentation. The program permits the modeling process to interact with the experimentation process in a mutually beneficial way.
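
    The flavor of such Liapunov-derived update laws can be shown on the classical first-order model-reference case (a textbook illustration of the class of laws described, not the paper's formulation): with tracking error e = y_p - y_m, a quadratic Liapunov function in e and the gain errors yields adaptation laws of the form theta_dot = -gamma * e * x. All dynamics and gains below are assumed for the example.

        a_p, b_p = -1.0, 2.0          # plant ('operator') dynamics, treated as unknown
        a_m, b_m = -4.0, 4.0          # reference model to be matched
        gamma, dt = 5.0, 1e-3
        yp = ym = th_r = th_y = 0.0   # states and adjustable controller gains

        for k in range(20000):
            r = 1.0 if (k * dt) % 2.0 < 1.0 else -1.0   # square-wave stimulus
            u = th_r * r + th_y * yp                    # adaptive control law
            yp += dt * (a_p * yp + b_p * u)             # plant response
            ym += dt * (a_m * ym + b_m * r)             # reference response
            e = yp - ym                                 # tracking error
            th_r += dt * (-gamma * e * r)               # Liapunov-derived updates
            th_y += dt * (-gamma * e * yp)              # (sign(b_p) assumed positive)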

  9. Multi-level damage identification with response reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Chao-Dong; Xu, You-Lin

    2017-10-01

    Damage identification through finite element (FE) model updating usually forms an inverse problem. Solving the inverse identification problem for complex civil structures is very challenging since the dimension of potential damage parameters in a complex civil structure is often very large. Aside from the enormous computational effort needed in iterative updating, the ill-conditioning and non-global identifiability of the inverse problem can hinder the realization of model updating based damage identification for large civil structures. Following a divide-and-conquer strategy, a multi-level damage identification method is proposed in this paper. The entire structure is decomposed into several manageable substructures and each substructure is further condensed as a macro element using the component mode synthesis (CMS) technique. The damage identification is performed at two levels: the first at the macro-element level to locate the potentially damaged region, and the second over the suspicious substructures to further locate and quantify the damage severity. In each level's identification, the damage searching space over which model updating is performed is notably narrowed down, not only reducing the computational burden but also increasing damage identifiability. In addition, Kalman filter-based response reconstruction is performed at the second level to reconstruct the response of the suspicious substructure for exact damage quantification. Numerical studies and laboratory tests are both conducted on a simply supported overhanging steel beam for conceptual verification. The results demonstrate that the proposed multi-level damage identification via response reconstruction considerably improves the accuracy of damage localization and quantification.

  10. Topobathymetric elevation model development using a new methodology: Coastal National Elevation Database

    USGS Publications Warehouse

    Danielson, Jeffrey J.; Poppenga, Sandra K.; Brock, John C.; Evans, Gayla A.; Tyler, Dean; Gesch, Dean B.; Thatcher, Cindy A.; Barras, John

    2016-01-01

    During the coming decades, coastlines will respond to widely predicted sea-level rise, storm surge, and coastal inundation flooding from disastrous events. Because physical processes in coastal environments are controlled by the geomorphology of over-the-land topography and underwater bathymetry, many applications of geospatial data in coastal environments require detailed knowledge of the near-shore topography and bathymetry. In this paper, an updated methodology used by the U.S. Geological Survey Coastal National Elevation Database (CoNED) Applications Project is presented for developing coastal topobathymetric elevation models (TBDEMs) from multiple topographic data sources with adjacent intertidal topobathymetric and offshore bathymetric sources to generate seamlessly integrated TBDEMs. This repeatable, updatable, and logically consistent methodology assimilates topographic data (land elevation) and bathymetry (water depth) into a seamless coastal elevation model. Within the overarching framework, vertical datum transformations are standardized in a workflow that interweaves spatially consistent interpolation (gridding) techniques with a land/water boundary mask delineation approach. Output gridded raster TBDEMs are stacked into a file storage system of mosaic datasets within an Esri ArcGIS geodatabase for efficient updating while maintaining current and updated spatially referenced metadata. Topobathymetric data provide a required seamless elevation product for several science application studies, such as shoreline delineation, coastal inundation mapping, sediment transport, sea-level rise, storm surge modeling, and tsunami impact assessment. These detailed coastal elevation data are critical to depict regions prone to climate change impacts and are essential to planners and managers responsible for mitigating the associated risks and costs to both human communities and ecosystems. The CoNED methodology has been used to construct integrated TBDEM models in Mobile Bay, the northern Gulf of Mexico, San Francisco Bay, the Hurricane Sandy region, and southern California.

  11. A Particle Smoother with Sequential Importance Resampling for soil hydraulic parameter estimation: A lysimeter experiment

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Hendricks Franssen, Harrie-Jan; Moradkhani, Hamid; Pütz, Thomas; Han, Xujun; Vereecken, Harry

    2013-04-01

    An adequate description of soil hydraulic properties is essential for good performance of hydrological forecasts. So far, several studies have shown that data assimilation can reduce parameter uncertainty by considering soil moisture observations. However, these observations, and also the model forcings, are recorded with specific measurement errors. It seems a logical step to base state updating and parameter estimation on observations made at multiple time steps, in order to reduce the influence of outliers at single time steps given measurement errors and unknown model forcings. Such outliers could result in erroneous state estimation as well as inadequate parameters. This has been one of the reasons to use a smoothing technique as implemented for Bayesian data assimilation methods such as the Ensemble Kalman Filter (i.e. the Ensemble Kalman Smoother). Recently, an ensemble-based smoother has been developed for state updating with a SIR particle filter. However, this method has not been used for dual state-parameter estimation. In this contribution we present a Particle Smoother with sequential smoothing of particle weights for state and parameter resampling within a time window, as opposed to the single time step data assimilation used in filtering techniques. This can be seen as an intermediate variant between a parameter estimation technique using global optimization with estimation of single parameter sets valid for the whole period, and sequential Monte Carlo techniques with estimation of parameter sets evolving from one time step to another. The aims are i) to improve the forecast of evaporation and groundwater recharge by estimating hydraulic parameters, and ii) to reduce the impact of single erroneous model inputs/observations by a smoothing method. In order to validate the performance of the proposed method in a real-world application, the experiment is conducted in a lysimeter environment.

  12. A function space approach to smoothing with applications to model error estimation for flexible spacecraft control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1981-01-01

    A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.

  13. Aeroservoelastic Uncertainty Model Identification from Flight Data

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.

    2001-01-01

    Uncertainty modeling is a critical element in the estimation of robust stability margins for stability boundary prediction and robust flight control system development. To date, aeroservoelastic data analysis has paid insufficient attention to uncertainty modeling. Uncertainty can be estimated from flight data using both parametric and nonparametric identification techniques. The model validation problem addressed in this paper is to identify aeroservoelastic models with associated uncertainty structures from a limited amount of controlled excitation inputs over an extensive flight envelope. The challenge of this problem is to update analytical models from flight data estimates while also deriving non-conservative uncertainty descriptions consistent with the flight data. Multisine control surface command inputs and control system feedbacks are used as signals in a wavelet-based modal parameter estimation procedure for model updates. Transfer function estimates are incorporated in a robust minimax estimation scheme to obtain input-output parameters and error bounds consistent with the data and model structure. Uncertainty estimates derived from the data in this manner provide an appropriate and relevant representation for model development and robust stability analysis. This model-plus-uncertainty identification procedure is applied to aeroservoelastic flight data from the NASA Dryden Flight Research Center F-18 Systems Research Aircraft.

  14. Benchmarking GPU and CPU codes for Heisenberg spin glass over-relaxation

    NASA Astrophysics Data System (ADS)

    Bernaschi, M.; Parisi, G.; Parisi, L.

    2011-06-01

    We present a set of possible implementations for Graphics Processing Units (GPU) of the Over-relaxation technique applied to the 3D Heisenberg spin glass model. The results show that a carefully tuned code can achieve more than 100 GFlops/s of sustained performance and update a single spin in about 0.6 nanoseconds. A multi-hit technique that exploits the GPU shared memory further reduces this time. Such results are compared with those obtained by means of a highly-tuned vector-parallel code on latest generation multi-core CPUs.
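
    For context, the over-relaxation move for a classical Heisenberg spin is the energy-preserving reflection of the spin about its local molecular field, s' = 2 (s.h / h.h) h - s. A single-site numpy sketch on a periodic cubic lattice follows; a uniform coupling J is used for brevity, whereas a spin glass would carry quenched random couplings per bond, and the GPU kernel of the paper is not reproduced.

        import numpy as np

        def overrelax_site(spins, J, i, j, k):
            """Reflect spin (i,j,k) about its local field h = J * sum of
            nearest neighbours; the move leaves the energy unchanged."""
            L = spins.shape[0]
            h = np.zeros(3)
            for d in range(3):                     # accumulate the 6 neighbours
                for step in (-1, 1):
                    idx = [i, j, k]
                    idx[d] = (idx[d] + step) % L   # periodic boundaries
                    h += J * spins[tuple(idx)]
            s = spins[i, j, k]
            spins[i, j, k] = 2.0 * (s @ h) / (h @ h) * h - s

        L = 8
        spins = np.random.randn(L, L, L, 3)
        spins /= np.linalg.norm(spins, axis=-1, keepdims=True)   # unit spins
        overrelax_site(spins, J=1.0, i=0, j=0, k=0)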

  15. Recent Updates of A Multi-Phase Transport (AMPT) Model

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Wei

    2008-10-01

    We will present recent updates to the AMPT model, a Monte Carlo transport model for high energy heavy ion collisions, since its first public release in 2004 and the corresponding detailed descriptions in Phys. Rev. C 72, 064901 (2005). The updates often result from user requests. Some of these updates expand the physics processes or descriptions in the model, while others improve the usability of the model, such as providing the initial parton distributions or helping avoid crashes on some operating systems. We will also explain how the AMPT model is being maintained and updated.

  16. An oblate ellipsoidal approach to update a high-resolution geopotential model over the oceans: Study case of EGM2008 and DTU10

    NASA Astrophysics Data System (ADS)

    Sebera, Josef; Bezděk, Aleš; Kostelecký, Jan; Pešek, Ivan; Shum, C. K.

    2016-01-01

    The most important high-resolution geopotential models, such as EGM96 and EGM2008, have been released approximately once per decade. In light of the ability of modern satellite, airborne or terrestrial techniques to provide new data sets every year (e.g., in polar and ocean areas), these data can be readily included in existing models without waiting for a new release. In this article, we present a novel ellipsoidal approach for updating high-resolution models over the oceans with new gridded data. The problem is demonstrated using the EGM2008 model updated with DTU10 geoid and gravity grids that provide additional signal over the Arctic oceans. The results of the procedure are the ellipsoidal and the spherical harmonic coefficients up to degree 4320 and 4400, respectively. These coefficients represent the input data set to within 0.08 mGal globally, with the largest differences located at the land-ocean boundaries, which is two orders of magnitude smaller than the actual accuracy of gravity data from satellite altimetry. Along with the harmonic coefficients, a detailed map of the second vertical derivative of the anomalous potential (or vertical gravitational gradient) on a 1 arc-min grid is anticipated to improve or complement the original DTU10 geoid model. Finally, an optimized set of Jekeli's functions is provided, as they allow for computing oblate ellipsoidal harmonics up to a very high degree and order (>10,000) in terms of the hypergeometric formulation.

  17. Model error estimation for distributed systems described by elliptic equations

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.

  18. Local mesh adaptation technique for front tracking problems

    NASA Astrophysics Data System (ADS)

    Lock, N.; Jaeger, M.; Medale, M.; Occelli, R.

    1998-09-01

    A numerical model is developed for the simulation of moving interfaces in viscous incompressible flows. The model is based on the finite element method with a pseudo-concentration technique to track the front. Since a Eulerian approach is chosen, the interface is advected by the flow through a fixed mesh. Therefore, material discontinuity across the interface cannot be described accurately. To remedy this problem, the model has been supplemented with a local mesh adaptation technique. The latter consists of updating the mesh at each time step to the interface position, such that element boundaries lie along the front. It has been implemented for unstructured triangular finite element meshes. This technique allows an accurate treatment of material discontinuity across the interface and, if necessary, the modelling of interface phenomena such as surface tension by using specific boundary elements. For illustration, two examples are computed and presented in this paper: the broken dam problem and the Rayleigh-Taylor instability. Good agreement has been obtained in the comparison of the numerical results with theory or available experimental data.

  19. An architecture for designing fuzzy logic controllers using neural networks

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1991-01-01

    Described here is an architecture for designing fuzzy controllers through a hierarchical process of control rule acquisition and by using special classes of neural network learning techniques. A new method for learning to refine a fuzzy logic controller is introduced. A reinforcement learning technique is used in conjunction with a multi-layer neural network model of a fuzzy controller. The model learns by updating its prediction of the plant's behavior and is related to Sutton's Temporal Difference (TD) method. The method proposed here has the advantage of using the control knowledge of an experienced operator and fine-tuning it through the process of learning. The approach is applied to a cart-pole balancing system.
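
    The TD connection can be made explicit with Sutton's TD(0) rule, V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)), which moves the current prediction toward the bootstrapped target. A tabular toy sketch follows; the paper instead embeds this kind of prediction update in a multi-layer network, so the table here is purely illustrative.

        import numpy as np

        def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.95):
            """One TD(0) step: move V[s] toward the bootstrapped target."""
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            return V

        V = np.zeros(5)                      # value estimates for 5 states
        V = td0_update(V, s=2, r=-1.0, s_next=3)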

  20. Ground Motion Prediction Model Using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Dhanya, J.; Raghukanth, S. T. G.

    2018-03-01

    This article focuses on developing a ground motion prediction equation based on the artificial neural network (ANN) technique for shallow crustal earthquakes. A hybrid technique combining a genetic algorithm and the Levenberg-Marquardt technique is used for training the model. The present model is developed to predict peak ground velocity and 5% damped spectral acceleration. The input parameters for the prediction are moment magnitude (Mw), closest distance to the rupture plane (Rrup), shear wave velocity in the region (Vs30) and focal mechanism (F). A total of 13,552 ground motion records from 288 earthquakes provided by the updated NGA-West2 database released by the Pacific Earthquake Engineering Research Center are utilized to develop the model. The ANN architecture considered for the model consists of 192 unknowns, including the weights and biases of all the interconnected nodes. The performance of the model is observed to be within the prescribed error limits. In addition, the results from the study are found to be comparable with existing relations in the global database. The developed model is further demonstrated by estimating site-specific response spectra for Shimla city, located in the Himalayan region.
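
    As a rough structural analogue of such a model (not the paper's genetic algorithm plus Levenberg-Marquardt training, and with random stand-in data in place of the NGA-West2 records), a feedforward network mapping (Mw, Rrup, Vs30, F) to a log ground-motion measure could be set up as follows:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        # Stand-in records: columns Mw, Rrup (km), Vs30 (m/s), mechanism flag
        X = np.column_stack([rng.uniform(4, 8, 500),
                             rng.uniform(1, 200, 500),
                             rng.uniform(150, 1500, 500),
                             rng.integers(0, 3, 500)])
        y = rng.normal(size=500)                 # stand-in ln(SA) targets

        Xs = StandardScaler().fit_transform(X)   # normalize the inputs
        ann = MLPRegressor(hidden_layer_sizes=(20,), activation='tanh',
                           solver='lbfgs', max_iter=2000).fit(Xs, y)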

  1. Mitigating nonlinearity in full waveform inversion using scaled-Sobolev pre-conditioning

    NASA Astrophysics Data System (ADS)

    Zuberi, M. AH; Pratt, R. G.

    2018-04-01

    The Born approximation successfully linearizes seismic full waveform inversion if the background velocity is sufficiently accurate. When the background velocity is not known it can be estimated by using model scale separation methods. A frequently used technique is to separate the spatial scales of the model according to the scattering angles present in the data, by using either first- or second-order terms in the Born series. For example, the well-known `banana-donut' and the `rabbit ear' shaped kernels are, respectively, the first- and second-order Born terms in which at least one of the scattering events is associated with a large angle. Whichever term of the Born series is used, all such methods suffer from errors in the starting velocity model because all terms in the Born series assume that the background Green's function is known. An alternative approach to Born-based scale separation is to work in the model domain, for example, by Gaussian smoothing of the update vectors, or some other approach for separation by model wavenumbers. However such model domain methods are usually based on a strict separation in which only the low-wavenumber updates are retained. This implies that the scattered information in the data is not taken into account. This can lead to the inversion being trapped in a false (local) minimum when sharp features are updated incorrectly. In this study we propose a scaled-Sobolev pre-conditioning (SSP) of the updates to achieve a constrained scale separation in the model domain. The SSP is obtained by introducing a scaled Sobolev inner product (SSIP) into the measure of the gradient of the objective function with respect to the model parameters. This modified measure seeks reductions in the L2 norm of the spatial derivatives of the gradient without changing the objective function. The SSP does not rely on the Born prediction of scale based on scattering angles, and requires negligible extra computational cost per iteration. Synthetic examples from the Marmousi model show that the constrained scale separation using SSP is able to keep the background updates in the zone of attraction of the global minimum, in spite of using a poor starting model in which conventional methods fail.
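
    One common way to realize a Sobolev-type gradient preconditioner, shown here as a generic sketch rather than the authors' SSIP construction, is to replace the L2 gradient g by the solution of (I - lam * Laplacian) g_tilde = g, which damps high model wavenumbers smoothly instead of cutting them off. On a periodic grid this reduces to a single FFT filter:

        import numpy as np

        def sobolev_precondition(g, dx, lam):
            """Solve (I - lam * Laplacian) g_tilde = g on a periodic 2D grid:
            division by (1 + lam * |k|^2) in the wavenumber domain."""
            ny, nx = g.shape
            ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
            kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
            k2 = ky[:, None] ** 2 + kx[None, :] ** 2
            return np.real(np.fft.ifft2(np.fft.fft2(g) / (1.0 + lam * k2)))

        g = np.random.randn(128, 128)            # stand-in FWI gradient
        g_smooth = sobolev_precondition(g, dx=10.0, lam=500.0)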

  2. Automatic background updating for video-based vehicle detection

    NASA Astrophysics Data System (ADS)

    Hu, Chunhai; Li, Dongmei; Liu, Jichuan

    2008-03-01

    Video-based vehicle detection is one of the most valuable techniques for the Intelligent Transportation System (ITS). The widely used video-based vehicle detection technique is the background subtraction method. The key problem of this method is how to subtract and update the background effectively. In this paper an efficient background updating scheme based on Zone-Distribution for vehicle detection is proposed to resolve the problems caused by sudden camera perturbation, sudden or gradual illumination change and the sleeping person problem. The proposed scheme is robust and fast enough to satisfy the real-time constraints of vehicle detection.
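
    A minimal sketch of selective running-average background maintenance in this spirit (our own simplification, not the paper's zone-distribution scheme): each zone of the background is blended toward the current frame only when the zone shows no apparent foreground activity, so a vehicle that stops ('sleeping person') is not absorbed into the background too quickly. All thresholds are assumed values.

        import numpy as np

        def update_background(bg, frame, zone=16, alpha=0.05, thresh=25.0):
            """Blend the float background toward the frame zone by zone,
            skipping zones whose mean absolute difference suggests motion."""
            h, w = bg.shape
            for y in range(0, h, zone):
                for x in range(0, w, zone):
                    zb = bg[y:y + zone, x:x + zone]
                    zf = frame[y:y + zone, x:x + zone]
                    if np.abs(zf - zb).mean() < thresh:   # zone looks static
                        zb += alpha * (zf - zb)           # in-place blend
            return bg

        bg = np.zeros((64, 64))
        frame = 10.0 * np.ones((64, 64))
        bg = update_background(bg, frame)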

  3. Thyroid Radiofrequency Ablation: Updates on Innovative Devices and Techniques

    PubMed Central

    Park, Hye Sun; Park, Auh Whan; Chung, Sae Rom; Choi, Young Jun; Lee, Jeong Hyun

    2017-01-01

    Radiofrequency ablation (RFA) is a well-known, effective, and safe method for treating benign thyroid nodules and recurrent thyroid cancers. Thyroid-dedicated devices and basic techniques for thyroid RFA were introduced by the Korean Society of Thyroid Radiology (KSThR) in 2012. Thyroid RFA has now been adopted worldwide, with subsequent advances in devices and techniques. To optimize treatment efficacy and patient safety, understanding the basic and advanced RFA techniques and selecting the optimal treatment strategy are critical. The goal of this review is therefore to provide updates on and analysis of current devices and advanced techniques for RFA treatment of benign thyroid nodules and recurrent thyroid cancers. PMID:28670156

  4. MODEST: A Tool for Geodesy and Astronomy

    NASA Technical Reports Server (NTRS)

    Sovers, Ojars J.; Jacobs, Christopher S.; Lanyi, Gabor E.

    2004-01-01

    Features of the JPL VLBI modeling and estimation software "MODEST" are reviewed. Its main advantages include thoroughly documented model physics, portability, and detailed error modeling. Two unique models are included: modeling of source structure and modeling of both spatial and temporal correlations in tropospheric delay noise. History of the code parallels the development of the astrometric and geodetic VLBI technique and the software retains many of the models implemented during its advancement. The code has been traceably maintained since the early 1980s, and will continue to be updated with recent IERS standards. Scripts are being developed to facilitate user-friendly data processing in the era of e-VLBI.

  5. Improving Simulated Soil Moisture Fields Through Assimilation of AMSR-E Soil Moisture Retrievals with an Ensemble Kalman Filter and a Mass Conservation Constraint

    NASA Technical Reports Server (NTRS)

    Li, Bailing; Toll, David; Zhan, Xiwu; Cosgrove, Brian

    2011-01-01

    Model-simulated soil moisture fields are often biased due to errors in input parameters and deficiencies in model physics. Satellite-derived soil moisture estimates, if retrieved appropriately, represent the spatial mean of soil moisture in a footprint area, and can be used to reduce model bias (at locations near the surface) through data assimilation techniques. While assimilating the retrievals can reduce model bias, it can also destroy the mass balance enforced by the model governing equation, because water is removed from or added to the soil by the assimilation algorithm. In addition, studies have shown that assimilation of surface observations can adversely impact soil moisture estimates in the lower soil layers due to imperfect model physics, even though the bias near the surface is decreased. In this study, an ensemble Kalman filter (EnKF) with a mass-conservation updating scheme was developed to assimilate the actual values of Advanced Microwave Scanning Radiometer (AMSR-E) soil moisture retrievals to improve the mean of soil moisture fields simulated by the Noah land surface model. Assimilation results using the conventional and the mass-conservation updating schemes in the Little Washita watershed of Oklahoma showed that, while both updating schemes reduced the bias in the shallow root zone, the mass-conservation scheme provided better estimates in the deeper profile. The mass-conservation scheme also yielded physically consistent estimates of fluxes and maintained the water budget. Impacts of model physics on the assimilation results are discussed.

  6. Updates in biological therapies for knee injuries: full thickness cartilage defect.

    PubMed

    Nicolini, Alexandre Pedro; Carvalho, Rogerio Teixeira; Dragone, Bruno; Lenza, Mario; Cohen, Moises; Ferretti, Mario

    2014-09-01

    A full-thickness cartilage defect can occur at different ages, but a focal defect is a major concern in the knee of young athletes. It causes impairment and does not heal by itself. Several techniques have been described to treat symptomatic full-thickness cartilage defects. Recently, several advances have been described in the known techniques of microfracture, osteochondral allograft, cell therapy, and others. This article brings an update of the current literature on these well-described techniques for full-thickness cartilage defects.

  7. Situation Model Updating in Young and Older Adults: Global versus Incremental Mechanisms

    PubMed Central

    Bailey, Heather R.; Zacks, Jeffrey M.

    2015-01-01

    Readers construct mental models of situations described by text. Activity in narrative text is dynamic, so readers must frequently update their situation models when dimensions of the situation change. Updating can be incremental, such that a change leads to updating just the dimension that changed, or global, such that the entire model is updated. Here, we asked whether older and young adults make differential use of incremental and global updating. Participants read narratives containing changes in characters and spatial location and responded to recognition probes throughout the texts. Responses were slower when probes followed a change, suggesting that situation models were updated at changes. When either dimension changed, responses to probes for both dimensions were slowed; this provides evidence for global updating. Moreover, older adults showed stronger evidence of global updating than did young adults. One possibility is that older adults perform more global updating to offset reduced ability to manipulate information in working memory. PMID:25938248

  8. Optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme for Intel Many Integrated Core (MIC) architecture

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel Many Integrated Core (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture, and it consists of up to 61 cores connected by a high-performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of the Xeon Phi requires some novel optimization techniques. Those optimization techniques are discussed in this paper. The results show that the optimizations improved the performance of the original code on the Xeon Phi 7120P by a factor of 1.3x.

  9. Uncertainty Estimation in Elastic Full Waveform Inversion by Utilising the Hessian Matrix

    NASA Astrophysics Data System (ADS)

    Hagen, V. S.; Arntsen, B.; Raknes, E. B.

    2017-12-01

    Elastic Full Waveform Inversion (EFWI) is a computationally intensive iterative method for estimating elastic model parameters. A key element of EFWI is the numerical solution of the elastic wave equation, which forms the foundation for quantifying the mismatch between synthetic (modelled) and true (measured) seismic data. The misfit between the modelled and true receiver data is used to update the parameter model to yield a better fit between the modelled and true receiver signals. A common approach to the EFWI model update problem is to use a conjugate gradient search method. In this approach, the resolution and cross-coupling of the estimated parameter update can be found by computing the full Hessian matrix. Resolution of the estimated model parameters depends on the chosen parametrisation, acquisition geometry, and temporal frequency range. Although some understanding has been gained, it is still not clear which elastic parameters can be reliably estimated under which conditions. With few exceptions, previous analyses have been based on arguments using radiation pattern analysis. We use the known adjoint-state technique, with an expansion to compute the Hessian acting on a model perturbation, to conduct our study. The Hessian is used to infer parameter resolution and cross-coupling for different selections of models, acquisition geometries, and data types, including streamer and ocean bottom seismic recordings. Information about the model uncertainty is obtained from the exact Hessian and is essential when evaluating the quality of estimated parameters due to the strong influence of source-receiver geometry and frequency content. The investigation is conducted on both a homogeneous model and the Gullfaks model, where we illustrate the influence of offset on parameter resolution and cross-coupling as a way of estimating uncertainty.

  10. On the difficulty to optimally implement the Ensemble Kalman filter: An experiment based on many hydrological models and catchments

    NASA Astrophysics Data System (ADS)

    Thiboult, A.; Anctil, F.

    2015-10-01

    Forecast reliability and accuracy are prerequisites for successful hydrological applications. This aim may be attained by using data assimilation techniques such as the popular Ensemble Kalman filter (EnKF). Despite its recognized capacity to enhance forecasting by creating a new set of initial conditions, implementation tests have mostly been carried out with a single model and few catchments, leading to case-specific conclusions. This paper performs extensive testing to assess ensemble bias and reliability with 20 conceptual lumped models and 38 catchments in the Province of Québec, using perfect meteorological forecast forcing. The study confirms that the EnKF is a powerful tool for short-range forecasting, but also that it requires a more subtle setting than is frequently recommended. The success of the updating procedure depends to a great extent on the specification of the hyper-parameters. In the implementation of the EnKF, the identification of the hyper-parameters is very unintuitive if the model error is not explicitly accounted for, and best estimates of forcing and observation error lead to overconfident forecasts. It is shown that performance is also related to the choice of updated state variables and that not all state variables should systematically be updated. Additionally, the improvement over the open-loop scheme depends on the watershed and the hydrological model structure, as some models exhibit poor compatibility with EnKF updating. Thus, it is not possible to prescribe a single ideal implementation in detail; conclusions drawn from a single event, catchment, or model are likely to be misleading, since transferring hyper-parameters from one case to another may be hazardous. Finally, achieving reliability and low bias jointly is a daunting challenge, as the optimization of one score is done at the cost of the other.

  11. Assimilation of NUCAPS Retrieved Profiles in GSI for Unique Forecasting Applications

    NASA Technical Reports Server (NTRS)

    Berndt, Emily Beth; Zavodsky, Bradley; Srikishen, Jayanthi; Blankenship, Clay

    2015-01-01

    Hyperspectral IR profiles can be assimilated in GSI as a separate observation type from radiosondes with only changes to tables in the fix directory. Assimilation of the profiles does produce changes to the analysis fields, as evidenced by: innovations larger than +/-2.0 K, which show where individual profiles impact the final temperature analysis; an updated temperature analysis that is colder behind the cold front and warmer in the warm sector; and an updated moisture analysis that is modified more in the low levels and tends to be drier than the original model background. Analysis of model output shows that differences relative to 13-km RAP analyses are smaller when profiles are assimilated with NUCAPS errors, and that CAPE is under-forecasted when assimilating NUCAPS profiles, which could be problematic for severe weather forecasting. Refining the assimilation technique to incorporate an error covariance matrix and creating a separate GSI module to assimilate satellite profiles may improve results.

  12. Molecular biology of mycoplasmas: from the minimum cell concept to the artificial cell.

    PubMed

    Cordova, Caio M M; Hoeltgebaum, Daniela L; Machado, Laís D P N; Santos, Larissa Dos

    2016-01-01

    Mycoplasmas are a large group of bacteria, sorted into different genera in the class Mollicutes, whose main common characteristic, besides the small genome, is the absence of a cell wall. They are considered study models for cellular and molecular biology. We present an updated review of the molecular biology of these model microorganisms and the development of replicative vectors for the transformation of mycoplasmas. Synthetic biology studies inspired by these pioneering works became possible and won the attention of the mainstream media. For the first time, an artificial genome was synthesized (a minimal genome produced from consensus sequences obtained from mycoplasmas). For the first time, a functional artificial cell was constructed by introducing a completely synthesized genome into the cell envelope of a mycoplasma using transformation techniques. Therefore, this article offers an updated insight into the state of the art of these peculiar organisms' molecular biology.

  13. Artificial neural networks and approximate reasoning for intelligent control in space

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1991-01-01

    A method is introduced for learning to refine the control rules of approximate reasoning-based controllers. A reinforcement-learning technique is used in conjunction with a multi-layer neural network model of an approximate reasoning-based controller. The model learns by updating its prediction of the physical system's behavior. The model can use the control knowledge of an experienced operator and fine-tune it through the process of learning. Some of the space domains suitable for applications of the model such as rendezvous and docking, camera tracking, and tethered systems control are discussed.

  14. Finite element modelling and updating of a lively footbridge: The complete process

    NASA Astrophysics Data System (ADS)

    Živanović, Stana; Pavic, Aleksandar; Reynolds, Paul

    2007-03-01

    The finite element (FE) model updating technology was originally developed in the aerospace and mechanical engineering disciplines to automatically update numerical models of structures to match their experimentally measured counterparts. The process of updating identifies the drawbacks in the FE modelling, and the updated FE model can be used to produce more reliable results in further dynamic analysis. In the last decade, the updating technology has been introduced into civil structural engineering, where it can serve as an advanced tool for obtaining reliable modal properties of large structures. The updating process has four key phases: initial FE modelling, modal testing, manual model tuning and automatic updating (conducted using specialist software). However, the published literature does not connect these phases well, although this is crucial when implementing the updating technology. This paper therefore aims to clarify the importance of this linking and to describe the complete model updating process as applicable in civil structural engineering. The complete process, consisting of the four phases, is outlined and brief theory is presented as appropriate. Then, the procedure is implemented on a lively steel box girder footbridge. It was found that even a very detailed initial FE model underestimated the natural frequencies of all seven experimentally identified modes of vibration, with the maximum error being almost 30%. Manual FE model tuning by trial and error found that flexible supports in the longitudinal direction should be introduced at the girder ends to improve correlation between the measured and FE-calculated modes. This significantly reduced the maximum frequency error to only 4%. It was demonstrated that only then could the FE model be automatically updated in a meaningful way. The automatic updating was successfully conducted by updating 22 uncertain structural parameters. Finally, a physical interpretation of all parameter changes is discussed; this interpretation is often missing in the published literature. It was found that the composite slabs were less stiff than originally assumed and that the asphalt layer contributed considerably to the deck stiffness.
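
    The automatic phase typically minimizes a penalty built from frequency residuals and the modal assurance criterion (MAC) between measured and FE-calculated mode shapes. A generic sketch of such an objective follows; the weights, updating parameters and the specialist software used in the study are not reproduced, and fe_solver is a hypothetical stand-in for a run of the FE model.

        import numpy as np
        from scipy.optimize import minimize

        def mac(phi_a, phi_b):
            """Modal assurance criterion between two mode shape vectors."""
            return (phi_a @ phi_b) ** 2 / ((phi_a @ phi_a) * (phi_b @ phi_b))

        def objective(theta, f_exp, phi_exp, fe_solver):
            """Relative frequency errors plus (1 - MAC) terms.
            fe_solver(theta) -> (frequencies, mode shape matrix)."""
            f_fem, phi_fem = fe_solver(theta)
            J = np.sum(((f_exp - f_fem) / f_exp) ** 2)
            J += sum(1.0 - mac(phi_exp[:, i], phi_fem[:, i])
                     for i in range(phi_exp.shape[1]))
            return J

        # res = minimize(objective, theta0, args=(f_exp, phi_exp, fe_solver),
        #                method='Nelder-Mead')   # derivative-free minimisation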

  15. Bayesian Approaches for Model and Multi-mission Satellites Data Fusion

    NASA Astrophysics Data System (ADS)

    Khaki, M., , Dr; Forootan, E.; Awange, J.; Kuhn, M.

    2017-12-01

    Traditionally, data assimilation is formulated as a Bayesian approach that allows one to update model simulations using new incoming observations. This integration is necessary due to the uncertainty in model outputs, which is mainly the result of several drawbacks, e.g., limitations in accounting for the complexity of real-world processes, uncertainties in (unknown) empirical model parameters, and the absence of high-resolution (both spatially and temporally) data. Data assimilation, however, requires knowledge of the physical process of a model, which may be either poorly described or entirely unavailable. Therefore, an alternative method is required to avoid this dependency. In this study we present a novel approach which can be used in hydrological applications. A non-parametric framework based on the Kalman filtering technique is proposed to improve hydrological model estimates without using the model dynamics. In particular, we assess Kalman-Takens formulations that take advantage of the delay coordinate method to reconstruct nonlinear dynamics in the absence of the physical process. This empirical relationship is then used instead of the model equations to integrate satellite products with model outputs. We use water storage variables from World-Wide Water Resources Assessment (W3RA) simulations and update them using Gravity Recovery And Climate Experiment (GRACE) terrestrial water storage (TWS) data, as well as surface soil moisture data from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E), over Australia for the period 2003 to 2011. The performance of the proposed integration method is compared with that of the more traditional assimilation scheme using the Ensemble Square-Root Filter (EnSRF) technique (Khaki et al., 2017), and both are evaluated against ground-based soil moisture and groundwater observations within the Murray-Darling Basin.
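
    The delay coordinate reconstruction that the Kalman-Takens idea relies on can be sketched directly: a scalar series y is embedded as vectors (y_t, y_{t+tau}, ..., y_{t+(m-1)tau}), which by Takens' theorem can stand in for the unavailable model state. The embedding dimension m and lag tau below are illustrative choices, and the series is synthetic.

        import numpy as np

        def delay_embed(y, m=3, tau=1):
            """Rows are delay vectors (y_t, y_{t+tau}, ..., y_{t+(m-1)tau})."""
            n = len(y) - (m - 1) * tau
            return np.column_stack([y[i * tau:i * tau + n] for i in range(m)])

        tws = np.sin(np.linspace(0.0, 20.0, 200))   # stand-in storage anomaly series
        states = delay_embed(tws, m=4, tau=2)       # surrogate 'model states'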

  16. Using experimental modal analysis to assess the behaviour of timber elements

    NASA Astrophysics Data System (ADS)

    Kouroussis, Georges; Fekih, Lassaad Ben; Descamps, Thierry

    2018-03-01

    Timber frameworks are one of the most important and widespread types of structures. Their configurations and joints are usually complex and require a high level of craftsmanship to assemble. In the field of restoration, a good understanding of the structural behaviour is necessary and is often based on assessment techniques dedicated to wood characterisation. This paper presents the use of experimental modal analysis for finite element updating. To do this, several timber beams in a freely supported condition were analysed in order to extract their bending natural characteristics (frequency, damping and mode shapes). Corresponding ABAQUS finite element models were derived which included the effects of local defects (holes, cracks and wood knots), moisture and structural decay. To achieve the modal updating, additional simulations were performed in order to study the sensitivity of the mechanical parameters. With the intent of estimating their mechanical properties, a modal updating procedure was carried out in MATLAB with a Python script created to extract the modal information from the ABAQUS modal analysis results for comparison with the experimental results. The updating was based on unconstrained multivariable function minimisation using a derivative-free method. The objective function was selected from the conventional comparison tools (absolute or relative frequency difference, and/or modal assurance criterion). This testing technique was used to determine the dynamic mechanical properties of timber beams, such as the anisotropic Young's moduli and damping ratio. To verify the moduli, a series of static 4-point bending tests and STS04 classifications were conducted. The results also revealed that local defects have a negligible influence on natural frequencies. The results demonstrate that this assessment tool offers an effective method to obtain the mechanical properties of timber elements, especially when on-site and non-destructive techniques are needed, for example when retrofitting an existing structure.

  17. Implementation of the US EPA (United States Environmental Protection Agency) Regional Oxidant Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novak, J.H.

    1984-05-01

    Model design, implementation and quality assurance procedures can have a significant impact on the effectiveness and long-term utility of any modeling approach. The Regional Oxidant Modeling System (ROMS) is exceptionally complex because it treats all chemical and physical processes thought to affect ozone concentration on a regional scale. Thus, to effectively illustrate useful design and implementation techniques, this paper describes the general modeling framework which forms the basis of ROMS. This framework is flexible enough to allow straightforward update or replacement of the chemical kinetics mechanism and/or any theoretical formulations of the physical processes. Use of the Jackson Structured Programming (JSP) method to implement this modeling framework has not only increased programmer productivity and the quality of the resulting programs, but has also provided standardized program design, dynamic documentation, and easily maintainable and transportable code. A summary of the JSP method is presented to encourage modelers to pursue this technique in their own model development efforts. In addition, since data preparation is such an integral part of a successful modeling system, the ROMS processor network is described with emphasis on the internal quality control techniques.

  18. Characterization of Orbital Debris via Hyper-Velocity Laboratory-Based Tests

    NASA Technical Reports Server (NTRS)

    Cowardin, Heather; Liou, J.-C.; Anz-Meador, Phillip; Sorge, Marlon; Opiela, John; Fitz-Coy, Norman; Huynh, Tom; Krisko, Paula

    2017-01-01

    Existing DOD and NASA satellite breakup models are based on a key laboratory test, Satellite Orbital debris Characterization Impact Test (SOCIT), which has supported many applications and matched on-orbit events involving older satellite designs reasonably well over the years. In order to update and improve these models, the NASA Orbital Debris Program Office, in collaboration with the Air Force Space and Missile Systems Center, The Aerospace Corporation, and the University of Florida, replicated a hypervelocity impact using a mock-up satellite, DebriSat, in controlled laboratory conditions. DebriSat is representative of present-day LEO satellites, built with modern spacecraft materials and construction techniques. Fragments down to 2 mm in size will be characterized by their physical and derived properties. A subset of fragments will be further analyzed in laboratory radar and optical facilities to update the existing radar-based NASA Size Estimation Model (SEM) and develop a comparable optical-based SEM. A historical overview of the project, status of the characterization process, and plans for integrating the data into various models will be discussed herein.

  20. State estimator for multisensor systems with irregular sampling and time-varying delays

    NASA Astrophysics Data System (ADS)

    Peñarrocha, I.; Sanchis, R.; Romero, J. A.

    2012-08-01

    This article addresses state estimation in linear time-varying systems with several sensors of different availability, sampled randomly in time and whose measurements have a time-varying delay. The approach is based on a modification of the Kalman filter with a negative-time measurement update strategy, which avoids re-running the full standard Kalman filter, using full augmented-order models, or applying measurement reorganisation techniques, leading to an algorithm with lower implementation cost. The update equations are run every time a new measurement becomes available, independently of when it was taken. The approach is useful for networked control systems, systems with long delays and scarce measurements, and out-of-sequence measurements.

  1. DDDAS for space applications

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Pham, Khanh D.; Shen, Dan; Chen, Genshe

    2018-05-01

    The dynamic data-driven applications systems (DDDAS) paradigm injects measurements into the execution model for enhanced system performance. One area of interest for DDDAS is space situation awareness (SSA). For SSA, data are collected about the space environment to determine object motions, environments, and model updates. Dynamic coupling between data and models enhances the capabilities of each system by complementing models with data for system control, execution, and sensor management. The paper overviews some of the recent developments in SSA made possible by DDDAS techniques, including object detection, resident space object tracking, atmospheric models for enhanced sensing, cyber protection, and information management.

  2. Certification of a hybrid parameter model of the fully flexible Shuttle Remote Manipulator System

    NASA Technical Reports Server (NTRS)

    Barhorst, Alan A.

    1995-01-01

    The development of high fidelity models of mechanical systems with flexible components is in flux. Many working models of these devices assume the elastic motion is small and can be superimposed on the overall rigid body motion. A drawback of this type of modeling technique is that the linear modal model of the device must be regenerated if the elastic motion strays sufficiently far from the base rigid motion. An advantage is that it uses NASTRAN modal data, which is the NASA standard means of modal information exchange. A disadvantage of the linear modeling is that it fails to accurately represent large motion of the system unless constant modal updates are performed. In this study, which is a continuation of a project started last year, the drawback of the currently used modal snapshot modeling technique is addressed in a rigorous fashion by novel and easily applied means.

  3. Adaptive control of an exoskeleton robot with uncertainties on kinematics and dynamics.

    PubMed

    Brahmi, Brahim; Saad, Maarouf; Ochoa-Luna, Cristobal; Rahman, Mohammad H

    2017-07-01

    In this paper, we propose a new adaptive control technique based on nonlinear sliding mode control (JSTDE), taking kinematic and dynamic uncertainties into account. The approach is applied to an exoskeleton robot with uncertain kinematics and dynamics. The adaptation design is based on Time Delay Estimation (TDE). The proposed strategy does not require well-defined dynamic and kinematic models of the robot system. The update laws are designed using a Lyapunov function to solve the adaptation problem systematically, proving closed-loop stability and ensuring asymptotic convergence of the output tracking errors. Experimental results show the effectiveness and feasibility of the JSTDE technique in dealing with the variation of the unknown nonlinear dynamics and kinematics of the exoskeleton model.

  4. Updating Landsat-derived land-cover maps using change detection and masking techniques

    NASA Technical Reports Server (NTRS)

    Likens, W.; Maw, K.

    1982-01-01

    The California Integrated Remote Sensing System's San Bernardino County Project was devised to study the utilization of a data base at a number of jurisdictional levels. The present paper discusses the implementation of change-detection and masking techniques in the updating of Landsat-derived land-cover maps. A baseline land-cover classification was first created from a 1976 image; the adjusted 1976 image was then compared with a 1979 scene by the techniques of (1) multidate image classification, (2) difference image-distribution tails thresholding, (3) difference image classification, and (4) multi-dimensional chi-square analysis of a difference image. The union of the results of methods 1, 3 and 4 was used to create a mask of possible change areas between 1976 and 1979, which served to limit analysis of the update image and reduce comparison errors in unchanged areas. Spatial smoothing of change-detection products and combining the results of different change-detection algorithms are also shown to improve Landsat change-detection accuracies.
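
    A small sketch of technique (2), difference-image tail thresholding, on synthetic arrays (the images, threshold value, and variable names are illustrative assumptions):

```python
# Illustrative sketch: threshold the tails of a difference-image distribution
# to build a change mask, then use the mask to restrict reclassification to
# likely-change pixels. Arrays are synthetic stand-ins for Landsat scenes.
import numpy as np

rng = np.random.default_rng(1)
img_1976 = rng.normal(100.0, 20.0, size=(512, 512))    # baseline scene
img_1979 = img_1976 + rng.normal(0.0, 5.0, size=(512, 512))
img_1979[200:260, 300:380] += 40.0                     # a patch of real change

diff = img_1979 - img_1976
mu, sigma = diff.mean(), diff.std()
k = 2.5                                                # tail threshold (std units)
change_mask = np.abs(diff - mu) > k * sigma            # possible-change pixels

# Only masked pixels would be re-examined in the update classification,
# limiting comparison errors in unchanged areas.
print(f"{change_mask.mean():.1%} of pixels flagged for re-analysis")
```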

  5. Operational Impact of Improved Space Tracking on Collision Avoidance in the Future LEO Space Debris Environment

    NASA Astrophysics Data System (ADS)

    Sibert, D.; Borgeson, D.; Peterson, G.; Jenkin, A.; Sorge, M.

    2010-09-01

    Even if global space policy successfully curtails on-orbit explosions and ASAT demonstrations, studies indicate that the number of debris objects in Low Earth Orbit (LEO) will continue to grow solely from debris-on-debris collisions and debris generated from new launches. This study examines the threat posed by this growing space debris population over the next 30 years and how improvements in our space tracking capabilities can reduce the number of Collision Avoidance (COLA) maneuvers required to keep the risk of operational satellite loss within tolerable limits. Particular focus is given to satellites operated by the Department of Defense (DoD) and Intelligence Community (IC) in LEO. The following debris field and space tracking performance parameters were varied parametrically in the experiment to study the impact on the number of collision avoidance maneuvers required:

    - Debris Field Density (by year: 2009, 2019, 2029, and 2039)
    - Quality of Track Update (starting 1-sigma error ellipsoid)
    - Future Propagator Accuracy (error ellipsoid growth rates, Special Perturbations in 3 axes)
    - Track Update Rate for Debris (stochastic)
    - Track Update Rate for Payloads (stochastic)

    Baseline values matching present-day tracking performance for quality of track update, propagator accuracy, and track update rate were derived by analyzing updates to the unclassified Satellite Catalog (SatCat). Track update rates varied significantly between active payloads and debris, so different models were used for the track update rates of military payloads and debris. The analysis was conducted using the System Effectiveness Analysis Simulation (SEAS), an agent-based model developed by the United States Air Force Space Command's Space and Missile Systems Center to evaluate the military utility of space systems. The future debris field was modeled by The Aerospace Corporation using a tool chain which models the growth of the 10 cm+ debris field using high-fidelity propagation, collision, and breakup models. Our analysis uses Two Line Element (TLE) sets and surface area data generated by this model, sampled at the years 2019, 2029, and 2039. Data for the 2009 debris field are taken from the unclassified SatCat. By using Monte Carlo simulation techniques and varying the epoch of the military constellation relative to the debris field, we were able to remove the bias of initial conditions. Additional analysis examined the military utility impact of temporarily losing the use of Intelligence, Surveillance and Reconnaissance (ISR) assets due to COLA maneuvers during a large classified scenario with stressful satellite tasking. This paper and presentation focus only on unclassified results quantifying the potential reduction in the risk assumed by satellite flyers, and the potential reduction in Delta-V usage that is possible if tracking performance can be improved in any of these three areas, reducing the positional uncertainty of space objects at the time of closest approach.

  6. Parameter identification of material constants in a composite shell structure

    NASA Technical Reports Server (NTRS)

    Martinez, David R.; Carne, Thomas G.

    1988-01-01

    One of the basic requirements in engineering analysis is the development of a mathematical model describing the system. Frequently, comparisons with test data are used as a measure of the adequacy of the model, and an attempt is typically made to update or improve the model to provide a test-verified analysis tool. System identification provides a systematic procedure for accomplishing this task. The terms system identification, parameter estimation, and model correlation all refer to techniques that use test information to update or verify mathematical models. The goal of system identification is to improve the correlation of model predictions with measured test data and to produce accurate, predictive models. For nonmetallic structures, such as the composite shell considered here, the modeling task is often difficult due to uncertainties in the elastic constants. A finite element model of the shell was created, which included uncertain orthotropic elastic constants. A modal survey test was then performed on the shell. The resulting modal data, along with the finite element model, were used in a Bayes estimation algorithm. This permitted the use of covariance matrices to weight confidence in the initial parameter values as well as confidence in the measured test data. The estimation procedure also employed the concept of successive linearization to obtain an approximate solution to the original nonlinear estimation problem.
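
    The Bayes estimation step with successive linearization can be sketched as a single Gauss-Newton update in which covariance matrices weight confidence in the prior parameters and in the measured data. The toy measurement model and all numbers below are our assumptions, not the original shell model:

```python
# Hedged sketch: one successive-linearization (Gauss-Newton) update of a
# maximum a posteriori estimate, with covariance-weighted prior and data.
import numpy as np

def h(theta):
    """Toy measurement model mapping two elastic constants to three modal
    frequencies (a stand-in for the finite element eigenproblem)."""
    return np.array([np.sqrt(theta[0]), np.sqrt(theta[0] + theta[1]), theta[1] ** 0.4])

def jacobian(theta, eps=1e-6):
    """Forward-difference sensitivity of h with respect to the parameters."""
    base = h(theta)
    J = np.zeros((base.size, theta.size))
    for j in range(theta.size):
        tp = theta.copy(); tp[j] += eps
        J[:, j] = (h(tp) - base) / eps
    return J

theta0 = np.array([9.0, 4.0])     # prior parameter estimates
P0 = np.diag([4.0, 1.0])          # confidence in prior values
R = 0.01 * np.eye(3)              # confidence in measured modal data
y = np.array([3.2, 3.7, 1.9])     # measured frequencies (synthetic)

S = jacobian(theta0)
gain = np.linalg.solve(S.T @ np.linalg.inv(R) @ S + np.linalg.inv(P0),
                       S.T @ np.linalg.inv(R))
theta1 = theta0 + gain @ (y - h(theta0))   # linearized Bayesian update
print(theta1)
```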

  7. Timing Interactions in Social Simulations: The Voter Model

    NASA Astrophysics Data System (ADS)

    Fernández-Gracia, Juan; Eguíluz, Víctor M.; Miguel, Maxi San

    The recent availability of huge high-resolution datasets on human activities has revealed the heavy-tailed nature of interevent time distributions. In social simulations of interacting agents, the standard approach has been to use Poisson processes to update the state of the agents, which gives rise to very homogeneous activity patterns with a well-defined characteristic interevent time. Taking the voter model as a paradigmatic opinion model, we review the standard update rules and propose two new update rules able to account for heterogeneous activity patterns. Under the new rules, each node is updated with a probability that depends on the time since the node's last event, where an event can be an update attempt (exogenous update) or a change of state (endogenous update). We find that both update rules can give rise to power-law interevent time distributions, although the endogenous one does so more robustly. Furthermore, for the exogenous update rule and the standard update rules, the voter model does not reach consensus in the infinite-size limit, while for the endogenous update there exists a coarsening process that drives the system toward consensus configurations.
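
    A minimal simulation of the endogenous rule on a complete graph (our illustration; the network, rates, and stopping rule are assumptions) might look like this:

```python
# Minimal sketch of a voter model with an endogenous update rule: a node
# activates with probability inversely proportional to the time since it
# last changed state. Complete graph, binary opinions; all values are toys.
import numpy as np

rng = np.random.default_rng(2)
N, steps = 200, 20000
state = rng.integers(0, 2, size=N)     # binary opinions
last_change = np.zeros(N)              # time of each node's last state change

for t in range(1, steps + 1):
    i = rng.integers(N)
    # Endogenous activation: probability ~ 1 / (time since last change).
    if rng.random() < 1.0 / (t - last_change[i]):
        j = rng.integers(N)            # copy a random neighbour (complete graph)
        if state[j] != state[i]:
            state[i] = state[j]
            last_change[i] = t
    if state.sum() in (0, N):          # absorbing consensus state reached
        print(f"consensus at step {t}")
        break
```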

  8. Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions

    DTIC Science & Technology

    2017-09-01

    Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions, by Matthew D. Bouwense. Approved for public release; distribution is unlimited.

  9. Updated Principle of Corresponding States

    ERIC Educational Resources Information Center

    Ben-Amotz, Dor; Gift, Alan D.; Levine, R. D.

    2004-01-01

    The rule of corresponding states, which shows the connection between the thermodynamic properties of various liquids, is re-examined. The overall likeness is observed by using an updated scaling technique of Lennard-Jones corresponding states (LJ-CS).

  10. ERM model analysis for adaptation to hydrological model errors

    NASA Astrophysics Data System (ADS)

    Baymani-Nezhad, M.; Han, D.

    2018-05-01

    Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models that can lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in the hydrological sciences and has not been entirely solved, owing to lack of knowledge about the future state of the catchment under study. In flood forecasting, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecast. Hence, to control these errors, several methods have been proposed for updating rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of error, timing, shape and volume, which are the common errors in hydrological modelling. The new lumped ERM model was selected for this study, and its parameters were evaluated for use in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with these errors without the need to recalibrate the model.

  11. Power Management and Distribution (PMAD) Model Development: Final Report

    NASA Technical Reports Server (NTRS)

    Metcalf, Kenneth J.

    2011-01-01

    Power management and distribution (PMAD) models were developed in the early 1990's to model candidate architectures for various Space Exploration Initiative (SEI) missions. They were used to generate "ballpark" component mass estimates to support conceptual PMAD system design studies. The initial set of models was provided to NASA Lewis Research Center (since renamed Glenn Research Center) in 1992. They were developed to estimate the characteristics of power conditioning components predicted to be available in the 2005 timeframe. Early 90's component and device designs and material technologies were projected forward to the 2005 timeframe, and algorithms reflecting those design and material improvements were incorporated into the models to generate mass, volume, and efficiency estimates for circa 2005 components. The models are about ten years old now and NASA GRC requested a review of them to determine if they should be updated to bring them into agreement with current performance projections or to incorporate unforeseen design or technology advances. This report documents the results of this review and the updated power conditioning models and new transmission line models generated to estimate post 2005 PMAD system masses and sizes. This effort continues the expansion and enhancement of a library of PMAD models developed to allow system designers to assess future power system architectures and distribution techniques quickly and consistently.

  12. Evaluating uncertainties in multi-layer soil moisture estimation with support vector machines and ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Liu, Di; Mishra, Ashok K.; Yu, Zhongbo

    2016-07-01

    This paper examines the combination of support vector machines (SVM) and the dual ensemble Kalman filter (EnKF) technique to estimate root zone soil moisture at different soil layers up to 100 cm depth. Multiple experiments are conducted in a data-rich environment to construct and validate the SVM model and to explore the effectiveness and robustness of the EnKF technique. It was observed that the performance of the SVM relies more on the initial length of the training set than on other factors (e.g., cost function, regularization parameter, and kernel parameters). The dual EnKF technique proved efficient at improving the SVM with observed data, either at each time step or at flexible time steps. The EnKF technique reaches its maximum efficiency when the updating ensemble size approaches a certain threshold. It was also observed that the SVM model performance for multi-layer soil moisture estimation can be influenced by the rainfall magnitude (e.g., dry and wet spells).
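
    For reference, a bare-bones perturbed-observation EnKF measurement update of a layered soil moisture ensemble (synthetic numbers throughout) can be sketched as:

```python
# Hedged numpy sketch of the ensemble Kalman filter measurement update used
# to correct a soil moisture forecast ensemble with a surface observation.
# Ensemble values and error statistics are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_ens = 50
ensemble = rng.normal(0.25, 0.03, size=(3, n_ens))   # 3 soil layers x members
obs, obs_err = 0.30, 0.02                            # surface observation, std
H = np.array([1.0, 0.0, 0.0])                        # observe top layer only

# Ensemble covariance statistics.
anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
PHt = anomalies @ (H @ anomalies) / (n_ens - 1)      # state-obs cross covariance
HPHt = np.var(H @ ensemble, ddof=1)                  # obs-space variance

K = PHt / (HPHt + obs_err**2)                        # Kalman gain, shape (3,)
# Perturbed-observation update applied to each ensemble member.
perturbed = obs + obs_err * rng.standard_normal(n_ens)
ensemble += np.outer(K, perturbed - H @ ensemble)
print(ensemble.mean(axis=1))                         # updated layer means
```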

  13. LANDFIRE 2015 Remap – Utilization of Remotely Sensed Data to Classify Existing Vegetation Type and Structure to Support Strategic Planning and Tactical Response

    USGS Publications Warehouse

    Picotte, Joshua J.; Long, Jordan; Peterson, Birgit; Nelson, Kurtis

    2017-01-01

    The LANDFIRE Program produces national scale vegetation, fuels, fire regimes, and landscape disturbance data for the entire U.S. These data products have been used to model the potential impacts of fire on the landscape [1], the wildfire risks associated with land and resource management [2, 3], and those near population centers and accompanying Wildland Urban Interface zones [4], as well as many other applications. The initial LANDFIRE National Existing Vegetation Type (EVT) and vegetation structure layers, including vegetation percent cover and height, were mapped circa 2001 and released in 2009 [5]. Each EVT is representative of the dominant plant community within a given area. The EVT layer has since been updated by identifying areas of landscape change and modifying the vegetation types utilizing a series of rules that consider the disturbance type, severity of disturbance, and time since disturbance [6, 7]. Non-disturbed areas were adjusted for vegetation growth and succession. LANDFIRE vegetation structure layers also have been updated by using data modeling techniques [see 6 for a full description]. The subsequent updated versions of LANDFIRE include LANDFIRE 2008, 2010, 2012, and LANDFIRE 2014 is being incrementally released, with all data being released in early 2017. Additionally, a comprehensive remap of the baseline data, LANDFIRE 2015 Remap, is being prototyped, and production is tentatively planned to begin in early 2017 to provide a more current baseline for future updates.

  14. A review of statistical updating methods for clinical prediction models.

    PubMed

    Su, Ting-Li; Jaki, Thomas; Hickey, Graeme L; Buchan, Iain; Sperrin, Matthew

    2018-01-01

    A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models to a new population or context, and these should be implemented, using a breadth of complementary statistical methods, rather than developing a new clinical prediction model from scratch.
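
    The simplest of the strategies above, coefficient updating, can be sketched as logistic recalibration: refit only an intercept and a slope on the existing model's linear predictor using data from the new population. The data and coefficients below are simulated, not from the cardiac surgery study:

```python
# Sketch of coefficient updating via logistic recalibration. An existing
# model's linear predictor is recalibrated on new data; all values simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X_new = rng.standard_normal((500, 3))        # new population's covariates
beta_old = np.array([0.8, -0.5, 0.3])        # existing model coefficients
lp = X_new @ beta_old                        # original linear predictor

# Simulate outcomes that are miscalibrated relative to the old model.
y_new = rng.random(500) < 1 / (1 + np.exp(-(0.5 + 0.7 * lp)))

# Logistic recalibration: fit intercept and slope on the linear predictor.
recal = LogisticRegression().fit(lp.reshape(-1, 1), y_new)
intercept, slope = recal.intercept_[0], recal.coef_[0, 0]
print(f"updated model: eta = {intercept:.2f} + {slope:.2f} * lp")
```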

  15. OAO battery data analysis

    NASA Technical Reports Server (NTRS)

    Gaston, S.; Wertheim, M.; Orourke, J. A.

    1973-01-01

    Summary, consolidation and analysis of specifications, manufacturing process and test controls, and performance results for OAO-2 and OAO-3 lot 20 Amp-Hr sealed nickel cadmium cells and batteries are reported. Correlation of improvements in control requirements with performance is a key feature. Updates for a cell/battery computer model to improve performance prediction capability are included. Applicability of regression analysis computer techniques to relate process controls to performance is checked.

  16. State updating and calibration period selection to improve dynamic monthly streamflow forecasts for an environmental flow management application

    NASA Astrophysics Data System (ADS)

    Gibbs, Matthew S.; McInerney, David; Humphrey, Greer; Thyer, Mark A.; Maier, Holger R.; Dandy, Graeme C.; Kavetski, Dmitri

    2018-02-01

    Monthly to seasonal streamflow forecasts provide useful information for a range of water resource management and planning applications. This work focuses on improving such forecasts by considering the following two aspects: (1) state updating to force the models to match observations from the start of the forecast period, and (2) selection of a shorter calibration period that is more representative of the forecast period, compared to a longer calibration period traditionally used. The analysis is undertaken in the context of using streamflow forecasts for environmental flow water management of an open channel drainage network in southern Australia. Forecasts of monthly streamflow are obtained using a conceptual rainfall-runoff model combined with a post-processor error model for uncertainty analysis. This model set-up is applied to two catchments, one with stronger evidence of non-stationarity than the other. A range of metrics are used to assess different aspects of predictive performance, including reliability, sharpness, bias and accuracy. The results indicate that, for most scenarios and metrics, state updating improves predictive performance for both observed rainfall and forecast rainfall sources. Using the shorter calibration period also improves predictive performance, particularly for the catchment with stronger evidence of non-stationarity. The results highlight that a traditional approach of using a long calibration period can degrade predictive performance when there is evidence of non-stationarity. The techniques presented can form the basis for operational monthly streamflow forecasting systems and provide support for environmental decision-making.

  17. Building a foundation for continued dialogue between climate science and water resource communities

    NASA Astrophysics Data System (ADS)

    Vano, J. A.; Arnold, J.; Clark, M. P.; Gutmann, E. D.; Hamman, J.; Nijssen, B.; Wood, A.

    2017-12-01

    Research into climate change has led to the development of many global climate models, downscaling techniques, and impacts models. This proliferation of information has produced insights into how climate change will affect hydrology that are more robust than any single approach could provide, which is helpful for advancing the science. However, the variety of approaches makes it challenging to decide what information to use in water resource planning and management. Each technique has strengths, weaknesses, and associated uncertainties, and approaches are continually being updated. Here we provide user-focused guidance, framed modularly so that it can be expanded and updated in a targeted way. This includes dos and don'ts for how to use climate change information in water resource planning and management that can be read at multiple levels. It provides context for those seeking to understand the general need, opportunities, and challenges of including climate change information, as well as details (frequently asked questions and examples) and direction to further guidance and resources for those engaged in the technical work. This guidance is intended to provide a foundation for continued dialogue within and between the climate science and application communities, to increase the utility and appropriate use of climate change information.

  18. Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.

    2017-04-01

    To address time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time-difference values of the input and output variables are used as training samples to construct the model, which reduces the effect of nonlinearity on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To avoid an excessively high model updating frequency, a confidence value is introduced, which is updated adaptively according to the results of the model performance assessment; once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxy-benzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method reduces computation effectively, improves prediction accuracy by making use of process information, and reflects the process characteristics accurately.
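
    As a rough sketch of the moving-window, time-difference idea (an illustration, not the authors' recursive PLS algorithm; the data, window length, and trigger threshold are assumptions):

```python
# Hedged sketch: train a PLS model on time-differenced variables over a
# sliding window and refit only when a simple performance check fails.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
T = 400
X = rng.standard_normal((T, 4))
y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + 0.1 * rng.standard_normal(T)

window, threshold = 100, 0.05             # window length and update trigger
dX, dy = np.diff(X, axis=0), np.diff(y)   # time-difference variables

model = PLSRegression(n_components=2).fit(dX[:window], dy[:window])
for t in range(window, len(dy)):
    pred = model.predict(dX[t : t + 1]).ravel()[0]
    if abs(pred - dy[t]) > threshold:     # confidence check failed: refit
        lo = t - window + 1
        model = PLSRegression(n_components=2).fit(dX[lo : t + 1], dy[lo : t + 1])
```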

  19. A stochastic approach for model reduction and memory function design in hydrogeophysical inversion

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Kellogg, A.; Terry, N.

    2009-12-01

    Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as reservoir petroleum exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, given the enormous computational demand of seismic and EM forward modeling, having too many unknown parameters in the modeling domain is usually a serious problem. For shallow subsurface applications, the characterization can be very complicated considering the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is therefore warranted to reduce the dimension of the parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatial-temporal correlations. This is even more critical when monitoring subsurface processes using geophysical data collected at different times. The normal practice is to obtain the inverse images individually. These images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose to use a stochastic framework integrating the minimum-relative-entropy concept, quasi-Monte Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significance to the geophysical responses. The analyses enable us to reduce the parameter space significantly. The approach can be combined with Bayesian updating, allowing us to treat the updated 'posterior' pdf as a memory function, which stores all the information to date about the distributions of soil/field attributes/properties; the memory function is then considered as a new prior, and samples are generated from it for further updating when more geophysical data become available. We applied this approach to deep oil reservoir characterization and to shallow subsurface flow monitoring. The model reduction approach reliably reduces the joint seismic/EM/radar inversion computational time to reasonable levels. Continuous inversion images are obtained using time-lapse data with the "memory function" applied in the Bayesian inversion.

  20. Update rules and interevent time distributions: slow ordering versus no ordering in the voter model.

    PubMed

    Fernández-Gracia, J; Eguíluz, V M; San Miguel, M

    2011-07-01

    We introduce a general methodology of update rules accounting for arbitrary interevent time (IET) distributions in simulations of interacting agents. We consider in particular update rules that depend on the state of the agent, so that the update becomes part of the dynamical model. As an illustration we consider the voter model in fully connected, random, and scale-free networks with an activation probability inversely proportional to the time since the last action, where an action can be an update attempt (an exogenous update) or a change of state (an endogenous update). We find that in the thermodynamic limit, at variance with standard updates and the exogenous update, the system orders slowly for the endogenous update. The approach to the absorbing state is characterized by a power-law decay of the density of interfaces, and the mean time to reach the absorbing state might not be well defined. The IET distributions resulting from both update schemes show power-law tails.

  1. The influence of parametric and external noise in act-and-wait control with delayed feedback.

    PubMed

    Wang, Jiaxing; Kuske, Rachel

    2017-11-01

    We apply several novel semi-analytic approaches for characterizing and calculating the effects of noise in a system with act-and-wait control. For concrete illustration, we apply these to a canonical balance model for an inverted pendulum to study the combined effect of delay and noise within the act-and-wait setting. While act-and-wait control facilitates strong stabilization through deadbeat control, a comparison of different models with continuous vs. discrete updating of the control strategy in the active period illustrates how delays combined with imprecise application of the control can seriously degrade performance. We give several novel analyses of a generalized act-and-wait control strategy, allowing flexibility in the updating of the control strategy, in order to understand the sensitivities to delays and random fluctuations. In both the deterministic and stochastic settings, we give analytical and semi-analytical results that characterize and quantify the dynamics of the system. These results include the size and shape of stability regions, densities for the critical eigenvalues that capture the rate of reaching the desired stable equilibrium, and amplification factors for sustained fluctuations in the context of external noise. They also provide the dependence of these quantities on the length of the delay and the active period. In particular, we see that the combined influence of delay, parametric error or external noise, and on-off control can qualitatively change the dynamics, thus reducing the robustness of the control strategy. We also capture the dependence on how frequently the control is updated, allowing an interpolation between continuous and frequent updating. In addition to providing insights for these specific models, the methods we propose are generalizable to other settings with noise, delay, and on-off control, where analytical techniques are otherwise scarce.

  2. Real-time projections of cholera outbreaks through data assimilation and rainfall forecasting

    NASA Astrophysics Data System (ADS)

    Pasetto, Damiano; Finger, Flavio; Rinaldo, Andrea; Bertuzzo, Enrico

    2017-10-01

    Although treatment for cholera is well-known and cheap, outbreaks in epidemic regions still exact high death tolls mostly due to the unpreparedness of health care infrastructures to face unforeseen emergencies. In this context, mathematical models for the prediction of the evolution of an ongoing outbreak are of paramount importance. Here, we test a real-time forecasting framework that readily integrates new information as soon as available and periodically issues an updated forecast. The spread of cholera is modeled by a spatially-explicit scheme that accounts for the dynamics of susceptible, infected and recovered individuals hosted in different local communities connected through hydrologic and human mobility networks. The framework presents two major innovations for cholera modeling: the use of a data assimilation technique, specifically an ensemble Kalman filter, to update both state variables and parameters based on the observations, and the use of rainfall forecasts to force the model. The exercise of simulating the state of the system and the predictive capabilities of the novel tools, set at the initial phase of the 2010 Haitian cholera outbreak using only information that was available at that time, serves as a benchmark. Our results suggest that the assimilation procedure with the sequential update of the parameters outperforms calibration schemes based on Markov chain Monte Carlo. Moreover, in a forecasting mode the model usefully predicts the spatial incidence of cholera at least one month ahead. The performance decreases for longer time horizons yet allowing sufficient time to plan for deployment of medical supplies and staff, and to evaluate alternative strategies of emergency management.

  3. Automatic determination of fault effects on aircraft functionality

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan

    1989-01-01

    The problem of determining the behavior of physical systems subsequent to the occurrence of malfunctions is discussed. It is established that while it was reasonable to assume that the most important fault behavior modes of primitive components and simple subsystems could be known and predicted, interactions within composite systems reached levels of complexity that precluded the use of traditional rule-based expert system techniques. Reasoning from first principles, i.e., on the basis of causal models of the physical system, was required. The first question that arises is, of course, how the causal information required for such reasoning should be represented. The bond graphs presented here occupy a position intermediate between qualitative and quantitative models, allowing the automatic derivation of Kuipers-like qualitative constraint models as well as state equations. Their most salient feature, however, is that entities corresponding to components and interactions in the physical system are explicitly represented in the bond graph model, thus permitting systematic model updates to reflect malfunctions. Researchers show how this is done, as well as presenting a number of techniques for obtaining qualitative information from the state equations derivable from bond graph models. One insight is the fact that one of the most important advantages of the bond graph ontology is the highly systematic approach to model construction it imposes on the modeler, who is forced to classify the relevant physical entities into a small number of categories, and to look for two highly specific types of interactions among them. The systematic nature of bond graph model construction facilitates the process to the point where the guidelines are sufficiently specific to be followed by modelers who are not domain experts. As a result, models of a given system constructed by different modelers will have extensive similarities. Researchers conclude by pointing out that the ease of updating bond graph models to reflect malfunctions is a manifestation of the systematic nature of bond graph construction, and the regularity of the relationship between bond graph models and physical reality.

  4. Forum for Injection Technique and Therapy Expert Recommendations, India: The Indian Recommendations for Best Practice in Insulin Injection Technique, 2017

    PubMed Central

    Tandon, Nikhil; Kalra, Sanjay; Balhara, Yatan Pal Singh; Baruah, Manash P.; Chadha, Manoj; Chandalia, Hemraj B.; Prasanna Kumar, K. M.; Madhu, S. V.; Mithal, Ambrish; Sahay, Rakesh; Shukla, Rishi; Sundaram, Annamalai; Unnikrishnan, Ambika G.; Saboo, Banshi; Gupta, Vandita; Chowdhury, Subhankar; Kesavadev, Jothydev; Wangnoo, Subhash K.

    2017-01-01

    Health-care professionals in India frequently manage injection or infusion therapies in persons with diabetes (PWD). Patients taking insulin should know the importance of proper needle size, the correct injection process, complication avoidance, and all other aspects of injection technique from the first visit onward. To assist health-care practitioners in their clinical practice, the Forum for Injection Technique and Therapy Expert Recommendations, India, has updated its practical advice to provide more comprehensive, evidence-based best-practice information. Learning these updated recommendations and translating them into clinical practice should lead to effective therapies, improved outcomes, and lower costs for PWD. PMID:28670547

  5. A Probabilistic Approach to Model Update

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.

    2001-01-01

    Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem, and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while predictions of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.

  6. Comparisons of Predictions of the XB-70-1 Longitudinal Stability and Control Derivatives with Flight Results for Six Flight Conditions

    NASA Technical Reports Server (NTRS)

    Wolowicz, C. H.; Yancey, R. B.

    1973-01-01

    Preliminary correlations of flight-determined and predicted stability and control characteristics of the XB-70-1 reported in NASA TN D-4578 were subject to uncertainties in several areas, which necessitated a review of prediction techniques, particularly for the longitudinal characteristics. Reevaluation and updating of the original predictions, including aeroelastic corrections, for six specific flight-test conditions resulted in improved correlations of static pitch stability with flight data. The original predictions for the pitch-damping derivative, on the other hand, showed better correlation with flight data than the updated predictions. It appears that additional study is required in the application of aeroelastic corrections to rigid-model wind-tunnel data and in the theoretical determination of dynamic derivatives for this class of aircraft.

  7. A Systems Approach to Costing in the Blood Bank

    PubMed Central

    Delon, Gerald L.; Smalley, Harold E.

    1969-01-01

    A macroscopic approach to departmental cost finding is combined with a microscopic approach to the weighting of laboratory tests in a mathematical model which, when incorporated into a relative unit value format, yields unit costs for such tests under a wide variety of operational conditions. The task of updating such costs to reflect changing conditions can be facilitated by a computer program incorporating the capability of pricing the various tests to achieve any desired profit or loss or to break even. Among other potential uses of such a technique, the effects on unit cost per test caused by increasing or decreasing the number of technicians or the volume of tests can be systematically examined, and pricing can be updated each year as hospital costs change. PMID:5799486

  8. Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacFarlane, Joseph J.; Golovkin, I. E.; Woodruff, P. R.

    2009-08-07

    This Final Report summarizes work performed under DOE STTR Phase II Grant No. DE-FG02-05ER86258 during the project period from August 2006 to August 2009. The project, "Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments," was led by Prism Computational Sciences (Madison, WI), and involved collaboration with subcontractors University of Nevada-Reno and Voss Scientific (Albuquerque, NM). In this project, we have:

    - Developed and implemented a multi-dimensional, multi-frequency radiation transport model in the LSP hybrid fluid-PIC (particle-in-cell) code [1,2].
    - Updated the LSP code to support the use of accurate equation-of-state (EOS) tables generated by Prism's PROPACEOS [3] code to compute more accurate temperatures in high energy density physics (HEDP) plasmas.
    - Updated LSP to support the use of Prism's multi-frequency opacity tables.
    - Generated equation of state and opacity data for LSP simulations for several materials being used in plasma jet experimental studies.
    - Developed and implemented parallel processing techniques for the radiation physics algorithms in LSP.
    - Benchmarked the new radiation transport and radiation physics algorithms in LSP and compared simulation results with analytic solutions and results from numerical radiation-hydrodynamics calculations.
    - Performed simulations using Prism radiation physics codes to address issues related to radiative cooling and ionization dynamics in plasma jet experiments.
    - Performed simulations to study the effects of radiation transport and radiation losses due to electrode contaminants in plasma jet experiments.
    - Updated the LSP code to generate output using NetCDF to provide a better, more flexible interface to SPECT3D [4] for post-processing LSP output.
    - Updated the SPECT3D code to better support the post-processing of large-scale 2-D and 3-D datasets generated by simulation codes such as LSP.
    - Updated atomic physics modeling to provide more comprehensive and accurate atomic databases that feed into the radiation physics modeling (spectral simulations and opacity tables).
    - Developed polarization spectroscopy modeling techniques suitable for diagnosing energetic particle characteristics in HEDP experiments.

    A description of these items is provided in this report. The above efforts lay the groundwork for utilizing the LSP and SPECT3D codes in providing simulation support for DOE-sponsored HEDP experiments, such as plasma jet and fast ignition physics experiments. We believe that, taken together, the LSP and SPECT3D codes have unique capabilities for advancing our understanding of the physics of these HEDP plasmas. Based on conversations early in this project with our DOE program manager, Dr. Francis Thio, our efforts emphasized developing radiation physics and atomic modeling capabilities that can be utilized in the LSP PIC code, and performing radiation physics studies for plasma jets. A relatively minor component focused on the development of methods to diagnose energetic particle characteristics in short-pulse laser experiments related to fast ignition physics. The period of performance for the grant was extended by one year, to August 2009, through a no-cost extension, at the request of subcontractor University of Nevada-Reno.

  9. Nonlinear finite element model updating for damage identification of civil structures using batch Bayesian estimation

    NASA Astrophysics Data System (ADS)

    Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.

    2017-02-01

    This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time history of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to estimate jointly time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic structural FE models of a bridge pier and a moment resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or way-out initial estimates of the model parameters. Furthermore, the detrimental effects of the input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.

  10. Identification of cracks in thick beams with a cracked beam element model

    NASA Astrophysics Data System (ADS)

    Hou, Chuanchuan; Lu, Yong

    2016-12-01

    The effect of a crack on the vibration of a beam is a classical problem, and various models have been proposed, ranging from the basic stiffness reduction method to more sophisticated models formulated from the additional flexibility due to a crack. However, in damage identification and finite element model updating applications, it is still common practice to employ a simple stiffness reduction factor to represent a crack in the identification process, whereas the use of a more realistic crack model is rather limited. In this paper, the issues with the simple stiffness reduction method, particularly concerning thick beams, are highlighted, along with a review of several other crack models. A robust finite element model updating procedure is then presented for the detection of cracks in beams. The description of the crack parameters is based on the cracked beam flexibility formulated by means of fracture mechanics; it takes into consideration shear deformation and the coupling between translational and longitudinal vibrations, and is thus particularly suitable for thick beams. The identification procedure employs a global searching technique using Genetic Algorithms, with no restriction on the location, severity or number of cracks to be identified. The procedure is verified to yield satisfactory identification for practically any configuration of cracks in a beam.
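
    In the same spirit, a global search for crack parameters can be sketched with SciPy's differential_evolution standing in for the paper's Genetic Algorithm; the "cracked beam" frequency model below is a toy sensitivity rule, not the fracture mechanics formulation:

```python
# Illustrative global-search identification of crack location and depth.
# differential_evolution is a stand-in for the paper's Genetic Algorithm;
# the frequency model is a crude toy, not the cracked-beam flexibility model.
import numpy as np
from scipy.optimize import differential_evolution

f_intact = np.array([24.0, 66.0, 129.0])   # toy intact-beam frequencies (Hz)

def cracked_freqs(loc, depth):
    """Hypothetical sensitivity: each frequency drops with crack depth,
    weighted by a sin^2 proxy for mode shape curvature at the location."""
    modes = np.arange(1, 4)
    drop = depth**2 * np.sin(np.pi * modes * loc) ** 2
    return f_intact * (1.0 - 0.2 * drop)

f_meas = cracked_freqs(0.35, 0.5)          # "measured" data with known truth

def objective(x):
    loc, depth = x
    return np.sum((cracked_freqs(loc, depth) - f_meas) ** 2)

# Note: this toy model cannot distinguish mirrored locations (loc vs 1-loc),
# a known non-uniqueness that richer crack models and data help resolve.
res = differential_evolution(objective, bounds=[(0.05, 0.95), (0.0, 0.8)],
                             seed=0, tol=1e-10)
print(res.x)                                # recovered crack location, depth
```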

  11. A self-cognizant dynamic system approach for prognostics and health management

    NASA Astrophysics Data System (ADS)

    Bai, Guangxing; Wang, Pingfeng; Hu, Chao

    2015-03-01

    Prognostics and health management (PHM) is an emerging engineering discipline that diagnoses and predicts how and when a system will degrade in performance and lose partial or whole functionality. Due to the complexity and invisibility of the rules and states of most dynamic systems, developing an effective approach to tracking evolving system states is a major challenge. This paper presents a new self-cognizant dynamic system (SCDS) approach that incorporates artificial intelligence into dynamic system modeling for PHM. A feed-forward neural network (FFNN) is selected to approximate the complex system response, a challenging task in general due to inaccessible system physics. The trained FFNN model is then embedded into a dual extended Kalman filter algorithm to track system dynamics. A recursive computation technique for updating the FFNN model using online measurements is also derived. To validate the proposed SCDS approach, a battery dynamic system is considered as an experimental application. After modeling the battery system with an FFNN model and a state-space model, the state-of-charge (SoC) and state-of-health (SoH) are estimated by updating the FFNN model using the proposed approach. Experimental results suggest that the proposed approach improves the efficiency and accuracy of battery health management.

  12. TOMS and SBUV Data: Comparison to 3D Chemical-Transport Model Results

    NASA Technical Reports Server (NTRS)

    Stolarski, Richard S.; Douglass, Anne R.; Steenrod, Steve; Frith, Stacey

    2003-01-01

    We have updated our merged ozone data (MOD) set using the TOMS data from the new version 8 algorithm. We then analyzed these data for contributions from solar cycle, volcanoes, QBO, and halogens using a standard statistical time series model. We have recently completed a hindcast run of our 3D chemical-transport model for the same years. This model uses off-line winds from the finite-volume GCM, a full stratospheric photochemistry package, and time-varying forcing due to halogens, solar uv, and volcanic aerosols. We will report on a parallel analysis of these model results using the same statistical time series technique as used for the MOD data.
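
    The statistical time series model referred to above amounts to a multiple regression of the ozone record on proxy terms. A hedged sketch on synthetic data follows (the proxies and coefficients are idealized placeholders, not the actual MOD inputs):

```python
# Hedged sketch of a statistical time series analysis: ordinary least squares
# regression of a synthetic ozone anomaly series on proxy terms for trend,
# solar cycle, QBO and volcanic aerosol.
import numpy as np

rng = np.random.default_rng(6)
months = np.arange(360)                           # 30 years of monthly data
trend = -0.02 * months                            # halogen-like linear term
solar = 0.5 * np.sin(2 * np.pi * months / 132)    # ~11-year solar proxy
qbo = 0.3 * np.sin(2 * np.pi * months / 28)       # ~28-month QBO proxy
volcanic = np.where((months > 100) & (months < 130), -1.0, 0.0)
ozone = trend + solar + qbo + volcanic + 0.2 * rng.standard_normal(360)

# Design matrix: constant, trend, solar, QBO, volcanic proxies.
X = np.column_stack([np.ones_like(months, dtype=float), months,
                     solar / 0.5, qbo / 0.3, volcanic])
coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)
print(dict(zip(["const", "trend", "solar", "qbo", "volcanic"], coef.round(3))))
```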

  13. Anatomy and Pathophysiology of Spinal Cord Injury Associated With Regional Anesthesia and Pain Medicine: 2015 Update.

    PubMed

    Neal, Joseph M; Kopp, Sandra L; Pasternak, Jeffrey J; Lanier, William L; Rathmell, James P

    2015-01-01

    In March 2012, the American Society of Regional Anesthesia and Pain Medicine convened its second Practice Advisory on Neurological Complications in Regional Anesthesia and Pain Medicine. This update is based on the proceedings of that conference and relevant information published since its conclusion. This article updates previously described information on the pathophysiology of spinal cord injury and adds new material on spinal stenosis, blood pressure control during neuraxial blockade, neuraxial injury subsequent to transforaminal procedures, cauda equina syndrome/local anesthetic neurotoxicity/arachnoiditis, and performing regional anesthetic or pain medicine procedures in patients concomitantly receiving general anesthesia or deep sedation. Recommendations are based on extensive review of research on humans or employing animal models, case reports, pathophysiology research, and expert opinion. The pathophysiology of spinal cord injury associated with regional anesthetic techniques is reviewed in depth, including that related to mechanical trauma from direct needle/catheter injury or mass lesions, spinal cord ischemia or vascular injury from direct needle/catheter trauma, and neurotoxicity from local anesthetics, adjuvants, or antiseptics. Specific recommendations are offered that may reduce the likelihood of spinal cord injury associated with regional anesthetic or interventional pain medicine techniques. The practice advisory's recommendations may, in select cases, reduce the likelihood of injury. However, many of the described injuries are neither predictable nor preventable based on our current state of knowledge. Since publication of initial recommendations in 2008, new information has enhanced our understanding of 5 specific entities: spinal stenosis, blood pressure control during neuraxial anesthesia, neuraxial injury subsequent to transforaminal techniques, cauda equina syndrome/local anesthetic neurotoxicity/arachnoiditis, and performing regional anesthetic or pain procedures in patients concomitantly receiving general anesthesia or deep sedation.

  14. Data assimilation for groundwater flow modelling using Unbiased Ensemble Square Root Filter: Case study in Guantao, North China Plain

    NASA Astrophysics Data System (ADS)

    Li, N.; Kinzelbach, W.; Li, H.; Li, W.; Chen, F.; Wang, L.

    2017-12-01

    Data assimilation techniques are widely used in hydrology to improve the reliability of hydrological models and to reduce model predictive uncertainties, providing critical information for decision makers in water resources management. This study evaluates a data assimilation system for the Guantao groundwater flow model, coupled with a one-dimensional soil column simulation (Hydrus 1D), that uses an Unbiased Ensemble Square Root Filter (UnEnSRF), a variant of the Ensemble Kalman Filter (EnKF), to update parameters and states, separately or simultaneously. To simplify the coupling between the unsaturated and saturated zones, a linear relationship obtained by analyzing inputs to and outputs from Hydrus 1D is applied in the data assimilation process. Unlike the EnKF, the UnEnSRF updates the parameter ensemble mean and the ensemble perturbations separately. To keep the ensemble filter working well during the data assimilation, two factors are introduced. One, a damping factor, dampens the update amplitude of the posterior ensemble mean to avoid unrealistic values. The other, an inflation factor, relaxes the posterior ensemble perturbations toward the prior to avoid filter inbreeding problems. The sensitivities of the two factors are studied and their favorable values for the Guantao model determined, along with an appropriate observation error and ensemble size. This study demonstrated that assimilating both model parameters and states gives a smaller model prediction error but larger uncertainty, whereas assimilating only model states gives a smaller predictive uncertainty but a larger model prediction error. Data assimilation in a groundwater flow model improves model prediction and at the same time makes the model converge to the true parameters, providing a sound basis for real-time modelling or real-time control strategies in groundwater resources management.
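
    A minimal sketch of an ensemble square-root update with the two factors described above is given below, assuming a single scalar observation and a linear observation operator; the damping factor scales the mean update and a relaxation-style inflation pulls the posterior perturbations back toward the prior. The function name and all values are illustrative assumptions, not the Guantao implementation.

    ```python
    import numpy as np

    def unensrf_update(ens, obs, obs_err, H, damping=0.5, inflation=0.9):
        """One ensemble square-root update for a single scalar observation.

        ens: (n_state, n_ens) ensemble; H: (n_state,) linear obs operator.
        damping scales the mean update; inflation relaxes the posterior
        perturbations back toward the prior perturbations.
        """
        n_ens = ens.shape[1]
        mean = ens.mean(axis=1)
        pert = ens - mean[:, None]                 # prior perturbations

        hx = H @ ens                               # predicted obs per member
        hx_mean = hx.mean()
        hx_pert = hx - hx_mean

        pyy = hx_pert @ hx_pert / (n_ens - 1) + obs_err**2
        pxy = pert @ hx_pert / (n_ens - 1)
        gain = pxy / pyy

        # Damped update of the ensemble mean (avoids unphysical jumps).
        mean_a = mean + damping * gain * (obs - hx_mean)

        # Square-root scaling of perturbations, then relaxation to prior.
        alpha = 1.0 / (1.0 + obs_err / np.sqrt(pyy))
        pert_a = pert - alpha * np.outer(gain, hx_pert)
        pert_a = inflation * pert + (1.0 - inflation) * pert_a

        return mean_a[:, None] + pert_a

    # Toy usage: 40-member ensemble of a 3-component state, observing x[0].
    rng = np.random.default_rng(1)
    ens = rng.normal(loc=[1.0, 2.0, 0.5], scale=0.3, size=(40, 3)).T
    H = np.array([1.0, 0.0, 0.0])
    ens_a = unensrf_update(ens, obs=1.4, obs_err=0.1, H=H)
    print(ens_a.mean(axis=1))
    ```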

  15. Combining Relevance Vector Machines and exponential regression for bearing residual life estimation

    NASA Astrophysics Data System (ADS)

    Di Maio, Francesco; Tsui, Kwok Leung; Zio, Enrico

    2012-08-01

    In this paper we present a new procedure for estimating the bearing Residual Useful Life (RUL) by combining data-driven and model-based techniques: we resort to (i) Relevance Vector Machines (RVMs) for selecting a low number of significant basis functions, called Relevant Vectors (RVs), and (ii) exponential regression to compute and continuously update residual life estimates. The combination of these techniques is developed with reference to partially degraded thrust ball bearings and tested on real-world vibration-based degradation data. On the case study considered, the proposed procedure outperforms other model-based methods, with the added value of an adequate representation of the uncertainty associated with the estimates and a quantification of the credibility of the results by the Prognostic Horizon (PH) metric.
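
    The exponential-regression half of the procedure can be sketched as below: a log-linear least-squares fit of the degradation feature, re-run as each new measurement arrives, with the RUL read off where the fitted curve crosses a failure threshold. The RVM selection step is omitted, and all data and names are synthetic placeholders.

    ```python
    import numpy as np

    def fit_exponential(t, y):
        """Least-squares fit of y = a * exp(b * t) via log-linear regression."""
        b, log_a = np.polyfit(t, np.log(y), 1)
        return np.exp(log_a), b

    def residual_life(t, y, threshold):
        """RUL estimate: time until the fitted curve crosses the threshold."""
        a, b = fit_exponential(t, y)
        t_fail = np.log(threshold / a) / b
        return max(t_fail - t[-1], 0.0)

    # Illustrative degradation history (e.g., a vibration feature growing
    # with operating hours); values are synthetic.
    rng = np.random.default_rng(1)
    hours = np.arange(0, 120, 10.0)
    feature = 0.1 * np.exp(0.02 * hours) * (1 + 0.05 * rng.normal(size=hours.size))

    # The estimate would be refreshed each time a new measurement arrives.
    print(f"estimated RUL: {residual_life(hours, feature, threshold=2.0):.1f} h")
    ```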

  16. Hybrid Kalman Filter: A New Approach for Aircraft Engine In-Flight Diagnostics

    NASA Technical Reports Server (NTRS)

    Kobayashi, Takahisa; Simon, Donald L.

    2006-01-01

    In this paper, a uniquely structured Kalman filter is developed for its application to in-flight diagnostics of aircraft gas turbine engines. The Kalman filter is a hybrid of a nonlinear on-board engine model (OBEM) and piecewise linear models. The utilization of the nonlinear OBEM allows the reference health baseline of the in-flight diagnostic system to be updated to the degraded health condition of the engines through a relatively simple process. Through this health baseline update, the effectiveness of the in-flight diagnostic algorithm can be maintained as the health of the engine degrades over time. Another significant aspect of the hybrid Kalman filter methodology is its capability to take advantage of conventional linear and nonlinear Kalman filter approaches. Based on the hybrid Kalman filter, an in-flight fault detection system is developed, and its diagnostic capability is evaluated in a simulation environment. Through the evaluation, the suitability of the hybrid Kalman filter technique for aircraft engine in-flight diagnostics is demonstrated.

  17. Static and Dynamic Model Update of an Inflatable/Rigidizable Torus Structure

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2006-01-01

    The present work addresses the development of an experimental and computational procedure for validating finite element models. A torus structure, part of an inflatable/rigidizable Hexapod, is used to demonstrate the approach. Because of fabrication, materials, and geometric uncertainties, a statistical approach combined with optimization is used to modify key model parameters. Static test results are used to update stiffness parameters and dynamic test results are used to update the mass distribution. Updated parameters are computed using gradient and non-gradient based optimization algorithms. Results show significant improvements in model predictions after parameters are updated. Lessons learned in the areas of test procedures, modeling approaches, and uncertainty quantification are presented.

  18. Fundamentals and techniques of nonimaging optics for solar energy concentration

    NASA Astrophysics Data System (ADS)

    Winston, R.; Gallagher, J. J.

    1980-05-01

    The properties of a variety of new and previously known nonimaging optical configurations were investigated. A thermodynamic model was developed which explains quantitatively the enhancement of the effective absorptance of gray-body receivers through cavity effects. The classic method of Liu and Jordan, which allows one to predict diffuse sunlight levels through correlation with the total and direct fractions, was revised, updated, and applied to predict the performance of nonimaging solar collectors. The conceptual design of an optimized solar collector which integrates the techniques of nonimaging concentration with evacuated tube collector technology was carried out and is presently the basis for a separately funded hardware development project.

  19. Recommender engine for continuous-time quantum Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Huang, Li; Yang, Yi-feng; Wang, Lei

    2017-03-01

    Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.

  20. Low cost management of replicated data in fault-tolerant distributed systems

    NASA Technical Reports Server (NTRS)

    Joseph, Thomas A.; Birman, Kenneth P.

    1990-01-01

    Many distributed systems replicate data for fault tolerance or availability. In such systems, a logical update on a data item results in a physical update on a number of copies. The synchronization and communication required to keep the copies of replicated data consistent introduce a delay when operations are performed. A technique is described that relaxes the usual degree of synchronization, permitting replicated data items to be updated concurrently with other operations, while at the same time ensuring that correctness is not violated. The additional concurrency thus obtained results in better response time when performing operations on replicated data. How this technique performs in conjunction with a roll-back and a roll-forward failure recovery mechanism is also discussed.

  1. Finite element model correlation of a composite UAV wing using modal frequencies

    NASA Astrophysics Data System (ADS)

    Oliver, Joseph A.; Kosmatka, John B.; Hemez, François M.; Farrar, Charles R.

    2007-04-01

    The current work details the implementation of a meta-model based correlation technique on a composite UAV wing test piece and associated finite element (FE) model. This method involves training polynomial models to emulate the FE input-output behavior and then using numerical optimization to produce a set of correlated parameters which can be returned to the FE model. After discussions about the practical implementation, the technique is validated on a composite plate structure and then applied to the UAV wing structure, where it is furthermore compared to a more traditional Newton-Raphson technique which iteratively uses first-order Taylor-series sensitivity. The experimental test-piece wing comprises two graphite/epoxy prepreg and Nomex honeycomb co-cured skins and two prepreg spars bonded together in a secondary process. MSC.Nastran FE models of the four structural components are correlated independently, using modal frequencies as correlation features, before being joined together into the assembled structure and compared to experimentally measured frequencies from the assembled wing in a cantilever configuration. Results show that significant improvements can be made to the assembled model fidelity, with the meta-model procedure producing slightly superior results to Newton-Raphson iteration. Final evaluation of component correlation using the assembled wing comparison showed worse results for each correlation technique, with the meta-model technique worse overall. This can most likely be attributed to difficulty in correlating the open-section spars; however, there is also some question about non-unique update-variable combinations in the current configuration, which led correlation away from physically probable values.
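
    The following sketch illustrates the meta-model idea under stated assumptions: a quadratic polynomial surrogate is trained on a small design of experiments over two normalized stiffness parameters, then optimized so its predicted modal frequencies match "test" frequencies. The fe_model stand-in and every numeric value are hypothetical, not the paper's wing model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def fe_model(theta):
        """Placeholder for an expensive FE run: returns 3 modal frequencies."""
        e_skin, e_spar = theta
        return np.array([12.0 * np.sqrt(e_skin),
                         31.0 * np.sqrt(0.6 * e_skin + 0.4 * e_spar),
                         55.0 * np.sqrt(e_spar)])

    def features(theta):
        """Quadratic polynomial basis in the two update parameters."""
        x1, x2 = theta
        return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

    # Train the surrogate on a small design of experiments.
    rng = np.random.default_rng(2)
    doe = rng.uniform(0.7, 1.3, size=(30, 2))           # normalized stiffnesses
    Y = np.array([fe_model(th) for th in doe])           # (30, 3) frequencies
    A = np.array([features(th) for th in doe])
    coeffs, *_ = np.linalg.lstsq(A, Y, rcond=None)       # (6, 3) coefficients

    surrogate = lambda th: features(th) @ coeffs

    # Correlate: find parameters whose surrogate frequencies match "test" data.
    f_test = fe_model(np.array([1.1, 0.9]))              # pretend measured modes
    obj = lambda th: np.sum(((surrogate(th) - f_test) / f_test) ** 2)
    res = minimize(obj, x0=np.array([1.0, 1.0]), bounds=[(0.7, 1.3)] * 2)
    print("correlated parameters:", np.round(res.x, 3))
    ```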

  2. Model documentation report: Transportation sector model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-03-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.

  3. Experimental liver fibrosis research: update on animal models, legal issues and translational aspects

    PubMed Central

    2013-01-01

    Liver fibrosis is defined as excessive extracellular matrix deposition and is based on complex interactions between matrix-producing hepatic stellate cells and an abundance of liver-resident and infiltrating cells. Investigation of these processes requires in vitro and in vivo experimental work in animals. However, the use of animals in translational research will be increasingly challenged, at least in countries of the European Union, because of the adoption of new animal welfare rules in 2013. These rules will create an urgent need for optimized standard operating procedures regarding animal experimentation and improved international communication in the liver fibrosis community. This review gives an update on current animal models, techniques and underlying pathomechanisms with the aim of fostering a critical discussion of the limitations and potential of up-to-date animal experimentation. We discuss potential complications in experimental liver fibrosis and provide examples of how the findings of studies in which these models are used can be translated to human disease and therapy. In this review, we want to motivate the international community to design more standardized animal models which might help to address the legally requested replacement, refinement and reduction of animals in fibrosis research. PMID:24274743

  4. Update of guidelines for surgical endodontics - the position after ten years.

    PubMed

    Evans, G E; Bishop, K; Renton, T

    2012-05-25

    This is the first of a series of articles, which will summarise new or updated clinical guidelines produced by the Clinical Standards Committee of the Faculty of Dental Surgery, Royal College of Surgeons of England (FDSRCS). Important developments for the dental profession from a number of clinical guidelines will be presented, commencing with the Guidelines for surgical endodontics. The impact of recent evidence relating to the outcome of surgical endodontics and techniques such as cone beam computed tomography and microsurgical techniques are considered.

  5. The evolution and discharge of electric fields within a thunderstorm

    NASA Technical Reports Server (NTRS)

    Hager, William W.; Nisbet, John S.; Kasha, John R.

    1989-01-01

    An analysis of the present three-dimensional thunderstorm electrical model and its finite-difference approximations indicates unconditional stability for the discretization that results from approximating the spatial derivatives by a box-scheme-like method and the temporal derivative by either a backward-difference or Crank-Nicolson scheme. Lightning propagation is treated through numerical techniques based on the inverse-matrix modification formula and Cholesky updates. The model is applied to a storm observed at the Kennedy Space Center in 1978, and numerical comparisons are conducted between the model and the theoretical results obtained by Wilson (1920) and Holzer and Saxon (1952).
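
    The "inverse-matrix modification formula" invoked above is commonly known as the Sherman-Morrison formula; the generic sketch below shows the O(n^2) rank-one correction to an existing inverse that makes repeated conductivity updates along a lightning channel affordable. The matrix and vectors are random test data, unrelated to the thunderstorm code itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 6
    A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test matrix
    A_inv = np.linalg.inv(A)
    u = rng.normal(size=n)
    v = rng.normal(size=n)

    # Sherman-Morrison:
    # (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u)
    Au = A_inv @ u
    vA = v @ A_inv
    updated_inv = A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

    # Agreement with a direct re-inversion (error should be ~1e-12).
    print(np.max(np.abs(updated_inv - np.linalg.inv(A + np.outer(u, v)))))
    ```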

  6. Off-Highway Gasoline Consumption Estimation Models Used in the Federal Highway Administration Attribution Process: 2008 Updates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, Ho-Ling; Davis, Stacy Cagle

    2009-12-01

    This report is designed to document the analysis process and estimation models currently used by the Federal Highway Administration (FHWA) to estimate off-highway gasoline consumption and public-sector fuel consumption. An overview of the entire FHWA attribution process is provided along with specifics related to the latest update (2008) of the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model. The Off-Highway Gasoline Use Model is made up of five individual modules, one for each of the off-highway categories: agricultural, industrial and commercial, construction, aviation, and marine. This 2008 update of the off-highway models was the second major update (the first was conducted during 2002-2003) since the models were originally developed in the mid-1990s. The agricultural model methodology, specifically, underwent a significant revision because of changes in data availability since 2003. Some revision to the model was necessary due to the removal of certain data elements used in the original estimation method. The revised agricultural model also made use of some newly available information published by the data source agency in recent years. The other model methodologies were not drastically changed, though many data elements were updated to improve the accuracy of these models. Note that components in the Public Use of Gasoline Model were not updated in 2008. A major challenge in updating the estimation methods applied by the public-use model is that they would have to rely on significant new data collection efforts. In addition, due to resource limitations, several components of the models (both off-highway and public-use models) that utilized regression modeling approaches were not recalibrated under the 2008 study. An investigation of the Environmental Protection Agency's NONROAD2005 model was also carried out under the 2008 model update. Results generated from the NONROAD2005 model were analyzed, examined, and compared, to the extent possible, to the current FHWA estimates on the overall totals. Because the NONROAD2005 model was designed for emission estimation purposes (i.e., not for measuring fuel consumption), it covers different equipment populations from those the FHWA models were based on. Thus, a direct comparison generally was not possible in most sectors. As a result, NONROAD2005 data were not used in the 2008 update of the FHWA off-highway models. The quality of the fuel use estimates directly affects the data quality of many tables published in Highway Statistics. Although updates have been made to the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model, some challenges remain due to aging model equations and the discontinuation of data sources.

  7. Prediction-error variance in Bayesian model updating: a comparative study

    NASA Astrophysics Data System (ADS)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum-information-entropy probability model of the prediction error plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. The treatment of these variances is therefore critical for robustness in updating the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of these different strategies on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are represented by three FE models: a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on model updating performance is also examined. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
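
    Treatment 3) can be sketched as a log-posterior in which the prediction-error variance is sampled alongside the stiffness parameters; a toy stand-in model is used below, and a TMCMC or Metropolis sampler (not shown) would explore this function. All names and values are illustrative assumptions.

    ```python
    import numpy as np

    def log_likelihood(theta, sigma2, y_meas, model):
        """Gaussian likelihood from stochastic embedding (max-entropy PDF)."""
        resid = y_meas - model(theta)
        n = resid.size
        return -0.5 * n * np.log(2 * np.pi * sigma2) - 0.5 * resid @ resid / sigma2

    def log_posterior(params, y_meas, model):
        *theta, log_sigma2 = params          # sample log(sigma^2) for positivity
        sigma2 = np.exp(log_sigma2)
        lp = 0.0                             # flat priors, for illustration only
        return lp + log_likelihood(np.asarray(theta), sigma2, y_meas, model)

    # Toy "structural model": modal frequencies from two stiffness parameters.
    model = lambda th: np.sqrt(np.array([th[0], th[0] + th[1], 3 * th[1]]))
    rng = np.random.default_rng(4)
    y_meas = model(np.array([4.0, 9.0])) * (1 + 0.01 * rng.normal(size=3))

    # A sampler would now explore log_posterior; here we just evaluate it
    # at two candidate points to show the interface.
    print(log_posterior([4.0, 9.0, np.log(1e-3)], y_meas, model))
    print(log_posterior([5.0, 8.0, np.log(1e-3)], y_meas, model))
    ```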

  8. The Site-Scale Saturated Zone Flow Model for Yucca Mountain

    NASA Astrophysics Data System (ADS)

    Al-Aziz, E.; James, S. C.; Arnold, B. W.; Zyvoloski, G. A.

    2006-12-01

    This presentation provides a reinterpreted conceptual model of the Yucca Mountain site-scale flow system, developed subject to all quality assurance procedures. The results are based on a numerical model of the site-scale saturated zone beneath Yucca Mountain, which is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. This effort started from the ground up with a revised and updated hydrogeologic framework model that incorporates the latest lithology data and an increased grid resolution that better resolves the hydrogeologic framework throughout the model domain. In addition, faults are much better represented using the 250 × 250 m grid spacing (compared to the previous model's 500 × 500 m spacing). Data collected since the previous model calibration effort have been included; they comprise all Nye County water-level data through Phase IV of their Early Warning Drilling Program. Target boundary fluxes are derived from the newest (2004) Death Valley Regional Flow System model from the U.S. Geological Survey. A consistent weighting scheme assigns importance to each measured water-level datum and to each boundary flux extracted from the regional model. The numerical model is calibrated by matching these weighted water-level measurements and boundary fluxes using parameter estimation techniques, along with more informal comparisons of the model to hydrologic and geochemical information. The model software (hydrologic simulation code FEHM v2.24 and parameter estimation software PEST v5.5) and model setup facilitate efficient calibration of multiple conceptual models. Analyses evaluate the impact of these updates and additional data on the modeled potentiometric surface and the flowpaths emanating from below the repository. After examining the heads and permeabilities obtained from the calibrated models, we present particle pathways from the proposed repository and compare them to those from the previous model calibration. Specific discharge at a point 5 km from the repository is also examined and found to be within acceptable uncertainty. The results show that the updated model yields a calibration with smaller residuals than the previous model revision while ensuring that flowpaths follow measured gradients and paths derived from hydrochemical analyses. This work was supported by the Yucca Mountain Site Characterization Office as part of the Civilian Radioactive Waste Management Program, which is managed by the U.S. Department of Energy, Yucca Mountain Site Characterization Project. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.

  9. Study of modeling aspects of long period fiber grating using three-layer fiber geometry

    NASA Astrophysics Data System (ADS)

    Singh, Amit

    2015-03-01

    The author studied and demonstrated the various modeling aspects of long period fiber grating (LPFG), such as the core effective index, cladding effective index, coupling coefficient, coupled mode theory, and transmission spectrum of the LPFG, using three-layer fiber geometry. Two different techniques are used for theoretical modeling of the long period fiber grating. The first technique was used by Vengsarkar et al., who described the phenomenon of long-period fiber gratings; the second was reported by Erdogan, who revealed the inaccuracies and shortcomings of the original method, thereby providing an accurate and updated alternative. The main difference between these two approaches lies in their fiber geometry. Vengsarkar et al. used a two-layer fiber geometry, which is simple but employs the weakly guided approximation, whereas Erdogan used a three-layer fiber geometry, which is complex but also the most accurate technique for theoretical study of the LPFG. The author further discussed the behavior of the transmission spectrum when altering different grating parameters, such as the grating length, ultraviolet (UV) induced index change, and grating period, to achieve the desired flexibility. The author simulated the various results with the help of MATLAB.

  10. An Updated Perspective of Single Event Gate Rupture and Single Event Burnout in Power MOSFETs

    NASA Astrophysics Data System (ADS)

    Titus, Jeffrey L.

    2013-06-01

    Studies over the past 25 years have shown that heavy ions can trigger catastrophic failure modes in power MOSFETs [e.g., single-event gate rupture (SEGR) and single-event burnout (SEB)]. In 1996, two papers were published in a special issue of the IEEE Transactions on Nuclear Science [Johnson, Palau, Dachs, Galloway and Schrimpf, “A Review of the Techniques Used for Modeling Single-Event Effects in Power MOSFETs,” IEEE Trans. Nucl. Sci., vol. 43, no. 2, pp. 546-560, Apr. 1996], [Titus and Wheatley, “Experimental Studies of Single-Event Gate Rupture and Burnout in Vertical Power MOSFETs,” IEEE Trans. Nucl. Sci., vol. 43, no. 2, pp. 533-545, Apr. 1996]. Those two papers continue to provide excellent information and references with regard to SEB and SEGR in vertical planar MOSFETs. This paper provides updated references and information, offers an updated perspective on SEB and SEGR in vertical planar MOSFETs, and points to references for other device types that exhibit SEB and SEGR effects.

  11. Assessing the performance of eight real-time updating models and procedures for the Brosna River

    NASA Astrophysics Data System (ADS)

    Goswami, M.; O'Connor, K. M.; Bhattarai, K. P.; Shamseldin, A. Y.

    2005-10-01

    The flow forecasting performance of eight updating models, incorporated in the Galway River Flow Modelling and Forecasting System (GFMFS), was assessed using daily data (rainfall, evaporation and discharge) of the Irish Brosna catchment (1207 km2), considering their one- to six-day lead-time discharge forecasts. The Perfect Forecast of Input over the Forecast Lead-time scenario was adopted, where required, in place of actual rainfall forecasts. The eight updating models were: (i) the standard linear Auto-Regressive (AR) model, applied to the forecast errors (residuals) of a simulation (non-updating) rainfall-runoff model; (ii) the Neural Network Updating (NNU) model, also using such residuals as input; (iii) the Linear Transfer Function (LTF) model, applied to the simulated and the recently observed discharges; (iv) the Non-linear Auto-Regressive eXogenous-Input Model (NARXM), also a neural network-type structure, but having wide options of using recently observed values of one or more of the three data series, together with non-updated simulated outflows, as inputs; (v) the Parametric Simple Linear Model (PSLM), of LTF type, using recent rainfall and observed discharge data; (vi) the Parametric Linear Perturbation Model (PLPM), also of LTF type, using recent rainfall and observed discharge data; (vii) n-AR, an AR model applied to the observed discharge series only, as a naïve updating model; and (viii) n-NARXM, a naïve form of the NARXM, using only the observed discharge data and excluding exogenous inputs. The five GFMFS simulation (non-updating) models used were the non-parametric and parametric forms of the Simple Linear Model and of the Linear Perturbation Model, the Linearly-Varying Gain Factor Model, the Artificial Neural Network Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) model. As the SMAR model performance was found to be the best among these models, in terms of the Nash-Sutcliffe R2 value, both in calibration and in verification, the simulated outflows of this model only were selected for the subsequent exercise of producing updated discharge forecasts. All eight forms of updating model were found capable of producing relatively good lead-1 (1-day-ahead) forecasts, with R2 values of almost 90% or above. However, for longer lead times, only three updating models, viz. NARXM, LTF, and NNU, were found to be suitable, with lead-6 R2 values of about 90% or higher. Graphical comparisons were made of the lead-time forecasts for the two largest floods, one in the calibration period and the other in the verification period.
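
    Updating model (i) admits a compact sketch: fit an AR model to the simulation residuals, iterate it ahead to the desired lead time, and add the predicted error back onto the simulation-mode forecast. The synthetic discharge series and AR order below are placeholders, not the Brosna data.

    ```python
    import numpy as np

    def fit_ar(errors, p=2):
        """Least-squares AR(p) coefficients for a residual series."""
        y = errors[p:]
        X = np.column_stack([errors[p - k:len(errors) - k]
                             for k in range(1, p + 1)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef

    def ar_forecast(coef, recent, lead):
        """Iterate the AR recursion 'lead' steps ahead of the last error."""
        hist = list(recent)                       # chronological, oldest first
        for _ in range(lead):
            hist.append(float(np.dot(coef, hist[-1::-1][:len(coef)])))
        return hist[-1]

    # Synthetic daily discharge: the "simulation" carries a periodic bias.
    rng = np.random.default_rng(5)
    t = np.arange(400)
    obs = 50 + 20 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, t.size)
    sim = obs + 5 * np.sin(2 * np.pi * t / 30)

    err = obs - sim
    coef = fit_ar(err[:350], p=2)
    lead = 3
    e_hat = ar_forecast(coef, err[348:350], lead)  # forecast error at lead 3
    updated = sim[349 + lead] + e_hat              # updated lead-3 forecast
    print(round(updated, 2), round(obs[349 + lead], 2))
    ```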

  12. An Online Risk Monitor System (ORMS) to Increase Safety and Security Levels in Industry

    NASA Astrophysics Data System (ADS)

    Zubair, M.; Rahman, Khalil Ur; Hassan, Mehmood Ul

    2013-12-01

    The main idea of this research is to develop an Online Risk Monitor System (ORMS) based on Living Probabilistic Safety Assessment (LPSA). The article highlights the essential features and functions of the ORMS. The basic models and modules, such as the Reliability Data Update Model (RDUM), running time update, redundant system unavailability update, Engineered Safety Features (ESF) unavailability update, and general system update, are described in this study. The ORMS not only provides quantitative analysis but also highlights qualitative aspects of risk measures. The ORMS is capable of automatically updating the online risk models and reliability parameters of equipment, and it can support the decision-making process of operators and managers in nuclear power plants.

  13. Integrated Medical Model Verification, Validation, and Credibility

    NASA Technical Reports Server (NTRS)

    Walton, Marlei; Kerstman, Eric; Foy, Millennia; Shah, Ronak; Saile, Lynn; Boley, Lynn; Butler, Doug; Myers, Jerry

    2014-01-01

    The Integrated Medical Model (IMM) was designed to forecast relative changes for a specified set of crew health and mission success risk metrics by using a probabilistic (stochastic process) model based on historical data, cohort data, and subject matter expert opinion. A probabilistic approach is taken since exact (deterministic) results would not appropriately reflect the uncertainty in the IMM inputs. Once the IMM was conceptualized, a plan was needed to rigorously assess input information, framework and code, and output results of the IMM, and ensure that end user requests and requirements were considered during all stages of model development and implementation. METHODS: In 2008, the IMM team developed a comprehensive verification and validation (VV) plan, which specified internal and external review criteria encompassing 1) verification of data and IMM structure to ensure proper implementation of the IMM, 2) several validation techniques to confirm that the simulation capability of the IMM appropriately represents occurrences and consequences of medical conditions during space missions, and 3) credibility processes to develop user confidence in the information derived from the IMM. When the NASA-STD-7009 (7009) was published, the IMM team updated their verification, validation, and credibility (VVC) project plan to meet 7009 requirements and include 7009 tools in reporting VVC status of the IMM. RESULTS: IMM VVC updates are compiled recurrently and include 7009 Compliance and Credibility matrices, IMM VV Plan status, and a synopsis of any changes or updates to the IMM during the reporting period. Reporting tools have evolved over the lifetime of the IMM project to better communicate VVC status. This has included refining original 7009 methodology with augmentation from the NASA-STD-7009 Guidance Document. End user requests and requirements are being satisfied as evidenced by ISS Program acceptance of IMM risk forecasts, transition to an operational model and simulation tool, and completion of service requests from a broad end user consortium including Operations, Science and Technology Planning, and Exploration Planning. CONCLUSIONS: The VVC approach established by the IMM project of combining the IMM VV Plan with 7009 requirements is comprehensive and includes the involvement of end users at every stage in IMM evolution. Methods and techniques used to quantify the VVC status of the IMM have not only received approval from the local NASA community but have also garnered recognition by other federal agencies seeking to develop similar guidelines in the medical modeling community.

  14. Planned updates and refinements to the central valley hydrologic model, with an emphasis on improving the simulation of land subsidence in the San Joaquin Valley

    USGS Publications Warehouse

    Faunt, C.C.; Hanson, R.T.; Martin, P.; Schmid, W.

    2011-01-01

    California's Central Valley has been one of the most productive agricultural regions in the world for more than 50 years. To better understand the groundwater availability in the valley, the U.S. Geological Survey (USGS) developed the Central Valley hydrologic model (CVHM). Because of recent water-level declines and renewed subsidence, the CVHM is being updated to better simulate the geohydrologic system. The CVHM updates and refinements can be grouped into two general categories: (1) model code changes and (2) data updates. The CVHM updates and refinements will require that the model be recalibrated. The updated CVHM will provide a detailed transient analysis of changes in groundwater availability and flow paths in relation to climatic variability, urbanization, stream flow, and changes in irrigated agricultural practices and crops. The updated CVHM is particularly focused on more accurately simulating the locations and magnitudes of land subsidence. The intent of the updated CVHM is to help scientists better understand the availability and sustainability of water resources and the interaction of groundwater levels with land subsidence. © 2011 ASCE.

  15. Planned updates and refinements to the Central Valley hydrologic model with an emphasis on improving the simulation of land subsidence in the San Joaquin Valley

    USGS Publications Warehouse

    Faunt, Claudia C.; Hanson, Randall T.; Martin, Peter; Schmid, Wolfgang

    2011-01-01

    California's Central Valley has been one of the most productive agricultural regions in the world for more than 50 years. To better understand the groundwater availability in the valley, the U.S. Geological Survey (USGS) developed the Central Valley hydrologic model (CVHM). Because of recent water-level declines and renewed subsidence, the CVHM is being updated to better simulate the geohydrologic system. The CVHM updates and refinements can be grouped into two general categories: (1) model code changes and (2) data updates. The CVHM updates and refinements will require that the model be recalibrated. The updated CVHM will provide a detailed transient analysis of changes in groundwater availability and flow paths in relation to climatic variability, urbanization, stream flow, and changes in irrigated agricultural practices and crops. The updated CVHM is particularly focused on more accurately simulating the locations and magnitudes of land subsidence. The intent of the updated CVHM is to help scientists better understand the availability and sustainability of water resources and the interaction of groundwater levels with land subsidence.

  16. Generating Inviscid and Viscous Fluid Flow Simulations over a Surface Using a Quasi-simultaneous Technique

    NASA Technical Reports Server (NTRS)

    Sturdza, Peter (Inventor); Martins-Rivas, Herve (Inventor); Suzuki, Yoshifumi (Inventor)

    2014-01-01

    A fluid-flow simulation over a computer-generated surface is generated using a quasi-simultaneous technique. The simulation includes a fluid-flow mesh of inviscid and boundary-layer fluid cells. An initial fluid property for an inviscid fluid cell is determined using an inviscid fluid simulation that does not simulate fluid viscous effects. An initial boundary-layer fluid property for a boundary-layer fluid cell is determined using the initial fluid property and a viscous fluid simulation that simulates fluid viscous effects. An updated boundary-layer fluid property is determined for the boundary-layer fluid cell using the initial fluid property, the initial boundary-layer fluid property, and an interaction law. The interaction law approximates the inviscid fluid simulation using a matrix of aerodynamic influence coefficients, computed using a two-dimensional surface panel technique, and a fluid-property vector. An updated fluid property is determined for the inviscid fluid cell using the updated boundary-layer fluid property.

  17. A constrained modulus reconstruction technique for breast cancer assessment.

    PubMed

    Samani, A; Bishop, J; Plewes, D B

    2001-09-01

    A reconstruction technique for the breast tissue elasticity modulus is described. This technique assumes that the geometry of normal and suspicious tissues is available from a contrast-enhanced magnetic resonance image. Furthermore, it is assumed that the modulus is constant throughout each tissue volume. The technique, which uses quasi-static strain data, is iterative, with each iteration involving modulus updating followed by stress calculation. Breast mechanical stimulation is assumed to be applied by two rigid compression plates. As a result, stress is calculated using the finite element method based on the well-controlled boundary conditions of the compression plates. Using the calculated stress and the measured strain, modulus updating is done element by element based on Hooke's law. Breast tissue modulus reconstruction using simulated data and phantom modulus reconstruction using experimental data indicate that the technique is robust.
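
    A minimal sketch of the iterative loop is shown below, with a trivial stand-in for the finite element stress solve (uniform uniaxial stress, so the loop converges immediately; with a real FE solver the stress depends on the modulus field and several iterations are needed). Everything here is an illustrative assumption, not the authors' code.

    ```python
    import numpy as np

    def fake_fe_stress(E, applied_stress=1.0):
        """Stand-in for the FE stress solve between the compression plates."""
        return np.full_like(E, applied_stress)

    def reconstruct_modulus(strain_meas, E0, n_iter=20):
        E = E0.copy()
        for _ in range(n_iter):
            stress = fake_fe_stress(E)           # stress from current moduli
            E = stress / strain_meas             # element-wise Hooke's law update
        return E

    true_E = np.array([10.0, 10.0, 40.0, 10.0])  # stiff inclusion (element 3)
    strain = 1.0 / true_E                        # measured strain, unit stress
    print(reconstruct_modulus(strain, E0=np.ones(4)))   # recovers true_E
    ```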

  18. Obtaining manufactured geometries of deep-drawn components through a model updating procedure using geometric shape parameters

    NASA Astrophysics Data System (ADS)

    Balla, Vamsi Krishna; Coox, Laurens; Deckers, Elke; Plyumers, Bert; Desmet, Wim; Marudachalam, Kannan

    2018-01-01

    The vibration response of a component or system can be predicted using the finite element method after ensuring that the numerical model represents the realistic behaviour of the actual system under study. One of the methods to build high-fidelity finite element models is a model updating procedure. In this work, a novel model updating method for deep-drawn components is demonstrated. Since the component is manufactured with a high draw ratio, significant deviations in both profile and thickness distributions occurred in the manufacturing process. Conventional model updating, involving Young's modulus, density and damping ratios, does not lead to a satisfactory match between simulated and experimental results. Hence a new model updating process is proposed in which geometric shape variables are incorporated by morphing the finite element model. This morphing process imitates the changes that occurred during the deep drawing process. An optimization procedure that uses the Global Response Surface Method (GRSM) algorithm to maximize the diagonal terms of the Modal Assurance Criterion (MAC) matrix is presented. This optimization results in a more accurate finite element model. The advantage of the proposed methodology is that the CAD surface of the updated finite element model can be readily obtained after optimization. This CAD model can be used for carrying out analysis, as it represents the manufactured part more accurately. Simulations performed using this updated model, with its accurate geometry, will therefore yield more reliable results.
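
    For reference, the Modal Assurance Criterion used in that objective can be sketched as below: each entry compares one test mode shape with one simulated mode shape, and the optimizer would drive the diagonal toward unity over the morphing shape variables. The mode shapes here are random illustrative data.

    ```python
    import numpy as np

    def mac(phi_test, phi_sim):
        """MAC matrix: entry (i, j) compares test mode i with simulated mode j."""
        num = np.abs(phi_test.T @ phi_sim) ** 2
        den = np.outer(np.sum(phi_test * phi_test, axis=0),
                       np.sum(phi_sim * phi_sim, axis=0))
        return num / den

    # Shapes are (n_dof, n_modes); an optimizer such as GRSM would maximize
    # the diagonal of this matrix over the geometry shape variables.
    phi_a = np.linalg.qr(np.random.default_rng(6).normal(size=(10, 3)))[0]
    phi_b = phi_a + 0.05 * np.random.default_rng(7).normal(size=(10, 3))
    print(np.round(np.diag(mac(phi_a, phi_b)), 3))    # values close to 1
    ```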

  19. Updates on measurements and modeling techniques for expendable countermeasures

    NASA Astrophysics Data System (ADS)

    Gignilliat, Robert; Tepfer, Kathleen; Wilson, Rebekah F.; Taczak, Thomas M.

    2016-10-01

    The potential threat of recently advertised anti-ship missiles has instigated research at the United States (US) Naval Research Laboratory (NRL) into the improvement of measurement techniques for visual-band countermeasures. The goal of the measurements is the collection of radiometric imagery for use in building and validating digital models of expendable countermeasures. This paper will present an overview of measurement requirements unique to the visual band and differences between visual-band and infrared (IR) band measurements. A review of the metrics used to characterize signatures in the visible band will be presented and contrasted with those commonly used in IR-band measurements. For example, visual-band measurements require higher-fidelity characterization of the background, including improved high-transmittance measurements and better characterization of solar conditions, to correlate results more closely with changes in the environment. The range of relevant engagement angles has also been expanded to include higher-altitude measurements of targets and countermeasures. In addition to the discussion of measurement techniques, a top-level qualitative summary of modeling approaches will be presented. No quantitative results or data will be presented.

  20. Proactive Security Testing and Fuzzing

    NASA Astrophysics Data System (ADS)

    Takanen, Ari

    Software is bound to have security-critical flaws, and no testing or code auditing can ensure that software is flawless. But software security testing requirements have improved radically during the past years, largely due to criticism from security-conscious consumers and enterprise customers. Whereas in the past security flaws were taken for granted (and patches were quietly and humbly installed), they are now probably one of the most common reasons why people switch vendors or software providers. The maintenance costs of security updates often add up to become one of the biggest cost items for large enterprise users. Fortunately, test automation techniques have also improved. Techniques like model-based testing (MBT) enable efficient generation of security tests that reach good confidence levels in discovering zero-day mistakes in software. This technique is called fuzzing.

  1. Summary of Expansions, Updates, and Results in GREET 2017 Suite of Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Michael; Elgowainy, Amgad; Han, Jeongwoo

    This report provides a technical summary of the expansions and updates to the 2017 release of Argonne National Laboratory’s Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET®) model, including references and links to key technical documents related to these expansions and updates. The GREET 2017 release includes an updated version of the GREET1 (the fuel-cycle GREET model) and GREET2 (the vehicle-cycle GREET model), both in the Microsoft Excel platform and in the GREET.net modeling platform. Figure 1 shows the structure of the GREET Excel modeling platform. The .net platform integrates all GREET modules together seamlessly.

  2. Landsat Imagery: A Tool for Updating Land Use in Gulf Coast Mexico

    ERIC Educational Resources Information Center

    Harnapp, Vern

    1978-01-01

    Explores the use of Landsat imagery in mapping and updating land use for the purpose of planning. Examines Gulf Coast Mexico as a case study, because modern agricultural techniques used to expand the ranching industry have significantly altered the landscape. (Author/BC)

  3. Updates to the Demographic and Spatial Allocation Models to ...

    EPA Pesticide Factsheets

    EPA announced the availability of the final report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2). This update furthered land change modeling by providing nationwide housing development scenarios up to 2100. This newest version includes updated population and land use data sets and addresses limitations identified in ICLUS v1 in both the migration and spatial allocation models. The companion user guide (Final Report) describes the development of ICLUS v2 and the updates that were made to the original data sets and the demographic and spatial allocation models. The GIS tool enables users to run SERGoM with the population projections developed for the ICLUS project and allows users to modify the spatial allocation of housing density across the landscape.

  4. Inverting permittivity and conductivity with a structural constraint in GPR FWI based on the truncated Newton method

    NASA Astrophysics Data System (ADS)

    Ren, Qianci

    2018-04-01

    Full waveform inversion (FWI) of ground penetrating radar (GPR) data is a promising technique for quantitatively evaluating the permittivity and conductivity of the near subsurface. However, these two parameters are inverted simultaneously in GPR FWI, which makes it difficult to obtain accurate inversion results for both. In this study, I present a structurally constrained GPR FWI procedure to jointly invert the two parameters, aiming to enforce a structural relationship between permittivity and conductivity in the process of model reconstruction. The structural constraint is enforced by a cross-gradient function. In this procedure, the permittivity and conductivity models are inverted alternately at each iteration and updated with hierarchical frequency components in the frequency domain. The joint inverse problem is solved by the truncated Newton method, which considers the effect of the Hessian operator and uses the approximate solution of the Newton equation as the perturbation model in the updating process. The joint inversion procedure is tested on three synthetic examples. The results show that jointly inverting permittivity and conductivity in GPR FWI effectively increases the structural similarity between the two parameters, corrects the structures of the parameter models, and significantly improves the accuracy of the conductivity model, resulting in a better inversion result than individual inversion.
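
    The cross-gradient function named above can be sketched as the out-of-plane component of the cross product of the two parameter fields' spatial gradients; it vanishes wherever the fields share structure (parallel gradients). The finite-difference implementation and test fields below are illustrative assumptions.

    ```python
    import numpy as np

    def cross_gradient(m1, m2, dx=1.0, dz=1.0):
        """t(x, z) = grad(m1) x grad(m2), out-of-plane component, on a 2-D grid."""
        g1z, g1x = np.gradient(m1, dz, dx)
        g2z, g2x = np.gradient(m2, dz, dx)
        return g1x * g2z - g1z * g2x

    eps = np.random.default_rng(8).normal(size=(32, 32)).cumsum(axis=0)
    sigma_aligned = 2.0 * eps + 1.0       # shares structure with eps exactly
    sigma_random = np.random.default_rng(9).normal(size=(32, 32)).cumsum(axis=1)

    # The constraint term is (much) smaller for structurally similar models.
    print(np.sum(cross_gradient(eps, sigma_aligned) ** 2))   # ~0
    print(np.sum(cross_gradient(eps, sigma_random) ** 2))    # large
    ```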

  5. Model Update of a Micro Air Vehicle (MAV) Flexible Wing Frame with Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.; Waszak, Martin R.; Morgan, Benjamin G.

    2004-01-01

    This paper describes a procedure to update parameters in the finite element model of a Micro Air Vehicle (MAV) to improve displacement predictions under aerodynamic loads. Because of fabrication, materials, and geometric uncertainties, a statistical approach combined with Multidisciplinary Design Optimization (MDO) is used to modify key model parameters. Static test data collected using photogrammetry are used to correlate with model predictions. Results show significant improvements in model predictions after parameters are updated; however, computed probability values indicate low confidence in the updated values and/or errors in the model structure. Lessons learned in the areas of wing design, test procedures, modeling approaches with geometric nonlinearities, and uncertainty quantification are all documented.

  6. Detection of Earthquake-Induced Damage in a Framed Structure Using a Finite Element Model Updating Procedure

    PubMed Central

    Kim, Seung-Nam; Park, Taewon; Lee, Sang-Hyun

    2014-01-01

    Damage of a 5-story framed structure was identified from two types of measured data, which are frequency response functions (FRF) and natural frequencies, using a finite element (FE) model updating procedure. In this study, a procedure to determine the appropriate weightings for different groups of observations was proposed. In addition, a modified frame element which included rotational springs was used to construct the FE model for updating to represent concentrated damage at the member ends (a formulation for plastic hinges in framed structures subjected to strong earthquakes). The results of the model updating and subsequent damage detection when the rotational springs (RS model) were used were compared with those obtained using the conventional frame elements (FS model). Comparisons indicated that the RS model gave more accurate results than the FS model. That is, the errors in the natural frequencies of the updated models were smaller, and the identified damage showed clearer distinctions between damaged and undamaged members and was more consistent with observed damage. PMID:24574888

  7. Current Management of Urethral Stricture

    PubMed Central

    Lee, Young Ju

    2013-01-01

    The surgical treatment of urethral stricture diseases is continually evolving. Although various surgical techniques are available for the treatment of anterior urethral stricture, no one technique has been identified as the method of choice. This article provides a brief updated review of the surgical options for the management of different sites and different types of anterior urethral stricture. This review also covers present controversies in urethral reconstruction. Among the various procedures available for treating urethral stricture, one-stage buccal mucosal graft urethroplasty is currently widely used. The choice of technique for urethroplasty for an individual case largely depends on the expertise of the surgeon. Therefore, urologists working in this field should keep themselves updated on the numerous surgical techniques to deal with any condition of the urethra that might surface at the time of surgery. PMID:24044088

  8. A weakly-constrained data assimilation approach to address rainfall-runoff model structural inadequacy in streamflow prediction

    NASA Astrophysics Data System (ADS)

    Lee, Haksu; Seo, Dong-Jun; Noh, Seong Jin

    2016-11-01

    This paper presents a simple yet effective weakly-constrained (WC) data assimilation (DA) approach for hydrologic models which accounts for model structural inadequacies associated with rainfall-runoff transformation processes. Compared to strongly-constrained (SC) DA, WC DA adjusts the control variables less while producing similarly or more accurate analysis; hence the adjusted model states are dynamically more consistent with those of the base model. The inadequacy of a rainfall-runoff model was modeled as an additive error to the runoff components prior to routing and penalized in the objective function. Two example modeling applications, distributed and lumped, were carried out to investigate the effects of the WC DA approach on DA results. For distributed modeling, the distributed Sacramento Soil Moisture Accounting (SAC-SMA) model was applied to the TIFM7 Basin in Missouri, USA. For lumped modeling, the lumped SAC-SMA model was applied to nineteen basins in Texas. In both cases, the variational DA (VAR) technique was used to assimilate discharge data at the basin outlet. For the distributed SAC-SMA, spatially homogeneous error modeling yielded updated states that are spatially much more similar to the a priori states, as quantified by the Earth Mover's Distance (EMD), than spatially heterogeneous error modeling, by up to ∼10 times. DA experiments using both lumped and distributed SAC-SMA modeling indicated that assimilating outlet flow using the WC approach generally produces a smaller mean absolute difference as well as a higher correlation between the a priori and the updated states than the SC approach, while producing a similar or smaller root mean square error of streamflow analysis and prediction. Large differences were found in both lumped and distributed modeling cases between the updated and the a priori lower zone tension and primary free water contents for both the WC and SC approaches, indicating possible model structural deficiency in describing low flows or evapotranspiration processes for the catchments studied. Also presented are the findings from this study and key issues relevant to WC DA approaches using hydrologic models.
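
    A minimal sketch of the weakly-constrained objective, under stated assumptions (a toy linear-reservoir routing model and a quadratic penalty weight), is given below: the additive runoff error is the control variable, and the penalty keeps the adjustment close to the base model. None of the names or values come from the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def route(runoff):
        """Toy linear-reservoir routing of runoff to outlet discharge."""
        q, out = 0.0, []
        for r in runoff:
            q = 0.7 * q + 0.3 * r
            out.append(q)
        return np.array(out)

    runoff_model = np.array([5.0, 20.0, 35.0, 25.0, 15.0, 10.0])   # a priori
    q_obs = route(runoff_model * 1.15)            # "truth" carries more runoff

    def wc_cost(w, lam=0.5):
        """Weakly constrained: discharge misfit + penalty on additive error w."""
        q_sim = route(runoff_model + w)
        return np.sum((q_sim - q_obs) ** 2) + lam * np.sum(w ** 2)

    res = minimize(wc_cost, x0=np.zeros_like(runoff_model))
    print("additive runoff corrections:", np.round(res.x, 2))
    ```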

  9. Annual Report: Carbon Capture Simulation Initiative (CCSI) (30 September 2013)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, David C.; Syamlal, Madhava; Cottrell, Roger

    2013-09-30

    The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that is developing and deploying state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models, with uncertainty quantification (UQ), optimization, risk analysis and decision making capabilities. The CCSI Toolset incorporates commercial and open-source software currently in use by industry and is also developing new software tools as necessary to fill technology gaps identified during execution of the project. Ultimately, the CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. CCSI is led by the National Energy Technology Laboratory (NETL) and leverages the Department of Energy (DOE) national laboratories’ core strengths in modeling and simulation, bringing together the best capabilities at NETL, Los Alamos National Laboratory (LANL), Lawrence Berkeley National Laboratory (LBNL), Lawrence Livermore National Laboratory (LLNL), and Pacific Northwest National Laboratory (PNNL). The CCSI’s industrial partners provide representation from the power generation industry, equipment manufacturers, technology providers and engineering and construction firms. The CCSI’s academic participants (Carnegie Mellon University, Princeton University, West Virginia University, Boston University and the University of Texas at Austin) bring unparalleled expertise in multiphase flow reactors, combustion, process synthesis and optimization, planning and scheduling, and process control techniques for energy processes. During Fiscal Year (FY) 13, CCSI announced the initial release of its first set of computational tools and models during the October 2012 meeting of its Industry Advisory Board. This initial release led to five companies licensing the CCSI Toolset under a Test and Evaluation Agreement this year. By the end of FY13, the CCSI Technical Team had completed development of an updated suite of computational tools and models. The list below summarizes the new and enhanced toolset components that were released following comprehensive testing during October 2013.

    1. FOQUS, the Framework for Optimization and Quantification of Uncertainty and Sensitivity. The package includes the FOQUS Graphical User Interface (GUI), a simulation-based optimization engine, the Turbine Client, and heat integration capabilities. There is also an updated simulation interface and a new configuration GUI for connecting Aspen Plus or Aspen Custom Modeler (ACM) simulations to FOQUS and the Turbine Science Gateway.
    2. A new MFIX-based Computational Fluid Dynamics (CFD) model to predict particle attrition.
    3. A new dynamic reduced model (RM) builder, which generates computationally efficient RMs of the behavior of a dynamic system.
    4. A completely re-written version of the algebraic surrogate model builder for optimization (ALAMO). The new version is several orders of magnitude faster than the initial release and eliminates the MATLAB dependency.
    5. A new suite of high-resolution filtered models for the hydrodynamics associated with horizontal cylindrical objects in a flow path.
    6. The new Turbine Science Gateway (Cluster), which supports FOQUS for running multiple simulations for optimization or UQ using a local computer or cluster.
    7. A new statistical tool (BSS-ANOVA-UQ) for calibration and validation of CFD models.
    8. A new basic data submodel in Aspen Plus format for a representative high-viscosity capture solvent, the 2-MPZ system.
    9. An updated RM tool for CFD (REVEAL) that can create a RM from MFIX. A new lightweight, stand-alone version will be available in late 2013.
    10. An updated RM integration tool to convert the RM from REVEAL into a CAPE-OPEN or ACM model for use in a process simulator.
    11. An updated suite of unified steady-state and dynamic process models for solid sorbent carbon capture, including bubbling fluidized bed and moving bed reactors.
    12. An updated and unified set of compressor models, including a steady-state design point model and a dynamic model with surge detection.
    13. A new framework for the synthesis and optimization of coal oxycombustion power plants using advanced optimization algorithms. This release focuses on modeling and optimization of a cryogenic air separation unit (ASU).
    14. A new technical risk model in spreadsheet format.
    15. An updated version of the sorbent kinetic/equilibrium model for parameter estimation for the 1st-generation sorbent model.
    16. An updated process synthesis superstructure model to determine optimal process configurations utilizing surrogate models from ALAMO for adsorption and regeneration in a solid sorbent process.
    17. Validation models for the NETL Carbon Capture Unit utilizing sorbent AX. Additional validation models will be available for sorbent 32D in 2014.
    18. An updated hollow fiber membrane model and system example for carbon capture.
    19. An updated reference power plant model in Thermoflex that includes additional steam extraction and reinjection points to enable the heat integration module.
    20. An updated financial risk model in spreadsheet format.

  10. Traffic Flow Management: Data Mining Update

    NASA Technical Reports Server (NTRS)

    Grabbe, Shon R.

    2012-01-01

    This presentation provides an update on recent data mining efforts that have been designed to (1) identify like/similar days in the national airspace system, (2) cluster/aggregate national-level rerouting data and (3) apply machine learning techniques to predict when Ground Delay Programs are required at a weather-impacted airport.

  11. Summary: Update to ASTM Guide E 1523 to Charge Control and Charge Referencing Techniques in X-ray Photoelectron Spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baer, Donald R.

    2005-04-22

    An updated version of ASTM guide E1523 on charge control and charge referencing techniques in x-ray photoelectron spectroscopy has been released by ASTM. The guide is meant to acquaint x-ray photoelectron spectroscopy (XPS) users with the various charge control and charge referencing techniques that are and have been used in the acquisition and interpretation of XPS data from surfaces of insulating specimens. The current guide has been expanded to include new references as well as recommendations for reporting information on charge control and charge referencing. The previous version of the document was published in 1997.

  12. Utilizing Flight Data to Update Aeroelastic Stability Estimates

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty

    1997-01-01

    Stability analysis of high performance aircraft must account for errors in the system model. A method for computing flutter margins that incorporates flight data has been developed using robust stability theory. This paper considers applying this method to update flutter margins during a post-flight or on-line analysis. Areas of modeling uncertainty that arise when using flight data with this method are investigated. The amount of conservatism in the resulting flutter margins depends on the flight data sets used to update the model. Post-flight updates of flutter margins for an F/A-18 are presented along with a simulation of on-line updates during a flight test.

  13. Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Final Report, Version 2)

    EPA Science Inventory

    EPA announced the availability of the final report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2). This update furthered land change modeling by providing nationwide housing developmen...

  14. A reduced order, test verified component mode synthesis approach for system modeling applications

    NASA Astrophysics Data System (ADS)

    Butland, Adam; Avitabile, Peter

    2010-05-01

    Component mode synthesis (CMS) is a very common approach used for the generation of large system models. In general, these modeling techniques can be separated into two categories: those utilizing a combination of constraint modes and fixed interface normal modes and those based on a combination of free interface normal modes and residual flexibility terms. The major limitation of the methods utilizing constraint modes and fixed interface normal modes is the inability to easily obtain the required information from testing; the result of this limitation is that constraint mode-based techniques are primarily used with numerical models. An alternate approach is proposed which utilizes frequency and shape information acquired from modal testing to update reduced order finite element models using exact analytical model improvement techniques. The connection degrees of freedom are then rigidly constrained in the test verified, reduced order model to provide the boundary conditions necessary for constraint modes and fixed interface normal modes. The CMS approach is then used with this test verified, reduced order model to generate the system model for further analysis. A laboratory structure is used to show the application of the technique, with both numerical and simulated experimental components used to describe the system and validate the proposed approach. Actual test data are then used in the proposed approach. Because typical measurement contaminants are always present in any test, the measured data are further processed to remove contaminants and are then used in the proposed approach. The final case, using improved data with the reduced order, test verified components, is shown to produce very acceptable results from the Craig-Bampton component mode synthesis approach. The use of the technique, along with its strengths and weaknesses, is discussed.
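
    As context for the constraint-mode family of methods discussed above, the sketch below implements a bare-bones Craig-Bampton reduction for a small spring-mass chain: constraint modes come from a static condensation of the interior stiffness, and a truncated set of fixed-interface normal modes completes the basis. This is a generic textbook version, not the authors' test-verified procedure; the 5-DOF chain and the choice of boundary DOF are illustrative.

    ```python
    import numpy as np

    def craig_bampton(K, M, b_dofs, n_modes):
        """Craig-Bampton reduction: constraint modes plus fixed-interface
        normal modes (illustrative dense implementation).

        b_dofs: indices of the boundary (connection) DOFs kept physically.
        """
        n = K.shape[0]
        i_dofs = [d for d in range(n) if d not in b_dofs]
        Kii = K[np.ix_(i_dofs, i_dofs)]; Kib = K[np.ix_(i_dofs, b_dofs)]
        Mii = M[np.ix_(i_dofs, i_dofs)]
        psi = -np.linalg.solve(Kii, Kib)                 # constraint modes
        # Fixed-interface normal modes: Kii phi = lam Mii phi via Cholesky.
        L = np.linalg.cholesky(Mii); Linv = np.linalg.inv(L)
        lam, q = np.linalg.eigh(Linv @ Kii @ Linv.T)
        phi = Linv.T @ q[:, :n_modes]
        # Assemble transformation T mapping [boundary; modal] -> full DOFs.
        nb = len(b_dofs)
        T = np.zeros((n, nb + n_modes))
        T[np.ix_(b_dofs, range(nb))] = np.eye(nb)
        T[np.ix_(i_dofs, range(nb))] = psi
        T[np.ix_(i_dofs, range(nb, nb + n_modes))] = phi
        return T.T @ K @ T, T.T @ M @ T, T

    # 5-DOF spring chain; keep DOF 4 as boundary plus 2 fixed-interface modes.
    n = 5
    K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    M = np.eye(n)
    Kr, Mr, T = craig_bampton(K, M, b_dofs=[4], n_modes=2)
    print(Kr.shape)   # (3, 3) reduced system
    ```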

  15. Quantitative Measures for Evaluation of Ultrasound Therapies of the Prostate

    NASA Astrophysics Data System (ADS)

    Kobelevskiy, Ilya; Burtnyk, Mathieu; Bronskill, Michael; Chopra, Rajiv

    2010-03-01

    Development of non-invasive techniques for prostate cancer treatment requires implementation of quantitative measures for evaluation of the treatment results. In this paper, we introduce measures that estimate spatial targeting accuracy and potential thermal damage to the structures surrounding the prostate. The measures were developed for the technique of treating prostate cancer with transurethral ultrasound heating applicators guided by active MR temperature feedback. Variations of ultrasound element length and related MR imaging parameters such as MR slice thickness and update time were investigated by performing numerical simulations of the treatment on a database of ten patient prostate geometries segmented from clinical MR images. Susceptibility of each parameter configuration to uncertainty in MR temperature measurements was studied by adding noise to the temperature measurements. Gaussian noise with zero mean and standard deviation of 0, 1, 3 and 5° C was used to model different levels of uncertainty in MR temperature measurements. Results of simulations for each parameter configuration were averaged over the database of the ten prostate patient geometries studied. Results have shown that for an update time of 5 seconds both 3- and 5-mm elements achieve appropriate performance for temperature uncertainty up to 3° C, while temperature uncertainty of 5° C leads to a noticeable reduction in spatial accuracy and an increased risk of damaging the rectal wall. Ten-mm elements lacked spatial accuracy and had a higher risk of damaging the rectal wall compared to 3- and 5-mm elements, but were less sensitive to the level of temperature uncertainty. The effect of changing update time was studied for 5-mm elements. Simulations showed that update time had minor effects on all aspects of treatment for temperature uncertainties of 0° C and 1° C, while temperature uncertainties of 3° C and 5° C led to reduced spatial accuracy, increased potential damage to the rectal wall, and longer treatment times for update times above 5 seconds. Overall evaluation of the results suggested that 5-mm elements showed the best performance under physically reachable MR imaging parameters.
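
    The noise study described above reduces to adding zero-mean Gaussian perturbations to the simulated MR temperature maps. A minimal sketch of that step follows; the map size, the 55° C threshold and the helper names are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def noisy_temperature(true_temp_map, sigma_c, rng=None):
        """Add zero-mean Gaussian noise (std in deg C) to an MR temperature map,
        modeling measurement uncertainty at the levels studied in the abstract
        (0, 1, 3 and 5 deg C)."""
        rng = np.random.default_rng() if rng is None else rng
        return true_temp_map + rng.normal(0.0, sigma_c, size=true_temp_map.shape)

    # One simulated reading per noise level: compare the noisy map against a
    # 55 deg C control threshold (the threshold value is illustrative).
    true_map = np.full((64, 64), 52.0)
    for sigma in (0.0, 1.0, 3.0, 5.0):
        measured = noisy_temperature(true_map, sigma, np.random.default_rng(0))
        overshoot = np.mean(measured > 55.0)
        print(f"sigma={sigma} C -> fraction of pixels falsely above threshold: {overshoot:.3f}")
    ```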

  16. Application of Artificial Intelligence for Bridge Deterioration Model.

    PubMed

    Chen, Zhang; Wu, Yangyang; Li, Li; Sun, Lijun

    2015-01-01

    The deterministic bridge deterioration model updating problem is well established in bridge management, while the traditional methods and approaches for this problem require manual intervention. In this paper, an artificial-intelligence-based approach is presented to self-update the parameters of the bridge deterioration model. When new information and data are collected, a posterior distribution is constructed according to Bayes' theorem that integrates the historical information with the newly gained information, and this posterior is used to update the model parameters. This AI-based approach is applied to the case of updating the parameters of a bridge deterioration model, using data collected from bridges in 12 districts of Shanghai from 2004 to 2013; the results show that it is an accurate, effective, and satisfactory approach for parameter updating without manual intervention.
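
    The record does not give the likelihood or prior families used, so the sketch below illustrates the Bayesian updating step with a conjugate normal-normal pair: a historical prior on a deterioration-rate parameter is combined with newly collected inspection data to give the posterior that then drives the model update. All distributions and numbers are assumptions for illustration.

    ```python
    import numpy as np

    def normal_posterior(prior_mean, prior_var, obs, obs_var):
        """Conjugate normal-normal update of a deterioration-rate parameter,
        combining the historical prior with newly collected inspection data,
        mirroring the Bayesian updating step described in the abstract."""
        obs = np.asarray(obs, dtype=float)
        n = obs.size
        post_var = 1.0 / (1.0 / prior_var + n / obs_var)
        post_mean = post_var * (prior_mean / prior_var + obs.sum() / obs_var)
        return post_mean, post_var

    # Historical model: condition loss of 0.8 rating points/year; new surveys arrive.
    mean, var = 0.8, 0.04
    for campaign in ([0.9, 0.85], [1.0, 0.95, 0.9]):     # two inspection campaigns
        mean, var = normal_posterior(mean, var, campaign, obs_var=0.02)
        print(f"updated rate: {mean:.3f} +/- {np.sqrt(var):.3f}")
    ```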

  17. Application of Artificial Intelligence for Bridge Deterioration Model

    PubMed Central

    Chen, Zhang; Wu, Yangyang; Sun, Lijun

    2015-01-01

    The deterministic bridge deterioration model updating problem is well established in bridge management, while the traditional methods and approaches for this problem require manual intervention. In this paper, an artificial-intelligence-based approach is presented to self-update the parameters of the bridge deterioration model. When new information and data are collected, a posterior distribution is constructed according to Bayes' theorem that integrates the historical information with the newly gained information, and this posterior is used to update the model parameters. This AI-based approach is applied to the case of updating the parameters of a bridge deterioration model, using data collected from bridges in 12 districts of Shanghai from 2004 to 2013; the results show that it is an accurate, effective, and satisfactory approach for parameter updating without manual intervention. PMID:26601121

  18. Elastic Velocity Updating through Image-Domain Tomographic Inversion of Passive Seismic Data

    NASA Astrophysics Data System (ADS)

    Witten, B.; Shragge, J. C.

    2014-12-01

    Seismic monitoring at injection sites (e.g., CO2 sequestration, waste water disposal, hydraulic fracturing) has become an increasingly important tool for hazard identification and avoidance. The information obtained from these data is often limited to seismic event properties (e.g., location, approximate time, moment tensor), the accuracy of which greatly depends on the estimated elastic velocity models. However, creating accurate velocity models from passive array data remains a challenging problem. Common techniques rely on picking arrivals or matching waveforms, requiring high signal-to-noise data that are often not available for the small-magnitude earthquakes observed over injection sites. We present a new method for obtaining elastic velocity information from earthquakes through full-wavefield wave-equation imaging and adjoint-state tomography. The technique exploits images of the earthquake source using various imaging conditions based upon the P- and S-wavefield data. We generate image volumes by back propagating data through initial models and then applying a correlation-based imaging condition. We use the P-wavefield autocorrelation, S-wavefield autocorrelation, and P-S wavefield cross-correlation images. Inconsistencies in the images form the residuals, which are used to update the P- and S-wave velocity models through adjoint-state tomography. Because the image volumes are constructed from all trace data, the signal-to-noise in this space is increased compared to the individual traces. Moreover, the approach eliminates the need for picking and does not require any estimation of the source location and timing. Initial tests show that with a reasonable source distribution and acquisition array, velocity anomalies can be recovered. Future tests will apply this methodology to other scales, from laboratory to global.
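
    A minimal sketch of the correlation-based imaging condition described above: given back-propagated P- and S-wavefields (assumed already computed by a wave-equation propagator, which is outside this sketch), the zero-lag cross-correlation image peaks where both wavefields focus simultaneously, and disagreement between the image volumes supplies the residual for the adjoint-state update. Array shapes and helper names are illustrative.

    ```python
    import numpy as np

    def crosscorrelation_image(p_wavefield, s_wavefield):
        """Zero-lag cross-correlation imaging condition.

        Inputs are (nt, nz, nx) arrays holding back-propagated P and S
        wavefields. The image peaks where both wavefields focus at the same
        time, i.e. at the source when the velocity models are consistent."""
        return np.einsum("tij,tij->ij", p_wavefield, s_wavefield)

    def image_inconsistency(p_img, s_img, ps_img):
        """Scalar misfit between the image volumes; in the paper's scheme the
        image inconsistencies drive the adjoint-state velocity update."""
        return np.linalg.norm(p_img - ps_img) + np.linalg.norm(s_img - ps_img)

    rng = np.random.default_rng(0)
    P = rng.normal(size=(100, 20, 20))      # synthetic stand-in wavefields
    S = rng.normal(size=(100, 20, 20))
    print(crosscorrelation_image(P, S).shape)   # (20, 20) source image
    ```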

  19. GEOCAB Portal: A gateway for discovering and accessing capacity building resources in Earth Observation

    NASA Astrophysics Data System (ADS)

    Desconnets, Jean-Christophe; Giuliani, Gregory; Guigoz, Yaniss; Lacroix, Pierre; Mlisa, Andiswa; Noort, Mark; Ray, Nicolas; Searby, Nancy D.

    2017-02-01

    The discovery of and access to capacity building resources are often essential to conduct environmental projects based on Earth Observation (EO) resources, whether they are Earth Observation products, methodological tools, techniques, organizations that impart training in these techniques or even projects that have shown practical achievements. Recognizing this opportunity and need, the European Commission, through two FP7 projects, teamed up with the Group on Earth Observations (GEO) and the Committee on Earth Observation Satellites (CEOS). The Global Earth Observation CApacity Building (GEOCAB) portal aims at compiling all current capacity building efforts on the use of EO data for societal benefits into an easily updateable and user-friendly portal. GEOCAB offers a faceted search to improve the user discovery experience, together with a fully interactive world map of all inventoried projects and activities. This paper focuses on the conceptual framework used to implement the underlying platform. An ISO 19115 metadata model and an associated terminological repository are the core elements that provide a semantic search application and an interoperable discovery service. The organization and the contribution of different user communities to ensure the management and the update of the content of GEOCAB are addressed.

  20. Overview and Evaluation of the Community Multiscale Air ...

    EPA Pesticide Factsheets

    The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Computational Exposure Division (CED) of the U.S. Environmental Protection Agency develops the CMAQ model and periodically releases new versions of the model that include bug fixes and various other improvements to the modeling system. In late 2016 or early 2017, CMAQ version 5.2 will be released. This new version of CMAQ will contain important updates from the current CMAQv5.1 modeling system, along with several instrumented versions of the model (e.g. decoupled direct method and sulfur tracking). Some specific model updates include the implementation of a new wind-blown dust treatment in CMAQv5.2, a significant improvement over the treatment in v5.1, which can severely overestimate wind-blown dust under certain conditions. Several other major updates to the modeling system include an update to the calculation of aerosols; implementation of full halogen chemistry (CMAQv5.1 contains a partial implementation of halogen chemistry); the new carbon bond 6 (CB6) chemical mechanism; updates to the cloud model in CMAQ; and a new lightning assimilation scheme for the WRF model, which significantly improves the placement and timing of convective precipitation in the WRF precipitation fields. Numerous other updates to the modeling system will also be available in v5.2.

  1. Modeling Amorphous Microporous Polymers for CO2 Capture and Separations.

    PubMed

    Kupgan, Grit; Abbott, Lauren J; Hart, Kyle E; Colina, Coray M

    2018-06-13

    This review concentrates on the advances of atomistic molecular simulations to design and evaluate amorphous microporous polymeric materials for CO2 capture and separations. A description of atomistic molecular simulations is provided, including simulation techniques, structural generation approaches, relaxation and equilibration methodologies, and considerations needed for validation of simulated samples. The review provides general guidelines and a comprehensive update of the recent literature (since 2007) to promote the acceleration of the discovery and screening of amorphous microporous polymers for CO2 capture and separation processes.

  2. Learn-as-you-go acceleration of cosmological parameter estimates

    NASA Astrophysics Data System (ADS)

    Aslanyan, Grigor; Easther, Richard; Price, Layne C.

    2015-09-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.
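
    The core control flow of a learn-as-you-go scheme can be sketched in a few lines: emulate when a local error estimate is small, otherwise fall back to the exact likelihood and grow the training set. The nearest-neighbour emulator and crude error model below are stand-ins for the far more careful machinery in Cosmo++; everything here is illustrative.

    ```python
    import numpy as np

    class LearnAsYouGo:
        """Minimal sketch of a learn-as-you-go likelihood emulator: emulate
        when the local error estimate is below tolerance, otherwise call the
        exact (slow) likelihood and add the result to the training set."""
        def __init__(self, exact_loglike, err_tol=0.1):
            self.exact = exact_loglike
            self.err_tol = err_tol
            self.X, self.y = [], []

        def __call__(self, theta):
            theta = np.asarray(theta, dtype=float)
            if len(self.X) >= 2:
                d = np.linalg.norm(np.array(self.X) - theta, axis=1)
                i, j = np.argsort(d)[:2]
                err_estimate = abs(self.y[i] - self.y[j])  # crude local error model
                if err_estimate < self.err_tol:
                    return self.y[i]                       # trusted emulated value
            val = self.exact(theta)                        # slow exact evaluation
            self.X.append(theta); self.y.append(val)       # grow the training set
            return val

    loglike = LearnAsYouGo(lambda t: -0.5 * float(np.sum(t**2)))
    for theta in np.random.default_rng(1).normal(size=(200, 2)) * 0.1:
        loglike(theta)
    print(f"exact calls: {len(loglike.X)} of 200")
    ```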

  3. Learn-as-you-go acceleration of cosmological parameter estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aslanyan, Grigor; Easther, Richard; Price, Layne C., E-mail: g.aslanyan@auckland.ac.nz, E-mail: r.easther@auckland.ac.nz, E-mail: lpri691@aucklanduni.ac.nz

    2015-09-01

    Cosmological analyses can be accelerated by approximating slow calculations using a training set, which is either precomputed or generated dynamically. However, this approach is only safe if the approximations are well understood and controlled. This paper surveys issues associated with the use of machine-learning based emulation strategies for accelerating cosmological parameter estimation. We describe a learn-as-you-go algorithm that is implemented in the Cosmo++ code and (1) trains the emulator while simultaneously estimating posterior probabilities; (2) identifies unreliable estimates, computing the exact numerical likelihoods if necessary; and (3) progressively learns and updates the error model as the calculation progresses. We explicitly describe and model the emulation error and show how this can be propagated into the posterior probabilities. We apply these techniques to the Planck likelihood and the calculation of ΛCDM posterior probabilities. The computation is significantly accelerated without a pre-defined training set and uncertainties in the posterior probabilities are subdominant to statistical fluctuations. We have obtained a speedup factor of 6.5 for Metropolis-Hastings and 3.5 for nested sampling. Finally, we discuss the general requirements for a credible error model and show how to update them on-the-fly.

  4. SHM-Based Probabilistic Fatigue Life Prediction for Bridges Based on FE Model Updating

    PubMed Central

    Lee, Young-Joo; Cho, Soojin

    2016-01-01

    Fatigue life prediction for a bridge should be based on the current condition of the bridge, and various sources of uncertainty, such as material properties, anticipated vehicle loads and environmental conditions, make the prediction very challenging. This paper presents a new approach for probabilistic fatigue life prediction for bridges using finite element (FE) model updating based on structural health monitoring (SHM) data. Recently, various types of SHM systems have been used to monitor and evaluate the long-term structural performance of bridges. For example, SHM data can be used to estimate the degradation of an in-service bridge, which makes it possible to update the initial FE model. The proposed method consists of three steps: (1) identifying the modal properties of a bridge, such as mode shapes and natural frequencies, based on the ambient vibration under passing vehicles; (2) updating the structural parameters of an initial FE model using the identified modal properties; and (3) predicting the probabilistic fatigue life using the updated FE model. The proposed method is demonstrated by application to a numerical model of a bridge, and the impact of FE model updating on the bridge fatigue life is discussed. PMID:26950125
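
    Step (2) of the procedure, updating structural parameters so that model frequencies match the identified ones, can be illustrated with a Gauss-Newton iteration on a 2-DOF spring-mass chain (the model, the finite-difference sensitivities and all numbers below are assumptions for illustration, not the bridge FE model of the paper):

    ```python
    import numpy as np

    def natural_freqs(k_params, m=(1.0, 1.0)):
        """Natural frequencies (rad/s) of a 2-DOF chain with stiffnesses k1, k2."""
        k1, k2 = k_params
        K = np.array([[k1 + k2, -k2], [-k2, k2]])
        Minv_half = np.diag(1.0 / np.sqrt(m))
        lam = np.linalg.eigvalsh(Minv_half @ K @ Minv_half)
        return np.sqrt(lam)

    def update_stiffness(k0, measured_freqs, iters=20):
        """Gauss-Newton update of stiffness parameters to match identified
        frequencies: the model-updating step of the abstract, in miniature."""
        k = np.asarray(k0, dtype=float)
        for _ in range(iters):
            r = measured_freqs - natural_freqs(k)
            J = np.empty((2, 2))
            for j in range(2):                      # finite-difference sensitivity
                dk = np.zeros(2); dk[j] = 1e-6 * k[j]
                J[:, j] = (natural_freqs(k + dk) - natural_freqs(k)) / dk[j]
            k += np.linalg.lstsq(J, r, rcond=None)[0]
        return k

    measured = natural_freqs((1200.0, 950.0))       # "identified" modal data
    print(update_stiffness([1000.0, 1000.0], measured))  # recovers ~[1200, 950]
    ```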

  5. Integration of Narrative Processing, Data Fusion, and Database Updating Techniques in an Automated System.

    DTIC Science & Technology

    1981-10-29

    The updating, combining, and copying processes are implemented, respectively, in the files "W-Update," "W-Combine," and "W-Copy," listed in the appendix. The appendix begins with a typescript of a database updating experiment in Rosie; the copying process (steps 45 and 46) is shown as human actions in the typescript, but can be performed easily by a "master" program. A cited reference is M. Marcus, A Theory of Syntactic Recognition for Natural Language, MIT Press, 1980.

  6. Normal response function method for mass and stiffness matrix updating using complex FRFs

    NASA Astrophysics Data System (ADS)

    Pradhan, S.; Modak, S. V.

    2012-10-01

    Quite often a structural dynamic finite element model is required to be updated so as to accurately predict dynamic characteristics like the natural frequencies and the mode shapes. Since in many situations undamped natural frequencies and mode shapes need to be predicted, it has generally been the practice in these situations to seek updating of only the mass and stiffness matrices so as to obtain a reliable prediction model. Updating using frequency response functions (FRFs) has been one of the widely used approaches, including for updating of the mass and stiffness matrices. However, the problem with FRF-based methods for updating the mass and stiffness matrices is that these methods are based on the use of complex FRFs. Using complex FRFs to update the mass and stiffness matrices is not theoretically correct, as complex FRFs are affected not only by these two matrices but also by the damping matrix. Therefore, in situations where updating of only the mass and stiffness matrices using FRFs is required, the use of a complex-FRF-based updating formulation is not fully justified and would lead to inaccurate updated models. This paper addresses this difficulty and proposes an improved FRF-based finite element model updating procedure using the concept of normal FRFs. The proposed method is a modified version of the existing response function method, which is based on complex FRFs. The effectiveness of the proposed method is validated through a numerical study of a simple but representative beam structure. The effects of coordinate incompleteness and the robustness of the method in the presence of noise are investigated. The results of updating obtained by the improved method are compared with those of the existing response function method. The performance of the two approaches is compared for cases of lightly, moderately and heavily damped structures. It is found that the proposed improved method is effective in updating the mass and stiffness matrices in all cases of complete and incomplete data and with all levels and types of damping.
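
    The distinction the paper exploits can be shown directly: the complex receptance depends on the damping matrix, while the "normal" receptance of the associated undamped system depends only on M and K, which is what makes it the appropriate quantity for mass/stiffness updating. A minimal 2-DOF sketch, with illustrative matrices:

    ```python
    import numpy as np

    def receptance(M, K, C, omega):
        """Complex receptance H(w) = (K - w^2 M + i w C)^-1 of the damped system."""
        return np.linalg.inv(K - omega**2 * M + 1j * omega * C)

    def normal_receptance(M, K, omega):
        """'Normal' receptance of the associated undamped system: it depends only
        on the mass and stiffness matrices, per the concept in the abstract."""
        return np.linalg.inv(K - omega**2 * M)

    # 2-DOF example: the two differ most where damping matters, near resonance.
    M = np.eye(2)
    K = np.array([[2000.0, -1000.0], [-1000.0, 1000.0]])
    C = 0.002 * K                                       # light proportional damping
    for w in (10.0, 20.0, 35.0):
        h_c = receptance(M, K, C, w)[0, 0]
        h_n = normal_receptance(M, K, w)[0, 0]
        print(f"w={w:5.1f}  |H_complex|={abs(h_c):.3e}  H_normal={h_n:.3e}")
    ```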

  7. Modal-space reference-model-tracking fuzzy control of earthquake excited structures

    NASA Astrophysics Data System (ADS)

    Park, Kwan-Soon; Ok, Seung-Yong

    2015-01-01

    This paper describes an adaptive modal-space reference-model-tracking fuzzy control technique for the vibration control of earthquake-excited structures. In the proposed approach, fuzzy logic is introduced to update the optimal control force so that the controlled structural response can track the desired response of a reference model. For easy and practical implementation, the reference model is constructed by assigning target damping ratios to the first few dominant modes in modal space. The numerical simulation results demonstrate that the proposed approach achieves not only adaptive fault-tolerant control against partial actuator failures but also robust performance against variations of the uncertain system properties, by redistributing the feedback control forces to the available actuators.
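
    The reference-model construction described above, one decoupled second-order system per dominant mode with an assigned target damping ratio, can be sketched as follows (mode frequencies and target ratios are illustrative):

    ```python
    import numpy as np

    def reference_modal_model(omegas, target_zetas):
        """Reference model in modal space: one decoupled second-order system
        per dominant mode, with the target damping ratio assigned to it
        (state-space form x_dot = A x + B f per mode)."""
        models = []
        for w, z in zip(omegas, target_zetas):
            A = np.array([[0.0, 1.0], [-w**2, -2.0 * z * w]])
            B = np.array([[0.0], [1.0]])
            models.append((A, B))
        return models

    # First two structural modes at 1.2 Hz and 3.5 Hz, pushed to 20% damping.
    w = 2 * np.pi * np.array([1.2, 3.5])
    for A, B in reference_modal_model(w, [0.2, 0.2]):
        print(np.linalg.eigvals(A))   # poles exhibit the assigned damping ratio
    ```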

  8. Experimental Study and Modeling of Anaerobic Digestion of Residual Organic Matter under Hyperthermophilic Conditions

    NASA Astrophysics Data System (ADS)

    Altamirano, Felipe Ignacio Castro

    This dissertation focuses on the problem of designing rates in the utility sector. It is motivated by recent developments in the electricity industry, where renewable generation technologies and distributed energy resources are becoming increasingly relevant. Both technologies disrupt the sector in unique ways. While renewables make grid operations more complex, and potentially more expensive, distributed energy resources enable consumers to interact with the grid in both directions. Both developments present challenges and opportunities for regulators, who must adapt their techniques for evaluating policies to the emerging technological conditions. The first two chapters of this work make the case for updating existing techniques to evaluate tariff structures. They also propose new methods which are more appropriate given the prospective technological characteristics of the sector. The first chapter constructs an analytic tool based on a model that captures the interaction between pricing and investment. In contrast to previous approaches, this technique allows consistent comparison of portfolios of rates while enabling researchers to model the supply side of the sector in significantly greater detail. A key theoretical implication of the model that underlies this technique is that, by properly updating the portfolio of tariffs, a regulator could induce the welfare-maximizing adoption of distributed energy resources and enrollment in rate structures. We develop an algorithm to find globally optimal solutions of this model, which is a nonlinear mathematical program. The results of a computational experiment show that the performance of the algorithm dominates that of commercial nonlinear solvers. In addition, to illustrate the practical relevance of the method, we conduct a cost-benefit analysis of implementing time-variant tariffs in two electricity systems, California and Denmark. Although portfolios with time-varying rates create value in both systems, these improvements differ enough to advise very different policies. While in Denmark time-varying tariffs appear unattractive, they at least deserve further revision in California. This conclusion is beyond the reach of previous techniques for analyzing rates, as they do not capture the interplay between an intermittent supply and a price-responsive demand. While useful, the method we develop in the first chapter has two important limitations. One is the lack of transparency of the parameters that determine demand substitution patterns and demand heterogeneity; the other is the narrow range of rate structures that could be studied with the technique. Both limitations stem from taking a demand function as a primitive. Following an alternative path, in the second chapter we develop a technique based on a pricing model that has the consumer utility maximization problem as a fundamental building block. Because researchers do not have to limit themselves to problems with unique solutions, this approach significantly increases the flexibility of the model and, in particular, addresses the limitations of the technique we develop in the first chapter. This gain in flexibility decreases the practicality of our method, since the underlying model becomes a bilevel problem. To be able to handle realistic instances, we develop a decomposition method based on a non-linear variant of the Alternating Direction Method of Multipliers, which combines conic and mixed integer programming. 
A numerical experiment shows that the performance of the solution technique is robust to instance size and a wide range of parameter combinations. We illustrate the relevance of the new method with another applied analysis of rate structures. Our results highlight the value of being able to model distributed energy resources in detail. They also show that ignoring transmission constraints can have meaningful impacts on the analysis of rate structures. In addition, we conduct a distributional analysis, which portrays how our method permits regulators and policy makers to study the impacts of a rate update on a heterogeneous population. While a switch in rates could have a positive impact on households in the aggregate, it could benefit some more than others, and even harm some customers. Our technique makes it possible to anticipate these impacts, letting regulators decide among rate structures with considerably more information than would be available with alternative approaches. In the third chapter, we conduct an empirical analysis of rate structures in California, which is currently undergoing a rate reform. To contribute to the ongoing regulatory debate about the future of rates, we analyze in depth a set of plausible tariff alternatives. In our analysis, we focus on a scenario in which advanced metering infrastructure and home energy management systems are widely adopted. Our modeling approach allows us to capture a wide variety of temporal and spatial demand substitution patterns without the need to estimate a large number of parameters. (Abstract shortened by ProQuest.)

  9. Microseismic Image-domain Velocity Inversion: Case Study From The Marcellus Shale

    NASA Astrophysics Data System (ADS)

    Shragge, J.; Witten, B.

    2017-12-01

    Seismic monitoring at injection wells relies on generating accurate location estimates of detected (micro-)seismicity. Event location estimates assist in optimizing well and stage spacings, assessing potential hazards, and establishing causation of larger events. The largest impediment to generating accurate location estimates is an accurate velocity model. For surface-based monitoring the model should capture 3D velocity variation, yet, rarely is the laterally heterogeneous nature of the velocity field captured. Another complication for surface monitoring is that the data often suffer from low signal-to-noise levels, making velocity updating with established techniques difficult due to uncertainties in the arrival picks. We use surface-monitored field data to demonstrate that a new method requiring no arrival picking can improve microseismic locations by jointly locating events and updating 3D P- and S-wave velocity models through image-domain adjoint-state tomography. This approach creates a complementary set of images for each chosen event through wave-equation propagation and correlating combinations of P- and S-wavefield energy. The method updates the velocity models to optimize the focal consistency of the images through adjoint-state inversions. We demonstrate the functionality of the method using a surface array of 192 three-component geophones over a hydraulic stimulation in the Marcellus Shale. Applying the proposed joint location and velocity-inversion approach significantly improves the estimated locations. To assess event location accuracy, we propose a new measure of inconsistency derived from the complementary images. By this measure the location inconsistency decreases by 75%. The method has implications for improving the reliability of microseismic interpretation with low signal-to-noise data, which may increase hydrocarbon extraction efficiency and improve risk assessment from injection related seismicity.

  10. Characterization of Oribtal Debris via Hyper-Velocity Ground-Based Tests

    NASA Technical Reports Server (NTRS)

    Cowardin, H.

    2015-01-01

    Existing DoD and NASA satellite breakup models are based on a key laboratory-based test, the Satellite Orbital debris Characterization Impact Test (SOCIT), which has supported many applications and matched on-orbit events involving older satellite designs reasonably well over the years. In order to update and improve the break-up models and the NASA Size Estimation Model (SEM) for events involving more modern satellite designs, the NASA Orbital Debris Program Office has worked in collaboration with the University of Florida to replicate a hypervelocity impact using a satellite built with modern-day spacecraft materials and construction techniques. The spacecraft, called DebriSat, was intended to be representative of modern LEO satellites, and all major design decisions were reviewed and approved by subject matter experts at the Aerospace Corporation. DebriSat is composed of 7 major subsystems including the attitude determination and control system (ADCS), command and data handling (C&DH), electrical power system (EPS), payload, propulsion, telemetry tracking and command (TT&C), and thermal management. To reduce cost, most components are emulated based on existing designs of flight hardware and fabricated with the same materials. All fragments down to 2 mm in size will be characterized by material, size, shape, and bulk density, and the associated data will be stored in a database for multiple users to access. Laboratory radar and optical measurements will be performed on a subset of fragments to provide a better understanding of the data products from orbital debris acquired from ground-based radars and telescopes. The resulting data analysis from DebriSat will be used to update break-up models and to develop the first optical SEM, in conjunction with updates to the current NASA SEM. The characterization of the fragmentation will be discussed in the subsequent presentation.

  11. Adapting to change: The role of the right hemisphere in mental model building and updating.

    PubMed

    Filipowicz, Alex; Anderson, Britt; Danckert, James

    2016-09-01

    We recently proposed that the right hemisphere plays a crucial role in the processes underlying mental model building and updating. Here, we review the evidence we and others have garnered to support this novel account of right hemisphere function. We begin by presenting evidence from patient work that suggests a critical role for the right hemisphere in the ability to learn from the statistics in the environment (model building) and adapt to environmental change (model updating). We then provide a review of neuroimaging research that highlights a network of brain regions involved in mental model updating. Next, we outline specific roles for particular regions within the network such that the anterior insula is purported to maintain the current model of the environment, the medial prefrontal cortex determines when to explore new or alternative models, and the inferior parietal lobule represents salient and surprising information with respect to the current model. We conclude by proposing some future directions that address some of the outstanding questions in the field of mental model building and updating. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  12. The 2006 Cape Canaveral Air Force Station Range Reference Atmosphere Model Validation Study and Sensitivity Analysis to the National Aeronautics and Space Administration's Space Shuttle

    NASA Technical Reports Server (NTRS)

    Burns, Lee; Merry, Carl; Decker, Ryan; Harrington, Brian

    2008-01-01

    The 2006 Cape Canaveral Air Force Station (CCAFS) Range Reference Atmosphere (RRA) is a statistical model summarizing the wind and thermodynamic atmospheric variability from the surface to 70 km. Launches of the National Aeronautics and Space Administration's (NASA) Space Shuttle from Kennedy Space Center utilize CCAFS RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the CCAFS RRA was recently completed. As part of the update, a validation study on the 2006 version was conducted, as well as a comparison of the 2006 version against the existing 1983 version of the CCAFS RRA database. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed to determine the impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.

  13. A Proposed Multimedia Cone of Abstraction: Updating a Classic Instructional Design Theory

    ERIC Educational Resources Information Center

    Baukal, Charles E.; Ausburn, Floyd B.; Ausburn, Lynna J.

    2013-01-01

    Advanced multimedia techniques offer significant learning potential for students. Dale (1946, 1954, 1969) developed a Cone of Experience (CoE) which is a hierarchy of learning experiences ranging from direct participation to abstract symbolic expression. This paper updates the CoE for today's technology and learning context, specifically focused…

  14. Artificial Boundary Conditions for Finite Element Model Update and Damage Detection

    DTIC Science & Technology

    2017-03-01

    Artificial Boundary Conditions for Finite Element Model Update and Damage Detection, by Emmanouil Damanakis. Master's thesis, March 2017; thesis advisor: Joshua H. Gordis. Approved for public release; distribution is unlimited. Abstract (fragment): In structural engineering, a finite element model is often...

  15. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
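
    The record does not state DATMAN's prior and posterior families, so the sketch below uses the standard conjugate Gamma-Poisson pair for a constant failure rate, which is representative of the kind of Bayesian updating the code automates; all numbers are illustrative.

    ```python
    def gamma_poisson_update(alpha, beta, failures, exposure_hours):
        """Conjugate update of a failure rate lambda ~ Gamma(alpha, beta).

        With `failures` observed over `exposure_hours` of operation, the
        posterior is Gamma(alpha + failures, beta + exposure_hours), the
        standard conjugate pair for constant-rate failure data."""
        return alpha + failures, beta + exposure_hours

    alpha, beta = 1.0, 2000.0      # prior: mean rate 1/2000 per hour
    alpha, beta = gamma_poisson_update(alpha, beta, failures=2, exposure_hours=8760.0)
    print(f"posterior mean failure rate: {alpha / beta:.2e} /h")
    ```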

  16. An Investigation Into the Effects of Frequency Response Function Estimators on Model Updating

    NASA Astrophysics Data System (ADS)

    Ratcliffe, M. J.; Lieven, N. A. J.

    1999-03-01

    Model updating is a very active research field, in which significant effort has been invested in recent years. Model updating methodologies are invariably successful when used on noise-free simulated data, but tend to be unpredictable when presented with real experimental data that are unavoidably corrupted with uncorrelated noise content. In the development and validation of model-updating strategies, a random zero-mean Gaussian variable is added to simulated test data to tax the updating routines more fully. This paper proposes a more sophisticated model for experimental measurement noise, and this is used in conjunction with several different frequency response function estimators, from the classical H1 and H2 to more refined estimators that purport to be unbiased. Finite-element model case studies, in conjunction with a genuine experimental test, suggest that the proposed noise model is a more realistic representation of experimental noise phenomena. The choice of estimator is shown to have a significant influence on the viability of the FRF sensitivity method. These test cases find that the use of the H2 estimator for model updating purposes is contraindicated, and that there is no advantage to be gained by using the sophisticated estimators over the classical H1 estimator.
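
    For reference, the classical estimators compared in the paper are H1 = Sxy/Sxx (optimal when noise sits on the output) and H2 = Syy/Syx (optimal for input noise). A bare-bones segment-averaged sketch with a rectangular window and synthetic data; a production estimator would use proper windowing and overlap:

    ```python
    import numpy as np

    def h1_h2(x, y, nfft=256):
        """Classical H1 and H2 FRF estimators from input x and output y,
        averaged over contiguous segments (rectangular window)."""
        segs = len(x) // nfft
        Sxx = Sxy = Syy = Syx = 0.0
        for k in range(segs):
            X = np.fft.rfft(x[k*nfft:(k+1)*nfft])
            Y = np.fft.rfft(y[k*nfft:(k+1)*nfft])
            Sxx += np.abs(X)**2;   Syy += np.abs(Y)**2
            Sxy += np.conj(X) * Y; Syx += np.conj(Y) * X
        return Sxy / Sxx, Syy / Syx

    rng = np.random.default_rng(0)
    x = rng.normal(size=8192)                                # input
    y = np.convolve(x, [0.5, 0.3, 0.1], mode="same") + 0.1 * rng.normal(size=8192)
    H1, H2 = h1_h2(x, y)
    print(np.abs(H1[10]), np.abs(H2[10]))   # H1 <= true <= H2 in magnitude
    ```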

  17. Multi-GPU Acceleration of Branchless Distance Driven Projection and Backprojection for Clinical Helical CT.

    PubMed

    Mitra, Ayan; Politte, David G; Whiting, Bruce R; Williamson, Jeffrey F; O'Sullivan, Joseph A

    2017-01-01

    Model-based image reconstruction (MBIR) techniques have the potential to generate high quality images from noisy measurements and a small number of projections, which can reduce the x-ray dose to patients. These MBIR techniques rely on projection and backprojection to refine an image estimate. One of the widely used projectors for these modern MBIR-based techniques is branchless distance driven (DD) projection and backprojection. While this method produces superior quality images, the computational cost of iterative updates keeps it from being ubiquitous in clinical applications. In this paper, we provide several new parallelization ideas for concurrent execution of the DD projectors in multi-GPU systems using CUDA programming tools. We have introduced some novel schemes for dividing the projection data and image voxels over multiple GPUs to avoid runtime overhead and inter-device synchronization issues. We have also reduced the complexity of the overlap calculation of the algorithm by eliminating the common projection plane and directly projecting the detector boundaries onto image voxel boundaries. To reduce the time required for calculating the overlap between the detector edges and image voxel boundaries, we have proposed a pre-accumulation technique to accumulate image intensities in perpendicular 2D image slabs (from a 3D image) before projection and after backprojection, to ensure our DD kernels run faster in parallel GPU threads. For the implementation of our iterative MBIR technique we use a parallel multi-GPU version of the alternating minimization (AM) algorithm with a penalized likelihood update. The time performance of our proposed reconstruction method with Siemens Sensation 16 patient scan data shows an average speedup of 24 times using a single TITAN X GPU and 74 times using 3 TITAN X GPUs in parallel for combined projection and backprojection.

  18. Techniques of orbital decay and long-term ephemeris prediction for satellites in earth orbit

    NASA Technical Reports Server (NTRS)

    Barry, B. F.; Pimm, R. S.; Rowe, C. K.

    1971-01-01

    In the special perturbation method, Cowell and variation-of-parameters formulations of the motion equations are implemented and numerically integrated. Variations in the orbital elements due to drag are computed using the 1970 Jacchia atmospheric density model, which includes the effects of semiannual variations, diurnal bulge, solar activity, and geomagnetic activity. In the general perturbation method, two-variable asymptotic series and automated manipulation capabilities are used to obtain analytical solutions to the variation-of-parameters equations. Solutions are obtained considering the effect of oblateness only and the combined effects of oblateness and drag. These solutions are then numerically evaluated by means of a FORTRAN program in which an updating scheme is used to maintain accurate epoch values of the elements. The atmospheric density function is approximated by a Fourier series in true anomaly, and the 1970 Jacchia model is used to periodically update the Fourier coefficients. The accuracy of both methods is demonstrated by comparing computed orbital elements to actual elements over time spans of up to 8 days for the special perturbation method and up to 356 days for the general perturbation method.
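
    The density-updating step lends itself to a short sketch: fit Fourier coefficients in true anomaly to density samples by least squares, refreshing the coefficients periodically. Here the samples are synthetic stand-ins for evaluations of the 1970 Jacchia model:

    ```python
    import numpy as np

    def fit_density_fourier(nu, rho, n_harmonics=4):
        """Least-squares Fourier fit of density versus true anomaly nu:
        rho(nu) ~ a0 + sum_k (a_k cos k nu + b_k sin k nu). In the paper's
        scheme the samples come from the Jacchia model and the coefficients
        are refreshed periodically."""
        cols = [np.ones_like(nu)]
        for k in range(1, n_harmonics + 1):
            cols += [np.cos(k * nu), np.sin(k * nu)]
        A = np.column_stack(cols)
        coeffs, *_ = np.linalg.lstsq(A, rho, rcond=None)
        return coeffs, A @ coeffs

    nu = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    rho = 1e-12 * (1.0 + 0.6 * np.cos(nu - 0.8))   # synthetic bulge-like variation
    coeffs, fit = fit_density_fourier(nu, rho)
    print(f"max relative fit error: {np.max(np.abs(fit - rho) / rho):.2e}")
    ```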

  19. Optimization Model for Web Based Multimodal Interactive Simulations.

    PubMed

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-07-15

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
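
    The three phases can be sketched end to end with a toy discrete design space: the identification phase would supply the cost coefficients, the optimization phase solves the (here tiny, so brute-forced) integer program, and the update phase applies the chosen settings. All coefficients and candidate values below are placeholders, not the paper's model:

    ```python
    from itertools import product

    # Discrete design space: (texture size, canvas resolution scale, sim grid).
    TEXTURES  = [256, 512, 1024]
    CANVAS    = [0.5, 0.75, 1.0]
    SIM_GRIDS = [16, 32, 64]

    def cost(tex, canvas, grid):
        """Estimated per-frame cost (ms) on the client device; the coefficients
        would come from the identification phase's proxy-code benchmark."""
        return 2e-5 * tex + 8.0 * canvas + 4e-3 * grid**2

    def quality(tex, canvas, grid):
        """Application-specific quality score (placeholder weighting)."""
        return 0.3 * tex / 1024 + 0.3 * canvas + 0.4 * grid / 64

    def optimize(budget_ms):
        """Optimization phase: max quality s.t. cost <= budget. The space has
        only 27 candidates, so exhaustive search stands in for the paper's
        mixed integer programming model."""
        feasible = [(quality(*d), d) for d in product(TEXTURES, CANVAS, SIM_GRIDS)
                    if cost(*d) <= budget_ms]
        return max(feasible)[1] if feasible else None

    print(optimize(budget_ms=16.0))   # the update phase would apply these settings
    ```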

  20. Optimization Model for Web Based Multimodal Interactive Simulations

    PubMed Central

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-01-01

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach. PMID:26085713

  1. SCADA-based Operator Support System for Power Plant Equipment Fault Forecasting

    NASA Astrophysics Data System (ADS)

    Mayadevi, N.; Ushakumari, S. S.; Vinodchandra, S. S.

    2014-12-01

    Power plant equipment must be monitored closely to prevent failures from disrupting plant availability. Online monitoring technology integrated with hybrid forecasting techniques can be used to prevent plant equipment faults. A self learning rule-based expert system is proposed in this paper for fault forecasting in power plants controlled by a supervisory control and data acquisition (SCADA) system. Self-learning utilizes associative data mining algorithms on the SCADA history database to form new rules that can dynamically update the knowledge base of the rule-based expert system. In this study, a number of popular associative learning algorithms are considered for rule formation. Data mining results show that the Tertius algorithm is best suited for developing a learning engine for power plants. For real-time monitoring of the plant condition, graphical models are constructed by K-means clustering. To build a time-series forecasting model, a multilayer perceptron (MLP) is used. Once created, the models are updated in the model library to provide an adaptive environment for the proposed system. A graphical user interface (GUI) illustrates the variation of all sensor values affecting a particular alarm/fault, as well as the step-by-step procedure for avoiding critical situations and consequent plant shutdown. The forecasting performance is evaluated by computing the mean absolute error and root mean square error of the predictions.
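
    The evaluation reported at the end of the abstract uses mean absolute error and root mean square error. The sketch below computes both for a one-step forecaster; a simple linear autoregressive predictor stands in for the paper's MLP, and the sensor trace is synthetic:

    ```python
    import numpy as np

    def mae_rmse(actual, predicted):
        """Mean absolute error and root mean square error, the two forecasting
        metrics used in the abstract's evaluation."""
        e = np.asarray(actual, dtype=float) - np.asarray(predicted, dtype=float)
        return np.mean(np.abs(e)), np.sqrt(np.mean(e**2))

    def ar_forecast(series, order=3):
        """Linear autoregressive one-step forecaster (stand-in for the MLP)."""
        X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
        y = series[order:]
        w = np.linalg.lstsq(X, y, rcond=None)[0]
        return X @ w, y

    rng = np.random.default_rng(2)
    temps = 80 + np.cumsum(rng.normal(0, 0.5, size=500))   # synthetic sensor trace
    pred, actual = ar_forecast(temps)
    print("MAE=%.3f  RMSE=%.3f" % mae_rmse(actual, pred))
    ```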

  2. Quality Analysis on 3D Building Models Reconstructed from UAV Imagery

    NASA Astrophysics Data System (ADS)

    Jarzabek-Rychard, M.; Karpina, M.

    2016-06-01

    Recent developments in UAV technology and structure-from-motion techniques have led to UAVs becoming standard platforms for 3D data collection. Because of their flexibility and ability to reach inaccessible urban areas, drones appear to be an optimal solution for urban applications. Building reconstruction from data collected with a UAV has important potential to reduce the labour cost of fast updates of already reconstructed 3D cities. However, especially for updating existing scenes derived from different sensors (e.g. airborne laser scanning), a proper quality assessment is necessary. The objective of this paper is thus to evaluate the potential of UAV imagery as an information source for automatic 3D building modeling at LOD2. The investigation is conducted in three steps: (1) comparing the generated SfM point cloud to ALS data; (2) computing internal consistency measures of the reconstruction process; (3) analysing the deviation of Check Points identified on building roofs and measured with a tacheometer. In order to gain deep insight into the modeling performance, various quality indicators are computed and analysed. The assessment performed against the ground truth shows that the building models acquired with UAV photogrammetry have an accuracy of less than 18 cm for the planimetric position and about 15 cm for the height component.
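
    Step (3) of the assessment compares model points against tacheometer-surveyed Check Points. A minimal sketch of the planimetric and height RMSE computation, with hypothetical coordinates:

    ```python
    import numpy as np

    def checkpoint_errors(model_xyz, survey_xyz):
        """Planimetric (xy) and height (z) RMSE of model points against
        surveyed Check Points, as in step (3) of the assessment."""
        d = np.asarray(model_xyz, float) - np.asarray(survey_xyz, float)
        rmse_xy = np.sqrt(np.mean(np.sum(d[:, :2]**2, axis=1)))
        rmse_z = np.sqrt(np.mean(d[:, 2]**2))
        return rmse_xy, rmse_z

    # Hypothetical roof check points (metres): model vs tacheometer survey.
    model  = [[10.02, 5.11, 21.10], [12.51, 7.48, 21.06], [15.00, 5.09, 24.13]]
    survey = [[10.00, 5.00, 21.00], [12.40, 7.40, 21.20], [14.90, 5.00, 24.00]]
    print("RMSE xy=%.3f m, z=%.3f m" % checkpoint_errors(model, survey))
    ```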

  3. Novel flood risk assessment framework for rapid decision making

    NASA Astrophysics Data System (ADS)

    Valyrakis, Manousos; Koursari, Eftychia; Solley, Mark

    2016-04-01

    The impacts of catastrophic flooding have increased significantly over the last few decades. This is due primarily to increased urbanisation in ever-expanding mega-cities, as well as to the intensification, in both magnitude and frequency, of extreme hydrologic events. Herein a novel conceptual framework is presented that incorporates the use of real-time information to inform and update low-dimensionality hydraulic models, to allow for rapid decision making towards preventing loss of life and safeguarding critical infrastructure. In particular, a case study from the recent UK floods in the area of Whitesands (Dumfries) is presented to demonstrate the utility of this approach. It is demonstrated that effectively combining a wealth of readily available qualitative information (such as crowdsourced visual documentation or live data from sensing techniques) with existing quantitative data can help appropriately update hydraulic models and reduce modelling uncertainties in future flood risk assessments. This approach is even more useful in cases where hydraulic models are limited, do not exist, or were not needed before unpredicted dynamic modifications to the river system took place (for example, in the case of reduced or eliminated hydraulic capacity due to blockages). The low computational cost and rapid assessment this framework offers render it promising for innovating in flood management.

  4. The VEPSY UPDATED Project: clinical rationale and technical approach.

    PubMed

    Riva, G; Alcãniz, M; Anolli, L; Bacchetta, M; Baños, R; Buselli, C; Beltrame, F; Botella, C; Castelnuovo, G; Cesa, G; Conti, S; Galimberti, C; Gamberini, L; Gaggioli, A; Klinger, E; Legeron, P; Mantovani, F; Mantovani, G; Molinari, E; Optale, G; Ricciardiello, L; Perpiñá, C; Roy, S; Spagnolli, A; Troiani, R; Weddle, C

    2003-08-01

    More than 10 years ago, Tart (1990) described virtual reality (VR) as a technological model of consciousness offering intriguing possibilities for developing diagnostic, inductive, psychotherapeutic, and training techniques that can extend and supplement current ones. To exploit and understand this potential is the overall goal of "Telemedicine and Portable Virtual Environment in Clinical Psychology" (VEPSY UPDATED), a European Community-funded research project (IST-2000-25323, www.cybertherapy.info). In particular, its specific goal is the development of different PC-based virtual reality modules to be used in the clinical assessment and treatment of social phobia, panic disorders, male sexual disorders, obesity, and eating disorders. The paper describes the clinical and technical rationale behind the clinical applications developed by the project. Moreover, the paper focuses its analysis on the possible role of VR in clinical psychology and how it can be used for therapeutic change.

  5. Adaptive Event-Triggered Control Based on Heuristic Dynamic Programming for Nonlinear Discrete-Time Systems.

    PubMed

    Dong, Lu; Zhong, Xiangnan; Sun, Changyin; He, Haibo

    2017-07-01

    This paper presents the design of a novel adaptive event-triggered control method based on the heuristic dynamic programming (HDP) technique for nonlinear discrete-time systems with unknown system dynamics. In the proposed method, the control law is only updated when the event-triggered condition is violated. Compared with the periodic updates in the traditional adaptive dynamic programming (ADP) control, the proposed method can reduce the computation and transmission cost. An actor-critic framework is used to learn the optimal event-triggered control law and the value function. Furthermore, a model network is designed to estimate the system state vector. The main contribution of this paper is to design a new trigger threshold for discrete-time systems. A detailed Lyapunov stability analysis shows that our proposed event-triggered controller can asymptotically stabilize the discrete-time systems. Finally, we test our method on two different discrete-time systems, and the simulation results are included.
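
    The central mechanism, recomputing the control law only when a trigger condition on the state gap is violated, can be sketched with a fixed linear gain standing in for the HDP-learned actor-critic controller (dynamics, gain and tolerance below are illustrative):

    ```python
    import numpy as np

    def simulate_event_triggered(A, B, K_gain, x0, steps=50, trigger_tol=0.05):
        """Event-triggered state feedback for x[k+1] = A x + B u.

        The control u = -K x_hat is only recomputed when the gap between the
        current state and the last sampled state violates the trigger
        condition; between events the control is held constant."""
        x, x_hat, updates = np.asarray(x0, float), np.asarray(x0, float), 0
        for _ in range(steps):
            if np.linalg.norm(x - x_hat) > trigger_tol * np.linalg.norm(x):
                x_hat = x.copy()          # event: sample state, update control
                updates += 1
            u = -K_gain @ x_hat           # held between events
            x = A @ x + B @ u
        return x, updates

    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    K_gain = np.array([[5.0, 7.0]])       # stabilizing gain chosen by hand
    x_final, n_updates = simulate_event_triggered(A, B, K_gain, [1.0, 0.0])
    print(f"final state norm {np.linalg.norm(x_final):.3e}, control updates: {n_updates}/50")
    ```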

  6. The New Italian Seismic Hazard Model

    NASA Astrophysics Data System (ADS)

    Marzocchi, W.; Meletti, C.; Albarello, D.; D'Amico, V.; Luzi, L.; Martinelli, F.; Pace, B.; Pignone, M.; Rovida, A.; Visini, F.

    2017-12-01

    In 2015 the Seismic Hazard Center (Centro Pericolosità Sismica, CPS) of the National Institute of Geophysics and Volcanology was commissioned to coordinate the national scientific community with the aim of elaborating a new reference seismic hazard model, mainly intended for the update of the seismic code. The CPS designed a roadmap for releasing within three years a significantly renewed PSHA model, with regard both to the updated input elements and to the strategies to be followed. The main requirements of the model were discussed in meetings with experts on earthquake engineering who will then participate in the revision of the building code. The activities were organized into 6 tasks: program coordination, input data, seismicity models, ground motion predictive equations (GMPEs), computation and rendering, and testing. The input data task selected the most up-to-date information about seismicity (historical and instrumental), seismogenic faults, and deformation (from both seismicity and geodetic data). The seismicity models were elaborated in terms of classic source areas, fault sources, and gridded seismicity, based on different approaches. The GMPEs task selected the most recent models, accounting for their tectonic suitability and forecasting performance. The testing phase was planned to design statistical procedures to test, against the available data, the whole seismic hazard model as well as single components such as the seismicity models and the GMPEs. In this talk we show some preliminary results, summarize the overall strategy for building the new Italian PSHA model, and discuss in detail important novelties that we put forward. Specifically, we adopt a new formal probabilistic framework to interpret the outcomes of the model and to test it meaningfully; this requires a proper definition and characterization of both aleatory variability and epistemic uncertainty, which we accomplish through an ensemble modeling strategy. We use a weighting scheme for the different components of the PSHA model that has been built through three independent steps: a formal experts' elicitation, the outcomes of the testing phase, and the correlation between the outcomes. Finally, we explore through different techniques the influence of the declustering procedure on seismic hazard.

  7. An Update of the Bodeker Scientific Vertically Resolved, Global, Gap-Free Ozone Database

    NASA Astrophysics Data System (ADS)

    Kremser, S.; Bodeker, G. E.; Lewis, J.; Hassler, B.

    2016-12-01

    High vertical resolution ozone measurements from multiple satellite-based instruments have been merged with measurements from the global ozonesonde network to calculate monthly mean ozone values in 5º latitude zones. Ozone number densities and ozone mixing ratios are provided on 70 altitude levels (1 to 70 km) and on 70 pressure levels spaced approximately 1 km apart (878.4 hPa to 0.046 hPa). These data are sparse and do not cover the entire globe or altitude range. To provide a gap-free database, a least-squares regression model is fitted to these data and then evaluated globally. By applying a single fit at each level, and by allowing the regression fits to change only slightly from one level to the next, the regression is made less sensitive to measurement anomalies at individual stations or from individual satellite-based instruments. Particular attention is paid to ensuring that the low ozone abundances in the polar regions are captured. This presentation reports on updates to an earlier version of the vertically resolved ozone database, including the incorporation of new ozone measurements and new techniques for combining the data. Compared to previous versions of the database, particular attention is paid to avoiding spatial and temporal sampling biases and to tracing uncertainties through to the final product. This updated database, developed within the New Zealand Deep South National Science Challenge, is suitable for assessing ozone fields from chemistry-climate model simulations or for providing the ozone boundary conditions for global climate model simulations that do not treat stratospheric chemistry interactively.
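
    A minimal sketch of the level-coupled fitting idea under stated assumptions: each level gets its own least-squares fit, but a quadratic penalty pulls its coefficients toward those of the level below, damping sensitivity to anomalous measurements. The data, basis size, and penalty weight are invented, and the sequential solve is a simple stand-in for the joint penalized regression.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_levels, n_obs, n_basis = 70, 200, 4

    # Invented data: one design matrix of regression basis functions (e.g.
    # seasonal terms) and noisy observations at each altitude level.
    X = rng.normal(size=(n_obs, n_basis))
    true_b = np.cumsum(0.05 * rng.normal(size=(n_levels, n_basis)), axis=0)
    Y = np.stack([X @ b + 0.1 * rng.normal(size=n_obs) for b in true_b])

    lam = 50.0                      # coupling strength between adjacent levels
    XtX, coeffs = X.T @ X, []
    b_prev = np.zeros(n_basis)
    for level in range(n_levels):
        # Ridge-like solve pulling this level's fit toward the level below it.
        A = XtX + lam * np.eye(n_basis)
        rhs = X.T @ Y[level] + lam * b_prev
        b_prev = np.linalg.solve(A, rhs)
        coeffs.append(b_prev)
    coeffs = np.array(coeffs)       # (70, n_basis) smoothly varying fits
    ```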

  8. A new frequency matching technique for FRF-based model updating

    NASA Astrophysics Data System (ADS)

    Yang, Xiuming; Guo, Xinglin; Ouyang, Huajiang; Li, Dongsheng

    2017-05-01

    Frequency Response Function (FRF) residues have been widely used to update finite element models. As raw measurement information, they offer rich data and avoid modal-extraction errors. However, like other sensitivity-based methods, an FRF-based identification method must confront the ill-conditioning problem, which is even more serious here since the sensitivity of the FRF in the vicinity of a resonance is much greater than elsewhere. Furthermore, for a given frequency measurement, directly using the theoretical FRF at that frequency may lead to a huge difference between the theoretical FRF and the corresponding experimental FRF, which ultimately amplifies the effects of measurement errors and damping. Hence, in the solution process, selecting the appropriate frequency at which to evaluate the theoretical FRF in every iteration of the sensitivity-based approach is an effective way to improve the robustness of an FRF-based algorithm. The primary existing tool for frequency selection, based on the correlation of FRFs, is the Frequency Domain Assurance Criterion. This paper presents a new frequency selection method that directly finds the frequency minimizing the difference in order of magnitude between the theoretical and experimental FRFs. A simulated truss structure is used to compare the performance of different frequency selection methods. For realism, it is assumed that not all the degrees of freedom (DoFs) are available for measurement. The minimum number of DoFs required by each approach to correctly update the analytical model is used as the identification criterion.
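
    As a hedged sketch of the magnitude-matching idea (the paper's exact criterion is not reproduced here), the snippet below picks, within a window around each measured frequency, the analytical frequency whose FRF magnitude is closest to the experimental one on a log10 (order-of-magnitude) scale. The FRFs and window are illustrative single-DoF assumptions.

    ```python
    import numpy as np

    def select_frequency(freqs, H_analytical, w_meas, H_meas, window=2.0):
        """Pick the analytical frequency whose FRF magnitude best matches
        the measured FRF magnitude, comparing orders of magnitude (log10)."""
        mask = np.abs(freqs - w_meas) <= window          # local search window (Hz)
        candidates = freqs[mask]
        diff = np.abs(np.log10(np.abs(H_analytical[mask]))
                      - np.log10(np.abs(H_meas)))
        return candidates[np.argmin(diff)]

    # Illustrative single-DoF FRFs with slightly different natural frequencies.
    freqs = np.linspace(1.0, 50.0, 2000)
    H_model = 1.0 / (-(2 * np.pi * freqs) ** 2 + 1j * 20.0 + (2 * np.pi * 10.5) ** 2)
    w_meas = 12.0
    H_exp = 1.0 / (-(2 * np.pi * w_meas) ** 2 + 1j * 20.0 + (2 * np.pi * 10.0) ** 2)
    print(select_frequency(freqs, H_model, w_meas, np.array(H_exp)))
    ```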

  9. Adaptation of clinical prediction models for application in local settings.

    PubMed

    Kappen, Teus H; Vergouwe, Yvonne; van Klei, Wilton A; van Wolfswinkel, Leo; Kalkman, Cor J; Moons, Karel G M

    2012-01-01

    When planning to use a validated prediction model in new patients, adequate performance is not guaranteed. For example, changes in clinical practice over time or a different case mix than the original validation population may result in inaccurate risk predictions. The aim was to demonstrate how clinical information can direct the updating of a prediction model and the development of a strategy for handling missing predictor values in clinical practice. A previously derived and validated prediction model for postoperative nausea and vomiting was updated using a data set of 1847 patients. The update consisted of 1) changing the definition of an existing predictor, 2) reestimating the regression coefficient of a predictor, and 3) adding a new predictor to the model. The updated model was then validated in a new series of 3822 patients. Furthermore, several imputation models were considered for handling real-time missing values, so that possible missing predictor values could be anticipated during actual model use. Differences in clinical practice between our local population and the original derivation population guided the update strategy of the prediction model. The predictive accuracy of the updated model was better (c statistic, 0.68; calibration slope, 1.0) than that of the original model (c statistic, 0.62; calibration slope, 0.57). Inclusion of logistical variables in the imputation models, besides observed patient characteristics, contributed to a strategy for dealing with missing predictor values at the time of risk calculation. Extensive knowledge of local, clinical processes provides crucial information to guide the process of adapting a prediction model to new clinical practices.
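
    To make one of these update steps concrete, here is a minimal, hypothetical sketch of logistic recalibration on local data (synthetic predictors and outcomes; this is not the actual postoperative nausea and vomiting model): the slope of the refitted linear predictor is the calibration slope reported above.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 1847
    X_local = rng.normal(size=(n, 3))                  # hypothetical predictors

    # Hypothetical local outcomes drawn from a slightly different model than
    # the original, mimicking a changed case mix / clinical practice.
    logit_true = -0.5 + X_local @ np.array([0.8, 0.2, -0.4])
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_true))).astype(int)

    # Original (derivation) model coefficients, assumed known.
    b0_orig, b_orig = -0.3, np.array([0.5, 0.2, -0.4])
    lp = b0_orig + X_local @ b_orig                    # original linear predictor

    # Logistic recalibration: regress the outcome on the original linear
    # predictor. The fitted slope is the calibration slope (1.0 = ideal);
    # the intercept corrects the overall event rate. Large C ~ no penalty.
    recal = LogisticRegression(C=1e6).fit(lp.reshape(-1, 1), y)
    print("calibration slope:", recal.coef_[0, 0])
    print("calibration intercept:", recal.intercept_[0])
    ```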

  10. Ecological Census Techniques - 2nd Edition

    NASA Astrophysics Data System (ADS)

    Sutherland, Edited By William J.

    2006-08-01

    This is an updated version of the best-selling first edition, Ecological Census Techniques, with some new chapters and authors. Almost all ecological and conservation work involves carrying out a census or survey. This practically focussed book describes how to plan a census, covers the practical details and shows, with worked examples, how to analyse the results. The first three chapters describe planning, sampling and the basic theory necessary for carrying out a census. In the subsequent chapters, international experts describe the appropriate methods for counting plants, insects, fish, amphibians, reptiles, mammals and birds. As many censuses also relate the results to environmental variability, there is a chapter explaining the main methods. Finally, there is a list of the most common mistakes encountered when carrying out a census. The book gives worked examples and describes practical details, and the chapter on research planning provides an approach for planning any research, not just that relating to census techniques. This latest edition of a very highly regarded book includes new authors, updated chapters, and additional chapters on sampling and designing research programmes.

  11. A review and update of the Virginia Department of Transportation cash flow forecasting model.

    DOT National Transportation Integrated Search

    1996-01-01

    This report details the research done to review and update components of the VDOT cash flow forecasting model. Specifically, the study updated the monthly factors submodel used to predict payments on construction contracts. For the other submodel rev...

  12. Assimilation of GOES Land Surface Data Within a Rapid Update Cycle Format: Impact on MM5 Warm Season QPF

    NASA Technical Reports Server (NTRS)

    Lapenta, William M.; Suggs, Ron; Jedlovec, Gary; McNider, Richard T.; Dembek, Scott; Arnold, James E. (Technical Monitor)

    2001-01-01

    A technique has been developed for assimilating GOES-derived skin temperature tendencies and insolation into the surface energy budget equation of a mesoscale model so that the simulated rate of temperature change closely agrees with the satellite observations. A critical assumption of the technique is that the availability of moisture (either from the soil or vegetation) is the least known term in the model's surface energy budget. Therefore, the simulated latent heat flux, which is a function of surface moisture availability, is adjusted based upon differences between the modeled and satellite-observed skin temperature tendencies. An advantage of this technique is that satellite temperature tendencies are assimilated in an energetically consistent manner that avoids the energy imbalances and surface stability problems that arise from direct assimilation of surface shelter temperatures. The fact that the rate of change of the satellite skin temperature is used, rather than the absolute temperature, means that sensor calibration is not as critical. The focus of this paper is to examine how the satellite assimilation technique impacts simulations of near-surface meteorology on the 0- to 12-hour time scale when implemented within a local rapid update cycle (LRUC) format. The PSU/NCAR MM5 V34 is used and configured with a 36-km CONUS domain and a 12-km nest centered over the southeastern US. The LRUC format consists of a sequence of 12-hour forecasts initialized every hour between 12 and 18 UTC, seven days a week. GOES skin temperature tendencies and solar insolation are assimilated in a 1-hour period prior to the start of each 12-hour forecast. A unique aspect of the LRUC is the satellite assimilation and the continuous recycling of the adjusted moisture availability field from one forecast cycle to the next. Preliminary results for a seven-day trial period indicate that assimilating hourly LST tendencies in a 1-hour LRUC improved simulated air and dewpoint temperatures for all cycles on each day. The LRUC will be used during the 2001 summer months to identify the impact of the assimilation on warm season QPF. Results will be presented at the meeting.
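
    A heavily simplified, hypothetical sketch of the adjustment idea: moisture availability is nudged in proportion to the mismatch between modeled and satellite-observed skin-temperature tendencies, so the correction acts through the latent heat flux rather than directly on temperature. The gain and numbers are invented.

    ```python
    def adjust_moisture_availability(m_avail, dT_model, dT_sat, gain=0.02):
        """If the model warms faster than the satellite observes, evaporation
        is presumably underestimated, so increase moisture availability (and
        vice versa). Clamp to the physical range [0, 1]."""
        m_new = m_avail + gain * (dT_model - dT_sat)   # K/h mismatch -> nudge
        return min(max(m_new, 0.0), 1.0)

    # Example: model skin temperature rises 2.5 K/h, satellite shows 1.8 K/h.
    m = 0.3
    for _ in range(5):                                 # pseudo assimilation cycles
        m = adjust_moisture_availability(m, dT_model=2.5, dT_sat=1.8)
    print(m)  # moisture availability increased, boosting latent heat flux
    ```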

  13. General Separations Area (GSA) Groundwater Flow Model Update: Hydrostratigraphic Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagwell, L.; Bennett, P.; Flach, G.

    2017-02-21

    This document describes the assembly, selection, and interpretation of hydrostratigraphic data for input to an updated groundwater flow model for the General Separations Area (GSA; Figure 1) at the Department of Energy’s (DOE) Savannah River Site (SRS). This report is one of several discrete but interrelated tasks that support development of an updated groundwater model (Bagwell and Flach, 2016).

  14. Combining Static Analysis and Model Checking for Software Analysis

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2003-01-01

    We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial-order information, which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial-order reduction. At each step of this iterative process, the static analysis computes optimistic information which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point, at which time the partial-order information is safe and the whole state space is explored.
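
    A schematic, hypothetical sketch of the iteration described above (the real tool exchanges partial-order and aliasing information between a model checker and a static analyzer; here both are stand-in functions returning finite sets, so termination at a fixed point is easy to see):

    ```python
    def static_analysis(aliases):
        # Stand-in: partial-order info grows monotonically with alias knowledge.
        return {("t1", "t2")} | {("t%d" % i, "t3") for i in range(len(aliases))}

    def model_check(partial_order):
        # Stand-in: exploring with the given reduction discovers more aliases.
        return {"p%d" % i for i in range(min(len(partial_order) + 1, 4))}

    aliases, partial_order = set(), set()
    while True:
        new_po = static_analysis(aliases)      # refine reduction from aliases
        new_aliases = model_check(new_po)      # explore, gathering alias info
        if new_po == partial_order and new_aliases == aliases:
            break                              # fixed point: reduction now safe
        partial_order, aliases = new_po, new_aliases
    print(len(partial_order), len(aliases))
    ```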

  15. Test and analysis procedures for updating math models of Space Shuttle payloads

    NASA Technical Reports Server (NTRS)

    Craig, Roy R., Jr.

    1991-01-01

    Over the next decade or more, the Space Shuttle will continue to be the primary transportation system for delivering payloads to Earth orbit. Although a number of payloads have already been successfully carried by the Space Shuttle in the payload bay of the Orbiter vehicle, there continues to be a need for evaluation of the procedures used for verifying and updating the math models of the payloads. The verified payload math model is combined with an Orbiter math model for the coupled-loads analysis, which is required before any payload can fly. Several test procedures were employed to obtain data for use in verifying and updating the payload math models. Research was directed at the evaluation of test/update procedures for use in the verification of Space Shuttle payload math models. The following research tasks are summarized: (1) a study of free-interface test procedures; (2) a literature survey and evaluation of model update procedures; and (3) the design and construction of a laboratory payload simulator.

  16. Mourning dove hunting regulation strategy based on annual harvest statistics and banding data

    USGS Publications Warehouse

    Otis, D.L.

    2006-01-01

    Although managers should strive to base game bird harvest management strategies on mechanistic population models, the monitoring programs required to build and continuously update these models may not be in place. Alternatively, if estimates of total harvest and harvest rates are available, then population estimates derived from these harvest data can serve as the basis for making hunting regulation decisions based on the population growth rates derived from these estimates. I present a statistically rigorous approach for regulation decision-making using a hypothesis-testing framework and an assumed framework of 3 hunting regulation alternatives. I illustrate and evaluate the technique with historical data on the mid-continent mallard (Anas platyrhynchos) population. I evaluate the statistical properties of the hypothesis-testing framework using the best available data on mourning doves (Zenaida macroura). I use these results to discuss practical implementation of the technique as an interim harvest strategy for mourning doves until reliable mechanistic population models and associated monitoring programs are developed.

  17. Build-up Approach to Updating the Mock Quiet Spike™ Beam Model

    NASA Technical Reports Server (NTRS)

    Herrera, Claudia Y.; Pak, Chan-gi

    2007-01-01

    A crucial part of aircraft design is ensuring that the required margin for flutter is satisfied. A trustworthy flutter analysis, which begins by possessing an accurate dynamics model, is necessary for this task. Traditionally, a model was updated manually by fine-tuning specific stiffness parameters until the analytical results matched test data. This is a time-consuming iterative process. NASA Dryden Flight Research Center has developed a mode matching code to execute this process in a more efficient manner. Recently, this code was implemented in the F-15B/Quiet Spike™ (Gulfstream Aerospace Corporation, Savannah, Georgia) model update. A build-up approach requiring several ground vibration test configurations and a series of model updates was implemented in order to determine the connection stiffness between aircraft and test article. The mode matching code successfully updated various models for the F-15B/Quiet Spike™ project to within 1 percent error in frequency, and the modal assurance criteria values ranged from 88.51 to 99.42 percent.

  19. Updates to the Demographic and Spatial Allocation Models to ...

    EPA Pesticide Factsheets

    EPA announced the availability of the draft report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS), for a 30-day public comment period. The ICLUS version 2 (v2) modeling tool furthered land change modeling by providing nationwide housing development scenarios up to 2100. ICLUS v2 includes updated population and land use data sets and addresses limitations identified in ICLUS v1 in both the migration and spatial allocation models. The companion user guide describes the development of ICLUS v2 and the updates that were made to the original data sets and the demographic and spatial allocation models. [2017 UPDATE] Get the latest version of ICLUS and stay up to date by signing up to the ICLUS mailing list. The GIS tool enables users to run SERGoM with the population projections developed for the ICLUS project and allows users to modify the spatial allocation of housing density across the landscape.

  20. BOADICEA breast cancer risk prediction model: updates to cancer incidences, tumour pathology and web interface

    PubMed Central

    Lee, A J; Cunningham, A P; Kuchenbaecker, K B; Mavaddat, N; Easton, D F; Antoniou, A C

    2014-01-01

    Background: The Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm (BOADICEA) is a risk prediction model that is used to compute probabilities of carrying mutations in the high-risk breast and ovarian cancer susceptibility genes BRCA1 and BRCA2, and to estimate the future risks of developing breast or ovarian cancer. In this paper, we describe updates to the BOADICEA model that extend its capabilities, make it easier to use in a clinical setting and yield more accurate predictions. Methods: We describe: (1) updates to the statistical model to include cancer incidences from multiple populations; (2) updates to the distributions of tumour pathology characteristics using new data on BRCA1 and BRCA2 mutation carriers and women with breast cancer from the general population; (3) improvements to the computational efficiency of the algorithm so that risk calculations now run substantially faster; and (4) updates to the model's web interface to accommodate these new features and to make it easier to use in a clinical setting. Results: We present results derived using the updated model, and demonstrate that the changes have a significant impact on risk predictions. Conclusion: All updates have been implemented in a new version of the BOADICEA web interface that is now available for general use: http://ccge.medschl.cam.ac.uk/boadicea/. PMID:24346285

  1. A Modified Normalization Technique for Frequency-Domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Hwang, J.; Jeong, G.; Min, D. J.; KIM, S.; Heo, J. Y.

    2016-12-01

    Full waveform inversion (FWI) is a technique to estimate subsurface material properties by minimizing a misfit function built from the residuals between field and modeled data. To achieve computational efficiency, FWI has been performed in the frequency domain by carrying out modeling in the frequency domain, whereas the observed data (time series) are Fourier-transformed. One of the main drawbacks of seismic FWI is that it easily gets stuck in local minima because of the lack of low-frequency data. To compensate for this limitation, damped wavefields are used, as in Laplace-domain waveform inversion. Using damped wavefields in FWI plays a role in generating low-frequency components and helps recover long-wavelength structures. With these newly generated low-frequency components, we propose a modified frequency-normalization technique, which has the effect of boosting the contribution of low-frequency components to the model parameter update. In this study, we introduce this modified frequency-normalization technique, which effectively amplifies the low-frequency components of damped wavefields. Our method is demonstrated on synthetic data for the SEG/EAGE salt model. Acknowledgements: This work was supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Trade, Industry & Energy (MOTIE) of the Republic of Korea (No. 20168510030830) and by the Dual Use Technology Program, with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea.
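
    A toy illustration (our own construction, not necessarily the authors' formulation) of why per-frequency normalization boosts low frequencies: each frequency's gradient contribution is divided by its own RMS amplitude before stacking, so the naturally weak low-frequency components of a damped wavefield carry equal weight. The spectra here are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_freq, n_model = 32, 500

    # Synthetic per-frequency gradient contributions: low frequencies are weak,
    # as happens with damped (Laplace-type) wavefields.
    freq_weight = np.linspace(0.05, 1.0, n_freq)[:, None]
    grad_f = freq_weight * rng.normal(size=(n_freq, n_model))

    # Plain stack: dominated by the high frequencies.
    g_plain = grad_f.sum(axis=0)

    # Frequency-normalized stack: divide each frequency's contribution by its
    # RMS amplitude before summing, boosting the low-frequency components.
    rms = np.sqrt((grad_f ** 2).mean(axis=1, keepdims=True)) + 1e-12
    g_norm = (grad_f / rms).sum(axis=0)

    print(np.abs(g_plain).mean(), np.abs(g_norm).mean())
    ```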

  2. Belgian guidelines for economic evaluations: second edition.

    PubMed

    Thiry, Nancy; Neyt, Mattias; Van De Sande, Stefaan; Cleemput, Irina

    2014-12-01

    The aim of this study was to present the updated methodological guidelines for economic evaluations of healthcare interventions (drugs, medical devices, and other interventions) in Belgium. The update of the guidelines was performed by three Belgian health economists following feedback from users of the former guidelines and personal experience. The updated guidelines were discussed with a multidisciplinary team consisting of other health economists, assessors of reimbursement request files, representatives of Belgian databases and representatives of the drugs and medical devices industry. The final document was validated by three external validators that were not involved in the previous discussions. The guidelines give methodological guidance for the following components of an economic evaluation: literature review, perspective of the evaluation, definition of the target population, choice of the comparator, analytic technique and study design, calculation of costs, valuation of outcomes, definition of the time horizon, modeling, handling uncertainty and discounting. We present a reference case that can be considered as the minimal requirement for Belgian economic evaluations of health interventions. These guidelines will improve the methodological quality, transparency and uniformity of the economic evaluations performed in Belgium. The guidelines will also provide support to the researchers and assessors performing or evaluating economic evaluations.

  3. Development of updated specifications for roadway rehabilitation techniques.

    DOT National Transportation Integrated Search

    2011-05-01

    As our nation's highway system continues to age, asphalt maintenance and rehabilitation techniques have become increasingly important. The deterioration of pavement over time is inevitable. Preventive maintenance is a strategy to extend the service...

  4. PID techniques: Alternatives to RICH methods

    DOE PAGES

    Va’vra, J.

    2017-07-05

    In this review article we discuss new updates on PID techniques other than the Cherenkov method. In particular, we discuss recent efforts to develop high-resolution timing, placing an emphasis on small-scale test results.

  5. Dissociable effects of surprise and model update in parietal and anterior cingulate cortex

    PubMed Central

    O’Reilly, Jill X.; Schüffelgen, Urs; Cuell, Steven F.; Behrens, Timothy E. J.; Mars, Rogier B.; Rushworth, Matthew F. S.

    2013-01-01

    Brains use predictive models to facilitate the processing of expected stimuli or planned actions. Under a predictive model, surprising (low-probability) stimuli or actions necessitate the immediate reallocation of processing resources, but they can also signal the need to update the underlying predictive model to reflect changes in the environment. Surprise and updating are often correlated in experimental paradigms but are, in fact, distinct constructs that can be formally defined as the Shannon information (I_S) and Kullback–Leibler divergence (D_KL) associated with an observation. In a saccadic planning task, we observed that distinct behaviors and brain regions are associated with surprise/I_S and updating/D_KL. Although surprise/I_S was associated with behavioral reprogramming as indexed by slower reaction times, as well as with activity in the posterior parietal cortex [human lateral intraparietal area (LIP)], the anterior cingulate cortex (ACC) was specifically activated during updating of the predictive model (D_KL). A second saccade-sensitive region in the inferior posterior parietal cortex (human 7a), which has connections to both LIP and ACC, was activated by surprise and modulated by updating. Pupillometry revealed a further dissociation between surprise and updating, with an early positive effect of surprise and a late negative effect of updating on pupil area. These results give a computational account of the roles of the ACC and two parietal saccade regions, LIP and 7a, by which their involvement in diverse tasks can be understood mechanistically. The dissociation of functional roles between regions within the reorienting/reprogramming network may also inform models of neurological phenomena such as extinction, Balint syndrome, and neglect. PMID:23986499
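
    The two constructs can be written down directly. A minimal sketch with an invented three-alternative belief distribution (I_S is the surprise of one observed outcome; D_KL measures how much the belief distribution itself changed):

    ```python
    import numpy as np

    def shannon_information(p_obs):
        """Surprise I_S = -log p of the observed outcome."""
        return -np.log(p_obs)

    def kl_divergence(p_new, p_old):
        """Model update D_KL(new || old) between belief distributions."""
        p_new, p_old = np.asarray(p_new), np.asarray(p_old)
        return float(np.sum(p_new * np.log(p_new / p_old)))

    prior = [0.7, 0.2, 0.1]            # belief over three target locations
    posterior = [0.2, 0.6, 0.2]        # belief after an observation

    print(shannon_information(prior[1]))      # surprise of a p = 0.2 outcome
    print(kl_divergence(posterior, prior))    # size of the belief update
    ```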

  6. An update on oral hygiene products and techniques.

    PubMed

    Laing, Emma; Ashley, Paul; Gill, Daljit; Naini, Farhad

    2008-05-01

    The aim of this article is to update the reader on oral hygiene products and techniques. The evidence relating to the range of toothbrushing, interdental cleaning products and chemotherapeutic agents currently on the market will be discussed. It will be seen that choice of many of the oral hygiene products currently on the market is still largely a matter of personal preference. An inadequate oral hygiene regime may lead to caries and periodontal disease. It is important for clinicians to be able to recommend a preventive programme for dental and periodontal health that is supported by high quality, evidence-based clinical research.

  7. Surgical treatment of osteoporotic fractures: An update on the principles of management.

    PubMed

    Yaacobi, Eyal; Sanchez, Daniela; Maniar, Hemil; Horwitz, Daniel S

    2017-12-01

    The treatment of osteoporotic fractures continues to challenge orthopedic surgeons. The fragility of the underlying bone, in conjunction with the need for specific implants, led to the development of explicit surgical techniques in order to minimize implant-failure-related complications, morbidity and mortality. From the patient's perspective, the presence of frailty, dementia and other medical co-morbidities creates a complex situation necessitating high vigilance during the perioperative and post-operative period. This update reviews current principles and techniques essential to the successful surgical treatment of these injuries. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Chemical transport model simulations of organic aerosol in ...

    EPA Pesticide Factsheets

    Gasoline- and diesel-fueled engines are ubiquitous sources of air pollution in urban environments. They emit both primary particulate matter and precursor gases that react to form secondary particulate matter in the atmosphere. In this work, we updated the organic aerosol module and organic emissions inventory of a three-dimensional chemical transport model, the Community Multiscale Air Quality Model (CMAQ), using recent, experimentally derived inputs and parameterizations for mobile sources. The updated model included a revised volatile organic compound (VOC) speciation for mobile sources and secondary organic aerosol (SOA) formation from unspeciated intermediate-volatility organic compounds (IVOCs). The updated model was used to simulate air quality in southern California during May and June 2010, when the California Research at the Nexus of Air Quality and Climate Change (CalNex) study was conducted. Compared to the Traditional version of CMAQ, which is commonly used for regulatory applications, the updated model did not significantly alter the predicted organic aerosol (OA) mass concentrations but did substantially improve predictions of OA sources and composition (e.g., the POA–SOA split), as well as ambient IVOC concentrations. The updated model, despite substantial differences in emissions and chemistry, performed similarly to a recently released research version of CMAQ (Woody et al., 2016) that did not include the updated VOC and IVOC emissions and SOA data

  9. Insights on multivariate updates of physical and biogeochemical ocean variables using an Ensemble Kalman Filter and an idealized model of upwelling

    NASA Astrophysics Data System (ADS)

    Yu, Liuqian; Fennel, Katja; Bertino, Laurent; Gharamti, Mohamad El; Thompson, Keith R.

    2018-06-01

    Effective data assimilation methods for incorporating observations into marine biogeochemical models are required to improve hindcasts, nowcasts and forecasts of the ocean's biogeochemical state. Recent assimilation efforts have shown that updating model physics alone can degrade biogeochemical fields while only updating biogeochemical variables may not improve a model's predictive skill when the physical fields are inaccurate. Here we systematically investigate whether multivariate updates of physical and biogeochemical model states are superior to only updating either physical or biogeochemical variables. We conducted a series of twin experiments in an idealized ocean channel that experiences wind-driven upwelling. The forecast model was forced with biased wind stress and perturbed biogeochemical model parameters compared to the model run representing the "truth". Taking advantage of the multivariate nature of the deterministic Ensemble Kalman Filter (DEnKF), we assimilated different combinations of synthetic physical (sea surface height, sea surface temperature and temperature profiles) and biogeochemical (surface chlorophyll and nitrate profiles) observations. We show that when biogeochemical and physical properties are highly correlated (e.g., thermocline and nutricline), multivariate updates of both are essential for improving model skill and can be accomplished by assimilating either physical (e.g., temperature profiles) or biogeochemical (e.g., nutrient profiles) observations. In our idealized domain, the improvement is largely due to a better representation of nutrient upwelling, which results in a more accurate nutrient input into the euphotic zone. In contrast, assimilating surface chlorophyll improves the model state only slightly, because surface chlorophyll contains little information about the vertical density structure. We also show that a degradation of the correlation between observed subsurface temperature and nutrient fields, which has been an issue in several previous assimilation studies, can be reduced by multivariate updates of physical and biogeochemical fields.
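
    A minimal sketch of the kind of multivariate ensemble update involved (a generic stochastic EnKF analysis step on a toy two-variable state, not the DEnKF as implemented in the paper): observing only the physical variable updates the biogeochemical one through the ensemble cross-covariance.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_ens = 100

    # Toy joint state: temperature (observed) and nutrient (unobserved) are
    # anticorrelated across the ensemble, e.g. via upwelling of cold,
    # nutrient-rich water.
    temp = 15.0 + rng.normal(0, 1.0, n_ens)
    nutr = 5.0 - 1.5 * (temp - 15.0) + rng.normal(0, 0.3, n_ens)
    X = np.vstack([temp, nutr])                    # state ensemble (2, n_ens)

    H = np.array([[1.0, 0.0]])                     # observe temperature only
    obs, obs_var = 13.5, 0.2 ** 2

    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                      # sample covariance (2x2)
    K = P @ H.T / (H @ P @ H.T + obs_var)          # Kalman gain (2x1)

    # Stochastic EnKF: perturb the observation for each member.
    y_pert = obs + rng.normal(0, np.sqrt(obs_var), n_ens)
    X_a = X + K @ (y_pert - H @ X).reshape(1, -1)

    print(X.mean(axis=1), "->", X_a.mean(axis=1))  # nutrient rises as temp drops
    ```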

  10. Model updating in flexible-link multibody systems

    NASA Astrophysics Data System (ADS)

    Belotti, R.; Caneva, G.; Palomba, I.; Richiedei, D.; Trevisani, A.

    2016-09-01

    The dynamic response of flexible-link multibody systems (FLMSs) can be predicted through nonlinear models based on finite elements that describe the coupling between rigid-body and elastic behaviour. Their accuracy should be as high as possible to synthesize controllers and observers. Model updating based on experimental measurements is hence necessary. By taking advantage of experimental modal analysis, this work proposes a model updating procedure for FLMSs and applies it experimentally to a planar robot. Indeed, several peculiarities of FLMS models should be carefully tackled. On the one hand, nonlinear models of an FLMS should be linearized about static equilibrium configurations. On the other, the experimental mode shapes should be corrected to be consistent with the elastic displacements represented in the model, which are defined with respect to a fictitious moving reference (the equivalent rigid-link system). Then, since rotational degrees of freedom are also represented in the model, interpolation of the experimental data should be performed to match the model displacement vector. Model updating is finally cast as an optimization problem in the presence of bounds on the feasible values, also adopting methods to improve the numerical conditioning and to compute meaningful updated inertial and elastic parameters.

  11. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    NASA Technical Reports Server (NTRS)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.

  12. Valence-Dependent Belief Updating: Computational Validation

    PubMed Central

    Kuzmanovic, Bojana; Rigoux, Lionel

    2017-01-01

    People tend to update beliefs about their future outcomes in a valence-dependent way: they are likely to incorporate good news and to neglect bad news. However, belief formation is a complex process which depends not only on motivational factors such as the desire for favorable conclusions, but also on multiple cognitive variables such as prior beliefs, knowledge about personal vulnerabilities and resources, and the size of the probabilities and estimation errors. Thus, we applied computational modeling in order to test for valence-induced biases in updating while formally controlling for relevant cognitive factors. We compared biased and unbiased Bayesian models of belief updating, and specified alternative models based on reinforcement learning. The experiment consisted of 80 trials with 80 different adverse future life events. In each trial, participants estimated the base rate of one of these events and estimated their own risk of experiencing the event before and after being confronted with the actual base rate. Belief updates corresponded to the difference between the two self-risk estimates. Valence-dependent updating was assessed by comparing trials with good news (better-than-expected base rates) with trials with bad news (worse-than-expected base rates). After receiving bad relative to good news, participants' updates were smaller and deviated more strongly from rational Bayesian predictions, indicating a valence-induced bias. Model comparison revealed that the biased (i.e., optimistic) Bayesian model of belief updating better accounted for data than the unbiased (i.e., rational) Bayesian model, confirming that the valence of the new information influenced the amount of updating. Moreover, alternative computational modeling based on reinforcement learning demonstrated higher learning rates for good than for bad news, as well as a moderating role of personal knowledge. Finally, in this specific experimental context, the approach based on reinforcement learning was superior to the Bayesian approach. The computational validation of valence-dependent belief updating represents a novel support for a genuine optimism bias in human belief formation. Moreover, the precise control of relevant cognitive variables justifies the conclusion that the motivation to adopt the most favorable self-referential conclusions biases human judgments. PMID:28706499
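
    The reinforcement-learning account can be illustrated with a tiny sketch: a risk estimate is nudged toward the estimation error with a learning rate that depends on whether the news is good or bad. The rates and numbers are invented; this is not the authors' fitted model.

    ```python
    def update_risk(self_risk, base_rate_expected, base_rate_actual,
                    lr_good=0.6, lr_bad=0.3):
        """Valence-dependent update: the estimation error is scaled by a
        larger learning rate for good news (lower-than-expected base rate)
        than for bad news."""
        error = base_rate_actual - base_rate_expected
        lr = lr_good if error < 0 else lr_bad       # good news: risk goes down
        return self_risk + lr * error

    # Good news: actual base rate (20%) lower than expected (30%).
    print(update_risk(40.0, 30.0, 20.0))   # 34.0 -> large downward update
    # Bad news of the same magnitude: smaller upward update.
    print(update_risk(40.0, 30.0, 40.0))   # 43.0
    ```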

  14. Attentional focus affects how events are segmented and updated in narrative reading.

    PubMed

    Bailey, Heather R; Kurby, Christopher A; Sargent, Jesse Q; Zacks, Jeffrey M

    2017-08-01

    Readers generate situation models representing described events, but the nature of these representations may differ depending on the reading goals. We assessed whether instructions to pay attention to different situational dimensions affect how individuals structure their situation models (Exp. 1) and how they update these models when situations change (Exp. 2). In Experiment 1, participants read and segmented narrative texts into events. Some readers were oriented to pay specific attention to characters or space. Sentences containing character or spatial-location changes were perceived as event boundaries, particularly if the reader was oriented to characters or space, respectively. In Experiment 2, participants read narratives and responded to recognition probes throughout the texts. Readers who were oriented to the spatial dimension were more likely to update their situation models at spatial changes; all readers tracked the character dimension. The results from both experiments indicated that attention to individual situational dimensions influences how readers segment and update their situation models. More broadly, the results provide evidence for a global situation-model updating mechanism that serves to set up new models at important narrative changes.

  15. A comparison of several techniques for imputing tree level data

    Treesearch

    David Gartner

    2002-01-01

    As Forest Inventory and Analysis (FIA) changes from periodic surveys to the multipanel annual survey, new analytical methods become available. The current official statistic is the moving average. One alternative is an updated moving average. Several methods of updating plot per acre volume have been discussed previously. However, these methods may not be appropriate...

  16. Updating Assessment Styles: Website Development Rather than Report Writing for Project Based Learning Courses

    ERIC Educational Resources Information Center

    Brown, Nicola

    2017-01-01

    While teaching methods tend to be updated frequently, the implementation of new innovative assessment tools is much slower. For example project based learning has become popular as a teaching technique, however, the assessment tends to be via traditional reports. This paper reports on the implementation and evaluation of using website development…

  17. Evaluation of the groundwater-flow model for the Ohio River alluvial aquifer near Carrollton, Kentucky, updated to conditions in September 2010

    USGS Publications Warehouse

    Unthank, Michael D.

    2013-01-01

    The Ohio River alluvial aquifer near Carrollton, Ky., is an important water resource for the cities of Carrollton and Ghent, as well as for several industries in the area. The groundwater of the aquifer is the primary source of drinking water in the region and a highly valued natural resource that attracts various water-dependent industries because of its quantity and quality. This report evaluates the performance of a numerical model of the groundwater-flow system in the Ohio River alluvial aquifer near Carrollton, Ky., published by the U.S. Geological Survey in 1999. The original model simulated conditions in November 1995 and was updated to simulate groundwater conditions estimated for September 2010. The files from the calibrated steady-state model of November 1995 conditions were imported into MODFLOW-2005 to update the model to conditions in September 2010. The model input files modified as part of this update were the well and recharge files; the design of the updated model and the other input files are the same as in the original model. The ability of the updated model to match hydrologic conditions for September 2010 was evaluated by comparing water levels measured in wells to those computed by the model. Water-level measurements were available for 48 wells in September 2010. Overall, the updated model underestimated the water levels at 36 of the 48 measured wells. The average difference between measured and model-computed water levels was 3.4 feet, and the maximum difference was 10.9 feet. The root-mean-square error of the simulation was 4.45 feet over all 48 measured water levels. The updated steady-state model could be improved by introducing more accurate and site-specific estimates of selected field parameters, refined model geometry, and additional numerical methods. Collection of field data to better estimate hydraulic parameters, together with continued review of available data and information from area well operators, could provide the model with revised estimates of conductance values for the riverbed and valley wall, hydraulic conductivities for the model layer, and target water levels for future simulations. Additional model layers, a redesigned model grid, and revised boundary conditions could provide a better framework for more accurate simulations. Additional numerical methods would help identify possible parameter estimates and determine parameter sensitivities.

  18. Computer animation challenges for computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Vines, Mauricio; Lee, Won-Sook; Mavriplis, Catherine

    2012-07-01

    Computer animation requirements differ from those of traditional computational fluid dynamics (CFD) investigations in that visual plausibility and rapid frame update rates trump physical accuracy. We present an overview of the main techniques for fluid simulation in computer animation, starting with Eulerian grid approaches, the Lattice Boltzmann method, Fourier transform techniques and Lagrangian particle introduction. Adaptive grid methods, precomputation of results for model reduction, parallelisation and computation on graphical processing units (GPUs) are reviewed in the context of accelerating simulation computations for animation. A survey of current specific approaches for the application of these techniques to the simulation of smoke, fire, water, bubbles, mixing, phase change and solid-fluid coupling is also included. Adding plausibility to results through particle introduction, turbulence detail and concentration on regions of interest by level set techniques has elevated the degree of accuracy and realism of recent animations. Basic approaches are described here. Techniques to control the simulation to produce a desired visual effect are also discussed. Finally, some references to rendering techniques and haptic applications are mentioned to provide the reader with a complete picture of the challenges of simulating fluids in computer animation.

  19. The influence of surface finishing methods on touch-sensitive reactions

    NASA Astrophysics Data System (ADS)

    Kukhta, M. S.; Sokolov, A. P.; Krauinsh, P. Y.; Kozlova, A. D.; Bouchard, C.

    2017-02-01

    This paper describes modern technological development trends in jewelry design. In the jewelry industry, new trends associated with the introduction of non-traditional materials and updated finishing techniques are appearing. Today's information-oriented society heightens attention to the visual aesthetics of new jewelry forms, decoration techniques (depth and surface) and the synthesis of different materials, which together reveal a bias towards the positive effects of visual design. The jewelry industry now includes not only traditional techniques, but also improved techniques such as computer-assisted design, 3D prototyping and other alternatives that bring jewelry material processing to an updated level. The authors present the specific features of ornamental pattern design, decoration types (depth and surface) and a comparative analysis of different approaches to surface finishing. The appearance and effect of a jewelry piece are identified on the basis of proposed evaluation criteria, with touch-sensitive responses providing the basis for its advanced visual aesthetics.

  20. Performance evaluation of the atmospheric phase of aeromaneuvering orbital transfer vehicles

    NASA Technical Reports Server (NTRS)

    Powell, R. W.; Stone, H. W.; Naftel, J. C.

    1984-01-01

    Studies are underway to design reusable orbital transfer vehicles that would be used to transfer payloads from low-Earth orbit to higher orbits and return. One promising concept is to use an atmospheric pass on the return leg to reduce the amount of fuel required for the mission. This paper discusses a six-degree-of-freedom simulation analysis of two configurations, a low-lift-to-drag-ratio configuration and a medium-lift-to-drag-ratio configuration, using both a predictive guidance technique and an adaptive guidance technique. Both guidance schemes were evaluated using the 1962 standard atmosphere and three atmospheres that had been derived from three entries of the Space Shuttle. The predictive technique requires less reaction control system activity for both configurations, but because of the limited number of updates and because each update used the 1962 standard atmosphere, the adaptive technique produces more accurate exit conditions.

  1. Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konakli, Katerina, E-mail: konakli@ibk.baug.ethz.ch; Sudret, Bruno

    2016-09-15

    The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor–product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients and error estimation. In the sequel, we confront canonical LRA with sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input dimension, a situation that is often encountered in real-life problems. By introducing the conditional generalization error, we further demonstrate that canonical LRA tend to outperform sparse PCE in the prediction of extreme model responses, which is critical in reliability analysis.
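
    To make the "highly compressed format" concrete, here is a small sketch that evaluates a canonical low-rank representation: a weighted sum of rank-one terms, each a product of univariate Legendre polynomials in the separate inputs. The rank, degree, and coefficients are invented, and the greedy coefficient-updating algorithm itself is omitted.

    ```python
    import numpy as np
    from numpy.polynomial.legendre import legval

    # Canonical low-rank approximation: sum of R rank-one terms, each a
    # product of univariate polynomials in the separate input dimensions.
    # Invented coefficients: z[r, i, k] = degree-k Legendre coefficient of
    # the univariate polynomial for rank term r and input dimension i.
    rng = np.random.default_rng(5)
    R, d, degree = 3, 4, 2                       # rank, input dim, poly degree
    z = rng.normal(size=(R, d, degree + 1))
    b = rng.normal(size=R)                       # weights of the rank-one terms

    def lra_eval(x):
        """Evaluate the rank-R canonical representation at a point in [-1,1]^d."""
        total = 0.0
        for r in range(R):
            term = 1.0
            for i in range(d):
                term *= legval(x[i], z[r, i])    # univariate polynomial v_i^(r)
            total += b[r] * term
        return total

    print(lra_eval(rng.uniform(-1, 1, size=d)))
    ```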

  2. The 2006 Kennedy Space Center Range Reference Atmosphere Model Validation Study and Sensitivity Analysis to the Performance of the National Aeronautics and Space Administration's Space Shuttle Vehicle

    NASA Technical Reports Server (NTRS)

    Burns, Lee; Decker, Ryan; Harrington, Brian; Merry, Carl

    2008-01-01

    The Kennedy Space Center (KSC) Range Reference Atmosphere (RRA) is a statistical model that summarizes wind and thermodynamic atmospheric variability from the surface to 70 km. The National Aeronautics and Space Administration's (NASA) Space Shuttle program, which launches from KSC, utilizes the KSC RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the KSC RRA was recently completed. As part of the update, the Natural Environments Branch at NASA's Marshall Space Flight Center (MSFC) conducted a validation study and a comparison analysis against the existing KSC RRA database, version 1983. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed by the JSC Ascent Flight Design Division to determine the impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.

  3. 3D digital image correlation methods for full-field vibration measurement

    NASA Astrophysics Data System (ADS)

    Helfrick, Mark N.; Niezrecki, Christopher; Avitabile, Peter; Schmidt, Timothy

    2011-04-01

    In the area of modal test/analysis/correlation, significant effort has been expended over the past twenty years in order to make reduced models and to expand test data for correlation and eventual updating of the finite element models. This has been restricted by vibration measurements which are traditionally limited to the location of relatively few applied sensors. Advances in computers and digital imaging technology have allowed 3D digital image correlation (DIC) methods to measure the shape and deformation of a vibrating structure. This technique allows for full-field measurement of structural response, thus providing a wealth of simultaneous test data. This paper presents some preliminary results for the test/analysis/correlation of data measured using the DIC approach along with traditional accelerometers and a scanning laser vibrometer for comparison to a finite element model. The results indicate that all three approaches correlated well with the finite element model and provide validation for the DIC approach for full-field vibration measurement. Some of the advantages and limitations of the technique are presented and discussed.

  4. Neurostimulation in clinical and sub-clinical eating disorders: a systematic update of the literature.

    PubMed

    Dalton, Bethan; Bartholdy, Savani; Campbell, Iain C; Schmidt, Ulrike

    2018-01-07

    Whilst psychological therapies are the main approach to treatment of eating disorders (EDs), advances in aetiological research suggest the need for the development of more targeted, brain-focused treatments. A range of neurostimulation approaches, most prominently repetitive transcranial magnetic stimulation (rTMS), transcranial direct current stimulation (tDCS) and deep brain stimulation (DBS), are rapidly emerging as potential novel interventions. We have previously reviewed these techniques as potential treatments of EDs. Our aim here is to provide an update of the literature examining the effects of DBS, rTMS and tDCS on eating behaviours, body weight and associated symptoms in people with EDs and relevant analogue populations. Using PRISMA guidelines, we reviewed articles in PubMed, Web of Science, and PsycINFO from 1st January 2013 until 14th August 2017, to update our earlier search. Studies assessing the effects of neurostimulation techniques on eating- and weight-related outcomes in people with EDs and relevant analogue populations were included. Data from both searches were combined. We included a total of 32 studies (526 participants); of these, 18 were newly identified by our update search. Whilst findings are somewhat mixed for bulimia nervosa, neurostimulation techniques have shown potential in the treatment of other EDs, in terms of reduction of ED and associated symptoms. Studies exploring cognitive, neural, and hormonal correlates of these techniques are also beginning to appear. Neurostimulation approaches show promise as treatments for EDs. As yet, large well-conducted randomised controlled trials are lacking. More information is needed about treatment targets, stimulation parameters and mechanisms of action. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  5. Modeling Gas and Gas Hydrate Accumulation in Marine Sediments Using a K-Nearest Neighbor Machine-Learning Technique

    NASA Astrophysics Data System (ADS)

    Wood, W. T.; Runyan, T. E.; Palmsten, M.; Dale, J.; Crawford, C.

    2016-12-01

    Natural gas (primarily methane) and gas hydrate accumulations require certain bio-geochemical as well as physical conditions, some of which are poorly sampled and/or poorly understood. We exploit recent advances in the prediction of seafloor porosity and heat flux via machine-learning techniques (e.g. random forests and Bayesian networks) to predict the occurrence of gas, and subsequently gas hydrate, in marine sediments. The technique we use for the prediction (actually guided interpolation) of key parameters in this study is K-nearest neighbors (KNN). KNN requires only minimal pre-processing of the data and predictors, and requires minimal run-time input, so the results are almost entirely data-driven. Specifically, we use new estimates of sedimentation rate and sediment type, along with recently derived compaction modeling, to estimate profiles of porosity and age. We combine the compaction with seafloor heat flux to estimate temperature as a function of depth and geologic age, which, with estimates of organic carbon and models of methanogenesis, yields limits on the production of methane. Results include geospatial predictions of gas (and gas hydrate) accumulations, with quantitative estimates of uncertainty. The Generic Earth Modeling System (GEMS) we have developed to derive the machine-learning estimates is modular and easily updated with new algorithms or data.
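
    A minimal sketch of KNN-based guided interpolation of a sparsely sampled seafloor property (all data here are synthetic; the real study uses geospatial predictors such as sedimentation rate and sediment type):

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(4)

    # Synthetic sparsely sampled property, e.g. heat flux, measured at a few
    # sites described by predictor variables (here: lon, lat, water depth).
    X_obs = rng.uniform([0, -60, 200], [360, 60, 6000], size=(300, 3))
    y_obs = 50 + 0.01 * X_obs[:, 2] + 5 * rng.normal(size=300)

    # Distance-weighted KNN: nearby (in predictor space) samples dominate.
    knn = KNeighborsRegressor(n_neighbors=8, weights="distance")
    knn.fit(X_obs, y_obs)

    # Predict at unsampled locations.
    X_new = rng.uniform([0, -60, 200], [360, 60, 6000], size=(5, 3))
    print(knn.predict(X_new))
    ```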

  6. Key algorithms used in GR02: A computer simulation model for predicting tree and stand growth

    Treesearch

    Garrett A. Hughes; Paul E. Sendak; Paul E. Sendak

    1985-01-01

    GR02 is an individual tree, distance-independent simulation model for predicting tree and stand growth over time. It performs five major functions during each run: (1) updates diameter at breast height, (2) updates total height, (3) estimates mortality, (4) determines regeneration, and (5) updates crown class.

  7. MO-AB-BRA-09: Development and Evaluation of a Biomechanical Modeling-Assisted CBCT Reconstruction Technique (Bio-Recon)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y; Nasehi Tehrani, J; Wang, J

    Purpose: To develop a Bio-recon technique by incorporating the biomechanical properties of anatomical structures into the deformation-based CBCT reconstruction process. Methods: Bio-recon reconstructs the CBCT by deforming a prior high-quality CT/CBCT using a deformation-vector-field (DVF). The DVF is solved through two alternating steps: 2D–3D deformation and finite-element-analysis based biomechanical modeling. 2D–3D deformation optimizes the DVF through an ‘intensity-driven’ approach, which updates the DVF to minimize intensity mismatches between the acquired projections and the simulated projections from the deformed CBCT. In contrast, biomechanical modeling optimizes the DVF through a ‘biomechanical-feature-driven’ approach, which updates the DVF based on the biophysical properties of anatomical structures. In general, Bio-recon extracts the 2D–3D deformation-optimized DVF at high-contrast structure boundaries, and uses it as the boundary condition to drive biomechanical modeling to optimize the overall DVF, especially at low-contrast regions. The optimized DVF is fed back into the 2D–3D deformation for further optimization, which forms an iterative loop. The efficacy of Bio-recon was evaluated on 11 lung patient cases, each with a prior CT and a new CT. Cone-beam projections were generated from the new CTs to reconstruct CBCTs, which were compared with the original new CTs for evaluation. 872 anatomical landmarks were also manually identified by a clinician on both the prior and new CTs to track the lung motion, which was used to evaluate the DVF accuracy. Results: Using 10 projections for reconstruction, the average (± s.d.) relative errors of reconstructed CBCTs by the clinical FDK technique, the 2D–3D deformation-only technique and Bio-recon were 46.5±5.9%, 12.0±2.3% and 10.4±1.3%, respectively. The average residual errors of DVF-tracked landmark motion by the 2D–3D deformation-only technique and Bio-recon were 5.6±4.3mm and 3.1±2.4mm, respectively. Conclusion: Bio-recon improved accuracy for both the reconstructed CBCT and the DVF. The accurate DVF can benefit multiple clinical practices, such as image-guided adaptive radiotherapy. We acknowledge funding support from the American Cancer Society (RSG-13-326-01-CCE), from the US National Institutes of Health (R01 EB020366), and from the Cancer Prevention and Research Institute of Texas (RP130109).
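
    The alternating loop can be illustrated with a deliberately simplified, runnable 1-D toy in which the forward projector is a blur, the "deformation" acts additively on intensities, and the finite-element step is replaced by boundary-constrained smoothing; every operator here is a stand-in for the paper's actual components.

    ```python
    import numpy as np

    def forward_project(image):
        return np.convolve(image, np.ones(5) / 5, mode="same")  # toy projector

    def bio_recon(prior, measured, boundary_mask, n_iter=30, lr=0.5):
        dvf = np.zeros_like(prior)
        for _ in range(n_iter):
            # Intensity-driven step: gradient nudge on the DVF to reduce the
            # projection mismatch of the deformed image (adjoint = same blur)
            residual = forward_project(prior + dvf) - measured
            dvf -= lr * forward_project(residual)
            # Biomechanics-like step: trust the DVF only at high-contrast
            # boundary samples and diffuse it into the low-contrast interior
            smoothed = np.convolve(dvf, np.ones(7) / 7, mode="same")
            dvf = np.where(boundary_mask, dvf, smoothed)
        return prior + dvf

    prior = np.sin(np.linspace(0, 3 * np.pi, 200))
    truth = np.roll(prior, 4)                            # unknown change
    measured = forward_project(truth)
    boundary = np.abs(np.gradient(prior)) > 0.04         # high-contrast samples
    recon = bio_recon(prior, measured, boundary)
    ```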

  8. Building Change Detection from LIDAR Point Cloud Data Based on Connected Component Analysis

    NASA Astrophysics Data System (ADS)

    Awrangjeb, M.; Fraser, C. S.; Lu, G.

    2015-08-01

    Building data are one of the important data types in a topographic database. Building change detection after a period of time is necessary for many applications, such as identification of informal settlements. Based on the detected changes, the database has to be updated to ensure its usefulness. This paper proposes an improved building detection technique, which is a prerequisite for many building change detection techniques. The improved technique examines the gap between neighbouring buildings in the building mask in order to avoid undersegmentation errors. Then, a new building change detection technique from LIDAR point cloud data is proposed. Buildings which are totally new or demolished are directly added to the change detection output. However, for demolished or extended building parts, a connected component analysis algorithm is applied, and for each connected component its area, width and height are estimated in order to ascertain whether it can be considered a demolished or new building part. Finally, a graphical user interface (GUI) has been developed to update detected changes to the existing building map. Experimental results show that the improved building detection technique can offer not only higher performance in terms of completeness and correctness, but also a lower number of undersegmentation errors as compared to its original counterpart. The proposed change detection technique produces no omission errors and thus it can be exploited for enhanced automated building information updating within a topographic database. Using the developed GUI, the user can quickly examine each suggested change and indicate his/her decision with a minimum number of mouse clicks.
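
    The area/width/height screening of connected components described above can be sketched with scipy's labelling tools; the synthetic change mask and the acceptance thresholds below are illustrative, not the paper's values.

    ```python
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(1)
    change_mask = rng.random((200, 200)) > 0.97     # stand-in for detected changes

    labels, n = ndimage.label(change_mask)          # label connected components
    objects = ndimage.find_objects(labels)          # bounding box of each component

    confirmed = []
    for i, sl in enumerate(objects, start=1):
        area = int((labels[sl] == i).sum())                       # pixels in component
        height = sl[0].stop - sl[0].start                         # bounding-box height
        width = sl[1].stop - sl[1].start                          # bounding-box width
        # Keep only components large enough to be a demolished/new building part
        if area >= 10 and width >= 3 and height >= 3:
            confirmed.append(i)
    ```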

  9. User's guide to the MESOI diffusion model and to the utility programs UPDATE and LOGRVU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athey, G.F.; Allwine, K.J.; Ramsdell, J.V.

    MESOI is an interactive, Lagrangian puff trajectory diffusion model. The model is documented separately (Ramsdell and Athey, 1981); this report is intended to provide MESOI users with the information needed to successfully conduct model simulations. The user is also provided with guidance in the use of the data file maintenance and review programs, UPDATE and LOGRVU. Complete examples are given for the operation of all three programs, and an appendix documents UPDATE and LOGRVU.

  10. Evolution of robot-assisted orthotopic ileal neobladder formation: a step-by-step update to the University of Southern California (USC) technique.

    PubMed

    Chopra, Sameer; de Castro Abreu, Andre Luis; Berger, Andre K; Sehgal, Shuchi; Gill, Inderbir; Aron, Monish; Desai, Mihir M

    2017-01-01

    To describe our step-by-step technique for robotic intracorporeal neobladder formation. The main surgical steps in forming the intracorporeal orthotopic ileal neobladder are: isolation of 65 cm of small bowel; small bowel anastomosis; bowel detubularisation; suturing of the posterior wall of the neobladder; neobladder-urethral anastomosis and cross-folding of the pouch; and uretero-enteral anastomosis. Improvements have been made to these steps to enhance time efficiency without compromising neobladder configuration. Our technical improvements have reduced operative time from 450 to 360 min. We describe an updated step-by-step technique of robot-assisted intracorporeal orthotopic ileal neobladder formation. © 2016 The Authors. BJU International © 2016 BJU International. Published by John Wiley & Sons Ltd.

  11. Basis for the ICRP’s updated biokinetic model for carbon inhaled as CO2

    DOE PAGES

    Leggett, Richard W.

    2017-03-02

    Here, the International Commission on Radiological Protection (ICRP) is updating its biokinetic and dosimetric models for occupational intake of radionuclides (OIR) in a series of reports called the OIR series. This paper describes the basis for the ICRP's updated biokinetic model for inhalation of radiocarbon as carbon dioxide (CO2) gas. The updated model is based on biokinetic data for carbon isotopes inhaled as carbon dioxide or injected or ingested as bicarbonate (HCO3-). The data from these studies are expected to apply equally to internally deposited (or internally produced) carbon dioxide and bicarbonate, based on comparison of excretion rates for the two administered forms and the fact that carbon dioxide and bicarbonate are largely carried in a common form (CO2-HCO3-) in blood. Compared with dose estimates based on current ICRP biokinetic models for inhaled carbon dioxide or ingested carbon, the updated model will result in a somewhat higher dose estimate for 14C inhaled as CO2 and a much lower dose estimate for 14C ingested as bicarbonate.

  13. The Construction of Visual-spatial Situation Models in Children's Reading and Their Relation to Reading Comprehension

    PubMed Central

    Barnes, Marcia A.; Raghubar, Kimberly P.; Faulkner, Heather; Denton, Carolyn A.

    2014-01-01

    Readers construct mental models of situations described by text to comprehend what they read, updating these situation models based on explicitly described and inferred information about causal, temporal, and spatial relations. Fluent adult readers update their situation models while reading narrative text based in part on spatial location information that is consistent with the perspective of the protagonist. The current study investigates whether children update spatial situation models in a similar way, whether there are age-related changes in children's formation of spatial situation models during reading, and whether measures of the ability to construct and update spatial situation models are predictive of reading comprehension. Typically-developing children from ages 9 through 16 years (n=81) were familiarized with a physical model of a marketplace. Then the model was covered, and children read stories that described the movement of a protagonist through the marketplace and were administered items requiring memory for both explicitly stated and inferred information about the character's movements. Accuracy of responses and response times were evaluated. Results indicated that: (a) location and object information during reading appeared to be activated and updated not simply from explicit text-based information but from a mental model of the real world situation described by the text; (b) this pattern showed no age-related differences; and (c) the ability to update the situation model of the text based on inferred information, but not explicitly stated information, was uniquely predictive of reading comprehension after accounting for word decoding. PMID:24315376

  14. A groundwater data assimilation application study in the Heihe mid-reach

    NASA Astrophysics Data System (ADS)

    Ragettli, S.; Marti, B. S.; Wolfgang, K.; Li, N.

    2017-12-01

    The present work focuses on modelling of the groundwater flow in the mid-reach of the endorheic river Heihe in the Zhangye oasis (Gansu province) in arid north-west China. In order to optimise water resources management in the oasis, reliable forecasts of groundwater level development under different management options and environmental boundary conditions have to be produced. To this end, groundwater flow is modelled with Modflow and coupled to an Ensemble Kalman Filter programmed in Matlab. The model is updated with monthly time steps, featuring perturbed boundary conditions to account for uncertainty in model forcing. Constant biases between model and observations were corrected prior to updating and compared to model runs without bias correction. Different options for data assimilation (states and/or parameters), updating frequency, and measures against filter inbreeding (damping factor, covariance inflation, spatial localization) have been tested against each other. Results show a high dependency of the Ensemble Kalman Filter performance on the selection of observations for data assimilation. For the present regional model, bias correction is necessary for good filter performance. A combination of spatial localization and covariance inflation is further advisable to reduce filter inbreeding problems. Best performance is achieved if parameter updates are not large, an indication of good prior model calibration. For this groundwater system, whose parameter values change slowly or not at all, asynchronous updating of parameter values once every five years (with data of the past five years) combined with synchronous updating of the groundwater levels is better suited than synchronous updating of both groundwater levels and parameters at every time step with a damping factor. The filter is not able to correct time lags of signals.
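
    A minimal sketch of the kind of EnKF analysis step tested here, including the damping factor and multiplicative covariance inflation mentioned above (spatial localization is omitted); dimensions, the observation operator, and noise levels are illustrative.

    ```python
    import numpy as np

    def enkf_update(ensemble, obs, obs_err_std, H, damping=0.5, inflation=1.05):
        # ensemble: (n_state, n_members); obs: (n_obs,); H: (n_obs, n_state)
        n_members = ensemble.shape[1]
        mean = ensemble.mean(axis=1, keepdims=True)
        anomalies = inflation * (ensemble - mean)   # multiplicative inflation
        ensemble = mean + anomalies                 # counteracts filter inbreeding

        HA = H @ anomalies
        P_xy = anomalies @ HA.T / (n_members - 1)   # state-obs covariance
        P_yy = HA @ HA.T / (n_members - 1)          # obs-obs covariance
        R = obs_err_std**2 * np.eye(len(obs))
        K = P_xy @ np.linalg.inv(P_yy + R)          # Kalman gain

        # Perturbed observations, one realization per ensemble member
        pert = obs_err_std * np.random.standard_normal((len(obs), n_members))
        innovations = obs[:, None] + pert - H @ ensemble
        # The damping factor scales the increment to limit overcorrection
        return ensemble + damping * K @ innovations

    # Toy usage: 50 head values, 30 members, 5 observation wells
    ens = 10.0 + np.random.standard_normal((50, 30))
    H = np.zeros((5, 50))
    H[np.arange(5), [3, 13, 23, 33, 43]] = 1.0
    ens = enkf_update(ens, obs=np.full(5, 9.5), obs_err_std=0.1, H=H)
    ```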

  15. Semantic Segmentation and Difference Extraction via Time Series Aerial Video Camera and its Application

    NASA Astrophysics Data System (ADS)

    Amit, S. N. K.; Saito, S.; Sasaki, S.; Kiyoki, Y.; Aoki, Y.

    2015-04-01

    Google Earth with high-resolution imagery typically takes months to process new images before they appear online, which is too slow for post-disaster applications. The objective of this research is to develop a fast and effective method of updating maps by detecting local differences that have occurred over different time series, so that only regions with differences are updated. In our system, aerial images from the Massachusetts road and building open datasets and the Saitama district datasets are used as input images. Semantic segmentation, a pixel-wise classification of images, is then applied to the input images using a deep neural network. Deep neural networks are used because they are not only efficient at learning highly discriminative image features such as roads and buildings, but also partially robust to incomplete and poorly registered target maps. Aerial images containing this semantic information are then stored as a database in the 5D World Map and serve as ground truth images. This system visualises multimedia data in five dimensions: three spatial dimensions, one temporal dimension, and one degenerated dimension combining semantic and colour information. Next, a ground truth image chosen from the 5D World Map database and a new aerial image with the same spatial information but a different time are compared via the difference extraction method, and the map is updated only where local changes have occurred. Hence, map updating becomes cheaper, faster, and more effective, especially for post-disaster applications, by leaving unchanged regions as they are and updating only changed regions.
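
    At its core, the difference-extraction step reduces to comparing per-pixel semantic labels between the stored ground truth and a newer segmented image and re-publishing only the changed region; the label maps below are synthetic placeholders.

    ```python
    import numpy as np

    ROAD, BUILDING, OTHER = 0, 1, 2
    old_labels = np.full((100, 100), OTHER)
    old_labels[40:60, :] = ROAD
    new_labels = old_labels.copy()
    new_labels[10:20, 10:25] = BUILDING        # a newly built structure

    changed = old_labels != new_labels         # per-pixel change mask
    updated = np.where(changed, new_labels, old_labels)
    update_fraction = changed.mean()           # share of the tile to re-publish
    ```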

  16. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    EPA Pesticide Factsheets

    The uploaded data consists of the BRACE Na aerosol observations paired with CMAQ model output, the updated model's parameterization of sea salt aerosol emission size distribution, and the model's parameterization of the sea salt emission factor as a function of sea surface temperature. This dataset is associated with the following publication:Gantt , B., J. Kelly , and J. Bash. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2. Geoscientific Model Development. Copernicus Publications, Katlenburg-Lindau, GERMANY, 8: 3733-3746, (2015).

  17. Improving national-scale invasion maps: Tamarisk in the western United States

    USGS Publications Warehouse

    Jarnevich, C.S.; Evangelista, P.; Stohlgren, T.J.; Morisette, J.

    2011-01-01

    New invasions, better field data, and novel spatial-modeling techniques often drive the need to revisit previous maps and models of invasive species. Such is the case with the at least 10 species of Tamarix, which are invading riparian systems in the western United States and expanding their range throughout North America. In 2006, we developed a National Tamarisk Map by using a compilation of presence and absence locations with remotely sensed data and statistical modeling techniques. Since the publication of that work, our database of Tamarix distributions has grown significantly. Using the updated database of species occurrence, new predictor variables, and the maximum entropy (Maxent) model, we have revised our potential Tamarix distribution map for the western United States. Distance-to-water was the strongest predictor in the model (58.1%), while mean temperature of the warmest quarter was the second best predictor (18.4%). Model validation, averaged from 25 model iterations, indicated that our analysis had strong predictive performance (AUC = 0.93) and that the extent of Tamarix distributions is much greater than previously thought. The southwestern United States had the greatest suitable habitat, and this result differed from the 2006 model. Our work highlights the utility of iterative modeling for invasive species habitat modeling as new information becomes available. © 2011.

  18. A VLF-based technique in applications to digital control of nonlinear hybrid multirate systems

    NASA Astrophysics Data System (ADS)

    Vassilyev, Stanislav; Ulyanov, Sergey; Maksimkin, Nikolay

    2017-01-01

    In this paper, a technique for rigorous analysis and design of nonlinear multirate digital control systems on the basis of the reduction method and sublinear vector Lyapunov functions is proposed. The control system model under consideration incorporates continuous-time dynamics of the plant and discrete-time dynamics of the controller and takes into account uncertainties of the plant, bounded disturbances, nonlinear characteristics of sensors and actuators. We consider a class of multirate systems where the control update rate is slower than the measurement sampling rates and periodic non-uniform sampling is admitted. The proposed technique does not use the preliminary discretization of the system, and, hence, allows one to eliminate the errors associated with the discretization and improve the accuracy of analysis. The technique is applied to synthesis of digital controller for a flexible spacecraft in the fine stabilization mode and decentralized controller for a formation of autonomous underwater vehicles. Simulation results are provided to validate the good performance of the designed controllers.

  19. Space Shuttle/TDRSS communication and tracking systems analysis

    NASA Astrophysics Data System (ADS)

    Lindsey, W. C.; Chie, C. M.; Cideciyan, R.; Dessouky, K.; Su, Y. T.; Tsang, C. S.

    1986-04-01

    In order to evaluate the technical and operational problem areas and provide a recommendation, the enhancements to the Tracking and Data Relay Satellite System (TDRSS) and Shuttle must be evaluated through simulation and analysis. These enhancement techniques must first be characterized, then modeled mathematically, and finally updated into LinCsim (analytical simulation package). The LinCsim package can then be used as an evaluation tool. Three areas of potential enhancements were identified: shuttle payload accommodations, TDRSS SSA and KSA services, and shuttle tracking system and navigation sensors. Recommendations for each area were discussed.

  20. The Experimental Measurement of Aerodynamic Heating About Complex Shapes at Supersonic Mach Numbers

    NASA Technical Reports Server (NTRS)

    Neumann, Richard D.; Freeman, Delma C.

    2011-01-01

    In 2008 a wind tunnel test program was implemented to update the experimental data available for predicting protuberance heating at supersonic Mach numbers. For this test the Langley Unitary Wind Tunnel was also used. The significant differences for this test were the advances in the state of the art in model design, fabrication techniques, instrumentation, and data acquisition capabilities. This paper provides a focused discussion of the results of an in-depth analysis of unique measurements of recovery temperature obtained during the test.

  1. Posttest calculation of the PBF LOC-11B and LOC-11C experiments using RELAP4/MOD6. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrix, C.E.

    Comparisons between data calculated with the RELAP4/MOD6, Update 4 code and measured experimental data are presented for the PBF LOC-11C and LOC-11B experiments. Independent code verification techniques are now being developed, and this study represents a preliminary effort applying structured criteria for developing computer models, selecting code input, and performing base-run analyses. Where deficiencies are indicated in the base-case representation of the experiment, methods of code and criteria improvement are developed and appropriate recommendations are made.

  2. Space Shuttle/TDRSS communication and tracking systems analysis

    NASA Technical Reports Server (NTRS)

    Lindsey, W. C.; Chie, C. M.; Cideciyan, R.; Dessouky, K.; Su, Y. T.; Tsang, C. S.

    1986-01-01

    In order to evaluate the technical and operational problem areas and provide a recommendation, the enhancements to the Tracking and Data Relay Satellite System (TDRSS) and Shuttle must be evaluated through simulation and analysis. These enhancement techniques must first be characterized, then modeled mathematically, and finally updated into LinCsim (analytical simulation package). The LinCsim package can then be used as an evaluation tool. Three areas of potential enhancements were identified: shuttle payload accommodations, TDRSS SSA and KSA services, and shuttle tracking system and navigation sensors. Recommendations for each area were discussed.

  3. Interactive Management and Updating of Spatial Data Bases

    NASA Technical Reports Server (NTRS)

    French, P.; Taylor, M.

    1982-01-01

    The decision making process, whether for power plant siting, load forecasting or energy resource planning, invariably involves a blend of analytical methods and judgement. Management decisions can be improved by the implementation of techniques which permit an increased comprehension of results from analytical models. Even where analytical procedures are not required, decisions can be aided by improving the methods used to examine spatially and temporally variant data. How the use of computer-aided planning (CAP) programs and the selection of a predominant data structure can improve the decision making process is discussed.

  4. The KASCADE-Grande Experiment

    NASA Astrophysics Data System (ADS)

    de Souza, V.; Apel, W. D.; Arteaga, J. C.; Badea, F.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Brüggemann, M.; Buchholz, P.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Finger, M.; Fuhrmann, D.; Ghia, P. L.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Kickelbick, D.; Klages, H. O.; Kolotaev, Y.; Łuczak, P.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Navarra, G.; Nehls, S.; Oehlschläger, J.; Ostapchenko, S.; Over, S.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schröder, F.; Sima, O.; Stümpert, M.; Toma, G.; Trinchero, G. C.; Ulrich, H.; van Buren, J.; Walkowiak, W.; Weindl, A.; Wochele, J.; Wommer, M.; Zabierowski, J.

    2009-04-01

    KASCADE-Grande is a multi-component detector located at Karlsruhe, Germany. It was optimized to measure cosmic ray air showers with energies between 5×10^16 and 10^18 eV. Its capabilities are based on the use of several techniques to measure the electromagnetic and muon components of the shower in an independent way, which allows a direct comparison to hadronic interaction models and a good estimation of the primary cosmic ray composition. In this paper, we present the status of the experiment, an update of the data analysis and the latest results.

  5. Simulated and observed 2010 floodwater elevations in selected river reaches in the Pawtuxet River Basin, Rhode Island

    USGS Publications Warehouse

    Zarriello, Phillip J.; Olson, Scott A.; Flynn, Robert H.; Strauch, Kellan R.; Murphy, Elizabeth A.

    2014-01-01

    Heavy, persistent rains from late February through March 2010 caused severe flooding that set, or nearly set, peaks of record for streamflows and water levels at many long-term streamgages in Rhode Island. In response to this event, hydraulic models were updated for selected reaches covering about 56 river miles in the Pawtuxet River Basin to simulate water-surface elevations (WSEs) at specified flows and boundary conditions. Reaches modeled included the main stem of the Pawtuxet River, the North and South Branches of the Pawtuxet River, Pocasset River, Simmons Brook, Dry Brook, Meshanticut Brook, Furnace Hill Brook, Flat River, Quidneck Brook, and two unnamed tributaries referred to as South Branch Pawtuxet River Tributary A1 and Tributary A2. All the hydraulic models were updated to Hydrologic Engineering Center-River Analysis System (HEC-RAS) version 4.1.0 using steady-state simulations. Updates to the models included incorporation of new field-survey data at structures, high-resolution land-surface elevation data, and updated flood flows from a related study. The models were assessed using high-water marks (HWMs) obtained in a related study following the March–April 2010 flood and the simulated water levels at the 0.2-percent annual exceedance probability (AEP), which is the estimated AEP of the 2010 flood in the basin. HWMs were obtained at 110 sites along the main stem of the Pawtuxet River, the North and South Branches of the Pawtuxet River, Pocasset River, Simmons Brook, Furnace Hill Brook, Flat River, and Quidneck Brook. Differences between the 2010 HWM elevations and the simulated 0.2-percent AEP WSEs from flood insurance studies (FISs) and the updated models developed in this study varied, with most differences attributed to the magnitude of the 0.2-percent AEP flows. WSEs from the updated models generally are in closer agreement with the observed 2010 HWMs than with the FIS WSEs. The improved agreement of the updated simulated water elevations to observed 2010 HWMs provides a measure of the hydraulic model performance, which indicates the updated models better represent flooding at other AEPs than the existing FIS models.

  6. Addressing forecast uncertainty impact on CSP annual performance

    NASA Astrophysics Data System (ADS)

    Ferretti, Fabio; Hogendijk, Christopher; Aga, Vipluv; Ehrsam, Andreas

    2017-06-01

    This work analyzes the impact of weather forecast uncertainty on the annual performance of a Concentrated Solar Power (CSP) plant. Forecast time series were produced by a commercial forecast provider using hindcasting for the full year 2011, in hourly resolution, for Ouarzazate, Morocco. The impact of forecast uncertainty was measured on three case studies, representing typical tariff schemes observed in recent CSP projects plus a spot-market price scenario. The analysis was carried out using an annual performance model and a standard dispatch optimization algorithm based on dynamic programming. The dispatch optimizer is demonstrated to be a key requisite for maximizing annual revenues, depending on the price scenario, by harvesting the maximum potential out of the CSP plant. Forecast uncertainty affects the revenue enhancement achieved by a dispatch optimizer, depending on the error level and the price function. Results show that forecasting accuracy of direct normal irradiance (DNI) is important to make best use of an optimized dispatch, but also that a higher number of calculation updates can partially compensate for this uncertainty. The improvement in revenues can be significant, depending on the price profile and the optimal operation strategy. Pathways to better performance are to update more often, both by repeatedly generating new optimized dispatch trajectories and by refreshing weather forecasts more frequently. This study shows the importance of DNI weather forecasting for revenue enhancement, as well as of selecting weather services that can provide multiple updates a day and probabilistic forecast information.
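
    The dispatch optimization the study relies on can be sketched as a small backward dynamic program over a discretized storage state; the price profile, collection curve, and plant limits below are invented placeholders, not the commercial model's inputs.

    ```python
    import numpy as np

    hours, levels = 24, 21                        # hourly steps, storage states
    collect = np.clip(np.sin(np.linspace(0.0, np.pi, hours)), 0.0, None) * 4  # DNI-driven
    price = 1.0 + 0.8 * (np.arange(hours) >= 17)  # evening price peak
    max_turbine = 3                               # max units dispatched per hour

    value = np.zeros(levels)                      # value-to-go at end of horizon
    policy = np.zeros((hours, levels), dtype=int)
    for t in range(hours - 1, -1, -1):            # backward induction
        new_value = np.full(levels, -np.inf)
        for s in range(levels):
            for dispatch in range(max_turbine + 1):
                # Excess collection beyond full storage is spilled (curtailed)
                s_next = min(s + int(round(collect[t])) - dispatch, levels - 1)
                if s_next >= 0:
                    v = dispatch * price[t] + value[s_next]
                    if v > new_value[s]:
                        new_value[s], policy[t, s] = v, dispatch
        value = new_value
    # A forward pass through `policy` from the initial storage level yields
    # the revenue-maximizing hourly dispatch schedule.
    ```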

  7. Predicting remaining life by fusing the physics of failure modeling with diagnostics

    NASA Astrophysics Data System (ADS)

    Kacprzynski, G. J.; Sarlashkar, A.; Roemer, M. J.; Hess, A.; Hardman, B.

    2004-03-01

    Technology that enables failure prediction of critical machine components (prognostics) has the potential to significantly reduce maintenance costs and increase availability and safety. This article summarizes a research effort, funded through the U.S. Defense Advanced Research Projects Agency and the Naval Air Systems Command, aimed at enhancing prognostic accuracy through more advanced physics-of-failure modeling and intelligent utilization of relevant diagnostic information. An H-60 helicopter gear is used as a case study to introduce both stochastic sub-zone crack initiation and three-dimensional fracture mechanics lifing models, along with adaptive model-updating techniques for tuning key failure-mode variables at a local material/damage site based on fused vibration features. The overall prognostic scheme is aimed at minimizing inherent modeling and operational uncertainties via sensed system measurements that evolve as damage progresses.
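
    The "tune the physics model with diagnostics" idea can be illustrated with a crack-growth sketch: a Paris-law model whose coefficient is re-blended toward the value implied by a vibration-derived crack-size estimate. The load model, parameter values, and blending rule are all illustrative assumptions, not the H-60 gear models.

    ```python
    import numpy as np

    m, C = 3.0, 1e-11          # Paris exponent (held fixed) and coefficient (tuned)

    def delta_K(a):
        return 20.0 * np.sqrt(a)       # toy stress-intensity range, MPa*sqrt(m)

    def propagate(a0, cycles, C, step=1000):
        # Integrate da/dN = C * (delta_K)^m in coarse cycle blocks
        a = a0
        for _ in range(0, cycles, step):
            a += C * delta_K(a) ** m * step
        return a

    def update_C(C_model, a0, a_obs, cycles, trust=0.5):
        # Blend the model coefficient with the value implied by the
        # diagnostically observed crack growth, weighted by trust
        growth_obs = a_obs - a0
        growth_model = propagate(a0, cycles, C_model) - a0
        C_implied = C_model * growth_obs / max(growth_model, 1e-30)
        return (1 - trust) * C_model + trust * C_implied

    a0 = 1e-3                                   # initial crack size, metres
    a_obs = propagate(a0, 50_000, 1.4e-11)      # stand-in "measured" crack size
    C = update_C(C, a0, a_obs, 50_000)          # coefficient tuned toward observation
    projection = propagate(a0, 100_000, C)      # revised growth projection
    ```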

  8. The Silicon Trypanosome: a test case of iterative model extension in systems biology

    PubMed Central

    Achcar, Fiona; Fadda, Abeer; Haanstra, Jurgen R.; Kerkhoven, Eduard J.; Kim, Dong-Hyun; Leroux, Alejandro E.; Papamarkou, Theodore; Rojas, Federico; Bakker, Barbara M.; Barrett, Michael P.; Clayton, Christine; Girolami, Mark; Luise Krauth-Siegel, R.; Matthews, Keith R.; Breitling, Rainer

    2016-01-01

    The African trypanosome, Trypanosoma brucei, is a unicellular parasite causing African Trypanosomiasis (sleeping sickness in humans and nagana in animals). Due to some of its unique properties, it has emerged as a popular model organism in systems biology. A predictive quantitative model of glycolysis in the bloodstream form of the parasite has been constructed and updated several times. The Silicon Trypanosome (SilicoTryp) is a project that brings together modellers and experimentalists to improve and extend this core model with new pathways and additional levels of regulation. These new extensions and analyses use computational methods that explicitly take different levels of uncertainty into account. During this project, numerous tools and techniques have been developed for this purpose, which can now be used for a wide range of different studies in systems biology. PMID:24797926

  9. TOPEX Radar Altimeter Engineering Assessment Report. Update: Launch to January 1, 1997

    NASA Technical Reports Server (NTRS)

    Hancock, D. W., III; Hayne, G. S.; Brooks, R. L.; Lee, J. E.; Lockwood, D. W.

    1997-01-01

    The initial TOPEX Mission Radar Altimeter Engineering Assessment Report, in February 1994, presented performance results for the NASA Radar Altimeter on the TOPEX/POSEIDON spacecraft, from its launch in August 1992 to February 1994. There have been supplemental Engineering Assessment Reports, issued in March 1995 and again in May 1996, which updated the performance results through the end of calendar years 1994 and 1995, respectively. This supplement updates the altimeter performance to the end of calendar year 1996, and describes significant events that occurred during 1996. As the performance data base has expanded, and as analysis tools and techniques continue to evolve, the longer-term trends of the altimeter data have become more apparent. The updated findings are presented here.

  10. Augmentation cystoplasty in neurogenic bladder

    PubMed Central

    Kocjancic, Ervin; Demirdağ, Çetin

    2016-01-01

    The aim of this review is to update the indications, contraindications, technique, complications, and the tissue engineering approaches of augmentation cystoplasty (AC) in patients with neurogenic bladder. PubMed/MEDLINE was searched for the keywords "augmentation cystoplasty," "neurogenic bladder," and "bladder augmentation." Additional relevant literature was determined by examining the reference lists of articles identified through the search. An updated review of the indications, contraindications, technique, outcome, complications, and tissue engineering approaches of AC in patients with neurogenic bladder is presented. Although some important progress has been made in tissue engineering for AC, conventional AC still has an important role in the surgical treatment of refractory neurogenic lower urinary tract dysfunction. PMID:27617312

  11. Ares I-X Flight Evaluation Tasks in Support of Ares I Development

    NASA Technical Reports Server (NTRS)

    Huebner, Lawrence D.; Richards, James S.; Coates, Ralph H., III; Cruit, Wendy D.; Ramsey, Matthew N.

    2010-01-01

    NASA's Constellation Program successfully launched the Ares I-X Flight Test Vehicle on October 28, 2009. The Ares I-X flight was a development flight test that offered a unique opportunity for early engineering data to impact the design and development of the Ares I crew launch vehicle. As the primary customer for flight data from the Ares I-X mission, the Ares Projects Office established a set of 33 flight evaluation tasks to correlate flight results with prospective design assumptions and models. Included within these tasks were direct comparisons of flight data with pre-flight predictions and post-flight assessments utilizing models and modeling techniques being applied to design and develop Ares I. A discussion of the similarities and differences in those comparisons and the need for discipline-level model updates based upon those comparisons form the substance of this paper. The benefits of development flight testing were made evident by implementing these tasks, which used Ares I-X data to partially validate tools and methodologies in technical disciplines that will ultimately influence the design and development of Ares I and future launch vehicles. The areas in which partial validation from the flight test was most significant included flight control system algorithms to predict liftoff clearance, ascent, and stage separation; structural models from rollout to separation; thermal models that have been updated based on these data; pyroshock attenuation; and the ability to predict complex flow fields during time-varying conditions including plume interactions.

  12. Forecasting Geomagnetic Activity Using Kalman Filters

    NASA Astrophysics Data System (ADS)

    Veeramani, T.; Sharma, A.

    2006-05-01

    The coupling of energy from the solar wind to the magnetosphere leads to geomagnetic activity in the form of storms and substorms, which are characterized by indices such as AL, Dst and Kp. Geomagnetic activity has been predicted in near-real time using local linear filter models of the system dynamics, wherein the time series of the input solar wind and the output magnetospheric response were used to reconstruct the phase space of the system by a time-delay embedding technique. Recently, the radiation belt dynamics have been studied using an adaptive linear state space model [Rigler et al. 2004]. This was achieved by assuming a linear autoregressive equation for the underlying process and an adaptive identification of the model parameters using a Kalman filter approach. We use such a model for predicting the geomagnetic activity. In the case of substorms, the Bargatze et al [1985] data set yields persistence-like behaviour when a time resolution of 2.5 minutes was used to test the model for the prediction of the AL index. Unlike the local linear filters, which are driven by the solar wind input without feedback from the observations, the Kalman filter makes use of the observations as and when available to optimally update the model parameters. The update procedure requires the prediction intervals to be long enough so that the forecasts can be used in practice. The time resolution of the data suitable for such forecasting is studied by taking averages over different durations.
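
    A compact sketch of the adaptive scheme described above: the AR coefficients are treated as the Kalman filter state under a random-walk parameter model and updated on each new observation. The AR order, noise variances, and synthetic data are illustrative, not the values used with the Bargatze et al. data set.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 2000, 4                       # samples, AR order (illustrative)
    true_w = np.array([0.5, -0.2, 0.1, 0.05])
    y = np.zeros(n)
    for t in range(p, n):                # synthetic "index" series
        y[t] = true_w @ y[t - p:t][::-1] + 0.1 * rng.standard_normal()

    w = np.zeros(p)                      # state: AR coefficients
    P = np.eye(p)                        # state covariance
    q, r = 1e-5, 0.01                    # process and measurement noise variances
    for t in range(p, n):
        h = y[t - p:t][::-1]             # regressor of past values
        P += q * np.eye(p)               # random-walk drift of the parameters
        k = P @ h / (h @ P @ h + r)      # Kalman gain
        w += k * (y[t] - h @ w)          # update coefficients on the innovation
        P -= np.outer(k, h) @ P
    ```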

  13. Kinematic Model-Based Pedestrian Dead Reckoning for Heading Correction and Lower Body Motion Tracking.

    PubMed

    Lee, Min Su; Ju, Hojin; Song, Jin Woo; Park, Chan Gook

    2015-11-06

    In this paper, we present a method for finding the enhanced heading and position of pedestrians by fusing the Zero velocity UPdaTe (ZUPT)-based pedestrian dead reckoning (PDR) and the kinematic constraints of the lower human body. ZUPT is a well-known algorithm for PDR and provides a sufficiently accurate position solution over short periods, but it cannot guarantee a stable and reliable heading because it suffers from magnetic disturbance in determining heading angles, which degrades the overall position accuracy as time passes. The basic idea of the proposed algorithm is to integrate the left and right foot positions obtained by ZUPTs with the heading and position information from an IMU mounted on the waist. To integrate this information, a kinematic model of the lower human body, calculated by using orientation sensors mounted on both thighs and calves, is adopted. We note that the positions of the left and right feet cannot be arbitrarily far apart because of the kinematic constraints of the body, so the kinematic model generates new measurements for the waist position. An Extended Kalman Filter (EKF) on the waist data estimates and corrects error states using these measurements together with magnetic heading measurements, which enhances the heading accuracy. The updated position information is fed into the foot-mounted sensors, and reupdate processes are performed to correct the position error of each foot. The proposed update-reupdate technique consequently ensures improved observability of error states and position accuracy. Moreover, the proposed method provides all the information about the lower human body, so it can be applied more effectively to motion tracking. The effectiveness of the proposed algorithm is verified via experimental results, which show that a 1.25% Return Position Error (RPE) with respect to walking distance is achieved.
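
    A minimal sketch of the stance-phase detection that triggers ZUPT corrections in such systems; the thresholds, window length, and the particular stillness tests are illustrative assumptions, not the authors' detector.

    ```python
    import numpy as np

    def zero_velocity_mask(accel, gyro, g=9.81, acc_tol=0.4, gyro_tol=0.2, win=10):
        # accel, gyro: (n, 3) arrays from a foot-mounted IMU
        acc_still = np.abs(np.linalg.norm(accel, axis=1) - g) < acc_tol
        gyro_still = np.linalg.norm(gyro, axis=1) < gyro_tol
        still = acc_still & gyro_still
        # Require a full window of consecutive still samples to declare stance
        runs = np.convolve(still.astype(int), np.ones(win, dtype=int), mode="same")
        return runs >= win    # True where the EKF may reset velocity to zero

    # Static example: a motionless sensor is detected as stance throughout
    accel = np.tile([0.0, 0.0, 9.81], (200, 1))
    gyro = np.zeros((200, 3))
    stance = zero_velocity_mask(accel, gyro)
    ```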

  14. Progress Implementing a Model-Based Iterative Reconstruction Algorithm for Ultrasound Imaging of Thick Concrete

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almansouri, Hani; Johnson, Christi R; Clayton, Dwight A

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for the direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.

  15. Progress implementing a model-based iterative reconstruction algorithm for ultrasound imaging of thick concrete

    NASA Astrophysics Data System (ADS)

    Almansouri, Hani; Johnson, Christi; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2017-02-01

    All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.
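
    A toy sketch of the MBIR formulation described above: a quadratic data-fidelity term from a linear forward model plus a smoothness prior, minimized by gradient descent. The random matrix and the simple prior are stand-ins for the paper's ultrasonic forward model and prior.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_meas, n_vox = 120, 80
    A = rng.standard_normal((n_meas, n_vox)) / np.sqrt(n_meas)   # toy forward model
    x_true = np.zeros(n_vox)
    x_true[30:40] = 1.0                                          # a reflecting layer
    y = A @ x_true + 0.01 * rng.standard_normal(n_meas)

    def cost_grad(x, lam=0.1):
        # Gradient of 0.5*||Ax - y||^2 + 0.5*lam*sum((x[i+1] - x[i])^2)
        resid = A @ x - y
        grad = A.T @ resid
        smooth = np.diff(x)
        grad[:-1] -= lam * smooth
        grad[1:] += lam * smooth
        return grad

    x = np.zeros(n_vox)
    for _ in range(500):
        x -= 0.1 * cost_grad(x)     # plain gradient descent on the MBIR cost
    ```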

  16. Basketball.

    ERIC Educational Resources Information Center

    Gudger, Jim, Ed.; Barnes, Mildred, Ed.

    1983-01-01

    Techniques to help update and improve the teaching of basketball are described, including: (1) drills to increase physical fitness and motor skill development; (2) the use of drill stations to practice specific playing skills; (3) offensive and defensive techniques; and (4) teaching free-throws and rebounding. (PP)

  17. Machine learning in updating predictive models of planning and scheduling transportation projects

    DOT National Transportation Integrated Search

    1997-01-01

    A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...

  18. Benefits of Model Updating: A Case Study Using the Micro-Precision Interferometer Testbed

    NASA Technical Reports Server (NTRS)

    Neat, Gregory W.; Kissil, Andrew; Joshi, Sanjay S.

    1997-01-01

    This paper presents a case study on the benefits of model updating using the Micro-Precision Interferometer (MPI) testbed, a full-scale model of a future spaceborne optical interferometer located at JPL.

  19. Preliminary Model of Porphyry Copper Deposits

    USGS Publications Warehouse

    Berger, Byron R.; Ayuso, Robert A.; Wynn, Jeffrey C.; Seal, Robert R.

    2008-01-01

    The U.S. Geological Survey (USGS) Mineral Resources Program develops mineral-deposit models for application in USGS mineral-resource assessments and other mineral resource-related activities within the USGS as well as for nongovernmental applications. Periodic updates of models are published in order to incorporate new concepts and findings on the occurrence, nature, and origin of specific mineral deposit types. This update is a preliminary model of porphyry copper deposits that begins an update process of porphyry copper models published in USGS Bulletin 1693 in 1986. This update includes a greater variety of deposit attributes than were included in the 1986 model as well as more information about each attribute. It also includes an expanded discussion of geophysical and remote sensing attributes and tools useful in resource evaluations, a summary of current theoretical concepts of porphyry copper deposit genesis, and a summary of the environmental attributes of unmined and mined deposits.

  20. LIPS database with LIPService: a microscopic image database of intracellular structures in Arabidopsis guard cells.

    PubMed

    Higaki, Takumi; Kutsuna, Natsumaro; Hasezawa, Seiichiro

    2013-05-16

    Intracellular configuration is an important feature of cell status. Recent advances in microscopic imaging techniques allow us to easily obtain a large number of microscopic images of intracellular structures. In this circumstance, automated microscopic image recognition techniques are of extreme importance to future phenomics/visible screening approaches. However, there was no benchmark microscopic image dataset for intracellular organelles in a specified plant cell type. We previously established the Live Images of Plant Stomata (LIPS) database, a publicly available collection of optical-section images of various intracellular structures of plant guard cells, as a model system of environmental signal perception and transduction. Here we report recent updates to the LIPS database and the establishment of a database table, LIPService. We updated the LIPS dataset and established a new interface named LIPService to promote efficient inspection of intracellular structure configurations. Cell nuclei, microtubules, actin microfilaments, mitochondria, chloroplasts, endoplasmic reticulum, peroxisomes, endosomes, Golgi bodies, and vacuoles can be filtered using probe names or morphometric parameters such as stomatal aperture. In addition to the serial optical sectional images of the original LIPS database, new volume-rendering data for easy web browsing of three-dimensional intracellular structures have been released to allow easy inspection of their configurations or relationships with cell status/morphology. We also demonstrated the utility of the new LIPS image database for automated organelle recognition of images from another plant cell image database with image clustering analyses. The updated LIPS database provides a benchmark image dataset for representative intracellular structures in Arabidopsis guard cells. The newly released LIPService allows users to inspect the relationship between organellar three-dimensional configurations and morphometrical parameters.

  1. Parallel updating and weighting of multiple spatial maps for visual stability during whole body motion

    PubMed Central

    Medendorp, W. P.

    2015-01-01

    It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability in which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than based on single-frame updating mechanisms. PMID:26490289
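
    The reliability weighting at the heart of such an optimal-integration model can be written in a few lines: each frame's estimate is weighted by its inverse variance. The numbers below are illustrative only.

    ```python
    # Eye- and body-centered estimates of the remembered target location,
    # with illustrative variances (lower variance = more reliable)
    est_eye, var_eye = 10.2, 4.0
    est_body, var_body = 9.1, 1.0

    w_eye = (1 / var_eye) / (1 / var_eye + 1 / var_body)
    w_body = 1.0 - w_eye
    combined = w_eye * est_eye + w_body * est_body       # optimal estimate
    combined_var = 1 / (1 / var_eye + 1 / var_body)      # tighter than either frame
    ```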

  2. Build-Up Approach to Updating the Mock Quiet Spike Beam Model

    NASA Technical Reports Server (NTRS)

    Herrera, Claudia Y.; Pak, Chan-gi

    2007-01-01

    When a new aircraft is designed or a modification is made to an existing aircraft, the aeroelastic properties of the aircraft should be examined to ensure the aircraft is flight worthy. Evaluating the aeroelastic properties of a new or modified aircraft can include performing a variety of analyses, such as modal and flutter analyses. In order to produce accurate results from these analyses, it is imperative to work with finite element models (FEMs) that have been validated by or correlated to ground vibration test (GVT) data. Updating an analytical model using measured data is a challenge in the area of structural dynamics. The analytical model update process encompasses a series of optimizations that match analytical frequencies and mode shapes to the measured modal characteristics of the structure. In the past, the method used to update a model to test data was trial and error. This is an inefficient method: running a modal analysis, comparing the analytical results to the GVT data, manually modifying one or more structural parameters (mass, CG, inertia, area, etc.), rerunning the analysis, and comparing the new analytical modal characteristics to the GVT modal data. If the match is close enough (as defined by the analyst's updating requirements), then the updating process is completed. If the match does not meet the updating requirements, then the parameters are changed again and the process is repeated. Clearly, this manual optimization process is highly inefficient for large FEMs and/or a large number of structural parameters. NASA Dryden Flight Research Center (DFRC) has developed, in-house, a Mode Matching Code that automates the above-mentioned optimization process. DFRC's in-house Mode Matching Code reads mode shapes and frequencies acquired from GVT to create the target model. It also reads the current analytical model, as well as the design variables and their upper and lower limits. It performs a modal analysis on this model and modifies it to create an updated model that has similar mode shapes and frequencies to those of the target model. The Mode Matching Code outputs frequencies and modal assurance criteria (MAC) values that allow for a quantified comparison of the updated model versus the target model. A recent application of this code is the F-15B supersonic flight testing platform: NASA DFRC possesses a modified F-15B that is used as a test bed aircraft for supersonic flight experiments. Traditionally, the finite element model of the test article is generated. A GVT is done on the test article to validate and update its FEM. This FEM is then mated to the F-15B model, which was correlated to GVT data in fall of 2004. A GVT is conducted with the test article mated to the aircraft, and this mated F-15B/test article FEM is correlated to this final GVT.
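
    For reference, the MAC values such a code outputs are the standard normalized correlations between analytical and measured mode-shape vectors; the sketch below is the generic formulation, not the in-house tool's implementation.

    ```python
    import numpy as np

    def mac(phi_a, phi_e):
        # Modal assurance criterion between one analytical and one test mode shape
        return np.abs(phi_a @ phi_e) ** 2 / ((phi_a @ phi_a) * (phi_e @ phi_e))

    def mac_matrix(Phi_a, Phi_e):
        # Columns are mode shapes; entry (i, j) compares analysis mode i, test mode j
        return np.array([[mac(a, e) for e in Phi_e.T] for a in Phi_a.T])

    # Identical shapes give MAC = 1; orthogonal shapes give MAC = 0
    phi = np.array([1.0, 0.8, 0.3])
    print(mac(phi, phi), mac(phi, np.array([0.8, -1.0, 0.0])))
    ```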

  3. Nonlinear structural joint model updating based on instantaneous characteristics of dynamic responses

    NASA Astrophysics Data System (ADS)

    Wang, Zuo-Cai; Xin, Yu; Ren, Wei-Xin

    2016-08-01

    This paper proposes a new nonlinear joint model updating method for shear-type structures based on the instantaneous characteristics of the decomposed structural dynamic responses. To obtain an accurate representation of a nonlinear system's dynamics, the nonlinear joint model is described as a nonlinear spring element with bilinear stiffness. The instantaneous frequencies and amplitudes of the decomposed mono-components are first extracted by the analytical mode decomposition (AMD) method. Then, an objective function based on the residuals of the instantaneous frequencies and amplitudes between the experimental structure and the nonlinear model is created for the nonlinear joint model updating. The optimal values of the nonlinear joint model parameters are obtained by minimizing the objective function using the simulated annealing global optimization method. To validate the effectiveness of the proposed method, a single-story shear-type structure subjected to earthquake and harmonic excitations is simulated as a numerical example. Then, a beam structure with multiple local nonlinear elements subjected to earthquake excitation is also simulated. The nonlinear beam structure is updated based on the global and local model using the proposed method. The results show that the proposed local nonlinear model updating method is more effective for structures with multiple local nonlinear elements. Finally, the proposed method is verified by the shake table test of a real high-voltage switch structure. The accuracy of the proposed method is quantified in both numerical and experimental applications using the defined error indices. Both the numerical and experimental results show that the proposed method can effectively update the nonlinear joint model.
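
    The updating step can be sketched as follows, with a placeholder standing in for solving the bilinear-stiffness model and extracting instantaneous frequencies and amplitudes via AMD; only the residual-based objective and an annealing-style global search reflect the method described above.

    ```python
    import numpy as np
    from scipy.optimize import dual_annealing

    def simulate_instantaneous(k1, k2):
        # Placeholder for solving the bilinear-stiffness model and extracting
        # instantaneous frequency/amplitude traces with AMD
        t = np.linspace(0.0, 1.0, 200)
        freq = np.sqrt(k1) * (1.0 + 0.1 * t)
        amp = np.exp(-k2 * t)
        return freq, amp

    freq_exp, amp_exp = simulate_instantaneous(4.0, 0.5)   # stand-in "measured" data

    def objective(params):
        # Residuals of instantaneous frequencies and amplitudes
        freq_sim, amp_sim = simulate_instantaneous(*params)
        return np.sum((freq_sim - freq_exp) ** 2) + np.sum((amp_sim - amp_exp) ** 2)

    result = dual_annealing(objective, bounds=[(0.1, 10.0), (0.01, 2.0)])
    k1_opt, k2_opt = result.x   # recovered stiffness parameters, ~ (4.0, 0.5)
    ```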

  4. Simulated and observed 2010 floodwater elevations in the Pawcatuck and Wood Rivers, Rhode Island

    USGS Publications Warehouse

    Zarriello, Phillip J.; Straub, David E.; Smith, Thor E.

    2014-01-01

    Heavy, persistent rains from late February through March 2010 caused severe flooding that set, or nearly set, peaks of record for streamflows and water levels at many long-term U.S. Geological Survey streamgages in Rhode Island. In response to this flood, hydraulic models of the Pawcatuck River (26.9 miles) and Wood River (11.6 miles) were updated from the most recent approved U.S. Department of Homeland Security-Federal Emergency Management Agency flood insurance study (FIS) to simulate water-surface elevations (WSEs) for specified flows and boundary conditions. The hydraulic models were updated to Hydrologic Engineering Center-River Analysis System (HEC-RAS) using steady-state simulations and incorporated new field-survey data at structures, high-resolution land-surface elevation data, and updated flood flows from a related study. The models were used to simulate the 0.2-percent annual exceedance probability (AEP) flood, which is the AEP determined for the 2010 flood in the Pawcatuck and Wood Rivers. The simulated WSEs were compared to high-water mark (HWM) elevation data obtained in a related study following the March–April 2010 flood, which included 39 HWMs along the Pawcatuck River and 11 HWMs along the Wood River. The 2010 peak flow generally was larger than the 0.2-percent AEP flow, which, in part, resulted in the FIS and updated-model WSEs being lower than the 2010 HWMs. The 2010 HWMs for the Pawcatuck River averaged about 1.6 feet (ft) higher than the 0.2-percent AEP WSEs simulated in the updated model and 2.5 ft higher than the WSEs in the FIS. The 2010 HWMs for the Wood River averaged about 1.3 ft higher than the WSEs simulated in the updated model and 2.5 ft higher than the WSEs in the FIS. The improved agreement of the updated simulated water elevations to observed 2010 HWMs provides a measure of the hydraulic model performance, which indicates the updated models better represent flooding at other AEPs than the existing FIS models.

  5. Management of undescended testis may be improved with educational updates and new transferring model.

    PubMed

    Yi, Wei; Sheng-de, Wu; Lian-Ju, Shen; Tao, Lin; Da-Wei, He; Guang-Hui, Wei

    2018-05-24

    To investigate whether management of undescended testis (UDT) may be improved with educational updates and a new transferring model among referring providers (RPs). The ages at which orchidopexies were performed in the Children's Hospital of Chongqing Medical University were reviewed. We then introduced educational updates and a new transferring model among RPs. The ages at orchidopexy after our intervention were then collected. Data were represented graphically, and the chi-square test for trend was used. A total of 1543 orchidopexies were performed. The median age at orchidopexy did not match the target age of 6-12 months in any subsequent year. A survey of the RPs showed that 48.85% of them recommended an age below 12 months. However, only 25.50% of them would directly make a surgical referral to pediatric surgery at this point. After we introduced the educational updates, tracking of the age at orchidopexy revealed a statistically significant downward trend. The management of undescended testis may be improved with educational updates and a new transferring model among primary healthcare practitioners.

  6. Formulation of consumables management models: Consumables flight planning worksheet update. [space shuttles

    NASA Technical Reports Server (NTRS)

    Newman, C. M.

    1977-01-01

    The updated consumables flight planning worksheet (CFPWS) is documented. The update includes: (1) additional consumables: ECLSS ammonia, APU propellant, HYD water; (2) additional on orbit activity for development flight instrumentation (DFI); (3) updated use factors for all consumables; and (4) sources and derivations of the use factors.

  7. Atomic spectrometry update - atomic mass spectrometry.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bacon, J.; Crain, J. S.; McMahon, A. W.

    The MS and XRF updates have been published together since their introduction in 1988. In the last few years, however, the two sections have been prepared independently of each other and it therefore seemed appropriate to publish the two sections separately. With effect from this issue, the MS Update will appear in the October issue of JAAS and the XRF Update in the November issue. The format used for the MS section is broadly similar to that used last year, with some additional sub-headings. This Update is intended to cover all atomic and stable isotopic MS techniques, but not those used in studies of fundamental nuclear physics and exotic nuclei far from stability. Also excluded are those reports in which MS is used as a tool in the study of molecular processes and of gaseous components. The review is based on critical selection of developments in instrumentation and methodology, notable for their innovation, originality or achievement of significant advances, and is not intended to be comprehensive in its coverage. Conference papers are only included if they contain enough information to show they meet these criteria, and our policy in general remains one of waiting for a development to appear in a full paper before inclusion in the review. A similar policy applies to foreign language papers unlikely to reach a wide audience. Routine applications of atomic MS are not included in this Update and the reader is referred to the Updates on Industrial Analysis: Metals, Chemicals and Advanced Materials (96/416), Environmental Analysis (96/1444) and Clinical and Biological Materials, Food and Beverages (96/2479). Also excluded are those applications, even if not routine, which use atomic spectroscopy as a tool for the study of a non-atomic property, for example, the use of stable isotope labeling of carbon or nitrogen in biomolecules in metabolic studies. There have been few general reviews on atomic MS of note in the period covered by this update. That of Colodner et al. (95/3890) gave a general review of ion sources, in particular GDMS, ICP-MS, SIMS and TIMS, and that of Blades (95/2568 and 95/3077) was a very general overview of some of the techniques covered in this Update. The review of the literature in the period covered by this Update reveals strong advances in all areas, with a continuing push to achieve better analyses on smaller samples and in less time. Most advances generally require more sophisticated instrumentation, improved sample preparation methods or new methods of sample introduction. This is typified by advances in ICP-MS, which see considerable emphasis on sample introduction techniques and a move towards magnetic sector instruments. Most applications of ICP-MS are now highly routine. There is still, however, a desire to achieve affordable analysis with simplified and cost-effective instruments, as illustrated by the development of mobile, in-situ isotope MS for environmental studies.

  8. Nonequivalence of updating rules in evolutionary games under high mutation rates.

    PubMed

    Kaiping, G A; Jacobs, G S; Cox, S J; Sluckin, T J

    2014-10-01

    Moran processes are often used to model selection in evolutionary simulations. The updating rule in Moran processes is a birth-death process, i.e., selection according to fitness of an individual to give birth, followed by the death of a random individual. For well-mixed populations with only two strategies this updating rule is known to be equivalent to selecting unfit individuals for death and then selecting randomly for procreation (biased death-birth process). It is, however, known that this equivalence does not hold when considering structured populations. Here we study whether changing the updating rule can also have an effect in well-mixed populations in the presence of more than two strategies and high mutation rates. We find, using three models from different areas of evolutionary simulation, that the choice of updating rule can change model results. We show, e.g., that going from the birth-death process to the death-birth process can change a public goods game with punishment from containing mostly defectors to having a majority of cooperative strategies. From the examples given we derive guidelines indicating when the choice of the updating rule can be expected to have an impact on the results of the model.
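
    A minimal sketch of the two schedules being contrasted (not the authors' code; population size, fitness values and mutation rate are illustrative) might look like this for a well-mixed population:

```python
import numpy as np

rng = np.random.default_rng(0)
N, S, MU = 100, 3, 0.1                 # population size, strategies, mutation rate
FITNESS = np.array([1.0, 1.2, 0.9])    # illustrative fixed strategy fitnesses

def birth_death_step(pop):
    # Select a parent proportionally to fitness; a random individual dies.
    f = FITNESS[pop]
    parent = rng.choice(N, p=f / f.sum())
    child = pop[parent] if rng.random() > MU else rng.integers(S)
    pop[rng.integers(N)] = child

def death_birth_step(pop):
    # Select an individual for death with probability inverse to fitness;
    # a randomly chosen individual then reproduces.
    w = 1.0 / FITNESS[pop]
    dead = rng.choice(N, p=w / w.sum())
    parent = rng.integers(N)
    pop[dead] = pop[parent] if rng.random() > MU else rng.integers(S)

def run(step, steps=20000):
    pop = rng.integers(S, size=N)
    for _ in range(steps):
        step(pop)
    return np.bincount(pop, minlength=S) / N   # final strategy frequencies

print("birth-death:", run(birth_death_step))
print("death-birth:", run(death_birth_step))
```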

  9. Nonequivalence of updating rules in evolutionary games under high mutation rates

    NASA Astrophysics Data System (ADS)

    Kaiping, G. A.; Jacobs, G. S.; Cox, S. J.; Sluckin, T. J.

    2014-10-01

    Moran processes are often used to model selection in evolutionary simulations. The updating rule in Moran processes is a birth-death process, i.e., selection according to fitness of an individual to give birth, followed by the death of a random individual. For well-mixed populations with only two strategies this updating rule is known to be equivalent to selecting unfit individuals for death and then selecting randomly for procreation (biased death-birth process). It is, however, known that this equivalence does not hold when considering structured populations. Here we study whether changing the updating rule can also have an effect in well-mixed populations in the presence of more than two strategies and high mutation rates. We find, using three models from different areas of evolutionary simulation, that the choice of updating rule can change model results. We show, e.g., that going from the birth-death process to the death-birth process can change a public goods game with punishment from containing mostly defectors to having a majority of cooperative strategies. From the examples given we derive guidelines indicating when the choice of the updating rule can be expected to have an impact on the results of the model.

  10. Multiobjective constraints for climate model parameter choices: Pragmatic Pareto fronts in CESM1

    NASA Astrophysics Data System (ADS)

    Langenbrunner, B.; Neelin, J. D.

    2017-09-01

    Global climate models (GCMs) are examples of high-dimensional input-output systems, where model output is a function of many variables, and an update in model physics commonly improves performance in one objective function (i.e., measure of model performance) at the expense of degrading another. Here concepts from multiobjective optimization in the engineering literature are used to investigate parameter sensitivity and optimization in the face of such trade-offs. A metamodeling technique called cut high-dimensional model representation (cut-HDMR) is leveraged in the context of multiobjective optimization to improve GCM simulation of the tropical Pacific climate, focusing on seasonal precipitation, column water vapor, and skin temperature. An evolutionary algorithm is used to solve for Pareto fronts, which are surfaces in objective function space along which trade-offs in GCM performance occur. This approach allows the modeler to visualize trade-offs quickly and identify the physics at play. In some cases, Pareto fronts are small, implying that trade-offs are minimal, optimal parameter value choices are more straightforward, and the GCM is well-functioning. In all cases considered here, the control run was found not to be Pareto-optimal (i.e., not on the front), highlighting an opportunity for model improvement through objectively informed parameter selection. Taylor diagrams illustrate that these improvements occur primarily in field magnitude, not spatial correlation, and they show that specific parameter updates can improve fields fundamental to tropical moist processes—namely precipitation and skin temperature—without significantly impacting others. These results provide an example of how basic elements of multiobjective optimization can facilitate pragmatic GCM tuning processes.
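
    The core operation behind a Pareto front is the non-dominated filter over objective-function scores. A minimal sketch with illustrative data (none of the cut-HDMR or evolutionary machinery of the study):

```python
import numpy as np

def pareto_front(costs):
    """Boolean mask of non-dominated rows; costs is (n_points, n_objectives)
    with lower meaning better for every objective (e.g., RMSE of seasonal
    precipitation, column water vapor, and skin temperature)."""
    n = costs.shape[0]
    on_front = np.ones(n, dtype=bool)
    for i in range(n):
        # Point i is dominated if some point is <= in all objectives
        # and strictly < in at least one.
        dominated = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
        on_front[i] = not dominated.any()
    return on_front

# Example: 200 candidate parameter sets scored on three objectives.
rng = np.random.default_rng(1)
scores = rng.random((200, 3))
print(f"{pareto_front(scores).sum()} of 200 candidates are Pareto-optimal")
```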

  11. Evaluation of alternative model-data fusion approaches in water balance estimation across Australia

    NASA Astrophysics Data System (ADS)

    van Dijk, A. I. J. M.; Renzullo, L. J.

    2009-04-01

    Australia's national agencies are developing a continental modelling system to provide a range of water information services. It will include rolling water balance estimation to underpin national water accounts, water resources assessments that interpret current water resources availability and trends in a historical context, and water resources predictions coupled to climate and weather forecasting. The nation-wide coverage, currency, accuracy, and consistency required means that remote sensing will need to play an important role along with in-situ observations. Different approaches to blending models and observations can be considered. Integration of on-ground and remote sensing data into land surface models in atmospheric applications often involves state updating through model-data assimilation techniques. By comparison, retrospective water balance estimation and hydrological scenario modelling to date has mostly relied on static parameter fitting against observations and has made little use of earth observation. The model-data fusion approach most appropriate for a continental water balance estimation system will need to consider the trade-off between computational overhead and the accuracy gains achieved when using more sophisticated synthesis techniques and additional observations. This trade-off was investigated using a landscape hydrological model and satellite-based estimates of soil moisture and vegetation properties for several gauged test catchments in southeast Australia.

  12. Processing arctic eddy-flux data using a simple carbon-exchange model embedded in the ensemble Kalman filter.

    PubMed

    Rastetter, Edward B; Williams, Mathew; Griffin, Kevin L; Kwiatkowski, Bonnie L; Tomasky, Gabrielle; Potosnak, Mark J; Stoy, Paul C; Shaver, Gaius R; Stieglitz, Marc; Hobbie, John E; Kling, George W

    2010-07-01

    Continuous time-series estimates of net ecosystem carbon exchange (NEE) are routinely made using eddy covariance techniques. Identifying and compensating for errors in the NEE time series can be automated using a signal processing filter like the ensemble Kalman filter (EnKF). The EnKF compares each measurement in the time series to a model prediction and updates the NEE estimate by weighting the measurement and model prediction relative to a specified measurement error estimate and an estimate of the model-prediction error that is continuously updated based on model predictions of earlier measurements in the time series. Because of the covariance among model variables, the EnKF can also update estimates of variables for which there is no direct measurement. The resulting estimates evolve through time, enabling the EnKF to be used to estimate dynamic variables like changes in leaf phenology. The evolving estimates can also serve as a means to test the embedded model and reconcile persistent deviations between observations and model predictions. We embedded a simple arctic NEE model into the EnKF and filtered data from an eddy covariance tower located in tussock tundra on the northern foothills of the Brooks Range in northern Alaska, USA. The model predicts NEE based only on leaf area, irradiance, and temperature and has been well corroborated for all the major vegetation types in the Low Arctic using chamber-based data. This is the first application of the model to eddy covariance data. We modified the EnKF by adding an adaptive noise estimator that provides a feedback between persistent model-data deviations and the noise added to the ensemble of Monte Carlo simulations in the EnKF. We also ran the EnKF with both a specified leaf-area trajectory and with the EnKF sequentially recalibrating leaf-area estimates to compensate for persistent model-data deviations. When used together, adaptive noise estimation and sequential recalibration substantially improved filter performance, but neither improved performance when used individually. The EnKF estimates of leaf area followed the expected springtime canopy phenology. However, there were also diel fluctuations in the leaf-area estimates; these are a clear indication of a model deficiency possibly related to vapor pressure effects on canopy conductance.
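
    A stochastic EnKF analysis step of the kind described, in which observing only NEE also updates covarying unobserved states such as leaf area, can be sketched as follows (dimensions, values and the observation operator are illustrative, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(ensemble, obs, obs_err, h):
    """One stochastic EnKF analysis step for a scalar observation.

    ensemble: (n_members, n_state) model states, e.g. [leaf_area, nee].
    obs:      the measurement, e.g. eddy-covariance NEE.
    obs_err:  measurement error standard deviation.
    h:        observation operator mapping a state vector to predicted NEE.
    """
    n = ensemble.shape[0]
    hx = np.apply_along_axis(h, 1, ensemble)        # predicted observations
    x_mean, hx_mean = ensemble.mean(axis=0), hx.mean()
    # Sample covariance between state components and predicted observation:
    # this is what lets an NEE measurement also update unobserved leaf area.
    p_xy = (ensemble - x_mean).T @ (hx - hx_mean) / (n - 1)
    p_yy = np.var(hx, ddof=1) + obs_err**2
    gain = p_xy / p_yy                              # Kalman gain, (n_state,)
    perturbed = obs + rng.normal(0.0, obs_err, n)   # perturbed observations
    return ensemble + np.outer(perturbed - hx, gain)

# Toy usage: state = [leaf_area, nee]; only NEE is observed.
ens = rng.normal([1.5, -2.0], [0.3, 0.5], size=(50, 2))
ens[:, 1] += 0.8 * (ens[:, 0] - 1.5)                # induce state covariance
updated = enkf_update(ens, obs=-1.2, obs_err=0.3, h=lambda x: x[1])
print("posterior mean [leaf_area, nee]:", updated.mean(axis=0))
```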

  13. Structural Health Monitoring of Large Structures

    NASA Technical Reports Server (NTRS)

    Kim, Hyoung M.; Bartkowicz, Theodore J.; Smith, Suzanne Weaver; Zimmerman, David C.

    1994-01-01

    This paper describes a damage detection and health monitoring method that was developed for large space structures using on-orbit modal identification. After evaluating several existing model refinement and model reduction/expansion techniques, a new approach was developed to identify the location and extent of structural damage with a limited number of measurements. A general area of structural damage is first identified and, subsequently, a specific damaged structural component is located. This approach takes advantage of two different model refinement methods (optimal-update and design sensitivity) and two different model size matching methods (model reduction and eigenvector expansion). Performance of the proposed damage detection approach was demonstrated with test data from two different laboratory truss structures. This space technology can also be applied to structural inspection of aircraft, offshore platforms, oil tankers, bridges, and buildings. In addition, its applications to model refinement will improve the design of structural systems such as automobiles and electronic packaging.

  14. Aircraft Mishap Exercise at SLF

    NASA Image and Video Library

    2018-02-14

    NASA Kennedy Space Center's Flight Operations prepares to rehearse a helicopter crash-landing to test new and updated emergency procedures. Called the Aircraft Mishap Preparedness and Contingency Plan, the operation was designed to validate several updated techniques the center's first responders would follow, should they ever need to rescue a crew in case of a real accident. The mishap exercise took place at the center's Shuttle Landing Facility.

  15. OSATE Overview & Community Updates

    DTIC Science & Technology

    2015-02-15

    Delange, Julien

    OSATE overview covering main language capabilities, modeling patterns and model samples for beginners, Error-Model (EMV2) examples and model constructs, and demonstrations of tools.

  16. Real-time simulation of contact and cutting of heterogeneous soft-tissues.

    PubMed

    Courtecuisse, Hadrien; Allard, Jérémie; Kerfriden, Pierre; Bordas, Stéphane P A; Cotin, Stéphane; Duriez, Christian

    2014-02-01

    This paper presents a numerical method for interactive (real-time) simulations, which considerably improves the accuracy of the response of heterogeneous soft-tissue models undergoing contact, cutting and other topological changes. We provide an integrated methodology able to deal both with the ill-conditioning issues associated with material heterogeneities, contact boundary conditions which are one of the main sources of inaccuracies, and cutting which is one of the most challenging issues in interactive simulations. Our approach is based on an implicit time integration of a non-linear finite element model. To enable real-time computations, we propose a new preconditioning technique, based on an asynchronous update at low frequency. The preconditioner is not only used to improve the computation of the deformation of the tissues, but also to simulate the contact response of homogeneous and heterogeneous bodies with the same accuracy. We also address the problem of cutting the heterogeneous structures and propose a method to update the preconditioner according to the topological modifications. Finally, we apply our approach to three challenging demonstrators: (i) a simulation of cataract surgery (ii) a simulation of laparoscopic hepatectomy (iii) a brain tumor surgery. Copyright © 2013 Elsevier B.V. All rights reserved.
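
    The asynchronous low-frequency preconditioner update can be caricatured as reusing a slightly stale factorization across several solves and rebuilding it only occasionally. A minimal sketch, assuming an SPD stand-in system and a simply periodic (rather than asynchronous) refresh:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(3)
n, refresh_every = 500, 10

# Illustrative SPD stiffness-like system standing in for a tissue model.
A = sp.diags([-1.0, 2.2, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

M = None
for step in range(30):
    # Small symmetric perturbation of the operator, a stand-in for the
    # deformation and topological changes occurring between frames.
    A = (A + sp.diags(0.001 * rng.random(n), 0)).tocsc()
    if step % refresh_every == 0:
        # Low-frequency preconditioner rebuild (asynchronous in the paper,
        # simply periodic in this sketch).
        ilu = spla.spilu(A)
        M = spla.LinearOperator((n, n), matvec=ilu.solve)
    b = rng.random(n)
    # The slightly stale preconditioner still accelerates convergence.
    x, info = spla.cg(A, b, M=M)
    assert info == 0, "CG failed to converge"
```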

  17. The diagnostic value of specific IgE to Ara h 2 to predict peanut allergy in children is comparable to a validated and updated diagnostic prediction model.

    PubMed

    Klemans, Rob J B; Otte, Dianne; Knol, Mirjam; Knol, Edward F; Meijer, Yolanda; Gmelig-Meyling, Frits H J; Bruijnzeel-Koomen, Carla A F M; Knulst, André C; Pasmans, Suzanne G M A

    2013-01-01

    A diagnostic prediction model for peanut allergy in children was recently published, using 6 predictors: sex, age, history, skin prick test, peanut specific immunoglobulin E (sIgE), and total IgE minus peanut sIgE. To validate this model and update it by adding allergic rhinitis, atopic dermatitis, and sIgE to peanut components Ara h 1, 2, 3, and 8 as candidate predictors. To develop a new model based only on sIgE to peanut components. Validation was performed by testing discrimination (diagnostic value) with an area under the receiver operating characteristic curve and calibration (agreement between predicted and observed frequencies of peanut allergy) with the Hosmer-Lemeshow test and a calibration plot. The performance of the (updated) models was similarly analyzed. Validation of the model in 100 patients showed good discrimination (88%) but poor calibration (P < .001). In the updating process, age, history, and additional candidate predictors did not significantly increase discrimination, being 94%, and leaving only 4 predictors of the original model: sex, skin prick test, peanut sIgE, and total IgE minus sIgE. When building a model with sIgE to peanut components, Ara h 2 was the only predictor, with a discriminative ability of 90%. Cutoff values with 100% positive and negative predictive values could be calculated for both the updated model and sIgE to Ara h 2. In this way, the outcome of the food challenge could be predicted with 100% accuracy in 59% (updated model) and 50% (Ara h 2) of the patients. Discrimination of the validated model was good; however, calibration was poor. The discriminative ability of Ara h 2 was almost comparable to that of the updated model, containing 4 predictors. With both models, the need for peanut challenges could be reduced by at least 50%. Copyright © 2012 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.
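
    The two validation ingredients named here, discrimination via the area under the ROC curve and calibration via a Hosmer-Lemeshow-type statistic, can be sketched as follows (synthetic single-predictor data standing in for sIgE to Ara h 2; not the study's data or model):

```python
import numpy as np
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

# Synthetic stand-in: one predictor (think log sIgE to Ara h 2) and a
# binary food-challenge outcome.
x = rng.normal(size=(400, 1))
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-2.0 * x[:, 0]))).astype(int)

model = LogisticRegression().fit(x[:200], y[:200])   # "derivation" half
p = model.predict_proba(x[200:])[:, 1]               # "validation" half
y_val = y[200:]

# Discrimination: area under the ROC curve.
print("AUC:", round(roc_auc_score(y_val, p), 3))

# Calibration: Hosmer-Lemeshow statistic over deciles of predicted risk.
edges = np.quantile(p, np.linspace(0, 1, 11))
groups = np.clip(np.digitize(p, edges[1:-1]), 0, 9)
hl = 0.0
for g in range(10):
    m = groups == g
    if not m.any():
        continue
    n_g, obs, exp = m.sum(), y_val[m].sum(), p[m].sum()
    hl += (obs - exp) ** 2 / (exp * (1.0 - exp / n_g) + 1e-12)
print("Hosmer-Lemeshow p-value:", round(1.0 - chi2.cdf(hl, df=8), 3))
```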

  18. Description and evaluation of the Community Multiscale Air ...

    EPA Pesticide Factsheets

    The Community Multiscale Air Quality (CMAQ) model is a comprehensive multipollutant air quality modeling system developed and maintained by the US Environmental Protection Agency's (EPA) Office of Research and Development (ORD). Recently, version 5.1 of the CMAQ model (v5.1) was released to the public, incorporating a large number of science updates and extended capabilities over the previous release version of the model (v5.0.2). These updates include the following: improvements in the meteorological calculations in both CMAQ and the Weather Research and Forecast (WRF) model used to provide meteorological fields to CMAQ, updates to the gas and aerosol chemistry, revisions to the calculations of clouds and photolysis, and improvements to the dry and wet deposition in the model. Sensitivity simulations isolating several of the major updates to the modeling system show that changes to the meteorological calculations result in enhanced afternoon and early evening mixing in the model, periods when the model historically underestimates mixing. This enhanced mixing results in higher ozone (O3) mixing ratios on average due to reduced NO titration, and lower fine particulate matter (PM2.5) concentrations due to greater dilution of primary pollutants (e.g., elemental and organic carbon). Updates to the clouds and photolysis calculations greatly improve consistency between the WRF and CMAQ models and result in generally higher O3 mixing ratios, primarily due to reduced

  19. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-04-15

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.

  20. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  1. An interval model updating strategy using interval response surface models

    NASA Astrophysics Data System (ADS)

    Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin

    2015-08-01

    Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. However in practice, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information of a structure. At this moment an interval model updating procedure shows its superiority in the aspect of problem simplification since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be maximally avoided leading to accurate estimation of parameter intervals. Meanwhile, the establishment of an interval inverse problem is highly simplified, accompanied by a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and also against a set of experimentally tested steel plates.
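
    One half of the idea, obtaining a response interval from parameter intervals without the overestimation of naive interval arithmetic, can be illustrated by optimizing a fitted response surface over the parameter box. The quadratic surface below is a toy; the paper's interval RSM formulation and the inverse (updating) step are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative response surface fitted beforehand: a natural frequency
# f(k, m) as a toy quadratic in stiffness k and mass m.
def rs(x):
    k, m = x
    return 2.0 + 0.8 * k - 0.5 * m + 0.1 * k * m

bounds = [(0.9, 1.1), (1.8, 2.2)]   # parameter intervals for k and m

# Response interval obtained by optimizing over the parameter box,
# which avoids the overestimation of naive interval arithmetic.
lo = minimize(rs, x0=[1.0, 2.0], bounds=bounds).fun
hi = -minimize(lambda x: -rs(x), x0=[1.0, 2.0], bounds=bounds).fun
print(f"response interval: [{lo:.3f}, {hi:.3f}]")
```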

  2. Analysis of seasonal characteristics of Sambhar Salt Lake, India, from digitized Space Shuttle photography

    NASA Technical Reports Server (NTRS)

    Lulla, Kamlesh P.; Helfert, Michael R.

    1989-01-01

    Sambhar Salt Lake is the largest salt lake (230 sq km) in India, situated in the northwest near Jaipur. Analysis of Space Shuttle photographs of this ephemeral lake reveals that water levels and lake basin land-use information can be extracted by both the digital and manual analysis techniques. Seasonal characteristics captured by the two Shuttle photos used in this study show that additional land use/cover categories can be mapped from the dry season photos. This additional information is essential for precise cartographic updates, and provides seasonal hydrologic profiles and inputs for potential mesoscale climate modeling. This paper extends the digitization and mensuration techniques originally developed for space photography and applied to other regions (e.g., Lake Chad, Africa, and Great Salt Lake, USA).

  3. Updating the Behavior Engineering Model.

    ERIC Educational Resources Information Center

    Chevalier, Roger

    2003-01-01

    Considers Thomas Gilbert's Behavior Engineering Model as a tool for systematically identifying barriers to individual and organizational performance. Includes a detailed case study and a performance aid that incorporates gap analysis, cause analysis, and force field analysis to update the original model. (Author/LRW)

  4. Spacecraft momentum management procedures. [large space telescope

    NASA Technical Reports Server (NTRS)

    Chen, L. C.; Davenport, P. B.; Sturch, C. R.

    1980-01-01

    Techniques appropriate for implementation onboard the space telescope and other spacecraft to manage the accumulation of momentum in reaction wheel control systems using magnetic torquing coils are described. Generalized analytical equations are derived for momentum control laws that command the magnetic torquers. These control laws naturally fall into two main categories according to the methods used for updating the magnetic dipole command: closed loop, in which the update is based on current measurements to achieve a desired torque instantaneously, and open-loop, in which the update is based on predicted information to achieve a desired momentum at the end of a period of time. Physical interpretations of control laws in general and of the Space Telescope cross product and minimum energy control laws in particular are presented, and their merits and drawbacks are discussed. A technique for retaining the advantages of both the open-loop and the closed-loop control laws is introduced. Simulation results are presented to compare the performance of these control laws in the Space Telescope environment.
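
    The closed-loop cross-product law referred to commands a magnetic dipole m = k (Δh × B)/|B|², whose torque m × B opposes the component of excess momentum perpendicular to the local field. A hedged numerical sketch (the gain and field values are illustrative, not Space Telescope parameters):

```python
import numpy as np

def cross_product_dipole(h_error, B, k=1.0e4):
    """Closed-loop cross-product law: commanded dipole m = k (h_err x B) / |B|^2.

    h_error: excess reaction-wheel momentum to be dumped (N m s), body frame.
    B:       measured local geomagnetic field (T), body frame.
    The torque T = m x B opposes only the component of h_error that is
    perpendicular to B; momentum along B must wait for the field direction
    to change, which motivates the open-loop (predictive) laws.
    """
    return k * np.cross(h_error, B) / np.dot(B, B)

# Illustrative values: a ~0.35 gauss field and excess wheel momentum.
B = np.array([2.0e-5, -1.0e-5, 2.5e-5])
h_err = np.array([30.0, -20.0, 35.0])
m = cross_product_dipole(h_err, B)
print("commanded dipole (A m^2):", m)
print("resulting torque (N m):", np.cross(m, B))
```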

  5. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    USGS Publications Warehouse

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  6. A rapid prototyping/artificial intelligence approach to space station-era information management and access

    NASA Technical Reports Server (NTRS)

    Carnahan, Richard S., Jr.; Corey, Stephen M.; Snow, John B.

    1989-01-01

    Applications of rapid prototyping and Artificial Intelligence techniques to problems associated with Space Station-era information management systems are described. In particular, the work is centered on issues related to: (1) intelligent man-machine interfaces applied to scientific data user support, and (2) the requirement that intelligent information management systems (IIMS) be able to efficiently process metadata updates concerning types of data handled. The advanced IIMS represents functional capabilities driven almost entirely by the needs of potential users. The volume of scientific data projected to be generated in the Space Station era is likely to be significantly greater than that currently processed and analyzed. Information about scientific data must be presented clearly, concisely, and with support features to allow users at all levels of expertise efficient and cost-effective data access. Additionally, mechanisms for allowing more efficient IIMS metadata update processes must be addressed. The work reported covers the following IIMS design aspects: IIMS data and metadata modeling, including the automatic updating of IIMS-contained metadata; IIMS user-system interface considerations, including significant problems associated with remote access, user profiles, and on-line tutorial capabilities; and development of an IIMS query and browse facility, including the capability to deal with spatial information. A working prototype has been developed and is being enhanced.

  7. Information dissemination model for social media with constant updates

    NASA Astrophysics Data System (ADS)

    Zhu, Hui; Wu, Heng; Cao, Jin; Fu, Gang; Li, Hui

    2018-07-01

    With the development of social media tools and the pervasiveness of smart terminals, social media has become a significant source of information for many individuals. However, false information can spread rapidly, which may result in negative social impacts and serious economic losses. Thus, reducing the unfavorable effects of false information has become an urgent challenge. In this paper, a new competitive model called DMCU is proposed to describe the dissemination of information with constant updates in social media. In the model, we focus on the competitive relationship between the original false information and the updated information, and then define the priority of related information. To evaluate the effectiveness of the proposed model, data sets containing actual social media activity were utilized in experiments. Simulation results demonstrate that the DMCU model can precisely describe the process of information dissemination with constant updates, and that it can be used to forecast information dissemination trends on social media.

  8. Seismic hazard in the eastern United States

    USGS Publications Warehouse

    Mueller, Charles; Boyd, Oliver; Petersen, Mark D.; Moschetti, Morgan P.; Rezaeian, Sanaz; Shumway, Allison

    2015-01-01

    The U.S. Geological Survey seismic hazard maps for the central and eastern United States were updated in 2014. We analyze results and changes for the eastern part of the region. Ratio maps are presented, along with tables of ground motions and deaggregations for selected cities. The Charleston fault model was revised, and a new fault source for Charlevoix was added. Background seismicity sources utilized an updated catalog, revised completeness and recurrence models, and a new adaptive smoothing procedure. Maximum-magnitude models and ground motion models were also updated. Broad, regional hazard reductions of 5%–20% are mostly attributed to new ground motion models with stronger near-source attenuation. The revised Charleston fault geometry redistributes local hazard, and the new Charlevoix source increases hazard in northern New England. Strong increases in mid- to high-frequency hazard at some locations—for example, southern New Hampshire, central Virginia, and eastern Tennessee—are attributed to updated catalogs and/or smoothing.

  9. Updated Tomographic Seismic Imaging at Kilauea Volcano, Hawaii

    NASA Astrophysics Data System (ADS)

    Okubo, P.; Johnson, J.; Felts, E. S.; Flores, N.

    2013-12-01

    Improved and more detailed geophysical, geological, and geochemical observations and measurements at Kilauea, along with prolonged eruptions at its summit caldera and east rift zone, are encouraging more ambitious interpretation and modeling of volcanic processes over a range of temporal and spatial scales. We are updating three-dimensional models of seismic wave-speed distributions within Kilauea using local earthquake arrival time tomography to support waveform-based modeling of seismic source mechanisms. We start from a tomographic model derived from a combination of permanent seismic stations comprising the Hawaiian Volcano Observatory (HVO) seismographic network and a dense deployment of temporary stations in the Kilauea caldera region in 1996. Using P- and S-wave arrival times measured from the HVO network for local earthquakes from 1997 through 2012, we compute velocity models with the finite difference tomographic seismic imaging technique implemented by Benz and others (1996), and applied to numerous volcanoes including Kilauea. Particular impetus to our current modeling was derived from a focused effort to review seismicity occurring in Kilauea's summit caldera and adjoining regions in 2012. Our results reveal clear P-wave low-velocity features at and slightly below sea level beneath Kilauea's summit caldera, lying between Halemaumau Crater and the north-facing scarps that mark the southern caldera boundary. The results are also suggestive of changes in seismic velocity distributions between 1996 and 2012. One example of such a change is an apparent decrease in the size and southeastward extent, compared to the earlier model, of the low VP feature imaged with the more recent data. However, we recognize the distinct possibility that these changes are reflective of differences in earthquake and seismic station distributions in the respective datasets, and we need to further populate the more recent HVO seismicity catalogs to possibly address this concern. We also look forward to more complete implementation at HVO of seismic imaging techniques that use ambient seismic noise retrieved from continuous seismic recordings, and to using earthquake arrival times and ambient seismic noise jointly to tomographically image Kilauea.

  10. Probabilistic In Situ Stress Estimation and Forecasting using Sequential Data Assimilation

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Dinther, Y.; Kuensch, H. R.

    2017-12-01

    Our physical understanding and forecasting ability of earthquakes, and other solid Earth dynamic processes, is significantly hampered by limited indications on the evolving state of stress and strength on faults. Integrating observations and physics-based numerical modeling to quantitatively estimate this evolution of a fault's state is crucial. However, systematic attempts are limited and tenuous, especially in light of the scarcity and uncertainty of natural data and the difficulty of modelling the physics governing earthquakes. We adopt the statistical framework of sequential data assimilation - extensively developed for weather forecasting - to efficiently integrate observations and prior knowledge in a forward model, while acknowledging errors in both. To prove this concept we perform a perfect model test in a simplified subduction zone setup, where we assimilate synthetic noised data on velocities and stresses from a single location. Using an Ensemble Kalman Filter, these data and their errors are assimilated to update 150 ensemble members from a Partial Differential Equation-driven seismic cycle model. Probabilistic estimates of fault stress and dynamic strength evolution capture the truth exceptionally well. This is possible, because the sampled error covariance matrix contains prior information from the physics that relates velocities, stresses and pressure at the surface to those at the fault. During the analysis step, stress and strength distributions are thus reconstructed such that fault coupling can be updated to either inhibit or trigger events. In the subsequent forecast step the physical equations are solved to propagate the updated states forward in time and thus provide probabilistic information on the occurrence of the next event. At subsequent assimilation steps, the system's forecasting ability turns out to be significantly better than that of a periodic recurrence model (requiring an alarm 17% vs. 68% of the time). This thus provides distinct added value with respect to using observations or numerical models separately. Although several challenges for applications to a natural setting remain, these first results indicate the large potential of data assimilation techniques for probabilistic seismic hazard assessment and other challenges in dynamic solid earth systems.

  11. Improved Ant Algorithms for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi

    2014-01-01

    The use of ant colony optimization (ACO) for software test case generation is a very popular domain in software testing engineering. However, traditional ACO has flaws: pheromone is relatively scarce early in the search, search efficiency is low, the search model is too simple, and the positive feedback mechanism readily produces stagnation and premature convergence. This paper introduces improved ACO variants for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensively improved ant colony optimization (ACIACO), which is based on all three of the above methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain precocity, promote case coverage, and reduce the number of iterations. PMID:24883391
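
    The global path pheromone update that IGPACO builds on is evaporation followed by deposition along the best path found. A minimal sketch (the coverage-based reward and matrix sizes are illustrative placeholders, not the paper's exact rule):

```python
import numpy as np

def global_pheromone_update(tau, best_path, reward, rho=0.1):
    """Global path pheromone update: evaporate everywhere, then deposit
    reward on the edges of the iteration-best path.

    tau:       (n, n) pheromone matrix over the test-case search graph.
    best_path: node sequence of the best test case found this iteration.
    reward:    deposit amount, e.g. proportional to coverage gained.
    rho:       volatilization coefficient (the quantity IPVACO adapts).
    """
    tau *= (1.0 - rho)
    for a, b in zip(best_path[:-1], best_path[1:]):
        tau[a, b] += reward
    return tau

tau = global_pheromone_update(np.ones((6, 6)), best_path=[0, 2, 5], reward=0.5)
print(tau[0, 2], tau[2, 5], tau[0, 1])   # reinforced vs. merely evaporated
```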

  12. Tangle-Free Finite Element Mesh Motion for Ablation Problems

    NASA Technical Reports Server (NTRS)

    Droba, Justin

    2016-01-01

    Mesh motion is the process by which a computational domain is updated in time to reflect physical changes in the material the domain represents. Such a technique is needed in the study of the thermal response of ablative materials, which erode when strong heating is applied to the boundary. Traditionally, the thermal solver is coupled with a linear elastic or biharmonic system whose sole purpose is to update mesh node locations in response to altering boundary heating. Simple mesh motion algorithms rely on boundary surface normals. In such schemes, evolution in time will eventually cause the mesh to intersect and "tangle" with itself, causing failure. Furthermore, such schemes are greatly limited in the problem geometries on which they will be successful. This paper presents a comprehensive and sophisticated scheme that tailors the directions of motion based on context. By choosing directions for each node smartly, the inevitable tangle can be completely avoided and mesh motion on complex geometries can be modeled accurately.

  13. SPITZER 70 AND 160 μm OBSERVATIONS OF THE COSMOS FIELD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frayer, D. T.; Huynh, M. T.; Bhattacharya, B.

    2009-11-15

    We present Spitzer 70 and 160 μm observations of the COSMOS Spitzer survey (S-COSMOS). The data processing techniques are discussed for the publicly released products consisting of images and source catalogs. We present accurate 70 and 160 μm source counts of the COSMOS field and find reasonable agreement with measurements in other fields and with model predictions. The previously reported counts for GOODS-North and the extragalactic First Look Survey are updated with the latest calibration, and counts are measured based on the large area SWIRE survey to constrain the bright source counts. We measure an extragalactic confusion noise level of σc = 9.4 ± 3.3 mJy (q = 5) for the MIPS 160 μm band based on the deep S-COSMOS data and report an updated confusion noise level of σc = 0.35 ± 0.15 mJy (q = 5) for the MIPS 70 μm band.

  14. Disruption of the Right Temporoparietal Junction Impairs Probabilistic Belief Updating.

    PubMed

    Mengotti, Paola; Dombert, Pascasie L; Fink, Gereon R; Vossel, Simone

    2017-05-31

    Generating and updating probabilistic models of the environment is a fundamental modus operandi of the human brain. Although crucial for various cognitive functions, the neural mechanisms of these inference processes remain to be elucidated. Here, we show the causal involvement of the right temporoparietal junction (rTPJ) in updating probabilistic beliefs and we provide new insights into the chronometry of the process by combining online transcranial magnetic stimulation (TMS) with computational modeling of behavioral responses. Female and male participants performed a modified location-cueing paradigm, where false information about the percentage of cue validity (%CV) was provided in half of the experimental blocks to prompt updating of prior expectations. Online double-pulse TMS over rTPJ 300 ms (but not 50 ms) after target appearance selectively decreased participants' updating of false prior beliefs concerning %CV, reflected in a decreased learning rate of a Rescorla-Wagner model. Online TMS over rTPJ also impacted on participants' explicit beliefs, causing them to overestimate %CV. These results confirm the involvement of rTPJ in updating of probabilistic beliefs, thereby advancing our understanding of this area's function during cognitive processing. SIGNIFICANCE STATEMENT Contemporary views propose that the brain maintains probabilistic models of the world to minimize surprise about sensory inputs. Here, we provide evidence that the right temporoparietal junction (rTPJ) is causally involved in this process. Because neuroimaging has suggested that rTPJ is implicated in divergent cognitive domains, the demonstration of an involvement in updating internal models provides a novel unifying explanation for these findings. We used computational modeling to characterize how participants change their beliefs after new observations. By interfering with rTPJ activity through online transcranial magnetic stimulation, we showed that participants were less able to update prior beliefs with TMS delivered at 300 ms after target onset. Copyright © 2017 the authors 0270-6474/17/375419-10$15.00/0.
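
    The learning-rate measure comes from a Rescorla-Wagner model, whose update is v ← v + α(outcome − v). A small sketch showing how a reduced α (the direction of the reported TMS effect) slows belief updating toward the true cue validity, using synthetic trials rather than the study's data:

```python
import numpy as np

def rescorla_wagner(outcomes, alpha, v0=0.5):
    """Trial-by-trial belief update: v <- v + alpha * (outcome - v).

    outcomes: 1 if the cue was valid on a trial, else 0.
    alpha:    learning rate; the reported TMS effect corresponds to a
              reduced alpha when updating false prior beliefs.
    """
    v = np.empty(len(outcomes) + 1)
    v[0] = v0
    for t, o in enumerate(outcomes):
        v[t + 1] = v[t] + alpha * (o - v[t])
    return v

rng = np.random.default_rng(5)
outcomes = (rng.random(80) < 0.8).astype(float)   # true %CV = 80
print("final belief, alpha=0.30:", round(rescorla_wagner(outcomes, 0.30)[-1], 2))
print("final belief, alpha=0.05:", round(rescorla_wagner(outcomes, 0.05)[-1], 2))
```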

  15. Kalman filter data assimilation: targeting observations and parameter estimation.

    PubMed

    Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex

    2014-06-01

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.

  16. Analysis and Synthesis of Load Forecasting Data for Renewable Integration Studies: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steckler, N.; Florita, A.; Zhang, J.

    2013-11-01

    As renewable energy constitutes greater portions of the generation fleet, the importance of modeling uncertainty as part of integration studies also increases. In pursuit of optimal system operations, it is important to capture not only the definitive behavior of power plants, but also the risks associated with systemwide interactions. This research examines the dependence of load forecast errors on external predictor variables such as temperature, day type, and time of day. The analysis was utilized to create statistically relevant instances of sequential load forecasts with only a time series of historic, measured load available. The creation of such load forecasts relies on Bayesian techniques for informing and updating the model, thus providing a basis for networked and adaptive load forecast models in future operational applications.

  17. Kalman filter data assimilation: Targeting observations and parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellsky, Thomas, E-mail: bellskyt@asu.edu; Kostelich, Eric J.; Mahalov, Alex

    2014-06-15

    This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.

  18. Thoracic respiratory motion estimation from MRI using a statistical model and a 2-D image navigator.

    PubMed

    King, A P; Buerger, C; Tsoumpas, C; Marsden, P K; Schaeffter, T

    2012-01-01

    Respiratory motion models have potential application for estimating and correcting the effects of motion in a wide range of applications, for example in PET-MR imaging. Given that motion cycles caused by breathing are only approximately repeatable, an important quality of such models is their ability to capture and estimate the intra- and inter-cycle variability of the motion. In this paper we propose and describe a technique for free-form nonrigid respiratory motion correction in the thorax. Our model is based on a principal component analysis of the motion states encountered during different breathing patterns, and is formed from motion estimates made from dynamic 3-D MRI data. We apply our model using a data-driven technique based on a 2-D MRI image navigator. Unlike most previously reported work in the literature, our approach is able to capture both intra- and inter-cycle motion variability. In addition, the 2-D image navigator can be used to estimate how applicable the current motion model is, and hence report when more imaging data is required to update the model. We also use the motion model to decide on the best positioning for the image navigator. We validate our approach using MRI data acquired from 10 volunteers and demonstrate improvements of up to 40.5% over other reported motion modelling approaches, which corresponds to 61% of the overall respiratory motion present. Finally we demonstrate one potential application of our technique: MRI-based motion correction of real-time PET data for simultaneous PET-MRI acquisition. Copyright © 2011 Elsevier B.V. All rights reserved.
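
    The pipeline of fitting principal components to training motion states and then driving the coefficients from a 2-D navigator can be caricatured in a few lines. Everything below is synthetic (toy motion fields and navigator features); none of the authors' registration or MRI machinery is reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)

# Synthetic training data: 120 breathing states, each a flattened 3-D
# displacement field (3000 numbers here) plus two navigator features
# (think diaphragm and chest-wall positions from the 2-D navigator).
t = rng.random(120) * 2 * np.pi
mode1, mode2 = rng.normal(size=3000), rng.normal(size=3000)
fields = np.outer(np.sin(t), mode1) + np.outer(np.sin(2 * t), mode2)
navigators = np.column_stack([np.sin(t), np.sin(2 * t)])
navigators += 0.01 * rng.normal(size=navigators.shape)

# Motion model: principal components of the training motion states.
pca = PCA(n_components=2).fit(fields)
coeffs = pca.transform(fields)

# Data-driven application: map navigator features to PCA coefficients.
reg = LinearRegression().fit(navigators, coeffs)

# At scan time a new navigator frame yields a full-field motion estimate.
nav_new = np.array([[0.4, -0.7]])
field_est = pca.inverse_transform(reg.predict(nav_new))
print(field_est.shape)   # (1, 3000)
```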

  19. FORCARB2: An updated version of the U.S. Forest Carbon Budget Model

    Treesearch

    Linda S. Heath; Michael C. Nichols; James E. Smith; John R. Mills

    2010-01-01

    FORCARB2, an updated version of the U.S. FORest CARBon Budget Model (FORCARB), produces estimates of carbon stocks and stock changes for forest ecosystems and forest products at 5-year intervals. FORCARB2 includes a new methodology for carbon in harvested wood products, updated initial inventory data, a revised algorithm for dead wood, and now includes public forest...

  20. Transcranial Electrical Stimulation

    PubMed Central

    Fertonani, Anna; Miniussi, Carlo

    2016-01-01

    In recent years, there has been remarkable progress in the understanding and practical use of transcranial electrical stimulation (tES) techniques. Nevertheless, to date, this experimental effort has not been accompanied by substantial reflections on the models and mechanisms that could explain the stimulation effects. Given these premises, the aim of this article is to provide an updated picture of what we know about the theoretical models of tES that have been proposed to date, contextualized in a more specific and unitary framework. We demonstrate that these models can explain the tES behavioral effects as distributed along a continuum from stimulation dependent to network activity dependent. In this framework, we also propose that stochastic resonance is a useful mechanism to explain the general online neuromodulation effects of tES. Moreover, we highlight the aspects that should be considered in future research. We emphasize that tES is not an “easy-to-use” technique; however, it may represent a very fruitful approach if applied within rigorous protocols, with deep knowledge of both the behavioral and cognitive aspects and the more recent advances in the application of stimulation. PMID:26873962

  1. Sequential updating of a new dynamic pharmacokinetic model for caffeine in premature neonates.

    PubMed

    Micallef, Sandrine; Amzal, Billy; Bach, Véronique; Chardon, Karen; Tourneux, Pierre; Bois, Frédéric Y

    2007-01-01

    Caffeine treatment is widely used in nursing care to reduce the risk of apnoea in premature neonates. Caffeine concentration in blood is an important indicator for checking the therapeutic efficacy of the treatment against apnoea. The present study was aimed at building a pharmacokinetic model as a basis for a medical decision support tool. In the proposed model, time dependence of physiological parameters is introduced to describe the rapid growth of neonates. To take into account the large variability in the population, the pharmacokinetic model is embedded in a population structure. The whole model is inferred within a Bayesian framework. To update caffeine concentration predictions as data of an incoming patient are collected, we propose a fast method that can be used in a medical context. This involves the sequential updating of model parameters (at individual and population levels) via a stochastic particle algorithm. Our model provides better predictions than those obtained with previously published models. We show, through an example, that sequential updating improves predictions of caffeine concentration in blood (reducing bias and the length of credibility intervals). The updating of the pharmacokinetic model using body mass and caffeine concentration data is studied; it shows how informative caffeine concentration data are in contrast to body mass data. This study provides the methodological basis to predict caffeine concentration in blood after a given treatment, if data are collected on the treated neonate.
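
    The sequential particle updating described can be sketched for a one-compartment caffeine model: a particle cloud over clearance and volume is reweighted by each new concentration measurement and resampled. All priors, observations and noise levels below are invented for illustration and are not the study's model or data.

```python
import numpy as np

rng = np.random.default_rng(7)
n_part, dose, sigma = 5000, 10.0, 1.0   # particles, mg/kg dose, assay SD (mg/L)

# Prior particle cloud over PK parameters (illustrative lognormal priors):
# clearance CL (L/h/kg) and volume of distribution V (L/kg).
cl = rng.lognormal(np.log(0.01), 0.3, n_part)
v = rng.lognormal(np.log(0.8), 0.2, n_part)

def predict(cl, v, t):
    # One-compartment bolus model: C(t) = (dose / V) * exp(-(CL / V) * t).
    return dose / v * np.exp(-cl / v * t)

# Sequential updating: reweight and resample as each assay arrives.
for t_obs, c_obs in [(24.0, 11.5), (72.0, 10.2)]:        # invented data
    w = np.exp(-0.5 * ((predict(cl, v, t_obs) - c_obs) / sigma) ** 2)
    idx = rng.choice(n_part, n_part, p=w / w.sum())       # resampling
    cl, v = cl[idx], v[idx]
    cl = cl * np.exp(rng.normal(0, 0.02, n_part))         # jitter to keep
    v = v * np.exp(rng.normal(0, 0.02, n_part))           # particle diversity
    c96 = predict(cl, v, 96.0)
    print(f"after t={t_obs:4.0f} h: C(96 h) = {c96.mean():.1f} "
          f"+/- {c96.std():.1f} mg/L")
```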

  2. Use of the 3D surgical modelling technique with open-source software for mandibular fibula free flap reconstruction and its surgical guides.

    PubMed

    Ganry, L; Hersant, B; Quilichini, J; Leyder, P; Meningaud, J P

    2017-06-01

    Tridimensional (3D) surgical modelling is a necessary step in creating 3D-printed surgical tools, and expensive professional software is generally needed. Open-source software is functional, reliable, regularly updated, may be downloaded for free, and can be used to produce 3D models. Few surgical teams have used free solutions to master 3D surgical modelling for reconstructive surgery with osseous free flaps. We describe an open-source 3D surgical modelling protocol to perform a fast and nearly free mandibular reconstruction with a microvascular fibula free flap and its surgical guides, with no need for engineering support. Four specialised open-source software packages were used in succession to perform our 3D modelling: OsiriX®, Meshlab®, Netfabb® and Blender®. Digital Imaging and Communications in Medicine (DICOM) data on the patient's skull and fibula, obtained with a computerised tomography (CT) scan, were needed. The 3D models of the reconstructed mandible and its surgical guides were created. This new strategy may improve surgical management in oral and craniomaxillofacial surgery. Further clinical studies are needed to demonstrate the feasibility, reproducibility, transfer of know-how and benefits of this technique. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  3. Highly efficient model updating for structural condition assessment of large-scale bridges.

    DOT National Transportation Integrated Search

    2015-02-01

    For efficiently updating models of large-scale structures, the response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying the proposed method a...
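
    Although this record is truncated, the RBF response-surface approach it names is standard: fit an RBF surrogate mapping structural parameters to modal outputs on a design of experiments, then update the parameters against measurements using the cheap surrogate instead of the finite element model. A hedged sketch with a toy stand-in for the FE model:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

rng = np.random.default_rng(8)

def fe_model(theta):
    # Stand-in for an expensive finite element run: two "natural
    # frequencies" as a smooth function of two structural parameters.
    return np.array([np.sqrt(theta[0]), np.sqrt(theta[0] + 2.0 * theta[1])])

# Design of experiments: sample the parameter space, run the model once per point.
doe = rng.uniform(0.5, 2.0, size=(60, 2))
responses = np.array([fe_model(th) for th in doe])

# The RBF response surface replaces the FE model inside the updating loop.
surrogate = RBFInterpolator(doe, responses, kernel="thin_plate_spline")

measured = fe_model(np.array([1.3, 0.9]))   # pretend test data

def misfit(theta):
    return np.sum((surrogate(theta[None, :])[0] - measured) ** 2)

res = minimize(misfit, x0=np.array([1.0, 1.0]), bounds=[(0.5, 2.0), (0.5, 2.0)])
print("updated parameters:", res.x)          # should approach [1.3, 0.9]
```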

  4. UPDATE ON EPA'S URBAN WATERSHED MANAGEMENT BRANCH MODELING ACTIVITIES

    EPA Science Inventory

    This paper provides the Stormwater Management Model (SWMM) user community with a description of the Environmental Protection Agency (EPA's) Office of Research and Development (ORD) approach to urban watershed modeling research and provides an update on current ORD SWMM-related pr...

  5. Update schemes of multi-velocity floor field cellular automaton for pedestrian dynamics

    NASA Astrophysics Data System (ADS)

    Luo, Lin; Fu, Zhijian; Cheng, Han; Yang, Lizhong

    2018-02-01

    Modeling pedestrian movement is an interesting problem both in statistical physics and in computational physics. Update schemes of cellular automaton (CA) models for pedestrian dynamics govern the schedule of pedestrian movement. Usually, different update schemes make the models behave in different ways, which should be carefully recalibrated. Thus, in this paper, we investigated the influence of four different update schemes, namely the parallel/synchronous, random, order-sequential, and shuffled schemes, on pedestrian dynamics. The multi-velocity floor field cellular automaton (FFCA), which considers the changes of pedestrians' moving properties along walking paths and the heterogeneity of pedestrians' walking abilities, was used. Under the parallel scheme only, collision detection and resolution must be considered, which distinguishes it greatly from the other update schemes. For pedestrian evacuation, the evacuation time is longer, and the difference in pedestrians' walking abilities is better reflected, under the parallel scheme. In the face of a bottleneck, for example an exit, using the parallel scheme leads to a longer congestion period and a more dispersive density distribution. The exit flow and the space-time distribution of density and velocity differ significantly across the four update schemes when we simulate pedestrian flow with high desired velocity. Update schemes may have no influence on simulated pedestrians' tendency to follow others, but the sequential and shuffled update schemes may enhance the effect of pedestrians' familiarity with the environment.
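
    The four schedules can be contrasted on a toy one-dimensional lattice; note that only the parallel scheme needs explicit conflict detection and resolution, as the abstract notes. The rules and sizes below are illustrative and far simpler than the multi-velocity FFCA.

```python
import numpy as np

rng = np.random.default_rng(9)
L, N = 50, 20    # lattice cells, pedestrians

def step(occ, scheme):
    """Advance one time step, scheduling moves by the chosen update scheme.
    Each agent prefers one random neighbouring cell. Returns moves made."""
    agents = np.flatnonzero(occ)
    prefer = {a: (a + rng.choice([-1, 1])) % L for a in agents}
    moved = 0
    if scheme == "parallel":
        old = occ.copy()
        claims = {}
        for a in agents:                       # decisions use the old state
            t = prefer[a]
            if not old[t]:
                claims.setdefault(t, []).append(a)
        for t, movers in claims.items():
            # Conflict detection/resolution, needed only here: several
            # agents may claim the same cell; a random winner moves.
            winner = rng.choice(movers)
            occ[winner], occ[t] = 0, 1
            moved += 1
    else:
        if scheme == "random":
            order = rng.choice(agents, size=len(agents))   # with replacement
        elif scheme == "shuffled":
            order = rng.permutation(agents)                # each agent once
        else:                                              # order-sequential
            order = agents                                 # fixed order
        for a in order:
            t = prefer[a]
            if occ[a] and not occ[t]:          # sequential moves see fresh state
                occ[a], occ[t] = 0, 1
                moved += 1
    return moved

for scheme in ["parallel", "random", "shuffled", "order-sequential"]:
    occ = np.zeros(L, dtype=int)
    occ[rng.choice(L, N, replace=False)] = 1
    print(scheme, "->", step(occ, scheme), "moves")
```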

  6. Selective updating of working memory content modulates meso-cortico-striatal activity.

    PubMed

    Murty, Vishnu P; Sambataro, Fabio; Radulescu, Eugenia; Altamura, Mario; Iudicello, Jennifer; Zoltick, Bradley; Weinberger, Daniel R; Goldberg, Terry E; Mattay, Venkata S

    2011-08-01

    Accumulating evidence from non-human primates and computational modeling suggests that dopaminergic signals arising from the midbrain (substantia nigra/ventral tegmental area) mediate striatal gating of the prefrontal cortex during the selective updating of working memory. Using event-related functional magnetic resonance imaging, we explored the neural mechanisms underlying the selective updating of information stored in working memory. Participants were scanned during a novel working memory task that parses the neurophysiology underlying working memory maintenance, overwriting, and selective updating. Analyses revealed a functionally coupled network consisting of a midbrain region encompassing the substantia nigra/ventral tegmental area, caudate, and dorsolateral prefrontal cortex that was selectively engaged during working memory updating compared to the overwriting and maintenance of working memory content. Further analysis revealed differential midbrain-dorsolateral prefrontal interactions during selective updating between low-performing and high-performing individuals. These findings highlight the role of this meso-cortico-striatal circuitry during the selective updating of working memory in humans, which complements previous research in behavioral neuroscience and computational modeling. Published by Elsevier Inc.

  7. Component corneal surgery: An update

    PubMed Central

    Maharana, Prafulla K.; Sahay, Pranita; Singhal, Deepali; Garg, Itika; Titiyal, Jeewan S.; Sharma, Namrata

    2017-01-01

    Several decades ago, penetrating keratoplasty was a challenge to corneal surgeons. Constant effort by corneal surgeons to improve outcomes, as well as better utilization of the available resources, has led to a revolutionary change in the field of keratoplasty. These efforts have produced techniques that allow a corneal surgeon to perform disease-specific transplantation of individual corneal layers, so-called component corneal surgery, depending on the layer of the cornea affected. This has led to an improvement in corneal graft survival as well as better utilization of corneal tissues. This article reviews the currently available literature on component corneal surgeries and provides an update on the available techniques. PMID:28820150

  8. Clinical review: Lung imaging in acute respiratory distress syndrome patients - an update

    PubMed Central

    2013-01-01

    Over the past 30 years lung imaging has greatly contributed to the current understanding of the pathophysiology and the management of acute respiratory distress syndrome (ARDS). In the past few years, in addition to chest X-ray and lung computed tomography, newer functional lung imaging techniques, such as lung ultrasound, positron emission tomography, electrical impedance tomography and magnetic resonance, have been gaining a role as diagnostic tools to optimize lung assessment and ventilator management in ARDS patients. Here we provide an updated clinical review of lung imaging in ARDS over the past few years to offer an overview of the literature on the available imaging techniques from a clinical perspective. PMID:24238477

  9. Invited Review Article: Tip modification methods for tip-enhanced Raman spectroscopy (TERS) and colloidal probe technique: A 10 year update (2006-2016) review

    NASA Astrophysics Data System (ADS)

    Yuan, C. C.; Zhang, D.; Gan, Y.

    2017-03-01

    Engineering atomic force microscopy tips for reliable tip-enhanced Raman spectroscopy (TERS) and the colloidal probe technique is becoming routine practice in many labs. In this 10 year update review, various new tip modification methods developed over the past decade are briefly reviewed to help researchers select the appropriate method. The discussion is placed in a broader context, covering the opportunities and challenges in this area, including novel combinations of seemingly different methods, potential applications of some methods which were not originally intended for TERS tip fabrication, and the problems of high cost and poor reproducibility of tip fabrication.

  10. Applications of multiple-constraint matrix updates to the optimal control of large structures

    NASA Technical Reports Server (NTRS)

    Smith, S. W.; Walcott, B. L.

    1992-01-01

    Low-authority control or vibration suppression in large, flexible space structures can be formulated as a linear feedback control problem requiring computation of displacement and velocity feedback gain matrices. To ensure stability in the uncontrolled modes, these gain matrices must be symmetric and positive definite. In this paper, efficient computation of symmetric, positive-definite feedback gain matrices is accomplished through the use of multiple-constraint matrix update techniques originally developed for structural identification applications. Two systems were used to illustrate the application: a simple spring-mass system and a planar truss. From these demonstrations, use of this multiple-constraint technique is seen to provide a straightforward approach for computing the low-authority gains.
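
    The stability requirement can be illustrated with a toy two-degree-of-freedom system: a symmetric, positive-definite velocity gain dissipates energy and keeps the closed-loop eigenvalues in the closed left half-plane, while an indefinite gain can pump energy into a mode. The sketch below, with invented matrices, projects a raw gain onto the symmetric positive-semidefinite cone and compares closed-loop eigenvalues; it is not the paper's multiple-constraint update algorithm.

```python
# Sketch: why velocity feedback gains should be symmetric positive definite.
import numpy as np

M = np.diag([1.0, 2.0])                       # masses
K = np.array([[3.0, -1.0], [-1.0, 2.0]])      # stiffness (SPD)
G_raw = np.array([[0.2, 0.9], [0.4, 0.2]])    # some computed gain (not SPD)

# Enforce symmetry, then positive definiteness by clipping eigenvalues.
G_sym = 0.5 * (G_raw + G_raw.T)
w, V = np.linalg.eigh(G_sym)
G_spd = V @ np.diag(np.clip(w, 1e-3, None)) @ V.T

def state_matrix(G):
    """First-order form of M x'' + G x' + K x = 0."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    return np.block([[np.zeros((n, n)), np.eye(n)],
                     [-Minv @ K, -Minv @ G]])

for name, G in (("raw gain", G_raw), ("SPD-projected gain", G_spd)):
    eigs = np.linalg.eigvals(state_matrix(G))
    print(f"{name}: max Re(eig) = {eigs.real.max():.4f}")
```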

  11. Electronically nonadiabatic wave packet propagation using frozen Gaussian scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kondorskiy, Alexey D., E-mail: kondor@sci.lebedev.ru; Nanbu, Shinkoh, E-mail: shinkoh.nanbu@sophia.ac.jp

    2015-09-21

    We present an approach which allows us to employ the adiabatic wave packet propagation technique and semiclassical theory to treat nonadiabatic processes by using trajectory hopping. The approach generates a bunch of hopping trajectories and supplies all the additional information needed to incorporate the effect of nonadiabatic coupling into the wave packet dynamics. This provides an interface between a general adiabatic frozen Gaussian wave packet propagation method and the trajectory surface hopping technique. The basic idea suggested in [A. D. Kondorskiy and H. Nakamura, J. Chem. Phys. 120, 8937 (2004)] is revisited and complemented in the present work by the elaboration of efficient numerical algorithms. We combine our approach with the adiabatic Herman-Kluk frozen Gaussian approximation. The efficiency and accuracy of the resulting method is demonstrated by applying it to popular benchmark model systems, including three of Tully's models and a 24D model of pyrazine. It is shown that the photoabsorption spectrum is successfully reproduced by using a few hundred trajectories. We employ the compact finite difference Hessian update scheme to assess the feasibility of ab initio “on-the-fly” simulations. It is found that this technique allows us to obtain reliable final results using several Hessian matrix calculations per trajectory.

  12. A Bottom-Up Geospatial Data Update Mechanism for Spatial Data Infrastructure Updating

    NASA Astrophysics Data System (ADS)

    Tian, W.; Zhu, X.; Liu, Y.

    2012-08-01

    Currently, the top-down spatial data update mechanism has made considerable progress and is widely applied in many SDIs (spatial data infrastructures). However, this mechanism still has some issues. For example, the update schedule is tied to the professional department's project cycle, which is usually too long for end-users; moving data from collection to publication costs the professional department a great deal of time and energy; and the geospatial information provided lacks sufficient attribute detail. Addressing these problems has therefore become a pressing need. Emerging Internet technology, 3S techniques and the geographic information knowledge now widespread among the public have promoted the rapid development of volunteered geospatial information. Volunteered geospatial information is a current research hotspot, attracting many researchers to study its data quality and credibility, accuracy, sustainability, social benefit, applications and so on. A few scholars have also paid attention to the value of VGI in supporting SDI updating. On that basis, this paper presents a bottom-up update mechanism from VGI to SDI, which includes the processes of matching homonymous elements between VGI and SDI vector data, change detection, SDI spatial database updating and the publication of new data products to end-users. The proposed updating cycle is then discussed in depth with respect to its feasibility: it can detect changed elements in time and shorten the update period, provide more accurate geometry and attribute data for the spatial data infrastructure, and support update propagation.

  13. Evaluation of a numerical model's ability to predict bed load transport observed in braided river experiments

    NASA Astrophysics Data System (ADS)

    Javernick, Luke; Redolfi, Marco; Bertoldi, Walter

    2018-05-01

    New data collection techniques offer numerical modelers the ability to gather and utilize high quality data sets with high spatial and temporal resolution. Such data sets are currently needed for calibration, verification, and to fuel future model development, particularly morphological simulations. This study explores the use of high quality spatial and temporal data sets of observed bed load transport in braided river flume experiments to evaluate the ability of a two-dimensional model, Delft3D, to predict bed load transport. This study uses a fixed bed model configuration and examines the model's shear stress calculations, which are the foundation for predicting the sediment fluxes necessary for morphological simulations. The evaluation is conducted for three flow rates, and the model setup used highly accurate Structure-from-Motion (SfM) topography and discharge boundary conditions. The model was hydraulically calibrated using bed roughness, and performance was evaluated based on depth and inundation agreement. Model bed load performance was evaluated in terms of critical shear stress exceedance area compared to maps of observed bed mobility in a flume. Following the standard hydraulic calibration, bed load performance was tested for sensitivity to horizontal eddy viscosity parameterization and bed morphology updating. Simulations produced depth errors equal to the SfM inherent errors, inundation agreement of 77-85%, and critical shear stress exceedance in agreement with 49-68% of the observed active area. This study provides insight into the ability of physically based, two-dimensional simulations to accurately predict bed load, as well as into the effects of horizontal eddy viscosity and bed updating. Further, this study highlights how using high spatial and temporal resolution data to capture the physical processes at work during flume experiments can help to improve morphological modeling.
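
    The exceedance metric can be sketched as follows, assuming the Manning relation for bed shear stress (a common approximation; Delft3D's internal formulation is not reproduced) and synthetic depth, velocity and mobility fields.

```python
# Sketch: critical shear stress exceedance vs. observed bed mobility.
import numpy as np

rho, g, n_man = 1000.0, 9.81, 0.025     # water density, gravity, Manning n
tau_crit = 0.5                           # Pa, illustrative critical shear stress

rng = np.random.default_rng(2)
h = rng.uniform(0.01, 0.2, size=(50, 80))    # modeled depth (m)
U = rng.uniform(0.0, 0.8, size=(50, 80))     # modeled depth-averaged speed (m/s)

tau = rho * g * n_man**2 * U**2 / h**(1.0 / 3.0)   # Manning-based bed shear (Pa)
exceed = tau > tau_crit                             # predicted mobile area

observed = rng.random((50, 80)) < 0.3               # stand-in mobility map
agreement = (exceed & observed).sum() / observed.sum()
print(f"predicted-active cells agreeing with observed: {agreement:.0%}")
```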

  14. Simulation and assimilation of satellite altimeter data at the oceanic mesoscale

    NASA Technical Reports Server (NTRS)

    Demay, P.; Robinson, A. R.

    1984-01-01

    An improved 'objective analysis' technique is used along with an altimeter signal statistical model, an altimeter noise statistical model, an orbital model, and synoptic surface current maps in the POLYMODE-SDE area, to evaluate the performance of various observational strategies in capturing the mesoscale variability at mid-latitudes. In particular, simulated repetitive nominal orbits of ERS-1, TOPEX, and SPOT/POSEIDON are examined. Results show the critical importance of the existence of a subcycle, scanning in either direction. Moreover, long repeat cycles (20 days) and short cross-track distances (300 km) seem preferable, since they match mesoscale statistics. Another goal of the study is to prepare and discuss sea-surface height (SSH) assimilation in quasigeostrophic models. Restored SSH maps are shown to meet that purpose, if an efficient extrapolation method or deep in-situ data (floats) are used on the vertical to start and update the model.

  15. Boosting cooperation by involving extortion in spatial prisoner's dilemma games

    NASA Astrophysics Data System (ADS)

    Wu, Zhi-Xi; Rong, Zhihai

    2014-12-01

    We study the evolution of cooperation in spatial prisoner's dilemma games with and without extortion by adopting the aspiration-driven strategy updating rule. We focus explicitly on how the strategy updating manner (whether synchronous or asynchronous) and the introduction of the extortion strategy affect the collective outcome of the games. By means of Monte Carlo simulations as well as dynamical cluster techniques, we find that the involvement of extortioners facilitates the boom of cooperators in the population (who can always dominate the population if the temptation to defect is not too large) for both synchronous and asynchronous strategy updating, in stark contrast to the other case, where cooperation is promoted for an intermediate aspiration level with synchronous strategy updating, but is remarkably inhibited if the strategy updating is implemented asynchronously. We explain the results by configurational analysis and find that the presence of extortion leads to a checkerboard-like ordering of cooperators and extortioners, which enables cooperators to prevail in the population with both strategy updating manners. Moreover, extortion itself is evolutionarily stable, and therefore acts as the incubator for the evolution of cooperation.
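
    A stripped-down sketch of aspiration-driven updating on a lattice, contrasting synchronous and asynchronous sweeps. Only cooperators and defectors are included and all payoffs and parameters are illustrative; the extortion strategy of the paper is omitted for brevity.

```python
# Sketch: aspiration-driven strategy updating, synchronous vs asynchronous.
import numpy as np

L, b, aspiration, beta, steps = 24, 1.3, 1.0, 2.0, 50
rng = np.random.default_rng(3)
grid = rng.integers(0, 2, size=(L, L))      # 1 = cooperate, 0 = defect

def payoff(grid):
    """Weak PD on a torus: C vs C pays 1, D vs C pays b, all else 0."""
    total = np.zeros_like(grid, dtype=float)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = np.roll(grid, shift, axis=(0, 1))
        total += np.where(grid == 1, nb * 1.0, nb * b)
    return total

def update(grid, synchronous):
    if synchronous:
        pay = payoff(grid)
        # Dissatisfied players (payoff below total aspiration) switch
        # strategy with a Fermi-like probability.
        p = 1.0 / (1.0 + np.exp(beta * (pay - 4 * aspiration)))
        flip = rng.random(grid.shape) < p
        return np.where(flip, 1 - grid, grid)
    g = grid.copy()
    for _ in range(L * L):                  # asynchronous: one site at a time
        i, j = rng.integers(0, L, size=2)
        pay_ij = payoff(g)[i, j]            # inefficient but keeps it short
        p = 1.0 / (1.0 + np.exp(beta * (pay_ij - 4 * aspiration)))
        if rng.random() < p:
            g[i, j] = 1 - g[i, j]
    return g

for sync in (True, False):
    g = grid.copy()
    for _ in range(steps):
        g = update(g, sync)
    print("synchronous" if sync else "asynchronous",
          "cooperator fraction:", round(g.mean(), 3))
```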

  16. Aircraft Mishap Exercise at SLF

    NASA Image and Video Library

    2018-02-14

    Members of NASA Kennedy Space Center's Flight Operations team participate in a rehearsal of a helicopter crash-landing to test new and updated emergency procedures. Called the Aircraft Mishap Preparedness and Contingency Plan, the operation was designed to validate several updated techniques the center's first responders would follow, should they ever need to rescue a crew in case of a real accident. The mishap exercise took place at the center's Shuttle Landing Facility.

  17. Aircraft Mishap Exercise at SLF

    NASA Image and Video Library

    2018-02-14

    NASA Kennedy Space Center's Flight Operations team reviews procedures before beginning a rehearsal of a helicopter crash-landing to test new and updated emergency procedures. Called the Aircraft Mishap Preparedness and Contingency Plan, the operation was designed to validate several updated techniques the center's first responders would follow, should they ever need to rescue a crew in case of a real accident. The mishap exercise took place at the center's Shuttle Landing Facility.

  18. Aircraft Mishap Exercise at SLF

    NASA Image and Video Library

    2018-02-14

    Members of NASA Kennedy Space Center's Flight Operations team prepare for a rehearsal of a helicopter crash-landing to test new and updated emergency procedures. Called the Aircraft Mishap Preparedness and Contingency Plan, the operation was designed to validate several updated techniques the center's first responders would follow, should they ever need to rescue a crew in case of a real accident. The mishap exercise took place at the center's Shuttle Landing Facility.

  19. Aircraft Mishap Exercise at SLF

    NASA Image and Video Library

    2018-02-14

    A member of NASA Kennedy Space Center's Flight Operations team prepares for a rehearsal of a helicopter crash-landing to test new and updated emergency procedures. Called the Aircraft Mishap Preparedness and Contingency Plan, the operation was designed to validate several updated techniques the center's first responders would follow, should they ever need to rescue a crew in case of a real accident. The mishap exercise took place at the center's Shuttle Landing Facility.

  20. Performing and updating an inventory of Oregon's expanding irrigated agricultural lands utilizing remote sensing technology

    NASA Technical Reports Server (NTRS)

    Hall, M. J.

    1981-01-01

    An inventory technique based upon using remote sensing technology, interpreting both high altitude aerial photography and LANDSAT multispectral scanner imagery, is discussed. It is noted that once the final land use inventory maps of irrigated agricultural lands are available and appropriately scaled, they may be overlaid directly onto either multispectral scanner or return beam vidicon prints, thereby providing an inexpensive updating procedure.

  1. An Iterative Interplanetary Scintillation (IPS) Analysis Using Time-dependent 3-D MHD Models as Kernels

    NASA Astrophysics Data System (ADS)

    Jackson, B. V.; Yu, H. S.; Hick, P. P.; Buffington, A.; Odstrcil, D.; Kim, T. K.; Pogorelov, N. V.; Tokumaru, M.; Bisi, M. M.; Kim, J.; Yun, J.

    2017-12-01

    The University of California, San Diego has developed an iterative remote-sensing time-dependent three-dimensional (3-D) reconstruction technique which provides volumetric maps of density, velocity, and magnetic field. We have applied this technique in near real time for over 15 years with a kinematic model approximation to fit data from ground-based interplanetary scintillation (IPS) observations. Our modeling concept extends volumetric data from an inner boundary placed above the Alfvén surface out to the inner heliosphere. We now use this technique to drive 3-D MHD models at their inner boundary and generate output 3-D data files that are fit to remotely-sensed observations (in this case IPS observations), and iterated. These analyses are also iteratively fit to in-situ spacecraft measurements near Earth. To facilitate this process, we have developed a traceback from input 3-D MHD volumes to yield an updated boundary in density, temperature, and velocity, which also includes magnetic-field components. Here we will show examples of this analysis using the ENLIL 3D-MHD and the University of Alabama Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) heliospheric codes. These examples help refine poorly-known 3-D MHD variables (i.e., density, temperature), and parameters (gamma) by fitting heliospheric remotely-sensed data between the region near the solar surface and in-situ measurements near Earth.

  2. SysML model of exoplanet archive functionality and activities

    NASA Astrophysics Data System (ADS)

    Ramirez, Solange

    2016-08-01

    The NASA Exoplanet Archive is an online service that serves data and information on exoplanets and their host stars to support astronomical research related to the search for and characterization of extra-solar planetary systems. In order to provide the most up-to-date data sets to users, the exoplanet archive performs weekly updates that include additions to the database and updates to the services as needed. These weekly updates are complex due to interfaces within the archive. I will present a SysML model that helps us perform these update activities on a weekly basis.

  3. Summary of Expansions, Updates, and Results in GREET® 2016 Suite of Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-10-01

    This report documents the technical content of the expansions and updates in Argonne National Laboratory’s GREET® 2016 release and provides references and links to key documents related to these expansions and updates.

  4. Electronic Education System Model-2

    ERIC Educational Resources Information Center

    Güllü, Fatih; Kuusik, Rein; Laanpere, Mart

    2015-01-01

    In this study we presented the new EES Model-2, extended from the EES model for more productive implementation in e-learning process design and modelling in higher education. Most of the updates were related to the uppermost instructional layer. We updated the learning process objects of this layer to adapt the educational process for young and old people,…

  5. Capital update factor: a new era approaches.

    PubMed

    Grimaldi, P L

    1993-02-01

    The Health Care Financing Administration (HCFA) has constructed a preliminary model of a new capital update method which is consistent with the framework being developed to refine the update method for PPS operating costs. HCFA's eventual goal is to develop a single update framework for operating and capital costs. Initial results suggest that adopting the new capital update method would reduce capital payments substantially, which might intensify creditors' concerns about extending loans to hospitals.

  6. HARMONIZATION OF HUMAN AND ECOLOGICAL HEALTH ASSESSMENT: DEVELOPMENT OF BAYESIAN UPDATING TECHNIQUES TO INCORPORATE MECHANISTIC INFORMATION ACROSS SPECIES

    EPA Science Inventory

    Bayesian statistical techniques have proven useful in clinical and environmental epidemiological applications to evaluate and integrate available information, and in regulatory applications such as the National Ambient Air Quality Assessment for Nitrogen Oxides. A recent special...

  7. Updating QR factorization procedure for solution of linear least squares problem with equality constraints.

    PubMed

    Zeb, Salman; Yousaf, Muhammad

    2017-01-01

    In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.
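
    The role the QR factors play can be seen in a minimal null-space solution of the equality-constrained problem min ||Ax − b|| subject to Bx = d; the paper's specific partitioning and updating procedure are not reproduced here.

```python
# Sketch: equality-constrained least squares via QR (null-space method).
import numpy as np

rng = np.random.default_rng(4)
m, n, p = 20, 6, 2
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)
B, d = rng.standard_normal((p, n)), rng.standard_normal(p)

Q, R = np.linalg.qr(B.T, mode="complete")   # B^T = Q R, with Q n x n
Q1, Q2 = Q[:, :p], Q[:, p:]                 # range and null space of B^T
R1 = R[:p, :]                               # p x p upper triangular block

y1 = np.linalg.solve(R1.T, d)               # guarantees B (Q1 y1) = d
# Minimize over the unconstrained null-space component: x = Q1 y1 + Q2 y2.
y2, *_ = np.linalg.lstsq(A @ Q2, b - A @ Q1 @ y1, rcond=None)
x = Q1 @ y1 + Q2 @ y2

print("constraint residual:", np.linalg.norm(B @ x - d))   # ~ machine epsilon
print("objective value:", np.linalg.norm(A @ x - b))
```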

  8. Proposed reporting model update creates dialogue between FASB and not-for-profits.

    PubMed

    Mosrie, Norman C

    2016-04-01

    Seeing a need to refresh the current guidelines, the Financial Accounting Standards Board (FASB) proposed an update to the financial accounting and reporting model for not-for-profit entities. In a response to solicited feedback, the board is now revisiting its proposed update and has set forth a plan to finalize its new guidelines. The FASB continues to solicit and respond to feedback as the process progresses.

  9. Evaluating Non-In-Place Update Techniques for Flash-Based Transaction Processing Systems

    NASA Astrophysics Data System (ADS)

    Wang, Yongkun; Goda, Kazuo; Kitsuregawa, Masaru

    Recently, flash memory has been emerging as a mainstream storage device. With its price sliding fast, the cost per capacity is approaching that of SATA disk drives. So far flash memory has been widely deployed in consumer electronics and partly in mobile computing environments. For enterprise systems, its deployment has been studied by many researchers and developers. In terms of access performance characteristics, flash memory is quite different from disk drives. Having no mechanical components, flash memory has very high random read performance, whereas its random write performance is limited because of the erase-before-write design. The random write performance of flash memory is comparable with, or even worse than, that of disk drives. Due to such a performance asymmetry, naive deployment in enterprise systems may not exploit the full potential performance of flash memory. This paper studies the effectiveness of using non-in-place-update (NIPU) techniques through the IO path of flash-based transaction processing systems. Our experiments using both an open-source DBMS and a commercial DBMS validated the potential benefits; a 3.0x to 6.6x performance improvement was confirmed by incorporating non-in-place-update techniques into the file system without any modification of applications or storage devices.
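
    The NIPU idea can be sketched as a toy log-structured store: random logical page writes become sequential appends, and an in-memory index redirects reads to the newest version. The file layout and page size here are invented.

```python
# Sketch: non-in-place updates via an append-only log with an index.
import os

PAGE = 16  # bytes per page (tiny, for illustration)

class LogStore:
    def __init__(self, path):
        self.f = open(path, "wb+")
        self.index = {}                     # logical page -> newest log offset

    def write(self, page_no, data):
        assert len(data) == PAGE
        self.f.seek(0, os.SEEK_END)         # always append: sequential I/O
        off = self.f.tell()
        self.f.write(data)
        self.f.flush()
        self.index[page_no] = off           # old version becomes garbage

    def read(self, page_no):
        self.f.seek(self.index[page_no])
        return self.f.read(PAGE)

store = LogStore("toy.log")
store.write(7, b"version-1 pad..."[:PAGE].ljust(PAGE))
store.write(3, b"other page......"[:PAGE].ljust(PAGE))
store.write(7, b"version-2 pad..."[:PAGE].ljust(PAGE))  # no in-place update
print(store.read(7))        # returns the newest version of page 7
print("log size:", os.path.getsize("toy.log"), "bytes (3 appends)")
```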

  10. Vibration analysis of resistance spot welding joint for dissimilar plate structure (mild steel 1010 and stainless steel 304)

    NASA Astrophysics Data System (ADS)

    Sani, M. S. M.; Nazri, N. A.; Alawi, D. A. J.

    2017-09-01

    Resistance spot welding (RSW) is a proficient joining method commonly used for sheet metal joining and one of the oldest spot welding processes used in industry, especially in the automotive sector. RSW applies heat and pressure for a controlled time to join two or more metal sheets at a localized area, and is claimed to be among the most efficient welding processes in metal fabrication. The purpose of this project is to perform model updating of an RSW plate structure joining mild steel 1010 and stainless steel 304. In order to do the updating, normal mode finite element analysis (FEA) and experimental modal analysis (EMA) have been carried out. Results show that the discrepancies in natural frequency between FEA and EMA are below 10%. Sensitivity analysis is performed to determine which parameters influence this structural dynamic modification. The Young's modulus and density of both materials are found to be significant parameters for model updating. In conclusion, after model updating, the total average error of the dissimilar RSW plate model is improved significantly.
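
    A toy version of the sensitivity-based updating loop, assuming a two-mode surrogate whose frequencies scale as sqrt(E/rho) with invented calibration factors; finite-difference sensitivities drive Gauss-Newton corrections toward the "measured" frequencies.

```python
# Sketch: sensitivity-based updating of E and rho against EMA frequencies.
import numpy as np

def frequencies(E, rho):
    """Toy surrogate: two modes scaling as sqrt(E/rho); the 5.6e-3 and
    mode factors are invented calibration constants."""
    return np.array([1.0, 2.6]) * 5.6e-3 * np.sqrt(E / rho)

f_meas = np.array([28.0, 71.0])          # "EMA" frequencies (Hz)
theta = np.array([200e9, 7850.0])        # initial E (Pa), rho (kg/m^3)

for _ in range(5):
    f0 = frequencies(*theta)
    # Finite-difference sensitivity matrix d f / d theta.
    J = np.zeros((2, 2))
    for j in range(2):
        dt = np.zeros(2)
        dt[j] = 1e-6 * theta[j]
        J[:, j] = (frequencies(*(theta + dt)) - f0) / dt[j]
    # One Gauss-Newton step (parameters would normally be normalized
    # to balance their very different scales; skipped for brevity).
    step, *_ = np.linalg.lstsq(J, f_meas - f0, rcond=None)
    theta = theta + step

print("updated E, rho:", theta)
print("model vs measured (Hz):", frequencies(*theta).round(2), f_meas)
```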

  11. NODA for EPA's Updated Ozone Transport Modeling

    EPA Pesticide Factsheets

    Find EPA's NODA for the Updated Ozone Transport Modeling Data for the 2008 Ozone National Ambient Air Quality Standard (NAAQS), along with the Extension of the Public Comment Period on CSAPR for the 2008 NAAQS.

  12. An efficient technique for the numerical solution of the bidomain equations.

    PubMed

    Whiteley, Jonathan P

    2008-08-01

    Computing the numerical solution of the bidomain equations is widely accepted to be a significant computational challenge. In this study we extend a previously published semi-implicit numerical scheme with good stability properties that has been used to solve the bidomain equations (Whiteley, J.P. IEEE Trans. Biomed. Eng. 53:2139-2147, 2006). A new, efficient numerical scheme is developed which utilizes the observation that the only component of the ionic current that must be calculated on a fine spatial mesh and updated frequently is the fast sodium current. Other components of the ionic current may be calculated on a coarser mesh and updated less frequently, and then interpolated onto the finer mesh. Use of this technique to calculate the transmembrane potential and extracellular potential induces very little error in the solution. For the simulations presented in this study an increase in computational efficiency of over two orders of magnitude over standard numerical techniques is obtained.
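
    The core trick, evaluating only the fast current on the fine mesh every step while slower currents are evaluated on a coarser mesh less frequently and interpolated, can be sketched in one dimension; the "currents" below are caricatures, not a real ionic model or the bidomain equations.

```python
# Sketch: fine/coarse dual-grid treatment of fast and slow ionic currents.
import numpy as np

x_fine = np.linspace(0.0, 1.0, 201)
x_coarse = x_fine[::10]                   # 10x coarser spatial mesh
dt, k_slow = 0.01, 5                      # slow current refreshed every 5 steps

def I_fast(v):                            # stiff: must be fresh every step
    return -5.0 * v * (v - 0.2) * (v - 1.0)

def I_slow(v):                            # smooth: tolerates a small lag
    return -0.1 * v

v = np.exp(-((x_fine - 0.3) ** 2) / 0.01)     # initial potential bump
slow_on_fine = np.zeros_like(v)

for step in range(100):
    if step % k_slow == 0:
        # Evaluate the slow current only at coarse nodes, then interpolate
        # onto the fine mesh, as in the scheme described above.
        slow_coarse = I_slow(v[::10])
        slow_on_fine = np.interp(x_fine, x_coarse, slow_coarse)
    v = v + dt * (I_fast(v) + slow_on_fine)   # explicit update, no diffusion

print("final potential range:", v.min().round(3), v.max().round(3))
```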

  13. Indoor seismology by probing the Earth's interior by using sound velocity measurements at high pressures and temperatures.

    PubMed

    Li, Baosheng; Liebermann, Robert C

    2007-05-29

    The adiabatic bulk (K(S)) and shear (G) moduli of mantle materials at high pressure and temperature can be obtained directly by measuring compressional and shear wave velocities in the laboratory with experimental techniques based on physical acoustics. We present the application of the current state-of-the-art experimental techniques by using ultrasonic interferometry in conjunction with synchrotron x radiation to study the elasticity of olivine and pyroxenes and their high-pressure phases. By using these updated thermoelasticity data for these phases, velocity and density profiles for a pyrolite model are constructed and compared with radial seismic models. We conclude that pyrolite provides an adequate explanation of the major seismic discontinuities at 410- and 660-km depths, the gradient in the transition zone, as well as the velocities in the lower mantle, if the uncertainties in the modeling and the variations in different seismic models are considered. The characteristics of the seismic scaling factors in response to thermal anomalies suggest that anticorrelations between bulk sound and shear wave velocities, as well as the large positive density anomalies observed in the lower mantle, cannot be explained fully without invoking chemical variations.

  14. Update to the USDA-ARS fixed-wing spray nozzle models

    USDA-ARS?s Scientific Manuscript database

    The current USDA ARS Aerial Spray Nozzle Models were updated to reflect new standardized measurement methods and systems, as well as to increase the operational spray pressure, aircraft airspeed and nozzle orientation angle limits. The new models were developed using both Central Composite Design...

  15. Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ehlert, Kurt; Loewe, Laurence, E-mail: loewe@wisc.edu; Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, Wisconsin 53715

    2014-11-28

    To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected “hubs” such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present “Lazy Updating,” an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases, with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
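
    A simplified rendering of the lazy idea inside a direct-method SSA: the propensity of the hub-dependent reaction is refreshed only when the hub count has drifted past a relative threshold. The toy network and threshold are invented, and the published add-on is more elaborate than this.

```python
# Sketch: lazy refresh of a hub-dependent propensity in Gillespie's
# direct method. ATP is the hub; its consumers' propensities are only
# recomputed when ATP has drifted by more than THRESH relative.
import random

random.seed(5)
x = {"ATP": 10000, "B": 0}
k_use, k_decay, k_make = 0.0005, 0.1, 2.0
props = [k_use * x["ATP"], k_decay * x["B"], k_make]
hub_ref, THRESH, t, lazy_saves = x["ATP"], 0.05, 0.0, 0

for _ in range(20000):
    a0 = sum(props)
    t += random.expovariate(a0)             # time to next reaction
    r, acc = random.random() * a0, 0.0
    for mu, a in enumerate(props):          # pick reaction by propensity
        acc += a
        if r <= acc:
            break
    if mu == 0:                             # ATP + ... -> B (hub-dependent)
        x["ATP"] -= 1
        x["B"] += 1
    elif mu == 1:                           # B decays
        x["B"] -= 1
    else:                                   # ATP recharge
        x["ATP"] += 1
    props[1] = k_decay * x["B"]             # non-hub propensity: always exact
    # Lazy part: refresh the hub-dependent propensity only on large drift,
    # accepting a bounded (<= THRESH) relative error in between.
    if abs(x["ATP"] - hub_ref) > THRESH * hub_ref:
        props[0] = k_use * x["ATP"]
        hub_ref = x["ATP"]
    else:
        lazy_saves += 1

print(f"t = {t:.2f}, ATP = {x['ATP']}, postponed updates: {lazy_saves}")
```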

  16. Studying the Microanatomy of the Heart in Three Dimensions: A Practical Update

    PubMed Central

    Jarvis, Jonathan C.; Stephenson, Robert

    2013-01-01

    The structure and function of the heart needs to be understood in three dimensions. We give a brief historical summary of the methods by which such an understanding has been sought, and some practical details of the relatively new technique of micro-CT with iodine contrast enhancement in samples from rat and rabbit. We discuss how the improved anatomical detail available in fixed cadaveric hearts will enhance our ability to model and to understand the integrated function of the cardiomyocytes, conducting tissues, and fibrous supporting structures that generate the pumping function of the heart. PMID:24400272

  17. Neural networks for function approximation in nonlinear control

    NASA Technical Reports Server (NTRS)

    Linse, Dennis J.; Stengel, Robert F.

    1990-01-01

    Two neural network architectures are compared with a classical spline interpolation technique for the approximation of functions useful in a nonlinear control system. A standard back-propagation feedforward neural network and a cerebellar model articulation controller (CMAC) neural network are presented, and their results are compared with a B-spline interpolation procedure that is updated using recursive least-squares parameter identification. Each method is able to accurately represent a one-dimensional test function. Tradeoffs between size requirements, speed of operation, and speed of learning indicate that neural networks may be practical for identification and adaptation in a nonlinear control environment.
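
    The recursive least-squares update used for the B-spline comparator can be sketched on any linear-in-parameters approximator; here Gaussian basis functions stand in for B-splines, and the update equations are the standard RLS ones.

```python
# Sketch: recursive least-squares (RLS) adaptation of a basis-function
# approximator to a one-dimensional test function.
import numpy as np

centers = np.linspace(-1.0, 1.0, 9)
def features(x, width=0.25):
    """Gaussian basis functions (stand-ins for B-splines)."""
    return np.exp(-((x - centers) ** 2) / (2 * width**2))

w = np.zeros(len(centers))            # weights, updated recursively
P = np.eye(len(centers)) * 100.0      # inverse information; large = uncertain
lam = 0.99                            # forgetting factor

rng = np.random.default_rng(6)
target = lambda x: np.sin(3.0 * x)    # "unknown" function to approximate
for _ in range(500):
    x = rng.uniform(-1.0, 1.0)
    phi = features(x)
    y = target(x) + 0.05 * rng.standard_normal()
    K = P @ phi / (lam + phi @ P @ phi)          # RLS gain vector
    w = w + K * (y - phi @ w)                    # correct the prediction error
    P = (P - np.outer(K, phi @ P)) / lam         # covariance update

xs = np.linspace(-1.0, 1.0, 5)
print("approx:", np.array([features(v) @ w for v in xs]).round(2))
print("target:", target(xs).round(2))
```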

  18. Local neighborhood transition probability estimation and its use in contextual classification

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of incorporating spatial or contextual information into classifications is considered. A simple model that describes the spatial dependencies between neighboring pixels with a single parameter, Theta, is presented. Expressions are derived for updating the posterior probabilities of the states of nature of the pattern under consideration using information from the neighboring patterns, both for spatially uniform context and for Markov dependencies, in terms of Theta. Techniques are developed for obtaining the optimal value of the parameter Theta as a maximum likelihood estimate from the local neighborhood of the pattern under consideration.
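
    An illustrative version of blending a pixel's class posterior with neighborhood context through a single dependency parameter; the report's exact expressions are not reproduced.

```python
# Sketch: updating a pixel's class posterior with neighborhood context
# controlled by a single dependency parameter theta.
import numpy as np

theta = 0.7                               # spatial dependency strength
p_pixel = np.array([0.45, 0.55])          # P(class | this pixel's spectrum)
neighbor_labels = [0, 0, 0, 1]            # current labels of the 4 neighbors

# Context term: class frequencies among neighbors, blended with a
# uniform (no-context) distribution according to theta.
counts = np.bincount(neighbor_labels, minlength=2) / len(neighbor_labels)
context = theta * counts + (1.0 - theta) * np.ones(2) / 2.0

posterior = p_pixel * context
posterior /= posterior.sum()
print("updated posterior:", posterior.round(3))   # context flips the decision
```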

  19. Determination of replicate composite bone material properties using modal analysis.

    PubMed

    Leuridan, Steven; Goossens, Quentin; Pastrav, Leonard; Roosen, Jorg; Mulier, Michiel; Denis, Kathleen; Desmet, Wim; Sloten, Jos Vander

    2017-02-01

    Replicate composite bones are used extensively for in vitro testing of new orthopedic devices. Contrary to tests with cadaveric bone material, which inherently exhibits large variability, they offer a standardized alternative with limited variability. Accurate knowledge of the composite's material properties is important when interpreting in vitro test results and when using them in FE models of biomechanical constructs. The cortical bone analogue material properties of three different fourth-generation composite bone models were determined by updating FE bone models using experimental and numerical modal analyses results. The influence of the cortical bone analogue material model (isotropic or transversely isotropic) and the inter- and intra-specimen variability were assessed. Isotropic cortical bone analogue material models failed to represent the experimental behavior in a satisfactory way even after updating the elastic material constants. When transversely isotropic material models were used, the updating procedure resulted in a reduction of the longitudinal Young's modulus from 16.00GPa before updating to an average of 13.96 GPa after updating. The shear modulus was increased from 3.30GPa to an average value of 3.92GPa. The transverse Young's modulus was lowered from an initial value of 10.00GPa to 9.89GPa. Low inter- and intra-specimen variability was found. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. 'DIRTMAP2': Dust and Palaeoclimate.

    NASA Astrophysics Data System (ADS)

    Maher, B.

    2008-12-01

    The influence of dust on climate, through changes in the radiative properties of the atmosphere and/or the CO2 content of the oceans and atmosphere (through iron fertilisation of high nutrient, low chlorophyll, HNLC, regions of the world's oceans), remains a poorly quantified and actively changing element of the Earth's climate system. Dust-cycle models presently employ a relatively simple representation of dust properties; these simplifications may severely limit the realism of simulations of the impact of changes in dust loading on either or both radiative forcing and biogeochemical cycling. Further, whilst state-of-the-art models achieve reasonable estimates of dust deposition in the far-field (i.e. at ocean locations), they under-estimate - by an order of magnitude - levels of dust deposition over the continents, unless glacigenic dust production is explicitly and spatially represented. The 'DIRTMAP2' working group aims to address these problems directly, through a series of explicitly interacting contributions from the international modelling and palaeo-data communities. A key aim of the project is to produce an updated version of the DIRTMAP database ('DIRTMAP2'), incorporating (a) records and age models newly available since ~ 2001, (b) longer records, and especially high-resolution records, that will target time windows also focused on by other international research programs (e.g. DO8/9, MIS5), (c) metadata to allow quality-control issues to be dealt with objectively, (d) information on mineralogy and isotopes relevant to provenancing, radiative forcing and iron bioavailability, and (e) enhanced characterisation of the aeolian component of existing records. This update will be coordinated with work (led by Karen Kohfeld) to expand the DIRTMAP database to incorporate information on marine productivity and improved sedimentation rate estimation techniques. It will also build upon a recently-developed dust model evaluation tool for current climate (e.g. Miller et al. 2006) to enable application of this and other evaluative models to palaeoclimate simulations. We invite colleagues to contribute to this update; the DIRTMAP2 database will shortly be accessible from the University of Lancaster website.

  1. The AFIS tree growth model for updating annual forest inventories in Minnesota

    Treesearch

    Margaret R. Holdaway

    2000-01-01

    As the Forest Service moves towards annual inventories, states may use model predictions of growth to update unmeasured plots. A tree growth model (AFIS) based on the scaled Weibull function and using the average-adjusted model form is presented. Annual diameter growth for four species was modeled using undisturbed plots from Minnesota's Aspen-Birch and Northern...

  2. Uncertainty aggregation and reduction in structure-material performance prediction

    NASA Astrophysics Data System (ADS)

    Hu, Zhen; Mahadevan, Sankaran; Ao, Dan

    2018-02-01

    An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
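
    A minimal rendering of the adaptive idea: split the observation domain into segments, use a simple consistency check as the stand-in validation metric, and run a conjugate Gaussian update only on segments that pass. The paper's actual validation metric and model are richer than this.

```python
# Sketch: Bayesian updating restricted to observation segments that pass
# a validation check.
import numpy as np

rng = np.random.default_rng(7)
sigma_obs = 0.5
mu, var = 0.0, 4.0                        # prior on a model bias parameter

# Observations in 4 segments; segment 2 is corrupted (model invalid there).
segments = [rng.normal(1.0, sigma_obs, 25) for _ in range(4)]
segments[2] += 5.0                        # large systematic discrepancy

for i, seg in enumerate(segments):
    # Validation: is the segment mean consistent with the current belief?
    z = abs(seg.mean() - mu) / np.sqrt(var + sigma_obs**2 / len(seg))
    if z > 3.0:
        print(f"segment {i}: rejected (z = {z:.1f}), not used for updating")
        continue
    # Conjugate Gaussian update with the segment mean.
    var_seg = sigma_obs**2 / len(seg)
    var_new = 1.0 / (1.0 / var + 1.0 / var_seg)
    mu = var_new * (mu / var + seg.mean() / var_seg)
    var = var_new
    print(f"segment {i}: accepted, posterior mean = {mu:.3f}")
```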

  3. ENSURF: multi-model sea level forecast - implementation and validation results for the IBIROOS and Western Mediterranean regions

    NASA Astrophysics Data System (ADS)

    Pérez, B.; Brower, R.; Beckers, J.; Paradis, D.; Balseiro, C.; Lyons, K.; Cure, M.; Sotillo, M. G.; Hacket, B.; Verlaan, M.; Alvarez Fanjul, E.

    2011-04-01

    ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecast that makes use of existing storm surge or circulation models today operational in Europe, as well as near-real time tide gauge data in the region, with the following main goals: - providing an easy access to existing forecasts, as well as to its performance and model validation, by means of an adequate visualization tool - generation of better forecasts of sea level, including confidence intervals, by means of the Bayesian Model Average Technique (BMA) The system was developed and implemented within ECOOP (C.No. 036355) European Project for the NOOS and the IBIROOS regions, based on MATROOS visualization tool developed by Deltares. Both systems are today operational at Deltares and Puertos del Estado respectively. The Bayesian Modelling Average technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecasts PDF's; the weights represent the probability that a model will give the correct forecast PDF and are determined and updated operationally based on the performance of the models during a recent training period. This implies the technique needs the availability of sea level data from tide gauges in near-real time. Results of validation of the different models and BMA implementation for the main harbours will be presented for the IBIROOS and Western Mediterranean regions, where this kind of activity is performed for the first time. The work has proved to be useful to detect problems in some of the circulation models not previously well calibrated with sea level data, to identify the differences on baroclinic and barotropic models for sea level applications and to confirm the general improvement of the BMA forecasts.

  4. ENSURF: multi-model sea level forecast - implementation and validation results for the IBIROOS and Western Mediterranean regions

    NASA Astrophysics Data System (ADS)

    Pérez, B.; Brouwer, R.; Beckers, J.; Paradis, D.; Balseiro, C.; Lyons, K.; Cure, M.; Sotillo, M. G.; Hackett, B.; Verlaan, M.; Fanjul, E. A.

    2012-03-01

    ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecast that makes use of several storm surge or circulation models and near-real time tide gauge data in the region, with the following main goals: 1. providing easy access to existing forecasts, as well as to its performance and model validation, by means of an adequate visualization tool; 2. generation of better forecasts of sea level, including confidence intervals, by means of the Bayesian Model Average technique (BMA). The Bayesian Model Average technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecasts PDF's; the weights represent the Bayesian likelihood that a model will give the correct forecast and are continuously updated based on the performance of the models during a recent training period. This implies the technique needs the availability of sea level data from tide gauges in near-real time. The system was implemented for the European Atlantic facade (IBIROOS region) and Western Mediterranean coast based on the MATROOS visualization tool developed by Deltares. Results of validation of the different models and BMA implementation for the main harbours are presented for these regions where this kind of activity is performed for the first time. The system is currently operational at Puertos del Estado and has proved to be useful in the detection of calibration problems in some of the circulation models, in the identification of the systematic differences between baroclinic and barotropic models for sea level forecasts and to demonstrate the feasibility of providing an overall probabilistic forecast, based on the BMA method.
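
    A sketch of the BMA combination step, with member weights taken as normalized average likelihoods over a training window; operational BMA typically fits weights and spreads by EM, which is not reproduced, and all numbers here are synthetic.

```python
# Sketch: Bayesian Model Averaging of three sea level forecast members.
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(8)
truth = rng.normal(0.0, 0.3, 30)                   # tide gauge observations
fc = np.stack([truth + rng.normal(b, s, 30)        # members with bias, spread
               for b, s in ((0.00, 0.05), (0.05, 0.10), (-0.10, 0.25))])
sigma = fc.std(axis=1)                             # crude member spread

# Weight = average likelihood of the observations under each member,
# normalized; weights would be refreshed as the training window moves.
lik = np.array([gauss(truth, fc[k], sigma[k]).mean() for k in range(3)])
w = lik / lik.sum()

new_fc = np.array([0.42, 0.49, 0.30])              # today's member forecasts
mix_mean = (w * new_fc).sum()                      # mean of the mixture PDF
xs = np.linspace(-0.5, 1.5, 401)
mix_pdf = sum(w[k] * gauss(xs, new_fc[k], sigma[k]) for k in range(3))
print("weights:", w.round(2), " BMA mean:", round(mix_mean, 3),
      " PDF mode:", round(xs[mix_pdf.argmax()], 2))
```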

  5. A recursive linear predictive vocoder

    NASA Astrophysics Data System (ADS)

    Janssen, W. A.

    1983-12-01

    A non-real-time, 10-pole recursive autocorrelation linear predictive coding vocoder was created for use in studying the effects of recursive autocorrelation on speech. The vocoder is composed of two interchangeable pitch detectors, a speech analyzer, and a speech synthesizer. The time between updates of the filter coefficients is allowed to vary from .125 msec to 20 msec. The best quality was found using .125 msec between updates. The greatest change in quality was noted when changing from 20 msec/update to 10 msec/update. Pitch period plots for the center-clipping autocorrelation pitch detector and the simplified inverse filtering technique are provided. Plots of speech into and out of the vocoder are given. Three-dimensional plots of formants versus time are shown. The effects of noise on pitch detection and formants are shown. Noise affects the voiced/unvoiced decision process, causing voiced speech to be reconstructed as unvoiced.
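
    The analysis core of such a vocoder reduces each frame to predictor coefficients via autocorrelation and the Levinson-Durbin recursion; the sketch below does this for a synthetic 10-pole frame, leaving out framing, pitch detection and synthesis.

```python
# Sketch: 10-pole LPC analysis of one frame via autocorrelation and the
# Levinson-Durbin recursion.
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for the LPC coefficients."""
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        a_new = a.copy()
        a_new[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a_new[i] = k
        a = a_new
        err *= (1.0 - k * k)                # residual prediction error power
    return a, err

fs, p = 8000, 10
rng = np.random.default_rng(9)
# Synthetic "voiced" frame: two tones plus a little noise, Hamming-windowed.
t = np.arange(240) / fs
frame = (np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
         + 0.01 * rng.standard_normal(t.size)) * np.hamming(t.size)

r = np.array([frame[:frame.size - l] @ frame[l:] for l in range(p + 1)])
a, err = levinson_durbin(r, p)
print("LPC coefficients:", a.round(3))
print("prediction error power:", round(err, 5))
```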

  6. Mosquito Control Techniques Developed for the US Military and an Update on the AMCA

    USDA-ARS?s Scientific Manuscript database

    Scientists at the USDA Center for Medical, Agricultural and Veterinary Entomology developed and field tested novel techniques to protect deployed military troops from diseases transmitted by mosquitoes and sand flies. Methods that proved to be very effective included (1) novel military personal prot...

  7. NRL/VOA Modifications to IONCAP as of 12 July 1988

    DTIC Science & Technology

    1989-08-02

    The report describes modifications to IONCAP to make it suitable for wide-area coverage studies, to incorporate a newer noise model, to improve the accuracy of some calculations, and to correct a few…; surviving table-of-contents entries refer to use with IONANT, the incorporation of an updated noise model into IONCAP, and listings of four IONCAP subroutines supporting the updated noise model.

  8. Incremental Testing of the Community Multiscale Air Quality (CMAQ) Modeling System Version 4.7

    EPA Science Inventory

    This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7 (v4.7) and points the reader to additional resources for further details. The model updates were evaluated relative to obse...

  9. Enumeration and extension of non-equivalent deterministic update schedules in Boolean networks.

    PubMed

    Palma, Eduardo; Salinas, Lilian; Aracena, Julio

    2016-03-01

    Boolean networks (BNs) are commonly used to model genetic regulatory networks (GRNs). Due to the sensitivity of the dynamical behavior to changes in the updating scheme (the order in which the nodes of a network update their state values), it is increasingly common to use different updating rules in the modeling of GRNs to better capture an observed biological phenomenon and thus to obtain more realistic models. In Aracena et al., equivalence classes of deterministic update schedules in BNs that yield exactly the same dynamical behavior of the network were defined according to a certain label function on the arcs of the interaction digraph defined for each scheme. The interaction digraphs so labeled (update digraphs) thus encode the non-equivalent schedules. We address the problem of enumerating all non-equivalent deterministic update schedules of a given BN. We show that this is an intractable problem in general. To solve it, we construct an algorithm that determines the set of update digraphs of a BN, using a divide-and-conquer methodology based on the structural characteristics of the interaction digraph. Next, for each update digraph we determine an associated schedule. This algorithm also works when there is partial knowledge about the relative order in which the nodes update their states. We exhibit some examples of how the algorithm works on GRNs published in the literature. An executable file of the UpdateLabel algorithm, written in Java, and the files with the outputs of the algorithms used with the GRNs are available at www.inf.udec.cl/∼lilian/UDE/. Contact: lilisalinas@udec.cl. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
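
    That different deterministic schedules can yield different dynamics, which is what the equivalence classes of update digraphs capture, shows up already in a 3-node toy network (invented here, not taken from the paper):

```python
# Sketch: one Boolean network, three deterministic update schedules.
def apply_schedule(state, blocks):
    """Apply one global update: blocks are processed in order, and nodes
    inside a block are updated simultaneously (block-sequential schedule)."""
    funcs = {0: lambda s: s[2],          # x0' = x2
             1: lambda s: s[0] & s[2],   # x1' = x0 AND x2
             2: lambda s: 1 - s[1]}      # x2' = NOT x1
    s = list(state)
    for block in blocks:
        new = {i: funcs[i](s) for i in block}   # read current values
        for i, v in new.items():                # then write the whole block
            s[i] = v
    return tuple(s)

start = (1, 0, 1)
for blocks in ([(0, 1, 2)],           # fully parallel
               [(0,), (1,), (2,)],    # sequential in order 0,1,2
               [(2,), (1,), (0,)]):   # sequential in order 2,1,0
    traj, s = [start], start
    for _ in range(4):
        s = apply_schedule(s, blocks)
        traj.append(s)
    print(blocks, "->", traj)
```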

  10. Single-Trial Event-Related Potential Correlates of Belief Updating

    PubMed Central

    Murawski, Carsten; Bode, Stefan

    2015-01-01

    Belief updating, the process by which an agent alters an internal model of its environment, is a core function of the CNS. Recent theory has proposed broad principles by which belief updating might operate, but more precise details of its implementation in the human brain remain unclear. In order to address this question, we studied how two components of the human event-related potential encoded different aspects of belief updating. Participants completed a novel perceptual learning task while electroencephalography was recorded. Participants learned the mapping between the contrast of a dynamic visual stimulus and a monetary reward and updated their beliefs about a target contrast on each trial. A Bayesian computational model was formulated to estimate belief states at each trial and was used to quantify the following two variables: belief update size and belief uncertainty. Robust single-trial regression was used to assess how these model-derived variables were related to the amplitudes of the P3 and the stimulus-preceding negativity (SPN), respectively. Results showed a positive relationship between belief update size and P3 amplitude at one fronto-central electrode, and a negative relationship between SPN amplitude and belief uncertainty at a left central and a right parietal electrode. These results provide evidence that belief update size and belief uncertainty have distinct neural signatures that can be tracked in single trials in specific ERP components. This, in turn, provides evidence that the cognitive mechanisms underlying belief updating in humans can be described well within a Bayesian framework. PMID:26473170
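
    The two model-derived regressors can be made concrete with a toy Gaussian belief updater: update size is the shift of the posterior mean, uncertainty is the posterior standard deviation. The paper's actual learning model is richer than this.

```python
# Sketch: per-trial belief update size and belief uncertainty from a
# conjugate Gaussian belief updater.
import numpy as np

mu, var = 0.5, 0.2**2            # prior belief about the target contrast
var_obs = 0.1**2                 # assumed observation noise
rng = np.random.default_rng(10)

for trial in range(5):
    obs = rng.normal(0.7, np.sqrt(var_obs))   # noisy feedback sample
    var_post = 1.0 / (1.0 / var + 1.0 / var_obs)
    mu_post = var_post * (mu / var + obs / var_obs)
    update_size = abs(mu_post - mu)           # regressor paired with the P3
    uncertainty = np.sqrt(var_post)           # regressor paired with the SPN
    print(f"trial {trial}: update={update_size:.3f}, sd={uncertainty:.3f}")
    mu, var = mu_post, var_post
```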

  11. Overview and Evaluation of the Community Multiscale Air Quality (CMAQ) Modeling System Version 5.2

    EPA Science Inventory

    A new version of the Community Multiscale Air Quality (CMAQ) model, version 5.2 (CMAQv5.2), is currently being developed, with a planned release date in 2017. The new model includes numerous updates from the previous version of the model (CMAQv5.1). Specific updates include a new...

  12. A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements

    PubMed Central

    Mohsenzadeh, Yalda; Dash, Suryadeep; Crawford, J. Douglas

    2016-01-01

    In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement: saccades and smooth pursuit. Our proposed model is a non-linear SSM implemented through a recurrent radial-basis-function neural network in a dual Extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks. PMID:27242452

  13. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    NASA Astrophysics Data System (ADS)

    Gantt, B.; Kelly, J. T.; Bash, J. O.

    2015-11-01

    Sea spray aerosols (SSAs) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. Model evaluations of SSA emissions have mainly focused on the global scale, but regional-scale evaluations are also important due to the localized impact of SSAs on atmospheric chemistry near the coast. In this study, SSA emissions in the Community Multiscale Air Quality (CMAQ) model were updated to enhance the fine-mode size distribution, include sea surface temperature (SST) dependency, and reduce surf-enhanced emissions. Predictions from the updated CMAQ model and those of the previous release version, CMAQv5.0.2, were evaluated using several coastal and national observational data sets in the continental US. The updated emissions generally reduced model underestimates of sodium, chloride, and nitrate surface concentrations for coastal sites in the Bay Regional Atmospheric Chemistry Experiment (BRACE) near Tampa, Florida. Including SST dependency to the SSA emission parameterization led to increased sodium concentrations in the southeastern US and decreased concentrations along parts of the Pacific coast and northeastern US. The influence of sodium on the gas-particle partitioning of nitrate resulted in higher nitrate particle concentrations in many coastal urban areas due to increased condensation of nitric acid in the updated simulations, potentially affecting the predicted nitrogen deposition in sensitive ecosystems. Application of the updated SSA emissions to the California Research at the Nexus of Air Quality and Climate Change (CalNex) study period resulted in a modest improvement in the predicted surface concentration of sodium and nitrate at several central and southern California coastal sites. This update of SSA emissions enabled a more realistic simulation of the atmospheric chemistry in coastal environments where marine air mixes with urban pollution.

  14. Wisconsin's forest statistics, 1987: an inventory update.

    Treesearch

    W. Brad Smith; Jerold T. Hahn

    1989-01-01

    The Wisconsin 1987 inventory update, derived by using tree growth models, reports 14.7 million acres of timberland, a decline of less than 1% since 1983. This bulletin presents findings from the inventory update in tables detailing timberland area, volume, and biomass.

  15. Stabilizing Motifs in Autonomous Boolean Networks and the Yeast Cell Cycle Oscillator

    NASA Astrophysics Data System (ADS)

    Sevim, Volkan; Gong, Xinwei; Socolar, Joshua

    2009-03-01

    Synchronously updated Boolean networks are widely used to model gene regulation. Some properties of these model networks are known to be artifacts of the clocking in the update scheme. Autonomous updating is a less artificial scheme that allows one to introduce small timing perturbations and study stability of the attractors. We argue that the stabilization of a limit cycle in an autonomous Boolean network requires a combination of motifs such as feed-forward loops and auto-repressive links that can correct small fluctuations in the timing of switching events. A recently published model of the transcriptional cell-cycle oscillator in yeast contains the motifs necessary for stability under autonomous updating [1]. [1] D. A. Orlando, et al. Nature (London), 4530 (7197):0 944--947, 2008.

  16. Mathematical and Computational Modeling in Complex Biological Systems

    PubMed Central

    Li, Wenyang; Zhu, Xiaoliang

    2017-01-01

    The biological processes and molecular functions involved in cancer progression remain difficult for biologists and clinical doctors to understand. Recent developments in high-throughput technologies urge systems biology toward more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the development of high-throughput technologies and the systemic modeling of biological processes in cancer research. In this review, we first survey several typical mathematical modeling approaches for biological systems at different scales and analyze their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling are summarized. To conclude, this review provides an update on important solutions using computational modeling approaches in systems biology. PMID:28386558

  17. An Online Gravity Modeling Method Applied for High Precision Free-INS

    PubMed Central

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-01-01

    For the real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear characteristic of the regional disturbing potential. Firstly, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer. Then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in that computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of minor precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high-precision free-INS. PMID:27669261
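
    A minimal sketch of the fitting step as we read it (synthetic grid values stand in for SHM-derived DOVs; the basis is the stated two-dimensional second-order polynomial fitted by least squares):

    ```python
    import numpy as np

    # Fit a 2D second-order polynomial to gridded values, as a stand-in for
    # approximating a DOV component computed from a spherical harmonic model.
    lat, lon = np.meshgrid(np.linspace(0.0, 1.0, 20), np.linspace(0.0, 1.0, 20))
    x, y = lat.ravel(), lon.ravel()
    dov = 1.0 + 0.5 * x - 0.3 * y + 0.1 * x * y   # placeholder "truth" values

    # Design matrix for c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, dov, rcond=None)

    # Onboard evaluation then reduces to a cheap polynomial evaluation.
    approx = A @ coeffs
    print("max fit error:", np.max(np.abs(approx - dov)))
    ```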

  18. Mathematical and Computational Modeling in Complex Biological Systems.

    PubMed

    Ji, Zhiwei; Yan, Ke; Li, Wenyang; Hu, Haigen; Zhu, Xiaoliang

    2017-01-01

    The biological processes and molecular functions involved in cancer progression remain difficult for biologists and clinical doctors to understand. Recent developments in high-throughput technologies urge systems biology to achieve more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the development of high-throughput technologies and the systemic modeling of biological processes in cancer research. In this review, we first examine several typical mathematical modeling approaches for biological systems at different scales and analyze their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling are summarized. To conclude, this review provides an update on important solutions using computational modeling approaches in systems biology.

  19. An Online Gravity Modeling Method Applied for High Precision Free-INS.

    PubMed

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-09-23

    For the real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear characteristic of the regional disturbing potential. Firstly, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer. Then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in that computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of minor precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high-precision free-INS.

  20. An updated rate-of-spread clock

    USGS Publications Warehouse

    Kolaks, Jeremy; Grabner, Keith W.; Hartman, George; Cutter, Bruce E.; Loewenstein, Edward F.

    2005-01-01

    Several years ago, Blank and Simard (1983) described an electronic timer, frequently referred to as a rate-of-spread (ROS) clock—a relatively simple instrument used in measuring fire spread. Although other techniques for measuring rate of spread are available (such as data loggers), the basic ROS clock remains a valuable and relatively inexpensive tool. However, several items described in the original article have changed. Therefore, we are describing an updated version of the ROS clock.

  1. Aircraft Mishap Exercise at SLF

    NASA Image and Video Library

    2018-02-14

    An Aircraft Mishap Preparedness and Contingency Plan exercise is underway at the Shuttle Landing Facility at NASA's Kennedy Space Center in Florida. The center's Flight Operations rehearsed a helicopter crash-landing to test new and updated emergency procedures. The exercise was designed to validate several updated techniques the center's first responders would follow, should they ever need to rescue a crew in a real accident.

  2. An update of commercial infrared sensing and imaging instruments

    NASA Technical Reports Server (NTRS)

    Kaplan, Herbert

    1989-01-01

    A classification of infrared sensing instruments by type and application, listing commercially available instruments, from single point thermal probes to on-line control sensors, to high speed, high resolution imaging systems is given. A review of performance specifications follows, along with a discussion of typical thermographic display approaches utilized by various imager manufacturers. An update report on new instruments, new display techniques and newly introduced features of existing instruments is given.

  3. Method and system for training dynamic nonlinear adaptive filters which have embedded memory

    NASA Technical Reports Server (NTRS)

    Rabinowitz, Matthew (Inventor)

    2002-01-01

    Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single-layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
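
    To make the flavor of such an update concrete, here is a hedged sketch of a damped Gauss-Newton step with a simple adaptive learning rate on a toy least-squares problem; it is in the spirit of, not identical to, the patented algorithm.

    ```python
    import numpy as np

    def gauss_newton_step(w, residual_fn, jacobian_fn, rate):
        # Step direction from the normal equations (J^T J) dw = J^T e,
        # lightly regularized for numerical safety.
        e = residual_fn(w)
        J = jacobian_fn(w)
        dw = np.linalg.solve(J.T @ J + 1e-8 * np.eye(len(w)), J.T @ e)
        return w - rate * dw

    # Toy problem: fit y = exp(w0 * x) + w1 by least squares.
    x = np.linspace(0.0, 1.0, 50)
    y = np.exp(0.7 * x) + 0.3
    res = lambda w: np.exp(w[0] * x) + w[1] - y
    jac = lambda w: np.column_stack([x * np.exp(w[0] * x), np.ones_like(x)])

    w, rate = np.array([0.0, 0.0]), 1.0
    cost = np.sum(res(w) ** 2)
    for _ in range(50):
        w_new = gauss_newton_step(w, res, jac, rate)
        c_new = np.sum(res(w_new) ** 2)
        if c_new < cost:                       # accept and grow the rate
            w, cost, rate = w_new, c_new, min(rate * 1.5, 1.0)
        else:                                  # reject and shrink the rate
            rate *= 0.5
    print(w)  # should approach [0.7, 0.3]
    ```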

  4. MENA 1.1 - An Updated Geophysical Regionalization of the Middle East and North Africa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walters, B.; Pasyanos, M.E.; Bhattacharyya, J.

    2000-03-01

    This short report provides an update to the earlier LLNL paper entitled ''Preliminary Definition of Geophysical Regions for the Middle East and North Africa'' (Sweeney and Walter, 1998). This report is designed to be used in combination with that earlier paper. The reader is referred to Sweeney and Walter (1998) for all details, including definitions, references, uses, shortcomings, etc., of the regionalization process. In this report we will discuss only those regions in which we have changed the boundaries or velocity structure from that given by the original paper. The paper by Sweeney and Walter (1998) drew on a variety of sources to estimate a preliminary, first-order regionalization of the Middle East and North Africa (MENA), providing regional boundaries and velocity models within each region. The model attempts to properly account for major structural discontinuities and significant crustal thickness and velocity variations on a gross scale. The model can be used to extrapolate sparse calibration data within a distinct geophysical region. This model can also serve as a background model in the process of forming station calibration maps using intelligent interpolation techniques such as kriging, extending the calibration into aseismic areas. Such station maps can greatly improve the ability to locate and identify seismic events, which in turn improves the ability to seismically monitor for underground nuclear testing. The original model from Sweeney and Walter (1998) was digitized to a 1° resolution; for simplicity we will hereafter refer to this model as MENA 1.0. The new model described here has also been digitized to a 1° resolution and will be referred to as MENA 1.1 throughout this report.

  5. Impact of different satellite soil moisture products on the predictions of a continuous distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Laiolo, P.; Gabellani, S.; Campo, L.; Silvestro, F.; Delogu, F.; Rudari, R.; Pulvirenti, L.; Boni, G.; Fascetti, F.; Pierdicca, N.; Crapolicchio, R.; Hasenauer, S.; Puca, S.

    2016-06-01

    The reliable estimation of hydrological variables in space and time is of fundamental importance in operational hydrology to improve flood predictions and the description of the hydrological cycle. Nowadays, remotely sensed data offer a chance to improve hydrological models, especially in environments with scarce ground-based data. The aim of this work is to update the state variables of a physically based, distributed and continuous hydrological model using four different satellite-derived data sets (three soil moisture products and a land surface temperature measurement) and one soil moisture analysis to evaluate, even with a non-optimal technique, the impact on the hydrological cycle. The experiments were carried out for a small catchment in the northern part of Italy for the period July 2012-June 2013. The products were pre-processed according to their own characteristics and then assimilated into the model using a simple nudging technique. The benefits to the model's discharge predictions were tested against observations. The analysis showed a general improvement of the model discharge predictions for all the assimilation experiments, even with a simple assimilation technique: the Nash-Sutcliffe model efficiency coefficient increased from 0.6 (relative to the model without assimilation) to 0.7, and errors on discharge were reduced by up to 10%. An added value to the model was found in the rainy season (autumn): all the assimilation experiments reduced the errors by up to 20%. This demonstrates that the discharge prediction of a distributed hydrological model, which works at fine-scale resolution in a small basin, can be improved with the assimilation of coarse-scale satellite-derived data.
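
    The nudging update described is essentially a single relaxation step; a minimal sketch, with an illustrative fixed gain and made-up values rather than the paper's calibration:

    ```python
    import numpy as np

    def nudge(model_state, observation, gain=0.3):
        """Newtonian relaxation: pull the state a fraction of the way to obs."""
        return model_state + gain * (observation - model_state)

    soil_moisture = np.array([0.18, 0.22, 0.30])   # model layer states
    satellite_sm = np.array([0.25, 0.21, 0.27])    # retrieved observations
    print(nudge(soil_moisture, satellite_sm))
    ```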

  6. Updated aerosol module and its application to simulate secondary organic aerosols during IMPACT campaign May 2008

    NASA Astrophysics Data System (ADS)

    Li, Y. P.; Elbern, H.; Lu, K. D.; Friese, E.; Kiendler-Scharr, A.; Mentel, Th. F.; Wang, X. S.; Wahner, A.; Zhang, Y. H.

    2013-03-01

    The formation of secondary organic aerosol (SOA) was simulated with the Secondary ORGanic Aerosol Model (SORGAM) using a classical gas-particle partitioning concept, the two-product model approach, which is widely used in chemical transport models. In this study, we extensively updated SORGAM with three major modifications: firstly, we derived temperature-dependence functions of the SOA yields for aromatics and biogenic VOCs, based on recent chamber studies, within a sophisticated mathematical optimization framework; secondly, we implemented the SOA formation pathways from photo-oxidation (OH-initiated) of isoprene; thirdly, we implemented the SOA formation channel from NO3-initiated oxidation of reactive biogenic hydrocarbons (isoprene and monoterpenes). The temperature-dependence functions of the SOA yields were validated against available chamber experiments. Moreover, the whole updated SORGAM module was validated against ambient SOA observations, represented by the summed oxygenated organic aerosol (OOA) concentrations extracted from Aerosol Mass Spectrometer (AMS) measurements at a rural site near Rotterdam, the Netherlands, performed during the IMPACT campaign in May 2008. In this case, we embedded both the original and the updated SORGAM modules into the EURopean Air pollution and Dispersion-Inverse Model (EURAD-IM), which showed generally good agreement with the observed meteorological parameters and several secondary products such as O3, sulfate and nitrate. With the updated SORGAM module, the EURAD-IM model also captured the observed SOA concentrations reasonably well, especially those during nighttime. In contrast, the EURAD-IM model before the update underestimated the observations by a factor of up to 5. The large improvement in the modeled SOA concentrations with the updated SORGAM was attributed to the three modifications mentioned: embedding the temperature-dependence functions of the SOA yields, including the new pathways from isoprene photo-oxidation, and switching on the SOA formation from NO3-initiated biogenic VOC oxidation contributed to this enhancement by 10%, 22% and 47%, respectively. However, the EURAD-IM model with the updated SORGAM still clearly underestimated the afternoon SOA observations, by up to a factor of two. More work, such as improving the simulated OH concentrations under high-VOC and low-NOx conditions, including the SOA formation from semi-volatile organic compounds, correctly representing aerosol aging and oligomerization, and accounting for the influence of anthropogenic SOA on biogenic SOA, is still required to fill the gap.
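
    For context, the two-product gas-particle partitioning concept on which SORGAM builds is commonly written (notation ours, following the standard Odum-type formulation) as

    ```latex
    Y \;=\; M_{o}\sum_{i=1}^{2}\frac{\alpha_{i}\,K_{\mathrm{om},i}}{1 + K_{\mathrm{om},i}\,M_{o}}
    ```

    where Y is the SOA mass yield, M_o the absorbing organic aerosol mass, and alpha_i and K_om,i the fitted mass-based stoichiometric yields and partitioning coefficients; the temperature-dependence functions described above enter through these fitted parameters.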

  7. Crucial role of strategy updating for coexistence of strategies in interaction networks.

    PubMed

    Zhang, Jianlei; Zhang, Chunyan; Cao, Ming; Weissing, Franz J

    2015-04-01

    Network models are useful tools for studying the dynamics of social interactions in a structured population. After a round of interactions with the players in their local neighborhood, players update their strategy based on the comparison of their own payoff with the payoff of one of their neighbors. Here we show that the assumptions made on strategy updating are of crucial importance for the strategy dynamics. In the first step, we demonstrate that seemingly small deviations from the standard assumptions on updating have major implications for the evolutionary outcome of two cooperation games: cooperation can more easily persist in a Prisoner's Dilemma game, while it can go more easily extinct in a Snowdrift game. To explain these outcomes, we develop a general model for the updating of states in a network that allows us to derive conditions for the steady-state coexistence of states (or strategies). The analysis reveals that coexistence crucially depends on the number of agents consulted for updating. We conclude that updating rules are as important for evolution on a network as network structure and the nature of the interaction.
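
    One common concrete form of such payoff-comparison updating, given here purely for illustration (the paper's point is precisely that the details of this rule matter), is imitation of a randomly chosen neighbor with a Fermi probability:

    ```python
    import math, random

    def maybe_imitate(strategy, payoff, neighbors, strategies, payoffs, K=0.1):
        # Consult one neighbor; adopt its strategy with probability given by
        # the Fermi function of the payoff difference (K is selection noise).
        j = random.choice(neighbors)
        p = 1.0 / (1.0 + math.exp(-(payoffs[j] - payoff) / K))
        return strategies[j] if random.random() < p else strategy

    strategies = {0: "C", 1: "D", 2: "C"}         # cooperate / defect
    payoffs = {0: 1.0, 1: 3.0, 2: 1.0}
    print(maybe_imitate(strategies[0], payoffs[0], [1, 2], strategies, payoffs))
    ```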

  8. Crucial role of strategy updating for coexistence of strategies in interaction networks

    NASA Astrophysics Data System (ADS)

    Zhang, Jianlei; Zhang, Chunyan; Cao, Ming; Weissing, Franz J.

    2015-04-01

    Network models are useful tools for studying the dynamics of social interactions in a structured population. After a round of interactions with the players in their local neighborhood, players update their strategy based on the comparison of their own payoff with the payoff of one of their neighbors. Here we show that the assumptions made on strategy updating are of crucial importance for the strategy dynamics. In the first step, we demonstrate that seemingly small deviations from the standard assumptions on updating have major implications for the evolutionary outcome of two cooperation games: cooperation can more easily persist in a Prisoner's Dilemma game, while it can go more easily extinct in a Snowdrift game. To explain these outcomes, we develop a general model for the updating of states in a network that allows us to derive conditions for the steady-state coexistence of states (or strategies). The analysis reveals that coexistence crucially depends on the number of agents consulted for updating. We conclude that updating rules are as important for evolution on a network as network structure and the nature of the interaction.

  9. Computer-assisted expert case definition in electronic health records.

    PubMed

    Walker, Alexander M; Zhou, Xiaofeng; Ananthakrishnan, Ashwin N; Weiss, Lisa S; Shen, Rongjun; Sobel, Rachel E; Bate, Andrew; Reynolds, Robert F

    2016-02-01

    To describe how computer-assisted presentation of case data can lead experts to infer machine-implementable rules for case definition in electronic health records. As an illustration the technique has been applied to obtain a definition of acute liver dysfunction (ALD) in persons with inflammatory bowel disease (IBD). The technique consists of repeatedly sampling new batches of case candidates from an enriched pool of persons meeting presumed minimal inclusion criteria, classifying the candidates by a machine-implementable candidate rule and by a human expert, and then updating the rule so that it captures new distinctions introduced by the expert. Iteration continues until an update results in an acceptably small number of changes to form a final case definition. The technique was applied to structured data and terms derived by natural language processing from text records in 29,336 adults with IBD. Over three rounds the technique led to rules with increasing predictive value, as the experts identified exceptions, and increasing sensitivity, as the experts identified missing inclusion criteria. In the final rule inclusion and exclusion terms were often keyed to an ALD onset date. When compared against clinical review in an independent test round, the derived final case definition had a sensitivity of 92% and a positive predictive value of 79%. An iterative technique of machine-supported expert review can yield a case definition that accommodates available data, incorporates pre-existing medical knowledge, is transparent and is open to continuous improvement. The expert updates to rules may be informative in themselves. In this limited setting, the final case definition for ALD performed better than previous, published attempts using expert definitions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. The Effects of Surgical Hand Scrubbing Protocols on Skin Integrity and Surgical Site Infection Rates: A Systematic Review.

    PubMed

    Liu, Liang Qin; Mehigan, Sinead

    2016-05-01

    This systematic review aimed to critically appraise and synthesize updated evidence regarding the effect of surgical-scrub techniques on skin integrity and the incidence of surgical site infections. Databases searched include the Cumulative Index to Nursing and Allied Health Literature, MEDLINE, Embase, and Cochrane Central. Our review was limited to eight peer-reviewed, randomized controlled trials and two nonrandomized controlled trials published in English from 1990 to 2015. Comparison models included traditional hand scrubbing with chlorhexidine gluconate or povidone-iodine against alcohol-based hand rubbing, scrubbing with a brush versus without a brush, and detergent-based antiseptics alone versus antiseptics incorporating alcohol solutions. Evidence showed that hand rubbing techniques are as effective as traditional scrubbing and seem to be better tolerated. Hand rubbing appears to cause less skin damage than traditional scrub protocols, and scrub personnel tolerated brushless techniques better than scrubbing using a brush. Copyright © 2016 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  11. High-speed real-time animated displays on the ADAGE (trademark) RDS 3000 raster graphics system

    NASA Technical Reports Server (NTRS)

    Kahlbaum, William M., Jr.; Ownbey, Katrina L.

    1989-01-01

    Techniques which may be used to increase the animation update rate of real-time computer raster graphic displays are discussed. They were developed on the ADAGE RDS 3000 graphics system in support of the Advanced Concepts Simulator at the NASA Langley Research Center. These techniques involve the use of a special-purpose parallel processor for high-speed character generation. The description of the parallel processor includes the Barrel Shifter, the hardware component that is key to the high-speed character rendition. The final result of this total effort was a fourfold increase in the update rate of an existing primary flight display, from 4 to 16 frames per second.

  12. Adaptive optics scanning laser ophthalmoscopy in fundus imaging, a review and update.

    PubMed

    Zhang, Bing; Li, Ni; Kang, Jie; He, Yi; Chen, Xiao-Ming

    2017-01-01

    Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has been a promising technique in fundus imaging with growing popularity. This review firstly gives a brief history of adaptive optics (AO) and AO-SLO. Then it compares AO-SLO with conventional imaging methods (fundus fluorescein angiography, fundus autofluorescence, indocyanine green angiography and optical coherence tomography) and other AO techniques (adaptive optics flood-illumination ophthalmoscopy and adaptive optics optical coherence tomography). Furthermore, an update on the current state of AO-SLO research is given for different fundus structures, such as the photoreceptors (cones and rods), fundus vessels, retinal pigment epithelium layer, retinal nerve fiber layer, ganglion cell layer and lamina cribrosa. Finally, this review indicates possible future research directions for AO-SLO.

  13. Natural language processing and inference rules as strategies for updating problem list in an electronic health record.

    PubMed

    Plazzotta, Fernando; Otero, Carlos; Luna, Daniel; de Quiros, Fernan Gonzalez Bernaldo

    2013-01-01

    Physicians do not always keep the problem list accurate, complete and updated. The aim was to analyze natural language processing (NLP) techniques and inference rules as strategies to maintain the completeness and accuracy of the problem list in EHRs. A non-systematic literature review was conducted in PubMed, covering the last 10 years. Strategies to maintain the EHR problem list were analyzed in two ways: inputting problems into and removing problems from the problem list. NLP and inference rules have acceptable performance for inputting problems into the problem list; no studies using these techniques for removing problems were published. In conclusion, both tools, NLP and inference rules, have had acceptable results for maintaining the completeness and accuracy of the problem list.

  14. Adaptive optics scanning laser ophthalmoscopy in fundus imaging, a review and update

    PubMed Central

    Zhang, Bing; Li, Ni; Kang, Jie; He, Yi; Chen, Xiao-Ming

    2017-01-01

    Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has been a promising technique in fundus imaging with growing popularity. This review firstly gives a brief history of adaptive optics (AO) and AO-SLO. Then it compares AO-SLO with conventional imaging methods (fundus fluorescein angiography, fundus autofluorescence, indocyanine green angiography and optical coherence tomography) and other AO techniques (adaptive optics flood-illumination ophthalmoscopy and adaptive optics optical coherence tomography). Furthermore, an update on the current state of AO-SLO research is given for different fundus structures, such as the photoreceptors (cones and rods), fundus vessels, retinal pigment epithelium layer, retinal nerve fiber layer, ganglion cell layer and lamina cribrosa. Finally, this review indicates possible future research directions for AO-SLO. PMID:29181321

  15. Chemical transport model simulations of organic aerosol in southern California: model evaluation and gasoline and diesel source contributions

    NASA Astrophysics Data System (ADS)

    Jathar, Shantanu H.; Woody, Matthew; Pye, Havala O. T.; Baker, Kirk R.; Robinson, Allen L.

    2017-03-01

    Gasoline- and diesel-fueled engines are ubiquitous sources of air pollution in urban environments. They emit both primary particulate matter and precursor gases that react to form secondary particulate matter in the atmosphere. In this work, we updated the organic aerosol module and organic emissions inventory of a three-dimensional chemical transport model, the Community Multiscale Air Quality Model (CMAQ), using recent, experimentally derived inputs and parameterizations for mobile sources. The updated model included a revised volatile organic compound (VOC) speciation for mobile sources and secondary organic aerosol (SOA) formation from unspeciated intermediate volatility organic compounds (IVOCs). The updated model was used to simulate air quality in southern California during May and June 2010, when the California Research at the Nexus of Air Quality and Climate Change (CalNex) study was conducted. Compared to the Traditional version of CMAQ, which is commonly used for regulatory applications, the updated model did not significantly alter the predicted organic aerosol (OA) mass concentrations but did substantially improve predictions of OA sources and composition (e.g., the POA-SOA split), as well as ambient IVOC concentrations. The updated model, despite substantial differences in emissions and chemistry, performed similarly to a recently released research version of CMAQ (Woody et al., 2016) that did not include the updated VOC and IVOC emissions and SOA data. Mobile sources were predicted to contribute 30-40 % of the OA in southern California (half of which was SOA), making mobile sources the single largest source contributor to OA in southern California. The remainder of the OA was attributed to non-mobile anthropogenic sources (e.g., cooking, biomass burning), with biogenic sources contributing less than 5 % of the total OA. Gasoline sources were predicted to contribute about 13 times more OA than diesel sources; this difference was driven by differences in SOA production. Model predictions highlighted the need to better constrain multi-generational oxidation reactions in chemical transport models.

  16. The MSFC Solar Activity Future Estimation (MSAFE) Model

    NASA Technical Reports Server (NTRS)

    Suggs, Ron

    2017-01-01

    The MSAFE model provides forecasts for the solar indices SSN, F10.7, and Ap. These solar indices are used as inputs to space environment models used in orbital spacecraft operations and space mission analysis. Forecasts from the MSAFE model are provided on the MSFC Natural Environments Branch's solar web page and are updated as new monthly observations become available. The MSAFE prediction routine employs a statistical technique that calculates deviations of past solar cycles from the mean cycle and performs a regression analysis to calculate the deviation from the mean cycle of the solar index at the next future time interval. The forecasts are initiated for a given cycle after about 8 to 9 monthly observations from the start of the cycle are collected. A forecast made at the beginning of cycle 24 using the MSAFE program captured the cycle fairly well with some difficulty in discerning the double peak that occurred at solar cycle maximum.

  17. Application of enhanced modern structured analysis techniques to Space Station Freedom electric power system requirements

    NASA Technical Reports Server (NTRS)

    Biernacki, John; Juhasz, John; Sadler, Gerald

    1991-01-01

    A team of Space Station Freedom (SSF) system engineers is in the process of extensive analysis of the SSF requirements, particularly those pertaining to the electrical power system (EPS). The objective of this analysis is the development of a comprehensive, computer-based requirements model, using an enhanced modern structured analysis methodology (EMSA). Such a model provides a detailed and consistent representation of the system's requirements. The process outlined in the EMSA methodology is unique in that it allows the graphical modeling of real-time system state transitions, as well as functional requirements and data relationships, to be implemented using modern computer-based tools. These tools permit flexible updating and continuous maintenance of the models. Initial findings resulting from the application of EMSA to the EPS have benefited the space station program by linking requirements to design, providing traceability of requirements, identifying discrepancies, and fostering an understanding of the EPS.

  18. The Evolution and Discharge of Electric Fields within a Thunderstorm

    NASA Astrophysics Data System (ADS)

    Hager, William W.; Nisbet, John S.; Kasha, John R.

    1989-05-01

    A 3-dimensional electrical model for a thunderstorm is developed and finite difference approximations to the model are analyzed. If the spatial derivatives are approximated by a method akin to the box scheme and if the temporal derivative is approximated by either a backward difference or the Crank-Nicolson scheme, we show that the resulting discretization is unconditionally stable. The forward difference approximation to the time derivative is stable when the time step is sufficiently small relative to the ratio between the permittivity and the conductivity. Max-norm error estimates for the discrete approximations are established. To handle the propagation of lightning, special numerical techniques are devised based on the Inverse Matrix Modification Formula and Cholesky updates. Numerical comparisons between the model and theoretical results of Wilson and Holzer-Saxon are presented. We also apply our model to a storm observed at the Kennedy Space Center on July 11, 1978.
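
    Read plainly, the quoted stability condition says the explicit time step must not exceed (a constant times) the dielectric relaxation time; in our notation, not the authors':

    ```latex
    \Delta t \;\lesssim\; \tau \;=\; \frac{\varepsilon}{\sigma}
    ```

    where epsilon is the permittivity and sigma the conductivity of the medium.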

  19. A Bayesian sequential processor approach to spectroscopic portal system decisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sale, K; Candy, J; Breitfeller, E

    The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models and decision functions are discussed along with the first results of our research.
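
    A minimal sketch of the per-datum Bayesian update on a discretized parameter grid; the source-rate grid, exponential arrival-time likelihood, and decision threshold are illustrative assumptions, not the report's models:

    ```python
    import numpy as np

    rates = np.linspace(0.1, 5.0, 200)                  # candidate source rates
    posterior = np.full_like(rates, 1.0 / len(rates))   # flat prior

    def update(posterior, dt):
        """Update with one inter-arrival time dt (exponential likelihood)."""
        like = rates * np.exp(-rates * dt)
        post = posterior * like
        return post / post.sum()

    for dt in [0.4, 0.3, 0.5, 0.2]:                     # arrival-time data stream
        posterior = update(posterior, dt)
        print("posterior mean rate:", (rates * posterior).sum())
        if posterior[rates > 2.0].sum() > 0.95:         # decision function
            print("detection: rate > 2.0 with 95% belief")
            break
    ```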

  20. MATLAB algorithm to implement soil water data assimilation with the Ensemble Kalman Filter using HYDRUS.

    PubMed

    Valdes-Abellan, Javier; Pachepsky, Yakov; Martinez, Gonzalo

    2018-01-01

    Data assimilation is becoming a promising technique in hydrologic modelling for updating not only model states but also model parameters, specifically for inferring soil hydraulic properties in Richards-equation-based soil water models. The Ensemble Kalman Filter (EnKF) method is one of the most widely employed among the data assimilation alternatives. In this study the complete MATLAB code used to study soil data assimilation efficiency under different soil and climatic conditions is shown. The code shows how data assimilation through the EnKF was implemented. The Richards equation was solved by the use of the HYDRUS-1D software, which was run from MATLAB.
    • MATLAB routines are released to be used/modified without restrictions by other researchers.
    • Data assimilation Ensemble Kalman Filter method code.
    • Soil water Richards equation flow solved by HYDRUS-1D.
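
    The released routines are in MATLAB; as a language-neutral illustration, here is a sketch in Python of a generic stochastic-EnKF analysis step (not necessarily the exact variant implemented in that code):

    ```python
    import numpy as np

    def enkf_analysis(X, y, H, R, rng):
        # X: forecast ensemble (n_state x n_ens); y: observation; R: obs error cov.
        n = X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
        Pf_Ht = A @ (H @ A).T / (n - 1)             # P^f H^T from the ensemble
        S = (H @ A) @ (H @ A).T / (n - 1) + R       # innovation covariance
        K = Pf_Ht @ np.linalg.inv(S)                # Kalman gain
        Y = y[:, None] + rng.multivariate_normal(
            np.zeros(len(y)), R, size=n).T          # perturbed observations
        return X + K @ (Y - H @ X)                  # updated (analysis) ensemble

    rng = np.random.default_rng(0)
    X = rng.normal(0.25, 0.05, size=(3, 50))        # 3 soil layers, 50 members
    H = np.array([[1.0, 0.0, 0.0]])                 # observe the top layer only
    R = np.array([[0.02**2]])
    Xa = enkf_analysis(X, np.array([0.30]), H, R, rng)
    print(Xa.mean(axis=1))
    ```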

  1. A review of active learning approaches to experimental design for uncovering biological networks

    PubMed Central

    2017-01-01

    Various types of biological knowledge describe networks of interactions among elementary entities. For example, transcriptional regulatory networks consist of interactions among proteins and genes. Current knowledge about the exact structure of such networks is highly incomplete, and laboratory experiments that manipulate the entities involved are conducted to test hypotheses about these networks. In recent years, various automated approaches to experiment selection have been proposed. Many of these approaches can be characterized as active machine learning algorithms. Active learning is an iterative process in which a model is learned from data, hypotheses are generated from the model to propose informative experiments, and the experiments yield new data that is used to update the model. This review describes the various models, experiment selection strategies, validation techniques, and successful applications described in the literature; highlights common themes and notable distinctions among methods; and identifies likely directions of future research and open problems in the area. PMID:28570593
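
    A schematic of the loop the review describes, using uncertainty sampling with a logistic model as a stand-in for real experiment selection (the model choice, oracle, and candidate pool are illustrative, not any particular method from the literature):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    pool = rng.normal(size=(200, 2))                        # candidate experiments
    oracle = lambda X: (X[:, 0] + X[:, 1] > 0).astype(int)  # hidden ground truth

    # Seed with the two most extreme candidates so both outcomes are present.
    idx = [int(np.argmin(pool.sum(axis=1))), int(np.argmax(pool.sum(axis=1)))]
    for _ in range(20):
        model = LogisticRegression().fit(pool[idx], oracle(pool[idx]))
        proba = model.predict_proba(pool)[:, 1]
        uncertainty = np.abs(proba - 0.5)        # near 0.5 = most informative
        uncertainty[idx] = np.inf                # never repeat an experiment
        idx.append(int(np.argmin(uncertainty)))  # propose the next experiment
    print("accuracy:", (model.predict(pool) == oracle(pool)).mean())
    ```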

  2. Minnesota's forest statistics, 1987: an inventory update.

    Treesearch

    Jerold T. Hahn; W. Brad Smith

    1987-01-01

    The Minnesota 1987 inventory update, derived by using tree growth models, reports 13.5 million acres of timberland, a decline of less than 1% since 1977. This bulletin presents findings from the inventory update in tables detailing timberland area, volume, and biomass.

  3. Knowledge-based segmentation and feature analysis of hand and wrist radiographs

    NASA Astrophysics Data System (ADS)

    Efford, Nicholas D.

    1993-07-01

    The segmentation of hand and wrist radiographs for applications such as skeletal maturity assessment is best achieved by model-driven approaches incorporating anatomical knowledge. The reasons for this are discussed, and a particular frame-based or 'blackboard' strategy for the simultaneous segmentation of the hand and estimation of bone age via the TW2 method is described. The new approach is structured for optimum robustness and computational efficiency: features of interest are detected and analyzed in order of their size and prominence in the image, the largest and most distinctive being dealt with first, and the evidence generated by feature analysis is used to update a model of hand anatomy and hence guide later stages of the segmentation. Closed bone boundaries are formed by a hybrid technique combining knowledge-based, one-dimensional edge detection with model-assisted heuristic tree searching.

  4. Particle Filtering Methods for Incorporating Intelligence Updates

    DTIC Science & Technology

    2017-03-01

    methodology for incorporating intelligence updates into a stochastic model for target tracking. Due to the non-parametric assumptions of the PF... samples are taken with replacement from the remaining non-zero weighted particles at each iteration. With this methodology, a zero-weighted particle is... incorporation of information updates. A common method for incorporating information updates is Kalman filtering. However, given the probable nonlinear and non...
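
    The resampling step the snippet describes can be sketched as follows (a generic weighted resampler with replacement; shapes and names are illustrative):

    ```python
    import numpy as np

    def resample(particles, weights, rng):
        # Drop zero-weight particles; draw replacements with replacement,
        # proportional to weight, then reset to uniform weights.
        keep = weights > 0.0
        w = weights[keep] / weights[keep].sum()
        picks = rng.choice(np.flatnonzero(keep), size=len(particles), p=w)
        return particles[picks], np.full(len(particles), 1.0 / len(particles))

    rng = np.random.default_rng(0)
    particles = rng.normal(size=(100, 2))     # target state hypotheses (x, y)
    weights = rng.random(100)
    weights[rng.random(100) < 0.3] = 0.0      # e.g., ruled out by an intel update
    particles, weights = resample(particles, weights, rng)
    ```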

  5. "Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"

    EPA Science Inventory

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observatio...

  6. 78 FR 33726 - Approval and Promulgation of Implementation Plans; Kentucky: Kentucky Portion of Cincinnati...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-05

    ... projections models, as well as changes to future vehicle mix assumptions, that influence the emission... methodology that may occur in the future such as updated socioeconomic data, new models, and other factors... updated mobile emissions model, the Motor Vehicle Emissions Simulator (also known as MOVES2010a), and to...

  7. Separate encoding of model-based and model-free valuations in the human brain.

    PubMed

    Beierholm, Ulrik R; Anen, Cedric; Quartz, Steven; Bossaerts, Peter

    2011-10-01

    Behavioral studies have long shown that humans solve problems in two ways, one intuitive and fast (System 1, model-free), and the other reflective and slow (System 2, model-based). The neurobiological basis of dual process problem solving remains unknown due to challenges of separating activation in concurrent systems. We present a novel neuroeconomic task that predicts distinct subjective valuation and updating signals corresponding to these two systems. We found two concurrent value signals in human prefrontal cortex: a System 1 model-free reinforcement signal and a System 2 model-based Bayesian signal. We also found a System 1 updating signal in striatal areas and a System 2 updating signal in lateral prefrontal cortex. Further, signals in prefrontal cortex preceded choices that are optimal according to either updating principle, while signals in anterior cingulate cortex and globus pallidus preceded deviations from optimal choice for reinforcement learning. These deviations tended to occur when uncertainty regarding optimal values was highest, suggesting that disagreement between dual systems is mediated by uncertainty rather than conflict, confirming recent theoretical proposals. Copyright © 2011 Elsevier Inc. All rights reserved.
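
    For intuition only (parameters arbitrary, not the study's task design), the two updating principles can be contrasted in a few lines: a model-free delta-rule value update versus a model-based Bayesian belief update.

    ```python
    # System 1: model-free reinforcement update (Rescorla-Wagner style).
    V, alpha, reward = 0.5, 0.1, 1.0
    V = V + alpha * (reward - V)   # value moves a step toward the outcome

    # System 2: model-based Bayesian update (Beta-Bernoulli belief about
    # the reward probability of this option).
    a, b = 2.0, 2.0                # Beta(2, 2) prior
    a += reward                    # posterior after observing reward = 1
    print(V, a / (a + b))          # point value vs posterior mean
    ```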

  8. Content-Aware DataGuide with Incremental Index Update using Frequently Used Paths

    NASA Astrophysics Data System (ADS)

    Sharma, A. K.; Duhan, Neelam; Khattar, Priyanka

    2010-11-01

    The size of the WWW is increasing day by day. Due to the absence of structured data on the Web, it becomes very difficult for information retrieval tools to fully utilize Web information. As a solution to this problem, XML pages come into play, which provide structural information to users to some extent. Without efficient indexes, query processing can be quite inefficient due to exhaustive traversal of the XML data. In this paper an improved content-centric approach to the Content-Aware DataGuide, an indexing technique for XML databases, is proposed that uses frequently used paths from historical query logs to improve query performance. The index can be updated incrementally according to changes in the query workload, so the overhead of reconstruction can be minimized. Frequently used paths are extracted by running a sequential pattern mining algorithm on subsequent queries in the query workload. After this, the data structures are incrementally updated. This indexing technique proves to be efficient, as partial matching queries can be executed efficiently and users get more relevant documents in the results.

  9. Summary Analysis: Hanford Site Composite Analysis Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, W. E.; Lehman, L. L.

    2017-06-05

    The Hanford Site’s currently maintained Composite Analysis, originally completed in 1998, requires an update. A previous update effort was undertaken by the U.S. Department of Energy (DOE) in 2001-2005, but was ended before completion to allow the Tank Closure & Waste Management Environmental Impact Statement (TC&WM EIS) (DOE/EIS-0391) to be prepared without the potential for conflicting sitewide models. This EIS was issued in 2012, and the deferral ended with the guidance provided in the memorandum “Modeling to Support Regulatory Decision Making at Hanford” (Williams, 2012), issued with the aim of ensuring that subsequent modeling is consistent with the EIS.

  10. Efficient Storage Scheme of Covariance Matrix during Inverse Modeling

    NASA Astrophysics Data System (ADS)

    Mao, D.; Yeh, T. J.

    2013-12-01

    During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries the information about the geologic structure. Its update during iterations reflects the decrease of uncertainty with the incorporation of observed data. For large-scale problems, its storage and update cost too much memory and computational resources. In this study, we propose a new efficient scheme for its storage and update. Compressed Sparse Column (CSC) format is utilized to store the covariance matrix, and users can choose how much data they prefer to store based on correlation scales, since data beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated. The off-diagonal terms are calculated and updated based on shortened correlation scales with a pre-assigned exponential model. The correlation scales are shortened by a coefficient, e.g. 0.95, every iteration to reflect the decrease of uncertainty. There is no universal coefficient for all problems, and users are encouraged to experiment. This new scheme is first tested on 1D examples, and the estimated results and uncertainty are compared with the traditional full-storage method. Finally, a large-scale numerical model is used to validate the new scheme.
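
    A sketch of the storage idea as we read the abstract: keep only entries within a few correlation scales in CSC format and recompute off-diagonals from an exponential model with a shrinking scale. The grid, variance, and cutoff below are our assumptions; the 0.95 factor follows the abstract.

    ```python
    import numpy as np
    from scipy.sparse import csc_matrix

    x = np.linspace(0.0, 100.0, 201)           # 1D grid of unknowns
    scale, variance, cutoff = 10.0, 1.0, 3.0   # correlation scale, sill, cutoff

    def build_cov(scale):
        rows, cols, vals = [], [], []
        for i, xi in enumerate(x):
            j = np.flatnonzero(np.abs(x - xi) <= cutoff * scale)  # sparse band
            rows.extend([i] * len(j))
            cols.extend(j.tolist())
            vals.extend(variance * np.exp(-np.abs(x[j] - xi) / scale))
        return csc_matrix((vals, (rows, cols)), shape=(len(x), len(x)))

    C = build_cov(scale)
    for _ in range(5):                  # iterations of inverse modeling
        scale *= 0.95                   # uncertainty shrinks each iteration
        C = build_cov(scale)            # off-diagonals from the shorter scale
    print(C.nnz, "stored entries vs", len(x) ** 2, "dense")
    ```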

  11. Recommendations of diagnosis and treatment of pleural effusion. Update.

    PubMed

    Villena Garrido, Victoria; Cases Viedma, Enrique; Fernández Villar, Alberto; de Pablo Gafas, Alicia; Pérez Rodríguez, Esteban; Porcel Pérez, José Manuel; Rodríguez Panadero, Francisco; Ruiz Martínez, Carlos; Salvatierra Velázquez, Angel; Valdés Cuadrado, Luis

    2014-06-01

    Although during the last few years there have been several important changes in diagnostic and therapeutic methods, pleural effusion is still one of the diseases that respiratory specialists have to evaluate frequently. The aim of this paper is to update the knowledge about pleural effusions, rather than to review the causes of pleural diseases exhaustively. These recommendations treat at greater length the subjects with direct clinical usefulness, but a brief update on other pleural diseases has also been included. Among the main scientific advances covered are thoracic ultrasonography, intrapleural fibrinolytics, pleurodesis agents, and new pleural drainage techniques. Copyright © 2013 SEPAR. Published by Elsevier Espana. All rights reserved.

  12. 1999 update of the Arizona highway cost allocation study

    DOT National Transportation Integrated Search

    1999-08-01

    The purpose of this report was to update the Arizona highway cost allocation study and to evaluate the alternative of using the new FHWA cost allocation model as a replacement. The update revealed that the repeal of Arizona's weight-distance tax has l...

  13. Aircraft engine sensor fault diagnostics using an on-line OBEM update method.

    PubMed

    Liu, Xiaofeng; Xue, Naiyu; Yuan, Ye

    2017-01-01

    This paper proposed a method to update the on-line health reference baseline of the On-Board Engine Model (OBEM) to maintain the effectiveness of an in-flight aircraft sensor Fault Detection and Isolation (FDI) system, in which a Hybrid Kalman Filter (HKF) was incorporated. Generated from a rapid in-flight engine degradation, a large health condition mismatch between the engine and the OBEM can corrupt the performance of the FDI. Therefore, it is necessary to update the OBEM online when a rapid degradation occurs, but the FDI system will lose estimation accuracy if the estimation and update are running simultaneously. To solve this problem, the health reference baseline for a nonlinear OBEM was updated using the proposed channel controller method. Simulations based on the turbojet engine Linear-Parameter Varying (LPV) model demonstrated the effectiveness of the proposed FDI system in the presence of substantial degradation, and the channel controller can ensure that the update process finishes without interference from a single sensor fault.

  14. Aircraft engine sensor fault diagnostics using an on-line OBEM update method

    PubMed Central

    Liu, Xiaofeng; Xue, Naiyu; Yuan, Ye

    2017-01-01

    This paper proposed a method to update the on-line health reference baseline of the On-Board Engine Model (OBEM) to maintain the effectiveness of an in-flight aircraft sensor Fault Detection and Isolation (FDI) system, in which a Hybrid Kalman Filter (HKF) was incorporated. Generated from a rapid in-flight engine degradation, a large health condition mismatch between the engine and the OBEM can corrupt the performance of the FDI. Therefore, it is necessary to update the OBEM online when a rapid degradation occurs, but the FDI system will lose estimation accuracy if the estimation and update are running simultaneously. To solve this problem, the health reference baseline for a nonlinear OBEM was updated using the proposed channel controller method. Simulations based on the turbojet engine Linear-Parameter Varying (LPV) model demonstrated the effectiveness of the proposed FDI system in the presence of substantial degradation, and the channel controller can ensure that the update process finishes without interference from a single sensor fault. PMID:28182692

  15. Forecasting the (un)productivity of the 2014 M 6.0 South Napa aftershock sequence

    USGS Publications Warehouse

    Llenos, Andrea L.; Michael, Andrew J.

    2017-01-01

    The 24 August 2014 Mw 6.0 South Napa mainshock produced fewer aftershocks than expected for a California earthquake of its magnitude. In the first 4.5 days, only 59 M≥1.8 aftershocks occurred, the largest of which was an M 3.9 that happened a little over two days after the mainshock. We investigate the aftershock productivity of the South Napa sequence and compare it with other M≥5.5 California strike‐slip mainshock–aftershock sequences. While the productivity of the South Napa sequence is among the lowest, northern California mainshocks generally have fewer aftershocks than mainshocks further south, although the productivities vary widely in both regions. An epidemic‐type aftershock sequence (ETAS) model (Ogata, 1988) fit to Napa seismicity from 1980 to 23 August 2014 fits the sequence well and suggests that low‐productivity sequences are typical of this area. Utilizing regional variations in productivity could improve operational earthquake forecasting (OEF) by improving the model used immediately after the mainshock. We show this by comparing the daily rate of M≥2 aftershocks to forecasts made with the generic California model (Reasenberg and Jones, 1989; hereafter, RJ89), RJ89 models with productivity updated daily, a generic California ETAS model, an ETAS model based on premainshock seismicity, and ETAS models updated daily following the mainshock. RJ89 models for which only the productivity is updated provide better forecasts than the generic RJ89 California model, and the Napa‐specific ETAS models forecast the aftershock rates more accurately than either generic model. Therefore, forecasts that use localized initial parameters and that rapidly update the productivity may be better for OEF than using a generic model and/or updating all parameters.
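
    For reference, the RJ89 model forecasts the rate of aftershocks of magnitude M or larger at time t after a mainshock of magnitude M_m in the standard form (notation ours; see Reasenberg and Jones, 1989):

    ```latex
    \lambda(t, M) \;=\; 10^{\,a + b\,(M_m - M)}\,(t + c)^{-p}
    ```

    Updating "only the productivity", as described above, amounts to re-estimating the a-value from the unfolding sequence while holding b, c, and p fixed.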

  16. Diversity in plant hydraulic traits explains seasonal and inter-annual variations of vegetation dynamics in seasonally dry tropical forests.

    PubMed

    Xu, Xiangtao; Medvigy, David; Powers, Jennifer S; Becknell, Justin M; Guan, Kaiyu

    2016-10-01

    We assessed whether diversity in plant hydraulic traits can explain the observed diversity in plant responses to water stress in seasonally dry tropical forests (SDTFs). The Ecosystem Demography model 2 (ED2) was updated with a trait-driven mechanistic plant hydraulic module, as well as novel drought-phenology and plant water stress schemes. Four plant functional types were parameterized on the basis of meta-analysis of plant hydraulic traits. Simulations from both the original and the updated ED2 were evaluated against 5 yr of field data from a Costa Rican SDTF site and remote-sensing data over Central America. The updated model generated realistic plant hydraulic dynamics, such as leaf water potential and stem sap flow. Compared with the original ED2, predictions from our novel trait-driven model matched better with observed growth, phenology and their variations among functional groups. Most notably, the original ED2 produced unrealistically small leaf area index (LAI) and underestimated cumulative leaf litter. Both of these biases were corrected by the updated model. The updated model was also better able to simulate spatial patterns of LAI dynamics in Central America. Plant hydraulic traits are intercorrelated in SDTFs. Mechanistic incorporation of plant hydraulic traits is necessary for the simulation of spatiotemporal patterns of vegetation dynamics in SDTFs in vegetation models. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.

  17. Fast model updating coupling Bayesian inference and PGD model reduction

    NASA Astrophysics Data System (ADS)

    Rubio, Paul-Baptiste; Louf, François; Chamoin, Ludovic

    2018-04-01

    The paper focuses on a coupled Bayesian-Proper Generalized Decomposition (PGD) approach for the real-time identification and updating of numerical models. The purpose is to use the most general case of Bayesian inference theory in order to address inverse problems and to deal with different sources of uncertainties (measurement and model errors, stochastic parameters). In order to do so with a reasonable CPU cost, the idea is to replace the direct model called for Monte-Carlo sampling by a PGD reduced model, and in some cases directly compute the probability density functions from the obtained analytical formulation. This procedure is first applied to a welding control example with the updating of a deterministic parameter. In the second application, the identification of a stochastic parameter is studied through a glued assembly example.
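
    A conceptual sketch of the speed-up: inside a Metropolis sampler, the likelihood is evaluated with a cheap surrogate of the forward model (standing in for the PGD reduced model) rather than the full solver. The toy forward model, noise level, and prior below are assumptions for illustration, not the paper's welding or assembly examples.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    y_obs = 1.9                                 # measured response
    sigma = 0.1                                 # measurement noise std

    surrogate = lambda theta: 2.0 * theta       # cheap reduced model u(theta)
    log_post = lambda th: (-0.5 * ((y_obs - surrogate(th)) / sigma) ** 2
                           - 0.5 * th ** 2)     # Gaussian prior N(0, 1)

    theta, samples = 0.0, []
    for _ in range(5000):
        prop = theta + 0.2 * rng.normal()       # random-walk proposal
        if np.log(rng.random()) < log_post(prop) - log_post(theta):
            theta = prop                        # accept
        samples.append(theta)
    print("posterior mean:", np.mean(samples[1000:]))
    ```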

  18. Uncertainty in training image-based inversion of hydraulic head data constrained to ERT data: Workflow and case study

    NASA Astrophysics Data System (ADS)

    Hermans, Thomas; Nguyen, Frédéric; Caers, Jef

    2015-07-01

    In inverse problems, investigating uncertainty in the posterior distribution of model parameters is as important as matching data. In recent years, most efforts have focused on techniques to sample the posterior distribution with reasonable computational costs. Within a Bayesian context, this posterior depends on the prior distribution. However, most studies ignore modeling the prior with realistic geological uncertainty. In this paper, we propose a workflow inspired by a Popper-Bayes philosophy that data should first be used to falsify models, then only be considered for matching. The workflow consists of three steps: (1) in defining the prior, we interpret multiple alternative geological scenarios from literature (architecture of facies) and site-specific data (proportions of facies). Prior spatial uncertainty is modeled using multiple-point geostatistics, where each scenario is defined using a training image. (2) We validate these prior geological scenarios by simulating electrical resistivity tomography (ERT) data on realizations of each scenario and comparing them to field ERT in a lower-dimensional space. In this second step, the idea is to probabilistically falsify scenarios with ERT, meaning that scenarios which are incompatible receive an updated probability of zero while compatible scenarios receive a nonzero updated belief. (3) We constrain the hydrogeological model with hydraulic head and ERT using a stochastic search method. The workflow is applied to synthetic and field case studies in an alluvial aquifer. This study highlights the importance of considering and estimating prior uncertainty (without data) through a process of probabilistic falsification.

  19. Guest Editor's introduction: Special issue on distributed virtual environments

    NASA Astrophysics Data System (ADS)

    Lea, Rodger

    1998-09-01

    Distributed virtual environments (DVEs) combine technology from 3D graphics, virtual reality and distributed systems to provide an interactive 3D scene that supports multiple participants. Each participant has a representation in the scene, often known as an avatar, and is free to navigate through the scene and interact with both the scene and other viewers of the scene. Changes to the scene, for example, position changes of one avatar as the associated viewer navigates through the scene, or changes to objects in the scene via manipulation, are propagated in real time to all viewers. This ensures that all viewers of a shared scene `see' the same representation of it, allowing sensible reasoning about the scene. Early work on such environments was restricted to their use in simulation, in particular in military simulation. However, over recent years a number of interesting and potentially far-reaching attempts have been made to exploit the technology for a range of other uses, including: Social spaces. Such spaces can be seen as logical extensions of the familiar text chat space. In 3D social spaces avatars, representing participants, can meet in shared 3D scenes and in addition to text chat can use visual cues and even in some cases spatial audio. Collaborative working. A number of recent projects have attempted to explore the use of DVEs to facilitate computer-supported collaborative working (CSCW), where the 3D space provides a context and work space for collaboration. Gaming. The shared 3D space is already familiar, albeit in a constrained manner, to the gaming community. DVEs are a logical superset of existing 3D games and can provide a rich framework for advanced gaming applications. e-commerce. The ability to navigate through a virtual shopping mall and to look at, and even interact with, 3D representations of articles has appealed to the e-commerce community as it searches for the best method of presenting merchandise to electronic consumers. The technology needed to support these systems crosses a number of disciplines in computer science. These include, but are certainly not limited to, real-time graphics for the accurate and realistic representation of scenes, group communications for the efficient update of shared consistent scene data, user interface modelling to exploit the use of the 3D representation and multimedia systems technology for the delivery of streamed graphics and audio-visual data into the shared scene. It is this intersection of technologies and the overriding need to provide visual realism that places such high demands on the underlying distributed systems infrastructure and makes DVEs such fertile ground for distributed systems research. Two examples serve to show how DVE developers have exploited the unique aspects of their domain. Communications. The usual tension between latency and throughput is particularly noticeable within DVEs. To ensure the timely update of multiple viewers of a particular scene requires that such updates be propagated quickly. However, the sheer volume of changes to any one scene calls for techniques that minimize the number of distinct updates that are sent to the network. Several techniques have been used to address this tension; these include the use of multicast communications, and in particular multicast in wide-area networks to reduce actual message traffic. Multicast has been combined with general group communications to partition updates to related objects or users of a scene. 
A less traditional approach has been the use of dead reckoning whereby a client application that visualizes the scene calculates position updates by extrapolating movement based on previous information. This allows the system to reduce the number of communications needed to update objects that move in a stable manner within the scene. Scaling. DVEs, especially those used for social spaces, are required to support large numbers of simultaneous users in potentially large shared scenes. The desire for scalability has driven different architectural designs, for example, the use of fully distributed architectures which scale well but often suffer performance costs versus centralized and hierarchical architectures in which the inverse is true. However, DVEs have also exploited the spatial nature of their domain to address scalability and have pioneered techniques that exploit the semantics of the shared space to reduce data updates and so allow greater scalability. Several of the systems reported in this special issue apply a notion of area of interest to partition the scene and so reduce the participants in any data updates. The specification of area of interest differs between systems. One approach has been to exploit a geographical notion, i.e. a regular portion of a scene, or a semantic unit, such as a room or building. Another approach has been to define the area of interest as a spatial area associated with an avatar in the scene. The five papers in this special issue have been chosen to highlight the distributed systems aspects of the DVE domain. The first paper, on the DIVE system, described by Emmanuel Frécon and Mårten Stenius explores the use of multicast and group communication in a fully peer-to-peer architecture. The developers of DIVE have focused on its use as the basis for collaborative work environments and have explored the issues associated with maintaining and updating large complicated scenes. The second paper, by Hiroaki Harada et al, describes the AGORA system, a DVE concentrating on social spaces and employing a novel communication technique that incorporates position update and vector information to support dead reckoning. The paper by Simon Powers et al explores the application of DVEs to the gaming domain. They propose a novel architecture that separates out higher-level game semantics - the conceptual model - from the lower-level scene attributes - the dynamic model, both running on servers, from the actual visual representation - the visual model - running on the client. They claim a number of benefits from this approach, including better predictability and consistency. Wolfgang Broll discusses the SmallView system which is an attempt to provide a toolkit for DVEs. One of the key features of SmallView is a sophisticated application level protocol, DWTP, that provides support for a variety of communication models. The final paper, by Chris Greenhalgh, discusses the MASSIVE system which has been used to explore the notion of awareness in the 3D space via the concept of `auras'. These auras define an area of interest for users and support a mapping between what a user is aware of, and what data update rate the communications infrastructure can support. We hope that this selection of papers will serve to provide a clear introduction to the distributed system issues faced by the DVE community and the approaches they have taken in solving them. 
    Finally, we wish to thank Hubert Le Van Gong for his tireless efforts in pulling together all these papers, and both the referees and the authors of the papers for their time and effort in ensuring that their contributions teased out the interesting distributed systems issues for this special issue.

  20. Captions, Consistency, Creativity, and the Consensual Assessment Technique: New Evidence of Reliability

    ERIC Educational Resources Information Center

    Kaufman, James C.; Lee, Joohyun; Baer, John; Lee, Soonmook

    2007-01-01

    The consensual assessment technique (CAT) is a measurement tool for creativity research in which appropriate experts evaluate creative products [Amabile, T. M. (1996). "Creativity in context: Update to the social psychology of creativity." Boulder, CO: Westview]. However, the CAT is hampered by the time-consuming nature of the products (asking…

  1. A Meta-Analysis of the Effectiveness of Alternative Assessment Techniques

    ERIC Educational Resources Information Center

    Gozuyesil, Eda; Tanriseven, Isil

    2017-01-01

    Purpose: Recent trends have encouraged the use of alternative assessment tools in class in line with the recommendations made by the updated curricula. It is of great importance to understand how alternative assessment affects students' academic outcomes and which techniques are most effective in which contexts. This study aims to examine the…

  2. Update on rehabilitation in multiple sclerosis.

    PubMed

    Donzé, Cécile

    2015-04-01

    Given that mobility impairment is a hallmark of multiple sclerosis, people with this disease are likely to benefit from rehabilitation therapy throughout the course of their illness. This review provides an update on rehabilitation focused on balance and walking impairment. Classical rehabilitation focusing on muscle rehabilitation and neurotherapeutic facilitation is effective and recommended. Other techniques, namely transcutaneous neurostimulation, repetitive magnetic stimulation, electromagnetic therapy, whole-body vibration and robot-assisted gait rehabilitation, have not proved superior and need more studies before conclusions can be drawn. Cooling therapy, hydrotherapy, orthoses and textured insoles could complement other techniques in specific conditions. Multidisciplinary rehabilitation programmes provide positive effects and high satisfaction for patients with multiple sclerosis but need more evaluation. New technologies using serious games and telerehabilitation seem to be interesting techniques to promote physical activity, self-management and quality of life. Rehabilitation, like any other therapy, needs regular clinical evaluation to adapt the programme and propose appropriate techniques. Moreover, the objective of rehabilitation needs to be decided with the patient, with realistic expectations. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  3. Digital education and dynamic assessment of tongue diagnosis based on Mashup technique.

    PubMed

    Tsai, Chin-Chuan; Lo, Yen-Cheng; Chiang, John Y; Sainbuyan, Natsagdorj

    2017-01-24

    To assess digital education and dynamic assessment of tongue diagnosis based on the Mashup technique (DEDATD), which adapts to each user's answering pattern and provides pertinent information tailored to the user's specific needs, supplemented by teaching materials constantly updated through the Mashup technique. Fifty-four undergraduate students were tested with the developed DEDATD. The efficacy of the DEDATD was evaluated based on pre- and post-test performance, with interleaved training sessions targeting the weaknesses of the student under test. The t-test demonstrated a significant difference in scores between the pre- and post-test sessions and a positive correlation between scores gained and length of time spent on learning, while there were no significant differences between gender and post-test score, or between students' year in school and the progress in scores gained. DEDATD, coupled with the Mashup technique, could provide updated materials filtered from diverse sources located across the network. The dynamic assessment could address each individual learner's needs by offering custom-made learning materials. DEDATD represents a great improvement over traditional teaching methods.

  4. Key on demand (KoD) for software-defined optical networks secured by quantum key distribution (QKD).

    PubMed

    Cao, Yuan; Zhao, Yongli; Colman-Meixner, Carlos; Yu, Xiaosong; Zhang, Jie

    2017-10-30

    Software-defined optical networking (SDON) will become the next-generation optical network architecture. However, the optical layer and control layer of SDON are vulnerable to cyberattacks. While data encryption is an effective method to minimize the negative effects of cyberattacks, secure key exchange is its major challenge, which can be addressed by the quantum key distribution (QKD) technique. Hence, in this paper we discuss the integration of QKD with WDM optical networks to secure the SDON architecture by introducing a novel key-on-demand (KoD) scheme which is enabled by a novel routing, wavelength and key assignment (RWKA) algorithm. The QKD over SDON with KoD model follows two steps to provide security: i) quantum key pool (QKP) construction for securing the control channels (CChs) and data channels (DChs); ii) the KoD scheme uses the RWKA algorithm to allocate and update secret keys for different security requirements. To test our model, we define a security probability index which measures the security gain in CChs and DChs. Simulation results indicate that the security performance of CChs and DChs can be enhanced by provisioning sufficient secret keys in QKPs and performing key updating in consideration of potential cyberattacks. Also, KoD is beneficial for achieving a positive balance between security requirements and key resource usage.
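    The key-on-demand idea of drawing session keys from pre-filled quantum key pools can be sketched briefly. The pool size, refill behaviour and the security-level policy below are invented for illustration; the paper's RWKA algorithm additionally selects a route and wavelength, which is omitted here:

        # Key on demand (KoD) from a quantum key pool (QKP): the QKD layer fills
        # the pool; channels draw keys at a rate matching their security needs.
        from collections import deque
        import os

        class QuantumKeyPool:
            def __init__(self, n_keys=16, key_bytes=32):
                self.pool = deque(os.urandom(key_bytes) for _ in range(n_keys))

            def refill(self, n_keys, key_bytes=32):
                # Stand-in for new keys delivered by the QKD links.
                self.pool.extend(os.urandom(key_bytes) for _ in range(n_keys))

            def draw(self):
                if not self.pool:
                    raise RuntimeError("QKP exhausted: block the request until QKD refills")
                return self.pool.popleft()

        def keys_for_channel(pool, security_level):
            """Higher security levels imply more frequent key updates, hence more keys."""
            return [pool.draw() for _ in range(security_level)]

        qkp = QuantumKeyPool()
        cch_keys = keys_for_channel(qkp, security_level=3)  # control channel: stricter
        dch_keys = keys_for_channel(qkp, security_level=1)  # data channel: relaxed
        print(len(cch_keys), len(dch_keys), "keys drawn;", len(qkp.pool), "left in the pool")

    Provisioning "sufficient secret keys" then amounts to keeping each pool's refill rate above the aggregate draw rate of the channels it serves.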

  5. On the performance of updating Stochastic Dynamic Programming policy using Ensemble Streamflow Prediction in a snow-covered region

    NASA Astrophysics Data System (ADS)

    Martin, A.; Pascal, C.; Leconte, R.

    2014-12-01

    Stochastic Dynamic Programming (SDP) is known to be an effective technique for finding the optimal operating policy of hydropower systems. In order to improve the performance of SDP, this project evaluates the impact of re-updating the policy at every time step by using Ensemble Streamflow Prediction (ESP). We present a case study of the Kemano hydropower system on the Nechako River in British Columbia, Canada. Managed by Rio Tinto Alcan (RTA), this system is subject to large streamflow volumes in spring due to the substantial snow accumulation during the winter season. Therefore, the operating policy should not only maximize production but also minimize the risk of flooding. The hydrological behavior of the system is simulated with CEQUEAU, a distributed and deterministic hydrological model developed by the Institut national de la recherche scientifique - Eau, Terre et Environnement (INRS-ETE) in Quebec, Canada. At each decision time step, CEQUEAU is used to generate ESP scenarios based on historical meteorological sequences and the current state of the hydrological model. These scenarios are used in the SDP to optimize a new release policy for the next time steps. This routine is then repeated over the entire simulation period. Results are compared with those obtained by using SDP on historical inflow scenarios.
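    The routine of re-optimizing the policy at every time step can be illustrated with a toy reservoir. Everything below is invented for the sketch: the reward (release value minus a flood penalty), the gamma-distributed inflow ensembles standing in for CEQUEAU-driven ESP, and the coarse storage/release grids:

        # Rolling re-optimization: at each step, draw a fresh inflow ensemble,
        # re-run backward dynamic programming, and apply the first decision.
        import numpy as np

        rng = np.random.default_rng(0)
        S = np.linspace(0, 100, 21)       # discretized storage grid
        R = np.linspace(0, 30, 7)         # candidate releases
        S_MAX, HORIZON, N_SCEN = 100.0, 12, 20

        def reward(release, storage):
            return release - 10.0 * max(storage - S_MAX, 0.0)  # flood penalty

        def esp_scenarios(t):             # stand-in for CEQUEAU-based ESP
            return rng.gamma(shape=2.0, scale=5.0, size=(N_SCEN, HORIZON - t))

        def sdp_policy(s0, inflows):
            """Backward induction over the storage grid; returns the first release."""
            V = np.zeros(len(S))          # terminal value
            best_first = np.zeros(len(S))
            for t in range(inflows.shape[1] - 1, -1, -1):
                V_new = np.empty(len(S))
                for i, s in enumerate(S):
                    q = np.array([np.mean([reward(r, s + q_in - r)
                                           + np.interp(np.clip(s + q_in - r, 0, S_MAX), S, V)
                                           for q_in in inflows[:, t]])
                                  for r in R])
                    V_new[i] = q.max()
                    if t == 0:
                        best_first[i] = R[q.argmax()]
                V = V_new
            return np.interp(s0, S, best_first)

        storage = 50.0
        for t in range(HORIZON):          # re-update the policy at every time step
            release = sdp_policy(storage, esp_scenarios(t))
            inflow = rng.gamma(2.0, 5.0)  # "observed" inflow for this step
            storage = float(np.clip(storage + inflow - release, 0, S_MAX))
            print(f"t={t:2d} release={release:5.1f} storage={storage:5.1f}")

    The essential point is the outer loop: the backward induction is rerun with a fresh ensemble at every decision step, so the first release always reflects the current state of the hydrological model.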

  6. Updating Known Distribution Models for Forecasting Climate Change Impact on Endangered Species

    PubMed Central

    Muñoz, Antonio-Román; Márquez, Ana Luz; Real, Raimundo

    2013-01-01

    To plan endangered species conservation and to design adequate management programmes, it is necessary to predict their distributional response to climate change, especially under the current situation of rapid change. However, these predictions are customarily made by relating the distribution of the species de novo to climatic conditions, with no regard to previously available knowledge about the factors affecting the species' distribution. We propose to take advantage of known species distribution models, updating them with the variables yielded by climatic models before projecting them into the future. To exemplify our proposal, the availability of suitable habitat across Spain for the endangered Bonelli's Eagle (Aquila fasciata) was modelled by updating a pre-existing model based on current climate and topography with a combination of different general circulation models and Special Report on Emissions Scenarios. Our results suggest that the main threat to this endangered species would not be climate change, since all forecasting models show that its distribution will be maintained and increased in mainland Spain throughout the 21st century. We remark on the importance of linking conservation biology with distribution modelling by updating existing models, frequently available for endangered species, that consider all the known factors conditioning the species' distribution, instead of building new models based on climate change variables only. PMID:23840330

  8. An adaptive ARX model to estimate the RUL of aluminum plates based on its crack growth

    NASA Astrophysics Data System (ADS)

    Barraza-Barraza, Diana; Tercero-Gómez, Víctor G.; Beruvides, Mario G.; Limón-Robles, Jorge

    2017-01-01

    A wide variety of Condition-Based Maintenance (CBM) techniques deal with the problem of predicting the time of an asset fault. Most statistical approaches rely on historical failure data, which might not be available in several practical situations. To address this issue, practitioners might require self-starting approaches that consider only the available knowledge about the current degradation process and the asset operating context to update the prognostic model. Some authors use autoregressive (AR) models for this purpose; these are adequate when the asset operating context is constant, but if it is variable, the accuracy of the models can be affected. In this paper, three autoregressive models with exogenous variables (ARX) were constructed, and their capability to estimate the remaining useful life (RUL) of a process was evaluated on the aluminum crack-growth problem. An existing stochastic model of aluminum crack growth was implemented and used to assess the RUL estimation performance of the proposed ARX models through extensive Monte Carlo simulations. Point and interval estimates were made based only on individual history, behavior, operating conditions and failure thresholds. Both analytic and bootstrapping techniques were used in the estimation process. Finally, by including recursive parameter estimation and a forgetting factor, the ARX methodology adapts to changing operating conditions and maintains the focus on the current degradation level of an asset.
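    The adaptive ingredient, recursive parameter estimation with a forgetting factor, can be shown on a synthetic ARX(1,1) crack-growth series. The data generator, the model orders, the forgetting factor and the failure threshold below are illustrative choices, not those of the paper:

        # Recursive least squares (RLS) with a forgetting factor for an ARX(1,1)
        # model: regressor = [previous crack length, exogenous load input].
        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.98):
            """One RLS step; lam < 1 geometrically discounts old observations."""
            k = P @ phi / (lam + phi @ P @ phi)      # gain vector
            theta = theta + k * (y - phi @ theta)    # parameter update
            P = (P - np.outer(k, phi @ P)) / lam     # covariance update
            return theta, P

        rng = np.random.default_rng(1)
        n = 200
        u = np.linspace(0.8, 1.2, n)                 # exogenous input, e.g. load level
        a = np.empty(n); a[0] = 1.0                  # synthetic crack length
        for t in range(1, n):                        # dynamics drift over time
            a[t] = (1.01 + 0.0005 * t) * a[t - 1] + 0.05 * u[t] \
                   + 0.01 * rng.standard_normal()

        theta, P = np.zeros(2), 1e3 * np.eye(2)      # [AR coefficient, input gain]
        for t in range(1, n):
            phi = np.array([a[t - 1], u[t]])
            theta, P = rls_update(theta, P, phi, a[t])

        # Crude point RUL: iterate the fitted one-step model to a failure threshold.
        threshold, a_hat, rul = 3.0 * a[-1], a[-1], 0
        while a_hat < threshold and rul < 10_000:
            a_hat = theta @ np.array([a_hat, u[-1]])
            rul += 1
        print("estimated parameters:", theta.round(4), " RUL (steps):", rul)

    Because the forgetting factor keeps the estimates focused on recent data, the fitted model tracks the drifting growth dynamics, which is what lets the methodology follow a changing operating context.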

  9. Experimental validation of a numerical 3-D finite model applied to wind turbines design under vibration constraints: TREVISE platform

    NASA Astrophysics Data System (ADS)

    Sellami, Takwa; Jelassi, Sana; Darcherif, Abdel Moumen; Berriri, Hanen; Mimouni, Med Faouzi

    2018-04-01

    With the advancement of wind turbines towards complex structures, the need for trustworthy structural models has become more apparent. Hence, the vibration characteristics of the wind turbine components, like the blades and the tower, have to be extracted under vibration constraints. Although extracting the modal properties of blades is a simple task, calculating precise modal data for the whole wind turbine coupled to its tower/foundation is still a perplexing task. In this framework, this paper focuses on the structural modeling approach for modern commercial micro-turbines. Thus, the structural model of a wind turbine of complex design, the Rutland 504, is established based on both experimental and numerical methods. A three-dimensional (3-D) numerical model of the structure was set up based on the finite element method using the academic finite element analysis software ANSYS. To validate the created model, experimental vibration tests were carried out using the vibration test system of the TREVISE platform at ECAM-EPMI. The tests were based on the experimental modal analysis (EMA) technique, which is one of the most efficient techniques for identifying structural parameters. Indeed, the poles and residues of the frequency response functions (FRF) between input and output spectra were calculated to extract the mode shapes and the natural frequencies of the structure. Based on the obtained modal parameters, the numerical model was updated.

  10. Inter-firm Networks, Organizational Learning and Knowledge Updating: An Empirical Study

    NASA Astrophysics Data System (ADS)

    Zhang, Su-rong; Wang, Wen-ping

    In the era of the knowledge-based economy, in which information technology develops rapidly, the rate of knowledge updating has become a critical factor for enterprises to gain competitive advantage. We build an interactional theoretical model among inter-firm networks, organizational learning and knowledge updating, and demonstrate it with an empirical study. The result shows that inter-firm networks and organizational learning are the sources of knowledge updating.

  11. The Cancer Family Caregiving Experience: An Updated and Expanded Conceptual Model

    PubMed Central

    Fletcher, Barbara Swore; Miaskowski, Christine; Given, Barbara; Schumacher, Karen

    2011-01-01

    Objective The decade from 2000 to 2010 was an era of tremendous growth in family caregiving research specific to the cancer population. This research has implications for how cancer family caregiving is conceptualized, yet the most recent comprehensive model of cancer family caregiving was published ten years ago. Our objective was to develop an updated and expanded comprehensive model of the cancer family caregiving experience, derived from the concepts and variables used in research during the past ten years. Methods A conceptual model was developed based on cancer family caregiving research published from 2000 to 2010. Results Our updated and expanded model has three main elements: 1) the stress process, 2) contextual factors, and 3) the cancer trajectory. Emerging ways of conceptualizing the relationships between and within model elements are addressed, as well as an emerging focus on caregiver-patient dyads as the unit of analysis. Conclusions Cancer family caregiving research has grown dramatically since 2000, resulting in a greatly expanded conceptual landscape. This updated and expanded model of the cancer family caregiving experience synthesizes the conceptual implications of an international body of work and demonstrates tremendous progress in how cancer family caregiving research is conceptualized. PMID:22000812

  12. Comparing the impact of time displaced and biased precipitation estimates for online updated urban runoff models.

    PubMed

    Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen

    2013-01-01

    When an online runoff model is updated from system measurements, the requirements for the precipitation input change. Using rain gauge data as precipitation input, there will be a displacement between the time when the rain hits the gauge and the time when the rain hits the actual catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present in system measurements, the data assimilation scheme might already have updated the model to include the impact of a particular rain cell by the time the rain data are forced upon the model, which therefore ends up including the same rain twice in the model run. This paper compares the forecast accuracy of updated models using time-displaced rain input with that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either displaced in time or affected by a bias. The results show that, for a 10 minute forecast, time displacements of 5 and 10 minutes compare to biases of 60 and 100%, respectively, independent of the catchment's time of concentration.
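    A toy version of the comparison is easy to set up: route rain through a time-area unit hydrograph and measure the error introduced by a time-displaced input versus a constant-bias input. The unit hydrograph, rain statistics and displacement/bias values below are invented, so the numbers will not reproduce the paper's reported equivalences:

        # Time-area routing: runoff = convolution of rain with a unit hydrograph.
        import numpy as np

        rng = np.random.default_rng(2)
        uh = np.array([0.1, 0.3, 0.3, 0.2, 0.1])  # time-area unit hydrograph (sums to 1)
        rain = rng.gamma(0.3, 2.0, size=500)      # "true" rain over the catchment

        def runoff(p):
            return np.convolve(p, uh)[: len(p)]

        def delayed(p, k):                        # gauge-to-catchment travel time
            return np.concatenate([np.zeros(k), p[:-k]])

        q_true = runoff(rain)
        for shift in (1, 2):                      # time-displaced gauge input
            rmse = np.sqrt(np.mean((runoff(delayed(rain, shift)) - q_true) ** 2))
            print(f"displacement {shift} steps: RMSE = {rmse:.3f}")
        for bias in (1.6, 2.0):                   # constant multiplicative bias
            rmse = np.sqrt(np.mean((runoff(bias * rain) - q_true) ** 2))
            print(f"bias factor  {bias}:        RMSE = {rmse:.3f}")

    Matching entries across the two error columns is the kind of equivalence ("a given displacement behaves like a given bias") that the study quantifies with real rain series and an updated model.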

  13. Super-nodal methods for space-time kinetics

    NASA Astrophysics Data System (ADS)

    Mertyurek, Ugur

    The purpose of this research has been to develop an advanced Super-Nodal method to reduce the run time of 3-D core neutronics models, such as in the NESTLE reactor core simulator and FORMOSA nuclear fuel management optimization codes. Computational performance of the neutronics model is increased by reducing the number of spatial nodes used in the core modeling. However, as the number of spatial nodes decreases, the error in the solution increases. The Super-Nodal method reduces the error associated with the use of coarse nodes by providing a new set of cross sections and ADFs (Assembly Discontinuity Factors) for the new nodalization. These so-called homogenization parameters are obtained by employing a consistent collapsing technique. During this research a new type of singularity, namely the "fundamental mode singularity", is addressed in the ANM (Analytical Nodal Method) solution. The "Coordinate Shifting" approach is developed as a method to address this singularity. Also, the "Buckling Shifting" approach is developed as an alternative and more accurate method to address the zero buckling singularity, which is a more common and well-known singularity problem in the ANM solution. In the course of addressing the treatment of these singularities, an effort was made to provide better and more robust results from the Super-Nodal method by developing several new methods for determining the transverse leakage and the collapsed diffusion coefficient, which generally are the two main approximations in the ANM methodology. Unfortunately, the proposed new transverse leakage and diffusion coefficient approximations failed to provide a consistent improvement over the current methodology. However, improvement in the Super-Nodal solution is achieved by updating the homogenization parameters at several time points during a transient. The update is achieved by employing a refinement technique similar to pin-power reconstruction. A simple error analysis based on the relative residual in the 3-D few-group diffusion equation at the fine mesh level is also introduced in this work.

  14. Update on the Department of the Navy Systems Engineering Career Competency Model Acquisition Activities

    DTIC Science & Technology

    2016-04-30

    Update on the Department of the Navy Systems Engineering Career Competency Model Acquisition Activities. Clifford A. Whitcomb, Systems Engineering Professor, NPS; Corina White, Systems Engineering Research Associate, NPS; Karen Holness, Assistant Professor, NPS (Naval Postgraduate School, Monterey, CA).

  15. An Updated AP2 Beamline TURTLE Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gormley, M.; O'Day, S.

    1991-08-23

    This note describes a TURTLE model of the AP2 beamline. This model was created by D. Johnson and improved by J. Hangst. The authors of this note have made additional improvements which reflect recent element and magnet setting changes. The magnet characteristics measurements and survey data compiled to update the model will be presented. A printout of the actual TURTLE deck may be found in appendix A.

  16. Remote software upload techniques in future vehicles and their performance analysis

    NASA Astrophysics Data System (ADS)

    Hossain, Irina

    Updating software in vehicle Electronic Control Units (ECUs) will become a mandatory requirement for a variety of reasons, for example, to update or fix the functionality of an existing system, add new functionality, remove software bugs and keep up with ITS infrastructure. Software modules of advanced vehicles can be updated using the Remote Software Upload (RSU) technique. RSU employs an infrastructure-based wireless communication technique in which the software supplier sends the software to the targeted vehicle via a roadside Base Station (BS). However, security is critically important in RSU to avoid disasters due to malfunctions of the vehicle and to protect the proprietary algorithms from hackers, competitors or people with malicious intent. In this thesis, a mechanism for secure software upload in advanced vehicles is presented which employs mutual authentication of the software provider and the vehicle using a pre-shared authentication key before sending the software. The software packets are sent encrypted with a secret key, along with a Message Digest (MD). In order to increase the security level, it is proposed that the vehicle receive more than one copy of the software, each carrying the MD. The vehicle installs the new software only when it receives more than one identical copy of the software. In order to validate the proposition, analytical expressions for the average number of packet transmissions for a successful software update are determined. Different cases are investigated depending on the vehicle's buffer size and verification method. The analytical and simulation results show that it is sufficient to send two copies of the software to the vehicle to thwart any security attack while uploading the software. The above-mentioned unicast method for RSU is suitable when software needs to be uploaded to a single vehicle. Since multicasting is the most efficient method of group communication, updating software in the ECUs of a large number of vehicles could benefit from it. However, as with unicast RSU, meeting the security requirements of multicast communication, i.e., authenticity, confidentiality and integrity of the transmitted software and access control of the group members, is challenging. In this thesis, infrastructure-based mobile multicasting for RSU in vehicle ECUs is proposed, where an ECU receives the software from a remote software distribution center using the roadside BSs as gateways. The Vehicular Software Distribution Network (VSDN) is divided into small regions administered by a Regional Group Manager (RGM). Two multicast Group Key Management (GKM) techniques are proposed based on the degree of trust in the BSs, named the Fully-trusted (FT) and Semi-trusted (ST) systems. Analytical models are developed to find the multicast session establishment latency and the handover latency for these two protocols. The average latency to perform mutual authentication of the software vendor and a vehicle and to send the multicast session key during session initialization, and the handoff latency during a multicast session, are calculated. Analytical and simulation results show that the link establishment latency per vehicle of the proposed schemes is in the range of a few seconds, with the ST system requiring a few milliseconds more than the FT system. The handoff latency is also in the range of a few seconds, and in some cases the ST system requires less handoff time than the FT system.
Thus, it is possible to build an efficient GKM protocol without placing too much trust in the BSs.
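    The integrity mechanism at the heart of the unicast scheme (a keyed message digest on each copy, plus acceptance only after multiple identical copies) can be sketched as follows. HMAC-SHA256 stands in for the thesis's message digest, the packet encryption is elided, and key handling is reduced to a single pre-shared secret:

        # Vehicle-side verification for secure software upload: check each copy's
        # keyed digest, then require at least two identical valid copies.
        import hmac, hashlib, os

        SHARED_KEY = os.urandom(32)      # pre-shared secret between supplier and ECU

        def make_copy(software: bytes):
            """Supplier side: software plus its keyed message digest (MD)."""
            return software, hmac.new(SHARED_KEY, software, hashlib.sha256).digest()

        def ecu_accepts(copies):
            """ECU side: keep digest-valid copies; install only if >1 and identical."""
            valid = [sw for sw, md in copies
                     if hmac.compare_digest(
                         md, hmac.new(SHARED_KEY, sw, hashlib.sha256).digest())]
            return len(valid) >= 2 and all(sw == valid[0] for sw in valid)

        firmware = b"ecu-firmware-v2.1"
        good = [make_copy(firmware), make_copy(firmware)]
        bad = [make_copy(firmware),
               (b"tampered image", hmac.new(SHARED_KEY, b"x", hashlib.sha256).digest())]
        print(ecu_accepts(good))   # True : two identical, digest-valid copies
        print(ecu_accepts(bad))    # False: the second copy fails MD verification

    Requiring two digest-protected, identical copies forces an attacker to forge the keyed digest and corrupt both transmissions consistently, which matches the intuition behind the two-copy result above.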

  17. Service delivery innovation for hospital emergency management using rich organizational modelling.

    PubMed

    Dhakal, Yogit; Bhuiyan, Moshiur; Prasad, Pwc; Krishna, Aneesh

    2018-04-01

    The purpose of this article is to identify and assess service delivery issues within a hospital emergency department and propose an improved model to address them. Possible solutions and options for these issues are explored to determine the one that best fits the context. In this article, we have analysed the emergency department's organizational models through the i* strategic dependency and strategic rationale modelling techniques before proposing updated models that could potentially drive business process efficiencies. The results produced by the models, the framework and the improved patient journey in the emergency department were evaluated against statistical data from a reputable government health organization, to ensure that the key elements of the issues, such as wait time, stay time/throughput, workload and human resources, are resolved. The result of the evaluation was taken as the basis for determining the success of the project. Based on these results, the article recommends implementing the concept in an actual scenario, where a positive result is achievable.

  18. Comparison of results of an obstacle resolving microscale model with wind tunnel data

    NASA Astrophysics Data System (ADS)

    Grawe, David; Schlünzen, K. Heinke; Pascheke, Frauke

    2013-11-01

    The microscale transport and flow model MITRAS has been improved, and a new technique has been implemented to improve numerical stability for complex obstacle configurations. Results of the updated version have been compared with wind tunnel data using an evaluation method that has been established for simple obstacle configurations. MITRAS is part of the M-SYS model system for the assessment of ambient air quality. A comparison of model results for the flow field against quality-ensured wind tunnel data has been carried out for both idealised and realistic test cases. Results of the comparison show very good agreement of the wind field for most test cases and identify areas of possible improvement of the model. The evaluated MITRAS results can be used as input data for the M-SYS microscale chemistry model MICTM. This paper describes how such a comparison can be carried out for simple as well as realistic obstacle configurations and what difficulties arise.

  19. 77 FR 74355 - Approval of Air Quality Implementation Plans; California; San Joaquin Valley; Attainment Plan for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-14

    Commitment to update the air quality modeling in the San Joaquin Valley 8-Hour Ozone SIP by December 31, 2014, to reflect emissions inventory improvements.

  20. PRMS-IV, the precipitation-runoff modeling system, version 4

    USGS Publications Warehouse

    Markstrom, Steven L.; Regan, R. Steve; Hay, Lauren E.; Viger, Roland J.; Webb, Richard M.; Payn, Robert A.; LaFontaine, Jacob H.

    2015-01-01

    Computer models that simulate the hydrologic cycle at a watershed scale facilitate assessment of the effects of variability in climate, biota, geology, and human activities on water availability and flow. This report describes an updated version of the Precipitation-Runoff Modeling System. The Precipitation-Runoff Modeling System is a deterministic, distributed-parameter, physical-process-based modeling system developed to evaluate the response of various combinations of climate and land use on streamflow and general watershed hydrology. Several new model components were developed, and all existing components were updated, to enhance performance and supportability. This report describes the history, application, concepts, organization, and mathematical formulation of the Precipitation-Runoff Modeling System and its model components. This updated version provides improvements in (1) system flexibility for integrated science, (2) verification of conservation of water during simulation, (3) methods for spatial distribution of climate boundary conditions, and (4) methods for simulation of soil-water flow and storage.

  1. Agent Communication for Dynamic Belief Update

    NASA Astrophysics Data System (ADS)

    Kobayashi, Mikito; Tojo, Satoshi

    Thus far, various formalizations of rational/logical agent models have been proposed. In this paper, we include the notions of communication channel and belief modality in update logic, and introduce Belief Update Logic (BUL). First, we discuss how we can reformalize the inform action of FIPA-ACL as a communication channel, which represents a connection between agents. Thus, our agents can send a message only when they believe there is, and there actually is, a channel between them and the receiver. Then, we present a static belief logic (BL) and show its soundness and completeness. Next, we develop the logic into BUL, which can update a Kripke model by the inform action; we show that in the updated model the belief operator still satisfies K45. Thereafter, we show that every sentence in BUL can be translated into BL; thus, we can contend that BUL is also sound and complete. Furthermore, we discuss the features of BUL, including the case of inconsistent information, as well as channel transmission. Finally, we summarize our contribution and discuss some future issues.
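    The core mechanism, updating a Kripke model when an agent is informed, can be made concrete with a small sketch. The worlds, the single agent and the valuation below are invented, and the sketch uses a generic belief-as-truth-in-all-accessible-worlds semantics rather than the paper's precise BUL system:

        # Belief as truth in all accessible worlds; an inform action updates the
        # model by deleting the agent's accessibility edges into worlds where
        # the communicated atom is false.
        class KripkeModel:
            def __init__(self, access, valuation):
                self.access = dict(access)    # agent -> set of (world, world) edges
                self.val = dict(valuation)    # (world, atom) -> bool

            def believes(self, agent, atom, world):
                """B_a(atom) at world: atom holds in every accessible world."""
                return all(self.val[(v, atom)]
                           for (w, v) in self.access[agent] if w == world)

            def inform(self, agent, atom):
                """Model update: drop edges into worlds falsifying the message."""
                self.access[agent] = {(w, v) for (w, v) in self.access[agent]
                                      if self.val[(v, atom)]}

        m = KripkeModel(
            access={"alice": {("w1", "w1"), ("w1", "w2")}},
            valuation={("w1", "rain"): True, ("w2", "rain"): False},
        )
        print(m.believes("alice", "rain", "w1"))  # False: w2 (no rain) is accessible
        m.inform("alice", "rain")                 # a channel delivers the message
        print(m.believes("alice", "rain", "w1"))  # True in the updated model

    Deleting edges (rather than worlds) preserves transitivity and Euclideanness of the accessibility relation, consistent with the belief operator still satisfying K45 after the update; seriality can fail when the message contradicts every accessible world, the inconsistent-information case the abstract mentions.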

  2. Regional ionospheric TEC data assimilation and now-casting service

    NASA Astrophysics Data System (ADS)

    Aa, E.; Liu, S.; Wengeng, H.

    2017-12-01

    Ionospheric data assimilation is a now-casting technique for incorporating irregular ionospheric measurements into a background model. It is an effective and efficient way to overcome the limitation of unbalanced data distribution and to improve the accuracy of the model, so that the model and the data can be optimally combined to produce a more reliable and reasonable system specification. In this study, a regional total electron content (TEC) now-casting system over China and adjacent areas (70E-140E and 15N-55N) is developed on the basis of the data assimilation technique. The International Reference Ionosphere (IRI) is used as the background model, and the GNSS data are derived from both the Space Environment Monitoring Network of the Chinese Academy of Sciences (SEMnet) and International GNSS Service (IGS) data. A three-dimensional variational algorithm (3DVAR) combined with a Gauss-Markov Kalman filter technique is used to implement the data assimilation. The regional gridded TEC maps and the position errors of single-frequency GPS receivers can be generated and published online (http://sepc.ac.cn/TEC_chn.php) in quasi-real time, updated every 15 min. It is one of the ionospheric now-casting systems in China based on a data assimilation algorithm, and it can be used not only for real-time monitoring of the ionosphere environment over China and adjacent areas, but also to provide accurate and effective specification of regional ionospheric TEC and error correction for satellite navigation, radar imaging, shortwave communication, and other relevant applications.
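    The analysis step used in assimilation systems of this kind can be illustrated with a tiny Kalman-type update that blends background TEC with observations. The grid size, covariances and observation operator below are toy stand-ins for the paper's 3DVAR/Gauss-Markov scheme:

        # One analysis step: xa = xb + K (y - H xb), with Kalman gain
        # K = B H^T (H B H^T + R)^(-1).
        import numpy as np

        n, m = 5, 2                        # grid points, observations
        xb = np.full(n, 10.0)              # background TEC (e.g. from IRI), in TECU
        B = 4.0 * np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 2.0)
        H = np.zeros((m, n)); H[0, 1] = H[1, 3] = 1.0   # observe grid points 1 and 3
        R = np.eye(m)                      # observation error covariance
        y = np.array([14.0, 12.5])         # GNSS-derived TEC observations

        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # Kalman gain
        xa = xb + K @ (y - H @ xb)                      # analysis state
        print("analysis TEC:", xa.round(2))

    The background error covariance B spreads the influence of each observation to nearby grid points, which is how sparse, unevenly distributed GNSS data can correct a whole regional map.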

  3. Cloud Feedback in Atmospheric General Circulation Models: An Update

    NASA Technical Reports Server (NTRS)

    Cess, R. D.; Zhang, M. H.; Ingram, W. J.; Potter, G. L.; Alekseev, V.; Barker, H. W.; Cohen-Solal, E.; Colman, R. A.; Dazlich, D. A.; DelGenio, A. D.; hide

    1996-01-01

    Six years ago, we compared the climate sensitivity of 19 atmospheric general circulation models and found a roughly threefold variation among the models; most of this variation was attributed to differences in the models' depictions of cloud feedback. In an update of this comparison, current models showed considerably smaller differences in net cloud feedback, with most producing modest values. There are, however, substantial differences in the feedback components, indicating that the models still have physical disagreements.

  4. Abdomen and spinal cord segmentation with augmented active shape models.

    PubMed

    Xu, Zhoubing; Conrad, Benjamin N; Baucom, Rebeccah B; Smith, Seth A; Poulose, Benjamin K; Landman, Bennett A

    2016-07-01

    Active shape models (ASMs) have been widely used for extracting human anatomies in medical images, given their capability for shape regularization and topology preservation. However, sensitivity to model initialization and local correspondence search often undermines their performance, especially in highly variable contexts in computed tomography (CT) and magnetic resonance (MR) images. In this study, we propose an augmented ASM (AASM) that integrates the multi-atlas label fusion (MALF) and level set (LS) techniques into the traditional ASM framework. In AASM, landmark updates are optimized globally via a region-based LS evolution applied to the probability map generated by MALF. This augmentation effectively extends the search range for corresponding landmarks while reducing sensitivity to the image context, and it improves segmentation robustness. We propose the AASM framework as a two-dimensional segmentation technique targeting structures with one axis of regularity. We apply the AASM approach to abdomen CT and spinal cord (SC) MR segmentation challenges. On 20 CT scans, the AASM segmentation of the whole abdominal wall enables subcutaneous/visceral fat measurement, with high correlation to measurements derived from manual segmentation. On 28 3T MR scans, AASM yields better performance than other state-of-the-art approaches in segmenting the white/gray matter in the SC.

  5. Introducing GEOPHIRES v2.0: Updated Geothermal Techno-Economic Simulation Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckers, Koenraad J; McCabe, Kevin

    This paper presents an updated version of the geothermal techno-economic simulation tool GEOPHIRES (GEOthermal energy for Production of Heat and electricity ('IR') Economically Simulated). GEOPHIRES combines engineering models of the reservoir, wellbores, and surface plant facilities of a geothermal plant with an economic model to estimate the capital and operation and maintenance costs, lifetime energy production, and overall levelized cost of energy. The available end-use options are electricity, direct-use heat, and cogeneration. The main updates in the new version include conversion of the source code from FORTRAN to Python, the option to import temperature data (e.g., measured or from a stand-alone reservoir simulator), updated cost correlations, and more flexibility in selecting the time step and number of injection and production wells. In this paper, we provide an overview of all the updates and two case studies to illustrate the tool's new capabilities.

  6. Specifications for updating USGS land use and land cover maps

    USGS Publications Warehouse

    Milazzo, Valerie A.

    1983-01-01

    To meet the increasing demands for up-to-date land use and land cover information, a primary goal of the U.S. Geological Survey's (USGS) national land use and land cover mapping program is to provide for periodic updating of maps and data in a timely and uniform manner. The technical specifications for updating existing USGS land use and land cover maps that are presented here cover both the interpretive aspects of detecting and identifying land use and land cover changes and the cartographic aspects of mapping and presenting the change data in conventional map format. They provide the map compiler with the procedures and techniques necessary to then use these change data to update existing land use and land cover maps in a manner that is both standardized and repeatable. Included are specifications for the acquisition of remotely sensed source materials, selection of compilation map bases, handling of data base corrections, editing and quality control operations, generation of map update products for USGS open file, and the reproduction and distribution of open file materials. These specifications are planned to become part of the National Mapping Division's Technical Instructions.

  7. A 30-day forecast experiment with the GISS model and updated sea surface temperatures

    NASA Technical Reports Server (NTRS)

    Spar, J.; Atlas, R.; Kuo, E.

    1975-01-01

    The GISS model was used to compute two parallel global 30-day forecasts for the month January 1974. In one forecast, climatological January sea surface temperatures were used, while in the other observed sea temperatures were inserted and updated daily. A comparison of the two forecasts indicated no clear-cut beneficial effect of daily updating of sea surface temperatures. Despite the rapid decay of daily predictability, the model produced a 30-day mean forecast for January 1974 that was generally superior to persistence and climatology when evaluated over either the globe or the Northern Hemisphere, but not over smaller regions.

  8. Scaling Techniques for Combustion Device Random Vibration Predictions

    NASA Technical Reports Server (NTRS)

    Kenny, R. J.; Ferebee, R. C.; Duvall, L. D.

    2016-01-01

    This work compares scaling techniques that can be used to predict combustion device component random vibration levels with excitation due to the internal combustion dynamics. Acceleration and unsteady dynamic pressure data from multiple component test programs are compared and normalized per the two scaling approaches reviewed. Two scaling techniques are reviewed and compared against the collected component test data. The first technique is an existing approach developed by Barrett, and the second technique is an updated approach new to this work. Results from utilizing both techniques are presented, and recommendations about future component random vibration prediction approaches are given.

  9. Data update in a land information network

    NASA Astrophysics Data System (ADS)

    Mullin, Robin C.

    1988-01-01

    The ongoing update of data exchanged in a land information network is examined. In the past, major developments have been undertaken to enable the exchange of data between land information systems. A model of a land information network and the data update process have been developed. Based on these, a functional description of the database and software to perform data updating is presented. A prototype of the data update process was implemented using the ARC/INFO geographic information system. This was used to test four approaches to data updating, i.e., bulk, block, incremental, and alert updates. A bulk update is performed by replacing a complete file with an updated file. A block update requires that the data set be partitioned into blocks; when an update occurs, only the blocks which are affected need to be transferred. An incremental update approach records each feature which is added or deleted and transmits only the features needed to update the copy of the file. An alert is a marker indicating that an update has occurred; it can be placed in a file to warn users that, if they are active in an area containing markers, updated data are available. The four approaches have been tested using a cadastral data set.
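    Of the four approaches, the incremental update is the most algorithmic, and a minimal sketch may help. The log format, sequence numbers and feature identifiers below are invented for illustration:

        # Incremental update: the provider logs each added/deleted feature, and a
        # client replays only the entries newer than its own sequence number.
        class FeatureLog:
            def __init__(self):
                self.entries = []                  # (seq, op, feature_id)

            def record(self, op, feature_id):
                self.entries.append((len(self.entries) + 1, op, feature_id))

            def changes_since(self, seq):
                return [e for e in self.entries if e[0] > seq]

        def apply_incremental(local, log, last_seq):
            """Bring a client's copy up to date by replaying new log entries only."""
            for seq, op, fid in log.changes_since(last_seq):
                (local.add if op == "add" else local.discard)(fid)
                last_seq = seq
            return last_seq

        log = FeatureLog()
        for op, fid in [("add", "parcel-17"), ("add", "parcel-18"), ("del", "parcel-17")]:
            log.record(op, fid)

        client, seq = {"parcel-17"}, 1             # the client has already seen entry 1
        seq = apply_incremental(client, log, seq)  # transfers 2 entries, not the file
        print(client, "at sequence", seq)          # {'parcel-18'} at sequence 3

    A bulk update would resend the whole file and a block update whole partitions; the incremental log trades that transfer volume for the bookkeeping of per-client sequence numbers.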

  10. Online Updating of Statistical Inference in the Big Data Setting.

    PubMed

    Schifano, Elizabeth D; Wu, Jing; Wang, Chun; Yan, Jun; Chen, Ming-Hui

    2016-01-01

    We present statistical methods for big data arising from online analytical processing, where large amounts of data arrive in streams and require fast analysis without storage/access to the historical data. In particular, we develop iterative estimating algorithms and statistical inferences for linear models and estimating equations that update as new data arrive. These algorithms are computationally efficient, minimally storage-intensive, and allow for possible rank deficiencies in the subset design matrices due to rare-event covariates. Within the linear model setting, the proposed online-updating framework leads to predictive residual tests that can be used to assess the goodness-of-fit of the hypothesized model. We also propose a new online-updating estimator under the estimating equation setting. Theoretical properties of the goodness-of-fit tests and proposed estimators are examined in detail. In simulation studies and real data applications, our estimator compares favorably with competing approaches under the estimating equation setting.
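    The flavour of such online-updating algorithms for linear models can be conveyed in a few lines: keep cumulative cross-product summaries and never revisit historical rows. This is the textbook streaming least-squares identity, not the paper's specific estimators, residual tests or rank-deficiency handling:

        # Online least squares: accumulate X'X and X'y over arriving blocks; the
        # final solve equals the full-data estimate with O(p^2) storage.
        import numpy as np

        rng = np.random.default_rng(3)
        p = 3
        XtX, Xty = np.zeros((p, p)), np.zeros(p)
        beta_true = np.array([1.0, -2.0, 0.5])

        for _ in range(50):                  # data arriving in streams/blocks
            X = rng.standard_normal((100, p))
            y = X @ beta_true + 0.1 * rng.standard_normal(100)
            XtX += X.T @ X                   # update summaries, discard the block
            Xty += X.T @ y

        beta_hat = np.linalg.solve(XtX, Xty)
        print("online estimate:", beta_hat.round(3))

    Because X'X and X'y are additive over blocks, the estimate after any number of updates is identical to the one computed from the pooled data, while storage is independent of the number of rows seen.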

  11. Real-time updating of the flood frequency distribution through data assimilation

    NASA Astrophysics Data System (ADS)

    Aguilar, Cristina; Montanari, Alberto; Polo, María-José

    2017-07-01

    We explore the memory properties of catchments for predicting the likelihood of floods based on observations of average flows in pre-flood seasons. Our approach assumes that flood formation is driven by the superimposition of short- and long-term perturbations. The former is given by the short-term meteorological forcing leading to infiltration and/or saturation excess, while the latter originates from higher-than-usual storage in the catchment. To exploit the above sensitivity to long-term perturbations, a meta-Gaussian model and a data assimilation approach are implemented for updating the flood frequency distribution a season in advance. Accordingly, the peak flow in the flood season is predicted in probabilistic terms by exploiting its dependence on the average flow in the antecedent seasons. We focus on the Po River at Pontelagoscuro and the Danube River at Bratislava. We found that the shape of the flood frequency distribution is noticeably impacted by higher-than-usual flows occurring up to several months earlier. The proposed technique may allow one to reduce the uncertainty associated with the estimation of flood frequency.
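    The meta-Gaussian updating idea can be sketched in a few lines: transform the pre-flood-season flow and the seasonal peak to normal scores, then condition the peak's distribution on the observed pre-season score. The correlation and the marginal distributions below are illustrative assumptions, not the fitted values for the Po or the Danube:

        # Meta-Gaussian conditioning: in normal-score space the conditional law is
        # N(rho * z_pre, 1 - rho^2); map quantiles back through the peak marginal.
        import numpy as np
        from scipy import stats

        rho = 0.6                                          # normal-score correlation
        peak_marginal = stats.lognorm(s=0.5, scale=2000.0) # seasonal peak flow, m3/s
        pre_marginal = stats.lognorm(s=0.4, scale=800.0)   # pre-season mean flow, m3/s

        def updated_peak_quantile(pre_season_flow, prob):
            """Quantile of the peak-flow distribution given the pre-season flow."""
            z_pre = stats.norm.ppf(pre_marginal.cdf(pre_season_flow))
            z_peak = rho * z_pre + np.sqrt(1 - rho**2) * stats.norm.ppf(prob)
            return peak_marginal.ppf(stats.norm.cdf(z_peak))

        for q_pre in (600.0, 1200.0):                      # dry vs wet antecedent season
            print(f"pre-season flow {q_pre:6.0f} m3/s ->",
                  f"90% peak quantile {updated_peak_quantile(q_pre, 0.9):7.0f} m3/s")

    A wetter-than-usual antecedent season shifts the whole conditional peak distribution upward, which is precisely the catchment-memory effect the study exploits a season in advance.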

  12. Gradient optimization of finite projected entangled pair states

    NASA Astrophysics Data System (ADS)

    Liu, Wen-Yuan; Dong, Shao-Jun; Han, Yong-Jian; Guo, Guang-Can; He, Lixin

    2017-05-01

    Projected entangled pair state (PEPS) methods have been proven to be powerful tools for solving strongly correlated quantum many-body problems in two dimensions. However, due to the high computational scaling with the virtual bond dimension D, in practical applications PEPS are often limited to rather small bond dimensions, which may not be large enough for some highly entangled systems, for instance, frustrated systems. Optimization of the ground state using the imaginary time evolution method with a simple update scheme can reach larger bond dimensions; however, the accuracy of the rough approximation to the environment of the local tensors is questionable. Here, we demonstrate that combining the imaginary time evolution method with a simple update, Monte Carlo sampling techniques and gradient optimization offers an efficient method to calculate the PEPS ground state. By taking advantage of massively parallel computing, we can study quantum systems with larger bond dimensions, up to D = 10, without resorting to any symmetry. Benchmark tests of the method on the J1-J2 model give impressive accuracy compared with exact results.

  13. Dientamoeba fragilis, the Neglected Trichomonad of the Human Bowel

    PubMed Central

    Barratt, Joel; Chan, Douglas; Ellis, John T.

    2016-01-01

    SUMMARY Dientamoeba fragilis is a protozoan parasite of the human bowel, commonly reported throughout the world in association with gastrointestinal symptoms. Despite its initial discovery over 100 years ago, arguably, we know less about this peculiar organism than any other pathogenic or potentially pathogenic protozoan that infects humans. The details of its life cycle and mode of transmission are not completely known, and its potential as a human pathogen is debated within the scientific community. Recently, several major advances have been made with respect to this organism's life cycle and molecular biology. While many questions remain unanswered, these and other recent advances have given rise to some intriguing new leads, which will pave the way for future research. This review encompasses a large body of knowledge generated on various aspects of D. fragilis over the last century, together with an update on the most recent developments. This includes an update on the latest diagnostic techniques and treatments, the clinical aspects of dientamoebiasis, the development of an animal model, the description of a D. fragilis cyst stage, and the sequencing of the first D. fragilis transcriptome. PMID:27170141

  14. NOAA's National Air Quality Prediction and Development of Aerosol and Atmospheric Composition Prediction Components for NGGPS

    NASA Astrophysics Data System (ADS)

    Stajner, I.; McQueen, J.; Lee, P.; Stein, A. F.; Wilczak, J. M.; Upadhayay, S.; daSilva, A.; Lu, C. H.; Grell, G. A.; Pierce, R. B.

    2017-12-01

    NOAA's operational air quality predictions of ozone, fine particulate matter (PM2.5) and wildfire smoke over the United States, and of airborne dust over the contiguous 48 states, are distributed at http://airquality.weather.gov. The National Air Quality Forecast Capability (NAQFC) providing these predictions was updated in June 2017. Ozone and PM2.5 predictions are now produced using the system linking the Community Multiscale Air Quality model (CMAQ) version 5.0.2 with meteorological inputs from the North American Mesoscale Forecast System (NAM) version 4. Predictions of PM2.5 include intermittent dust emissions and wildfire emissions from an updated version of the BlueSky system. For the latter, the CMAQ system is initialized by rerunning it over the previous 24 hours to include wildfire emissions at the time when they were observed from the satellites. Post-processing to reduce the bias in PM2.5 predictions was updated using the Kalman filter analog (KFAN) technique. Dust-related aerosol species at the CMAQ domain lateral boundaries now come from the NEMS Global Aerosol Component (NGAC) v2 predictions. Further development of NAQFC includes testing of CMAQ predictions out to 72 hours, use of Canadian fire emissions data from Environment and Climate Change Canada (ECCC), and the KFAN technique to reduce bias in ozone predictions. NOAA is developing the Next Generation Global Prediction System (NGGPS) with an aerosol and gaseous atmospheric composition component to improve and integrate aerosol and ozone predictions and to evaluate their impacts on physics, data assimilation and weather prediction. Efforts are underway to improve cloud microphysics, investigate aerosol effects, and include representations of atmospheric composition of varying complexity into NGGPS: from the operational ozone parameterization and GOCART aerosols with simplified ozone chemistry, to CMAQ chemistry with aerosol modules. We will present progress on community building, planning and development of NGGPS.

  15. A Meteorological Model's Dependence on Radiation Update Frequency

    NASA Technical Reports Server (NTRS)

    Eastman, Joseph L.; Peters-Lidard, Christa; Tao, Wei-Kuo; Kumar, Sujay; Tian, Yudong; Lang, Stephen E.; Zeng, Xiping

    2004-01-01

    Numerical weather models are used to simulate circulations in the atmosphere, including clouds and precipitation, by applying a set of mathematical equations over a three-dimensional grid. The grid is composed of discrete points at which the meteorological variables are defined. As computing power continues to rise, these models are being used at finer grid spacing, but they must still cover a wide range of scales. Some of the physics that must be accounted for in the model cannot be explicitly resolved, and their effects, therefore, must be estimated or "parameterized". Some of these parameterizations are computationally expensive. To alleviate the problem, they are not always updated at the time resolution of the model, the assumption being that the impact will be small. In this study, a coupled land-atmosphere model is used to assess the impact of less frequent updates of the computationally expensive radiation physics for a case on June 6, 2002, that occurred during a field experiment over the central plains known as the International H2O Project (IHOP). The model was tested using both the original conditions, which were dry, and modified conditions wherein moisture was added to the lower part of the atmosphere to produce clouds and precipitation (i.e., a wet case). For each set of conditions (i.e., dry and wet), four experiments were conducted wherein the model was run for a period of 24 hours and the radiation fields (including both incoming solar and outgoing longwave) were updated every 1, 3, 10, and 100 time steps. Statistical tests indicated that average quantities of surface variables for both the dry and wet cases were the same for the various update frequencies. However, spatially the results could be quite different, especially in the wet case after it began to rain. The near-surface wind field was found to be different most of the time, even for the dry case. In the wet case, rain intensities and average vertical profiles of heating associated with cloudy areas were found to differ for the various radiation update frequencies. The latter implies that the mean state of the model could be different as a result of not updating the radiation fields every time step, which has important implications for longer-term climate studies.
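    The mechanism of the experiment is simply a caching loop: the expensive parameterization is recomputed only every N steps and its stored tendency is reused in between. The "radiation" function, the step counts and the toy state below are placeholders, not the coupled model's physics:

        # Reuse a cached radiation tendency between updates; vary the update
        # interval to mimic the 1/3/10/100-step experiments.
        import time

        def expensive_radiation(state):
            time.sleep(0.001)              # stand-in for a costly radiation scheme
            return -0.01 * state           # cooling tendency

        def integrate(steps, update_every):
            state, cached = 300.0, None
            for n in range(steps):
                if n % update_every == 0:  # update radiation every N time steps
                    cached = expensive_radiation(state)
                state += cached            # all other physics omitted
            return state

        for n_update in (1, 3, 10, 100):
            t0 = time.perf_counter()
            final = integrate(600, n_update)
            print(f"update every {n_update:3d} steps: state={final:7.2f}"
                  f"  ({time.perf_counter() - t0:.2f}s)")

    The run time falls with less frequent updates while the final state drifts from the every-step reference, mirroring the cost/accuracy trade-off examined in the study.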

  16. Comprehensive T-matrix Reference Database: A 2009-2011 Update

    NASA Technical Reports Server (NTRS)

    Zakharova, Nadezhda T.; Videen, G.; Khlebtsov, Nikolai G.

    2012-01-01

    The T-matrix method is one of the most versatile and efficient theoretical techniques widely used for the computation of electromagnetic scattering by single and composite particles, discrete random media, and particles in the vicinity of an interface separating two half-spaces with different refractive indices. This paper presents an update to the comprehensive database of peer-reviewed T-matrix publications compiled by us previously and includes the publications that appeared since 2009. It also lists several earlier publications not included in the original database.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbride, Theresa L.

    This short article was prepared for the U.S. Department of Energy's Building America Update newsletter. The article identifies energy and cost-saving benefits of using advanced framing techniques in new construction identified by research teams working with the DOE's Building America program. The article also provides links to guides in the Building America Solution Center that give how-to instructions for builders who want to implement advanced framing construction. The newsletter is issued monthly and can be accessed at http://energy.gov/eere/buildings/building-america-update-newsletter

  18. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model

    EPA Science Inventory

    Sea spray aerosols (SSA) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. In this study, the Community Multiscale Air Quality (CMAQ) model is updated to enhance fine mode SSA emissions,...

  19. Impact of RACM2, halogen chemistry, and updated ozone deposition velocity on hemispheric ozone predictions

    EPA Science Inventory

    We incorporate the Regional Atmospheric Chemistry Mechanism (RACM2) into the Community Multiscale Air Quality (CMAQ) hemispheric model and compare model predictions to those obtained using the existing Carbon Bond chemical mechanism with the updated toluene chemistry (CB05TU). Th...

  20. Update on BioVapor Analysis (Draft Deliberative Document)

    EPA Science Inventory

    An update is given on EPA ORD's evaluation of the BioVapor model for petroleum vapor intrusion assessment. Results from two scenarios are presented: a strong petroleum source and a weaker source. Model results for the strong source are shown to depend on biodegradation rate, oxyg...

  1. Anaesthesia for electroconvulsive therapy: An overview with an update on its role in potentiating electroconvulsive therapy

    PubMed Central

    Kadiyala, Pavan Kumar; Kadiyala, Lakshmi Deepthi

    2017-01-01

    Despite advances in pharmacotherapy, electroconvulsive therapy (ECT) has remained a mainstay treatment option in psychiatry since its introduction in the 1930s. It can be used primarily in severe illnesses when there is an urgent need for treatment, or secondarily after failure of or intolerance to pharmacotherapy. The 'unmodified' technique of ECT was practised initially, with a high incidence of musculoskeletal complications. Several modifications, including general anaesthesia and muscle relaxation, are used to increase the safety and patient acceptability of ECT. Various anaesthetic techniques, including choice of medications, are considered to provide an adequate therapeutic seizure while simultaneously controlling seizure-induced haemodynamic changes and side effects. A brief review of the literature on the choice of these anaesthetic techniques is discussed. This article is intended to reinforce the knowledge of clinicians who may have limited exposure to the ECT procedure. Importance is given to recent updates on the role of induction agents in potentiating the therapeutic response to ECT in psychiatric disorders. PMID:28584345

  2. Efficient Stochastic Rendering of Static and Animated Volumes Using Visibility Sweeps.

    PubMed

    von Radziewsky, Philipp; Kroes, Thomas; Eisemann, Martin; Eisemann, Elmar

    2017-09-01

    Stochastically solving the rendering integral (particularly visibility) is the de facto standard for physically-based light transport, but it is computationally expensive, especially when displaying heterogeneous volumetric data. In this work, we present efficient techniques to speed up the rendering process via a novel visibility-estimation method in concert with an unbiased importance sampling (involving environmental lighting and visibility inside the volume), filtering, and update techniques for both static and animated scenes. Our major contributions include a progressive estimate of partial occlusions based on a fast sweeping-plane algorithm. These occlusions are stored in an octahedral representation, which can be conveniently transformed into a quadtree-based hierarchy suited for joint importance sampling. Further, we propose sweep-space filtering, which suppresses the occurrence of fireflies, and we investigate different update schemes for animated scenes. Our technique is unbiased, requires little precomputation, is highly parallelizable, and is applicable to various volume data sets, dynamic transfer functions, animated volumes and changing environmental lighting.

  3. Associative Memory Synthesis, Performance, Storage Capacity And Updating: New Heteroassociative Memory Results

    NASA Astrophysics Data System (ADS)

    Casasent, David; Telfer, Brian

    1988-02-01

    The storage capacity, noise performance, and synthesis of associative memories for image analysis are considered. Associative memory synthesis is shown to be very similar to that of linear discriminant functions used in pattern recognition. This leads to new associative memories and to new associative memory synthesis and recollection vector encodings. Heteroassociative memories are emphasized in this paper, rather than autoassociative memories, since heteroassociative memories provide scene-analysis decisions rather than merely enhanced output images. The analysis of heteroassociative memories has received little attention. Heteroassociative memory performance and storage capacity are shown to be quite different from those of autoassociative memories, with much more dependence on the recollection vectors used and less dependence on M/N. This allows several different and preferable synthesis techniques to be considered for associative memories. These new associative memory synthesis techniques and new techniques to update associative memories are included. We also introduce a new SNR performance measure that is preferable to conventional noise standard-deviation ratios.
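    A heteroassociative memory in its simplest linear form can be written down directly: store key/recollection vector pairs in one matrix and recall a decision vector from a noisy key. The dimensions, the one-hot recollection encoding and the noise level below are arbitrary, and this is the textbook pseudoinverse rule rather than the paper's new encodings:

        # Heteroassociative memory M = Y X^+ : keys in the columns of X map to
        # recollection (decision) vectors in the columns of Y.
        import numpy as np

        rng = np.random.default_rng(4)
        N, n_pairs = 64, 5
        X = rng.choice([-1.0, 1.0], size=(N, n_pairs))  # key (input) vectors
        Y = np.eye(n_pairs)                             # one-hot class recollections

        M = Y @ np.linalg.pinv(X)                       # memory matrix

        key = X[:, 2].copy()
        key[rng.choice(N, size=6, replace=False)] *= -1 # flip 6 of 64 elements
        recalled = M @ key
        print("recalled decision:", int(recalled.argmax()))  # 2: a scene-analysis label
        print("recollection vector:", recalled.round(2))

    Using one-hot recollection vectors makes the output a classification decision rather than a reconstructed image, the heteroassociative behaviour emphasized in the paper.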

  4. Summary: Update to ASTM guide E 1523 to charge control and charge referencing techniques in x-ray photoelectron spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baer, D.R.

    2005-05-01

    An updated version of the American Society for Testing and Materials (ASTM) guide E 1523 on methods of charge control and charge referencing in x-ray photoelectron spectroscopy has been released by ASTM [Annual Book of ASTM Standards Surface Analysis (American Society for Testing and Materials, West Conshohocken, PA, 2004), Vol. 03.06]. The guide is meant to acquaint x-ray photoelectron spectroscopy (XPS) users with the various charge control and charge referencing techniques that are and have been used in the acquisition and interpretation of XPS data from surfaces of insulating specimens. The current guide has been expanded to include new references as well as recommendations for reporting information on charge control and charge referencing. The previous version of the document was published in 1997 [D. R. Baer and K. D. Bomben, J. Vac. Sci. Technol. A 16, 754 (1998)].

  5. The hourly updated US High-Resolution Rapid Refresh (HRRR) storm-scale forecast model

    NASA Astrophysics Data System (ADS)

    Alexander, Curtis; Dowell, David; Benjamin, Stan; Weygandt, Stephen; Olson, Joseph; Kenyon, Jaymes; Grell, Georg; Smirnova, Tanya; Ladwig, Terra; Brown, John; James, Eric; Hu, Ming

    2016-04-01

    The 3-km convection-allowing High-Resolution Rapid Refresh (HRRR) is a US NOAA hourly updating weather forecast model that uses a specially configured version of the Advanced Research WRF (ARW) model and assimilates many novel and most conventional observation types on an hourly basis using Gridpoint Statistical Interpolation (GSI). Included in this assimilation are a procedure for initializing ongoing precipitation systems from observed radar reflectivity data (and proxy reflectivity from lightning and satellite data), a cloud analysis to initialize stable-layer clouds from METAR and satellite observations, and special techniques to enhance retention of surface observation information. The HRRR is run hourly out to 15 forecast hours over a domain covering the entire conterminous United States, using initial and boundary conditions from the hourly-cycled 13-km Rapid Refresh (RAP, using similar physics and data assimilation), which covers North America and a significant part of the Northern Hemisphere. The HRRR is continually developed and refined at NOAA's Earth System Research Laboratory, and an initial version was implemented into the operational NOAA/NCEP production suite in September 2014. Ongoing experimental RAP and HRRR model development throughout 2014 and 2015 has culminated in a set of data assimilation and model enhancements that will be incorporated into the first simultaneous upgrade of both the operational RAP and HRRR, scheduled for spring 2016 at NCEP. This presentation will discuss the operational RAP and HRRR changes contained in this upgrade. The RAP domain is being expanded to encompass the NAM domain, and the forecast lengths of both the RAP and HRRR are being extended. RAP and HRRR assimilation enhancements have focused on (1) extending surface data assimilation to include mesonet observations and improving the use of all surface observations through better background estimates of 2-m temperature and dewpoint, including projection of 2-m temperature observations through the model boundary layer, and (2) extending the use of radar observations to include both radial velocity and 3-D retrieval of rain hydrometeors from observed radar reflectivities in the warm season. The RAP hybrid EnKF/3D-variational data assimilation will increase the weighting of GFS ensemble-based background error covariance estimation, and this hybrid data assimilation configuration will be introduced in the HRRR. Enhancements to the RAP and HRRR model physics include improved land surface and boundary layer prediction using the updated Mellor-Yamada-Nakanishi-Niino (MYNN) parameterization scheme, the Grell-Freitas-Olson (GFO) shallow and deep convective parameterization, aerosol-aware Thompson microphysics, and an upgraded Rapid Update Cycle (RUC) land-surface model. The presentation will highlight improvements in the RAP and HRRR model physics that reduce certain systematic forecast biases, including a warm and dry daytime bias over the central and eastern CONUS during the warm season, along with improved convective forecasts in more weakly forced, diurnally driven events. Examples of RAP and HRRR forecast improvements will be demonstrated through both retrospective and real-time verification statistics and case-study examples.

  6. Comparison of measured and calculated dynamic loads for the Mod-2 2.5 MW wind turbine system

    NASA Technical Reports Server (NTRS)

    Zimmerman, D. K.; Shipley, S. A.; Miller, R. D.

    1995-01-01

    The Boeing Company, under contract to the Electric Power Research Institute (EPRI), has completed a test program on the Mod-2 wind turbines at Goodnoe Hills, Washington. The objectives were to update fatigue load spectra, discern site and machine differences, measure vortex generator effects, and evaluate rotational sampling techniques. This paper presents the test setup and loads instrumentation, loads data comparisons, and test/analysis correlations. Test data are correlated with DYLOSAT predictions using both the NASA interim turbulence model and rotationally sampled winds as inputs. The latter is demonstrated to have the potential to improve the test/analysis correlations. The paper concludes with an assessment of the importance of vortex generators, site dependence, and machine differences on fatigue loads. The adequacy of the prediction techniques used is evaluated, and recommendations are made for improvements to the methodology.

  7. Demonstration of frequency-sweep testing technique using a Bell 214-ST helicopter

    NASA Technical Reports Server (NTRS)

    Tischler, Mark B.; Fletcher, Jay W.; Diekmann, Vernon L.; Williams, Robert A.; Cason, Randall W.

    1987-01-01

    A demonstration of frequency-sweep testing using a Bell 214-ST single-rotor helicopter was completed in support of the Army's development of an updated MIL-H-8501A and an LHX (ADS-33) handling-qualities specification. Hover and level-flight (V_a = 0 knots and V_a = 90 knots) tests were conducted in 3 flight hours by Army test pilots at the Army Aviation Engineering Flight Activity (AEFA) at Edwards AFB, Calif. Bandwidth and phase-delay parameters were determined from the flight-extracted frequency responses as required by the proposed specifications. Transfer-function modeling and verification demonstrate the validity of the frequency-response concept for characterizing the closed-loop flight dynamics of single-rotor helicopters -- even in hover. This report documents the frequency-sweep flight-testing technique and data-analysis procedures. Special emphasis is given to piloting and analysis considerations which are important for demonstrating frequency-domain specification compliance.
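
    Although the report's data-analysis procedures are its own, the core step of frequency-sweep processing, estimating a frequency response from sweep input/output records via cross- and auto-spectra, can be sketched as follows in Python with SciPy. The sample rate, sweep range and the first-order stand-in "vehicle" dynamics are all invented for the example; bandwidth and phase-delay parameters would then be read off the magnitude and unwrapped phase of H.

        import numpy as np
        from scipy import signal

        fs = 200.0                                       # sample rate, Hz (assumed)
        t = np.arange(0, 60, 1 / fs)
        u = signal.chirp(t, f0=0.1, t1=t[-1], f1=10.0)   # pilot-like frequency sweep

        # Hypothetical first-order response standing in for the aircraft.
        sys = signal.TransferFunction([4.0], [1.0, 4.0])
        _, y, _ = signal.lsim(sys, U=u, T=t)

        # Frequency-response estimate H = Guy / Guu from cross/auto spectra.
        f, Guu = signal.csd(u, u, fs=fs, nperseg=2048)
        _, Guy = signal.csd(u, y, fs=fs, nperseg=2048)
        H = Guy / Guu
        phase_deg = np.degrees(np.unwrap(np.angle(H)))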

  8. 2014 NEPP Tasks Update for Ceramic and Tantalum Capacitors

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander A.

    2014-01-01

    This presentation describes recent developments in research on MnO2, wet, and polymer tantalum capacitors. Low-voltage failures in multilayer ceramic capacitors and techniques to reveal cracks in precious metal electrode (PME) and base metal electrode (BME) capacitors are discussed. A voltage breakdown technique is suggested to select high-quality low-voltage BME ceramic capacitors.

  9. Dental crowns

    MedlinePlus

    References: Academy of General Dentistry, "What are crowns?" (updated January 2012), Knowyourteeth.org; Aschheim KW, ed., Esthetic Dentistry: A Clinical Approach to Techniques and Materials, 3rd ed.

  10. Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2) (External Review Draft)

    EPA Science Inventory

    EPA announced the availability of the draft report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) for a 30-day public comment period. The ICLUS version 2 (v2) modeling tool furthered land change mod...

  11. Using an ensemble smoother to evaluate parameter uncertainty of an integrated hydrological model of Yanqi basin

    NASA Astrophysics Data System (ADS)

    Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang

    2015-10-01

    Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ES-MDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square-root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information. The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
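
    As a concrete illustration of the perturbed-observation variant, the core ES-MDA update can be written compactly. The following Python sketch uses a linear stand-in forward model and invented dimensions rather than the MIKE SHE configuration of the study; note that the inflation coefficients alpha must satisfy sum(1/alpha) = 1 over the iterations (here, two iterations with alpha = 2).

        import numpy as np

        def esmda_update(params, preds, obs, obs_sd, alpha, rng):
            # Perturbed-observation Kalman update of the parameter ensemble,
            # with the observation-error covariance inflated by alpha.
            ne = params.shape[1]
            dp = params - params.mean(axis=1, keepdims=True)
            dy = preds - preds.mean(axis=1, keepdims=True)
            cpy = dp @ dy.T / (ne - 1)               # parameter-prediction covariance
            cyy = dy @ dy.T / (ne - 1)               # prediction covariance
            cd = (obs_sd ** 2) * np.eye(obs.size)
            gain = cpy @ np.linalg.inv(cyy + alpha * cd)
            obs_pert = obs[:, None] + np.sqrt(alpha) * obs_sd * rng.standard_normal(preds.shape)
            return params + gain @ (obs_pert - preds)

        rng = np.random.default_rng(42)
        G = rng.standard_normal((5, 3))              # stand-in linear forward model
        obs = G @ np.array([1.0, -0.5, 2.0]) + 0.1 * rng.standard_normal(5)
        ens = rng.standard_normal((3, 50))           # 50 prior samples (LHS in the study)
        for _ in range(2):                           # two iterations, alpha = 2 each
            ens = esmda_update(ens, G @ ens, obs, 0.1, alpha=2.0, rng=rng)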

  12. Micrometeoroid and Orbital Debris Risk Assessment With Bumper 3

    NASA Technical Reports Server (NTRS)

    Hyde, J.; Bjorkman, M.; Christiansen, E.; Lear, D.

    2017-01-01

    The Bumper 3 computer code is the primary tool used by NASA for micrometeoroid and orbital debris (MMOD) risk analysis. Bumper 3 (and its predecessors) have been used to analyze a variety of manned and unmanned spacecraft. The code uses NASA's latest micrometeoroid (MEM-R2) and orbital debris (ORDEM 3.0) environment definition models and is updated frequently with ballistic limit equations that describe the hypervelocity impact performance of spacecraft materials. The Bumper 3 program uses these inputs along with a finite element representation of spacecraft geometry to provide a deterministic calculation of the expected number of failures. The Bumper 3 software is configuration controlled by the NASA/JSC Hypervelocity Impact Technology (HVIT) Group. This paper will demonstrate MMOD risk assessment techniques with Bumper 3 used by NASA's HVIT Group. The Permanent Multipurpose Module (PMM) was added to the International Space Station in 2011. A Bumper 3 MMOD risk assessment of this module will show techniques used to create the input model and assign the property IDs. The methodology used to optimize the MMOD shielding for minimum mass while still meeting structural penetration requirements will also be demonstrated.

  13. Rainfall assimilation in RAMS by means of the Kuo parameterisation inversion: method and preliminary results

    NASA Astrophysics Data System (ADS)

    Orlandi, A.; Ortolani, A.; Meneguzzo, F.; Levizzani, V.; Torricella, F.; Turk, F. J.

    2004-03-01

    In order to improve high-resolution forecasts, a specific method for assimilating rainfall rates into the Regional Atmospheric Modelling System (RAMS) model has been developed. It is based on the inversion of the Kuo convective parameterisation scheme. A nudging technique is applied to 'gently' increase with time the weight of the estimated precipitation in the assimilation process. A rough but manageable technique is described for estimating the partition between convective and stratiform precipitation without requiring any ancillary measurement. The method is general purpose, but it is tuned for the assimilation of geostationary satellite rainfall estimates. Preliminary results are presented and discussed, both from fully simulated experiments and from experiments assimilating real satellite-based precipitation observations. For every case study, rainfall data are computed with a rapid-update satellite precipitation estimation algorithm based on IR and MW satellite observations. This research was carried out in the framework of the EURAINSAT project (an EC research project co-funded by the Energy, Environment and Sustainable Development Programme within the topic 'Development of generic Earth observation technologies', Contract number EVG1-2000-00030).
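
    The nudging step, a relaxation term whose weight grows with time so that the estimated rainfall is blended in 'gently', can be sketched independently of RAMS. A toy Python illustration; the relaxation time, time step, ramp length and fields are all invented:

        import numpy as np

        def nudge(state, target, step, ramp_steps, tau=600.0, dt=60.0):
            # Newtonian relaxation toward the satellite-derived estimate; the
            # weight w ramps up linearly so the model is pulled in gradually.
            w = min(step / ramp_steps, 1.0)
            return state + w * (dt / tau) * (target - state)

        state = np.zeros(10)                # stand-in model rain-rate field, mm/h
        target = np.full(10, 2.5)           # stand-in satellite rainfall estimate
        for step in range(1, 61):
            state = nudge(state, target, step, ramp_steps=30)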

  14. Neuro-fuzzy control of structures using acceleration feedback

    NASA Astrophysics Data System (ADS)

    Schurter, Kyle C.; Roschke, Paul N.

    2001-08-01

    This paper describes a new approach for the reduction of environmentally induced vibration in constructed facilities by way of a neuro-fuzzy technique. The new control technique is presented and tested in a numerical study that involves two types of building models. The energy of each building is dissipated through magnetorheological (MR) dampers whose damping properties are continuously updated by a fuzzy controller. This semi-active control scheme relies on the development of a correlation between the accelerations of the building (controller input) and the voltage applied to the MR damper (controller output). This correlation forms the basis for the development of an intelligent neuro-fuzzy control strategy. To establish a context for assessing the effectiveness of the semi-active control scheme, responses to earthquake excitation are compared with passive strategies that have similar authority for control. According to numerical simulation, MR dampers are less effective control mechanisms than passive dampers with respect to a single-degree-of-freedom (DOF) building model. On the other hand, MR dampers are predicted to be superior when used with multiple-DOF structures for reduction of lateral acceleration.
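
    The flavour of the fuzzy mapping from measured acceleration to MR-damper command voltage can be conveyed with a tiny rule base. In this hedged Python sketch the membership breakpoints, the three rules and the 5 V ceiling are all assumptions for illustration, not the controller tuned in the paper:

        import numpy as np

        def tri(x, a, b, c):
            # Triangular membership function peaking at b.
            return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        def fuzzy_voltage(accel, v_max=5.0):
            # Assumed rules: small |accel| -> low voltage, medium -> mid, large -> high.
            a = abs(accel)
            mu = np.array([tri(a, -1.0, 0.0, 1.0),       # small
                           tri(a, 0.5, 1.5, 2.5),        # medium
                           tri(a, 2.0, 3.0, 10.0)])      # large
            v = np.array([0.0, 0.5 * v_max, v_max])      # singleton consequents
            return float(mu @ v / max(mu.sum(), 1e-9))   # weighted-average defuzzification

        print(fuzzy_voltage(0.3), fuzzy_voltage(2.8))    # small vs. near-maximum command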

  15. Nonlinear finite element simulation of non-local tension softening for high strength steel material

    NASA Astrophysics Data System (ADS)

    Tong, F. M.

    The capability of current finite element software in simulating the stress-strain relation beyond the elastic-plastic region has been limited by the inability of standard formulations to handle non-positive-definite element stiffness matrices. Although analysis up to the peak stress has proved adequate for analysis and design, it provides no indication of the failure behaviour that is to follow. Therefore an attempt was made to develop a modelling technique capable of capturing the complete stress-deformation response in an analysis beyond the limit point. The proposed model represents a cyclic loading and unloading procedure, as observed in a typical laboratory uniaxial cyclic test, along with a series of material property updates. The Voce equation and a polynomial function were proposed to define the monotonic elastoplastic hardening and softening behaviour, respectively. A modified form of the Voce equation was used to capture the reloading response in the softening region. To accommodate the reduced load capacity of the material at each subsequent softening point, an optimization macro was written to determine the optimum load that the material could withstand. This preliminary study ignored geometric effects and is thus incapable of capturing the localized necking phenomenon that accompanies many ductile materials; the current softening model is sufficient if a global measure is considered. Several validation cases were performed to investigate the feasibility of the modelling technique, and the results have proved satisfactory. The ANSYS finite element software is used as the platform on which the modelling technique operates.
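
    The Voce equation mentioned above has the standard saturation form sigma = sigma_y + R_inf * (1 - exp(-b * eps_p)). A short Python sketch of a monotonic curve that follows Voce hardening up to an assumed peak strain and then a hypothetical polynomial softening branch (all material constants invented for illustration):

        import numpy as np

        def voce(eps_p, sigma_y=460.0, r_inf=210.0, b=18.0):
            # Voce saturation hardening: stress rises from sigma_y toward sigma_y + r_inf.
            return sigma_y + r_inf * (1.0 - np.exp(-b * eps_p))

        eps = np.linspace(0.0, 0.8, 400)
        peak = 0.4                                   # assumed strain at peak stress
        # Quadratic softening past the peak, echoing the paper's polynomial function.
        sigma = np.where(eps <= peak, voce(eps), voce(peak) - 1500.0 * (eps - peak) ** 2)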

  16. EDNA: Expert fault digraph analysis using CLIPS

    NASA Technical Reports Server (NTRS)

    Dixit, Vishweshwar V.

    1990-01-01

    Traditionally, fault models are represented by trees. Recently, digraph models have been proposed (Sack). Digraph models closely imitate the real system dependencies and hence are easy to develop, validate and maintain. However, they can also contain directed cycles, and analysis algorithms are hard to find; available algorithms tend to be complicated and slow. On the other hand, tree analysis (VGRH, Tayl) is well understood and rooted in a vast research effort and analytical techniques. The tree analysis algorithms are sophisticated and orders of magnitude faster. Transformation of a (cyclic) digraph into trees (CLP, LP) is a viable approach to blend the advantages of both representations, as sketched below. Neither digraphs nor trees provide the ability to handle heuristic knowledge; an expert system is essential to capture the engineering knowledge. We propose an approach here, namely, expert network analysis, which combines the digraph representation with tree algorithms. The models are augmented by probabilistic and heuristic knowledge. CLIPS, an expert system shell from NASA-JSC, will be used to develop a tool. The technique provides the ability to handle probabilities and heuristic knowledge; mixed analysis, in which only some nodes carry probabilities, is possible. The tool provides a graphics interface for input, query, and update. With the combined approach, it is expected to be a valuable tool in the design process as well as in the capture of final design knowledge.
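
    One conventional way to realize the digraph-to-tree transformation is to condense the strongly connected components (the directed cycles) into supernodes, leaving a DAG that tree-style algorithms can traverse. A Python sketch using networkx on an invented four-node fault digraph:

        import networkx as nx

        # Invented cyclic fault digraph: pump and valve failures feed each other.
        G = nx.DiGraph([("sensor", "pump"), ("pump", "valve"),
                        ("valve", "pump"), ("pump", "alarm")])

        dag = nx.condensation(G)                     # cycles collapsed into supernodes
        print(nx.is_directed_acyclic_graph(dag))     # True
        print(dag.graph["mapping"])                  # original node -> supernode index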

  17. Predicting non-stationary algal dynamics following changes in hydrometeorological conditions using data assimilation techniques

    NASA Astrophysics Data System (ADS)

    Kim, S.; Seo, D. J.

    2017-12-01

    When water temperature (TW) increases due to changes in hydrometeorological conditions, the overall ecological conditions in the aquatic system change. The changes can be harmful to human health and potentially fatal to fish habitat. Therefore, it is important to assess the impacts of thermal disturbances on in-stream processes of water quality variables and to be able to predict the effectiveness of possible actions that may be taken for water quality protection. For skillful prediction of in-stream water quality processes, watershed water quality models must be able to reflect such changes. Most of the currently available models, however, assume static parameters for the biophysiochemical processes and hence are not able to capture the nonstationarities seen in water quality observations. In this work, we assess the performance of the Hydrological Simulation Program-Fortran (HSPF) in predicting algal dynamics following a TW increase. The study area is located in the Republic of Korea, where a waterway change due to weir construction and a drought occurred concurrently around 2012. We use data assimilation (DA) techniques to update model parameters as well as the initial condition of selected state variables for in-stream processes relevant to algal growth. For assessment of model performance and characterization of temporal variability, various goodness-of-fit measures and wavelet analysis are used.

  18. Self-learning Monte Carlo method and cumulative update in fermion systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Junwei; Shen, Huitao; Qi, Yang

    2017-06-07

    In this study, we develop the self-learning Monte Carlo (SLMC) method, a general-purpose numerical method recently introduced to simulate many-body systems, for studying interacting fermion systems. Our method uses a highly efficient update algorithm, which we design and dub "cumulative update", to generate new candidate configurations in the Markov chain based on a self-learned bosonic effective model. From a general analysis and a numerical study of the double exchange model as an example, we find that the SLMC with cumulative update drastically reduces the computational cost of the simulation while remaining statistically exact. Remarkably, its computational complexity is far less than that of the conventional algorithm with local updates.
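
    Why proposals drawn from a learned effective model stay statistically exact can be seen in a one-variable caricature of SLMC: gather samples with local updates, fit a cheap effective model, then propose global moves from that model and correct with a Metropolis ratio. A Python sketch with an invented double-well "action" and a Gaussian fit standing in for the learned bosonic effective model:

        import numpy as np

        rng = np.random.default_rng(7)

        def energy(x):
            # Invented stand-in for the expensive original model.
            return x ** 4 - 2.0 * x ** 2

        # Step 1: local Metropolis updates to gather training samples.
        x, train = 0.0, []
        for _ in range(5000):
            prop = x + 0.5 * rng.standard_normal()
            if rng.random() < np.exp(energy(x) - energy(prop)):
                x = prop
            train.append(x)

        # Step 2: "self-learn" a cheap effective model (here, a Gaussian fit).
        mu, sd = np.mean(train), np.std(train)
        eff_logp = lambda z: -0.5 * ((z - mu) / sd) ** 2

        # Step 3: global proposals from the effective model; the Metropolis
        # correction keeps sampling of the original model exact.
        for _ in range(5000):
            prop = mu + sd * rng.standard_normal()
            log_acc = (energy(x) - energy(prop)) + (eff_logp(x) - eff_logp(prop))
            if np.log(rng.random()) < log_acc:
                x = prop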

  19. Statistical Downscaling of WRF-Chem Model: An Air Quality Analysis over Bogota, Colombia

    NASA Astrophysics Data System (ADS)

    Kumar, Anikender; Rojas, Nestor

    2015-04-01

    Statistical downscaling is a technique used to extract high-resolution information from regional-scale variables produced by coarse-resolution models such as chemical transport models (CTMs). The fully coupled WRF-Chem (Weather Research and Forecasting with Chemistry) model is used to simulate air quality over Bogota, a tropical Andean megacity located on a high-altitude plateau amid very complex terrain. The WRF-Chem model was adopted for simulating hourly ozone concentrations. The computational domains comprised 120x120x32, 121x121x32 and 121x121x32 grid points with horizontal resolutions of 27, 9 and 3 km, respectively. The model was initialized with real boundary conditions using NCAR-NCEP's Final Analysis (FNL) at 1°x1° (~111 km x 111 km) resolution. Boundary conditions were updated every 6 hours using reanalysis data. The emission rates were obtained from global inventories, namely the REanalysis of the TROpospheric (RETRO) chemical composition and the Emission Database for Global Atmospheric Research (EDGAR). Multiple linear regression and artificial neural network techniques are used to downscale the model output at each monitoring station. The results confirm that the statistically downscaled outputs reduce simulated errors by up to 25%. This study provides a general overview of statistical downscaling of chemical transport models and can constitute a reference for future air quality modeling exercises over Bogota and other Colombian cities.
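
    The multiple-linear-regression flavour of the downscaling step amounts to regressing station observations on coarse-model predictors. A minimal Python sketch with synthetic data; the predictor choices and coefficients are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 500
        # Stand-in coarse-model predictors at a station: ozone, temperature, wind.
        X = np.column_stack([rng.normal(40, 10, n),
                             rng.normal(15, 5, n),
                             rng.normal(2, 1, n)])
        y = 0.8 * X[:, 0] + 1.5 * X[:, 1] - 3.0 * X[:, 2] + rng.normal(0, 5, n)

        # Fit observed concentrations on model output, then "downscale" new output.
        A = np.column_stack([np.ones(n), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        y_downscaled = A @ coef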

  20. Speciation and Toxic Emissions from On road Vehicles, and Particulate Matter Emissions from Light-Duty Gasoline Vehicles in MOVES201X

    EPA Science Inventory

    Updated methane, non-methane organic gas, and volatile organic compound calculations based on speciation data. Updated speciation and toxic emission rates for new model year 2010 and later heavy-duty diesel engines. Updated particulate matter emission rates for 2004 and later mod...
