Sample records for element model update

  1. Artificial Boundary Conditions for Finite Element Model Update and Damage Detection

    DTIC Science & Technology

    2017-03-01

    Master's thesis by Emmanouil Damanakis, March 2017; thesis advisor: Joshua H. Gordis. Approved for public release; distribution is unlimited. Abstract (truncated in the source record): In structural engineering, a finite element model is often ...

  2. Dynamic analysis of I cross beam section dissimilar plate joined by TIG welding

    NASA Astrophysics Data System (ADS)

    Sani, M. S. M.; Nazri, N. A.; Rani, M. N. Abdul; Yunus, M. A.

    2018-04-01

    In this paper, a finite element (FE) joint modelling technique for predicting the dynamic properties of sheet metal joined by tungsten inert gas (TIG) welding is presented. I-cross-section dissimilar flat plates of two aluminium alloy series, AA7075 and AA6061, joined by TIG welding are used. In order to find the most suitable representation of the TIG-welded dissimilar plate, finite element models with three types of joint modelling were considered in this study: bar element (CBAR), beam element and spot weld element connector (CWELD). Experimental modal analysis (EMA) was carried out by impact hammer excitation on the dissimilar plates welded by the TIG method. Modal properties of the FE models with joints were compared and validated against the modal testing. The CWELD element was chosen to represent the TIG joints owing to its accurate prediction of mode shapes and because it contains an updating parameter for weld modelling, unlike the other weld models. Model updating was performed to improve correlation between EMA and FEA; before proceeding to updating, a sensitivity analysis was done to select the most sensitive updating parameters. After model updating, the average percentage error of the natural frequencies for the CWELD model is improved significantly.

  3. Detection of Earthquake-Induced Damage in a Framed Structure Using a Finite Element Model Updating Procedure

    PubMed Central

    Kim, Seung-Nam; Park, Taewon; Lee, Sang-Hyun

    2014-01-01

    Damage of a 5-story framed structure was identified from two types of measured data, which are frequency response functions (FRF) and natural frequencies, using a finite element (FE) model updating procedure. In this study, a procedure to determine the appropriate weightings for different groups of observations was proposed. In addition, a modified frame element which included rotational springs was used to construct the FE model for updating to represent concentrated damage at the member ends (a formulation for plastic hinges in framed structures subjected to strong earthquakes). The results of the model updating and subsequent damage detection when the rotational springs (RS model) were used were compared with those obtained using the conventional frame elements (FS model). Comparisons indicated that the RS model gave more accurate results than the FS model. That is, the errors in the natural frequencies of the updated models were smaller, and the identified damage showed clearer distinctions between damaged and undamaged members and was more consistent with observed damage. PMID:24574888

  4. Obtaining manufactured geometries of deep-drawn components through a model updating procedure using geometric shape parameters

    NASA Astrophysics Data System (ADS)

    Balla, Vamsi Krishna; Coox, Laurens; Deckers, Elke; Plyumers, Bert; Desmet, Wim; Marudachalam, Kannan

    2018-01-01

    The vibration response of a component or system can be predicted using the finite element method after ensuring numerical models represent realistic behaviour of the actual system under study. One of the methods to build high-fidelity finite element models is through a model updating procedure. In this work, a novel model updating method of deep-drawn components is demonstrated. Since the component is manufactured with a high draw ratio, significant deviations in both profile and thickness distributions occurred in the manufacturing process. Conventional model updating, involving Young's modulus, density and damping ratios, does not lead to a satisfactory match between simulated and experimental results. Hence a new model updating process is proposed, where geometry shape variables are incorporated, by carrying out morphing of the finite element model. This morphing process imitates the changes that occurred during the deep drawing process. An optimization procedure that uses the Global Response Surface Method (GRSM) algorithm to maximize diagonal terms of the Modal Assurance Criterion (MAC) matrix is presented. This optimization results in a more accurate finite element model. The advantage of the proposed methodology is that the CAD surface of the updated finite element model can be readily obtained after optimization. This CAD model can be used for carrying out analysis, as it represents the manufactured part more accurately. Hence, simulations performed using this updated model, with its accurate geometry, will yield more reliable results.
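
    The objective described above can be sketched compactly. A minimal illustration, assuming a hypothetical run_fe_model(shape_params) call that morphs the mesh and returns the analytical mode-shape matrix (the paper drives this with the GRSM algorithm; any global optimiser could call the same function):

    ```python
    import numpy as np

    def mac_matrix(phi_a, phi_e):
        """Modal Assurance Criterion between analytical (phi_a) and experimental
        (phi_e) real mode-shape matrices, both of shape (n_dofs, n_modes)."""
        num = np.abs(phi_a.T @ phi_e) ** 2
        den = np.outer(np.sum(phi_a * phi_a, axis=0),
                       np.sum(phi_e * phi_e, axis=0))
        return num / den

    def objective(shape_params, run_fe_model, phi_exp):
        """Negative sum of the diagonal MAC terms: minimising this maximises the
        correlation between paired analytical and experimental modes."""
        phi_fe = run_fe_model(shape_params)   # hypothetical FE/morphing call
        return -np.trace(mac_matrix(phi_fe, phi_exp))
    ```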

  5. A Probabilistic Approach to Model Update

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.

    2001-01-01

    Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.

  6. Model updating strategy for structures with localised nonlinearities using frequency response measurements

    NASA Astrophysics Data System (ADS)

    Wang, Xing; Hill, Thomas L.; Neild, Simon A.; Shaw, Alexander D.; Haddad Khodaparast, Hamed; Friswell, Michael I.

    2018-02-01

    This paper proposes a model updating strategy for localised nonlinear structures. It utilises an initial finite-element (FE) model of the structure and primary harmonic response data taken from low and high amplitude excitations. The underlying linear part of the FE model is first updated using low-amplitude test data with established techniques. Then, using this linear FE model, the nonlinear elements are localised, characterised, and quantified with primary harmonic response data measured under stepped-sine or swept-sine excitations. Finally, the resulting model is validated by comparing the analytical predictions with both the measured responses used in the updating and with additional test data. The proposed strategy is applied to a clamped beam with a nonlinear mechanism and good agreements between the analytical predictions and measured responses are achieved. Discussions on issues of damping estimation and dealing with data from amplitude-varying force input in the updating process are also provided.

  7. Symmetric tridiagonal structure preserving finite element model updating problem for the quadratic model

    NASA Astrophysics Data System (ADS)

    Rakshit, Suman; Khare, Swanand R.; Datta, Biswa Nath

    2018-07-01

    One of the most important yet difficult aspects of the finite element model updating problem is to preserve the finite element inherited structures in the updated model. Finite element matrices are in general symmetric, positive definite (or semi-definite) and banded (tridiagonal, diagonal, penta-diagonal, etc.). Though a large number of papers have been published in recent years on various aspects of the solution of this problem, papers dealing with structure preservation are scarce. A novel optimization-based approach that preserves the symmetric tridiagonal structures of the stiffness and damping matrices is proposed in this paper. An analytical expression for the global minimum solution of the associated optimization problem is presented, along with the results of numerical experiments obtained both from the analytical expressions and from an appropriate numerical optimization algorithm. The results of the numerical experiments support the validity of the proposed method.
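
    The paper derives an analytical expression for the global minimiser; as a rough numerical counterpart, the structure constraint can be enforced simply by parameterising the corrections through their tridiagonal bands. A sketch under that assumption, with hypothetical mass/damping/stiffness matrices M, D0, K0 and measured eigenpairs (lam, Phi):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def tridiag(diag, off):
        """Assemble a symmetric tridiagonal matrix from its two bands."""
        return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

    def residual(x, M, D0, K0, lam, Phi):
        """Quadratic-pencil residual over measured eigenpairs (lam, Phi), with the
        damping and stiffness corrections constrained to the tridiagonal pattern."""
        n = M.shape[0]
        dD = tridiag(x[:n], x[n:2 * n - 1])
        dK = tridiag(x[2 * n - 1:3 * n - 1], x[3 * n - 1:])
        res = [(l ** 2 * M + l * (D0 + dD) + (K0 + dK)) @ phi
               for l, phi in zip(lam, Phi.T)]
        return np.concatenate([np.real(res), np.imag(res)], axis=None)

    # x0 = np.zeros(2 * (2 * n - 1))
    # sol = least_squares(residual, x0, args=(M, D0, K0, lam, Phi))
    ```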

  8. Frequency response function (FRF) based updating of a laser spot welded structure

    NASA Astrophysics Data System (ADS)

    Zin, M. S. Mohd; Rani, M. N. Abdul; Yunus, M. A.; Sani, M. S. M.; Wan Iskandar Mirza, W. I. I.; Mat Isa, A. A.

    2018-04-01

    The objective of this paper is to present frequency response function (FRF) based updating as a method for matching the finite element (FE) model of a laser spot welded structure with a physical test structure. The FE model of the welded structure was developed using CQUAD4 and CWELD element connectors, and NASTRAN was used to calculate the natural frequencies, mode shapes and FRFs. Minimization of the discrepancies between the finite element and experimental FRFs was carried out using the numerical optimization capability of NASTRAN SOL 200. The experimental work was performed under free-free boundary conditions using LMS SCADAS. A vast improvement in the finite element FRF was achieved using FRF-based updating with the two different objective functions proposed.
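
    The minimisation itself is performed inside NASTRAN SOL 200; the sketch below only illustrates the kind of FRF residual such an updating loop drives towards zero, with compute_fe_frf standing in as a hypothetical solver call:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def frf_residual(params, compute_fe_frf, H_exp, freqs):
        """Stacked real/imaginary discrepancy between the experimental FRF matrix
        H_exp (n_freq x n_dof) and the FE prediction at the measured frequency lines."""
        H_fe = compute_fe_frf(params, freqs)      # hypothetical FE solver call
        diff = (H_fe - H_exp).ravel()
        return np.concatenate([diff.real, diff.imag])

    # sol = least_squares(frf_residual, x0, args=(compute_fe_frf, H_exp, freqs))
    ```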

  9. Updating finite element dynamic models using an element-by-element sensitivity methodology

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Hemez, Francois M.

    1993-01-01

    A sensitivity-based methodology for improving the finite element model of a given structure using test modal data and a few sensors is presented. The proposed method searches for both the location and sources of the mass and stiffness errors and does not interfere with the theory behind the finite element model while correcting these errors. The updating algorithm is derived from the unconstrained minimization of the squared L2 norms of the modal dynamic residuals via an iterative two-step staggered procedure. At each iteration, the measured mode shapes are first expanded assuming that the model is error free, then the model parameters are corrected assuming that the expanded mode shapes are exact. The numerical algorithm is implemented in an element-by-element fashion and is capable of 'zooming' on the detected error locations. Several simulation examples which demonstrate the potential of the proposed methodology are discussed.

  10. Substructure System Identification for Finite Element Model Updating

    NASA Technical Reports Server (NTRS)

    Craig, Roy R., Jr.; Blades, Eric L.

    1997-01-01

    This report summarizes research conducted under a NASA grant on the topic 'Substructure System Identification for Finite Element Model Updating.' The research concerns ongoing development of the Substructure System Identification Algorithm (SSID Algorithm), a system identification algorithm that can be used to obtain mathematical models of substructures, like Space Shuttle payloads. In the present study, particular attention was given to the following topics: making the algorithm robust to noisy test data, extending the algorithm to accept experimental FRF data that covers a broad frequency bandwidth, and developing a test analytical model (TAM) for use in relating test data to reduced-order finite element models.

  11. Nonlinear structural joint model updating based on instantaneous characteristics of dynamic responses

    NASA Astrophysics Data System (ADS)

    Wang, Zuo-Cai; Xin, Yu; Ren, Wei-Xin

    2016-08-01

    This paper proposes a new nonlinear joint model updating method for shear type structures based on the instantaneous characteristics of the decomposed structural dynamic responses. To obtain an accurate representation of a nonlinear system's dynamics, the nonlinear joint model is described as the nonlinear spring element with bilinear stiffness. The instantaneous frequencies and amplitudes of the decomposed mono-component are first extracted by the analytical mode decomposition (AMD) method. Then, an objective function based on the residuals of the instantaneous frequencies and amplitudes between the experimental structure and the nonlinear model is created for the nonlinear joint model updating. The optimal values of the nonlinear joint model parameters are obtained by minimizing the objective function using the simulated annealing global optimization method. To validate the effectiveness of the proposed method, a single-story shear type structure subjected to earthquake and harmonic excitations is simulated as a numerical example. Then, a beam structure with multiple local nonlinear elements subjected to earthquake excitation is also simulated. The nonlinear beam structure is updated based on the global and local model using the proposed method. The results show that the proposed local nonlinear model updating method is more effective for structures with multiple local nonlinear elements. Finally, the proposed method is verified by the shake table test of a real high voltage switch structure. The accuracy of the proposed method is quantified both in numerical and experimental applications using the defined error indices. Both the numerical and experimental results have shown that the proposed method can effectively update the nonlinear joint model.
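
    A minimal sketch of the updating loop described above, assuming hypothetical simulate_response and amd_decompose helpers; scipy's dual_annealing is used here as a readily available stand-in for the simulated annealing optimiser mentioned in the abstract:

    ```python
    import numpy as np
    from scipy.optimize import dual_annealing

    def objective(joint_params, simulate_response, amd_decompose,
                  f_inst_exp, a_inst_exp):
        """Residual between experimental and simulated instantaneous frequencies
        and amplitudes of the AMD-decomposed mono-components."""
        resp = simulate_response(joint_params)     # hypothetical nonlinear model
        f_inst, a_inst = amd_decompose(resp)       # hypothetical AMD step
        return (np.sum((f_inst - f_inst_exp) ** 2) +
                np.sum((a_inst - a_inst_exp) ** 2))

    # bounds on the bilinear-stiffness parameters (illustrative values only)
    # result = dual_annealing(objective, bounds=[(1e5, 1e7), (1e4, 1e6)],
    #                         args=(simulate_response, amd_decompose,
    #                               f_inst_exp, a_inst_exp))
    ```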

  12. Stochastic filtering for damage identification through nonlinear structural finite element model updating

    NASA Astrophysics Data System (ADS)

    Astroza, Rodrigo; Ebrahimian, Hamed; Conte, Joel P.

    2015-03-01

    This paper describes a novel framework that combines advanced mechanics-based nonlinear (hysteretic) finite element (FE) models and stochastic filtering techniques to estimate unknown time-invariant parameters of nonlinear inelastic material models used in the FE model. Using input-output data recorded during earthquake events, the proposed framework updates the nonlinear FE model of the structure. The updated FE model can be directly used for damage identification and further used for damage prognosis. To update the unknown time-invariant parameters of the FE model, two alternative stochastic filtering methods are used: the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). A three-dimensional, 5-story, 2-by-1 bay reinforced concrete (RC) frame is used to verify the proposed framework. The RC frame is modeled using fiber-section displacement-based beam-column elements with distributed plasticity and is subjected to the ground motion recorded at the Sylmar station during the 1994 Northridge earthquake. The results indicate that the proposed framework accurately estimates the unknown material parameters of the nonlinear FE model. The UKF outperforms the EKF when the relative root-mean-square error of the recorded responses is compared. In addition, the results suggest that the convergence of the estimates of the modeling parameters is smoother and faster when the UKF is utilized.
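
    As an illustration of the filtering step, a bare-bones extended Kalman filter update for time-invariant parameters is sketched below, with h(theta) standing in for a hypothetical nonlinear FE response prediction at one measurement instant; the paper itself applies both the EKF and the UKF over full recorded time histories:

    ```python
    import numpy as np

    def ekf_parameter_update(theta, P, z, h, Q, R, eps=1e-6):
        """One EKF step treating the material parameters theta as a random-walk
        state; h(theta) returns the FE-predicted response matching the measurement z."""
        theta = np.asarray(theta, dtype=float)
        # time update: parameters are constant, only the covariance grows
        P = P + Q
        # numerical Jacobian of the measurement model
        y0 = h(theta)
        H = np.zeros((len(y0), len(theta)))
        for j in range(len(theta)):
            dtheta = np.zeros_like(theta)
            dtheta[j] = eps * max(1.0, abs(theta[j]))
            H[:, j] = (h(theta + dtheta) - y0) / dtheta[j]
        # measurement update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.solve(S, np.eye(len(y0)))
        theta = theta + K @ (z - y0)
        P = (np.eye(len(theta)) - K @ H) @ P
        return theta, P
    ```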

  13. Finite element modelling and updating of a lively footbridge: The complete process

    NASA Astrophysics Data System (ADS)

    Živanović, Stana; Pavic, Aleksandar; Reynolds, Paul

    2007-03-01

    The finite element (FE) model updating technology was originally developed in the aerospace and mechanical engineering disciplines to automatically update numerical models of structures to match their experimentally measured counterparts. The process of updating identifies the drawbacks in the FE modelling and the updated FE model could be used to produce more reliable results in further dynamic analysis. In the last decade, the updating technology has been introduced into civil structural engineering. It can serve as an advanced tool for obtaining reliable modal properties of large structures. The updating process has four key phases: initial FE modelling, modal testing, manual model tuning and automatic updating (conducted using specialist software). However, the published literature does not connect these phases well, although this is crucial when implementing the updating technology. This paper therefore aims to clarify the importance of this linking and to describe the complete model updating process as applicable in civil structural engineering. The complete process consisting of the four phases is outlined and brief theory is presented as appropriate. Then, the procedure is implemented on a lively steel box girder footbridge. It was found that even a very detailed initial FE model underestimated the natural frequencies of all seven experimentally identified modes of vibration, with the maximum error being almost 30%. Manual FE model tuning by trial and error found that flexible supports in the longitudinal direction should be introduced at the girder ends to improve correlation between the measured and FE-calculated modes. This significantly reduced the maximum frequency error to only 4%. It was demonstrated that only then could the FE model be automatically updated in a meaningful way. The automatic updating was successfully conducted by updating 22 uncertain structural parameters. Finally, a physical interpretation of all parameter changes is discussed. This interpretation is often missing in the published literature. It was found that the composite slabs were less stiff than originally assumed and that the asphalt layer contributed considerably to the deck stiffness.

  14. Synthetic Modifications In the Frequency Domain for Finite Element Model Update and Damage Detection

    DTIC Science & Technology

    2017-09-01

    Sensitivity-based finite element model updating and structural damage detection has been limited by the number of modes available in a vibration test and...increase the number of modes and corresponding sensitivity data by artificially constraining the structure under test, producing a large number of... structural modifications to the measured data, including both springs-to-ground and mass modifications. This is accomplished with frequency domain

  15. OSATE Overview & Community Updates

    DTIC Science & Technology

    2015-02-15

    Author: Delange, Julien. The record covers OSATE's main language capabilities, modeling patterns and model samples for beginners, Error-Model examples, EMV2 model constructs, and a demonstration of tools and case studies (snippet truncated in the source record).

  16. A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data

    PubMed Central

    He, Jingjing; Ran, Yunmeng; Liu, Bin; Yang, Jinsong; Guan, Xuefei

    2017-01-01

    This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of construction of a baseline quantification model using finite element simulation data and Bayesian updating with limited Lamb wave data from target structure. The baseline model correlates two proposed damage sensitive features, namely the normalized amplitude and phase change, with the crack length through a response surface model. The two damage sensitive features are extracted from the first received S0 mode wave package. The model parameters of the baseline model are estimated using finite element simulation data. To account for uncertainties from numerical modeling, geometry, material and manufacturing between the baseline model and the target model, Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method is demonstrated under different loading and damage conditions. PMID:28902148
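
    The Bayesian updating step can be illustrated with a plain random-walk Metropolis sampler. The likelihood and surrogate below are simplified stand-ins (the paper's response-surface model and priors are not reproduced here):

    ```python
    import numpy as np

    def log_posterior(theta, features_meas, crack_meas, surrogate, sigma):
        """Gaussian log-likelihood of the measured Lamb-wave features given the
        response-surface model surrogate(theta, crack_length), with a flat prior."""
        pred = surrogate(theta, crack_meas)        # hypothetical baseline model
        return -0.5 * np.sum((features_meas - pred) ** 2) / sigma ** 2

    def metropolis(theta0, logp, n_samples=5000, step=0.05, rng=None):
        """Plain random-walk Metropolis sampler used for the Bayesian update."""
        rng = np.random.default_rng() if rng is None else rng
        theta, samples = np.asarray(theta0, float), []
        lp = logp(theta)
        for _ in range(n_samples):
            cand = theta + step * rng.standard_normal(theta.shape)
            lp_cand = logp(cand)
            if np.log(rng.uniform()) < lp_cand - lp:
                theta, lp = cand, lp_cand
            samples.append(theta.copy())
        return np.array(samples)

    # samples = metropolis(theta0=[1.0, 0.0],
    #                      logp=lambda t: log_posterior(t, feats, cracks, surrogate, 0.05))
    ```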

  17. A model-updating procedure to simulate piezoelectric transducers accurately.

    PubMed

    Piranda, B; Ballandras, S; Steichen, W; Hecart, B

    2001-09-01

    The use of numerical calculations based on finite element methods (FEM) has yielded significant improvements in the simulation and design of piezoelectric transducers utilized in acoustic imaging. However, the ultimate precision of such models is directly controlled by the accuracy of material characterization. The present work is dedicated to the development of a model-updating technique adapted to the problem of piezoelectric transducers. The updating process is applied using the experimental admittance of a given structure for which a finite element analysis is performed. The mathematical developments are reported and then applied to update the entries of an FEM of a two-layer structure (a PbZrTi (PZT) ridge glued on a backing) for which measurements were available. The efficiency of the proposed approach is demonstrated, yielding the definition of a new set of constants well adapted to predict the structure response accurately. Improvement of the proposed approach, consisting of updating the material coefficients not only on the admittance but also on the impedance data, is finally discussed.

  18. Static and Dynamic Model Update of an Inflatable/Rigidizable Torus Structure

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2006-01-01

    The present work addresses the development of an experimental and computational procedure for validating finite element models. A torus structure, part of an inflatable/rigidizable Hexapod, is used to demonstrate the approach. Because of fabrication, materials, and geometric uncertainties, a statistical approach combined with optimization is used to modify key model parameters. Static test results are used to update stiffness parameters and dynamic test results are used to update the mass distribution. Updated parameters are computed using gradient and non-gradient based optimization algorithms. Results show significant improvements in model predictions after parameters are updated. Lessons learned in the areas of test procedures, modeling approaches, and uncertainties quantification are presented.

  19. Geometry control of long-span continuous girder concrete bridge during construction through finite element model updating

    NASA Astrophysics Data System (ADS)

    Wu, Jie; Yan, Quan-sheng; Li, Jian; Hu, Min-yi

    2016-04-01

    In bridge construction, geometry control is critical to ensure that the final constructed bridge has a shape consistent with the design. A common method is to predict the deflections of the bridge during each construction phase through the associated finite element models. Therefore, the cambers of the bridge during different construction phases can be determined beforehand. These finite element models are mostly based on the design drawings and nominal material properties. However, the errors in these bridge models can be large due to significant uncertainties in the actual properties of the materials used in construction. Therefore, the predicted cambers may not be accurate enough to ensure agreement of the bridge geometry with the design, especially for long-span bridges. In this paper, an improved geometry control method is described, which incorporates finite element (FE) model updating during the construction process based on measured bridge deflections. A method based on the Kriging model and Latin hypercube sampling is proposed to perform the FE model updating due to its simplicity and efficiency. The proposed method has been applied to a long-span continuous girder concrete bridge during its construction. Results show that the method is effective in reducing construction error and ensuring the accuracy of the geometry of the final constructed bridge.
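
    A minimal sketch of the Kriging/Latin-hypercube machinery mentioned above, using scipy's quasi-Monte Carlo sampler and scikit-learn's Gaussian process as the surrogate; the parameter names, bounds and the fe_deflection response below are illustrative assumptions, not the bridge model from the paper:

    ```python
    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def fe_deflection(x):
        """Placeholder for the FE midspan-deflection prediction (hypothetical)."""
        E, rho = x
        return 1.0e9 / E + 1.0e-6 * rho           # illustrative response only

    # 1. Latin hypercube design over the uncertain parameters (e.g. E, density)
    sampler = qmc.LatinHypercube(d=2, seed=0)
    unit = sampler.random(n=30)                   # design points in the unit cube
    lower, upper = np.array([30e9, 2300.0]), np.array([40e9, 2600.0])
    X = qmc.scale(unit, lower, upper)

    # 2. Evaluate the (here fake) FE model at the sampled points
    y = np.array([fe_deflection(x) for x in X])

    # 3. Fit the Kriging (Gaussian-process) surrogate on the unit-cube design and
    #    use it in place of the FE model when matching the measured deflections
    gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True).fit(unit, y)
    y_pred, y_std = gp.predict(unit[:2], return_std=True)
    ```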

  20. Model Update of a Micro Air Vehicle (MAV) Flexible Wing Frame with Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.; Waszak, Martin R.; Morgan, Benjamin G.

    2004-01-01

    This paper describes a procedure to update parameters in the finite element model of a Micro Air Vehicle (MAV) to improve displacement predictions under aerodynamic loads. Because of fabrication, materials, and geometric uncertainties, a statistical approach combined with Multidisciplinary Design Optimization (MDO) is used to modify key model parameters. Static test data collected using photogrammetry are used to correlate with model predictions. Results show significant improvements in model predictions after parameters are updated; however, computed probability values indicate low confidence in the updated values and/or model structure errors. Lessons learned in the areas of wing design, test procedures, modeling approaches with geometric nonlinearities, and uncertainty quantification are all documented.

  1. Comparison of two optimization algorithms for fuzzy finite element model updating for damage detection in a wind turbine blade

    NASA Astrophysics Data System (ADS)

    Turnbull, Heather; Omenzetter, Piotr

    2018-03-01

    Difficulties associated with current health monitoring and inspection practices, combined with the harsh, often remote, operational environments of wind turbines, highlight the requirement for a non-destructive evaluation system capable of remotely monitoring the current structural state of turbine blades. This research adopted a physics-based structural health monitoring methodology through calibration of a finite element model using inverse techniques. A 2.36 m blade from a 5 kW turbine was used as an experimental specimen, with operational modal analysis techniques utilised to identify the modal properties of the system. Modelling the experimental responses as fuzzy numbers using the sub-level technique, uncertainty in the response parameters was propagated back through the model and into the updating parameters. Initially, experimental responses of the blade were obtained, with a numerical model of the blade created and updated. Deterministic updating was carried out through formulation and minimisation of a deterministic objective function using both the firefly algorithm and the virus optimisation algorithm. Uncertainty in the experimental responses was modelled using triangular membership functions, allowing membership functions of the updating parameters (Young's modulus and shear modulus) to be obtained. The firefly algorithm and virus optimisation algorithm were again utilised, this time in the solution of fuzzy objective functions. This enabled the uncertainty associated with the updating parameters to be quantified. Varying damage location and severity were simulated experimentally through the addition of small masses to the structure intended to cause a structural alteration. A damaged model was created, modelling four variable-magnitude non-structural masses at predefined points, and updated to provide a deterministic damage prediction and information on the parameters' uncertainty via fuzzy updating.
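
    The sub-level (alpha-cut) propagation of a triangular fuzzy measurement can be illustrated with a toy one-parameter example; the frequency-versus-modulus relation below is purely illustrative and is not the blade model from the study:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def alpha_cut(tri, alpha):
        """Interval of a triangular fuzzy number (a, m, b) at membership level alpha."""
        a, m, b = tri
        return a + alpha * (m - a), b - alpha * (b - m)

    def freq_model(E):
        """Hypothetical first natural frequency as a function of Young's modulus."""
        return 12.0 * np.sqrt(E / 70e9)            # illustrative scaling only

    def invert(f_target, bounds=(40e9, 90e9)):
        """Updating step: find the modulus whose predicted frequency matches f_target."""
        res = minimize_scalar(lambda E: (freq_model(E) - f_target) ** 2,
                              bounds=bounds, method="bounded")
        return res.x

    f_fuzzy = (11.4, 12.0, 12.5)                   # measured frequency as a triangular fuzzy number [Hz]
    for alpha in (0.0, 0.5, 1.0):
        f_lo, f_hi = alpha_cut(f_fuzzy, alpha)
        print(alpha, invert(f_lo), invert(f_hi))   # interval of E at this alpha-cut
    ```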

  2. Issues concerning the updating of finite-element models from experimental data

    NASA Technical Reports Server (NTRS)

    Dunn, Shane A.

    1994-01-01

    Some issues concerning the updating of dynamic finite-element models by incorporation of experimental data are examined here. It is demonstrated how the number of unknowns can be greatly reduced if the physical nature of the model is maintained. The issue of uniqueness is also examined and it is shown that a number of previous workers have been mistaken in their attempts to define both sufficient and necessary measurement requirements for the updating problem to be solved uniquely. The relative merits of modal and frequency response function (frf) data are discussed and it is shown that for measurements at fewer degrees of freedom than are present in the model, frf data will be unlikely to converge easily to a solution. It is then examined how such problems may become more tractable by using new experimental techniques which would allow measurements at all degrees of freedom present in the mathematical model.

  3. A Finite Element Model of the THOR-K Dummy for Aerospace and Aircraft Impact Simulations

    NASA Technical Reports Server (NTRS)

    Putnam, Jacob; Untaroiu, Costin D.; Somers, Jeffrey T.; Pellettiere, Joseph

    2013-01-01

    1) Update and Improve the THOR Finite Element (FE) model to specifications of the latest mod kit (THOR-K). 2) Evaluate the kinematic and kinetic response of the FE model in frontal, spinal, and lateral impact loading conditions.

  4. XFEM-based modeling of successive resections for preoperative image updating

    NASA Astrophysics Data System (ADS)

    Vigneron, Lara M.; Robe, Pierre A.; Warfield, Simon K.; Verly, Jacques G.

    2006-03-01

    We present a new method for modeling organ deformations due to successive resections. We use a biomechanical model of the organ and compute its volume-displacement solution based on the eXtended Finite Element Method (XFEM). The key feature of XFEM is that material discontinuities induced by every new resection can be handled without remeshing or mesh adaptation, as would be required by the conventional Finite Element Method (FEM). We focus on the application of preoperative image updating for image-guided surgery. Proof-of-concept demonstrations are shown for synthetic and real data in the context of neurosurgery.

  5. Damage severity assessment in wind turbine blade laboratory model through fuzzy finite element model updating

    NASA Astrophysics Data System (ADS)

    Turnbull, Heather; Omenzetter, Piotr

    2017-04-01

    The recent shift towards development of clean, sustainable energy sources has provided a new challenge in terms of structural safety and reliability: with aging, manufacturing defects, harsh environmental and operational conditions, and extreme events such as lightning strikes, wind turbines can become damaged, resulting in production losses and environmental degradation. To monitor the current structural state of the turbine, structural health monitoring (SHM) techniques would be beneficial. Physics-based SHM in the form of calibration of a finite element model (FEM) by inverse techniques is adopted in this research. Fuzzy finite element model updating (FFEMU) techniques for damage severity assessment of a small-scale wind turbine blade are discussed and implemented. The main advantage is the ability of FFEMU to account in a simple way for uncertainty within the problem of model updating. Uncertainty quantification techniques, such as fuzzy sets, enable a convenient mathematical representation of the various uncertainties. Experimental frequencies obtained from modal analysis on a small-scale wind turbine blade were described by fuzzy numbers to model measurement uncertainty. During this investigation, damage severity estimation was investigated through the addition of small masses of varying magnitude to the trailing edge of the structure. This structural modification, intended to be in lieu of damage, enabled non-destructive experimental simulation of structural change. A numerical model was constructed with multiple variable additional masses simulated upon the blade's trailing edge and used as updating parameters. Objective functions for updating were constructed and minimized using both the particle swarm optimization algorithm and the firefly algorithm. FFEMU was able to obtain a prediction of the baseline material properties of the blade whilst also successfully predicting, with sufficient accuracy, a larger magnitude of structural alteration and its location.

  6. Normal response function method for mass and stiffness matrix updating using complex FRFs

    NASA Astrophysics Data System (ADS)

    Pradhan, S.; Modak, S. V.

    2012-10-01

    Quite often a structural dynamic finite element model is required to be updated so as to accurately predict the dynamic characteristics like natural frequencies and the mode shapes. Since in many situations undamped natural frequencies and mode shapes need to be predicted, it has generally been the practice in these situations to seek updating of only mass and stiffness matrix so as to obtain a reliable prediction model. Updating using frequency response functions (FRFs) has been one of the widely used approaches for updating, including updating of mass and stiffness matrices. However, the problem with FRF based methods, for updating mass and stiffness matrices, is that these methods are based on use of complex FRFs. Use of complex FRFs to update mass and stiffness matrices is not theoretically correct as complex FRFs are not only affected by these two matrices but also by the damping matrix. Therefore, in situations where updating of only mass and stiffness matrices using FRFs is required, the use of complex FRFs based updating formulation is not fully justified and would lead to inaccurate updated models. This paper addresses this difficulty and proposes an improved FRF based finite element model updating procedure using the concept of normal FRFs. The proposed method is a modified version of the existing response function method that is based on the complex FRFs. The effectiveness of the proposed method is validated through a numerical study of a simple but representative beam structure. The effect of coordinate incompleteness and robustness of method under presence of noise is investigated. The results of updating obtained by the improved method are compared with the existing response function method. The performance of the two approaches is compared for cases of light, medium and heavily damped structures. It is found that the proposed improved method is effective in updating of mass and stiffness matrices in all the cases of complete and incomplete data and with all levels and types of damping.

  7. The Cancer Family Caregiving Experience: An Updated and Expanded Conceptual Model

    PubMed Central

    Fletcher, Barbara Swore; Miaskowski, Christine; Given, Barbara; Schumacher, Karen

    2011-01-01

    Objective The decade from 2000–2010 was an era of tremendous growth in family caregiving research specific to the cancer population. This research has implications for how cancer family caregiving is conceptualized, yet the most recent comprehensive model of cancer family caregiving was published ten years ago. Our objective was to develop an updated and expanded comprehensive model of the cancer family caregiving experience, derived from concepts and variables used in research during the past ten years. Methods A conceptual model was developed based on cancer family caregiving research published from 2000–2010. Results Our updated and expanded model has three main elements: 1) the stress process, 2) contextual factors, and 3) the cancer trajectory. Emerging ways of conceptualizing the relationships between and within model elements are addressed, as well as an emerging focus on caregiver-patient dyads as the unit of analysis. Conclusions Cancer family caregiving research has grown dramatically since 2000, resulting in a greatly expanded conceptual landscape. This updated and expanded model of the cancer family caregiving experience synthesizes the conceptual implications of an international body of work and demonstrates tremendous progress in how cancer family caregiving research is conceptualized. PMID:22000812

  8. Study on Finite Element Model Updating in Highway Bridge Static Loading Test Using Spatially-Distributed Optical Fiber Sensors

    PubMed Central

    Wu, Bitao; Lu, Huaxi; Chen, Bo; Gao, Zhicheng

    2017-01-01

    A finite element model updating method that combines dynamic and static long-gauge strain responses is proposed for highway bridge static loading tests. For this method, the objective function consisting of static long-gauge strains and the first-order modal macro-strain parameter (frequency) is established, wherein the local bending stiffness, density and boundary conditions of the structure are selected as the design variables. The relationship between the macro-strain and local element stiffness was studied first. It is revealed that the macro-strain is inversely proportional to the local stiffness covered by the long-gauge strain sensor. This corresponding relation is important for the modification of the local stiffness based on the macro-strain. The local and global parameters can be simultaneously updated. Then, a series of numerical simulations and experiments were conducted to verify the effectiveness of the proposed method. The results show that the static deformation, macro-strain and macro-strain modes can be predicted well by using the proposed updating model. PMID:28753912
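
    The stated inverse proportionality can be checked with a short Euler-Bernoulli estimate: for a constant bending moment over the gauge length, the averaged (macro) strain is M*c/(E*I), so halving the covered segment's flexural stiffness doubles the macro-strain (values below are illustrative):

    ```python
    # Average (macro) strain over a gauge length under a constant bending moment M:
    #   eps_macro = M * c / (E * I)
    # so the macro-strain scales inversely with the covered segment's stiffness E*I.
    M, c = 10e3, 0.15                         # bending moment [N*m], distance to extreme fibre [m]
    EI_intact, EI_damaged = 8.0e6, 4.0e6      # illustrative flexural stiffnesses [N*m^2]
    for EI in (EI_intact, EI_damaged):
        print(EI, M * c / EI)                 # macro-strain doubles when EI is halved
    ```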

  9. Study on Finite Element Model Updating in Highway Bridge Static Loading Test Using Spatially-Distributed Optical Fiber Sensors.

    PubMed

    Wu, Bitao; Lu, Huaxi; Chen, Bo; Gao, Zhicheng

    2017-07-19

    A finite element model updating method that combines dynamic and static long-gauge strain responses is proposed for highway bridge static loading tests. For this method, the objective function consisting of static long-gauge strains and the first-order modal macro-strain parameter (frequency) is established, wherein the local bending stiffness, density and boundary conditions of the structure are selected as the design variables. The relationship between the macro-strain and local element stiffness was studied first. It is revealed that the macro-strain is inversely proportional to the local stiffness covered by the long-gauge strain sensor. This corresponding relation is important for the modification of the local stiffness based on the macro-strain. The local and global parameters can be simultaneously updated. Then, a series of numerical simulations and experiments were conducted to verify the effectiveness of the proposed method. The results show that the static deformation, macro-strain and macro-strain modes can be predicted well by using the proposed updating model.

  10. An Updated AP2 Beamline TURTLE Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gormley, M.; O'Day, S.

    1991-08-23

    This note describes a TURTLE model of the AP2 beamline. This model was created by D. Johnson and improved by J. Hangst. The authors of this note have made additional improvements which reflect recent element and magnet setting changes. The magnet characteristics measurements and survey data compiled to update the model will be presented. A printout of the actual TURTLE deck may be found in appendix A.

  11. Application of firefly algorithm to the dynamic model updating problem

    NASA Astrophysics Data System (ADS)

    Shabbir, Faisal; Omenzetter, Piotr

    2015-04-01

    Model updating can be considered as a branch of optimization problems in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with those of the FE predictions. The attainment of a global solution in a multi-dimensional search space is a challenging problem. Nature-inspired algorithms have gained increasing attention in the previous decade for solving such complex optimization problems. This study applies the novel Firefly Algorithm (FA), a global optimization search technique, to a dynamic model updating problem. This is, to the authors' best knowledge, the first time FA is applied to model updating. The working of FA is inspired by the flashing characteristics of fireflies. Each firefly represents a randomly generated solution which is assigned brightness according to the value of the objective function. The physical structure under consideration is a full-scale cable-stayed pedestrian bridge with a composite bridge deck. Data from dynamic testing of the bridge were used to correlate and update the initial model by using FA. The algorithm aimed at minimizing the difference between the natural frequencies and mode shapes of the structure. The performance of the algorithm is analyzed in finding the optimal solution in a multi-dimensional search space. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built original structure.
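
    A compact version of the firefly algorithm in its standard form (attractiveness decaying as exp(-gamma*r^2)); the modal-residual objective built from the bridge test data would simply be passed in as objective, and the usage line at the end is a hypothetical illustration:

    ```python
    import numpy as np

    def firefly(objective, bounds, n_fireflies=25, n_iter=200,
                beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
        """Minimal firefly algorithm: each firefly moves towards brighter
        (lower-objective) fireflies with attractiveness beta0*exp(-gamma*r^2)
        plus a small random step scaled by alpha."""
        rng = np.random.default_rng() if rng is None else rng
        lo, hi = np.array(bounds).T
        x = lo + (hi - lo) * rng.random((n_fireflies, len(lo)))
        f = np.array([objective(xi) for xi in x])
        for _ in range(n_iter):
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if f[j] < f[i]:                          # firefly j is brighter
                        r2 = np.sum((x[i] - x[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        x[i] += (beta * (x[j] - x[i]) +
                                 alpha * (hi - lo) * (rng.random(len(lo)) - 0.5))
                        x[i] = np.clip(x[i], lo, hi)
                        f[i] = objective(x[i])
            alpha *= 0.97                                     # gradually reduce randomness
        best = np.argmin(f)
        return x[best], f[best]

    # best_x, best_f = firefly(modal_residual, bounds=[(0.5, 1.5)] * 4)   # hypothetical use
    ```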

  12. Identification of material parameters for plasticity models: A comparative study on the finite element model updating and the virtual fields method

    NASA Astrophysics Data System (ADS)

    Martins, J. M. P.; Thuillier, S.; Andrade-Campos, A.

    2018-05-01

    The identification of material parameters, for a given constitutive model, can be seen as the first step before any practical application. In the last years, the field of material parameters identification received an important boost with the development of full-field measurement techniques, such as Digital Image Correlation. These techniques enable the use of heterogeneous displacement/strain fields, which contain more information than the classical homogeneous tests. Consequently, different techniques have been developed to extract material parameters from full-field measurements. In this study, two of these techniques are addressed, the Finite Element Model Updating (FEMU) and the Virtual Fields Method (VFM). The main idea behind FEMU is to update the parameters of a constitutive model implemented in a finite element model until both numerical and experimental results match, whereas VFM makes use of the Principle of Virtual Work and does not require any finite element simulation. Though both techniques proved their feasibility in linear and non-linear constitutive models, it is rather difficult to rank their robustness in plasticity. The purpose of this work is to perform a comparative study in the case of elasto-plastic models. Details concerning the implementation of each strategy are presented. Moreover, a dedicated code for VFM within a large strain framework is developed. The reconstruction of the stress field is performed through a user subroutine. A heterogeneous tensile test is considered to compare FEMU and VFM strategies.

  13. Updated Lagrangian finite element formulations of various biological soft tissue non-linear material models: a comprehensive procedure and review.

    PubMed

    Townsend, Molly T; Sarigul-Klijn, Nesrin

    2016-01-01

    Simplified material models are commonly used in computational simulation of biological soft tissue as an approximation of the complicated material response and to minimize computational resources. However, the simulation of complex loadings, such as long-duration tissue swelling, necessitates complex models that are not easy to formulate. This paper strives to offer a comprehensive procedure for the updated Lagrangian formulation of various non-linear material models for the application of finite element analysis of biological soft tissues, including a definition of the Cauchy stress and the spatial tangential stiffness. The relationships between water content, osmotic pressure, ionic concentration and the pore pressure stress of the tissue are discussed, along with the merits of these models and their applications.

  14. Updating the Finite Element Model of the Aerostructures Test Wing Using Ground Vibration Test Data

    NASA Technical Reports Server (NTRS)

    Lung, Shun-Fat; Pak, Chan-Gi

    2009-01-01

    Improved and/or accelerated decision making is a crucial step during flutter certification processes. Unfortunately, most finite element structural dynamics models have uncertainties associated with model validity. Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. The model tuning process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the mass and stiffness properties of the structures. Minimizing the difference between analytical and experimental results is a type of optimization problem. By utilizing the multidisciplinary design, analysis, and optimization (MDAO) tool in order to optimize the objective function and constraints; the mass properties, the natural frequencies, and the mode shapes can be matched to the target data to retain the mass matrix orthogonality. This approach has been applied to minimize the model uncertainties for the structural dynamics model of the aerostructures test wing (ATW), which was designed and tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California). This study has shown that natural frequencies and corresponding mode shapes from the updated finite element model have excellent agreement with corresponding measured data.

  15. Updating the Finite Element Model of the Aerostructures Test Wing using Ground Vibration Test Data

    NASA Technical Reports Server (NTRS)

    Lung, Shun-fat; Pak, Chan-gi

    2009-01-01

    Improved and/or accelerated decision making is a crucial step during flutter certification processes. Unfortunately, most finite element structural dynamics models have uncertainties associated with model validity. Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. The model tuning process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the mass and stiffness properties of the structures. Minimizing the difference between analytical and experimental results is a type of optimization problem. By utilizing the multidisciplinary design, analysis, and optimization (MDAO) tool in order to optimize the objective function and constraints; the mass properties, the natural frequencies, and the mode shapes can be matched to the target data to retain the mass matrix orthogonality. This approach has been applied to minimize the model uncertainties for the structural dynamics model of the Aerostructures Test Wing (ATW), which was designed and tested at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center (DFRC) (Edwards, California). This study has shown that natural frequencies and corresponding mode shapes from the updated finite element model have excellent agreement with corresponding measured data.

  16. Off-Highway Gasoline Consumption Estimation Models Used in the Federal Highway Administration Attribution Process: 2008 Updates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, Ho-Ling; Davis, Stacy Cagle

    2009-12-01

    This report is designed to document the analysis process and estimation models currently used by the Federal Highway Administration (FHWA) to estimate off-highway gasoline consumption and public sector fuel consumption. An overview of the entire FHWA attribution process is provided along with specifics related to the latest update (2008) of the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model. The Off-Highway Gasoline Use Model is made up of five individual modules, one for each of the off-highway categories: agricultural, industrial and commercial, construction, aviation, and marine. This 2008 update of the off-highway models was the second major update (the first model update was conducted during 2002-2003) after they were originally developed in the mid-1990s. The agricultural model methodology, specifically, underwent a significant revision because of changes in data availability since 2003. Some revision to the model was necessary due to removal of certain data elements used in the original estimation method. The revised agricultural model also made use of some newly available information, published by the data source agency in recent years. The other model methodologies were not drastically changed, though many data elements were updated to improve the accuracy of these models. Note that components in the Public Use of Gasoline Model were not updated in 2008. A major challenge in updating the estimation methods applied by the public-use model is that they would have to rely on significant new data collection efforts. In addition, due to resource limitations, several components of the models (both off-highway and public-use models) that utilized regression modeling approaches were not recalibrated under the 2008 study. An investigation of the Environmental Protection Agency's NONROAD2005 model was also carried out under the 2008 model update. Results generated from the NONROAD2005 model were analyzed, examined, and compared, to the extent possible on the overall totals, to the current FHWA estimates. Because the NONROAD2005 model was designed for emission estimation purposes (i.e., not for measuring fuel consumption), it covers different equipment populations from those the FHWA models were based on. Thus, a direct comparison generally was not possible in most sectors. As a result, NONROAD2005 data were not used in the 2008 update of the FHWA off-highway models. The quality of fuel use estimates directly affects the data quality in many tables published in Highway Statistics. Although updates have been made to the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model, some challenges remain due to aging model equations and discontinuation of data sources.

  17. Numerical modeling and model updating for smart laminated structures with viscoelastic damping

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Zhan, Zhenfei; Liu, Xu; Wang, Pan

    2018-07-01

    This paper presents a numerical modeling method combined with model updating techniques for the analysis of smart laminated structures with viscoelastic damping. Starting with finite element formulation, the dynamics model with piezoelectric actuators is derived based on the constitutive law of the multilayer plate structure. The frequency-dependent characteristics of the viscoelastic core are represented utilizing the anelastic displacement fields (ADF) parametric model in the time domain. The analytical model is validated experimentally and used to analyze the influencing factors of kinetic parameters under parametric variations. Emphasis is placed upon model updating for smart laminated structures to improve the accuracy of the numerical model. Key design variables are selected through the smoothing spline ANOVA statistical technique to mitigate the computational cost. This updating strategy not only corrects the natural frequencies but also improves the accuracy of damping prediction. The effectiveness of the approach is examined through an application problem of a smart laminated plate. It is shown that a good consistency can be achieved between updated results and measurements. The proposed method is computationally efficient.

  18. SHM-Based Probabilistic Fatigue Life Prediction for Bridges Based on FE Model Updating

    PubMed Central

    Lee, Young-Joo; Cho, Soojin

    2016-01-01

    Fatigue life prediction for a bridge should be based on the current condition of the bridge, and various sources of uncertainty, such as material properties, anticipated vehicle loads and environmental conditions, make the prediction very challenging. This paper presents a new approach for probabilistic fatigue life prediction for bridges using finite element (FE) model updating based on structural health monitoring (SHM) data. Recently, various types of SHM systems have been used to monitor and evaluate the long-term structural performance of bridges. For example, SHM data can be used to estimate the degradation of an in-service bridge, which makes it possible to update the initial FE model. The proposed method consists of three steps: (1) identifying the modal properties of a bridge, such as mode shapes and natural frequencies, based on the ambient vibration under passing vehicles; (2) updating the structural parameters of an initial FE model using the identified modal properties; and (3) predicting the probabilistic fatigue life using the updated FE model. The proposed method is demonstrated by application to a numerical model of a bridge, and the impact of FE model updating on the bridge fatigue life is discussed. PMID:26950125

  19. Model Updating of Complex Structures Using the Combination of Component Mode Synthesis and Kriging Predictor

    PubMed Central

    Li, Yan; Wang, Dejun; Zhang, Shaoyi

    2014-01-01

    Updating the structural model of complex structures is time-consuming due to the large size of the finite element model (FEM). Using conventional methods for these cases is computationally expensive or even impossible. A two-level method, which combined the Kriging predictor and the component mode synthesis (CMS) technique, was proposed to ensure the successful implementing of FEM updating of large-scale structures. In the first level, the CMS was applied to build a reasonable condensed FEM of complex structures. In the second level, the Kriging predictor that was deemed as a surrogate FEM in structural dynamics was generated based on the condensed FEM. Some key issues of the application of the metamodel (surrogate FEM) to FEM updating were also discussed. Finally, the effectiveness of the proposed method was demonstrated by updating the FEM of a real arch bridge with the measured modal parameters. PMID:24634612

  20. Multi-level damage identification with response reconstruction

    NASA Astrophysics Data System (ADS)

    Zhang, Chao-Dong; Xu, You-Lin

    2017-10-01

    Damage identification through finite element (FE) model updating usually forms an inverse problem. Solving the inverse identification problem for complex civil structures is very challenging since the dimension of potential damage parameters in a complex civil structure is often very large. Aside from the enormous computational effort needed in iterative updating, the ill-conditioning and non-global-identifiability features of the inverse problem probably hinder the realization of model updating based damage identification for large civil structures. Following a divide-and-conquer strategy, a multi-level damage identification method is proposed in this paper. The entire structure is decomposed into several manageable substructures and each substructure is further condensed as a macro element using the component mode synthesis (CMS) technique. The damage identification is performed at two levels: the first is at the macro element level to locate the potentially damaged region and the second is over the suspicious substructures to further locate as well as quantify the damage severity. In each level's identification, the damage searching space over which model updating is performed is notably narrowed down, not only reducing the computational effort but also increasing the damage identifiability. Besides, Kalman filter-based response reconstruction is performed at the second level to reconstruct the response of the suspicious substructure for exact damage quantification. Numerical studies and laboratory tests are both conducted on a simply supported overhanging steel beam for conceptual verification. The results demonstrate that the proposed multi-level damage identification via response reconstruction does improve the identification accuracy of damage localization and quantification considerably.

  1. Damage identification of a reinforced concrete frame by finite element model updating using damage parameterization

    NASA Astrophysics Data System (ADS)

    Fang, Sheng-En; Perera, Ricardo; De Roeck, Guido

    2008-06-01

    This paper develops a sensitivity-based updating method to identify the damage in a tested reinforced concrete (RC) frame modeled with a two-dimensional planar finite element (FE) by minimizing the discrepancies of modal frequencies and mode shapes. In order to reduce the number of unknown variables, a bidimensional damage (element) function is proposed, resulting in a considerable improvement of the optimization performance. For damage identification, a reference FE model of the undamaged frame divided into a few damage functions is firstly obtained and then a rough identification is carried out to detect possible damage locations, which are subsequently refined with new damage functions to accurately identify the damage. From a design point of view, it would be useful to evaluate, in a simplified way, the remaining bending stiffness of cracked beam sections or segments. Hence, an RC damage model based on a static mechanism is proposed to estimate the remnant stiffness of a cracked RC beam segment. The damage model is based on the assumption that the damage effect spreads over a region and the stiffness in the segment changes linearly. Furthermore, the stiffness reduction evaluated using this damage model is compared with the FE updating result. It is shown that the proposed bidimensional damage function is useful in producing a well-conditioned optimization problem and the aforementioned damage model can be used for an approximate stiffness estimation of a cracked beam segment.

  2. Dynamic model updating based on strain mode shape and natural frequency using hybrid pattern search technique

    NASA Astrophysics Data System (ADS)

    Guo, Ning; Yang, Zhichun; Wang, Le; Ouyang, Yan; Zhang, Xinping

    2018-05-01

    Aiming at providing a precise dynamic structural finite element (FE) model for dynamic strength evaluation in addition to dynamic analysis, a dynamic FE model updating method is presented to correct the uncertain parameters of the FE model of a structure using strain mode shapes and natural frequencies. The strain mode shape, which is sensitive to local changes in the structure, is used instead of the displacement mode shape to enhance the model updating. The coordinate strain modal assurance criterion is developed to evaluate the correlation level at each coordinate between the experimental and the analytical strain mode shapes. Moreover, the natural frequencies, which provide global information about the structure, are used to guarantee the accuracy of the modal properties of the global model. The weighted summation of the natural frequency residual and the coordinate strain modal assurance criterion residual is then used as the objective function in the proposed dynamic FE model updating procedure. The hybrid genetic/pattern-search optimization algorithm is adopted to perform the dynamic FE model updating procedure. A numerical simulation and a model updating experiment for a clamped-clamped beam are performed to validate the feasibility and effectiveness of the present method. The results show that the proposed method can be used to update the uncertain parameters with good robustness, and that the updated dynamic FE model of the beam structure, which correctly predicts both the natural frequencies and the local dynamic strains, is reliable for subsequent dynamic analysis and dynamic strength evaluation.
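
    A minimal sketch of the weighted objective function described above is given below, assuming real-valued, paired strain mode shapes and using one common form of the per-coordinate (COMAC-style) correlation; the weights and residual definitions are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch of a weighted frequency / strain-mode objective for model updating
# (illustrative; weights and residual forms are assumptions, not the paper's).
import numpy as np

def comac(phi_exp, phi_ana):
    """Per-coordinate correlation of paired strain mode shapes.

    phi_exp, phi_ana: arrays of shape (n_coords, n_modes), real-valued."""
    num = np.sum(np.abs(phi_exp * phi_ana), axis=1) ** 2
    den = np.sum(phi_exp ** 2, axis=1) * np.sum(phi_ana ** 2, axis=1)
    return num / den

def objective(f_exp, f_ana, phi_exp, phi_ana, w_freq=1.0, w_strain=1.0):
    # Relative natural-frequency residual plus strain-mode correlation residual.
    freq_res = np.sum(((f_exp - f_ana) / f_exp) ** 2)
    strain_res = np.sum(1.0 - comac(phi_exp, phi_ana))
    return w_freq * freq_res + w_strain * strain_res
```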

  3. Model updating in flexible-link multibody systems

    NASA Astrophysics Data System (ADS)

    Belotti, R.; Caneva, G.; Palomba, I.; Richiedei, D.; Trevisani, A.

    2016-09-01

    The dynamic response of flexible-link multibody systems (FLMSs) can be predicted through nonlinear models based on finite elements, to describe the coupling between rigid-body and elastic behaviour. Their accuracy should be as high as possible to synthesize controllers and observers. Model updating based on experimental measurements is hence necessary. By taking advantage of experimental modal analysis, this work proposes a model updating procedure for FLMSs and applies it experimentally to a planar robot. Indeed, several peculiarities of FLMS models should be carefully tackled. On the one hand, nonlinear models of an FLMS should be linearized about static equilibrium configurations. On the other hand, the experimental mode shapes should be corrected to be consistent with the elastic displacements represented in the model, which are defined with respect to a fictitious moving reference (the equivalent rigid link system). Then, since rotational degrees of freedom are also represented in the model, interpolation of the experimental data should be performed to match the model displacement vector. Model updating has finally been cast as an optimization problem in the presence of bounds on the feasible values, also adopting methods to improve the numerical conditioning and to compute meaningful updated inertial and elastic parameters.

  4. Nonlinear Pressurization and Modal Analysis Procedure for Dynamic Modeling of Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Smalley, Kurt B.; Tinker, Michael L.; Saxon, Jeff (Technical Monitor)

    2002-01-01

    An introduction and set of guidelines for finite element dynamic modeling of nonrigidized inflatable structures is provided. A two-step approach is presented, involving 1) nonlinear static pressurization of the structure and updating of the stiffness matrix and 2) linear normal modes analysis using the updated stiffness. Advantages of this approach are that it provides physical realism in modeling of pressure stiffening, and it maintains the analytical convenience of a standard linear eigensolution once the stiffness has been modified. Demonstration of the approach is accomplished through the creation and test verification of an inflated cylinder model using a large commercial finite element code. Good frequency and mode shape comparisons are obtained with test data and previous modeling efforts, verifying the accuracy of the technique. Problems encountered in the application of the approach, as well as their solutions, are discussed in detail.
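
    Step 2 of the approach can be sketched in a few lines: a geometric (prestress) stiffness from the pressurized static solution is added to the material stiffness and a standard eigensolution is run on the updated matrices. The matrices below are placeholders; in practice the geometric stiffness comes from the nonlinear static run inside the commercial FE code.

```python
# Sketch of the second step: modes from a pressure-stiffened stiffness matrix.
# K_material, K_geometric and M are placeholder arrays; K_geometric would come
# from the nonlinear static pressurization performed by the FE code.
import numpy as np
from scipy.linalg import eigh

def pressurized_modes(K_material, K_geometric, M, n_modes=5):
    K_updated = K_material + K_geometric          # stiffness updated by prestress
    w2, phi = eigh(K_updated, M)                  # generalized eigenproblem
    freqs_hz = np.sqrt(np.clip(w2, 0.0, None)) / (2.0 * np.pi)
    return freqs_hz[:n_modes], phi[:, :n_modes]
```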

  5. An Investigation Into the Effects of Frequency Response Function Estimators on Model Updating

    NASA Astrophysics Data System (ADS)

    Ratcliffe, M. J.; Lieven, N. A. J.

    1999-03-01

    Model updating is a very active research field, in which significant effort has been invested in recent years. Model updating methodologies are invariably successful when used on noise-free simulated data, but tend to be unpredictable when presented with real experimental data that are—unavoidably—corrupted with uncorrelated noise content. In the development and validation of model-updating strategies, a random zero-mean Gaussian variable is added to simulated test data to tax the updating routines more fully. This paper proposes a more sophisticated model for experimental measurement noise, and this is used in conjunction with several different frequency response function estimators, from the classical H1 and H2 to more refined estimators that purport to be unbiased. Finite-element model case studies, in conjunction with a genuine experimental test, suggest that the proposed noise model is a more realistic representation of experimental noise phenomena. The choice of estimator is shown to have a significant influence on the viability of the FRF sensitivity method. These test cases find that the use of the H2 estimator for model updating purposes is contraindicated, and that there is no advantage to be gained by using the sophisticated estimators over the classical H1 estimator.
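
    For reference, the classical H1 and H2 estimators discussed above can be formed from measured input/output records with standard spectral estimates, for example as sketched below using SciPy's Welch-based spectra (windowing choices are arbitrary).

```python
# H1 and H2 FRF estimators from a measured input x and response y (illustrative).
import numpy as np
from scipy.signal import csd, welch

def frf_estimators(x, y, fs, nperseg=1024):
    f, Pxx = welch(x, fs=fs, nperseg=nperseg)     # input auto-spectrum
    _, Pyy = welch(y, fs=fs, nperseg=nperseg)     # output auto-spectrum
    _, Pxy = csd(x, y, fs=fs, nperseg=nperseg)    # cross-spectrum (input, output)

    H1 = Pxy / Pxx                 # H1: unbiased when noise is confined to the output
    H2 = Pyy / np.conj(Pxy)        # H2: unbiased when noise is confined to the input
    coherence = np.abs(Pxy) ** 2 / (Pxx * Pyy)
    return f, H1, H2, coherence
```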

  6. Finite element model updating of a prestressed concrete box girder bridge using subproblem approximation

    NASA Astrophysics Data System (ADS)

    Chen, G. W.; Omenzetter, P.

    2016-04-01

    This paper presents the implementation of an updating procedure for the finite element model (FEM) of a prestressed concrete continuous box-girder highway off-ramp bridge. Ambient vibration testing was conducted to excite the bridge, assisted by linear chirp sweeps induced by two small electrodynamic shakers deployed to enhance the excitation levels, since the bridge was closed to traffic. The data-driven stochastic subspace identification method was executed to recover the modal properties from the measurement data. An initial FEM was developed and the correlation between the experimental modal results and their analytical counterparts was studied. Modelling of the pier and abutment bearings was carefully adjusted to reflect the real operational conditions of the bridge. The subproblem approximation method was subsequently utilized to automatically update the FEM. For this purpose, the influences of bearing stiffness, and of the mass density and Young's modulus of the materials, were examined as uncertain parameters using sensitivity analysis. The updating objective function was defined as a summation of the squared relative errors of the natural frequencies between the FEM and the experiment. All the identified modes were used as the target responses with the purpose of putting more constraints on the optimization process and decreasing the number of potentially feasible combinations of parameter changes. The updated FEM of the bridge produced sufficient improvements in natural frequencies for most modes of interest, and can serve for more precise dynamic response prediction or future investigation of the bridge's health.

  7. Build-Up Approach to Updating the Mock Quiet Spike Beam Model

    NASA Technical Reports Server (NTRS)

    Herrera, Claudia Y.; Pak, Chan-gi

    2007-01-01

    When a new aircraft is designed or a modification is made to an existing aircraft, the aeroelastic properties of the aircraft should be examined to ensure the aircraft is flightworthy. Evaluating the aeroelastic properties of a new or modified aircraft can include performing a variety of analyses, such as modal and flutter analyses. In order to produce accurate results from these analyses, it is imperative to work with finite element models (FEM) that have been validated by or correlated to ground vibration test (GVT) data. Updating an analytical model using measured data is a challenge in the area of structural dynamics. The analytical model update process encompasses a series of optimizations that match analytical frequencies and mode shapes to the measured modal characteristics of the structure. In the past, the method used to update a model to test data was "trial and error." This is an inefficient method: running a modal analysis, comparing the analytical results to the GVT data, manually modifying one or more structural parameters (mass, CG, inertia, area, etc.), rerunning the analysis, and comparing the new analytical modal characteristics to the GVT modal data. If the match is close enough (with close enough defined by the analyst's updating requirements), then the updating process is completed. If the match does not meet the updating requirements, then the parameters are changed again and the process is repeated. Clearly, this manual optimization process is highly inefficient for large FEMs and/or a large number of structural parameters. NASA Dryden Flight Research Center (DFRC) has developed, in-house, a Mode Matching Code that automates the above-mentioned optimization process. DFRC's in-house Mode Matching Code reads mode shapes and frequencies acquired from the GVT to create the target model. It also reads the current analytical model, as well as the design variables and their upper and lower limits. It performs a modal analysis on this model and modifies it to create an updated model that has mode shapes and frequencies similar to those of the target model. The Mode Matching Code outputs frequencies and modal assurance criterion (MAC) values that allow a quantified comparison of the updated model versus the target model. A recent application of this code is the F-15B supersonic flight testing platform. NASA DFRC possesses a modified F-15B that is used as a test bed aircraft for supersonic flight experiments. Traditionally, the finite element model of the test article is generated. A GVT is done on the test article to validate and update its FEM. This FEM is then mated to the F-15B model, which was correlated to GVT data in the fall of 2004. A GVT is conducted with the test article mated to the aircraft, and this mated F-15B/test article FEM is correlated to this final GVT.
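
    The modal assurance criterion values reported by such a mode-matching process follow the standard MAC definition, sketched below for real-valued mode shapes; this is shown only as a reference computation, not DFRC's in-house code.

```python
# Standard modal assurance criterion (MAC) matrix between two mode-shape sets.
import numpy as np

def mac_matrix(phi_a, phi_t):
    """phi_a, phi_t: real-valued arrays of shape (n_dofs, n_modes_a) and (n_dofs, n_modes_t).

    Entry (i, j) compares analytical mode i with target (test) mode j; values near 1
    indicate well-correlated shapes."""
    num = np.abs(phi_a.T @ phi_t) ** 2
    den = np.outer(np.sum(phi_a * phi_a, axis=0), np.sum(phi_t * phi_t, axis=0))
    return num / den
```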

  8. Neurosurgery simulation using non-linear finite element modeling and haptic interaction

    NASA Astrophysics Data System (ADS)

    Lee, Huai-Ping; Audette, Michel; Joldes, Grand R.; Enquobahrie, Andinet

    2012-02-01

    Real-time surgical simulation is becoming an important component of surgical training. To meet the real-time requirement, however, the accuracy of the biomechanical modeling of soft tissue is often compromised due to computing resource constraints. Furthermore, haptic integration presents an additional challenge with its requirement for a high update rate. As a result, most real-time surgical simulation systems employ a linear elasticity model, simplified numerical methods such as the boundary element method or spring-particle systems, and coarse volumetric meshes. However, these systems are not clinically realistic. We present here an ongoing work aimed at developing an efficient and physically realistic neurosurgery simulator using a non-linear finite element method (FEM) with haptic interaction. Real-time finite element analysis is achieved by utilizing the total Lagrangian explicit dynamic (TLED) formulation and GPU acceleration of per-node and per-element operations. We employ a virtual coupling method for separating deformable body simulation and collision detection from haptic rendering, which needs to be updated at a much higher rate than the visual simulation. The system provides accurate biomechanical modeling of soft tissue while retaining real-time performance with haptic interaction. However, our experiments showed that the stability of the simulator depends heavily on the material properties of the tissue and the speed of colliding objects. Hence, additional efforts, including dynamic relaxation, are required to improve the stability of the system.

  9. Vibration analysis of resistance spot welding joint for dissimilar plate structure (mild steel 1010 and stainless steel 304)

    NASA Astrophysics Data System (ADS)

    Sani, M. S. M.; Nazri, N. A.; Alawi, D. A. J.

    2017-09-01

    Resistance spot welding (RSW) is a proficient joining method commonly used for sheet metal joining and has become one of the oldest spot welding processes used in industry, especially in the automotive sector. RSW involves the application of heat and pressure, without neglecting the time taken, when joining two or more metal sheets at a localized area, and is claimed to be the most efficient welding process in metal fabrication. The purpose of this project is to perform model updating of an RSW plate structure between mild steel 1010 and stainless steel 304. In order to do the updating, normal mode finite element analysis (FEA) and experimental modal analysis (EMA) have been carried out. Results show that the discrepancies in natural frequency between FEA and EMA are below 10 %. Sensitivity-based model updating is evaluated in order to determine which parameters are influential in this structural dynamic modification. The Young's modulus and density of both materials are indicated as significant parameters for model updating. In conclusion, after performing model updating, the total average error of the dissimilar RSW plate is improved significantly.

  10. Techniques of orbital decay and long-term ephemeris prediction for satellites in earth orbit

    NASA Technical Reports Server (NTRS)

    Barry, B. F.; Pimm, R. S.; Rowe, C. K.

    1971-01-01

    In the special perturbation method, Cowell and variation-of-parameters formulations of the motion equations are implemented and numerically integrated. Variations in the orbital elements due to drag are computed using the 1970 Jacchia atmospheric density model, which includes the effects of semiannual variations, diurnal bulge, solar activity, and geomagnetic activity. In the general perturbation method, two-variable asymptotic series and automated manipulation capabilities are used to obtain analytical solutions to the variation-of-parameters equations. Solutions are obtained considering the effect of oblateness only and the combined effects of oblateness and drag. These solutions are then numerically evaluated by means of a FORTRAN program in which an updating scheme is used to maintain accurate epoch values of the elements. The atmospheric density function is approximated by a Fourier series in true anomaly, and the 1970 Jacchia model is used to periodically update the Fourier coefficients. The accuracy of both methods is demonstrated by comparing computed orbital elements to actual elements over time spans of up to 8 days for the special perturbation method and up to 356 days for the general perturbation method.

  11. Identification of cracks in thick beams with a cracked beam element model

    NASA Astrophysics Data System (ADS)

    Hou, Chuanchuan; Lu, Yong

    2016-12-01

    The effect of a crack on the vibration of a beam is a classical problem, and various models have been proposed, ranging from the basic stiffness reduction method to more sophisticated models based on the additional flexibility due to a crack. However, in damage identification or finite element model updating applications, it is still common practice to employ a simple stiffness reduction factor to represent a crack in the identification process, whereas the use of a more realistic crack model is rather limited. In this paper, the issues with the simple stiffness reduction method, particularly concerning thick beams, are highlighted along with a review of several other crack models. A robust finite element model updating procedure is then presented for the detection of cracks in beams. The description of the crack parameters is based on the cracked beam flexibility formulated by means of fracture mechanics, and it takes into consideration shear deformation and coupling between translational and longitudinal vibrations, and is thus particularly suitable for thick beams. The identification procedure employs a global searching technique using Genetic Algorithms, and there is no restriction on the location, severity and number of cracks to be identified. The procedure is verified to yield satisfactory identification for practically any configuration of cracks in a beam.
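
    The global search idea can be illustrated with a bare-bones genetic algorithm. The routine `beam_frequencies` below is a hypothetical placeholder for the cracked-beam FE model mapping crack parameters (e.g., location and depth per crack) to natural frequencies, and the GA settings are arbitrary, not the authors' values.

```python
# Minimal genetic-algorithm search for crack parameters (illustrative only).
# `beam_frequencies(params)` is a hypothetical cracked-beam model returning
# natural frequencies for a candidate crack configuration.
import numpy as np

rng = np.random.default_rng(0)

def fitness(params, beam_frequencies, f_measured):
    # Negative squared frequency mismatch: larger is better.
    return -np.sum((beam_frequencies(params) - f_measured) ** 2)

def identify_crack(beam_frequencies, f_measured, bounds, pop_size=40, n_gen=100):
    lo, hi = np.array(bounds, dtype=float).T            # per-parameter bounds
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(n_gen):
        scores = np.array([fitness(p, beam_frequencies, f_measured) for p in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        # Uniform crossover between randomly paired parents.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, len(lo))) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        # Gaussian mutation, clipped to the bounds.
        children += rng.normal(0.0, 0.05 * (hi - lo), size=children.shape)
        pop = np.clip(children, lo, hi)
    scores = np.array([fitness(p, beam_frequencies, f_measured) for p in pop])
    return pop[np.argmax(scores)]
```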

  12. Rapid Structural Design Change Evaluation with AN Experiment Based FEM

    NASA Astrophysics Data System (ADS)

    Chu, C.-H.; Trethewey, M. W.

    1998-04-01

    The work in this paper proposes a dynamic structural design model that can be developed in a rapid fashion. The approach endeavours to produce a simplified FEM developed in conjunction with an experimental modal database. The FEM is formulated directly from the geometry and connectivity used in an experimental modal test using beam/frame elements. The model sacrifices fine detail for a rapid development time. The FEM is updated at the element level so that the dynamic response replicates the experimental results closely. The physical attributes of the model are retained, making it well suited to evaluating the effect of potential design changes. The capabilities are evaluated in a series of computational and laboratory tests. First, a study is performed with a simulated cantilever beam with a variable mass and stiffness distribution. The modal characteristics serve as the updating target, with random noise added to simulate experimental uncertainty. A uniformly distributed FEM is developed and updated. The results are excellent: all natural frequencies are within 0.001%, with MAC values above 0.99. Next, the method is applied to predict the dynamic changes of a hardware portal frame structure for a radical design change. Natural frequency predictions from the original FEM differ by as much as almost 18%, with reasonable MAC values. The results predicted from the updated model compare excellently to the actual hardware changes: the differences in the first five natural frequencies are around 5%, and the corresponding mode shapes produce MAC values above 0.98.

  13. Iterative methods for mixed finite element equations

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.

    1985-01-01

    Iterative strategies for the solution of the indefinite systems of equations arising from the mixed finite element method are investigated in this paper, with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived, which is then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant metric iterations, which do not involve updating of the preconditioner, and variable metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.

  14. Badhwar - O'Neill galactic cosmic ray model update based on Advanced Composition Explorer (ACE) energy spectra from 1997 to present

    NASA Astrophysics Data System (ADS)

    O'Neill, P.

    Accurate knowledge of the interplanetary Galactic Cosmic Ray (GCR) environment is critical to planning and operating manned space flight to the moon and beyond. In the early 1990's Badhwar and O'Neill developed a GCR model based on balloon and satellite data from 1954 to 1992. This model accurately accounts for solar modulation of each element (hydrogen through iron) by propagating the Local Interplanetary Spectrum (LIS) of each element through the heliosphere by solving the Fokker-Planck diffusion, convection, energy-loss boundary value problem. A single value of the deceleration parameter describes the modulation of each of the elements and determines the GCR energy spectrum at any distance from the sun for a given level of solar cycle modulation. Since August 1997 the Advanced Composition Explorer (ACE), stationed at the Earth-Sun L1 libration point (about 1.5 million km from earth), has provided GCR energy spectra for boron through nickel. The Cosmic Ray Isotope Spectrometer (CRIS) provides "quiet time" spectra in the range of highest modulation, ~50-500 MeV/nucleon. The collection power of CRIS is much larger than that of any of the previous satellite or balloon GCR instruments: 250 cm^2-sr compared to <10 cm^2-sr. This new data was used to update the original Badhwar-O'Neill Model and greatly improve the interplanetary GCR prediction accuracy. When the new, highly precise ACE CRIS data was analyzed it became obvious that the LIS spectrum for each element precisely fits a very simple analytical energy power law that was suggested by Leonard Fisk over 30 years ago. The updated Badhwar-O'Neill Model is shown to be accurate to within 5% for elements such as oxygen, which have sufficient abundance that over 1000 ions are captured in each energy bin within a 30-day period. The paper clearly demonstrates the statistical relationship between the number of ions captured by the instrument in a given time and the precision of the model for each element. This is a significant model upgrade that should provide interplanetary mission planners with highly accurate GCR environment data for radiation protection of astronauts and radiation hardness assurance for electronic equipment.

  15. Uncertainty aggregation and reduction in structure-material performance prediction

    NASA Astrophysics Data System (ADS)

    Hu, Zhen; Mahadevan, Sankaran; Ao, Dan

    2018-02-01

    An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.

  16. Finite element model updating of riveted joints of simplified model aircraft structure

    NASA Astrophysics Data System (ADS)

    Yunus, M. A.; Rani, M. N. Abdul; Sani, M. S. M.; Shah, M. A. S. Aziz

    2018-04-01

    Thin metal sheets are widely used to fabricate various types of aerospace structures because of their flexibility and the ease with which they can be formed into structures of many shapes. The riveted joint has turned out to be one of the most popular joint types for joining aerospace structures because such joints can easily be disassembled, maintained and inspected. In this paper, thin metal sheet components are assembled together via riveted joints to form a simplified model of an aerospace structure. However, modelling jointed structures that are attached together via mechanical joints such as riveted joints is very difficult due to local effects. Understandably, the dynamic characteristics of the joined structure can be significantly affected by these joints due to local effects at the mating areas of the riveted joints, such as surface contact, clamping force and slip. A few types of element connectors that are available in MSC NASTRAN/PATRAN are investigated as representations of the rivet joints. The results obtained, in terms of natural frequencies and mode shapes, are then contrasted with their experimental counterparts in order to investigate the level of accuracy of the element connectors used in modelling the rivet joints of the riveted structure. The reconciliation method via finite element model updating is used to minimise the discrepancy of the initial finite element model of the riveted joined structure with respect to the experimental data, and the results are discussed.

  17. Description and evaluation of the Community Multiscale Air ...

    EPA Pesticide Factsheets

    The Community Multiscale Air Quality (CMAQ) model is a comprehensive multipollutant air quality modeling system developed and maintained by the US Environmental Protection Agency's (EPA) Office of Research and Development (ORD). Recently, version 5.1 of the CMAQ model (v5.1) was released to the public, incorporating a large number of science updates and extended capabilities over the previous release version of the model (v5.0.2). These updates include the following: improvements in the meteorological calculations in both CMAQ and the Weather Research and Forecast (WRF) model used to provide meteorological fields to CMAQ, updates to the gas and aerosol chemistry, revisions to the calculations of clouds and photolysis, and improvements to the dry and wet deposition in the model. Sensitivity simulations isolating several of the major updates to the modeling system show that changes to the meteorological calculations result in enhanced afternoon and early evening mixing in the model, periods when the model historically underestimates mixing. This enhanced mixing results in higher ozone (O3) mixing ratios on average due to reduced NO titration, and lower fine particulate matter (PM2.5) concentrations due to greater dilution of primary pollutants (e.g., elemental and organic carbon). Updates to the clouds and photolysis calculations greatly improve consistency between the WRF and CMAQ models and result in generally higher O3 mixing ratios, primarily due to reduced

  18. Computer animation of modal and transient vibrations

    NASA Technical Reports Server (NTRS)

    Lipman, Robert R.

    1987-01-01

    An interactive computer graphics processor is described that is capable of generating input to animate modal and transient vibrations of finite element models on an interactive graphics system. The results from NASTRAN can be postprocessed such that a three dimensional wire-frame picture, in perspective, of the finite element mesh is drawn on the graphics display. Modal vibrations of any mode shape or transient motions over any range of steps can be animated. The finite element mesh can be color-coded by any component of displacement. Viewing parameters and the rate of vibration of the finite element model can be interactively updated while the structure is vibrating.

  19. Rare Earth Element Concentration of Wyoming Thermal Waters Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quillinan, Scott; Nye, Charles; Neupane, Hari

    Updated version of data generated from rare earth element investigation of produced waters. These data represent major, minor, trace, isotopes, and rare earth element concentrations in geologic formations and water associated with oil and gas production.

  20. Selection of experimental modal data sets for damage detection via model update

    NASA Technical Reports Server (NTRS)

    Doebling, S. W.; Hemez, F. M.; Barlow, M. S.; Peterson, L. D.; Farhat, C.

    1993-01-01

    When using a finite element model update algorithm for detecting damage in structures, it is important that the experimental modal data sets used in the update be selected in a coherent manner. In the case of a structure with extremely localized modal behavior, it is necessary to use both low and high frequency modes, but many of the modes in between may be excluded. In this paper, we examine two different mode selection strategies based on modal strain energy, and compare their success to the choice of an equal number of modes based merely on lowest frequency. Additionally, some parameters are introduced to enable a quantitative assessment of the success of our damage detection algorithm when using the various set selection criteria.
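
    One simple way to rank modes by modal strain energy, in the spirit of the strategies compared above, is sketched below; the specific criterion (fraction of a mode's strain energy carried by candidate elements) and the assumed inputs are illustrative choices, not necessarily those of the paper.

```python
# Illustrative mode selection by modal strain energy: modes whose strain energy is
# concentrated in the candidate (possibly damaged) elements are ranked highest.
# `K_elems` (per-element stiffness matrices expanded to full model size) is an assumed input.
import numpy as np

def select_modes(phi, K_elems, candidate_elems, n_select):
    """phi: (n_dofs, n_modes) mode shapes; K_elems: list of full-size element
    stiffness matrices; candidate_elems: indices of elements of interest."""
    n_modes = phi.shape[1]
    scores = np.zeros(n_modes)
    for r in range(n_modes):
        energies = np.array([0.5 * phi[:, r] @ Ke @ phi[:, r] for Ke in K_elems])
        # Fraction of the mode's strain energy carried by the candidate elements.
        scores[r] = energies[candidate_elems].sum() / energies.sum()
    return np.argsort(scores)[::-1][:n_select]
```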

  1. Seismological comparisons of solar models with element diffusion using the MHD, OPAL, and SIREFF equations of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guzik, J.A.; Swenson, F.J.

    We compare the thermodynamic and helioseismic properties of solar models evolved using three different equation of state (EOS) treatments: the Mihalas, Däppen & Hummer EOS tables (MHD); the latest Rogers, Swenson, & Iglesias EOS tables (OPAL); and a new analytical EOS (SIREFF) developed by Swenson et al. All of the models include diffusive settling of helium and heavier elements. The models use updated OPAL opacity tables based on the 1993 Grevesse & Noels solar element mixture, incorporating 21 elements instead of the 14 elements used for earlier tables. The properties of solar models that are evolved with the SIREFF EOS agree closely with those of models evolved using the OPAL or MHD tables. However, unlike the MHD or OPAL EOS tables, the SIREFF in-line EOS can readily account for variations in overall Z abundance and the element mixture resulting from nuclear processing and diffusive element settling. Accounting for Z abundance variations in the EOS has a small, but non-negligible, effect on model properties (e.g., pressure or squared sound speed), as much as 0.2% at the solar center and in the convection zone. The OPAL and SIREFF equations of state include electron exchange, which produces models requiring a slightly higher initial helium abundance, and increases the convection zone depth compared to models using the MHD EOS. However, the updated OPAL opacities are as much as 5% lower near the convection zone base, resulting in a small decrease in convection zone depth. The calculated low-degree nonadiabatic frequencies for all of the models agree with the observed frequencies to within a few microhertz (0.1%). The SIREFF analytical calibrations are intended to work over a wide range of interior conditions found in stellar models of mass greater than 0.25 solar masses and evolutionary states from the pre-main-sequence through the asymptotic giant branch (AGB). It is significant that the SIREFF EOS produces solar models that both measure up to the stringent requirements imposed by solar oscillation observations and inferences, and are more versatile than EOS tables.

  2. The Gypsy Database (GyDB) of mobile genetic elements: release 2.0

    PubMed Central

    Llorens, Carlos; Futami, Ricardo; Covelli, Laura; Domínguez-Escribá, Laura; Viu, Jose M.; Tamarit, Daniel; Aguilar-Rodríguez, Jose; Vicente-Ripolles, Miguel; Fuster, Gonzalo; Bernet, Guillermo P.; Maumus, Florian; Munoz-Pomer, Alfonso; Sempere, Jose M.; Latorre, Amparo; Moya, Andres

    2011-01-01

    This article introduces the second release of the Gypsy Database of Mobile Genetic Elements (GyDB 2.0): a research project devoted to the evolutionary dynamics of viruses and transposable elements based on their phylogenetic classification (per lineage and protein domain). The Gypsy Database (GyDB) is a long-term project that is continuously progressing and that, owing to the high molecular diversity of mobile elements, needs to be completed in several stages. GyDB 2.0 has been powered with a wiki to allow other researchers to participate in the project. The current database stage and scope are long terminal repeat (LTR) retroelements and relatives. GyDB 2.0 is an update based on the analysis of Ty3/Gypsy, Retroviridae, Ty1/Copia and Bel/Pao LTR retroelements and the Caulimoviridae pararetroviruses of plants. Among other features, in terms of the aforementioned topics, this update adds: (i) a variety of descriptions and reviews distributed in multiple web pages; (ii) protein-based phylogenies, where phylogenetic levels are assigned to distinct classified elements; (iii) a collection of multiple alignments, lineage-specific hidden Markov models and consensus sequences, called the GyDB collection; (iv) updated RefSeq databases and BLAST and HMM servers to facilitate sequence characterization of new LTR retroelement and caulimovirus queries; and (v) a bibliographic server. GyDB 2.0 is available at http://gydb.org. PMID:21036865

  3. The Gypsy Database (GyDB) of mobile genetic elements: release 2.0.

    PubMed

    Llorens, Carlos; Futami, Ricardo; Covelli, Laura; Domínguez-Escribá, Laura; Viu, Jose M; Tamarit, Daniel; Aguilar-Rodríguez, Jose; Vicente-Ripolles, Miguel; Fuster, Gonzalo; Bernet, Guillermo P; Maumus, Florian; Munoz-Pomer, Alfonso; Sempere, Jose M; Latorre, Amparo; Moya, Andres

    2011-01-01

    This article introduces the second release of the Gypsy Database of Mobile Genetic Elements (GyDB 2.0): a research project devoted to the evolutionary dynamics of viruses and transposable elements based on their phylogenetic classification (per lineage and protein domain). The Gypsy Database (GyDB) is a long-term project that is continuously progressing and that, owing to the high molecular diversity of mobile elements, needs to be completed in several stages. GyDB 2.0 has been powered with a wiki to allow other researchers to participate in the project. The current database stage and scope are long terminal repeat (LTR) retroelements and relatives. GyDB 2.0 is an update based on the analysis of Ty3/Gypsy, Retroviridae, Ty1/Copia and Bel/Pao LTR retroelements and the Caulimoviridae pararetroviruses of plants. Among other features, in terms of the aforementioned topics, this update adds: (i) a variety of descriptions and reviews distributed in multiple web pages; (ii) protein-based phylogenies, where phylogenetic levels are assigned to distinct classified elements; (iii) a collection of multiple alignments, lineage-specific hidden Markov models and consensus sequences, called the GyDB collection; (iv) updated RefSeq databases and BLAST and HMM servers to facilitate sequence characterization of new LTR retroelement and caulimovirus queries; and (v) a bibliographic server. GyDB 2.0 is available at http://gydb.org.

  4. Quantitative Measures for Evaluation of Ultrasound Therapies of the Prostate

    NASA Astrophysics Data System (ADS)

    Kobelevskiy, Ilya; Burtnyk, Mathieu; Bronskill, Michael; Chopra, Rajiv

    2010-03-01

    Development of non-invasive techniques for prostate cancer treatment requires implementation of quantitative measures for evaluation of the treatment results. In this paper, we introduce measures that estimate spatial targeting accuracy and potential thermal damage to the structures surrounding the prostate. The measures were developed for the technique of treating prostate cancer with transurethral ultrasound heating applicators guided by active MR temperature feedback. Variations of ultrasound element length and related MR imaging parameters, such as MR slice thickness and update time, were investigated by performing numerical simulations of the treatment on a database of ten patient prostate geometries segmented from clinical MR images. The susceptibility of each parameter configuration to uncertainty in MR temperature measurements was studied by adding noise to the temperature measurements. Gaussian noise with zero mean and standard deviation of 0, 1, 3 and 5°C was used to model different levels of uncertainty in MR temperature measurements. Results of simulations for each parameter configuration were averaged over the database of the ten prostate patient geometries studied. Results have shown that for an update time of 5 seconds both 3- and 5-mm elements achieve appropriate performance for temperature uncertainty up to 3°C, while a temperature uncertainty of 5°C leads to a noticeable reduction in spatial accuracy and an increased risk of damaging the rectal wall. Ten-mm elements lacked spatial accuracy and had a higher risk of damaging the rectal wall compared to 3- and 5-mm elements, but were less sensitive to the level of temperature uncertainty. The effect of changing the update time was studied for 5-mm elements. Simulations showed that the update time had minor effects on all aspects of treatment for temperature uncertainties of 0°C and 1°C, while temperature uncertainties of 3°C and 5°C led to reduced spatial accuracy, increased potential damage to the rectal wall, and longer treatment times for update times above 5 seconds. Overall evaluation of the results suggested that 5-mm elements showed the best performance under physically achievable MR imaging parameters.

  5. Hyperfine Structure and Abundances of Heavy Elements in 68 Tauri (HD 27962)

    NASA Astrophysics Data System (ADS)

    Martinet, S.; Monier, R.

    2017-12-01

    HD 27962, also known as 68 Tauri, is a Chemically Peculiar Am star member of the Hyades Open Cluster in the local arm of the Galaxy. We have modeled the high resolution SOPHIE (R=75000) spectrum of 68 Tauri using an updated model atmosphere and spectrum synthesis to derive chemical abundances in its atmosphere. In particular, we have studied the effect of the inclusion of the hyperfine structure of various barium isotopes on the determination of the barium abundance in 68 Tauri. We have also derived new abundances using updated accurate atomic parameters retrieved from the NIST database.

  6. Curvature estimation for multilayer hinged structures with initial strains

    NASA Astrophysics Data System (ADS)

    Nikishkov, G. P.

    2003-10-01

    A closed-form estimate of curvature for hinged multilayer structures with initial strains is developed. The finite element method is used for modeling of self-positioning microstructures. The geometrically nonlinear problem with large rotations and large displacements is solved using a step procedure with node coordinate updating. Finite element results for the curvature of a hinged micromirror with variable width are compared to the closed-form estimates.

  7. Structural Finite Element Model Updating Using Vibration Tests and Modal Analysis for NPL footbridge - SHM demonstrator

    NASA Astrophysics Data System (ADS)

    Barton, E.; Middleton, C.; Koo, K.; Crocker, L.; Brownjohn, J.

    2011-07-01

    This paper presents results from a collaboration between the National Physical Laboratory (NPL) and the University of Sheffield on an ongoing research project at NPL. A 50-year-old reinforced concrete footbridge has been converted to a full-scale structural health monitoring (SHM) demonstrator. The structure is monitored using a variety of techniques; however, interrelating results and converting data to knowledge are not possible without a reliable numerical model. During the first stage of the project, the work concentrated on static loading, and an FE model of the undamaged bridge was created, and updated, under specified static loading and temperature conditions. This model was found to accurately represent the response under static loading and it was used to identify locations for sensor installation. The next stage involves the evaluation of repair/strengthening patches under both static and dynamic loading. Therefore, before deliberately introducing significant damage, the first set of dynamic tests was conducted and modal properties were estimated. The measured modal properties did not match the modal analysis from the statically updated FE model; it was clear that the existing model required updating. This paper introduces the results of the dynamic testing and model updating. It is shown that the structure exhibits large non-linear, amplitude-dependent characteristics. This makes the updating process difficult, but we attempt to produce the best linear representation of the structure. A sensitivity analysis is performed to determine the most sensitive locations for planned damage/repair scenarios and is used to decide whether additional sensors will be necessary.

  8. Proceedings of the Atmospheric Neutral Density Specialist Conference, Held in Colorado Springs, Colorado on March 22-23, 1988

    DTIC Science & Technology

    1988-03-23

    observations more often. Using this updated satellite orbital element set, a more accurate space surveillance product is generated by ensuring the time span...position were more accurate, observations could be required less frequently by the spacetrack network, the satellite orbital element set would not need to...of the orbit, one that includes the best model of atmospheric drag, will give the best, or most accurate, element set for a satellite. By maintaining

  9. Heat transfer model and finite element formulation for simulation of selective laser melting

    NASA Astrophysics Data System (ADS)

    Roy, Souvik; Juha, Mario; Shephard, Mark S.; Maniatty, Antoinette M.

    2017-10-01

    A novel approach and finite element formulation for modeling the melting, consolidation, and re-solidification process that occurs in selective laser melting additive manufacturing is presented. Two state variables are introduced to track the phase (melt/solid) and the degree of consolidation (powder/fully dense). The effect of the consolidation on the absorption of the laser energy into the material, as it transforms from a porous powder to a dense melt, is considered. A Lagrangian finite element formulation, which solves the governing equations on the unconsolidated reference configuration, is derived; it naturally considers the effect of the changing geometry as the powder melts, without needing to update the simulation domain. The finite element model is implemented in a general-purpose parallel finite element solver. Results are presented and compared to experimental results in the literature for a single laser track, with good agreement. Predictions for a spiral laser pattern are also shown.

  10. Parameter identification of material constants in a composite shell structure

    NASA Technical Reports Server (NTRS)

    Martinez, David R.; Carne, Thomas G.

    1988-01-01

    One of the basic requirements in engineering analysis is the development of a mathematical model describing the system. Frequently comparisons with test data are used as a measurement of the adequacy of the model. An attempt is typically made to update or improve the model to provide a test verified analysis tool. System identification provides a systematic procedure for accomplishing this task. The terms system identification, parameter estimation, and model correlation all refer to techniques that use test information to update or verify mathematical models. The goal of system identification is to improve the correlation of model predictions with measured test data, and produce accurate, predictive models. For nonmetallic structures the modeling task is often difficult due to uncertainties in the elastic constants. A finite element model of the shell was created, which included uncertain orthotropic elastic constants. A modal survey test was then performed on the shell. The resulting modal data, along with the finite element model of the shell, were used in a Bayes estimation algorithm. This permitted the use of covariance matrices to weight the confidence in the initial parameter values as well as confidence in the measured test data. The estimation procedure also employed the concept of successive linearization to obtain an approximate solution to the original nonlinear estimation problem.
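
    The covariance-weighted Bayes estimation step described above has a well-known linearized form, sketched below; the notation and the single-step structure are illustrative (in practice the step is repeated under successive linearization), not the exact algorithm used in the paper.

```python
# One linearized Bayes-estimation step for uncertain material constants (illustrative).
# S: sensitivity of the modal predictions w.r.t. the parameters; C_theta and C_meas
# are the prior-parameter and measurement covariance matrices used as confidence weights.
import numpy as np

def bayes_update(theta0, z_measured, h_model, S, C_theta, C_meas):
    r = z_measured - h_model(theta0)          # residual between test and model predictions
    Wm = np.linalg.inv(C_meas)                # confidence in the measured data
    Wp = np.linalg.inv(C_theta)               # confidence in the initial parameter values
    A = S.T @ Wm @ S + Wp
    dtheta = np.linalg.solve(A, S.T @ Wm @ r)
    return theta0 + dtheta                    # re-linearize about this point and repeat
```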

  11. Finite element model updating and damage detection for bridges using vibration measurement.

    DOT National Transportation Integrated Search

    2013-12-01

    In this report, the results of a study on developing a damage detection methodology based on Statistical Pattern Recognition are : presented. This methodology uses a new damage sensitive feature developed in this study that relies entirely on modal :...

  12. Distribution factors for construction loads and girder capacity equations [project summary].

    DOT National Transportation Integrated Search

    2017-03-01

    This project focused on the use of Florida I-beams (FIBs) in bridge construction. University of Florida researchers used analytical models and finite element analysis to update equations used in the design of bridges using FIBs. They were particularl...

  13. A dynamic multi-level optimal design method with embedded finite-element modeling for power transformers

    NASA Astrophysics Data System (ADS)

    Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong

    2018-05-01

    This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding more design points, which are selected by the Shifted Hammersley Method (SHM) and calculated by the finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is modulated by changing the level of error tolerance. Based on the response surface of the preliminary design, a refined optimal design is added using a multi-objective genetic algorithm (MOGA). The effectiveness of the proposed optimal design method is validated through a classic three-phase power TDO problem.
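
    The dynamically updated second-order response surface can be sketched as follows; the feature construction, stopping test and function names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a dynamically refined second-order response surface: FEM-evaluated design
# points are appended until the surrogate reproduces the newest point within `tol`.
import numpy as np
from itertools import combinations_with_replacement

def quad_features(X):
    """Constant, linear and quadratic terms of the design variables."""
    X = np.atleast_2d(X)
    n = X.shape[1]
    cols = [np.ones(len(X))] + [X[:, j] for j in range(n)]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(n), 2)]
    return np.column_stack(cols)

def fit_response_surface(X, y):
    coeffs, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
    return coeffs

def refine(fem_eval, candidate_points, X, y, tol=1e-3):
    coeffs = fit_response_surface(X, y)
    for x_new in candidate_points:              # e.g. points drawn from a Shifted Hammersley set
        y_new = fem_eval(x_new)                 # expensive finite-element evaluation
        X = np.vstack([X, x_new])
        y = np.append(y, y_new)
        coeffs = fit_response_surface(X, y)
        pred = (quad_features(x_new) @ coeffs)[0]
        if abs(pred - y_new) < tol:             # surrogate accuracy requirement satisfied
            break
    return coeffs, X, y
```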

  14. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    NASA Astrophysics Data System (ADS)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulation and/or experimental measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the non-linear element in the problem with a priori knowledge about its position.

  15. Fast Updating National Geo-Spatial Databases with High Resolution Imagery: China's Methodology and Experience

    NASA Astrophysics Data System (ADS)

    Chen, J.; Wang, D.; Zhao, R. L.; Zhang, H.; Liao, A.; Jiu, J.

    2014-04-01

    Geospatial databases are an irreplaceable national treasure of immense importance. Their up-to-dateness, that is, their consistency with respect to the real world, plays a critical role in their value and applications. The continuous updating of map databases at the 1:50,000 scale is a massive and difficult task for large countries spanning more than several million square kilometres. This paper presents the research and technological development that supports national map updating at the 1:50,000 scale in China, including the development of updating models and methods, production tools and systems for large-scale and rapid updating, as well as the design and implementation of the continuous updating workflow. Many data sources had to be used and integrated to form a high-accuracy, quality-checked product. This in turn required up-to-date techniques of image matching, semantic integration, generalization, database management and conflict resolution. Specific software tools and packages were designed and developed to support large-scale updating production with high-resolution imagery and large-scale data generalization, such as map generalization, GIS-supported change interpretation from imagery, DEM interpolation, image-matching-based orthophoto generation, and data control at different levels. A national 1:50,000 database updating strategy and its production workflow were designed, including a full-coverage updating pattern characterized by all-element topographic data modeling, change detection in all related areas, and whole-process data quality control, a series of technical production specifications, and a network of updating production units in different geographic places in the country.

  16. Structural Model Tuning Capability in an Object-Oriented Multidisciplinary Design, Analysis, and Optimization Tool

    NASA Technical Reports Server (NTRS)

    Lung, Shun-fat; Pak, Chan-gi

    2008-01-01

    Updating the finite element model using measured data is a challenging problem in the area of structural dynamics. The model updating process requires not only satisfactory correlations between analytical and experimental results, but also the retention of dynamic properties of structures. Accurate rigid body dynamics are important for flight control system design and aeroelastic trim analysis. Minimizing the difference between analytical and experimental results is a type of optimization problem. In this research, a multidisciplinary design, analysis, and optimization (MDAO) tool is introduced to optimize the objective function and constraints such that the mass properties, the natural frequencies, and the mode shapes are matched to the target data as well as the mass matrix being orthogonalized.

  17. Structural Model Tuning Capability in an Object-Oriented Multidisciplinary Design, Analysis, and Optimization Tool

    NASA Technical Reports Server (NTRS)

    Lung, Shun-fat; Pak, Chan-gi

    2008-01-01

    Updating the finite element model using measured data is a challenging problem in the area of structural dynamics. The model updating process requires not only satisfactory correlations between analytical and experimental results, but also the retention of dynamic properties of structures. Accurate rigid body dynamics are important for flight control system design and aeroelastic trim analysis. Minimizing the difference between analytical and experimental results is a type of optimization problem. In this research, a multidisciplinary design, analysis, and optimization [MDAO] tool is introduced to optimize the objective function and constraints such that the mass properties, the natural frequencies, and the mode shapes are matched to the target data as well as the mass matrix being orthogonalized.

  18. Finite element model updating of multi-span steel-arch-steel-girder bridges based on ambient vibrations

    NASA Astrophysics Data System (ADS)

    Hou, Tsung-Chin; Gao, Wei-Yuan; Chang, Chia-Sheng; Zhu, Guan-Rong; Su, Yu-Min

    2017-04-01

    The three-span steel-arch-steel-girder Jiaxian Bridge was newly constructed in 2010 to replace the former bridge, which had been destroyed by Typhoon Sinlaku (2008, Taiwan). It was designed and built to continue domestic service, as well as to improve the tourism business of the Kaohsiung city government, Taiwan. This study aimed at establishing a baseline model of the Jiaxian Bridge for hazardous scenario simulation, such as typhoons, floods and earthquakes. These precautions are necessitated by the inherent vulnerability of the site: it is near a fault and crosses a river. The uncalibrated baseline bridge model was built with structural finite elements in accordance with the blueprints. Ambient vibration measurements were performed repeatedly to acquire the elastic dynamic characteristics of the bridge structure. Two frequency domain system identification algorithms were employed to extract the measured operational modal parameters. Mode shapes, frequencies, and modal assurance criterion (MAC) values were configured as the fitting targets so as to calibrate/update the structural parameters of the baseline model. It has been recognized that different types of structural parameters contribute distinguishably to the fitting targets, as this study similarly found. For steel-arch-steel-girder bridges, and this case in particular, the joint rigidity of the steel components was found to be dominant, while material properties and section geometries were relatively minor. The updated model was capable of providing more rational elastic responses of the bridge superstructure under normal service conditions as well as hazardous scenarios, and can be used for managing the health condition of the bridge structure.

  19. Fatigue assessment of an existing steel bridge by finite element modelling and field measurements

    NASA Astrophysics Data System (ADS)

    Kwad, J.; Alencar, G.; Correia, J.; Jesus, A.; Calçada, R.; Kripakaran, P.

    2017-05-01

    The evaluation of the fatigue life of structural details in metallic bridges is a major challenge for bridge engineers. A reliable and cost-effective approach is essential to ensure appropriate maintenance and management of these structures. Typically, local stresses predicted by a finite element model of the bridge are employed to assess the fatigue life of fatigue-prone details. This paper illustrates an approach for fatigue assessment based on measured data for a connection in an old bascule steel bridge located in Exeter (UK). A finite element model is first developed from the design information. The finite element model of the bridge is calibrated using measured responses from an ambient vibration test. The stress time histories are calculated through dynamic analysis of the updated finite element model. Stress cycles are computed through the rainflow counting algorithm, and the fatigue-prone details are evaluated using the standard S-N curve approach and Miner's rule. Results show that the proposed approach can estimate the fatigue damage of a fatigue-prone detail in a structure using measured strain data.
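
    The damage summation step can be sketched as below, assuming the stress-range/count pairs have already been extracted by rainflow counting and using placeholder S-N constants; detail categories and partial safety factors from the relevant standard are omitted.

```python
# Miner's-rule damage summation from rainflow-counted stress cycles (illustrative).
# `cycles` is a list of (stress_range_MPa, count) pairs assumed to come from a
# rainflow count of the measured stress history; C and m are placeholder S-N constants.
def miner_damage(cycles, C=1.0e12, m=3.0):
    """S-N curve of the form N = C * S**(-m); cumulative damage D = sum(n_i / N_i)."""
    damage = 0.0
    for stress_range, count in cycles:
        N_allowable = C * stress_range ** (-m)
        damage += count / N_allowable
    return damage   # D >= 1 indicates the detail has nominally exhausted its fatigue life

# Example with a handful of counted cycles (values purely illustrative).
print(miner_damage([(40.0, 2.0e5), (80.0, 5.0e3), (120.0, 3.0e2)]))
```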

  20. Frequency Response Function Based Damage Identification for Aerospace Structures

    NASA Astrophysics Data System (ADS)

    Oliver, Joseph Acton

    Structural health monitoring technologies continue to be pursued for aerospace structures in the interests of increased safety and, when combined with health prognosis, efficiency in life-cycle management. The current dissertation develops and validates damage identification technology as a critical component for structural health monitoring of aerospace structures and, in particular, composite unmanned aerial vehicles. The primary innovation is a statistical least-squares damage identification algorithm based in concepts of parameter estimation and model update. The algorithm uses frequency response function based residual force vectors derived from distributed vibration measurements to update a structural finite element model through statistically weighted least-squares minimization producing location and quantification of the damage, estimation uncertainty, and an updated model. Advantages compared to other approaches include robust applicability to systems which are heavily damped, large, and noisy, with a relatively low number of distributed measurement points compared to the number of analytical degrees-of-freedom of an associated analytical structural model (e.g., modal finite element model). Motivation, research objectives, and a dissertation summary are discussed in Chapter 1 followed by a literature review in Chapter 2. Chapter 3 gives background theory and the damage identification algorithm derivation followed by a study of fundamental algorithm behavior on a two degree-of-freedom mass-spring system with generalized damping. Chapter 4 investigates the impact of noise then successfully proves the algorithm against competing methods using an analytical eight degree-of-freedom mass-spring system with non-proportional structural damping. Chapter 5 extends use of the algorithm to finite element models, including solutions for numerical issues, approaches for modeling damping approximately in reduced coordinates, and analytical validation using a composite sandwich plate model. Chapter 6 presents the final extension to experimental systems-including methods for initial baseline correlation and data reduction-and validates the algorithm on an experimental composite plate with impact damage. The final chapter deviates from development and validation of the primary algorithm to discuss development of an experimental scaled-wing test bed as part of a collaborative effort for developing structural health monitoring and prognosis technology. The dissertation concludes with an overview of technical conclusions and recommendations for future work.

  1. Finite element model updating using the shadow hybrid Monte Carlo technique

    NASA Astrophysics Data System (ADS)

    Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.

    2015-02-01

    Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques to deal with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers an important MCMC approach for dealing with higher-dimensional complex problems. HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and are compared to the application of the HMC algorithm on the same structures.
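    The basic HMC move that SHMC builds on can be sketched in a few lines. The following is a minimal illustration on a generic log-posterior over updating parameters; the target density, step size, and trajectory length are arbitrary assumptions, and the shadow-Hamiltonian modification itself is not shown.

      import numpy as np

      def hmc_step(theta, log_post, grad_log_post, eps=0.05, n_leapfrog=20, rng=np.random):
          """One Hybrid Monte Carlo move: leapfrog trajectory plus Metropolis accept/reject."""
          p = rng.standard_normal(theta.shape)              # auxiliary momenta
          theta_new, p_new = theta.copy(), p.copy()
          p_new += 0.5 * eps * grad_log_post(theta_new)     # initial half step for momentum
          for _ in range(n_leapfrog):
              theta_new += eps * p_new                      # full step for position
              p_new += eps * grad_log_post(theta_new)       # full step for momentum
          p_new -= 0.5 * eps * grad_log_post(theta_new)     # trim the last momentum step to a half step
          h_old = -log_post(theta) + 0.5 * p @ p            # Hamiltonian = -log posterior + kinetic energy
          h_new = -log_post(theta_new) + 0.5 * p_new @ p_new
          return theta_new if rng.uniform() < np.exp(h_old - h_new) else theta

      # Illustrative target: a standard Gaussian posterior over two updating parameters.
      log_post = lambda th: -0.5 * th @ th
      grad_log_post = lambda th: -th
      theta, samples = np.zeros(2), []
      for _ in range(2000):
          theta = hmc_step(theta, log_post, grad_log_post)
          samples.append(theta.copy())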

  2. Ionization ratios and elemental abundances in the atmosphere of 68 Tauri

    NASA Astrophysics Data System (ADS)

    Aouina, A.; Monier, R.

    2017-12-01

    We have derived the ionization ratios of twelve elements in the atmosphere of the star 68 Tauri (HD 27962) using an ATLAS9 model atmosphere with 72 layers computed for the effective temperature and surface gravity of the star. We then computed a grid of synthetic spectra generated by SYNSPEC49 based on an ATLAS9 model atmosphere in order to model one high resolution spectrum secured by one of us (RM) with the échelle spectrograph SOPHIE at Observatoire de Haute Provence. We could determine the abundances of several elements in their dominant ionization stage, including those defining the Am phenomenon. We thus provide new abundance determinations for 68 Tauri using updated accurate atomic data retrieved from the NIST database which extend previous abundance works.

  3. Computational simulation of the creep-rupture process in filamentary composite materials

    NASA Technical Reports Server (NTRS)

    Slattery, Kerry T.; Hackett, Robert M.

    1991-01-01

    A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.
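    The time-stepping idea described above can be illustrated with a toy fiber-bundle model: fibers share the load equally, each fiber's damage grows at a rate depending on its stress and a randomly drawn flaw severity, failed fibers shed load to the survivors, and repeated runs with different flaw dispersions yield a time-to-failure distribution. All rate constants and distributions below are illustrative assumptions, not values from the simulation described in the paper.

      import numpy as np

      def creep_rupture_time(n_fibers=200, total_load=2000.0, dt=1.0, rate_const=1e-6,
                             stress_exponent=4.0, seed=0):
          """Toy fiber-bundle creep-rupture run with randomly distributed initial flaws."""
          rng = np.random.default_rng(seed)
          flaw_severity = rng.lognormal(mean=0.0, sigma=0.3, size=n_fibers)  # random flaw dispersion
          damage = np.zeros(n_fibers)
          alive = np.ones(n_fibers, dtype=bool)
          t = 0.0
          while alive.any():
              stress = total_load / alive.sum()             # equal load sharing among surviving fibers
              # damage grows faster in fibers with severe flaws and at higher stress
              damage[alive] += dt * rate_const * flaw_severity[alive] * stress ** stress_exponent
              alive &= damage < 1.0                         # a fiber fails once its damage reaches 1
              t += dt
          return t

      # Repeat with different flaw dispersions to build up a time-to-failure distribution.
      times = [creep_rupture_time(seed=s) for s in range(20)]
      print("mean time to failure:", np.mean(times), " std:", np.std(times))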

  4. rSNPBase 3.0: an updated database of SNP-related regulatory elements, element-gene pairs and SNP-based gene regulatory networks.

    PubMed

    Guo, Liyuan; Wang, Jing

    2018-01-04

    Here, we present the updated rSNPBase 3.0 database (http://rsnp3.psych.ac.cn), which provides human SNP-related regulatory elements, element-gene pairs and SNP-based regulatory networks. This database is the updated version of the SNP regulatory annotation databases rSNPBase and rVarBase. In comparison to the last two versions, there are both structural and data adjustments in rSNPBase 3.0: (i) The most significant new feature is the expansion of the analysis scope from SNP-related regulatory elements to include regulatory element-target gene pairs (E-G pairs), so the database can provide SNP-based gene regulatory networks. (ii) The web functions were modified according to the data content, and a new network search module is provided in rSNPBase 3.0 in addition to the previous regulatory SNP (rSNP) search module. The two search modules support data queries for detailed information (related elements, element-gene pairs, and other extended annotations) on specific SNPs and for SNP-related graphic networks constructed by interacting transcription factors (TFs), miRNAs and genes. (iii) The types of regulatory elements were modified and enriched. To the best of our knowledge, the updated rSNPBase 3.0 is the first data tool that supports SNP functional analysis from a regulatory network perspective; it will provide both a comprehensive understanding and concrete guidance for SNP-related regulatory studies. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. An object-oriented, technology-adaptive information model

    NASA Technical Reports Server (NTRS)

    Anyiwo, Joshua C.

    1995-01-01

    The primary objective was to develop a computer information system for effectively presenting NASA's technologies to American industries, for appropriate commercialization. To this end a comprehensive information management model, applicable to a wide variety of situations, and immune to computer software/hardware technological gyrations, was developed. The model consists of four main elements: a DATA_STORE, a data PRODUCER/UPDATER_CLIENT and a data PRESENTATION_CLIENT, anchored to a central object-oriented SERVER engine. This server engine facilitates exchanges among the other model elements and safeguards the integrity of the DATA_STORE element. It is designed to support new technologies, as they become available, such as Object Linking and Embedding (OLE), on-demand audio-video data streaming with compression (such as is required for video conferencing), World Wide Web (WWW) and other information services and browsing, fax-back data requests, presentation of information on CD-ROM, and regular in-house database management, regardless of the data model in place. The four components of this information model interact through a system of intelligent message agents which are customized to specific information exchange needs. This model is at the leading edge of modern information management models. It is independent of technological changes and can be implemented in a variety of ways to meet the specific needs of any communications situation. This summer a partial implementation of the model has been achieved. The structure of the DATA_STORE has been fully specified and successfully tested using Microsoft's FoxPro 2.6 database management system. Data PRODUCER/UPDATER and PRESENTATION architectures have been developed and also successfully implemented in FoxPro, and work has started on a full implementation of the SERVER engine. The model has also been successfully applied to a CD-ROM presentation of NASA's technologies in support of Langley Research Center's TAG efforts.

  6. Toward a descriptive model of galactic cosmic rays in the heliosphere

    NASA Technical Reports Server (NTRS)

    Mewaldt, R. A.; Cummings, A. C.; Adams, James H., Jr.; Evenson, Paul; Fillius, W.; Jokipii, J. R.; Mckibben, R. B.; Robinson, Paul A., Jr.

    1988-01-01

    Researchers review the elements that enter into phenomenological models of the composition, energy spectra, and the spatial and temporal variations of galactic cosmic rays, including the so-called anomalous cosmic ray component. Starting from an existing model, designed to describe the behavior of cosmic rays in the near-Earth environment, researchers suggest possible updates and improvements to this model, and then propose a quantitative approach for extending such a model into other regions of the heliosphere.

  7. A Review and Reappraisal of Adaptive Human-Computer Interfaces in Complex Control Systems

    DTIC Science & Technology

    2006-08-01

    maneuverability measures. The cost elements were expressed as fuzzy membership functions. Figure 9 shows the flowchart of the route planner. A fuzzy navigator...and updating of the user model, which contains information about three generic stereotypes (beginner, intermediate and expert users) plus an

  8. Updates to the NASA Space Telecommunications Radio System (STRS) Architecture

    NASA Technical Reports Server (NTRS)

    Kacpura, Thomas J.; Handler, Louis M.; Briones, Janette; Hall, Charles S.

    2008-01-01

    This paper describes an update of the Space Telecommunications Radio System (STRS) open architecture for NASA space based radios. The STRS architecture has been defined as a framework for the design, development, operation and upgrade of space based software defined radios, where processing resources are constrained. The architecture has been updated based upon reviews by NASA missions, radio providers, and component vendors. The STRS Standard prescribes the architectural relationship between the software elements used in software execution and defines the Application Programmer Interface (API) between the operating environment and the waveform application. Modeling tools have been adopted to present the architecture. The paper will present a description of the updated API, configuration files, and constraints. Minimum compliance is discussed for early implementations. The paper then closes with a summary of the changes made and discussion of the relevant alignment with the Object Management Group (OMG) SWRadio specification, and enhancements to the specialized signal processing abstraction.

  9. Nonlinear finite element model updating for damage identification of civil structures using batch Bayesian estimation

    NASA Astrophysics Data System (ADS)

    Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.

    2017-02-01

    This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses the input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach the entire time histories of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of a non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method for jointly estimating the time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic structural FE models of a bridge pier and a moment-resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or poor initial estimates of the model parameters. Furthermore, the detrimental effects of input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
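    The uncertainty-quantification step described above reduces, for Gaussian white measurement noise, to assembling the Fisher information matrix from the FE response sensitivities and inverting it. The snippet below is a small numerical illustration; the Jacobian entries and noise level are placeholders, not quantities from the validation studies.

      import numpy as np

      # Rows of J are d(response_i)/d(theta_j), as produced by the direct differentiation method;
      # the numbers here and the noise level are illustrative placeholders only.
      J = np.array([[ 2.1, 0.4],
                    [ 1.7, 0.9],
                    [ 0.3, 1.5],
                    [-0.2, 1.1]])
      sigma_noise = 0.05                      # assumed measurement noise standard deviation

      fisher = J.T @ J / sigma_noise**2       # Fisher information for Gaussian white noise
      crlb_cov = np.linalg.inv(fisher)        # CRLB: lowest achievable estimator covariance
      print("lower bound on parameter standard deviations:", np.sqrt(np.diag(crlb_cov)))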

  10. rSNPBase 3.0: an updated database of SNP-related regulatory elements, element-gene pairs and SNP-based gene regulatory networks

    PubMed Central

    2018-01-01

    Abstract Here, we present the updated rSNPBase 3.0 database (http://rsnp3.psych.ac.cn), which provides human SNP-related regulatory elements, element-gene pairs and SNP-based regulatory networks. This database is the updated version of the SNP regulatory annotation databases rSNPBase and rVarBase. In comparison to the last two versions, there are both structural and data adjustments in rSNPBase 3.0: (i) The most significant new feature is the expansion of the analysis scope from SNP-related regulatory elements to include regulatory element–target gene pairs (E–G pairs), so the database can provide SNP-based gene regulatory networks. (ii) The web functions were modified according to the data content, and a new network search module is provided in rSNPBase 3.0 in addition to the previous regulatory SNP (rSNP) search module. The two search modules support data queries for detailed information (related elements, element-gene pairs, and other extended annotations) on specific SNPs and for SNP-related graphic networks constructed by interacting transcription factors (TFs), miRNAs and genes. (iii) The types of regulatory elements were modified and enriched. To the best of our knowledge, the updated rSNPBase 3.0 is the first data tool that supports SNP functional analysis from a regulatory network perspective; it will provide both a comprehensive understanding and concrete guidance for SNP-related regulatory studies. PMID:29140525

  11. Modeling and dynamic environment analysis technology for spacecraft

    NASA Astrophysics Data System (ADS)

    Fang, Ren; Zhaohong, Qin; Zhong, Zhang; Zhenhao, Liu; Kai, Yuan; Long, Wei

    Spacecraft sustain complex and severe vibration and acoustic environments during flight. Predicting the resulting structural responses, including numerical prediction of fluctuating pressure, model updating, and random vibration and acoustic analysis, plays an important role during the design, manufacture and ground testing of spacecraft. In this paper, Monotonically Integrated Large Eddy Simulation (MILES) is introduced to predict the fluctuating pressure on the fairing. The exact flow structures at the fairing wall surface under different Mach numbers are obtained, and a spacecraft model is then constructed using the finite element method (FEM). According to the modal test data, the model is updated by the penalty method. On this basis, the random vibration and acoustic responses of the fairing and satellite are analyzed by different methods. The simulated results agree well with the experimental ones, which shows the validity of the modeling and dynamic environment analysis technology. This information can better support test planning, the definition of test conditions and the design of optimal structures.

  12. On-line Bayesian model updating for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Rocchetta, Roberto; Broggi, Matteo; Huchet, Quentin; Patelli, Edoardo

    2018-03-01

    Fatigue-induced cracking is a dangerous failure mechanism which affects mechanical components subject to alternating load cycles. System health monitoring should be adopted to identify cracks which can jeopardise the structure. Real-time damage detection may fail to identify cracks due to different sources of uncertainty which have been poorly assessed or even fully neglected. In this paper, a novel, efficient and robust procedure is used for the detection of crack locations and lengths in mechanical components. A Bayesian model updating framework is employed, which allows relevant sources of uncertainty to be accounted for. The idea underpinning the approach is to identify the most probable crack consistent with the experimental measurements. To tackle the computational cost of the Bayesian approach, an emulator is adopted to replace the computationally costly finite element model. To improve the overall robustness of the procedure, different numerical likelihoods, measurement noises and imprecisions in the values of the model parameters are analysed and their effects quantified. The accuracy of the stochastic updating and the efficiency of the numerical procedure are discussed. An experimental aluminium frame and a numerical model of a typical car suspension arm are used to demonstrate the applicability of the approach.

  13. Identification of Historical Veziragasi Aqueduct Using the Operational Modal Analysis

    PubMed Central

    Ercan, E.; Nuhoglu, A.

    2014-01-01

    This paper describes the results of a model updating study conducted on a historical aqueduct, called Veziragasi, in Turkey. The output-only modal identification results obtained from ambient vibration measurements of the structure were used to update a finite element model of the structure. For the purposes of developing a solid model of the structure, the dimensions of the structure, defects, and material degradation in the structure were determined in detail by a measurement survey. For evaluation of the material properties of the structure, nondestructive and destructive testing methods were applied. The modal analysis of the structure was performed with the finite element model. Then, a nondestructive dynamic test as well as operational modal analysis was carried out and the dynamic properties were extracted. The natural frequencies and corresponding mode shapes were determined from both the theoretical and experimental modal analyses and compared with each other. Good agreement was attained between the mode shapes, but there were some differences between the natural frequencies. The sources of these differences were identified and the finite element model was updated by changing material parameters and boundary conditions. Finally, the refined analytical model of the aqueduct was put forward and the results were discussed. PMID:24511287

  14. Carbonatite and alkaline intrusion-related rare earth element deposits–A deposit model

    USGS Publications Warehouse

    Verplanck, Philip L.; Van Gosen, Bradley S.

    2011-01-01

    The rare earth elements are not as rare in nature as their name implies, but economic deposits with these elements are not common and few deposits have been large producers. In the past 25 years, demand for rare earth elements has increased dramatically because of their wide and diverse use in high-technology applications. Yet, presently the global production and supply of rare earth elements come from only a few sources. China produces more than 95 percent of the world's supply of rare earth elements. Because of China's decision to restrict exports of these elements, the price of rare earth elements has increased and industrial countries are concerned about supply shortages. As a result, understanding the distribution and origin of rare earth elements deposits, and identifying and quantifying our nation's rare earth elements resources have become priorities. Carbonatite and alkaline intrusive complexes, as well as their weathering products, are the primary sources of rare earth elements. The general mineral deposit model summarized here is part of an effort by the U.S. Geological Survey's Mineral Resources Program to update existing models and develop new descriptive mineral deposit models to supplement previously published models for use in mineral-resource and mineral-environmental assessments. Carbonatite and alkaline intrusion-related REE deposits are discussed together because of their spatial association, common enrichment in incompatible elements, and similarities in genesis. A wide variety of commodities have been exploited from carbonatites and alkaline igneous rocks, such as rare earth elements, niobium, phosphate, titanium, vermiculite, barite, fluorite, copper, calcite, and zirconium. Other enrichments include manganese, strontium, tantalum, thorium, vanadium, and uranium.

  15. Using experimental modal analysis to assess the behaviour of timber elements

    NASA Astrophysics Data System (ADS)

    Kouroussis, Georges; Fekih, Lassaad Ben; Descamps, Thierry

    2018-03-01

    Timber frameworks are one of the most important and widespread types of structures. Their configurations and joints are usually complex and require a high level of craftsmanship to assemble. In the field of restoration, a good understanding of the structural behaviour is necessary and is often based on assessment techniques dedicated to wood characterisation. This paper presents the use of experimental modal analysis for finite element updating. To do this, several timber beams in a freely supported condition were analysed in order to extract their bending natural characteristics (frequency, damping and mode shapes). Corresponding ABAQUS finite element models were derived which included the effects of local defects (holes, cracks and wood knots), moisture and structural decay. To achieve the modal updating, additional simulations were performed in order to study the sensitivity of the mechanical parameters. To estimate the mechanical properties, a modal updating procedure was carried out in MATLAB, with a Python script created to extract the modal information from the ABAQUS modal analysis results for comparison with the experimental results. The updating was based on the minimization of an unconstrained multivariable objective function using a derivative-free method. The objective function was selected from the conventional comparison tools (absolute or relative frequency difference, and/or the modal assurance criterion). This testing technique was used to determine the dynamic mechanical properties of the timber beams, such as the anisotropic Young's moduli and the damping ratio. To verify the moduli, a series of static 4-point bending tests and STS04 classifications were conducted. The results also revealed that local defects have a negligible influence on the natural frequencies. The results demonstrate that this assessment tool offers an effective method to obtain the mechanical properties of timber elements, especially when on-site and non-destructive techniques are needed, for example when retrofitting an existing structure.
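    An objective function of the kind described (relative frequency differences plus a modal assurance criterion term), minimized with a derivative-free method, can be sketched as follows. The frequency predictor standing in for the ABAQUS modal run, the measured values, and the single updating parameter are all illustrative assumptions.

      import numpy as np
      from scipy.optimize import minimize

      f_exp = np.array([48.2, 131.5, 257.9])      # measured bending frequencies (Hz), illustrative
      phi_exp = np.eye(3)                          # measured mode shapes at three points, illustrative

      def predicted_modes(params):
          """Stand-in for the ABAQUS modal run: returns (frequencies, mode shapes).
          A crude analytic surrogate in the longitudinal modulus E (GPa) is assumed here."""
          E = params[0]
          return f_exp * np.sqrt(E / 11.0), np.eye(3)   # frequencies scale with sqrt(stiffness)

      def mac(a, b):
          """Modal Assurance Criterion between two mode-shape vectors."""
          return (a @ b) ** 2 / ((a @ a) * (b @ b))

      def objective(params):
          f_num, phi_num = predicted_modes(params)
          freq_term = np.sum(((f_num - f_exp) / f_exp) ** 2)
          mac_term = sum(1.0 - mac(phi_num[:, i], phi_exp[:, i]) for i in range(3))
          return freq_term + mac_term

      # Derivative-free minimization over the unknown modulus, from a nominal starting value.
      result = minimize(objective, x0=[9.0], method="Nelder-Mead")
      print("updated modulus estimate (GPa):", result.x[0])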

  16. Finite element method for viscoelastic medium with damage and the application to structural analysis of solid rocket motor grain

    NASA Astrophysics Data System (ADS)

    Deng, Bin; Shen, ZhiBin; Duan, JingBo; Tang, GuoJin

    2014-05-01

    This paper studies the damage-viscoelastic behavior of composite solid propellants of solid rocket motors (SRM). Based on viscoelastic theories and the strain equivalence hypothesis of damage mechanics, a three-dimensional (3-D) nonlinear viscoelastic constitutive model incorporating damage is developed. The resulting viscoelastic constitutive equations are numerically discretized by an integration algorithm, and a stress-updating method is presented in which the nonlinear equations are solved by the Newton-Raphson method. A material subroutine implementing the stress update is developed and embedded into the commercial code Abaqus. The material subroutine is validated through typical examples. Our results indicate that the finite element results are in good agreement with the analytical ones and have high accuracy, and that the suggested method and the implemented subroutine are efficient and can be further applied to damage-coupled structural analysis of practical SRM grains.
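    The Newton-Raphson stress-update loop mentioned above can be reduced, for illustration, to a single scalar residual. The residual function, material constants, and tolerance below are hypothetical placeholders for the actual damage-coupled viscoelastic equations.

      # Schematic Newton-Raphson stress update for one integration point and one time step.
      # residual(sigma) = 0 stands for the discretized constitutive equation; the simple
      # cubic form used here is a placeholder, not the damage-coupled model from the paper.

      def newton_stress_update(sigma_trial, residual, d_residual, tol=1e-10, max_iter=25):
          sigma = sigma_trial
          for _ in range(max_iter):
              r = residual(sigma)
              if abs(r) < tol:
                  return sigma
              sigma -= r / d_residual(sigma)    # Newton correction
          raise RuntimeError("stress update did not converge")

      E, a, strain = 2000.0, 1e-4, 0.01
      residual = lambda s: s + a * s**3 - E * strain        # illustrative nonlinear relation
      d_residual = lambda s: 1.0 + 3.0 * a * s**2
      print("updated stress:", newton_stress_update(E * strain, residual, d_residual))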

  17. Fluids and Combustion Facility: Fluids Integrated Rack Modal Model Correlation

    NASA Technical Reports Server (NTRS)

    McNelis, Mark E.; Suarez, Vicente J.; Sullivan, Timothy L.; Otten, Kim D.; Akers, James C.

    2005-01-01

    The Fluids Integrated Rack (FIR) is one of two racks in the Fluids and Combustion Facility on the International Space Station. The FIR is dedicated to the scientific investigation of space system fluids management supporting NASA's Exploration of Space Initiative. The FIR hardware was modal tested and the FIR finite element model was updated to satisfy the International Space Station model correlation criteria. The final cross-orthogonality results between the correlated model and the test mode shapes were greater than 90 percent for all primary target modes.

  18. Metal-rich, Metal-poor: Updated Stellar Population Models for Old Stellar Systems

    NASA Astrophysics Data System (ADS)

    Conroy, Charlie; Villaume, Alexa; van Dokkum, Pieter G.; Lind, Karin

    2018-02-01

    We present updated stellar population models appropriate for old ages (>1 Gyr) and covering a wide range in metallicities (‑1.5 ≲ [Fe/H] ≲ 0.3). These models predict the full spectral variation associated with individual element abundance variation as a function of metallicity and age. The models span the optical–NIR wavelength range (0.37–2.4 μm), include a range of initial mass functions, and contain the flexibility to vary 18 individual elements including C, N, O, Mg, Si, Ca, Ti, and Fe. To test the fidelity of the models, we fit them to integrated light optical spectra of 41 Galactic globular clusters (GCs). The value of testing models against GCs is that their ages, metallicities, and detailed abundance patterns have been derived from the Hertzsprung–Russell diagram in combination with high-resolution spectroscopy of individual stars. We determine stellar population parameters from fits to all wavelengths simultaneously (“full spectrum fitting”), and demonstrate explicitly with mock tests that this approach produces smaller uncertainties at fixed signal-to-noise ratio than fitting a standard set of 14 line indices. Comparison of our integrated-light results to literature values reveals good agreement in metallicity, [Fe/H]. When restricting to GCs without prominent blue horizontal branch populations, we also find good agreement with literature values for ages, [Mg/Fe], [Si/Fe], and [Ti/Fe].

  19. System-Level Heat Transfer Analysis, Thermal-Mechanical Cyclic Stress Analysis, and Environmental Fatigue Modeling of a Two-Loop Pressurized Water Reactor. A Preliminary Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanty, Subhasish; Soppet, William; Majumdar, Saurin

    This report provides an update on an assessment of environmentally assisted fatigue for light water reactor components under extended service conditions. This report is a deliverable in April 2015 under the work package for environmentally assisted fatigue under DOE's Light Water Reactor Sustainability program. In this report, updates are discussed related to a system-level preliminary finite element model of a two-loop pressurized water reactor (PWR). Based on this model, system-level heat transfer analysis and subsequent thermal-mechanical stress analysis were performed for typical design-basis thermal-mechanical fatigue cycles. The in-air fatigue lives of components, such as the hot and cold legs, were estimated on the basis of the stress analysis results, ASME in-air fatigue life estimation criteria, and fatigue design curves. Furthermore, environmental correction factors and the associated PWR-environment fatigue lives for the hot and cold legs were estimated by using the estimated stress and strain histories and the approach described in NUREG-6909. The discussed models and results are preliminary, and further development of the model is required for more accurate life prediction of reactor components. This report only presents the work related to the finite element modelling activities; in the interim, multiple tensile and fatigue tests were also conducted, and the related experimental results will be presented in the year-end report.

  20. Updated atomic weights: Time to review our table

    USGS Publications Warehouse

    Coplen, Tyler B.; Meyers, Fabienne; Holden, Norman E.

    2016-01-01

    Despite common belief, atomic weights are not necessarily constants of nature. Scientists' ability to measure these values is regularly improving, so one would expect that the accuracy of these values should be improving with time. It is the task of the IUPAC (International Union of Pure and Applied Chemistry) Commission on Isotopic Abundances and Atomic Weights (CIAAW) to regularly review atomic-weight determinations and release updated values. According to an evaluation published in Pure and Applied Chemistry [1], even the most simplified table, abridged to four significant digits, needs to be updated for the elements selenium and molybdenum. According to the most recent 2015 release of "Atomic Weights of the Elements" [2], another update is needed for ytterbium.

  1. An improved design method of a tuned mass damper for an in-service footbridge

    NASA Astrophysics Data System (ADS)

    Shi, Weixing; Wang, Liangkun; Lu, Zheng

    2018-03-01

    Tuned mass dampers (TMDs) have a wide range of applications in the vibration control of footbridges. However, the traditional engineering design method may lead to a mistuned TMD. In this paper, an improved TMD design method based on model updating is proposed. First, the original finite element model (FEM) is studied and the natural characteristics of the in-service or newly built footbridge are identified by field tests, and the original FEM is then updated. The TMD is designed according to the updated FEM and optimized based on simulations of its vibration control effect. Finally, the installation and field measurement of the TMD are carried out. The improved design method can be applied to both in-service and newly built footbridges. This paper illustrates the improved design method with an engineering example. The frequency identification results of the field test and the original FEM show that there is a relatively large difference between them. The TMD designed according to the updated FEM has a better vibration control effect than the TMD designed according to the original FEM. The site test results show that the TMD is effective in controlling human-induced vibrations.
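    As a reference point for such a design step, the classical Den Hartog tuning rules give the TMD frequency and damping ratio for a chosen mass ratio once the controlled mode of the updated model is known. The sketch below uses purely illustrative footbridge numbers and does not reproduce the paper's own simulation-based optimization.

      import numpy as np

      def den_hartog_tmd(f_structure_hz, modal_mass_kg, mass_ratio=0.02):
          """Classical Den Hartog tuning for a TMD attached to a single structural mode.
          f_structure_hz should come from the *updated* FE model, not the original one."""
          mu = mass_ratio
          f_opt = f_structure_hz / (1.0 + mu)                     # optimal TMD frequency
          zeta_opt = np.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))  # optimal TMD damping ratio
          m_tmd = mu * modal_mass_kg
          k_tmd = m_tmd * (2.0 * np.pi * f_opt) ** 2
          c_tmd = 2.0 * zeta_opt * m_tmd * 2.0 * np.pi * f_opt
          return {"mass_kg": m_tmd, "stiffness_N_per_m": k_tmd, "damping_Ns_per_m": c_tmd,
                  "frequency_hz": f_opt, "damping_ratio": zeta_opt}

      # Illustrative numbers for a footbridge lateral mode (not taken from the paper).
      print(den_hartog_tmd(f_structure_hz=1.9, modal_mass_kg=45000.0, mass_ratio=0.02))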

  2. Local mesh adaptation technique for front tracking problems

    NASA Astrophysics Data System (ADS)

    Lock, N.; Jaeger, M.; Medale, M.; Occelli, R.

    1998-09-01

    A numerical model is developed for the simulation of moving interfaces in viscous incompressible flows. The model is based on the finite element method with a pseudo-concentration technique to track the front. Since a Eulerian approach is chosen, the interface is advected by the flow through a fixed mesh. Therefore, material discontinuity across the interface cannot be described accurately. To remedy this problem, the model has been supplemented with a local mesh adaptation technique. The latter consists of updating the mesh at each time step according to the interface position, such that element boundaries lie along the front. It has been implemented for unstructured triangular finite element meshes. The outcome of this technique is that it allows an accurate treatment of the material discontinuity across the interface and, if necessary, the modelling of interface phenomena such as surface tension by using specific boundary elements. For illustration, two examples are computed and presented in this paper: the broken dam problem and the Rayleigh-Taylor instability. Good agreement has been obtained in the comparison of the numerical results with theory or available experimental data.

  3. 3-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements and direct solvers parallelized on symmetric multiprocessor computers - Part II: direct data-space inverse solution

    NASA Astrophysics Data System (ADS)

    Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.

    2016-01-01

    Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ˜1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.
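    The data-space economy mentioned above can be illustrated with a toy Gauss-Newton step: the matrix that must be factored is only of size (number of data) by (number of data), not (number of model cells) squared. The covariance choices, regularization weight and dimensions below are illustrative placeholders, not HexMT's actual scheme.

      import numpy as np

      rng = np.random.default_rng(1)
      n_data, n_model = 200, 20000                     # far more model cells than data, as in 3-D MT
      J = rng.standard_normal((n_data, n_model)) * 1e-2    # Jacobian of responses w.r.t. model cells
      Cm = np.ones(n_model)                            # diagonal model covariance (illustrative)
      residual = rng.standard_normal(n_data) * 0.1     # observed minus predicted responses
      lam = 1.0                                        # regularization trade-off parameter

      # Data-space Gauss-Newton step: only an (n_data x n_data) system has to be factored.
      A = J @ (Cm[:, None] * J.T) + lam * np.eye(n_data)
      dm = (Cm[:, None] * J.T) @ np.linalg.solve(A, residual)
      print("model update vector shape:", dm.shape)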

  4. General Nonlinear Ferroelectric Model v. Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Wen; Robbins, Josh

    2017-03-14

    The purpose of this software is to function as a generalized ferroelectric material model. The material model is designed to work with existing finite element packages by providing updated information on material properties that are nonlinear and dependent on loading history. The two major nonlinear phenomena this model captures are domain-switching and phase transformation. The software itself does not contain potentially sensitive material information and instead provides a framework for different physical phenomena observed within ferroelectric materials. The model is calibrated to a specific ferroelectric material through input parameters provided by the user.

  5. An object-oriented framework for distributed hydrologic and geomorphic modeling using triangulated irregular networks

    NASA Astrophysics Data System (ADS)

    Tucker, Gregory E.; Lancaster, Stephen T.; Gasparini, Nicole M.; Bras, Rafael L.; Rybarczyk, Scott M.

    2001-10-01

    We describe a new set of data structures and algorithms for dynamic terrain modeling using a triangulated irregular network (TIN). The framework provides an efficient method for storing, accessing, and updating a Delaunay triangulation and its associated Voronoi diagram. The basic data structure consists of three interconnected data objects: triangles, nodes, and directed edges. Encapsulating each of these geometric elements within a data object makes it possible to essentially decouple the TIN representation from the modeling applications that make use of it. Both the triangulation and its corresponding Voronoi diagram can be rapidly retrieved or updated, making these methods well suited to adaptive remeshing schemes. We develop a set of algorithms for defining drainage networks and identifying closed depressions (e.g., lakes) for hydrologic and geomorphic modeling applications. We also outline simple numerical algorithms for solving network routing and 2D transport equations within the TIN framework. The methods are illustrated with two example applications, a landscape evolution model and a distributed rainfall-runoff model.
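    The three interconnected data objects described above can be sketched as follows; the field names and navigation pointers are illustrative, not the paper's actual class definitions.

      from dataclasses import dataclass, field
      from typing import List, Optional

      @dataclass
      class Node:
          x: float
          y: float
          z: float                                # elevation carried by the terrain model
          edge: Optional["Edge"] = None           # one outgoing directed edge (entry into the mesh)

      @dataclass
      class Edge:
          origin: Node                            # node this directed edge leaves from
          twin: Optional["Edge"] = None           # oppositely directed edge of the same segment
          next_ccw: Optional["Edge"] = None       # next edge counter-clockwise around the origin node
          triangle: Optional["Triangle"] = None   # triangle lying to the left of this edge

      @dataclass
      class Triangle:
          edges: List[Edge] = field(default_factory=list)   # the three directed edges bounding it

          def nodes(self):
              """Vertices of the triangle, recovered from its bounding edges."""
              return [e.origin for e in self.edges]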

  6. BSM Kaon Mixing at the Physical Point

    NASA Astrophysics Data System (ADS)

    Boyle, Peter; Garron, Nicolas; Kettle, Julia; Khamseh, Ava; Tsang, Justus Tobias

    2018-03-01

    We present a progress update on the RBC-UKQCD calculation of beyond the standard model (BSM) kaon mixing matrix elements at the physical point. Simulations are performed using 2+1 flavour domain wall lattice QCD with the Iwasaki gauge action at 3 lattice spacings and with pion masses ranging from 430 MeV to the physical pion mass.

  7. A combined experimental and finite element approach to analyse the fretting mechanism of the head-stem taper junction in total hip replacement.

    PubMed

    Bitter, Thom; Khan, Imran; Marriott, Tim; Lovelady, Elaine; Verdonschot, Nico; Janssen, Dennis

    2017-09-01

    Fretting corrosion at the taper interface of modular hip implants has been implicated as a possible cause of implant failure. This study was set up to gain more insight into the taper mechanics that lead to fretting corrosion. The objectives of this study therefore were (1) to select experimental loading conditions that reproduce clinically relevant fretting corrosion features observed in retrieved components, (2) to develop a finite element model consistent with the fretting experiments and (3) to apply more complicated loading conditions representative of activities of daily living to the finite element model to study the taper mechanics. The experiments showed wear patterns on the taper surface similar to those observed in retrievals. The finite element wear score based on Archard's law did not correlate well with the amount of material loss measured in the experiments. However, similar patterns were observed between the simulated micromotions and the experimental wear measurements. Although the finite element model could not be validated, the loading conditions based on activities of daily living demonstrate the importance of the assembly load on the wear potential. These findings suggest that finite element models that do not incorporate geometry updates to account for wear loss may not be appropriate for predicting wear volumes of taper connections.
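    A wear score of the kind mentioned is typically accumulated per contact node from contact pressure and incremental micromotion over a load cycle, following Archard's law (wear proportional to pressure times sliding distance). The wear coefficient and input histories below are illustrative, not values from the study.

      import numpy as np

      def archard_wear_score(pressures_mpa, micromotions_um, wear_coefficient=1e-4):
          """Accumulate an Archard-type wear score per contact node over one load cycle.
          Wear increment ~ k * pressure * |slip|, summed over all increments of the cycle.
          pressures_mpa, micromotions_um: arrays of shape (n_increments, n_nodes)."""
          slip_increments = np.abs(np.diff(micromotions_um, axis=0))
          mean_pressures = 0.5 * (pressures_mpa[1:] + pressures_mpa[:-1])
          return wear_coefficient * np.sum(mean_pressures * slip_increments, axis=0)

      # Illustrative cycle: 50 load increments, 4 taper contact nodes (numbers made up).
      rng = np.random.default_rng(0)
      p = 60.0 + 10.0 * rng.random((50, 4))                 # contact pressure history, MPa
      u = np.cumsum(0.2 * rng.random((50, 4)), axis=0)      # accumulated micromotion, micrometres
      print("wear score per node:", archard_wear_score(p, u))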

  8. Atomic weights of the elements 1999

    USGS Publications Warehouse

    Coplen, T.B.

    2001-01-01

    The biennial review of atomic-weight, Ar(E), determinations and other cognate data have resulted in changes for the standard atomic weights of the following elements: Presented are updated tables of the standard atomic weights and their uncertainties estimated by combining experimental uncertainties and terrestrial variabilities. In addition, this report again contains an updated table of relative atomic-mass values and half-lives of selected radioisotopes. Changes in the evaluated isotopic abundance values from those published in 1997 are so minor that an updated list will not be published for the year 1999. Many elements have a different isotopic composition in some nonterrestrial materials. Some recent data on parent nuclides that might affect isotopic abundances or atomic-weight values are included in this report for the information of the interested scientific community.

  9. Towards mechanism-based simulation of impact damage using exascale computing

    NASA Astrophysics Data System (ADS)

    Shterenlikht, Anton; Margetts, Lee; McDonald, Samuel; Bourne, Neil K.

    2017-01-01

    Over the past 60 years, the finite element method has been very successful in modelling deformation in engineering structures. However the method requires the definition of constitutive models that represent the response of the material to applied loads. There are two issues. Firstly, the models are often difficult to define. Secondly, there is often no physical connection between the models and the mechanisms that accommodate deformation. In this paper, we present a potentially disruptive two-level strategy which couples the finite element method at the macroscale with cellular automata at the mesoscale. The cellular automata are used to simulate mechanisms, such as crack propagation. The stress-strain relationship emerges as a continuum mechanics scale interpretation of changes at the micro- and meso-scales. Iterative two-way updating between the cellular automata and finite elements drives the simulation forward as the material undergoes progressive damage at high strain rates. The strategy is particularly attractive on large-scale computing platforms as both methods scale well on tens of thousands of CPUs.

  10. Temperature Dependent Modal Test/Analysis Correlation of X-34 Fastrac Composite Rocket Nozzle

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Brunty, Joseph A. (Technical Monitor)

    2001-01-01

    A unique high temperature modal test and model correlation/update program has been performed on the composite nozzle of the FASTRAC engine for the NASA X-34 Reusable Launch Vehicle. The program was required to provide an accurate high temperature model of the nozzle for incorporation into the engine system structural dynamics model for loads calculation; this model is significantly different from the ambient case due to the large decrease in composite stiffness properties due to heating. The high-temperature modal test was performed during a hot-fire test of the nozzle. Previously, a series of high fidelity modal tests and finite element model correlation of the nozzle in a free-free configuration had been performed. This model was then attached to a modal-test verified model of the engine hot-fire test stand and the ambient system mode shapes were identified. A reduced set of accelerometers was then attached to the nozzle, the engine fired full-duration, and the frequency peaks corresponding to the ambient nozzle modes individually isolated and tracked as they decreased during the test. To update the finite-element model of the nozzle to these frequency curves, the percentage differences of the anisotropic composite moduli due to temperature variation from ambient, which had been used in the initial modeling and which were obtained by small sample coupon testing, were multiplied by an iteratively determined constant factor. These new properties were used to create high-temperature nozzle models corresponding to 10 second engine operation increments and tied into the engine system model for loads determination.

  11. Nonlinear transient analysis by energy minimization: A theoretical basis for the ACTION computer code. [predicting the response of a lightweight aircraft during a crash

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1980-01-01

    The formulation basis for establishing the static or dynamic equilibrium configurations of finite element models of structures which may behave in the nonlinear range is provided. With both geometric and time-independent material nonlinearities included, the development is restricted to simple one- and two-dimensional finite elements which are regarded as the basic elements for modeling full aircraft-like structures under crash conditions. Representations of a rigid link and an impenetrable contact plane are added to the deformation model so that any number of nodes of the finite element model may be connected by a rigid link or may contact the plane. Equilibrium configurations are derived as the stationary conditions of a potential function of the generalized nodal variables of the model. Minimization of the nonlinear potential function is achieved by using the best current variable metric update formula for unconstrained minimization. Powell's conjugate gradient algorithm, which offers very low storage requirements at some slight increase in the total number of calculations, is an alternative algorithm for extremely large scale problems.

  12. Neural adaptive control for vibration suppression in composite fin-tip of aircraft.

    PubMed

    Suresh, S; Kannan, N; Sundararajan, N; Saratchandran, P

    2008-06-01

    In this paper, we present a neural adaptive control scheme for active vibration suppression of a composite aircraft fin tip. The mathematical model of the composite aircraft fin tip is derived using the finite element approach. The finite element model is updated experimentally to reflect the natural frequencies and mode shapes very accurately. Piezoelectric actuators and sensors are placed at optimal locations such that the vibration suppression is a maximum. A model-reference direct adaptive neural network control scheme is proposed to force the vibration level within the minimum acceptable limit. In this scheme, a Gaussian neural network with linear filters is used to approximate the inverse dynamics of the system, and the parameters of the neural controller are estimated using a Lyapunov-based update law. In order to reduce the computational burden, which is critical for real-time applications, the number of hidden neurons is also estimated in the proposed scheme. The global asymptotic stability of the overall system is ensured using the principles of the Lyapunov approach. Simulation studies are carried out using sinusoidal force functions of varying frequency. Experimental results show that the proposed neural adaptive control scheme is capable of providing significant vibration suppression in the multiple bending modes of interest. The performance of the proposed scheme is better than that of the H(infinity) control scheme.

  13. Vaporization and Zonal Mixing in Performance Modeling of Advanced LOX-Methane Rockets

    NASA Technical Reports Server (NTRS)

    Williams, George J., Jr.; Stiegemeier, Benjamin R.

    2013-01-01

    Initial modeling of LOX-Methane reaction control engine (RCE) 100 lbf thrusters and larger, 5500 lbf thrusters with the TDK/VIPER code has shown good agreement with sea-level and altitude test data. However, the vaporization and zonal mixing upstream of the compressible flow stage of the models leveraged empirical trends to match the sea-level data. This was necessary in part because the codes are designed primarily to handle the compressible part of the flow (i.e., contraction through expansion) and in part because there was limited data on the thrusters themselves on which to base a rigorous model. A more rigorous model has been developed which includes detailed vaporization trends based on element type and geometry, radial variations in mixture ratio within each of the "zones" associated with elements and not just between zones of different element types, and, to the extent possible, updated kinetic rates. The Spray Combustion Analysis Program (SCAP) was leveraged to support assumptions in the vaporization trends. Data from both thrusters are revisited, and the model maintains a good predictive capability while addressing some of the major limitations of the previous version.

  14. Graphical user interface for intraoperative neuroimage updating

    NASA Astrophysics Data System (ADS)

    Rick, Kyle R.; Hartov, Alex; Roberts, David W.; Lunn, Karen E.; Sun, Hai; Paulsen, Keith D.

    2003-05-01

    Image-guided neurosurgery typically relies on preoperative imaging information that is subject to errors resulting from brain shift and deformation in the OR. A graphical user interface (GUI) has been developed to facilitate the flow of data from OR to image volume in order to provide the neurosurgeon with updated views concurrent with surgery. Upon acquisition of registration data for patient position in the OR (using fiducial markers), the Matlab GUI displays ultrasound image overlays on patient specific, preoperative MR images. Registration matrices are also applied to patient-specific anatomical models used for image updating. After displaying the re-oriented brain model in OR coordinates and digitizing the edge of the craniotomy, gravitational sagging of the brain is simulated using the finite element method. Based on this model, interpolation to the resolution of the preoperative images is performed and re-displayed to the surgeon during the procedure. These steps were completed within reasonable time limits and the interface was relatively easy to use after a brief training period. The techniques described have been developed and used retrospectively prior to this study. Based on the work described here, these steps can now be accomplished in the operating room and provide near real-time feedback to the surgeon.

  15. Ground Vibration Test of the Aerostructure Test Wing 2

    NASA Technical Reports Server (NTRS)

    Herrera, Claudia; Moholt, Matthew

    2009-01-01

    The Aerostructures Test Wing (ATW) was developed to test unique concepts for flutter prediction and control synthesis. A follow-on to the successful ATW, denoted ATW2, was fabricated as a test bed to validate a variety of instrumentation in flight and to collect data for the development of advanced signal processing algorithms for flutter prediction and aviation safety. As a means to estimate flutter speed, a ground vibration test (GVT) was performed. The results of a GVT are typically utilized to update the structural dynamics finite element (FE) models used for flutter analysis. In this study, two GVT methodologies were explored to determine which nodes provide the best sensor locations: (i) effective independence and (ii) kinetic energy sorting algorithms. For measurement, ten and twenty sensors were used for three and ten target test modes. A total of six accelerometer configurations measured frequencies and mode shapes, including the locations used in the original ATW GVT. Moreover, an optical measurement system was used to acquire data without the mass effects added by conventional sensors. A considerable frequency shift was observed in comparing the data from the accelerometers to the optical data. The optical measurements provided robust data for use in the ATW2 finite element model update.

  16. What Information Does Your EHR Contain? Automatic Generation of a Clinical Metadata Warehouse (CMDW) to Support Identification and Data Access Within Distributed Clinical Research Networks.

    PubMed

    Bruland, Philipp; Doods, Justin; Storck, Michael; Dugas, Martin

    2017-01-01

    Data dictionaries provide structural meta-information about data definitions in health information technology (HIT) systems. In this regard, reusing healthcare data for secondary purposes offers several advantages (e.g. reduced documentation times or increased data quality). Prerequisites for data reuse are the quality, availability, and identical meaning of the data. In diverse projects, research data warehouses serve as core components between heterogeneous clinical databases and various research applications. Given the complexity (high number of data elements) and dynamics (regular updates) of electronic health record (EHR) data structures, we propose a clinical metadata warehouse (CMDW) based on a metadata registry standard. Metadata of two large hospitals were automatically inserted into two CMDWs containing 16,230 forms and 310,519 data elements. Automatic updates of metadata are possible, as are semantic annotations. A CMDW allows metadata discovery, data quality assessment and similarity analyses. Common data models for distributed research networks can be established based on similarity analyses.

  17. An advanced constitutive model in the sheet metal forming simulation: the Teodosiu microstructural model and the Cazacu Barlat yield criterion

    NASA Astrophysics Data System (ADS)

    Alves, J. L.; Oliveira, M. C.; Menezes, L. F.

    2004-06-01

    Two constitutive models used to describe the plastic behavior of sheet metals in the numerical simulation of sheet metal forming processes are studied: a recently proposed advanced constitutive model based on the Teodosiu microstructural model and the Cazacu Barlat yield criterion is compared with a more classical one, based on the Swift law and the Hill 1948 yield criterion. These constitutive models are implemented into DD3IMP, a finite element home code specifically developed to simulate sheet metal forming processes, which is, generically, a 3-D elastoplastic finite element code with an updated Lagrangian formulation and a fully implicit time integration scheme, accounting for large elastoplastic strains and rotations. Solid finite elements and parametric surfaces are used to model the blank sheet and tool surfaces, respectively. Some details of the numerical implementation of the constitutive models are given. Finally, the theory is illustrated with the numerical simulation of the deep drawing of a cylindrical cup. The results show that the proposed advanced constitutive model predicts the final shape (mean height and ears profile) of the formed part more accurately, as one can conclude from the comparison with the experimental results.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Chanyoung; Kim, Nam H.

    Structural elements, such as stiffened panels and lap joints, are basic components of aircraft structures. For aircraft structural design, designers select predesigned elements satisfying the design load requirement based on their load-carrying capabilities. Therefore, estimating the safety envelope of structural elements for load tolerances would be a good investment for design purposes. In this article, a method of estimating the safety envelope is presented using probabilistic classification, which can estimate a specific level of failure probability under both aleatory and epistemic uncertainties. An important contribution of this article is that the calculation uncertainty is reflected in building a safety envelope using a Gaussian process, and the effect of element test data on reducing the calculation uncertainty is incorporated by updating the Gaussian process model with the element test data. It is shown that even one element test can significantly reduce the calculation uncertainty due to lack of knowledge of the actual physics, so that conservativeness in the safety envelope is significantly reduced. The proposed approach was demonstrated with a cantilever beam example, which represents a structural element. The example shows that the calculation uncertainty provides about 93% conservativeness against the uncertainty due to a few element tests. As a result, it is shown that even a single element test can increase the load tolerance modeled with the safety envelope by 20%.

  19. WebbPSF: Updated PSF Models Based on JWST Ground Testing Results

    NASA Astrophysics Data System (ADS)

    Osborne, Shannon; Perrin, Marshall D.; Melendez Hernandez, Marcio

    2018-06-01

    WebbPSF is a widely used package that allows astronomers to create simulated point spread functions (PSFs) for the James Webb Space Telescope (JWST). WebbPSF provides the user with the flexibility to produce PSFs for direct imaging and coronagraphic modes, for a range of filters and masks, and across all the JWST instruments. These PSFs can then be analyzed with built-in evaluation tools or can be output for use with users' own tools. In the most recent round of updates, the accuracy of the PSFs has been improved with updated analyses of the instrument test data from NASA Goddard and with the new data from the testing of the combined Optical Telescope Element and Integrated Science Instrument Module (OTIS) at NASA Johnson. A post-processing function applying detector effects and pupil distortions to input PSFs has also been added to the WebbPSF package.
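    For orientation, the typical top-level WebbPSF workflow is roughly as sketched below; the filter, wavelength sampling, and field of view chosen here are arbitrary examples, and keyword names should be checked against the current WebbPSF documentation.

      # Rough sketch of the typical WebbPSF workflow; keyword names and defaults should be
      # verified against the current WebbPSF documentation, and the filter choice is arbitrary.
      import webbpsf

      nircam = webbpsf.NIRCam()            # instrument object; other instruments work similarly
      nircam.filter = "F200W"              # example filter
      psf = nircam.calc_psf(nlambda=5, fov_arcsec=3.0)   # FITS HDUList containing the simulated PSF
      psf.writeto("nircam_f200w_psf.fits", overwrite=True)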

  20. Age differences in working memory updating: the role of interference, focus switching and substituting information.

    PubMed

    Lendínez, Cristina; Pelegrina, Santiago; Lechuga, M Teresa

    2015-05-01

    Working memory updating (WMU) tasks require several elements to be maintained simultaneously in working memory (WM), one of these elements to be accessed, and its content to be substituted. This study examined possible developmental changes from childhood to adulthood both in focus switching and substituting information in WM. In addition, possible age-related changes in interference due to representational overlap between the different elements simultaneously held in these tasks were examined. Children (8- and 11-year-olds), adolescents (14-year-olds) and younger adults (mean age=22 years) were administered a numerical updating memory task, in which updating and focus switching were manipulated. As expected, response times decreased and recall performance increased with age. More importantly, the time needed for focus switching was longer in children than in adolescents and younger adults. On the other hand, substitution of information and interference due to representational overlap were not affected by age. These results suggest that age-related changes in focus switching might mediate developmental changes in WMU performance. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Dynamics of a 4x6-Meter Thin Film Elliptical Inflated Membrane for Space Applications

    NASA Technical Reports Server (NTRS)

    Casiano, Matthew J.; Hamidzadeh, Hamid R.; Tinker, Michael L.; McConnaughey, Paul R. (Technical Monitor)

    2002-01-01

    Dynamic characterization of a thin film inflatable elliptical structure is described in detail. A two-step finite element modeling approach in MSC/NASTRAN is utilized, consisting of (1) a nonlinear static pressurization procedure used to obtain the updated stiffness matrix, and (2) a modal "restart" eigensolution that uses the modified stiffness matrix. Unique problems encountered in modeling this large, lightweight 4 x 6-meter inflatable are identified, including considerable difficulty in obtaining convergence in the nonlinear finite element pressurization solution. It was found that the extremely thin polyimide film material (0.001 in, or 1 mil) presents tremendous problems in obtaining a converged solution when internal pressure loading is applied. Approaches utilized to overcome these difficulties are described. Comparison of finite element predictions for frequencies and mode shapes of the inflated structure with closed-form solutions for a flat pre-tensioned membrane indicates reasonable agreement.
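
    The abstract does not quote the closed-form expression used for the flat pre-tensioned membrane comparison; one common benchmark, stated here as an assumption, is the rectangular membrane of dimensions a x b under uniform tension T (force per unit length) with areal density rho_s:

```latex
f_{mn} \;=\; \frac{1}{2}\sqrt{\frac{T}{\rho_s}\left(\frac{m^{2}}{a^{2}}+\frac{n^{2}}{b^{2}}\right)},
\qquad m,n = 1,2,\dots
```

    For an elliptical planform the exact eigenfunctions involve Mathieu functions, so rectangular (or circular) membrane formulas of this kind serve only as approximate checks of the finite element predictions.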

  2. Space Vehicle Terrestrial Environment Design Requirements Guidelines

    NASA Technical Reports Server (NTRS)

    Johnson, Dale L.; Keller, Vernon W.; Vaughan, William W.

    2006-01-01

    The terrestrial environment is an important driver of space vehicle structural, control, and thermal system design. NASA is currently in the process of producing an update to an earlier Terrestrial Environment Guidelines for Aerospace Vehicle Design and Development Handbook. This paper addresses the contents of this updated handbook, with special emphasis on new material being included in the areas of atmospheric thermodynamic models, wind dynamics, atmospheric composition, atmospheric electricity, cloud phenomena, atmospheric extremes, and sea state. In addition, the respective engineering design elements are discussed relative to terrestrial environment inputs that require consideration. Specific lessons learned that have contributed to the advancements made in the application and awareness of terrestrial environment inputs for aerospace engineering applications are presented.

  3. Probabilistic Methods for Structural Reliability and Risk

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

    A probabilistic method is used to evaluate the structural reliability and risk of select metallic and composite structures. The method is multiscale and multifunctional, and it is based at the most elemental level. A multifactor interaction model is used to describe the material properties, which are subsequently evaluated probabilistically. The metallic structure is a two-rotor aircraft engine, while the composite structures consist of laminated plies (multiscale) and the properties of each ply are the multifunctional representation. The structural component is modeled with finite elements. The solution method for structural responses is obtained by an updated simulation scheme. The results show that the risk for the two-rotor engine is about 0.0001 and that for the composite built-up structure is also about 0.0001.

  4. Probabilistic Methods for Structural Reliability and Risk

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2008-01-01

    A probabilistic method is used to evaluate the structural reliability and risk of select metallic and composite structures. The method is multiscale and multifunctional, and it is based at the most elemental level. A multi-factor interaction model is used to describe the material properties, which are subsequently evaluated probabilistically. The metallic structure is a two-rotor aircraft engine, while the composite structures consist of laminated plies (multiscale) and the properties of each ply are the multifunctional representation. The structural component is modeled with finite elements. The solution method for structural responses is obtained by an updated simulation scheme. The results show that the risk for the two-rotor engine is about 0.0001 and that for the composite built-up structure is also about 0.0001.

  5. A Bottom-Up Geospatial Data Update Mechanism for Spatial Data Infrastructure Updating

    NASA Astrophysics Data System (ADS)

    Tian, W.; Zhu, X.; Liu, Y.

    2012-08-01

    Currently, the top-down spatial data update mechanism has made great progress and is widely applied in many SDIs (spatial data infrastructures). However, this mechanism still has some issues. For example, the update schedule is tied to the professional department's project cycle, which is usually too long for the end user; moving data from collection to publication costs the professional department much time and effort; and the geospatial information often does not provide sufficiently detailed attributes. Addressing these problems has therefore become an effective shortcut. Emerging Internet technology, 3S techniques, and the geographic information knowledge now widespread among the public are driving the rapid development of volunteered geospatial information (VGI) in the geosciences. VGI is a current "hotspot" that attracts many researchers to study its data quality, credibility, accuracy, sustainability, social benefit, applications, and so on. In addition, a few scholars have paid attention to the value of VGI for supporting SDI updating. On that basis, this paper presents a bottom-up update mechanism from VGI to SDI, which includes matching homonymous elements between VGI and SDI vector data, change detection, SDI spatial database updating, and publication of new data products to end users. The proposed updating cycle is then discussed in depth with respect to its feasibility: it can detect changed elements in a timely manner and shorten the update period, provide more accurate geometry and attribute data for the spatial data infrastructure, and support update propagation.

  6. Report of the IAU Working Group on cartographic coordinates and rotational elements: 2009

    USGS Publications Warehouse

    Archinal, B.A.; A'Hearn, M.F.; Bowell, E.; Conrad, A.; Consolmagno, G.J.; Courtin, R.; Fukushima, T.; Hestroffer, D.; Hilton, J.L.; Krasinsky, G.A.; Neumann, G.; Oberst, J.; Seidelmann, P.K.; Stooke, P.; Tholen, D.J.; Thomas, P.C.; Williams, I.P.

    2010-01-01

    Every three years the IAU Working Group on Cartographic Coordinates and Rotational Elements revises tables giving the directions of the poles of rotation and the prime meridians of the planets, satellites, minor planets, and comets. This report takes into account the IAU Working Group for Planetary System Nomenclature (WGPSN) and the IAU Committee on Small Body Nomenclature (CSBN) definition of dwarf planets, introduces improved values for the pole and rotation rate of Mercury, returns the rotation rate of Jupiter to a previous value, introduces improved values for the rotation of five satellites of Saturn, and adds the equatorial radius of the Sun for comparison. It also adds or updates size and shape information for the Earth, Mars’ satellites Deimos and Phobos, the four Galilean satellites of Jupiter, and 22 satellites of Saturn. Pole, rotation, and size information has been added for the asteroids (21) Lutetia, (511) Davida, and (2867) Šteins. Pole and rotation information has been added for (2) Pallas and (21) Lutetia. Pole and rotation and mean radius information has been added for (1) Ceres. Pole information has been updated for (4) Vesta. The high precision realization for the pole and rotation rate of the Moon is updated. Alternative orientation models for Mars, Jupiter, and Saturn are noted. The Working Group also reaffirms that once an observable feature at a defined longitude is chosen, a longitude definition origin should not change except under unusual circumstances. It is also noted that alternative coordinate systems may exist for various (e.g. dynamical) purposes, but specific cartographic coordinate system information continues to be recommended for each body. The Working Group elaborates on its purpose, and also announces its plans to occasionally provide limited updates to its recommendations via its website, in order to address community needs for some updates more often than every 3 years. Brief recommendations are also made to the general planetary community regarding the need for controlled products, and improved or consensus rotation models for Mars, Jupiter, and Saturn.

  7. Report of the IAU Working Group on cartographic coordinates and rotational elements: 2009

    USGS Publications Warehouse

    Archinal, Brent A.; A’Hearn, Michael F.; Bowell, Edward; Conrad, Al; Consolmagno, Guy J.; Courtin, Regis; Fukushima, Toshio; Hestroffer, Daniel; Hilton, James L.; Krasinsky, Georgij A.; Neumann, Gregory; Oberst, Jurgen; Seidelmann, P. Kenneth; Stooke, Philip; Tholen, David J.; Thomas, Peter C.; Williams, Iwan P.

    2010-01-01

    Every three years the IAU Working Group on Cartographic Coordinates and Rotational Elements revises tables giving the directions of the poles of rotation and the prime meridians of the planets, satellites, minor planets, and comets. This report takes into account the IAU Working Group for Planetary System Nomenclature (WGPSN) and the IAU Committee on Small Body Nomenclature (CSBN) definition of dwarf planets, introduces improved values for the pole and rotation rate of Mercury, returns the rotation rate of Jupiter to a previous value, introduces improved values for the rotation of five satellites of Saturn, and adds the equatorial radius of the Sun for comparison. It also adds or updates size and shape information for the Earth, Mars’ satellites Deimos and Phobos, the four Galilean satellites of Jupiter, and 22 satellites of Saturn. Pole, rotation, and size information has been added for the asteroids (21) Lutetia, (511) Davida, and (2867) Šteins. Pole and rotation information has been added for (2) Pallas and (21) Lutetia. Pole and rotation and mean radius information has been added for (1) Ceres. Pole information has been updated for (4) Vesta. The high precision realization for the pole and rotation rate of the Moon is updated. Alternative orientation models for Mars, Jupiter, and Saturn are noted. The Working Group also reaffirms that once an observable feature at a defined longitude is chosen, a longitude definition origin should not change except under unusual circumstances. It is also noted that alternative coordinate systems may exist for various (e.g. dynamical) purposes, but specific cartographic coordinate system information continues to be recommended for each body. The Working Group elaborates on its purpose, and also announces its plans to occasionally provide limited updates to its recommendations via its website, in order to address community needs for some updates more often than every 3 years. Brief recommendations are also made to the general planetary community regarding the need for controlled products, and improved or consensus rotation models for Mars, Jupiter, and Saturn.

  8. Finite Element Modeling of the NASA Langley Aluminum Testbed Cylinder

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.; Pritchard, Joselyn I.; Buehrle, Ralph D.; Pappa, Richard S.

    2002-01-01

    The NASA Langley Aluminum Testbed Cylinder (ATC) was designed to serve as a universal structure for evaluating structural acoustic codes, modeling techniques and optimization methods used in the prediction of aircraft interior noise. Finite element models were developed for the components of the ATC based on the geometric, structural and material properties of the physical test structure. Numerically predicted modal frequencies for the longitudinal stringer, ring frame and dome component models, and six assembled ATC configurations were compared with experimental modal survey data. The finite element models were updated and refined, using physical parameters, to increase correlation with the measured modal data. Excellent agreement, within an average of 1.5% to 2.9%, was obtained between the predicted and measured modal frequencies of the stringer, frame and dome components. The predictions for the modal frequencies of the assembled component Configurations I through V were within an average of 2.9% to 9.1%. Finite element modal analyses were performed for comparison with the 3 psi and 6 psi internal pressurization conditions in Configuration VI. The modal frequencies were predicted by applying differential stiffness to the elements with pressure loading and creating reduced matrices for beam elements with offsets inside external superelements. The average disagreement between the measured and predicted differences for the 0 psi and 6 psi internal pressure conditions was less than 0.5%. Comparably good agreement was obtained for the differences between the 0 psi and 3 psi measured and predicted internal pressure conditions.

  9. Uncertainty quantification and propagation in dynamic models using ambient vibration measurements, application to a 10-story building

    NASA Astrophysics Data System (ADS)

    Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas

    2018-07-01

    This paper investigates the application of Hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of the error functions represent the uncertainty of the calibrated model in predicting the building's response (modal parameters here). The focus of this paper is to answer whether the model uncertainties quantified using dynamic measurements at the building's reference/calibration state can be used to improve the model prediction accuracy at a different structural state, e.g., for the damaged structure. The effects of prediction error bias on the uncertainty of the predicted values are also studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate parameters of the initial FE model as well as the error functions. Before demolishing the building, six of its exterior walls were removed, and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as at two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests. Moreover, it is shown that including prediction error bias in the updating process, instead of the commonly used zero-mean error function, can significantly reduce the prediction uncertainties.
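
    As a rough illustration of the calibration step described above (a deterministic sketch, not the paper's Hierarchical Bayesian formulation), the code below updates the story stiffnesses of a hypothetical two-degree-of-freedom shear-building model so that its natural frequencies match a set of assumed "identified" frequencies; the masses, initial stiffnesses, and target frequencies are all placeholders.

```python
# Deterministic FE model updating sketch: fit stiffness parameters to
# identified natural frequencies (all numerical values are hypothetical).
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import least_squares

m = np.diag([2.0e5, 2.0e5])                  # assumed story masses [kg]
f_identified = np.array([1.8, 4.6])          # assumed identified frequencies [Hz]

def model_frequencies(k):
    k1, k2 = k
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])            # shear-building stiffness matrix
    lam = eigh(K, m, eigvals_only=True)       # generalized eigenvalues [(rad/s)^2]
    return np.sqrt(lam) / (2.0 * np.pi)

def residual(k):
    return (model_frequencies(k) - f_identified) / f_identified

k0 = np.array([5.0e7, 5.0e7])                 # initial stiffness guess [N/m]
sol = least_squares(residual, k0, bounds=(1.0e6, 1.0e9))
print("updated stiffnesses [N/m]:", sol.x)
print("updated frequencies [Hz]:", model_frequencies(sol.x))
```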

  10. Safety envelope for load tolerance of structural element design based on multi-stage testing

    DOE PAGES

    Park, Chanyoung; Kim, Nam H.

    2016-09-06

    Structural elements, such as stiffened panels and lap joints, are basic components of aircraft structures. For aircraft structural design, designers select predesigned elements satisfying the design load requirement based on their load-carrying capabilities. Therefore, estimation of the safety envelope of structural elements for load tolerances would be a good investment for design purposes. In this article, a method of estimating the safety envelope is presented using probabilistic classification, which can estimate a specific level of failure probability under both aleatory and epistemic uncertainties. An important contribution of this article is that the calculation uncertainty is reflected in building a safety envelope using a Gaussian process, and the effect of element test data on reducing the calculation uncertainty is incorporated by updating the Gaussian process model with the element test data. It is shown that even one element test can significantly reduce the calculation uncertainty due to a lack of knowledge of the actual physics, so that conservativeness in the safety envelope is significantly reduced. The proposed approach was demonstrated with a cantilever beam example, which represents a structural element. The example shows that calculation uncertainty provides about 93% conservativeness against the uncertainty due to a few element tests. As a result, it is shown that even a single element test can increase the load tolerance modeled with the safety envelope by 20%.
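
    The sketch below illustrates the Gaussian-process-update idea in the abstract under simplified assumptions: a GP trained on analysis predictions of capacity is refit with one additional (hypothetical) element test observation, which narrows the predictive uncertainty. The data, kernel settings, and load levels are illustrative, not values from the article.

```python
# Gaussian process update sketch: one extra "element test" point reduces
# predictive uncertainty (all data below are synthetic placeholders).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
loads = np.linspace(0.0, 1.0, 8).reshape(-1, 1)                          # normalized loads
capacity_calc = 1.2 - 0.6 * loads.ravel() + 0.05 * rng.normal(size=8)   # analysis runs

kernel = RBF(length_scale=0.3) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(loads, capacity_calc)
_, std_before = gp.predict([[0.5]], return_std=True)

# Add a single hypothetical element test result at the same load level and refit.
loads_aug = np.vstack([loads, [[0.5]]])
capacity_aug = np.append(capacity_calc, 0.95)
gp.fit(loads_aug, capacity_aug)
_, std_after = gp.predict([[0.5]], return_std=True)
print("predictive std before / after test:", std_before[0], std_after[0])
```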

  11. 3D digital image correlation methods for full-field vibration measurement

    NASA Astrophysics Data System (ADS)

    Helfrick, Mark N.; Niezrecki, Christopher; Avitabile, Peter; Schmidt, Timothy

    2011-04-01

    In the area of modal test/analysis/correlation, significant effort has been expended over the past twenty years in order to make reduced models and to expand test data for correlation and eventual updating of the finite element models. This has been restricted by vibration measurements which are traditionally limited to the location of relatively few applied sensors. Advances in computers and digital imaging technology have allowed 3D digital image correlation (DIC) methods to measure the shape and deformation of a vibrating structure. This technique allows for full-field measurement of structural response, thus providing a wealth of simultaneous test data. This paper presents some preliminary results for the test/analysis/correlation of data measured using the DIC approach along with traditional accelerometers and a scanning laser vibrometer for comparison to a finite element model. The results indicate that all three approaches correlated well with the finite element model and provide validation for the DIC approach for full-field vibration measurement. Some of the advantages and limitations of the technique are presented and discussed.

  12. Aeroservoelastic Uncertainty Model Identification from Flight Data

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.

    2001-01-01

    Uncertainty modeling is a critical element in the estimation of robust stability margins for stability boundary prediction and robust flight control system development. There has been a serious deficiency to date in aeroservoelastic data analysis with attention to uncertainty modeling. Uncertainty can be estimated from flight data using both parametric and nonparametric identification techniques. The model validation problem addressed in this paper is to identify aeroservoelastic models with associated uncertainty structures from a limited amount of controlled excitation inputs over an extensive flight envelope. The challenge to this problem is to update analytical models from flight data estimates while also deriving non-conservative uncertainty descriptions consistent with the flight data. Multisine control surface command inputs and control system feedbacks are used as signals in a wavelet-based modal parameter estimation procedure for model updates. Transfer function estimates are incorporated in a robust minimax estimation scheme to get input-output parameters and error bounds consistent with the data and model structure. Uncertainty estimates derived from the data in this manner provide an appropriate and relevant representation for model development and robust stability analysis. This model-plus-uncertainty identification procedure is applied to aeroservoelastic flight data from the NASA Dryden Flight Research Center F-18 Systems Research Aircraft.

  13. Updated Reference Model for Heat Generation in the Lithosphere

    NASA Astrophysics Data System (ADS)

    Wipperfurth, S. A.; Sramek, O.; Roskovec, B.; Mantovani, F.; McDonough, W. F.

    2017-12-01

    Models integrating geophysics and geochemistry allow for characterization of the Earth's heat budget and geochemical evolution. Global lithospheric geophysical models are now constrained by surface and body wave data and are classified into several unique tectonic types. Global lithospheric geochemical models have evolved from petrological characterization of layers to a combination of petrologic and seismic constraints. Because of these advances regarding our knowledge of the lithosphere, it is necessary to create an updated chemical and physical reference model. We are developing a global lithospheric reference model based on LITHO1.0 (segmented into 1°lon x 1°lat x 9-layers) and seismological-geochemical relationships. Uncertainty assignments and correlations are assessed for its physical attributes, including layer thickness, Vp and Vs, and density. This approach yields uncertainties for the masses of the crust and lithospheric mantle. Heat producing element abundances (HPE: U, Th, and K) are ascribed to each volume element. These chemical attributes are based upon the composition of subducting sediment (sediment layers), composition of surface rocks (upper crust), a combination of petrologic and seismic correlations (middle and lower crust), and a compilation of xenolith data (lithospheric mantle). The HPE abundances are correlated within each voxel, but not vertically between layers. Efforts to provide correlation of abundances horizontally between each voxel are discussed. These models are used further to critically evaluate the bulk lithosphere heat production in the continents and the oceans. Cross-checks between our model and results from: 1) heat flux (Artemieva, 2006; Davies, 2013; Cammarano and Guerri, 2017), 2) gravity (Reguzzoni and Sampietro, 2015), and 3) geochemical and petrological models (Rudnick and Gao, 2014; Hacker et al. 2015) are performed.
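
    As a small illustration of the voxel-level bookkeeping described above, the sketch below evaluates the radiogenic power of a single lithospheric voxel from its U, Th, and K abundances. The heat-generation constants are approximate literature values (watts per kilogram of element, natural isotopic composition), and the voxel mass and abundances are illustrative rather than values from the reference model.

```python
# Radiogenic heat production for one voxel (constants and inputs are approximate
# or hypothetical, not taken from the reference model described above).
H_U, H_TH, H_K = 9.46e-5, 2.64e-5, 3.48e-9   # W per kg of element (approx.)

def voxel_heat_production(mass_kg, c_u, c_th, c_k):
    """Radiogenic power [W] of a voxel with given elemental mass fractions."""
    return mass_kg * (c_u * H_U + c_th * H_TH + c_k * H_K)

# Example: 1e16 kg of upper crust with ~2.7 ppm U, ~10.5 ppm Th, ~2.3 wt% K
power_w = voxel_heat_production(1.0e16, 2.7e-6, 10.5e-6, 0.023)
print(f"voxel heat production: {power_w / 1e6:.1f} MW")
```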

  14. A framework for correcting brain retraction based on an eXtended Finite Element Method using a laser range scanner.

    PubMed

    Li, Ping; Wang, Weiwei; Song, Zhijian; An, Yong; Zhang, Chenxi

    2014-07-01

    Brain retraction causes great distortion that limits the accuracy of an image-guided neurosurgery system that uses preoperative images. Therefore, brain retraction correction is an important intraoperative clinical application. We used a linear elastic biomechanical model, which deforms based on the eXtended Finite Element Method (XFEM) within a framework for brain retraction correction. In particular, a laser range scanner was introduced to obtain a surface point cloud of the exposed surgical field including retractors inserted into the brain. A brain retraction surface tracking algorithm converted these point clouds into boundary conditions applied to XFEM modeling that drive brain deformation. To test the framework, we performed a brain phantom experiment involving the retraction of tissue. Pairs of the modified Hausdorff distance between Canny edges extracted from model-updated images, pre-retraction, and post-retraction CT images were compared to evaluate the morphological alignment of our framework. Furthermore, the measured displacements of beads embedded in the brain phantom and the predicted ones were compared to evaluate numerical performance. The modified Hausdorff distance of 19 pairs of images decreased from 1.10 to 0.76 mm. The forecast error of 23 stainless steel beads in the phantom was between 0 and 1.73 mm (mean 1.19 mm). The correction accuracy varied between 52.8 and 100 % (mean 81.4 %). The results demonstrate that the brain retraction compensation can be incorporated intraoperatively into the model-updating process in image-guided neurosurgery systems.
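
    The modified Hausdorff distance used above as the alignment metric has a standard definition (Dubuisson and Jain, 1994): the larger of the two directed mean nearest-neighbour distances between point sets. Below is a self-contained sketch, with random points standing in for the extracted Canny edges.

```python
# Modified Hausdorff distance between two point sets (synthetic example data).
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(a, b):
    """Modified Hausdorff distance between (N, dim) and (M, dim) point sets."""
    d = cdist(a, b)                     # pairwise Euclidean distances
    d_ab = d.min(axis=1).mean()         # mean distance from each point of a to nearest b
    d_ba = d.min(axis=0).mean()         # mean distance from each point of b to nearest a
    return max(d_ab, d_ba)

edges_model = np.random.default_rng(1).random((200, 2))
edges_ct = edges_model + 0.01           # small simulated misalignment
print(modified_hausdorff(edges_model, edges_ct))
```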

  15. Tribute to the contribution of Gerard Lallement to structural dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Los Alamos National Laboratory

    The Society for Experimental Mechanics and the International Modal Analysis Conference recognize the remarkable contribution to experimental mechanics, mechanical engineering and structural dynamics of Professor Gerard Lallement, from the University of Franche-Comte, France. A special session is organized during the IMAC-XX to outline the many achievements of Gerard Lallement in the fields of modal analysis, structural system identification, the theory and practice of structural modification, component mode synthesis and finite element model updating. The purpose of this publication is not to provide an exhaustive account of Gerard Lallement's contribution to structural dynamics. Numerous references are provided that should help the interested reader learn more about the many aspects of his work. Instead, the significance of this work is illustrated by discussing the role of structural dynamics in industrial applications and its future challenges. The technical aspects of Gerard Lallement's work are illustrated with a discussion of structural modification, modeling error localization and model updating.

  16. Increasing Update Rates in the Building Walkthrough System with Automatic Model-Space Subdivision and Potentially Visible Set Calculations

    DTIC Science & Technology

    1990-07-01

    34 ACM Computing Surveys. 6(1): 1- 55. [Syzmanski85] Syzmanski, T. G. and C. J. V. Wyk. (1985). " GOALIE : A Space Efficient System for VLSI Artwork...this. Essentially we initialize a stack with the root. We then pull an element of this stack and if it is a cell we run the occlusion operation on the

  17. Modal Test/Analysis Correlation of Space Station Structures Using Nonlinear Sensitivity

    NASA Technical Reports Server (NTRS)

    Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan

    1992-01-01

    The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlation. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.
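
    The abstract does not state the sensitivity expressions used; for orientation, the standard first-order derivative of an eigenvalue lambda_i = omega_i^2 with respect to a design parameter p, for mass-normalized mode shapes, is

```latex
\frac{\partial \lambda_i}{\partial p}
= \boldsymbol{\phi}_i^{\mathsf{T}}
  \left(\frac{\partial \mathbf{K}}{\partial p}
        - \lambda_i \frac{\partial \mathbf{M}}{\partial p}\right)
  \boldsymbol{\phi}_i .
```

    Higher-order (nonlinear) sensitivities of the kind updated by the iterative perturbation algorithm build on this relation together with eigenvector derivatives.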

  18. Modal test/analysis correlation of Space Station structures using nonlinear sensitivity

    NASA Technical Reports Server (NTRS)

    Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan

    1992-01-01

    The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlations. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.

  19. Influence of Installation Effects on Pile Bearing Capacity in Cohesive Soils - Large Deformation Analysis Via Finite Element Method

    NASA Astrophysics Data System (ADS)

    Konkol, Jakub; Bałachowski, Lech

    2017-03-01

    In this paper, the whole process of pile construction and performance during loading is modelled via large deformation finite element methods such as Coupled Eulerian Lagrangian (CEL) and Updated Lagrangian (UL). The numerical study consists of the installation process, the consolidation phase and the subsequent pile static load test (SLT). The Poznań site is chosen as the reference location for the numerical analysis, where a series of pile SLTs has been performed in highly overconsolidated clay (OCR ≈ 12). The results of the numerical analysis are compared with the corresponding field tests and with a so-called "wish-in-place" numerical model of the pile, in which no installation effects are taken into account. The advantages of using large deformation numerical analysis are presented and its application to pile design is shown.

  20. Comparison of NASTRAN analysis with ground vibration results of UH-60A NASA/AEFA test configuration

    NASA Technical Reports Server (NTRS)

    Idosor, Florentino; Seible, Frieder

    1990-01-01

    Preceding program flight tests, a ground vibration test and modal test analysis of a UH-60A Black Hawk helicopter was conducted by Sikorsky Aircraft to complement the UH-60A test plan and NASA/ARMY Modern Technology Rotor Airloads Program. The 'NASA/AEFA' shake test configuration was tested for modal frequencies and shapes and compared with its NASTRAN finite element model counterpart to give correlative results. Based upon previous findings, significant differences in modal data existed and were attributed to assumptions regarding the influence of secondary structure contributions in the preliminary NASTRAN modeling. An analysis of an updated finite element model including several secondary structural additions has confirmed that the inclusion of specific secondary components produces a significant effect on modal frequency and free-response shapes and improves correlations at lower frequencies with shake test data.

  1. Uncertainty Estimation in Elastic Full Waveform Inversion by Utilising the Hessian Matrix

    NASA Astrophysics Data System (ADS)

    Hagen, V. S.; Arntsen, B.; Raknes, E. B.

    2017-12-01

    Elastic Full Waveform Inversion (EFWI) is a computationally intensive iterative method for estimating elastic model parameters. A key element of EFWI is the numerical solution of the elastic wave equation which lies as a foundation to quantify the mismatch between synthetic (modelled) and true (real) measured seismic data. The misfit between the modelled and true receiver data is used to update the parameter model to yield a better fit between the modelled and true receiver signal. A common approach to the EFWI model update problem is to use a conjugate gradient search method. In this approach the resolution and cross-coupling for the estimated parameter update can be found by computing the full Hessian matrix. Resolution of the estimated model parameters depend on the chosen parametrisation, acquisition geometry, and temporal frequency range. Although some understanding has been gained, it is still not clear which elastic parameters can be reliably estimated under which conditions. With few exceptions, previous analyses have been based on arguments using radiation pattern analysis. We use the known adjoint-state technique with an expansion to compute the Hessian acting on a model perturbation to conduct our study. The Hessian is used to infer parameter resolution and cross-coupling for different selections of models, acquisition geometries, and data types, including streamer and ocean bottom seismic recordings. Information about the model uncertainty is obtained from the exact Hessian, and is essential when evaluating the quality of estimated parameters due to the strong influence of source-receiver geometry and frequency content. Investigation is done on both a homogeneous model and the Gullfaks model where we illustrate the influence of offset on parameter resolution and cross-coupling as a way of estimating uncertainty.
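
    For concreteness, one common form of the least-squares misfit and its Hessian is given below; the notation is a generic sketch and is not quoted from the study.

```latex
J(\mathbf{m}) = \tfrac{1}{2}\sum_{s,r}\int_{0}^{T}
\left\| \mathbf{u}(\mathbf{m};\mathbf{x}_r,t) - \mathbf{d}(\mathbf{x}_r,t) \right\|^{2} dt,
\qquad
\mathbf{H} = \frac{\partial^{2} J}{\partial \mathbf{m}^{2}}
= \mathbf{J}_u^{\mathsf{T}}\mathbf{J}_u
+ \frac{\partial \mathbf{J}_u^{\mathsf{T}}}{\partial \mathbf{m}}\,(\mathbf{u}-\mathbf{d}),
```

    where J_u = du/dm is the Jacobian of the modelled data with respect to the model parameters. The Gauss-Newton term J_u^T J_u carries most of the information on parameter resolution and cross-coupling, and the adjoint-state method evaluates Hessian-vector products H dm without forming H explicitly.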

  2. A meta-model analysis of a finite element simulation for defining poroelastic properties of intervertebral discs.

    PubMed

    Nikkhoo, Mohammad; Hsu, Yu-Chun; Haghpanahi, Mohammad; Parnianpour, Mohamad; Wang, Jaw-Lin

    2013-06-01

    Finite element analysis is an effective tool to evaluate the material properties of living tissue. For an interactive optimization procedure, the finite element analysis usually needs many simulations to reach a reasonable solution. The meta-model analysis of finite element simulation can be used to reduce the computation of a structure with complex geometry or a material with composite constitutive equations. The intervertebral disc is a complex, heterogeneous, and hydrated porous structure. A poroelastic finite element model can be used to observe the fluid transferring, pressure deviation, and other properties within the disc. Defining reasonable poroelastic material properties of the anulus fibrosus and nucleus pulposus is critical for the quality of the simulation. We developed a material property updating protocol, which is basically a fitting algorithm consisting of finite element simulations and a quadratic response surface regression. This protocol was used to find the material properties, such as the hydraulic permeability, elastic modulus, and Poisson's ratio, of intact and degenerated porcine discs. The results showed that the in vitro disc experimental deformations were well fitted with limited finite element simulations and a quadratic response surface regression. The comparison of material properties of intact and degenerated discs showed that the hydraulic permeability significantly decreased but Poisson's ratio significantly increased for the degenerated discs. This study shows that the developed protocol is efficient and effective in defining material properties of a complex structure such as the intervertebral disc.
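
    The quadratic response-surface step of such a protocol can be sketched as below: a degree-2 polynomial surrogate is regressed on a small set of simulated responses and then searched for the parameters that best reproduce a measured deformation. The two parameters (a permeability-like k and a modulus-like E), the simulated responses, and the target value are synthetic placeholders, not data from the study.

```python
# Quadratic response-surface surrogate of a small simulation set, followed by
# inverse fitting of material parameters (all numbers are synthetic placeholders).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

# Mock "simulation" outputs at a 3 x 3 grid of parameter samples
X = np.array([[k, e] for k in (0.5, 1.0, 1.5) for e in (0.5, 1.0, 1.5)])
y = 2.0 - 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 0] * X[:, 1]   # mock deformation

poly = PolynomialFeatures(degree=2)
surrogate = LinearRegression().fit(poly.fit_transform(X), y)

target = 1.1                                  # mock measured deformation
def misfit(p):
    pred = surrogate.predict(poly.transform(p.reshape(1, -1)))[0]
    return (pred - target) ** 2

best = minimize(misfit, x0=np.array([1.0, 1.0]), bounds=[(0.5, 1.5), (0.5, 1.5)])
print("fitted (k, E):", best.x)
```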

  3. A preliminary deposit model for lithium-cesium-tantalum (LCT) pegmatites

    USGS Publications Warehouse

    Bradley, Dwight; McCauley, Andrew

    2013-01-01

    This report is part of an effort by the U.S. Geological Survey to update existing mineral deposit models and to develop new ones. We emphasize practical aspects of pegmatite geology that might directly or indirectly help in exploration for lithium-cesium-tantalum (LCT) pegmatites, or for assessing regions for pegmatite-related mineral resource potential. These deposits are an important link in the world’s supply chain of rare and strategic elements, accounting for about one-third of world lithium production, most of the tantalum, and all of the cesium.

  4. Convergence in High Probability of the Quantum Diffusion in a Random Band Matrix Model

    NASA Astrophysics Data System (ADS)

    Margarint, Vlad

    2018-06-01

    We consider Hermitian random band matrices H in d ≥ 1 dimensions. The matrix elements H_xy, indexed by x, y ∈ Λ ⊂ Z^d, are independent, uniformly distributed random variables if |x-y| is less than the band width W, and zero otherwise. We update previous results on the convergence of quantum diffusion in a random band matrix model from convergence of the expectation to convergence in high probability. The result is uniform in the size |Λ| of the matrix.
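
    A minimal construction of this ensemble in d = 1 is sketched below; for simplicity the example draws real uniform entries and symmetrizes, whereas the model in the abstract is Hermitian (possibly complex), and the matrix size and band width are illustrative.

```python
# Random band matrix sketch (d = 1): independent uniform entries inside the
# band |x - y| < W, zero outside, then symmetrized (real-symmetric simplification).
import numpy as np

def random_band_matrix(n, w, seed=0):
    rng = np.random.default_rng(seed)
    x = np.arange(n)
    band = np.abs(x[:, None] - x[None, :]) < w        # band mask |x - y| < W
    a = rng.uniform(-1.0, 1.0, size=(n, n)) * band
    return (a + a.T) / np.sqrt(2.0)                   # symmetric H

H = random_band_matrix(200, 10)
print(np.allclose(H, H.T))   # True
```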

  5. Predicting Epidemic Risk from Past Temporal Contact Data

    PubMed Central

    Valdano, Eugenio; Poletto, Chiara; Giovannini, Armando; Palma, Diana; Savini, Lara; Colizza, Vittoria

    2015-01-01

    Understanding how epidemics spread in a system is a crucial step to prevent and control outbreaks, with broad implications for the system's functioning, health, and associated costs. This can be achieved by identifying the elements at higher risk of infection and implementing targeted surveillance and control measures. One important ingredient to consider is the pattern of disease-transmission contacts among the elements; however, lack of data or delays in providing updated records may hinder its use, especially for time-varying patterns. Here we explore to what extent it is possible to use past temporal data of a system's pattern of contacts to predict the risk of infection of its elements during an emerging outbreak, in the absence of updated data. We focus on two real-world temporal systems: a livestock trade network of displacements among animal holdings, and a network of sexual encounters in high-end prostitution. We define a node's loyalty as a local measure of its tendency to maintain contacts with the same elements over time, and uncover important non-trivial correlations with the node's epidemic risk. We show that a risk assessment analysis incorporating this knowledge and based on past structural and temporal pattern properties provides accurate predictions for both systems. Its generalizability is tested by introducing a theoretical model for generating synthetic temporal networks. High accuracy of our predictions is recovered across different settings, while the amount of possible predictions is system-specific. The proposed method can provide crucial information for the setup of targeted intervention strategies. PMID:25763816
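
    One common way to formalize such a loyalty measure, given here as an assumption rather than as the paper's exact definition, is the Jaccard overlap between a node's neighbour sets in two consecutive snapshots of the temporal network.

```python
# Loyalty as Jaccard overlap of a node's contacts in consecutive snapshots
# (one possible formalization; the names below are hypothetical).
def loyalty(neighbors_t0, neighbors_t1):
    """Fraction of contacts retained between two consecutive snapshots."""
    a, b = set(neighbors_t0), set(neighbors_t1)
    if not a and not b:
        return 1.0                      # no contacts in either snapshot
    return len(a & b) / len(a | b)

# A holding that keeps two of its three trade partners between snapshots
print(loyalty(["farm_A", "farm_B", "farm_C"], ["farm_A", "farm_B", "farm_D"]))  # 0.5
```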

  6. WARP3D-Release 10.8: Dynamic Nonlinear Analysis of Solids using a Preconditioned Conjugate Gradient Software Architecture

    NASA Technical Reports Server (NTRS)

    Koppenhoefer, Kyle C.; Gullerud, Arne S.; Ruggieri, Claudio; Dodds, Robert H., Jr.; Healy, Brian E.

    1998-01-01

    This report describes theoretical background material and commands necessary to use the WARP3D finite element code. WARP3D is under continuing development as a research code for the solution of very large-scale, 3-D solid models subjected to static and dynamic loads. Specific features in the code oriented toward the investigation of ductile fracture in metals include a robust finite strain formulation, a general J-integral computation facility (with inertia, face loading), an element extinction facility to model crack growth, nonlinear material models including viscoplastic effects, and the Gurson-Tvergaard dilatant plasticity model for void growth. The nonlinear, dynamic equilibrium equations are solved using an incremental-iterative, implicit formulation with full Newton iterations to eliminate residual nodal forces. The history integration of the nonlinear equations of motion is accomplished with Newmark's Beta method. A central feature of WARP3D involves the use of a linear-preconditioned conjugate gradient (LPCG) solver implemented in an element-by-element format to replace a conventional direct linear equation solver. This software architecture dramatically reduces both the memory requirements and CPU time for very large, nonlinear solid models since formation of the assembled (dynamic) stiffness matrix is avoided. Analyses thus exhibit the numerical stability for large time (load) steps provided by the implicit formulation coupled with the low memory requirements characteristic of an explicit code. In addition to the much lower memory requirements of the LPCG solver, the CPU time required for solution of the linear equations during each Newton iteration is generally one-half or less of the CPU time required for a traditional direct solver. All other computational aspects of the code (element stiffnesses, element strains, stress updating, element internal forces) are implemented in the element-by-element, blocked architecture. This greatly improves vectorization of the code on uni-processor hardware and enables straightforward parallel-vector processing of element blocks on multi-processor hardware.
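
    WARP3D's element-by-element architecture and its specific preconditioner are not reproduced here, but the basic preconditioned conjugate gradient iteration the abstract refers to can be sketched generically as follows (dense matrix, Jacobi preconditioner, small synthetic system).

```python
# Generic Jacobi-preconditioned conjugate gradient solver for A x = b,
# with A symmetric positive definite (small synthetic example).
import numpy as np

def pcg(A, b, tol=1.0e-8, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    m_inv = 1.0 / np.diag(A)            # Jacobi (diagonal) preconditioner
    z = m_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
print(pcg(A, np.array([1.0, 2.0])))     # approx. [0.0909, 0.6364]
```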

  7. Identification and calibration of the structural model of historical masonry building damaged during the 2016 Italian earthquakes: The case study of Palazzo del Podestà in Montelupone

    NASA Astrophysics Data System (ADS)

    Catinari, Federico; Pierdicca, Alessio; Clementi, Francesco; Lenci, Stefano

    2017-11-01

    The results of an ambient-vibration-based investigation conducted on the "Palazzo del Podestà" in Montelupone (Italy) are presented. The case study was damaged during the 2016 Italian earthquakes that struck the central part of Italy. The assessment procedure includes full-scale ambient vibration testing, modal identification from ambient vibration responses, finite element modeling and dynamic-based identification of the uncertain structural parameters of the model. A very good match between theoretical and experimental modal parameters was reached, and the model updating was performed by identifying some structural parameters.

  8. An orthotropic viscoelastic model for the passive myocardium: continuum basis and numerical treatment.

    PubMed

    Gültekin, Osman; Sommer, Gerhard; Holzapfel, Gerhard A

    2016-11-01

    This study deals with the viscoelastic constitutive modeling and the respective computational analysis of the human passive myocardium. We start by recapitulating the locally orthotropic inner structure of the human myocardial tissue and model the mechanical response through invariants and structure tensors associated with three orthonormal basis vectors. In accordance with recent experimental findings the ventricular myocardial tissue is assumed to be incompressible, thick-walled, orthotropic and viscoelastic. In particular, one spring element coupled with Maxwell elements in parallel endows the model with viscoelastic features such that four dashpots describe the viscous response due to matrix, fiber, sheet and fiber-sheet fragments. In order to alleviate the numerical obstacles, the strictly incompressible model is altered by decomposing the free-energy function into volumetric-isochoric elastic and isochoric-viscoelastic parts along with the multiplicative split of the deformation gradient which enables the three-field mixed finite element method. The crucial aspect of the viscoelastic formulation is linked to the rate equations of the viscous overstresses resulting from a 3-D analogy of a generalized 1-D Maxwell model. We provide algorithmic updates for second Piola-Kirchhoff stress and elasticity tensors. In the sequel, we address some numerical aspects of the constitutive model by applying it to elastic, cyclic and relaxation test data obtained from biaxial extension and triaxial shear tests whereby we assess the fitting capacity of the model. With the tissue parameters identified, we conduct (elastic and viscoelastic) finite element simulations for an ellipsoidal geometry retrieved from a human specimen.
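
    A common rate form for the viscous overstress of the alpha-th Maxwell branch in such a 3-D generalization is sketched below; the notation is generic and the paper's specific formulation may differ.

```latex
\dot{\mathbf{Q}}_{\alpha} + \frac{\mathbf{Q}_{\alpha}}{\tau_{\alpha}}
= \beta_{\alpha}\,\dot{\bar{\mathbf{S}}}_{\infty},
\qquad
\mathbf{S} = \mathbf{S}_{\mathrm{vol}} + \bar{\mathbf{S}}_{\infty}
+ \sum_{\alpha} \mathbf{Q}_{\alpha},
```

    where Q_alpha is the isochoric viscous overstress of the branch, tau_alpha its relaxation time, beta_alpha a non-dimensional free-energy factor, and S-bar_infinity the equilibrium isochoric second Piola-Kirchhoff stress; exponential-mapping integration of this rate equation leads to recursive algorithmic stress updates of the kind mentioned in the abstract.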

  9. 77 FR 66483 - Public Comment on the Draft Federal Urban Design Element and the Draft Update to the Federal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-05

    ... NCPC review required by law. The new Federal Urban Design Element provides policies that will guide the... Historic Features Element will be available online at http://www.ncpc.gov/compplan not later than November...

  10. Modelling of Equilibrium Between Mantle and Core: Refractory, Volatile, and Highly Siderophile Elements

    NASA Technical Reports Server (NTRS)

    Righter, K.; Danielson, L.; Pando, K.; Shofner, G.; Lee, C. -T.

    2013-01-01

    Siderophile elements have been used to constrain conditions of core formation and differentiation for the Earth, Mars and other differentiated bodies [1]. Recent models for the Earth have concluded that the mantle and core did not fully equilibrate and that the siderophile element contents of the mantle can only be explained under conditions where the oxygen fugacity changes from low to high during accretion and the mantle and core do not fully equilibrate [2,3]. However, these conclusions go against several physical and chemical constraints. First, calculations suggest that even with the composition of accreting material changing from reduced to oxidized over time, the fO2 defined by metal-silicate equilibrium does not change substantially, only by approximately 1 log fO2 unit [4]. An increase of more than 2 log fO2 units in mantle oxidation is required in the models of [2,3]. Secondly, calculations also show that metallic impacting material will become deformed and sheared during accretion to a large body, such that it becomes emulsified to a fine scale that allows equilibrium at nearly all conditions except possibly the length scale of giant impacts [5] (contrary to the conclusions of [6]). Using new data for D(Mo) metal/silicate at high pressures, together with updated partitioning expressions for many other elements, we will show that metal-silicate equilibrium across a long span of Earth's accretion history may explain the concentrations of many siderophile elements in Earth's mantle. The modeling includes the refractory elements Ni, Co, Mo, and W, as well as the highly siderophile elements Au, Pd and Pt, and the volatile elements Cd, In, Bi, Sb, Ge and As.

  11. Primal-mixed formulations for reaction-diffusion systems on deforming domains

    NASA Astrophysics Data System (ADS)

    Ruiz-Baier, Ricardo

    2015-10-01

    We propose a finite element formulation for a coupled elasticity-reaction-diffusion system written in a fully Lagrangian form and governing the spatio-temporal interaction of species inside an elastic or hyperelastic body. A primal weak formulation is the baseline model for the reaction-diffusion system written in the deformed domain, and a finite element method with piecewise linear approximations is employed for its spatial discretization. On the other hand, the strain is introduced as a mixed variable in the equations of elastodynamics, which in turn acts as the coupling field needed to update the diffusion tensor of the modified reaction-diffusion system written in the deformed domain. The discrete mechanical problem yields a mixed finite element scheme based on row-wise Raviart-Thomas elements for stresses, Brezzi-Douglas-Marini elements for displacements, and piecewise constant pressure approximations. The application of the present framework in the study of several coupled biological systems on deforming geometries in two and three spatial dimensions is discussed, and some illustrative examples are provided and extensively analyzed.

  12. Wide-field Imaging System and Rapid Direction of Optical Zoom (WOZ)

    DTIC Science & Technology

    2010-09-25

    commercial software packages: SolidWorks, COMSOL Multiphysics, and ZEMAX optical design. SolidWorks is a computer aided design package, which as a live...interface to COMSOL. COMSOL is a finite element analysis/partial differential equation solver. ZEMAX is an optical design package. Both COMSOL and... ZEMAX have live interfaces to MatLab. Our initial investigations have enabled a model in SolidWorks to be updated in COMSOL, an FEA calculation

  13. Consensus statement on an updated core communication curriculum for UK undergraduate medical education.

    PubMed

    Noble, Lorraine M; Scott-Smith, Wesley; O'Neill, Bernadette; Salisbury, Helen

    2018-04-22

    Clinical communication is a core component of undergraduate medical training. A consensus statement on the essential elements of the communication curriculum was co-produced in 2008 by the communication leads of UK medical schools. This paper discusses the relational, contextual and technological changes which have affected clinical communication since then and presents an updated curriculum for communication in undergraduate medicine. The consensus was developed through an iterative consultation process with the communication leads who represent their medical schools on the UK Council of Clinical Communication in Undergraduate Medical Education. The updated curriculum defines the underpinning values, core components and skills required within the context of contemporary medical care. It incorporates the evolving relational issues associated with the more prominent role of the patient in the consultation, reflected through legal precedent and changing societal expectations. The impact on clinical communication of the increased focus on patient safety, the professional duty of candour and digital medicine are discussed. Changes in the way medicine is practised should lead rapidly to adjustments to the content of curricula. The updated curriculum provides a model of best practice to help medical schools develop their teaching and argue for resources. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.

  15. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.

  16. 42 CFR 82.33 - How will NIOSH inform the public of changes to the scientific elements underlying the dose...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... scientific elements underlying the dose reconstruction process? 82.33 Section 82.33 Public Health PUBLIC... RELATED ACTIVITIES METHODS FOR CONDUCTING DOSE RECONSTRUCTION UNDER THE ENERGY EMPLOYEES OCCUPATIONAL ILLNESS COMPENSATION PROGRAM ACT OF 2000 Updating the Scientific Elements Underlying Dose Reconstructions...

  17. 42 CFR 82.33 - How will NIOSH inform the public of changes to the scientific elements underlying the dose...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... scientific elements underlying the dose reconstruction process? 82.33 Section 82.33 Public Health PUBLIC... RELATED ACTIVITIES METHODS FOR CONDUCTING DOSE RECONSTRUCTION UNDER THE ENERGY EMPLOYEES OCCUPATIONAL ILLNESS COMPENSATION PROGRAM ACT OF 2000 Updating the Scientific Elements Underlying Dose Reconstructions...

  18. 42 CFR 82.33 - How will NIOSH inform the public of changes to the scientific elements underlying the dose...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... scientific elements underlying the dose reconstruction process? 82.33 Section 82.33 Public Health PUBLIC... RELATED ACTIVITIES METHODS FOR CONDUCTING DOSE RECONSTRUCTION UNDER THE ENERGY EMPLOYEES OCCUPATIONAL ILLNESS COMPENSATION PROGRAM ACT OF 2000 Updating the Scientific Elements Underlying Dose Reconstructions...

  19. 42 CFR 82.33 - How will NIOSH inform the public of changes to the scientific elements underlying the dose...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... scientific elements underlying the dose reconstruction process? 82.33 Section 82.33 Public Health PUBLIC... RELATED ACTIVITIES METHODS FOR CONDUCTING DOSE RECONSTRUCTION UNDER THE ENERGY EMPLOYEES OCCUPATIONAL ILLNESS COMPENSATION PROGRAM ACT OF 2000 Updating the Scientific Elements Underlying Dose Reconstructions...

  20. 75 FR 5351 - Proposed Revisions to Accounting Guide for LSC Recipients

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-02

    ... elements of an adequate accounting and financial reporting system, including the use of specific internal... checklist of accounting procedures and internal controls. The proposed revisions update the checklist to... accounting procedures and internal controls to reflect current best practices; (7) updated and new references...

  1. Rare Earth Element Geochemistry for Produced Waters, WY

    DOE Data Explorer

    Quillinan, Scott; Nye, Charles; McLing, Travis; Neupane, Hari

    2016-06-30

    These data represent major, minor, trace, isotopes, and rare earth element concentrations in geologic formations and water associated with oil and gas production. *Note - Link below contains updated version of spreadsheet (6/14/2017)

  2. Walking through doorways causes forgetting: Event structure or updating disruption?

    PubMed

    Pettijohn, Kyle A; Radvansky, Gabriel A

    2016-11-01

    According to event cognition theory, people segment experience into separate event models. One consequence of this segmentation is that when people transport objects from one location to another, memory is worse than if people move across a large location. In two experiments participants navigated through a virtual environment, and recognition memory was tested in either the presence or the absence of a location shift for objects that were recently interacted with (i.e., just picked up or set down). Of particular concern here is whether this location updating effect is due to (a) differences in retention intervals as a result of the navigation process, (b) a temporary disruption in cognitive processing that may occur as a result of the updating processes, or (c) a need to manage multiple event models, as has been suggested in prior research. Experiment 1 explored whether retention interval is driving this effect by recording travel times from the acquisition of an object and the probe time. The results revealed that travel times were similar, thereby rejecting a retention interval explanation. Experiment 2 explored whether a temporary disruption in processing is producing the effect by introducing a 3-second delay prior to the presentation of a memory probe. The pattern of results was not affected by adding a delay, thereby rejecting a temporary disruption account. These results are interpreted in the context of the event horizon model, which suggests that when there are multiple event models that contain common elements there is interference at retrieval, which compromises performance.

  3. Boundary condition identification for a grid model by experimental and numerical dynamic analysis

    NASA Astrophysics Data System (ADS)

    Mao, Qiang; Devitis, John; Mazzotti, Matteo; Bartoli, Ivan; Moon, Franklin; Sjoblom, Kurt; Aktan, Emin

    2015-04-01

    There is a growing need to characterize unknown foundations and assess substructures in existing bridges. This is becoming an important issue for the serviceability and safety of bridges as well as for the possibility of partial reuse of existing infrastructure. Within this broader context, this paper investigates the possibility of identifying, locating and quantifying changes of boundary conditions by leveraging a simply supported grid structure with a composite deck. Multi-reference impact tests are performed on the grid model, and one supporting bearing is modified by replacing a steel cylindrical roller with a roller of compliant material. Impact-based modal analysis provides global modal parameters such as damped natural frequencies, mode shapes and the flexibility matrix, which are used as indicators of boundary condition changes. An updating process combining a hybrid optimization algorithm and the finite element software suite ABAQUS is presented in this paper. The updated ABAQUS model of the grid, which simulates the supporting bearing with springs, is used to detect and quantify the change of the boundary conditions.
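A minimal sketch of the updating idea, with a hypothetical two-degree-of-freedom spring-mass surrogate standing in for the grid/ABAQUS model: a support spring stiffness is adjusted until the predicted natural frequencies reproduce the measured ones.
```python
# Sketch of optimization-based boundary-condition updating: identify a support
# spring stiffness from "measured" natural frequencies.  The 2-DOF model and
# target values are illustrative assumptions, not the grid model in the paper.
import numpy as np
from scipy.optimize import minimize_scalar

m = np.diag([2.0, 1.0])                          # kg
def frequencies(k_support):
    k = np.array([[5e4 + k_support, -5e4],
                  [-5e4,            5e4]])       # N/m, support spring on DOF 1
    lam = np.linalg.eigvals(np.linalg.solve(m, k))
    return np.sort(np.sqrt(np.abs(lam))) / (2*np.pi)   # natural frequencies, Hz

f_measured = frequencies(1.2e4)                  # pretend test data (target bearing)

def objective(k_support):
    return np.sum((frequencies(k_support) - f_measured)**2)

res = minimize_scalar(objective, bounds=(1e3, 1e5), method="bounded")
print(f"identified support stiffness ~ {res.x:.3e} N/m")
```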

  4. Alkali trace elements in Gale crater, Mars, with ChemCam: Calibration update and geological implications

    NASA Astrophysics Data System (ADS)

    Payré, V.; Fabre, C.; Cousin, A.; Sautter, V.; Wiens, R. C.; Forni, O.; Gasnault, O.; Mangold, N.; Meslin, P.-Y.; Lasue, J.; Ollila, A.; Rapin, W.; Maurice, S.; Nachon, M.; Le Deit, L.; Lanza, N.; Clegg, S.

    2017-03-01

    The Chemistry Camera (ChemCam) instrument onboard Curiosity can detect minor and trace elements such as lithium, strontium, rubidium, and barium. Their abundances can provide some insights about Mars' magmatic history and sedimentary processes. We focus on developing new quantitative models for these elements by using a new laboratory database (more than 400 samples) that displays diverse compositions that are more relevant for Gale crater than the previous ChemCam database. These models are based on univariate calibration curves. For each element, the best model is selected depending on the results obtained by using the ChemCam calibration targets onboard Curiosity. New quantifications of Li, Sr, Rb, and Ba in Gale samples have been obtained for the first 1000 Martian days. Comparing these data in alkaline and magnesian rocks with the felsic and mafic clasts from the Martian meteorite NWA7533—from approximately the same geologic period—we observe a similar behavior: Sr, Rb, and Ba are more concentrated in soluble- and incompatible-element-rich mineral phases (Si, Al, and alkali-rich). Correlations between these trace elements and potassium in materials analyzed by ChemCam reveal a strong affinity with K-bearing phases such as feldspars, K-phyllosilicates, and potentially micas in igneous and sedimentary rocks. However, lithium is found in comparable abundances in alkali-rich and magnesium-rich Gale rocks. This very soluble element can be associated with both alkali and Mg-Fe phases such as pyroxene and feldspar. These observations of Li, Sr, Rb, and Ba mineralogical associations highlight their substitution with potassium and their incompatibility in magmatic melts.
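A univariate calibration curve of the kind described can be illustrated in a few lines of code; the standards, peak areas, and the linear form below are made-up placeholders, not the ChemCam laboratory database or its selected models.
```python
# Sketch of a univariate calibration curve for a trace element: fit a spectral
# peak area against known concentrations, then invert the fit for an unknown.
import numpy as np

conc_ppm  = np.array([5., 20., 50., 100., 200., 400.])     # laboratory standards
peak_area = np.array([0.8, 3.1, 7.4, 15.2, 29.8, 61.0])    # emission line area

slope, intercept = np.polyfit(conc_ppm, peak_area, 1)       # linear calibration
predicted = slope*conc_ppm + intercept
rmse = np.sqrt(np.mean((predicted - peak_area)**2))
print(f"calibration: area = {slope:.4f}*ppm + {intercept:.3f}, RMSE = {rmse:.2f}")

unknown_area = 12.0                                          # unknown target
print(f"estimated concentration ~ {(unknown_area - intercept)/slope:.1f} ppm")
```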

  5. Calibration and Validation of a Finite Element Model of THOR-K Anthropomorphic Test Device for Aerospace Safety Applications

    NASA Technical Reports Server (NTRS)

    Putnam, J. B.; Unataroiu, C. D.; Somers, J. T.

    2014-01-01

    The THOR anthropomorphic test device (ATD) has been developed and continuously improved by the National Highway Traffic Safety Administration to provide automotive manufacturers an advanced tool that can be used to assess the injury risk of vehicle occupants in crash tests. Recently, a series of modifications were completed to improve the biofidelity of the THOR ATD [1]. The updated THOR Modification Kit (THOR-K) ATD was employed at Wright-Patterson Air Base in 22 impact tests in three configurations: vertical, lateral, and spinal [2]. Although a computational finite element (FE) model of the THOR had been previously developed [3], updates to the model were needed to incorporate the recent changes in the modification kit. The main goal of this study was to develop and validate an FE model of the THOR-K ATD. The CAD drawings of the THOR-K ATD were reviewed and FE models were developed for the updated parts. For example, the head-skin geometry was found to change significantly, so its model was re-meshed (Fig. 1a). A protocol was developed to calibrate each component identified as key to the kinematic and kinetic response of the THOR-K head/neck ATD FE model (Fig. 1b). The available ATD tests were divided into two groups: a) calibration tests where the unknown material parameters of deformable parts (e.g., head skin, pelvis foam) were optimized to match the data and b) validation tests where the model response was only compared with test data by calculating their score using the CORrelation and Analysis (CORA) rating system. Finally, the whole ATD model was validated under horizontal-, vertical-, and lateral-loading conditions against data recorded in the Wright Patterson tests [2]. Overall, the final THOR-K ATD model developed in this study is shown to respond similarly to the ATD in all validation tests. This good performance indicates that the optimization performed during calibration by using the CORA score as the objective function is not test specific. Therefore, confidence is provided in the ATD model for use in predicting responses in test conditions not performed in this study, such as those observed during spacecraft landing. Comparison studies with ATD and human models may also be performed to contribute to future changes in THOR ATD design in an effort to improve its biofidelity, which has been traditionally based on post-mortem human subject testing and designer experience.
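The calibration objective can be illustrated with a simplified agreement score between a test signal and a model prediction; this is a stand-in for the idea only, not the CORA rating system itself, and the signals are synthetic.
```python
# Sketch of scoring model-vs-test agreement with a simple composite measure
# used as a calibration objective (a simplified stand-in, not CORA).
import numpy as np

t = np.linspace(0.0, 0.1, 500)                       # 100 ms impact event
test  = np.exp(-60*t) * np.sin(2*np.pi*80*t)         # "measured" acceleration
model = 0.9*np.exp(-55*t) * np.sin(2*np.pi*78*t)     # FE model prediction

def agreement(a, b):
    shape = np.corrcoef(a, b)[0, 1]                  # phase/shape similarity
    size  = 1.0 - abs(np.linalg.norm(a) - np.linalg.norm(b)) / np.linalg.norm(a)
    return 0.5*shape + 0.5*size                      # equal-weight composite

print(f"composite agreement score: {agreement(test, model):.3f}")
# During calibration, material parameters (e.g., head-skin stiffness) would be
# varied to maximize a score of this kind over the calibration tests.
```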

  6. 2013 Occupant Protection Risk Standing Review Panel Status Review Comments to the Human Research Program, Chief Scientist

    NASA Technical Reports Server (NTRS)

    Steinberg, Susan

    2014-01-01

    On December 17, 2013, the OP Risk SRP and participants from the JSC, HQ, and NRESS participated in a WebEx/teleconference. The purpose of the call was to allow the SRP members to: 1. Receive an update by the Human Research Program (HRP) Chief Scientist or Deputy Chief Scientist on the status of NASA's current and future exploration plans and the impact these will have on the HRP. 2. Receive an update on any changes within the HRP since the 2012 SRP meeting. 3. Receive an update by the Element or Project Scientist(s) on progress since the 2012 SRP meeting. 4. Participate in a discussion with the HRP Chief Scientist, Deputy Chief Scientist, and the Element regarding possible topics to be addressed at the next SRP meeting.

  7. Reporting of NSC Additional (A2) Data Elements. Updated July 29, 2014

    ERIC Educational Resources Information Center

    National Student Clearinghouse, 2014

    2014-01-01

    Since the 2008-09 academic year, the National Student Clearinghouse has provided its participating institutions with the option to include 13 additional data elements in their enrollment submissions. These additional data elements help make Clearinghouse data more comprehensive and enable StudentTracker participants to utilize a more robust data…

  8. Hybrid adaptive ascent flight control for a flexible launch vehicle

    NASA Astrophysics Data System (ADS)

    Lefevre, Brian D.

    For the purpose of maintaining dynamic stability and improving guidance command tracking performance under off-nominal flight conditions, a hybrid adaptive control scheme is selected and modified for use as a launch vehicle flight controller. This architecture merges a model reference adaptive approach, which utilizes both direct and indirect adaptive elements, with a classical dynamic inversion controller. This structure is chosen for a number of reasons: the properties of the reference model can be easily adjusted to tune the desired handling qualities of the spacecraft, the indirect adaptive element (which consists of an online parameter identification algorithm) continually refines the estimates of the evolving characteristic parameters utilized in the dynamic inversion, and the direct adaptive element (which consists of a neural network) augments the linear feedback signal to compensate for any nonlinearities in the vehicle dynamics. The combination of these elements enables the control system to retain the nonlinear capabilities of an adaptive network while relying heavily on the linear portion of the feedback signal to dictate the dynamic response under most operating conditions. To begin the analysis, the ascent dynamics of a launch vehicle with a single 1st stage rocket motor (typical of the Ares 1 spacecraft) are characterized. The dynamics are then linearized with assumptions that are appropriate for a launch vehicle, so that the resulting equations may be inverted by the flight controller in order to compute the control signals necessary to generate the desired response from the vehicle. Next, the development of the hybrid adaptive launch vehicle ascent flight control architecture is discussed in detail. Alterations of the generic hybrid adaptive control architecture include the incorporation of a command conversion operation which transforms guidance input from quaternion form (as provided by NASA) to the body-fixed angular rate commands needed by the hybrid adaptive flight controller, development of a Newton's method based online parameter update that is modified to include a step size which regulates the rate of change in the parameter estimates, comparison of the modified Newton's method and recursive least squares online parameter update algorithms, modification of the neural network's input structure to accommodate for the nature of the nonlinearities present in a launch vehicle's ascent flight, examination of both tracking error based and modeling error based neural network weight update laws, and integration of feedback filters for the purpose of preventing harmful interaction between the flight control system and flexible structural modes. To validate the hybrid adaptive controller, a high-fidelity Ares I ascent flight simulator and a classical gain-scheduled proportional-integral-derivative (PID) ascent flight controller were obtained from the NASA Marshall Space Flight Center. The classical PID flight controller is used as a benchmark when analyzing the performance of the hybrid adaptive flight controller. Simulations are conducted which model both nominal and off-nominal flight conditions with structural flexibility of the vehicle either enabled or disabled. First, rigid body ascent simulations are performed with the hybrid adaptive controller under nominal flight conditions for the purpose of selecting the update laws which drive the indirect and direct adaptive components. 
With the neural network disabled, the results revealed that the recursive least squares online parameter update caused high frequency oscillations to appear in the engine gimbal commands. This is highly undesirable for long and slender launch vehicles, such as the Ares I, because such oscillation of the rocket nozzle could excite unstable structural flex modes. In contrast, the modified Newton's method online parameter update produced smooth control signals and was thus selected for use in the hybrid adaptive launch vehicle flight controller. In the simulations where the online parameter identification algorithm was disabled, the tracking error based neural network weight update law forced the network's output to diverge despite repeated reductions of the adaptive learning rate. As a result, the modeling error based neural network weight update law (which generated bounded signals) is utilized by the hybrid adaptive controller in all subsequent simulations. Comparing the PID and hybrid adaptive flight controllers under nominal flight conditions in rigid body ascent simulations showed that their tracking error magnitudes are similar for a period of time during the middle of the ascent phase. Though the PID controller performs better for a short interval around the 20 second mark, the hybrid adaptive controller performs far better from roughly 70 to 120 seconds. Elevating the aerodynamic loads by increasing the force and moment coefficients produced results very similar to the nominal case. However, applying a 5% or 10% thrust reduction to the first stage rocket motor causes the tracking error magnitude observed by the PID controller to be significantly elevated and diverge rapidly as the simulation concludes. In contrast, the hybrid adaptive controller steadily maintains smaller errors (often less than 50% of the corresponding PID value). Under the same sets of flight conditions with flexibility enabled, the results exhibit similar trends with the hybrid adaptive controller performing even better in each case. Again, the reduction of the first stage rocket motor's thrust clearly illustrated the superior robustness of the hybrid adaptive flight controller.
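The step-size-regulated online parameter update described above can be sketched for a toy linear-in-the-parameters model; the regressors, gains, and plant parameters below are illustrative assumptions, not the Ares I controller.
```python
# Sketch of an online parameter update in the spirit of the modified Newton's
# method described above: a step-size factor limits how fast the estimates
# change, which smooths the identified parameters over time.
import numpy as np

theta = np.array([0.5, 0.5])          # current estimates of two plant parameters
alpha = 0.2                           # step size regulating the rate of change

def residual(theta, phi, y):
    return y - phi @ theta            # prediction error for one sample

rng = np.random.default_rng(1)
theta_true = np.array([1.8, -0.7])    # "true" plant parameters (unknown)
for _ in range(200):
    phi = rng.normal(size=2)          # regressor (e.g., measured states/inputs)
    y = phi @ theta_true + 0.01*rng.normal()
    J = -phi                          # Jacobian of the residual w.r.t. theta
    # Gauss-Newton step on the single-sample residual, scaled by alpha
    step = np.linalg.pinv(J[None, :]) @ np.array([residual(theta, phi, y)])
    theta = theta - alpha*step.ravel()
print("identified parameters:", np.round(theta, 3))
```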

  9. Memory Updating and Mental Arithmetic

    PubMed Central

    Han, Cheng-Ching; Yang, Tsung-Han; Lin, Chia-Yuan; Yen, Nai-Shing

    2016-01-01

    Is domain-general memory updating ability predictive of calculation skills or are such skills better predicted by the capacity for updating specifically numerical information? Here, we used multidigit mental multiplication (MMM) as a measure for calculating skill as this operation requires the accurate maintenance and updating of information in addition to skills needed for arithmetic more generally. In Experiment 1, we found that only individual differences with regard to a task updating numerical information following addition (MUcalc) could predict the performance of MMM, perhaps owing to common elements between the task and MMM. In Experiment 2, new updating tasks were designed to clarify this: a spatial updating task with no numbers, a numerical task with no calculation, and a word task. The results showed that both MUcalc and the spatial task were able to predict the performance of MMM but only with the more difficult problems, while other updating tasks did not predict performance. It is concluded that relevant processes involved in updating the contents of working memory support mental arithmetic in adults. PMID:26869971

  10. Updates to Simulation of a Single-Element Lean-Direct Injection Combustor Using a Polyhedral Mesh Derived From Hanging-Node Elements

    NASA Technical Reports Server (NTRS)

    Wey, Changju Thomas; Liu, Nan-Suey

    2014-01-01

    This paper summarizes the procedures for inserting a thin-layer mesh into an existing inviscid polyhedral mesh, either with or without hanging-node elements, and presents sample results from its application to the numerical solution of a single-element LDI combustor using a releasable edition of the National Combustion Code (NCC).

  11. Updates to Simulation of a Single-Element Lean-Direct Injection Combustor Using a Polyhedral Mesh Derived from Hanging-Node Elements

    NASA Technical Reports Server (NTRS)

    Wey, Thomas; Liu, Nan-Suey

    2014-01-01

    This paper summarizes the procedures for inserting a thin-layer mesh into an existing inviscid polyhedral mesh, either with or without hanging-node elements, and presents sample results from its application to the numerical solution of a single-element LDI combustor using a releasable edition of the National Combustion Code (NCC).

  12. Operational Impact of Improved Space Tracking on Collision Avoidance in the Future LEO Space Debris Environment

    NASA Astrophysics Data System (ADS)

    Sibert, D.; Borgeson, D.; Peterson, G.; Jenkin, A.; Sorge, M.

    2010-09-01

    Even if global space policy successfully curtails on-orbit explosions and ASAT demonstrations, studies indicate that the number of debris objects in Low Earth Orbit (LEO) will continue to grow solely from debris-on-debris collisions and debris generated from new launches. This study examines the threat posed by this growing space debris population over the next 30 years and how improvements in our space tracking capabilities can reduce the number of Collision Avoidance (COLA) maneuvers required to keep the risk of operational satellite loss within tolerable limits. Particular focus is given to satellites operated by the Department of Defense (DoD) and Intelligence Community (IC) in LEO. The following debris field and space tracking performance parameters were varied parametrically in the experiment to study the impact on the number of collision avoidance maneuvers required:
    - Debris Field Density (by year: 2009, 2019, 2029, and 2039)
    - Quality of Track Update (starting 1-sigma error ellipsoid)
    - Future Propagator Accuracy (error ellipsoid growth rates - Special Perturbations in 3 axes)
    - Track Update Rate for Debris (stochastic)
    - Track Update Rate for Payloads (stochastic)
    Baseline values matching present-day tracking performance for quality of track update, propagator accuracy, and track update rate were derived by analyzing updates to the unclassified Satellite Catalog (SatCat). Track update rates varied significantly for active payloads and debris, and as such we used different models for the track update rates for military payloads and debris. The analysis was conducted using the System Effectiveness Analysis Simulation (SEAS), an agent-based model developed by the United States Air Force Space Command’s Space and Missile Systems Center to evaluate the military utility of space systems. The future debris field was modeled by The Aerospace Corporation using a tool chain which models the growth of the 10 cm+ debris field using high-fidelity propagation, collision, and breakup models. Our analysis uses Two Line Element (TLE) sets and surface area data generated by this model sampled at the years 2019, 2029, and 2039. Data for the 2009 debris field is taken from the unclassified SatCat. By using Monte Carlo simulation techniques and varying the epoch of the military constellation relative to the debris field, we were able to remove the bias of initial conditions. Additional analysis was conducted looking at the military utility impact of temporarily losing the use of Intelligence Surveillance and Reconnaissance (ISR) assets due to COLA maneuvers during a large classified scenario with stressful satellite tasking. This paper and presentation will focus only on unclassified results quantifying the potential reduction in the risk assumed by satellite flyers, and the potential reduction in Delta-V usage that is possible if we are able to improve our tracking performance in any of these three areas and reduce the positional uncertainty of space objects at the time of closest approach.
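A toy illustration of the central trade-off, assuming a simplified circular-Gaussian encounter-plane uncertainty and synthetic miss distances: as the 1-sigma positional error shrinks, fewer conjunctions exceed the collision-probability threshold and fewer COLA maneuvers are triggered. None of the numbers come from the study.
```python
# Sketch of how tracking accuracy drives COLA maneuver counts: a maneuver is
# triggered when the estimated collision probability exceeds a threshold, and
# the count is repeated for different positional-uncertainty levels.
import numpy as np

rng = np.random.default_rng(2)
hard_body_radius = 20.0                                 # m, combined object size
pc_threshold = 1e-4
miss_distances = rng.uniform(50.0, 3000.0, size=500)    # m, predicted misses

def collision_probability(miss, sigma, n=20000):
    # 2D circular Gaussian uncertainty in the encounter plane (simplified)
    dx = rng.normal(miss, sigma, n)
    dy = rng.normal(0.0, sigma, n)
    return np.mean(np.hypot(dx, dy) < hard_body_radius)

for sigma in (1000.0, 300.0, 100.0):                    # 1-sigma position error, m
    maneuvers = sum(collision_probability(m, sigma) > pc_threshold
                    for m in miss_distances)
    print(f"sigma = {sigma:6.0f} m -> {maneuvers} COLA maneuvers out of 500")
```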

  13. The Updated BaSTI Stellar Evolution Models and Isochrones. I. Solar-scaled Calculations

    NASA Astrophysics Data System (ADS)

    Hidalgo, Sebastian L.; Pietrinferni, Adriano; Cassisi, Santi; Salaris, Maurizio; Mucciarelli, Alessio; Savino, Alessandro; Aparicio, Antonio; Silva Aguirre, Victor; Verma, Kuldeep

    2018-04-01

    We present an updated release of the BaSTI (a Bag of Stellar Tracks and Isochrones) stellar model and isochrone library for a solar-scaled heavy element distribution. The main input physics that have been changed from the previous BaSTI release include the solar metal mixture, electron conduction opacities, a few nuclear reaction rates, bolometric corrections, and the treatment of the overshooting efficiency for shrinking convective cores. The new model calculations cover a mass range between 0.1 and 15 M ⊙, 22 initial chemical compositions between [Fe/H] = ‑3.20 and +0.45, with helium to metal enrichment ratio dY/dZ = 1.31. The isochrones cover an age range between 20 Myr and 14.5 Gyr, consistently take into account the pre-main-sequence phase, and have been translated to a large number of popular photometric systems. Asteroseismic properties of the theoretical models have also been calculated. We compare our isochrones with results from independent databases and with several sets of observations to test the accuracy of the calculations. All stellar evolution tracks, asteroseismic properties, and isochrones are made available through a dedicated web site.

  14. System and method for image registration of multiple video streams

    DOEpatents

    Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton

    2018-02-06

    Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.

  15. Experimental Validation of a Time-Accurate Finite Element Model for Coupled Multibody Dynamics and Liquid Sloshing

    DTIC Science & Technology

    2007-04-16

    The formulation involves the velocity of the fluid mesh, the relative pressure P, the position vector, the deviatoric stress tensor τ, and the rate of deformation D. A slip factor determines how much of the fluid and structure forces are mutually exchanged. Averaging the fluid pressure eliminates the pressure checker-boarding effect.

  16. Realistic Fireteam Movement in Urban Environments

    DTIC Science & Technology

    2010-10-01

    The cost is largely consumed by the data transfer of the color and stencil buffers from the GPU to the CPU. The Threat Probability Model update cost (Intel Q6600) is given in Table 4:

    Table 4: Threat Probability Model update cost (Intel Q6600)
    Waypoints    Mean       Std Dev
    1112         1.25 ms    0.09 ms
    3785         4.07 ms    0.20 ms

  17. Application of Artificial Boundary Conditions in Sensitivity-Based Updating of Finite Element Models

    DTIC Science & Technology

    2007-06-01

    $[Z(\Omega)]$ is known as the impedance matrix, defined as the inverse of the frequency response matrix, $[Z(\Omega)] = [H(\Omega)]^{-1}$ (12), where $[Z(\Omega)] = [K - \Omega^2 M + j\Omega C]$ (13), with K, M, and C the stiffness, mass, and damping matrices.
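A short sketch of the relation in equations (12)-(13): the frequency response (receptance) matrix is obtained by inverting the impedance matrix at each frequency. The two-degree-of-freedom system below is illustrative only.
```python
# Sketch: frequency response matrix H(w) = [K - w^2 M + j w C]^(-1)
import numpy as np

M = np.diag([1.0, 2.0])                          # kg
K = np.array([[4e4, -2e4], [-2e4, 2e4]])         # N/m
C = 0.002*K                                      # simple proportional damping

def frf(omega):
    Z = K - omega**2*M + 1j*omega*C              # impedance matrix Z(w)
    return np.linalg.inv(Z)                      # receptance H(w) = Z(w)^-1

for f_hz in (10.0, 20.0, 40.0):
    H = frf(2*np.pi*f_hz)
    print(f"{f_hz:5.1f} Hz  |H11| = {abs(H[0, 0]):.3e} m/N")
```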

  18. Vibration assessment and structural monitoring of the Basilica of Maxentius in Rome

    NASA Astrophysics Data System (ADS)

    Pau, Annamaria; Vestroni, Fabrizio

    2013-12-01

    The present paper addresses the analysis of the ambient vibrations of the Basilica of Maxentius in Rome. This monument, in the city centre and close to busy roads, was the largest vaulted structure in the Roman Empire. Today, only one aisle of the structure remains, suffering from a complex crack scenario. The ambient vibration response is used to investigate traffic induced vibration and compare this to values that could be a potential cause of structural damage according to international standards. Using output-only methods, natural frequencies and mode shapes are obtained from the response, allowing comparison with predictions made with a finite element model. Notwithstanding simplifications regarding material behavior and crack pattern in the finite element model, an agreement between numerical and experimental results is reached once selected mechanical parameters are adjusted. A knowledge of modal characteristics and the availability of an updated model may be a first step of a structural monitoring program that could reveal any decay over time in the structural integrity of the monument.

  19. Patient-specific non-linear finite element modelling for predicting soft organ deformation in real-time: application to non-rigid neuroimage registration.

    PubMed

    Wittek, Adam; Joldes, Grand; Couton, Mathieu; Warfield, Simon K; Miller, Karol

    2010-12-01

    Long computation times of non-linear (i.e. accounting for geometric and material non-linearity) biomechanical models have been regarded as one of the key factors preventing application of such models in predicting organ deformation for image-guided surgery. This contribution presents real-time patient-specific computation of the deformation field within the brain for six cases of brain shift induced by craniotomy (i.e. surgical opening of the skull) using specialised non-linear finite element procedures implemented on a graphics processing unit (GPU). In contrast to commercial finite element codes that rely on an updated Lagrangian formulation and implicit integration in time domain for steady state solutions, our procedures utilise the total Lagrangian formulation with explicit time stepping and dynamic relaxation. We used patient-specific finite element meshes consisting of hexahedral and non-locking tetrahedral elements, together with realistic material properties for the brain tissue and appropriate contact conditions at the boundaries. The loading was defined by prescribing deformations on the brain surface under the craniotomy. Application of the computed deformation fields to register (i.e. align) the preoperative and intraoperative images indicated that the models very accurately predict the intraoperative deformations within the brain. For each case, computing the brain deformation field took less than 4 s using an NVIDIA Tesla C870 GPU, which is two orders of magnitude reduction in computation time in comparison to our previous study in which the brain deformation was predicted using a commercial finite element solver executed on a personal computer. Copyright © 2010 Elsevier Ltd. All rights reserved.
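The explicit/dynamic-relaxation idea can be sketched on a single-degree-of-freedom nonlinear spring: a damped explicit march is run until the quasi-static solution settles. This toy stands in for the GPU total Lagrangian finite element procedures; the stiffness, damping, and load values are assumptions.
```python
# Sketch of dynamic relaxation with explicit time stepping: march a damped
# dynamic problem forward until it converges to the steady deformation.
import numpy as np

mass, damping, dt = 1.0, 8.0, 1e-3           # dynamic-relaxation parameters
applied_force = 2.0                          # prescribed loading

def internal_force(u):
    return 50.0*u + 400.0*u**3               # geometrically nonlinear response

u, v = 0.0, 0.0
for step in range(20000):
    residual = applied_force - internal_force(u) - damping*v
    a = residual / mass                      # explicit time integration step
    v += dt*a
    u += dt*v
    if abs(residual) < 1e-8 and abs(v) < 1e-8:
        break
print(f"converged displacement u = {u:.5f} after {step+1} steps")
```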

  20. 2013 Pharmacology Risk SRP Status Review Comments to Chief Scientist. The Risk of Clinically Relevant Unpredicted Effects of Medication

    NASA Technical Reports Server (NTRS)

    2014-01-01

    On December 5, 2013, the Pharmacology Risk SRP and participants from the JSC, HQ, the NSBRI, and NRESS participated in a WebEx/teleconference. The purpose of the call (as stated in the Statement of Task) was to allow the SRP members to: 1. Receive an update by the HRP Chief Scientist or Deputy Chief Scientist on the status of NASA's current and future exploration plans and the impact these will have on the HRP. 2. Receive an update on any changes within the HRP since the 2012 SRP meeting. 3. Receive an update by the Element or Project Scientist(s) on progress since the 2012 SRP meeting. 4. Participate in a discussion with the HRP Chief Scientist, Deputy Chief Scientist, and the Element regarding possible topics to be addressed at the next SRP meeting.

  1. Persuasive system design does matter: a systematic review of adherence to web-based interventions.

    PubMed

    Kelders, Saskia M; Kok, Robin N; Ossebaard, Hans C; Van Gemert-Pijnen, Julia E W C

    2012-11-14

    Although web-based interventions for promoting health and health-related behavior can be effective, poor adherence is a common issue that needs to be addressed. Technology as a means to communicate the content in web-based interventions has been neglected in research. Indeed, technology is often seen as a black-box, a mere tool that has no effect or value and serves only as a vehicle to deliver intervention content. In this paper we examine technology from a holistic perspective. We see it as a vital and inseparable aspect of web-based interventions to help explain and understand adherence. This study aims to review the literature on web-based health interventions to investigate whether intervention characteristics and persuasive design affect adherence to a web-based intervention. We conducted a systematic review of studies into web-based health interventions. Per intervention, intervention characteristics, persuasive technology elements and adherence were coded. We performed a multiple regression analysis to investigate whether these variables could predict adherence. We included 101 articles on 83 interventions. The typical web-based intervention is meant to be used once a week, is modular in set-up, is updated once a week, lasts for 10 weeks, includes interaction with the system and a counselor and peers on the web, includes some persuasive technology elements, and about 50% of the participants adhere to the intervention. Regarding persuasive technology, we see that primary task support elements are most commonly employed (mean 2.9 out of a possible 7.0). Dialogue support and social support are less commonly employed (mean 1.5 and 1.2 out of a possible 7.0, respectively). When comparing the interventions of the different health care areas, we find significant differences in intended usage (p=.004), setup (p<.001), updates (p<.001), frequency of interaction with a counselor (p<.001), the system (p=.003) and peers (p=.017), duration (F=6.068, p=.004), adherence (F=4.833, p=.010) and the number of primary task support elements (F=5.631, p=.005). Our final regression model explained 55% of the variance in adherence. In this model, a RCT study as opposed to an observational study, increased interaction with a counselor, more frequent intended usage, more frequent updates and more extensive employment of dialogue support significantly predicted better adherence. Using intervention characteristics and persuasive technology elements, a substantial amount of variance in adherence can be explained. Although there are differences between health care areas on intervention characteristics, health care area per se does not predict adherence. Rather, the differences in technology and interaction predict adherence. The results of this study can be used to make an informed decision about how to design a web-based intervention to which patients are more likely to adhere.

  2. Through-process modelling of texture and anisotropy in AA5182

    NASA Astrophysics Data System (ADS)

    Crumbach, M.; Neumann, L.; Goerdeler, M.; Aretz, H.; Gottstein, G.; Kopp, R.

    2006-07-01

    A through-process texture and anisotropy prediction for AA5182 sheet production from hot rolling through cold rolling and annealing is reported. Thermo-mechanical process data predicted by the finite element method (FEM) package T-Pack based on the software LARSTRAN were fed into a combination of physics based microstructure models for deformation texture (GIA), work hardening (3IVM), nucleation texture (ReNuc), and recrystallization texture (StaRT). The final simulated sheet texture was fed into a FEM simulation of cup drawing employing a new concept of interactively updated texture based yield locus predictions. The modelling results of texture development and anisotropy were compared to experimental data. The applicability to other alloys and processes is discussed.

  3. Alkali trace elements in Gale crater, Mars, with ChemCam: Calibration update and geological implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Payre, Valerie; Fabre, Cecile; Cousin, Agnes

    The Chemistry Camera (ChemCam) instrument onboard Curiosity can detect minor and trace elements such as lithium, strontium, rubidium, and barium. Their abundances can provide some insights about Mars' magmatic history and sedimentary processes. We focus on developing new quantitative models for these elements by using a new laboratory database (more than 400 samples) that displays diverse compositions that are more relevant for Gale crater than the previous ChemCam database. These models are based on univariate calibration curves. For each element, the best model is selected depending on the results obtained by using the ChemCam calibration targets onboard Curiosity. New quantifications of Li, Sr, Rb, and Ba in Gale samples have been obtained for the first 1000 Martian days. Comparing these data in alkaline and magnesian rocks with the felsic and mafic clasts from the Martian meteorite NWA7533—from approximately the same geologic period—we observe a similar behavior: Sr, Rb, and Ba are more concentrated in soluble- and incompatible-element-rich mineral phases (Si, Al, and alkali-rich). Correlations between these trace elements and potassium in materials analyzed by ChemCam reveal a strong affinity with K-bearing phases such as feldspars, K-phyllosilicates, and potentially micas in igneous and sedimentary rocks. However, lithium is found in comparable abundances in alkali-rich and magnesium-rich Gale rocks. This very soluble element can be associated with both alkali and Mg-Fe phases such as pyroxene and feldspar. Here, these observations of Li, Sr, Rb, and Ba mineralogical associations highlight their substitution with potassium and their incompatibility in magmatic melts.

  4. Alkali trace elements in Gale crater, Mars, with ChemCam: Calibration update and geological implications

    DOE PAGES

    Payre, Valerie; Fabre, Cecile; Cousin, Agnes; ...

    2017-03-20

    The Chemistry Camera (ChemCam) instrument onboard Curiosity can detect minor and trace elements such as lithium, strontium, rubidium, and barium. Their abundances can provide some insights about Mars' magmatic history and sedimentary processes. We focus on developing new quantitative models for these elements by using a new laboratory database (more than 400 samples) that displays diverse compositions that are more relevant for Gale crater than the previous ChemCam database. These models are based on univariate calibration curves. For each element, the best model is selected depending on the results obtained by using the ChemCam calibration targets onboard Curiosity. New quantifications of Li, Sr, Rb, and Ba in Gale samples have been obtained for the first 1000 Martian days. Comparing these data in alkaline and magnesian rocks with the felsic and mafic clasts from the Martian meteorite NWA7533—from approximately the same geologic period—we observe a similar behavior: Sr, Rb, and Ba are more concentrated in soluble- and incompatible-element-rich mineral phases (Si, Al, and alkali-rich). Correlations between these trace elements and potassium in materials analyzed by ChemCam reveal a strong affinity with K-bearing phases such as feldspars, K-phyllosilicates, and potentially micas in igneous and sedimentary rocks. However, lithium is found in comparable abundances in alkali-rich and magnesium-rich Gale rocks. This very soluble element can be associated with both alkali and Mg-Fe phases such as pyroxene and feldspar. Here, these observations of Li, Sr, Rb, and Ba mineralogical associations highlight their substitution with potassium and their incompatibility in magmatic melts.

  5. ODMedit: uniform semantic annotation for data integration in medicine based on a public metadata repository.

    PubMed

    Dugas, Martin; Meidt, Alexandra; Neuhaus, Philipp; Storck, Michael; Varghese, Julian

    2016-06-01

    The volume and complexity of patient data - especially in personalised medicine - is steadily increasing, both regarding clinical data and genomic profiles: Typically more than 1,000 items (e.g., laboratory values, vital signs, diagnostic tests etc.) are collected per patient in clinical trials. In oncology hundreds of mutations can potentially be detected for each patient by genomic profiling. Therefore data integration from multiple sources constitutes a key challenge for medical research and healthcare. Semantic annotation of data elements can facilitate to identify matching data elements in different sources and thereby supports data integration. Millions of different annotations are required due to the semantic richness of patient data. These annotations should be uniform, i.e., two matching data elements shall contain the same annotations. However, large terminologies like SNOMED CT or UMLS don't provide uniform coding. It is proposed to develop semantic annotations of medical data elements based on a large-scale public metadata repository. To achieve uniform codes, semantic annotations shall be re-used if a matching data element is available in the metadata repository. A web-based tool called ODMedit ( https://odmeditor.uni-muenster.de/ ) was developed to create data models with uniform semantic annotations. It contains ~800,000 terms with semantic annotations which were derived from ~5,800 models from the portal of medical data models (MDM). The tool was successfully applied to manually annotate 22 forms with 292 data items from CDISC and to update 1,495 data models of the MDM portal. Uniform manual semantic annotation of data models is feasible in principle, but requires a large-scale collaborative effort due to the semantic richness of patient data. A web-based tool for these annotations is available, which is linked to a public metadata repository.

  6. Spatio-Semantic Comparison of Large 3d City Models in Citygml Using a Graph Database

    NASA Astrophysics Data System (ADS)

    Nguyen, S. H.; Yao, Z.; Kolbe, T. H.

    2017-10-01

    A city may have multiple CityGML documents recorded at different times or surveyed by different users. To analyse the city's evolution over a given period of time, as well as to update or edit the city model without negating modifications made by other users, it is of utmost importance to first compare, detect and locate spatio-semantic changes between CityGML datasets. This is however difficult due to the fact that CityGML elements belong to a complex hierarchical structure containing multi-level deep associations, which can basically be considered as a graph. Moreover, CityGML allows multiple syntactic ways to define an object leading to syntactic ambiguities in the exchange format. Furthermore, CityGML is capable of including not only 3D urban objects' graphical appearances but also their semantic properties. Since to date, no known algorithm is capable of detecting spatio-semantic changes in CityGML documents, a frequent approach is to replace the older models completely with the newer ones, which not only costs computational resources, but also loses track of collaborative and chronological changes. Thus, this research proposes an approach capable of comparing two arbitrarily large-sized CityGML documents on both semantic and geometric level. Detected deviations are then attached to their respective sources and can easily be retrieved on demand. As a result, updating a 3D city model using this approach is much more efficient as only real changes are committed. To achieve this, the research employs a graph database as the main data structure for storing and processing CityGML datasets in three major steps: mapping, matching and updating. The mapping process transforms input CityGML documents into respective graph representations. The matching process compares these graphs and attaches edit operations on the fly. Found changes can then be executed using the Web Feature Service (WFS), the standard interface for updating geographical features across the web.
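The mapping/matching step can be illustrated by recursively diffing two nested structures and emitting edit operations for only the real changes; plain dictionaries stand in here for CityGML features stored as graph nodes, and this is not the authors' implementation.
```python
# Sketch of the matching idea: compare two hierarchical city-model fragments
# recursively and emit edit operations (insert/delete/update) for differences.
def diff(old, new, path=""):
    edits = []
    for key in sorted(set(old) | set(new)):
        p = f"{path}/{key}"
        if key not in new:
            edits.append(("delete", p))
        elif key not in old:
            edits.append(("insert", p, new[key]))
        elif isinstance(old[key], dict) and isinstance(new[key], dict):
            edits.extend(diff(old[key], new[key], p))   # descend the hierarchy
        elif old[key] != new[key]:
            edits.append(("update", p, old[key], new[key]))
    return edits

v2015 = {"Building_17": {"height": 12.5, "roofType": "flat"}}
v2018 = {"Building_17": {"height": 15.0, "roofType": "flat",
                         "function": "office"}}
for edit in diff(v2015, v2018):
    print(edit)   # only these edit operations would be committed to the model
```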

  7. The Status of School Finance Equity in Texas: A 2009 Update

    ERIC Educational Resources Information Center

    Cortez, Albert

    2009-01-01

    In Texas, all students are equal, but once again some are more equal than others. This policy update provides a description of the key elements of the existing Texas school funding system, identifies features that contribute to equity and those that maintain and expand inequity, and includes recommended reforms that would reinstate critical…

  8. Nonmarket economic values of forest insect pests: An updated literature review

    Treesearch

    Randall S. Rosenberger; Lauren A. Bell; Patricia A. Champ; Eric. L. Smith

    2012-01-01

    This report updates the literature review and synthesis of economic valuation studies on the impacts of forest insect pests by Rosenberger and Smith (1997). A conceptual framework is presented to establish context for the studies. This report also discusses the concept of ecosystem services; identifies key elements of each study; examines areas of future research; and...

  9. Report of the IAU Working Group on Cartographic Coordinates and Rotational Elements: 2015

    NASA Astrophysics Data System (ADS)

    Archinal, B. A.; Acton, C. H.; A'Hearn, M. F.; Conrad, A.; Consolmagno, G. J.; Duxbury, T.; Hestroffer, D.; Hilton, J. L.; Kirk, R. L.; Klioner, S. A.; McCarthy, D.; Meech, K.; Oberst, J.; Ping, J.; Seidelmann, P. K.; Tholen, D. J.; Thomas, P. C.; Williams, I. P.

    2018-03-01

    This report continues the practice where the IAU Working Group on Cartographic Coordinates and Rotational Elements revises recommendations regarding those topics for the planets, satellites, minor planets, and comets approximately every 3 years. The Working Group has now become a "functional working group" of the IAU, and its membership is open to anyone interested in participating. We describe the procedure for submitting questions about the recommendations given here or the application of these recommendations for creating a new or updated coordinate system for a given body. Regarding body orientation, the following bodies have been updated: Mercury, based on MESSENGER results; Mars, along with a refined longitude definition; Phobos; Deimos; (1) Ceres; (52) Europa; (243) Ida; (2867) Šteins; Neptune; (134340) Pluto and its satellite Charon; comets 9P/Tempel 1, 19P/Borrelly, 67P/Churyumov-Gerasimenko, and 103P/Hartley 2, noting that such information is valid only between specific epochs. The special challenges related to mapping 67P/Churyumov-Gerasimenko are also discussed. Approximate expressions for the Earth have been removed in order to avoid confusion, and the low precision series expression for the Moon's orientation has been removed. The previously online only recommended orientation model for (4) Vesta is repeated with an explanation of how it was updated. Regarding body shape, text has been included to explain the expected uses of such information, and the relevance of the cited uncertainty information. The size of the Sun has been updated, and notation added that the size and the ellipsoidal axes for the Earth and Jupiter have been recommended by an IAU Resolution. The distinction of a reference radius for a body (here, the Moon and Titan) is made between cartographic uses, and for orthoprojection and geophysical uses. The recommended radius for Mercury has been updated based on MESSENGER results. The recommended radius for Titan is returned to its previous value. Size information has been updated for 13 other Saturnian satellites and added for Aegaeon. The sizes of Pluto and Charon have been updated. Size information has been updated for (1) Ceres and given for (16) Psyche and (52) Europa. The size of (25143) Itokawa has been corrected. In addition, the discussion of terminology for the poles (hemispheres) of small bodies has been modified and a discussion on cardinal directions added. Although they continue to be used for planets and their satellites, it is assumed that the planetographic and planetocentric coordinate system definitions do not apply to small bodies. However, planetocentric and planetodetic latitudes and longitudes may be used on such bodies, following the right-hand rule. We repeat our previous recommendations that planning and efforts be made to make controlled cartographic products; newly recommend that common formulations should be used for orientation and size; continue to recommend that a community consensus be developed for the orientation models of Jupiter and Saturn; newly recommend that historical summaries of the coordinate systems for given bodies should be developed, and point out that for planets and satellites planetographic systems have generally been historically preferred over planetocentric systems, and that in cases when planetographic coordinates have been widely used in the past, there is no obvious advantage to switching to the use of planetocentric coordinates. 
The Working Group also requests community input on the question submitting process, posting of updates to the Working Group website, and on whether recommendations should be made regarding exoplanet coordinate systems.

  10. Recent Updates of A Multi-Phase Transport (AMPT) Model

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Wei

    2008-10-01

    We will present recent updates to the AMPT model, a Monte Carlo transport model for high energy heavy ion collisions, since its first public release in 2004 and the corresponding detailed descriptions in Phys. Rev. C 72, 064901 (2005). The updates often result from user requests. Some of these updates expand the physics processes or descriptions in the model, while some updates improve the usability of the model such as providing the initial parton distributions or help avoid crashes on some operating systems. We will also explain how the AMPT model is being maintained and updated.

  11. Atomic weights of the elements 1999

    USGS Publications Warehouse

    Coplen, T.B.

    2001-01-01

    The biennial review of atomic-weight, Ar(E), determinations and other cognate data have resulted in changes for the standard atomic weights of the following elements:

    Element      From                      To
    nitrogen     14.006 74 ± 0.000 07      14.0067 ± 0.0002
    sulfur       32.066 ± 0.006            32.065 ± 0.005
    chlorine     35.4527 ± 0.0009          35.453 ± 0.002
    germanium    72.61 ± 0.02              72.64 ± 0.01
    xenon        131.29 ± 0.02             131.293 ± 0.006
    erbium       167.26 ± 0.03             167.259 ± 0.003
    uranium      238.0289 ± 0.0001         238.028 91 ± 0.000 03

    Presented are updated tables of the standard atomic weights and their uncertainties estimated by combining experimental uncertainties and terrestrial variabilities. In addition, this report again contains an updated table of relative atomic mass values and half-lives of selected radioisotopes. Changes in the evaluated isotopic abundance values from those published in 1997 are so minor that an updated list will not be published for the year 1999. Many elements have a different isotopic composition in some nonterrestrial materials. Some recent data on parent nuclides that might affect isotopic abundances or atomic-weight values are included in this report for the information of the interested scientific community. © 2001 American Institute of Physics.

  12. Moving Particles Through a Finite Element Mesh

    PubMed Central

    Peskin, Adele P.; Hardin, Gary R.

    1998-01-01

    We present a new numerical technique for modeling the flow around multiple objects moving in a fluid. The method tracks the dynamic interaction between each particle and the fluid. The movements of the fluid and the object are directly coupled. A background mesh is designed to fit the geometry of the overall domain. The mesh is designed independently of the presence of the particles except in terms of how fine it must be to track particles of a given size. Each particle is represented by a geometric figure that describes its boundary. This figure overlies the mesh. Nodes are added to the mesh where the particle boundaries intersect the background mesh, increasing the number of nodes contained in each element whose boundary is intersected. These additional nodes are then used to describe and track the particle in the numerical scheme. Appropriate element shape functions are defined to approximate the solution on the elements with extra nodes. The particles are moved through the mesh by moving only the overlying nodes defining the particles. The regular finite element grid remains unchanged. In this method, the mesh does not distort as the particles move. Instead, only the placement of particle-defining nodes changes as the particles move. Element shape functions are updated as the nodes move through the elements. This method is especially suited for models of moderate numbers of moderate-size particles, where the details of the fluid-particle coupling are important. Both the complications of creating finite element meshes around appreciable numbers of particles, and extensive remeshing upon movement of the particles are simplified in this method. PMID:28009377
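A sketch of the node-insertion step, assuming a two-dimensional circular particle on a regular background grid: intersection points of the particle boundary with the grid lines become the extra nodes, and moving the particle only moves these points, never the grid.
```python
# Sketch: find where a circular particle boundary crosses the edges of a fixed
# background grid; these crossings are the extra nodes added to cut elements.
import numpy as np

cx, cy, r = 0.42, 0.35, 0.21          # particle centre and radius (illustrative)
grid = np.linspace(0.0, 1.0, 11)      # background mesh lines at 0.0, 0.1, ...

extra_nodes = []
for x in grid:                         # vertical mesh lines x = const
    if abs(x - cx) <= r:
        dy = np.sqrt(r**2 - (x - cx)**2)
        extra_nodes += [(x, cy - dy), (x, cy + dy)]
for y in grid:                         # horizontal mesh lines y = const
    if abs(y - cy) <= r:
        dx = np.sqrt(r**2 - (y - cy)**2)
        extra_nodes += [(cx - dx, y), (cx + dx, y)]

print(f"{len(extra_nodes)} boundary nodes inserted into the background mesh")
# As the particle moves, only cx, cy change and these nodes are recomputed;
# the background grid itself is never remeshed.
```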

  13. Integration of a Finite Element Model with the DAP Bone Remodeling Model to Characterize Bone Response to Skeletal Loading

    NASA Technical Reports Server (NTRS)

    Werner, Christopher R.; Mulugeta, Lealem; Myers, J. G.; Pennline, J. A.

    2015-01-01

    NASA's Digital Astronaut Project (DAP) has developed a bone remodeling model that has been validated for predicting volumetric bone mineral density (vBMD) changes of trabecular and cortical bone in the absence of mechanical loading. The model was recently updated to include skeletal loading from exercise and free living activities to maintain healthy bone using a new daily load stimulus (DLS). This new formula was developed based on an extensive review of existing DLS formulas, as discussed in the abstract by Pennline et al. The DLS formula incorporated into the bone remodeling model utilizes strains and stress calculated from a finite element model (FEM) of the bone region of interest. The proximal femur was selected for the initial application of the DLS formula, with a specific focus on the femoral neck. METHODS: The FEM was generated from CAD geometry of a femur using de-identified CT data. The femur was meshed using linear tetrahedral elements (Fig. 1) with higher mesh densities in the femoral neck region, which is the primary region of interest for the initial application of the DLS formula in concert with the DAP bone remodeling model. Nodal loads were applied to the femoral head and the greater trochanter, and the base of the femur was held fixed. An L2 norm study was conducted to reduce the length of the femoral shaft without significantly impacting the stresses in the femoral neck. The material properties of the FEM of the proximal femur were separated between cortical and trabecular regions to work with the bone remodeling model. Determining the elements with cortical material properties in the FEM was based on publicly available CT hip scans [4] that were segmented, cleaned, and overlaid onto the FEM.

  14. A constrained modulus reconstruction technique for breast cancer assessment.

    PubMed

    Samani, A; Bishop, J; Plewes, D B

    2001-09-01

    A reconstruction technique for breast tissue elasticity modulus is described. This technique assumes that the geometry of normal and suspicious tissues is available from a contrast-enhanced magnetic resonance image. Furthermore, it is assumed that the modulus is constant throughout each tissue volume. The technique, which uses quasi-static strain data, is iterative; each iteration involves modulus updating followed by stress calculation. Mechanical stimulation of the breast is assumed to be applied by two rigid compression plates. As a result, stress is calculated using the finite element method based on the well-controlled boundary conditions of the compression plates. Using the calculated stress and the measured strain, modulus updating is done element-by-element based on Hooke's law. Breast tissue modulus reconstruction using simulated data and phantom modulus reconstruction using experimental data indicate that the technique is robust.
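A minimal sketch of the element-by-element iteration, with a one-dimensional column of elements in series between the plates standing in for the finite element solve; the measured strains and plate force are synthetic, and the final scaling step fixes the absolute modulus level using the known plate reaction.
```python
# Sketch of iterative modulus reconstruction: stress calculation followed by an
# element-wise Hooke's-law update, repeated until the modulus field settles.
import numpy as np

heights        = np.array([10.0, 10.0, 10.0, 10.0])   # mm, element heights
true_modulus   = np.array([ 5.0, 12.5,  4.2, 25.0])   # kPa, unknown target
plate_displ    = 2.0                                   # mm, prescribed

# Synthetic "measurements": strains from imaging, reaction force from the plate
sigma_true      = plate_displ / np.sum(heights / true_modulus)
measured_strain = sigma_true / true_modulus
measured_force  = sigma_true                           # unit plate area assumed

def fe_stress(E):
    # Stand-in for the FE solve: a series column between rigid plates carries a
    # single axial stress set by the plate displacement and current moduli.
    return plate_displ / np.sum(heights / E)

E = np.full(4, 10.0)                                   # initial guess, kPa
for _ in range(50):
    sigma = fe_stress(E)                               # stress calculation step
    E = sigma / measured_strain                        # Hooke's-law modulus update

E *= measured_force / fe_stress(E)                     # fix the absolute scale
print("reconstructed moduli (kPa):", np.round(E, 2))   # ~ [5.0, 12.5, 4.2, 25.0]
```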

  15. OOM - OBJECT ORIENTATION MANIPULATOR, VERSION 6.1

    NASA Technical Reports Server (NTRS)

    Goza, S. P.

    1994-01-01

    The Object Orientation Manipulator (OOM) is an application program for creating, rendering, and recording three-dimensional computer-generated still and animated images. This is done using geometrically defined 3D models, cameras, and light sources, referred to collectively as animation elements. OOM does not provide the tools necessary to construct 3D models; instead, it imports binary format model files generated by the Solid Surface Modeler (SSM). Model files stored in other formats must be converted to the SSM binary format before they can be used in OOM. SSM is available as MSC-21914 or as part of the SSM/OOM bundle, COS-10047. Among OOM's features are collision detection (with visual and audio feedback), the capability to define and manipulate hierarchical relationships between animation elements, stereographic display, and ray-traced rendering. OOM uses Euler angle transformations for calculating the results of translation and rotation operations. OOM provides an interactive environment for the manipulation and animation of models, cameras, and light sources. Models are the basic entity upon which OOM operates and are therefore considered the primary animation elements. Cameras and light sources are considered secondary animation elements. A camera, in OOM, is simply a location within the three-space environment from which the contents of the environment are observed. OOM supports the creation and full animation of cameras. Light sources can be defined, positioned and linked to models, but they cannot be animated independently. OOM can simultaneously accommodate as many animation elements as the host computer's memory permits. Once the required animation elements are present, the user may position them, orient them, and define any initial relationships between them. Once the initial relationships are defined, the user can display individual still views for rendering and output, or define motion for the animation elements by using the Interp Animation Editor. The program provides the capability to save still images, animated sequences of frames, and the information that describes the initialization process for an OOM session. OOM provides the same rendering and output options for both still and animated images. OOM is equipped with a robust model manipulation environment featuring a full screen viewing window, a menu-oriented user interface, and an interpolative Animation Editor. It provides three display modes: solid, wire frame, and simple, that allow the user to trade off visual authenticity for update speed. In the solid mode, each model is drawn based on the shading characteristics assigned to it when it was built. All of the shading characteristics supported by SSM are recognized and properly rendered in this mode. If increasing model complexity impedes the operation of OOM in this mode, then wireframe and simple modes are available. These provide substantially faster screen updates than solid mode. The creation and placement of cameras and light sources is under complete control of the user. One light source is provided in the default element set. It is modeled as a direct light source providing a type of lighting analogous to that provided by the Sun. OOM can accommodate as many light sources as the memory of the host computer permits. Animation is created in OOM using a technique called key frame interpolation. First, various program functions are used to load models, load or create light sources and cameras, and specify initial positions for each element. 
When these steps are completed, the Interp function is used to create an animation sequence for each element to be animated. An animation sequence consists of a user-defined number of frames (screen images) with some subset of those being defined as key frames. The motion of the element between key frames is interpolated automatically by the software. Key frames thus act as transition points in the motion of an element. This saves the user from having to individually define element data at each frame of a sequence. Animation frames and still images can be output to videotape recorders, film recorders, color printers, and disk files. OOM is written in C-language for implementation on SGI IRIS 4D series workstations running the IRIX operating system. A minimum of 8Mb of RAM is recommended for this program. The standard distribution medium for OOM is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. OOM is also offered as a bundle with a related program, SSM (Solid Surface Modeler). Please see the abstract for SSM/OOM (COS-10047) for information about the bundled package. OOM was released in 1993.
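
    As a hedged illustration of key frame interpolation as described above, the sketch below linearly interpolates an element's position between user-defined key frames. The function names and data layout are invented for the example and do not reflect OOM's actual data structures; OOM's interpolation method is not specified here, so simple linear interpolation is an assumption.

```python
# Minimal sketch of key-frame interpolation (assumed linear); illustrative only.
import numpy as np

def interpolate_frames(keyframes, n_frames):
    """keyframes: dict {frame_index: (x, y, z)}; returns positions for all frames."""
    keys = sorted(keyframes)
    positions = np.zeros((n_frames, 3))
    for f in range(n_frames):
        if f <= keys[0]:
            positions[f] = keyframes[keys[0]]
        elif f >= keys[-1]:
            positions[f] = keyframes[keys[-1]]
        else:
            # find the bracketing key frames and interpolate linearly between them
            k1 = max(k for k in keys if k <= f)
            k2 = min(k for k in keys if k >= f)
            t = 0.0 if k2 == k1 else (f - k1) / (k2 - k1)
            p1, p2 = np.array(keyframes[k1]), np.array(keyframes[k2])
            positions[f] = (1 - t) * p1 + t * p2
    return positions

# Example: a model moves from the origin to (10, 0, 0) by frame 15 and then holds.
frames = interpolate_frames({0: (0, 0, 0), 15: (10, 0, 0), 29: (10, 0, 0)}, 30)
print(frames[7], frames[20])
```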

  16. Development of a core Clostridium thermocellum kinetic metabolic model consistent with multiple genetic perturbations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dash, Satyakam; Khodayari, Ali; Zhou, Jilai

    Background. Clostridium thermocellum is a Gram-positive anaerobe with the ability to hydrolyze and metabolize cellulose into biofuels such as ethanol, making it an attractive candidate for consolidated bioprocessing (CBP). At present, metabolic engineering in C. thermocellum is hindered due to the incomplete description of its metabolic repertoire and regulation within a predictive metabolic model. Genome-scale metabolic (GSM) models augmented with kinetic models of metabolism have been shown to be effective at recapitulating perturbed metabolic phenotypes. Results. In this effort, we first update a second-generation genome-scale metabolic model (iCth446) for C. thermocellum by correcting cofactor dependencies, restoring elemental and charge balances, and updating GAM and NGAM values to improve phenotype predictions. The iCth446 model is next used as a scaffold to develop a core kinetic model (k-ctherm118) of the C. thermocellum central metabolism using the Ensemble Modeling (EM) paradigm. Model parameterization is carried out by simultaneously imposing fermentation yield data in lactate, malate, acetate, and hydrogen production pathways for 19 measured metabolites spanning a library of 19 distinct single and multiple gene knockout mutants along with 18 intracellular metabolite concentration data for a Δgldh mutant and ten experimentally measured Michaelis–Menten kinetic parameters. Conclusions. The k-ctherm118 model captures significant metabolic changes caused by (1) nitrogen limitation leading to increased yields for lactate, pyruvate, and amino acids, and (2) ethanol stress causing an increase in intracellular sugar phosphate concentrations (~1.5-fold) due to upregulation of cofactor pools. Robustness analysis of k-ctherm118 alludes to the presence of a secondary activity of ketol-acid reductoisomerase and possible regulation by valine and/or leucine pool levels. In addition, cross-validation and robustness analysis allude to missing elements in k-ctherm118 and suggest additional experiments to improve kinetic model prediction fidelity. Overall, the study quantitatively assesses the advantages of EM-based kinetic modeling towards improved prediction of C. thermocellum metabolism and develops a predictive kinetic model which can be used to design biofuel-overproducing strains.
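
    As a hedged illustration of the kinetic building block referenced above, the sketch below evaluates a Michaelis–Menten rate law; the vmax and km values are invented for the example and are not parameters of k-ctherm118.

```python
# Michaelis-Menten rate law; parameter values are illustrative only.
def michaelis_menten(s, vmax, km):
    """Reaction rate for substrate concentration s (same units as km)."""
    return vmax * s / (km + s)

# e.g. a reaction with vmax = 2.0 mmol/gDW/h and km = 0.5 mM at s = 1.0 mM
print(michaelis_menten(1.0, vmax=2.0, km=0.5))  # ~1.33 mmol/gDW/h
```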

  17. Development of a core Clostridium thermocellum kinetic metabolic model consistent with multiple genetic perturbations

    DOE PAGES

    Dash, Satyakam; Khodayari, Ali; Zhou, Jilai; ...

    2017-05-02

    Background. Clostridium thermocellum is a Gram-positive anaerobe with the ability to hydrolyze and metabolize cellulose into biofuels such as ethanol, making it an attractive candidate for consolidated bioprocessing (CBP). At present, metabolic engineering in C. thermocellum is hindered due to the incomplete description of its metabolic repertoire and regulation within a predictive metabolic model. Genome-scale metabolic (GSM) models augmented with kinetic models of metabolism have been shown to be effective at recapitulating perturbed metabolic phenotypes. Results. In this effort, we first update a second-generation genome-scale metabolic model (iCth446) for C. thermocellum by correcting cofactor dependencies, restoring elemental and charge balances, and updating GAM and NGAM values to improve phenotype predictions. The iCth446 model is next used as a scaffold to develop a core kinetic model (k-ctherm118) of the C. thermocellum central metabolism using the Ensemble Modeling (EM) paradigm. Model parameterization is carried out by simultaneously imposing fermentation yield data in lactate, malate, acetate, and hydrogen production pathways for 19 measured metabolites spanning a library of 19 distinct single and multiple gene knockout mutants along with 18 intracellular metabolite concentration data for a Δgldh mutant and ten experimentally measured Michaelis–Menten kinetic parameters. Conclusions. The k-ctherm118 model captures significant metabolic changes caused by (1) nitrogen limitation leading to increased yields for lactate, pyruvate, and amino acids, and (2) ethanol stress causing an increase in intracellular sugar phosphate concentrations (~1.5-fold) due to upregulation of cofactor pools. Robustness analysis of k-ctherm118 alludes to the presence of a secondary activity of ketol-acid reductoisomerase and possible regulation by valine and/or leucine pool levels. In addition, cross-validation and robustness analysis allude to missing elements in k-ctherm118 and suggest additional experiments to improve kinetic model prediction fidelity. Overall, the study quantitatively assesses the advantages of EM-based kinetic modeling towards improved prediction of C. thermocellum metabolism and develops a predictive kinetic model which can be used to design biofuel-overproducing strains.

  18. Defining the system of care concept and philosophy: to update or not to update?

    PubMed

    Stroul, Beth A; Blau, Gary M

    2010-02-01

    This commentary considers the task of updating the system of care concept and philosophy within its historical context, reviewing the original intent of the definition and clarifying misconceptions about its meaning. The authors identify the aspects of the concept and philosophy that should be updated based on the latest thinking, experience, and data, such as incorporating applicability to a broader range of populations, increasing the emphasis on the core values, specifying desired outcomes, and adding accountability as a critical element. An updated definition and values and principles are proposed, and the importance of always presenting the definition along with the accompanying specification of the philosophy is emphasized in order to increase its utility in assisting the field to move from theory to practice.

  19. Situation Model Updating in Young and Older Adults: Global versus Incremental Mechanisms

    PubMed Central

    Bailey, Heather R.; Zacks, Jeffrey M.

    2015-01-01

    Readers construct mental models of situations described by text. Activity in narrative text is dynamic, so readers must frequently update their situation models when dimensions of the situation change. Updating can be incremental, such that a change leads to updating just the dimension that changed, or global, such that the entire model is updated. Here, we asked whether older and young adults make differential use of incremental and global updating. Participants read narratives containing changes in characters and spatial location and responded to recognition probes throughout the texts. Responses were slower when probes followed a change, suggesting that situation models were updated at changes. When either dimension changed, responses to probes for both dimensions were slowed; this provides evidence for global updating. Moreover, older adults showed stronger evidence of global updating than did young adults. One possibility is that older adults perform more global updating to offset reduced ability to manipulate information in working memory. PMID:25938248

  20. THE DEFINITION AND INTERPRETATION OF TERRESTRIAL ENVIRONMENT DESIGN INPUTS FOR VEHICLE DESIGN CONSIDERATIONS

    NASA Technical Reports Server (NTRS)

    Johnson, Dale L.; Keller, Vernon W.; Vaughan, William W.

    2005-01-01

    The description and interpretation of the terrestrial environment (0-90 km altitude) is an important driver of aerospace vehicle structural, control, and thermal system design. NASA is currently in the process of reviewing the meteorological information acquired over the past decade and producing an update to the 1993 Terrestrial Environment Guidelines for Aerospace Vehicle Design and Development handbook. This paper addresses the contents of this updated handbook, with special emphasis on new material being included in the areas of atmospheric thermodynamic models, wind dynamics, atmospheric composition, atmospheric electricity, cloud phenomena, atmospheric extremes, sea state, etc. In addition, the respective engineering design elements will be discussed relative to the importance and influence of terrestrial environment inputs that require consideration and interpretation for design applications. Specific lessons learned that have contributed to the advancements made in the acquisition, interpretation, application and awareness of terrestrial environment inputs for aerospace engineering applications are discussed.

  1. Nonlinear finite element simulation of non-local tension softening for high strength steel material

    NASA Astrophysics Data System (ADS)

    Tong, F. M.

    The capability of current finite element software to simulate the stress-strain relation beyond the elastic-plastic region has been limited by the inability of the computational finite elements' stiffness matrices to become non-positive. Although analysis up to the peak stress has proved adequate for analysis and design, it provides no indication of the failure behaviour that follows. Therefore an attempt was made to develop a modelling technique capable of capturing the complete stress-deformation response in an analysis beyond the limit point. The proposed model characterizes a cyclic loading and unloading procedure, as observed in a typical laboratory uniaxial cyclic test, along with a series of material property updates. The Voce equation and a polynomial function were proposed to define the monotonic elastoplastic hardening and softening behaviour, respectively. A modified form of the Voce equation was used to capture the reloading response in the softening region. To accommodate the reduced load capacity of the material at each subsequent softening point, an optimization macro was written to control the optimum load that the material could withstand. This preliminary study ignored geometrical effects and is thus incapable of capturing the localized necking phenomenon that accompanies many ductile materials; the current softening model is sufficient if a global measure is considered. Several validation cases were performed to investigate the feasibility of the modelling technique, and the results proved satisfactory. The ANSYS finite element software is used as the platform on which the modelling technique operates.
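
    As a hedged illustration of the saturation-type hardening referenced above, the sketch below evaluates one common form of the Voce equation; the parameter values are illustrative and are not taken from the thesis.

```python
# One common form of the Voce saturation hardening law; values are illustrative.
import numpy as np

def voce_stress(eps_p, sigma_0, sigma_sat, beta):
    """Flow stress as a function of equivalent plastic strain eps_p."""
    return sigma_sat - (sigma_sat - sigma_0) * np.exp(-beta * eps_p)

eps = np.linspace(0.0, 0.2, 5)
print(voce_stress(eps, sigma_0=350.0, sigma_sat=600.0, beta=20.0))  # MPa
```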

  2. Sentry: An Automated Close Approach Monitoring System for Near-Earth Objects

    NASA Astrophysics Data System (ADS)

    Chamberlin, A. B.; Chesley, S. R.; Chodas, P. W.; Giorgini, J. D.; Keesey, M. S.; Wimberly, R. N.; Yeomans, D. K.

    2001-11-01

    In response to international concern about potential asteroid impacts on Earth, NASA's Near-Earth Object (NEO) Program Office has implemented a new system called ``Sentry'' to automatically update the orbits of all NEOs on a daily basis and compute Earth close approaches up to 100 years into the future. Results are published on our web site (http://neo.jpl.nasa.gov/) and updated orbits and ephemerides made available via the JPL Horizons ephemeris service (http://ssd.jpl.nasa.gov/horizons.html). Sentry collects new and revised astrometric observations from the Minor Planet Center (MPC) via their electronic circulars (MPECs) in near real time as well as radar and optical astrometry sent directly from observers. NEO discoveries and identifications are detected in MPECs and processed appropriately. In addition to these daily updates, Sentry synchronizes with each monthly batch of MPC astrometry and automatically updates all NEO observation files. Daily and monthly processing of NEO astrometry is managed using a queuing system which allows for manual intervention of selected NEOs without interfering with the automatic system. At the heart of Sentry is a fully automatic orbit determination program which handles outlier rejection and ensures convergence in the new solution. Updated orbital elements and their covariances are published via Horizons and our NEO web site, typically within 24 hours. A new version of Horizons, in development, will allow computation of ephemeris uncertainties using covariance data. The positions of NEOs with updated orbits are numerically integrated up to 100 years into the future and each close approach to any perturbing body in our dynamic model (all planets, Moon, Ceres, Pallas, Vesta) is recorded. Significant approaches are flagged for extended analysis including Monte Carlo studies. Results, such as minimum encounter distances and future Earth impact probabilities, are published on our NEO web site.
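
    As a hedged illustration of the close-approach screening idea (not the Sentry implementation), the sketch below flags local minima of the Earth-object distance along a propagated trajectory; the trajectory data are toy stand-ins.

```python
# Flag close approaches as local minima of the Earth-object distance below a threshold.
import numpy as np

def find_close_approaches(times, r_object, r_earth, threshold_au=0.05):
    """times: (n,); r_object, r_earth: (n, 3) heliocentric positions in AU."""
    dist = np.linalg.norm(r_object - r_earth, axis=1)
    approaches = []
    for i in range(1, len(dist) - 1):
        if dist[i] < dist[i - 1] and dist[i] < dist[i + 1] and dist[i] < threshold_au:
            approaches.append((times[i], dist[i]))
    return approaches

# Toy example: object crossing near a fixed Earth position.
t = np.linspace(0.0, 10.0, 101)
earth = np.tile([1.0, 0.0, 0.0], (t.size, 1))
obj = np.column_stack([np.full(t.size, 1.02), t / 10.0 - 0.5, np.zeros(t.size)])
print(find_close_approaches(t, obj, earth))  # one approach near t = 5, 0.02 AU
```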

  3. Update on Angles and Sides of the CKM Unitarity Triangle from BaBar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Chih-hsiang; /Caltech

    2011-11-14

    We report several recent updates from the BABAR Collaboration on the matrix elements |V{sub cb}|, |V{sub ub}|, and |V{sub td}| of the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix, and the angles {beta} and {alpha} of the unitarity triangle. Most results presented here are using the full BABAR {Upsilon}(4S) data set.

  4. On the TFNS Subgrid Models for Liquid-Fueled Turbulent Combustion

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey; Wey, Thomas

    2014-01-01

    This paper describes the time-filtered Navier-Stokes (TFNS) approach capable of capturing unsteady flow structures important for turbulent mixing in the combustion chamber and two different subgrid models used to emulate the major processes occurring in the turbulence-chemistry interaction. These two subgrid models are termed as LEM-like model and EUPDF-like model (Eulerian probability density function), respectively. Two-phase turbulent combustion in a single-element lean-direct-injection (LDI) combustor is calculated by employing the TFNS/LEM-like approach as well as the TFNS/EUPDF-like approach. Results obtained from the TFNS approach employing these two different subgrid models are compared with each other, along with the experimental data, followed by more detailed comparison between the results of an updated calculation using the TFNS/LEM-like model and the experimental data.

  5. Coupling of a 3D Finite Element Model of Cardiac Ventricular Mechanics to Lumped Systems Models of the Systemic and Pulmonic Circulation

    PubMed Central

    Kerckhoffs, Roy C. P.; Neal, Maxwell L.; Gu, Quan; Bassingthwaighte, James B.; Omens, Jeff H.; McCulloch, Andrew D.

    2010-01-01

    In this study we present a novel, robust method to couple finite element (FE) models of cardiac mechanics to systems models of the circulation (CIRC), independent of cardiac phase. For each time step through a cardiac cycle, left and right ventricular pressures were calculated using ventricular compliances from the FE and CIRC models. These pressures served as boundary conditions in the FE and CIRC models. In succeeding steps, pressures were updated to minimize cavity volume error (FE minus CIRC volume) using Newton iterations. Coupling was achieved when a predefined criterion for the volume error was satisfied. Initial conditions for the multi-scale model were obtained by replacing the FE model with a varying elastance model, which takes into account direct ventricular interactions. Applying the coupling, a novel multi-scale model of the canine cardiovascular system was developed. Global hemodynamics and regional mechanics were calculated for multiple beats in two separate simulations with a left ventricular ischemic region and pulmonary artery constriction, respectively. After the interventions, global hemodynamics changed due to direct and indirect ventricular interactions, in agreement with previously published experimental results. The coupling method allows for simulations of multiple cardiac cycles for normal and pathophysiology, encompassing levels from cell to system. PMID:17111210
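
    As a hedged illustration of the coupling iteration described above, the sketch below solves for a single cavity pressure at which an FE-style cavity volume matches a circulation-model volume, using Newton iterations with a finite-difference derivative; the volume functions are toy stand-ins, not the authors' models.

```python
# Find the pressure at which the FE cavity volume equals the circulation-model volume.
def couple_cavity_pressure(v_fe, v_circ, p0, tol=1e-6, dp=1e-3, max_iter=50):
    p = p0
    for _ in range(max_iter):
        err = v_fe(p) - v_circ(p)          # cavity volume error
        if abs(err) < tol:
            return p
        derr = ((v_fe(p + dp) - v_circ(p + dp)) - err) / dp  # finite-difference slope
        p -= err / derr                     # Newton update
    return p

# Toy compliances: FE cavity stiffer than the circulation-model reservoir.
v_fe = lambda p: 40.0 + 0.8 * p      # mL, illustrative
v_circ = lambda p: 120.0 - 1.5 * p   # mL, illustrative
print(couple_cavity_pressure(v_fe, v_circ, p0=10.0))  # ~34.8 (pressure units)
```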

  6. Using Third Party Data to Update a Reference Dataset in a Quality Evaluation Service

    NASA Astrophysics Data System (ADS)

    Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.

    2016-06-01

    Nowadays it is easy to find many data sources for various regions around the globe. In this 'data overload' scenario there is little, if any, information available about the quality of these data sources. In order to provide such data quality information easily, we previously presented the architecture of a web service for the automation of quality control of spatial datasets running over a Web Processing Service (WPS). For quality procedures that require an external reference dataset, such as positional accuracy or completeness, the architecture permits the use of a reference dataset. However, this reference dataset is not ageless, since it suffers the natural time degradation inherent to geospatial features. To mitigate this problem we propose the Time Degradation & Updating Module, which applies assessed data as a tool to keep the reference database updated. The main idea is to use datasets sent to the quality evaluation service as a source of 'candidate data elements' for updating the reference database. After the evaluation, if some elements of a candidate dataset reach a specified quality level, they can be used as input data to improve the current reference database. In this work we present the first design of the Time Degradation & Updating Module. We believe the outcomes can be applied in the pursuit of a fully automatic online quality evaluation platform.

  7. A triangular prism solid and shell interactive mapping element for electromagnetic sheet metal forming process

    NASA Astrophysics Data System (ADS)

    Cui, Xiangyang; Li, She; Feng, Hui; Li, Guangyao

    2017-05-01

    In this paper, a novel triangular prism solid and shell interactive mapping element is proposed to solve the coupled magnetic-mechanical formulation in electromagnetic sheet metal forming process. A linear six-node "Triprism" element is firstly proposed for transient eddy current analysis in electromagnetic field. In present "Triprism" element, shape functions are given explicitly, and a cell-wise gradient smoothing operation is used to obtain the gradient matrices without evaluating derivatives of shape functions. In mechanical field analysis, a shear locking free triangular shell element is employed in internal force computation, and a data mapping method is developed to transfer the Lorentz force on solid into the external forces suffered by shell structure for dynamic elasto-plasticity deformation analysis. Based on the deformed triangular shell structure, a "Triprism" element generation rule is established for updated electromagnetic analysis, which means inter-transformation of meshes between the coupled fields can be performed automatically. In addition, the dynamic moving mesh is adopted for air mesh updating based on the deformation of sheet metal. A benchmark problem is carried out for confirming the accuracy of the proposed "Triprism" element in predicting flux density in electromagnetic field. Solutions of several EMF problems obtained by present work are compared with experiment results and those of traditional method, which are showing excellent performances of present interactive mapping element.

  8. DFN Modeling for the Safety Case of the Final Disposal of Spent Nuclear Fuel in Olkiluoto, Finland

    NASA Astrophysics Data System (ADS)

    Vanhanarkaus, O.

    2017-12-01

    Olkiluoto Island is a site in SW Finland chosen to host a deep geological repository for high-level nuclear waste generated by nuclear power plants of power companies TVO and Fortum. Posiva, a nuclear waste management organization, submitted a construction license application for the Olkiluoto repository to the Finnish government in 2012. A key component of the license application was an integrated geological, hydrological and biological description of the Olkiluoto site. After the safety case was reviewed in 2015 by the Radiation and Nuclear Safety Authority in Finland, Posiva was granted a construction license. Posiva is now preparing an updated safety case for the operating license application to be submitted in 2022, and an update of the discrete fracture network (DFN) model used for site characterization is part of that. The first step describing and modelling the network of fractures in the Olkiluoto bedrock was DFN model version 1 (2009), which presented an initial understanding of the relationships between rock fracturing and geology at the site and identified the important primary controls on fracturing. DFN model version 2 (2012) utilized new subsurface data from additional drillholes, tunnels and excavated underground facilities in ONKALO to better understand spatial variability of the geological controls on geological and hydrogeological fracture properties. DFN version 2 connected fracture geometric and hydraulic properties to distinct tectonic domains and to larger-scale hydraulically conductive fault zones. In the version 2 DFN model, geological and hydrogeological models were developed along separate parallel tracks. The version 3 (2017) DFN model for the Olkiluoto site integrates geological and hydrogeological elements into a single consistent model used for geological, rock mechanical, hydrogeological and hydrogeochemical studies. New elements in the version 3 DFN model include a stochastic description of fractures within Brittle Fault Zones (BFZ), integration of geological and hydrostructural interpretations of BFZ, greater use of 3D geological models to better constrain the spatial variability of fracturing and fractures using hydromechanical principles to account for material behavior and in-situ stresses.

  9. Metallic Fuels Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janney, Dawn E.; Papesch, Cynthia A.; Burkes, Douglas E.

    This is not a typical External Report--It is a Handbook. No Abstract is involved. This includes both Parts 1 and 2. The Metallic Fuels Handbook summarizes currently available information about phases and phase diagrams, heat capacity, thermal expansion, and thermal conductivity of elements and alloys in the U-Pu-Zr-Np-Am-La-Ce-Pr-Nd system. Although many sections are reviews and updates of material in previous versions of the Handbook [1, 2], this revision is the first to include alloys with four or more elements. In addition to presenting information about materials properties, the handbook attempts to provide information about how well each property is known and how much variation exists between measurements. Although it includes some results from models, its primary focus is experimental data.

  10. Bayesian nonlinear structural FE model and seismic input identification for damage assessment of civil structures

    NASA Astrophysics Data System (ADS)

    Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.

    2017-09-01

    A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach allows to jointly estimate unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations using spatially-sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm to jointly estimate unknown FE model parameters and unknown input excitations.
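
    As a hedged illustration of the sampling idea behind the unscented Kalman filter mentioned above, the sketch below implements a basic unscented transform that propagates sigma points through a nonlinear function instead of computing response sensitivities; it is not the authors' estimator, and the response function and parameter statistics are toys.

```python
# Basic unscented transform (sigma-point propagation); illustrative only.
import numpy as np

def unscented_transform(mean, cov, func, alpha=1.0, beta=2.0, kappa=0.0):
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    s = np.linalg.cholesky((n + lam) * cov)            # matrix square root
    sigma = np.vstack([mean, mean + s.T, mean - s.T])  # (2n+1, n) sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    y = np.array([func(s_i) for s_i in sigma])         # propagate each point
    y_mean = wm @ y
    y_cov = (y - y_mean).T @ np.diag(wc) @ (y - y_mean)
    return y_mean, y_cov

# Toy example: a quadratic "structural response" of two uncertain parameters.
m, P = np.array([1.0, 2.0]), np.diag([0.1, 0.2])
print(unscented_transform(m, P, lambda x: np.array([x[0]**2 + x[1]])))
```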

  11. Numerical model updating technique for structures using firefly algorithm

    NASA Astrophysics Data System (ADS)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique used to update numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical model so that it closely matches experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model developed with the experimental results obtained. The variables for the updating can be either material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close relationship can be established between the experimental and the numerical models.
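
    As a hedged illustration of the optimization step, the sketch below implements a basic firefly algorithm that minimizes the discrepancy between a parameterized model response and a measured target; the constants and the toy objective are illustrative, not the authors' settings.

```python
# Basic firefly algorithm minimizing a model-vs-measurement discrepancy; illustrative only.
import numpy as np

def firefly_minimize(objective, bounds, n_fireflies=20, n_iter=100,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_fireflies, len(lo)))
    f = np.array([objective(xi) for xi in x])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if f[j] < f[i]:  # brighter (better) firefly attracts the dimmer one
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(size=len(lo))
                    x[i] = np.clip(x[i], lo, hi)
                    f[i] = objective(x[i])
    best = np.argmin(f)
    return x[best], f[best]

# Toy updating problem: recover stiffness-like parameters that reproduce a
# "measured" tip deflection and first natural frequency.
target = np.array([2.5, 12.0])
model = lambda p: np.array([100.0 / p[0], 0.5 * np.sqrt(p[1])])
obj = lambda p: np.sum((model(p) - target) ** 2)
print(firefly_minimize(obj, bounds=[(1.0, 100.0), (1.0, 1000.0)]))
```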

  12. NASTRAN Modeling of Flight Test Components for UH-60A Airloads Program Test Configuration

    NASA Technical Reports Server (NTRS)

    Idosor, Florentino R.; Seible, Frieder

    1993-01-01

    Based upon the recommendations of the UH-60A Airloads Program Review Committee, work towards a NASTRAN remodeling effort has been conducted. This effort modeled and added the necessary structural/mass components to the existing UH-60A baseline NASTRAN model to reflect the addition of flight test components currently in place on the UH-60A Airloads Program Test Configuration used in NASA-Ames Research Center's Modern Technology Rotor Airloads Program. These components include necessary flight hardware such as instrument booms, movable ballast cart, equipment mounting racks, etc. Recent modeling revisions have also been included in the analyses to reflect the inclusion of new and updated primary and secondary structural components (i.e., tail rotor shaft service cover, tail rotor pylon) and improvements to the existing finite element mesh (i.e., revisions of material property estimates). Mode frequency and shape results have shown that components such as the Trimmable Ballast System baseplate and its respective payload ballast have caused a significant frequency change in a limited number of modes while only small percent changes in mode frequency are brought about with the addition of the other MTRAP flight components. With the addition of the MTRAP flight components, update of the primary and secondary structural model, and imposition of the final MTRAP weight distribution, modal results are computed representative of the 'best' model presently available.

  13. Real-time Probabilistic Covariance Tracking with Efficient Model Update

    DTIC Science & Technology

    2012-05-01

    The region R of frame F is represented by the d×d covariance matrix of the feature points, C = \frac{1}{N-1} \sum_{i=1}^{N} (f_i - \mu)(f_i - \mu)^T, where N is the number of pixels in the region R, f_i is the d-dimensional feature vector of pixel i, and \mu is the mean of the feature points. The element (i, j) of C represents the covariance of the i-th and j-th features.
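
    As a hedged illustration of the descriptor defined above, the sketch below computes a region covariance from simple per-pixel features; the choice of features (coordinates, intensity, gradients) is illustrative and need not match the report's feature set.

```python
# Region covariance descriptor of a rectangular image patch; feature set is illustrative.
import numpy as np

def region_covariance(image, x0, y0, w, h):
    """image: 2-D float array; returns the d x d covariance of pixel features."""
    ys, xs = np.mgrid[y0:y0 + h, x0:x0 + w]
    patch = image[y0:y0 + h, x0:x0 + w]
    gy, gx = np.gradient(patch)
    feats = np.stack([xs, ys, patch, gx, gy], axis=-1).reshape(-1, 5)  # (N, d)
    return np.cov(feats, rowvar=False)  # divides by N - 1, matching the formula

img = np.random.default_rng(0).random((64, 64))
C = region_covariance(img, x0=10, y0=20, w=16, h=12)
print(C.shape)  # (5, 5)
```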

  14. A Coupled Layerwise Analysis of the Thermopiezoelectric Response of Smart Composite Beams

    NASA Technical Reports Server (NTRS)

    Lee, H.-J.; Saravanos, D. A.

    1995-01-01

    Thermal effects are incorporated into previously developed discrete layer mechanics for piezoelectric composite beam structures. The updated mechanics explicitly account for the complete coupled thermoelectromechanical response of smart composite beams. This unified representation leads to an inherent capability to model both the sensory and actuator responses of piezoelectric composite beams in a thermal environment. Finite element equations are developed and numerical results are presented to demonstrate the capability of the current formulation to represent the behavior of both sensory and active smart structures under thermal loadings.

  15. Structural Modeling of a Five-Meter Thin Film Inflatable Antenna/Concentrator With Rigidized Support Struts

    NASA Technical Reports Server (NTRS)

    Smalley, Kurt B.; Tinker, Michael L.

    2001-01-01

    Dynamic characterization of a non-rigidized thin film inflatable antenna/solar concentrator structure with rigidized composite support struts is described in detail. A two-step finite element modeling approach in MSC/NASTRAN is utilized, consisting of: (1) a nonlinear static pressurization procedure used to obtain the updated stiffness matrix, and (2) a modal "restart" eigen solution that uses the modified stiffness matrix. Unique problems encountered in modeling of this large 5-m lightweight inflatable are identified, including considerable difficulty in obtaining convergence in the nonlinear pressurization solution. It was found that the extremely thin polyimide film material (0.001 in or 1 mil) presents tremendous problems in obtaining a converged solution when internal pressure loading is applied. It was concluded that the ratios of film thickness to other geometric dimensions such as torus cross-sectional and ring diameter and lenticular diameter are the critical parameters for convergence of the pressurization procedure. Comparison of finite element predictions for frequency and mode shapes with experimental results indicated reasonable agreement considering the complexity of the structure, the film-to-air interaction, and the nonlinear material properties of the film. It was also concluded that analysis should be done using different finite element codes to determine whether a more robust and stable solution can be obtained.

  16. Structural safety analysis based on seismic service conditions for butterfly valves in a nuclear power plant.

    PubMed

    Han, Sang-Uk; Ahn, Dae-Gyun; Lee, Myeong-Gon; Lee, Kwon-Hee; Han, Seung-Ho

    2014-01-01

    The valves used to control cooling water in the primary coolant loop, which prevents boiling within the reactor of a nuclear power plant, must be capable of withstanding earthquakes and other hazardous situations. In this study, numerical analyses using a finite element method, that is, static and dynamic analyses according to the rigid or flexible characteristics of the dynamic properties of a 200A butterfly valve, were performed according to the KEPIC MFA. An experimental vibration test was also carried out in order to verify the results of the modal analysis, in which a validated finite element model was obtained via a model-updating method that considers changes in the in situ experimental data. Using the validated finite element model, the equivalent static load under SSE conditions stipulated by the KEPIC MFA gave a stress of 135 MPa occurring at the connections of the stem and body. A larger stress of 183 MPa was induced when a CQC method was used with a design response spectrum at a 2% damping ratio. These values were lower than the allowable strength of the materials used for manufacturing the butterfly valve, and, therefore, its structural safety met the KEPIC MFA requirements.
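
    As a hedged illustration of the CQC modal combination mentioned above (assuming equal modal damping in all modes), the sketch below combines peak modal responses using the standard correlation coefficient; the modal values are illustrative, not the valve's.

```python
# Complete Quadratic Combination (CQC) of peak modal responses; equal damping assumed.
import numpy as np

def cqc(peak_modal_responses, frequencies, damping=0.02):
    r = np.asarray(peak_modal_responses, dtype=float)
    w = np.asarray(frequencies, dtype=float)
    beta = w[None, :] / w[:, None]          # frequency ratios w_j / w_i
    z = damping
    rho = (8 * z**2 * (1 + beta) * beta**1.5 /
           ((1 - beta**2)**2 + 4 * z**2 * beta * (1 + beta)**2))
    return float(np.sqrt(r @ rho @ r))      # double sum over modal cross terms

# Three modes with 2% damping, peak responses in arbitrary units.
print(cqc([1.2, 0.4, 0.1], [12.0, 31.0, 55.0], damping=0.02))
```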

  17. Finite element analysis of stress transfer mechanism from matrix to the fiber in SWCN reinforced nanocomposites

    NASA Astrophysics Data System (ADS)

    Günay, E.

    2017-02-01

    This study presents a micromechanical finite element (FE) approach examining the stress transfer mechanism in single-walled carbon nanotube (SWCN) reinforced composites. In the modeling, a 3D unit-cell method was used. Carbon nanotube reinforced composites were modeled as three layers comprising CNT, interface, and matrix material. First, the matrix, fiber, and interfacial materials together were considered as a three-layered cylindrical nanocomposite. Second, the cylindrical matrix material was assumed to be isotropic and was treated as a continuous medium. The fiber material was then represented by zigzag-type SWCNs. Finally, the SWCN was combined with the elastic medium by using springs with different constants. In the FE modeling of the SWCN reinforced composite, the springs were modeled using the ANSYS spring-damper element COMBIN14. The interfacial van der Waals interaction between the continuous matrix layer and the carbon nanotube fiber layer was simulated by applying these various spring stiffness values. In this study, the layered composite cylindrical FE model was used to present the equivalent mechanical properties of SWCN structures in terms of Young's modulus. The obtained results and literature values were presented and discussed. Figures 16, 17, and 18 of the original article PDF file, as supplied to AIP Publishing, were affected by a PDF-processing error; consequently, a solid diamond symbol appeared instead of a Greek tau on the y-axis labels of these three figures. The article was updated on 17 March 2017 to correct the PDF-processing error, with the scientific content remaining unchanged.

  18. Inferring global upper-mantle shear attenuation structure by waveform tomography using the spectral element method

    NASA Astrophysics Data System (ADS)

    Karaoǧlu, Haydar; Romanowicz, Barbara

    2018-06-01

    We present a global upper-mantle shear wave attenuation model that is built through a hybrid full-waveform inversion algorithm applied to long-period waveforms, using the spectral element method for wavefield computations. Our inversion strategy is based on an iterative approach that involves the inversion for successive updates in the attenuation parameter (δ Q^{-1}_μ) and elastic parameters (isotropic velocity VS, and radial anisotropy parameter ξ) through a Gauss-Newton-type optimization scheme that employs envelope- and waveform-type misfit functionals for the two steps, respectively. We also include source and receiver terms in the inversion steps for attenuation structure. We conducted a total of eight iterations (six for attenuation and two for elastic structure), and one inversion for updates to source parameters. The starting model included the elastic part of the relatively high-resolution 3-D whole mantle seismic velocity model, SEMUCB-WM1, which served to account for elastic focusing effects. The data set is a subset of the three-component surface waveform data set, filtered between 400 and 60 s, that contributed to the construction of the whole-mantle tomographic model SEMUCB-WM1. We applied strict selection criteria to this data set for the attenuation iteration steps, and investigated the effect of attenuation crustal structure on the retrieved mantle attenuation structure. While a constant 1-D Qμ model with a constant value of 165 throughout the upper mantle was used as starting model for attenuation inversion, we were able to recover, in depth extent and strength, the high-attenuation zone present in the depth range 80-200 km. The final 3-D model, SEMUCB-UMQ, shows strong correlation with tectonic features down to 200-250 km depth, with low attenuation beneath the cratons, stable parts of continents and regions of old oceanic crust, and high attenuation along mid-ocean ridges and backarcs. Below 250 km, we observe strong attenuation in the southwestern Pacific and eastern Africa, while low attenuation zones fade beneath most of the cratons. The strong negative correlation of Q^{-1}_μ and VS anomalies at shallow upper-mantle depths points to a common dominant origin for the two, likely due to variations in thermal structure. A comparison with two other global upper-mantle attenuation models shows promising consistency. As we updated the elastic 3-D model in alternate iterations, we found that the VS part of the model was stable, while the ξ structure evolution was more pronounced, indicating that it may be important to include 3-D attenuation effects when inverting for ξ, possibly due to the influence of dispersion corrections on this less well-constrained parameter.

  19. Specific Needs for Updating Educational Experiences as Reported by Instructors of Electronics in Industrial Education Departments of Colleges and Universities.

    ERIC Educational Resources Information Center

    Slater, John Breisch

    A survey of 133 industrial education electronics instructors in 115 4-year colleges and universities in 40 states was conducted to determine the expressed needs of college level electronics instructors for updating educational experiences in specified elements. An attempt was also made to determine the nature of course content needed in graduate…

  20. Atomic Weights of the Elements 1999

    NASA Astrophysics Data System (ADS)

    Coplen, T. B.

    2001-05-01

    The biennial review of atomic-weight, Ar(E), determinations and other cognate data have resulted in changes for the standard atomic weights of the following elements (from → to): nitrogen, 14.006 74 ± 0.000 07 → 14.0067 ± 0.0002; sulfur, 32.066 ± 0.006 → 32.065 ± 0.005; chlorine, 35.4527 ± 0.0009 → 35.453 ± 0.002; germanium, 72.61 ± 0.02 → 72.64 ± 0.01; xenon, 131.29 ± 0.02 → 131.293 ± 0.006; erbium, 167.26 ± 0.03 → 167.259 ± 0.003; uranium, 238.0289 ± 0.0001 → 238.028 91 ± 0.000 03. Presented are updated tables of the standard atomic weights and their uncertainties estimated by combining experimental uncertainties and terrestrial variabilities. In addition, this report again contains an updated table of relative atomic mass values and half-lives of selected radioisotopes. Changes in the evaluated isotopic abundance values from those published in 1997 are so minor that an updated list will not be published for the year 1999. Many elements have a different isotopic composition in some nonterrestrial materials. Some recent data on parent nuclides that might affect isotopic abundances or atomic-weight values are included in this report for the information of the interested scientific community.

  1. A new frequency matching technique for FRF-based model updating

    NASA Astrophysics Data System (ADS)

    Yang, Xiuming; Guo, Xinglin; Ouyang, Huajiang; Li, Dongsheng

    2017-05-01

    Frequency Response Function (FRF) residues have been widely used to update Finite Element models. They are a kind of original measurement information and have the advantages of rich data and no extraction errors, etc. However, like other sensitivity-based methods, an FRF-based identification method also needs to face the ill-conditioning problem which is even more serious since the sensitivity of the FRF in the vicinity of a resonance is much greater than elsewhere. Furthermore, for a given frequency measurement, directly using a theoretical FRF at a frequency may lead to a huge difference between the theoretical FRF and the corresponding experimental FRF which finally results in larger effects of measurement errors and damping. Hence in the solution process, correct selection of the appropriate frequency to get the theoretical FRF in every iteration in the sensitivity-based approach is an effective way to improve the robustness of an FRF-based algorithm. A primary tool for right frequency selection based on the correlation of FRFs is the Frequency Domain Assurance Criterion. This paper presents a new frequency selection method which directly finds the frequency that minimizes the difference of the order of magnitude between the theoretical and experimental FRFs. A simulated truss structure is used to compare the performance of different frequency selection methods. For the sake of reality, it is assumed that not all the degrees of freedom (DoFs) are available for measurement. The minimum number of DoFs required in each approach to correctly update the analytical model is regarded as the right identification standard.
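
    As a hedged illustration of the frequency matching idea, the sketch below selects, for a measured FRF value, the analytical frequency whose theoretical FRF magnitude is closest in order of magnitude; the single-degree-of-freedom FRF and frequency grid are toy stand-ins, not the paper's truss example.

```python
# Pick the analytical frequency minimizing the log-magnitude difference to a measured FRF.
import numpy as np

def match_frequency(h_theory, freq_grid, h_measured):
    """h_theory: callable f -> complex FRF; h_measured: measured complex FRF value."""
    mags = np.array([np.log10(abs(h_theory(f))) for f in freq_grid])
    target = np.log10(abs(h_measured))
    return freq_grid[np.argmin(np.abs(mags - target))]

# Toy SDOF receptance FRF with a resonance near 10 Hz.
wn, zeta = 2 * np.pi * 10.0, 0.02
h = lambda f: 1.0 / (wn**2 - (2 * np.pi * f)**2 + 2j * zeta * wn * (2 * np.pi * f))
grid = np.linspace(5.0, 9.5, 901)      # below-resonance band, magnitude is monotonic here
print(match_frequency(h, grid, h_measured=h(8.0)))  # 8.0 Hz
```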

  2. Timing Interactions in Social Simulations: The Voter Model

    NASA Astrophysics Data System (ADS)

    Fernández-Gracia, Juan; Eguíluz, Víctor M.; Miguel, Maxi San

    The recent availability of huge high-resolution datasets on human activities has revealed the heavy-tailed nature of interevent time distributions. In social simulations of interacting agents the standard approach has been to use Poisson processes to update the state of the agents, which gives rise to very homogeneous activity patterns with a well defined characteristic interevent time. As a paradigmatic opinion model, we investigate the voter model, reviewing the standard update rules and proposing two new update rules that account for heterogeneous activity patterns. Under the new update rules, each node is updated with a probability that depends on the time since the node's last event, where an event can be an update attempt (exogenous update) or a change of state (endogenous update). We find that both update rules can give rise to power-law interevent time distributions, although the endogenous one does so more robustly. Moreover, with the exogenous update rule and the standard update rules the voter model does not reach consensus in the infinite-size limit, while with the endogenous update there exists a coarsening process that drives the system toward consensus configurations.
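
    As a hedged illustration of the endogenous update rule described above, the sketch below simulates a voter model on a complete graph in which each node is activated with probability inversely proportional to the time since its last change of state; the system size, step count, and sequential update scheme are illustrative choices.

```python
# Voter model with an aging (endogenous) update rule; illustrative only.
import numpy as np

def voter_endogenous(n=200, steps=5000, seed=1):
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, size=n)
    last_change = np.zeros(n)                 # time of each node's last state change
    for t in range(1, steps + 1):
        p_act = 1.0 / (t - last_change)       # older inactivity -> lower activation rate
        active = rng.random(n) < p_act
        for i in np.flatnonzero(active):
            j = rng.integers(n)               # copy a random node (complete graph)
            if state[j] != state[i]:
                state[i] = state[j]
                last_change[i] = t
        if state.sum() in (0, n):             # consensus reached
            break
    return state, t

state, t_stop = voter_endogenous()
print(f"stopped at step {t_stop}, magnetization {2 * state.mean() - 1:+.2f}")
```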

  3. Persuasive System Design Does Matter: A Systematic Review of Adherence to Web-Based Interventions

    PubMed Central

    Kok, Robin N; Ossebaard, Hans C; Van Gemert-Pijnen, Julia EWC

    2012-01-01

    Background Although web-based interventions for promoting health and health-related behavior can be effective, poor adherence is a common issue that needs to be addressed. Technology as a means to communicate the content in web-based interventions has been neglected in research. Indeed, technology is often seen as a black-box, a mere tool that has no effect or value and serves only as a vehicle to deliver intervention content. In this paper we examine technology from a holistic perspective. We see it as a vital and inseparable aspect of web-based interventions to help explain and understand adherence. Objective This study aims to review the literature on web-based health interventions to investigate whether intervention characteristics and persuasive design affect adherence to a web-based intervention. Methods We conducted a systematic review of studies into web-based health interventions. Per intervention, intervention characteristics, persuasive technology elements and adherence were coded. We performed a multiple regression analysis to investigate whether these variables could predict adherence. Results We included 101 articles on 83 interventions. The typical web-based intervention is meant to be used once a week, is modular in set-up, is updated once a week, lasts for 10 weeks, includes interaction with the system and a counselor and peers on the web, includes some persuasive technology elements, and about 50% of the participants adhere to the intervention. Regarding persuasive technology, we see that primary task support elements are most commonly employed (mean 2.9 out of a possible 7.0). Dialogue support and social support are less commonly employed (mean 1.5 and 1.2 out of a possible 7.0, respectively). When comparing the interventions of the different health care areas, we find significant differences in intended usage (p = .004), setup (p < .001), updates (p < .001), frequency of interaction with a counselor (p < .001), the system (p = .003) and peers (p = .017), duration (F = 6.068, p = .004), adherence (F = 4.833, p = .010) and the number of primary task support elements (F = 5.631, p = .005). Our final regression model explained 55% of the variance in adherence. In this model, a RCT study as opposed to an observational study, increased interaction with a counselor, more frequent intended usage, more frequent updates and more extensive employment of dialogue support significantly predicted better adherence. Conclusions Using intervention characteristics and persuasive technology elements, a substantial amount of variance in adherence can be explained. Although there are differences between health care areas on intervention characteristics, health care area per se does not predict adherence. Rather, the differences in technology and interaction predict adherence. The results of this study can be used to make an informed decision about how to design a web-based intervention to which patients are more likely to adhere. PMID:23151820

  4. Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions

    DTIC Science & Technology

    2017-09-01

    Master's thesis by Matthew D. Bouwense. Distribution is unlimited.

  5. ERM model analysis for adaptation to hydrological model errors

    NASA Astrophysics Data System (ADS)

    Baymani-Nezhad, M.; Han, D.

    2018-05-01

    Hydrological conditions change continuously, and this introduces errors into flood forecasting models that lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in hydrological sciences and has not been entirely solved, owing to a lack of knowledge about the future state of the catchment under study. In the flood forecasting process, errors propagated from the rainfall-runoff model are considered the main source of uncertainty in the forecasting model. Hence, to control these errors, several methods have been proposed to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of error common in hydrological modelling: timing, shape, and volume. A new lumped model, the ERM model, was selected for this study so that its parameters could be evaluated for use in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with these errors without the need to recalibrate the model.

  6. The New Italian Seismic Hazard Model

    NASA Astrophysics Data System (ADS)

    Marzocchi, W.; Meletti, C.; Albarello, D.; D'Amico, V.; Luzi, L.; Martinelli, F.; Pace, B.; Pignone, M.; Rovida, A.; Visini, F.

    2017-12-01

    In 2015 the Seismic Hazard Center (Centro Pericolosità Sismica - CPS) of the National Institute of Geophysics and Volcanology was commissioned to coordinate the national scientific community in elaborating a new reference seismic hazard model, mainly aimed at updating the seismic code. The CPS designed a roadmap for releasing within three years a significantly renewed PSHA model, with regard both to the updated input elements and to the strategies to be followed. The main requirements of the model were discussed in meetings with experts in earthquake engineering who will then participate in the revision of the building code. The activities were organized in six tasks: program coordination, input data, seismicity models, ground motion predictive equations (GMPEs), computation and rendering, and testing. The input data task has been selecting the most up-to-date information about seismicity (historical and instrumental), seismogenic faults, and deformation (from both seismicity and geodetic data). The seismicity models have been elaborated in terms of classic source areas, fault sources, and gridded seismicity, based on different approaches. The GMPE task has selected the most recent models, accounting for their tectonic suitability and forecasting performance. The testing phase has been planned to design statistical procedures for testing, with the available data, the whole seismic hazard model as well as single components such as the seismicity models and the GMPEs. In this talk we show some preliminary results, summarize the overall strategy for building the new Italian PSHA model, and discuss in detail important novelties that we put forward. Specifically, we adopt a new formal probabilistic framework to interpret the outcomes of the model and to test it meaningfully; this requires a proper definition and characterization of both aleatory variability and epistemic uncertainty, which we accomplish through an ensemble modeling strategy. We use a weighting scheme for the different components of the PSHA model that has been built through three independent steps: a formal experts' elicitation, the outcomes of the testing phase, and the correlation between the outcomes. Finally, we explore through different techniques the influence of the declustering procedure on seismic hazard.

  7. A review of statistical updating methods for clinical prediction models.

    PubMed

    Su, Ting-Li; Jaki, Thomas; Hickey, Graeme L; Buchan, Iain; Sperrin, Matthew

    2018-01-01

    A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate the existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models to a new population or context, and these should be implemented rather than developing a new clinical prediction model from scratch, using a breadth of complementary statistical methods.
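
    As a hedged illustration of the simplest updating strategy (coefficient updating by logistic recalibration), the sketch below re-fits only an intercept and a calibration slope on the original model's linear predictor; the data are simulated and are not the cardiac surgery data used in the article.

```python
# Logistic recalibration of an existing prediction model on a new population.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
beta_old = np.array([0.8, -0.5, 0.3])            # coefficients of the existing model
lp = X @ beta_old                                 # original linear predictor
# New population: outcomes generated with a shifted intercept and damped slope.
y = rng.binomial(1, 1 / (1 + np.exp(-(0.7 * lp - 0.4))))

recal = LogisticRegression().fit(lp.reshape(-1, 1), y)
print("updated intercept:", recal.intercept_[0])
print("calibration slope:", recal.coef_[0, 0])    # roughly 0.7 expected here
```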

  8. Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.

    2017-04-01

    To address the time-variant and nonlinear characteristics of industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time-difference values of the input and output variables are used as training samples to construct the model, which reduces the effect of nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To avoid an excessively high model updating frequency, a confidence value is introduced, which is updated adaptively according to the results of the model performance assessment; once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxy-benz-aldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information, and reflect the process characteristics accurately.
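
    As a hedged illustration of the time-difference idea, the sketch below trains a plain PLS model on differences of inputs and outputs and adds the predicted differences to the last known measurement; it stands in for the authors' moving-window recursive variant, and the process data are simulated.

```python
# Time-difference soft sensor with a plain PLS model; illustrative only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = np.cumsum(rng.normal(size=(300, 4)), axis=0)        # drifting process inputs
y = X @ np.array([0.5, -0.2, 0.1, 0.3]) + rng.normal(scale=0.1, size=300)

dX, dy = np.diff(X, axis=0), np.diff(y)                 # time-difference samples
pls = PLSRegression(n_components=2).fit(dX[:250], dy[:250])

# Predict new outputs from input differences plus the last known output.
y_pred = y[250] + np.cumsum(pls.predict(dX[250:]).ravel())
print(np.round(y_pred[:5], 2), np.round(y[251:256], 2))
```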

  9. Demonstration of the DSST State Transition Matrix Time-Update Properties Using the Linux GTDS Program

    DTIC Science & Technology

    2011-09-01

    The orbit is represented by a single mean equinoctial element set. The behavior of the non-singular equinoctial mean elements is more linear, and this has positive implications for orbit determination processes based on the semi-analytical theory. The GTDS Semi-analytical Satellite Theory (DSST) architecture has been extended to …

  10. Update rules and interevent time distributions: slow ordering versus no ordering in the voter model.

    PubMed

    Fernández-Gracia, J; Eguíluz, V M; San Miguel, M

    2011-07-01

    We introduce a general methodology of update rules accounting for arbitrary interevent time (IET) distributions in simulations of interacting agents. We consider in particular update rules that depend on the state of the agent, so that the update becomes part of the dynamical model. As an illustration we consider the voter model in fully connected, random, and scale-free networks with an activation probability inversely proportional to the time since the last action, where an action can be an update attempt (an exogenous update) or a change of state (an endogenous update). We find that in the thermodynamic limit, at variance with standard updates and the exogenous update, the system orders slowly for the endogenous update. The approach to the absorbing state is characterized by a power-law decay of the density of interfaces, and we observe that the mean time to reach the absorbing state might not be well defined. The IET distributions resulting from both update schemes show power-law tails.

  11. A STUDY ON THE HIERARCHY OF MANAGEMENT ELEMENTS

    NASA Astrophysics Data System (ADS)

    Suzuki, Nobuyuki; Watanabe, Tadashi

    Compared to the late 20th century, the Japanese construction industry has drastically changed its business methodology, outlook and approach in response to global issues and the incredible advances in technology. Such influences non-exhaustively include the WTO Government Procurement Agreement, updated conditions of tendering and contracting, client demands for cost reduction, and the rapid penetration of ICT (Information and Communication Technology) into modern society. These days, the significance of controlling Quality, Cost and Time (the so-called QCT) has been recognized as an eternal triangle by almost all countries, government organizations and the private sector. However, as the construction industry is exposed to, and influenced by, more and more internal and external dynamic factors, continued reliance on managing and controlling QCT elements on their own is no longer adequate in meeting the growing demands and expectations, and as such control of additional management elements is now essential to avoid problems, or minimize their potential impacts should they occur. This paper utilizes the results of a survey carried out amongst construction managers and consultants in Japan and overseas to develop a spatial network that defines the interaction of management factors as a weighted graphical model. The calculated closeness centrality index of the developed management network model is adopted to identify the initial element hierarchy, which is then further analyzed using the minimum distances of the independent relationships of management elements (the Warshall-Floyd algorithm) to identify the optimum potential hierarchy for effective construction management. A key result of the analysis is the significance of "Human Resource" in the construction industry management element hierarchy alongside the traditional QCT elements.
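
    As a hedged illustration of the graph analysis described above, the sketch below computes all-pairs shortest distances with the Floyd-Warshall algorithm and a closeness centrality index on a small, invented weighted graph of management elements.

```python
# Floyd-Warshall shortest distances and closeness centrality; graph is illustrative.
import numpy as np

INF = np.inf
# Distance matrix between four hypothetical management elements (INF = no direct link).
W = np.array([[0,   1,   4,   INF],
              [1,   0,   2,   5],
              [4,   2,   0,   1],
              [INF, 5,   1,   0]], dtype=float)

def floyd_warshall(dist):
    d = dist.copy()
    for k in range(len(d)):
        d = np.minimum(d, d[:, [k]] + d[[k], :])   # relax paths through node k
    return d

def closeness(dist):
    n = len(dist)
    return (n - 1) / dist.sum(axis=1)              # higher value = more central element

D = floyd_warshall(W)
print(closeness(D))
```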

  12. Improved Convergence and Robustness of USM3D Solutions on Mixed Element Grids (Invited)

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2015-01-01

    Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Scheme (HANIS), has been developed and implemented. It provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier Stokes (RANS) equations and a nonlinear control of the solution update. Two variants of the new methodology are assessed on four benchmark cases, namely, a zero-pressure gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the baseline solver technology.

  13. Status Report on NEAMS System Analysis Module Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, R.; Fanning, T. H.; Sumner, T.

    2015-12-01

    Under the Reactor Product Line (RPL) of DOE-NE’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, an advanced SFR System Analysis Module (SAM) is being developed at Argonne National Laboratory. The goal of the SAM development is to provide fast-running, improved-fidelity, whole-plant transient analysis capabilities. SAM utilizes an object-oriented application framework (MOOSE), its underlying meshing and finite-element library (libMesh), and linear and non-linear solvers (PETSc) to leverage modern advanced software environments and numerical methods. It also incorporates advances in physical and empirical models and seeks closure models based on information from high-fidelity simulations and experiments. This report provides an update on the SAM development and summarizes the activities performed in FY15 and the first quarter of FY16. The tasks include: (1) implement support for 2nd-order finite elements in SAM components for improved accuracy and computational efficiency; (2) improve the conjugate heat transfer modeling and develop pseudo-3-D full-core reactor heat transfer capabilities; (3) perform verification and validation tests as well as demonstration simulations; (4) develop the coupling requirements for SAS4A/SASSYS-1 and SAM integration.

  14. Determination of orthotropic material properties by modal analysis

    NASA Astrophysics Data System (ADS)

    Lai, Junpeng

    The methodology for determination of orthotropic material properties in the plane stress condition is presented. It is applied to orthotropic laminated plates such as printed wiring boards. The first part of the thesis focuses on theories and methodologies. The static beam model and the vibratory plate model are presented. The methods are validated by a series of tests on aluminum. In the static tests, deflection and two directions of strain are measured, so four of the properties are identified: Ex, Ey, nuxy, nuyx. In the dynamic test, the first ten modes' resonance frequencies are obtained using the technique of modal analysis. The measured data are processed by FFT and analyzed by curve fitting to extract natural frequencies and mode shapes. To determine the last material property, a finite element method using ANSYS is applied. Starting from the material properties identified in the static tests and a proper initial guess of the unknown shear modulus, an iterative process builds the finite element model and conducts modal analysis with the updated model. When the modal analysis results produced by ANSYS match the natural frequencies acquired in the dynamic test, the process halts, yielding the last material property in the plane stress condition.
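
    The iterative identification of the remaining shear modulus described above can be cast as a one-dimensional fit between measured and model natural frequencies. A minimal sketch, in which fe_natural_frequencies is a hypothetical placeholder for the ANSYS modal analysis and f_measured stands for the frequencies obtained from the dynamic test:

        import numpy as np
        from scipy.optimize import minimize_scalar

        f_measured = np.array([112.0, 187.0, 295.0])   # hypothetical measured natural frequencies (Hz)

        def fe_natural_frequencies(Gxy):
            """Placeholder for the finite element modal analysis (e.g., an ANSYS batch run)
            returning the first few natural frequencies for a trial shear modulus Gxy."""
            raise NotImplementedError

        def mismatch(Gxy):
            f_model = fe_natural_frequencies(Gxy)
            # relative least-squares error between model and measured frequencies
            return np.sum(((f_model - f_measured) / f_measured) ** 2)

        # The bounded scalar minimization plays the role of the iterative updating loop:
        # the FE model is rebuilt and re-analyzed at each trial Gxy until the modal
        # results match the measured natural frequencies.
        # res = minimize_scalar(mismatch, bounds=(1e9, 10e9), method="bounded")
        # Gxy_identified = res.x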

  15. A hybrid finite-difference and analytic element groundwater model

    USGS Publications Warehouse

    Haitjema, Henk M.; Feinstein, Daniel T.; Hunt, Randall J.; Gusyev, Maksym

    2010-01-01

    Regional finite-difference models tend to have large cell sizes, often on the order of 1–2 km on a side. Although the regional flow patterns in deeper formations may be adequately represented by such a model, the intricate surface water and groundwater interactions in the shallower layers are not. Several stream reaches and nearby wells may occur in a single cell, precluding any meaningful modeling of the surface water and groundwater interactions between the individual features. We propose to replace the upper MODFLOW layer or layers, in which the surface water and groundwater interactions occur, by an analytic element model (GFLOW) that does not employ a model grid; instead, it represents wells and surface waters directly by the use of point-sinks and line-sinks. For many practical cases it suffices to provide GFLOW with the vertical leakage rates calculated in the original coarse MODFLOW model in order to obtain a good representation of surface water and groundwater interactions. However, when the combined transmissivities in the deeper (MODFLOW) layers dominate, the accuracy of the GFLOW solution diminishes. For those cases, an iterative coupling procedure, whereby the leakages between the GFLOW and MODFLOW model are updated, appreciably improves the overall solution, albeit at considerable computational cost. The coupled GFLOW–MODFLOW model is applicable to relatively large areas, in many cases to the entire model domain, thus forming an attractive alternative to local grid refinement or inset models.
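
    The iterative coupling described above amounts to a fixed-point loop in which the two models exchange vertical leakage until they agree. A schematic sketch, with run_gflow and run_modflow as hypothetical wrappers around the two codes:

        import numpy as np

        def couple_gflow_modflow(run_gflow, run_modflow, leakage0, tol=1e-4, max_iter=50):
            """Fixed-point iteration between an analytic element model of the upper layer
            (GFLOW) and a finite-difference model of the deeper layers (MODFLOW).

            run_gflow(leakage) -> upper-layer solution for the given vertical leakage
            run_modflow(upper) -> updated vertical leakage from the deep-layer solution
            """
            leakage = np.asarray(leakage0, dtype=float)
            for it in range(1, max_iter + 1):
                upper = run_gflow(leakage)            # solve the upper layer
                new_leakage = run_modflow(upper)      # solve the deep layers, recompute leakage
                change = np.max(np.abs(new_leakage - leakage))
                leakage = new_leakage
                if change < tol:                      # the models agree on the exchanged fluxes
                    return leakage, it
            return leakage, max_iter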

  16. Stratiform chromite deposit model

    USGS Publications Warehouse

    Schulte, Ruth F.; Taylor, Ryan D.; Piatak, Nadine M.; Seal, Robert R.

    2010-01-01

    Stratiform chromite deposits are of great economic importance, yet their origin and evolution remain highly debated. Layered igneous intrusions such as the Bushveld, Great Dyke, Kemi, and Stillwater Complexes, provide opportunities for studying magmatic differentiation processes and assimilation within the crust, as well as related ore-deposit formation. Chromite-rich seams within layered intrusions host the majority of the world's chromium reserves and may contain significant platinum-group-element (PGE) mineralization. This model of stratiform chromite deposits is part of an effort by the U.S. Geological Survey's Mineral Resources Program to update existing models and develop new descriptive mineral deposit models to supplement previously published models for use in mineral-resource and mineral-environmental assessments. The model focuses on features that may be common to all stratiform chromite deposits as a way to gain insight into the processes that gave rise to their emplacement and to the significant economic resources contained in them.

  17. Space-time VMS computation of wind-turbine rotor and tower aerodynamics

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; McIntyre, Spenser; Kostov, Nikolay; Kolesar, Ryan; Habluetzel, Casey

    2014-01-01

    We present the space-time variational multiscale (ST-VMS) computation of wind-turbine rotor and tower aerodynamics. The rotor geometry is that of the NREL 5MW offshore baseline wind turbine. We compute with a given wind speed and a specified rotor speed. The computation is challenging because of the large Reynolds numbers and rotating turbulent flows, and computing the correct torque requires an accurate and meticulous numerical approach. The presence of the tower increases the computational challenge because of the fast, rotational relative motion between the rotor and tower. The ST-VMS method is the residual-based VMS version of the Deforming-Spatial-Domain/Stabilized ST (DSD/SST) method, and is also called "DSD/SST-VMST" method (i.e., the version with the VMS turbulence model). In calculating the stabilization parameters embedded in the method, we are using a new element length definition for the diffusion-dominated limit. The DSD/SST method, which was introduced as a general-purpose moving-mesh method for computation of flows with moving interfaces, requires a mesh update method. Mesh update typically consists of moving the mesh for as long as possible and remeshing as needed. In the computations reported here, NURBS basis functions are used for the temporal representation of the rotor motion, enabling us to represent the circular paths associated with that motion exactly and specify a constant angular velocity corresponding to the invariant speeds along those paths. In addition, temporal NURBS basis functions are used in representation of the motion and deformation of the volume meshes computed and also in remeshing. We name this "ST/NURBS Mesh Update Method (STNMUM)." The STNMUM increases computational efficiency in terms of computer time and storage, and computational flexibility in terms of being able to change the time-step size of the computation. We use layers of thin elements near the blade surfaces, which undergo rigid-body motion with the rotor. We compare the results from computations with and without tower, and we also compare using NURBS and linear finite element basis functions in temporal representation of the mesh motion.

  18. Space-Time VMS Computation of Wind-Turbine Rotor and Tower Aerodynamics

    NASA Astrophysics Data System (ADS)

    McIntyre, Spenser W.

    This thesis is on the space-time variational multiscale (ST-VMS) computation of wind-turbine rotor and tower aerodynamics. The rotor geometry is that of the NREL 5MW offshore baseline wind turbine. We compute with a given wind speed and a specified rotor speed. The computation is challenging because of the large Reynolds numbers and rotating turbulent flows, and computing the correct torque requires an accurate and meticulous numerical approach. The presence of the tower increases the computational challenge because of the fast, rotational relative motion between the rotor and tower. The ST-VMS method is the residual-based VMS version of the Deforming-Spatial-Domain/Stabilized ST (DSD/SST) method, and is also called "DSD/SST-VMST" method (i.e., the version with the VMS turbulence model). In calculating the stabilization parameters embedded in the method, we are using a new element length definition for the diffusion-dominated limit. The DSD/SST method, which was introduced as a general-purpose moving-mesh method for computation of flows with moving interfaces, requires a mesh update method. Mesh update typically consists of moving the mesh for as long as possible and remeshing as needed. In the computations reported here, NURBS basis functions are used for the temporal representation of the rotor motion, enabling us to represent the circular paths associated with that motion exactly and specify a constant angular velocity corresponding to the invariant speeds along those paths. In addition, temporal NURBS basis functions are used in representation of the motion and deformation of the volume meshes computed and also in remeshing. We name this "ST/NURBS Mesh Update Method (STNMUM)." The STNMUM increases computational efficiency in terms of computer time and storage, and computational flexibility in terms of being able to change the time-step size of the computation. We use layers of thin elements near the blade surfaces, which undergo rigid-body motion with the rotor. We compare the results from computations with and without tower, and we also compare using NURBS and linear finite element basis functions in temporal representation of the mesh motion.

  19. Multi-Sensor Data Integration Using Deep Learning for Characterization of Defects in Steel Elements †

    PubMed Central

    2018-01-01

    Nowadays, there is a strong demand for inspection systems integrating both high sensitivity under various testing conditions and advanced processing allowing automatic identification of the examined object state and detection of threats. This paper presents the possibility of utilizing a magnetic multi-sensor matrix transducer for characterization of defect areas in steel elements and a deep learning based algorithm for integration of data and final identification of the object state. The transducer allows sensing of the magnetic field vector at a single location in different directions. Thus, it enables detecting and characterizing any material changes that affect magnetic properties regardless of their orientation in reference to the scanning direction. To assess the general application capability of the system, steel elements with rectangular-shaped artificial defects were used. First, a database was constructed considering numerical and measurement results. A finite element method was used to run the simulation process and provide transducer signal patterns for different defect arrangements. Next, the algorithm integrating the responses of the transducer collected in a single position was applied, and a convolutional neural network was used for implementation of the material state evaluation model. Then, validation of the obtained model was carried out. In this paper, the procedure for updating the evaluated local state, referring to the neighboring area results, is presented. Finally, the results and future perspectives are discussed. PMID:29351215

  20. Probabilistic Prognosis of Non-Planar Fatigue Crack Growth

    NASA Technical Reports Server (NTRS)

    Leser, Patrick E.; Newman, John A.; Warner, James E.; Leser, William P.; Hochhalter, Jacob D.; Yuan, Fuh-Gwo

    2016-01-01

    Quantifying the uncertainty in model parameters for the purpose of damage prognosis can be accomplished utilizing Bayesian inference and damage diagnosis data from sources such as non-destructive evaluation or structural health monitoring. The number of samples required to solve the Bayesian inverse problem through common sampling techniques (e.g., Markov chain Monte Carlo) renders high-fidelity finite element-based damage growth models unusable due to prohibitive computation times. However, these types of models are often the only option when attempting to model complex damage growth in real-world structures. Here, a recently developed high-fidelity crack growth model is used which, when compared to finite element-based modeling, has demonstrated reductions in computation times of three orders of magnitude through the use of surrogate models and machine learning. The model is flexible in that only the expensive computation of the crack driving forces is replaced by the surrogate models, leaving the remaining parameters accessible for uncertainty quantification. A probabilistic prognosis framework incorporating this model is developed and demonstrated for non-planar crack growth in a modified, edge-notched, aluminum tensile specimen. Predictions of remaining useful life are made over time for five updates of the damage diagnosis data, and prognostic metrics are utilized to evaluate the performance of the prognostic framework. Challenges specific to the probabilistic prognosis of non-planar fatigue crack growth are highlighted and discussed in the context of the experimental results.

  1. An Update on Improvements to NiCE Support for RELAP-7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCaskey, Alex; Wojtowicz, Anna; Deyton, Jordan H.

    The Multiphysics Object-Oriented Simulation Environment (MOOSE) is a framework that facilitates the development of applications that rely on finite-element analysis to solve a coupled, nonlinear system of partial differential equations. RELAP-7 represents an update to the venerable RELAP-5 simulator that is built upon this framework and attempts to model the balance-of-plant concerns in a full nuclear plant. This report details the continued support and integration of RELAP-7 and the NEAMS Integrated Computational Environment (NiCE). RELAP-7 is fully supported by NiCE due to ongoing work to tightly integrate NiCE with the MOOSE framework, and subsequently the applications built upon it. NiCE development throughout the first quarter of FY15 has focused on improvements, bug fixes, and feature additions to existing MOOSE-based application support. Specifically, this report will focus on improvements to the NiCE MOOSE Model Builder, the MOOSE application job launcher, and the 3D Nuclear Plant Viewer. This report also includes a comprehensive tutorial that guides RELAP-7 users through the basic NiCE workflow: from input generation and 3D plant modeling, to massively parallel job launch and post-simulation data visualization.

  2. Vectorized schemes for conical potential flow using the artificial density method

    NASA Technical Reports Server (NTRS)

    Bradley, P. F.; Dwoyer, D. L.; South, J. C., Jr.; Keen, J. M.

    1984-01-01

    A method is developed to determine solutions to the full-potential equation for steady supersonic conical flow using the artificial density method. Various update schemes used generally for transonic potential solutions are investigated. The schemes are compared for speed and robustness. All versions of the computer code have been vectorized and are currently running on the CYBER-203 computer. The update schemes are vectorized, where possible, either fully (explicit schemes) or partially (implicit schemes). Since each version of the code differs only by the update scheme and elements other than the update scheme are completely vectorizable, comparisons of computational effort and convergence rate among schemes are a measure of the specific scheme's performance. Results are presented for circular and elliptical cones at angle of attack for subcritical and supercritical crossflows.

  3. CPU-GPU mixed implementation of virtual node method for real-time interactive cutting of deformable objects using OpenCL.

    PubMed

    Jia, Shiyu; Zhang, Weizhong; Yu, Xiaokang; Pan, Zhenkuan

    2015-09-01

    Surgical simulators need to simulate interactive cutting of deformable objects in real time. The goal of this work was to design an interactive cutting algorithm that eliminates traditional cutting state classification and can work simultaneously with real-time GPU-accelerated deformation without affecting its numerical stability. A modified virtual node method for cutting is proposed. The deformable object is modeled as a real tetrahedral mesh embedded in a virtual tetrahedral mesh; the former is used for graphics rendering and collision, while the latter is used for deformation. The cutting algorithm first subdivides real tetrahedrons to eliminate all face and edge intersections, then splits faces, edges and vertices along the cutting tool trajectory to form cut surfaces. Next, virtual tetrahedrons containing more than one connected real tetrahedral fragment are duplicated, and connectivity between virtual tetrahedrons is updated. Finally, the embedding relationship between the real and virtual tetrahedral meshes is updated. The co-rotational linear finite element method is used for deformation. Cutting and collision are processed by the CPU, while deformation is carried out by the GPU using OpenCL. The efficiency of the GPU-accelerated deformation algorithm was tested using block models with varying numbers of tetrahedrons. The effectiveness of our cutting algorithm under multiple cuts and self-intersecting cuts was tested using a block model and a cylinder model. Cutting of a more complex liver model was performed, and detailed performance characteristics of cutting, deformation and collision were measured and analyzed. Our cutting algorithm can produce continuous cut surfaces when the traditional minimal element creation algorithm fails. Our GPU-accelerated deformation algorithm remains stable with a constant time step under multiple arbitrary cuts and works on both NVIDIA and AMD GPUs. The GPU-CPU speed ratio can be as high as 10 for models with 80,000 tetrahedrons. Forty to sixty percent real-time performance and a 100-200 Hz simulation rate are achieved for the liver model with 3,101 tetrahedrons. Major bottlenecks for simulation efficiency are cutting, collision processing and CPU-GPU data transfer. Future work needs to improve on these areas.

  4. 42 CFR 82.31 - How can the public recommend changes to scientific elements underlying the dose reconstruction...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 1 2011-10-01 2011-10-01 false How can the public recommend changes to scientific... ACTIVITIES METHODS FOR CONDUCTING DOSE RECONSTRUCTION UNDER THE ENERGY EMPLOYEES OCCUPATIONAL ILLNESS COMPENSATION PROGRAM ACT OF 2000 Updating the Scientific Elements Underlying Dose Reconstructions § 82.31 How...

  5. 42 CFR 82.31 - How can the public recommend changes to scientific elements underlying the dose reconstruction...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false How can the public recommend changes to scientific... ACTIVITIES METHODS FOR CONDUCTING DOSE RECONSTRUCTION UNDER THE ENERGY EMPLOYEES OCCUPATIONAL ILLNESS COMPENSATION PROGRAM ACT OF 2000 Updating the Scientific Elements Underlying Dose Reconstructions § 82.31 How...

  6. 42 CFR 82.31 - How can the public recommend changes to scientific elements underlying the dose reconstruction...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false How can the public recommend changes to scientific... ACTIVITIES METHODS FOR CONDUCTING DOSE RECONSTRUCTION UNDER THE ENERGY EMPLOYEES OCCUPATIONAL ILLNESS COMPENSATION PROGRAM ACT OF 2000 Updating the Scientific Elements Underlying Dose Reconstructions § 82.31 How...

  7. 42 CFR 82.31 - How can the public recommend changes to scientific elements underlying the dose reconstruction...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false How can the public recommend changes to scientific... ACTIVITIES METHODS FOR CONDUCTING DOSE RECONSTRUCTION UNDER THE ENERGY EMPLOYEES OCCUPATIONAL ILLNESS COMPENSATION PROGRAM ACT OF 2000 Updating the Scientific Elements Underlying Dose Reconstructions § 82.31 How...

  8. Scale-free networks as an epiphenomenon of memory

    NASA Astrophysics Data System (ADS)

    Caravelli, F.; Hamma, A.; Di Ventra, M.

    2015-01-01

    Many realistic networks are scale free, with small characteristic path lengths, high clustering, and a power-law degree distribution. They can be obtained by dynamical networks in which a preferential attachment process takes place. However, this mechanism is non-local, in the sense that it requires knowledge of the whole graph in order for the graph to be updated. Instead, if preferential attachment and realistic networks occur in physical systems, these features need to emerge from a local model. In this paper, we propose a local model and show that a possible ingredient (which is often underrated) for obtaining scale-free networks with local rules is memory. Such a model can be realised in solid-state circuits, using non-linear passive elements with memory such as memristors, and thus can be tested experimentally.

  9. Trends in hospital labor and total factor productivity, 1981-86

    PubMed Central

    Cromwell, Jerry; Pope, Gregory C.

    1989-01-01

    The per-case payment rates of Medicare's prospective payment system are annually updated. As one element of the update factor, Congress required consideration of changes in hospital productivity. In this article, calculations of annual changes in labor and total factor productivity during 1981-86 of hospitals eligible for prospective payment are presented using several output and input variants. Generally, productivity has declined since 1980, although the rates of decline have slowed since prospective payment implementation. According to the series of analyses most relevant for policy, significant hospital productivity gains occurred during 1983-86. This may justify a lower update factor. PMID:10313278

  10. Efficient Synthesis of Network Updates

    DTIC Science & Technology

    2015-06-17

    model include switches S_i, links L_j, and a single controller element C, and a network N is a tuple containing these. Each switch S_i is encoded as a... and the ports they should be forwarded to, respectively. Each link L_j is represented by a record consisting of two locations loc and loc0 and a list... the union of multisets m1 and m2. We write [x] for a singleton list, and l1@l2 for concatenation of l1 and l2. Each transition N → N' is annotated

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeung, Yu-Hong; Pothen, Alex; Halappanavar, Mahantesh

    We present an augmented matrix approach to update the solution to a linear system of equations when the coefficient matrix is modified by a few elements within a principal submatrix. This problem arises in the dynamic security analysis of a power grid, where operators need to perform N-x contingency analysis, i.e., determine the state of the system when up to x links from N fail. Our algorithms augment the coefficient matrix to account for the changes in it, and then compute the solution to the augmented system without refactoring the modified matrix. We provide two algorithms, a direct method, and a hybrid direct-iterative method for solving the augmented system. We also exploit the sparsity of the matrices and vectors to accelerate the overall computation. Our algorithms are compared on three power grids with PARDISO, a parallel direct solver, and CHOLMOD, a direct solver with the ability to modify the Cholesky factors of the coefficient matrix. We show that our augmented algorithms outperform PARDISO (by two orders of magnitude) and CHOLMOD (by a factor of up to 5). Further, our algorithms scale better than CHOLMOD as the number of elements updated increases. The solutions are computed with high accuracy. Our algorithms are capable of computing N-x contingency analysis on a 778K-bus grid, updating a solution with x = 20 elements in 1.6 × 10⁻² seconds on an Intel Xeon processor.
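
    For illustration only (the paper's algorithms additionally reuse the existing factorization of A and exploit sparsity, which this dense sketch does not), the augmented-matrix idea for a modification A' = A + B Cᵀ can be written as below; all matrices are hypothetical examples.

        import numpy as np

        def solve_updated(A, b, B, C):
            """Solve (A + B @ C.T) x = b via the augmented system
                [ A    B ] [x]   [b]
                [ C.T -I ] [y] = [0]
            without explicitly forming the modified coefficient matrix."""
            n, k = B.shape
            aug = np.block([[A, B],
                            [C.T, -np.eye(k)]])
            rhs = np.concatenate([b, np.zeros(k)])
            return np.linalg.solve(aug, rhs)[:n]

        rng = np.random.default_rng(1)
        n, k = 8, 2
        A = rng.random((n, n)) + n * np.eye(n)           # well-conditioned base matrix
        b = rng.random(n)
        B, C = rng.random((n, k)), rng.random((n, k))    # low-rank change B @ C.T
        x = solve_updated(A, b, B, C)
        print(np.allclose((A + B @ C.T) @ x, b))         # True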

  12. Tangle-Free Finite Element Mesh Motion for Ablation Problems

    NASA Technical Reports Server (NTRS)

    Droba, Justin

    2016-01-01

    Mesh motion is the process by which a computational domain is updated in time to reflect physical changes in the material the domain represents. Such a technique is needed in the study of the thermal response of ablative materials, which erode when strong heating is applied to the boundary. Traditionally, the thermal solver is coupled with a linear elastic or biharmonic system whose sole purpose is to update mesh node locations in response to altering boundary heating. Simple mesh motion algorithms rely on boundary surface normals. In such schemes, evolution in time will eventually cause the mesh to intersect and "tangle" with itself, causing failure. Furthermore, such schemes are greatly limited in the problem geometries on which they will be successful. This paper presents a comprehensive and sophisticated scheme that tailors the directions of motion based on context. By choosing directions for each node smartly, the inevitable tangle can be completely avoided and mesh motion on complex geometries can be modeled accurately.

  13. Real-time simulation of contact and cutting of heterogeneous soft-tissues.

    PubMed

    Courtecuisse, Hadrien; Allard, Jérémie; Kerfriden, Pierre; Bordas, Stéphane P A; Cotin, Stéphane; Duriez, Christian

    2014-02-01

    This paper presents a numerical method for interactive (real-time) simulations, which considerably improves the accuracy of the response of heterogeneous soft-tissue models undergoing contact, cutting and other topological changes. We provide an integrated methodology able to deal both with the ill-conditioning issues associated with material heterogeneities, contact boundary conditions which are one of the main sources of inaccuracies, and cutting which is one of the most challenging issues in interactive simulations. Our approach is based on an implicit time integration of a non-linear finite element model. To enable real-time computations, we propose a new preconditioning technique, based on an asynchronous update at low frequency. The preconditioner is not only used to improve the computation of the deformation of the tissues, but also to simulate the contact response of homogeneous and heterogeneous bodies with the same accuracy. We also address the problem of cutting the heterogeneous structures and propose a method to update the preconditioner according to the topological modifications. Finally, we apply our approach to three challenging demonstrators: (i) a simulation of cataract surgery (ii) a simulation of laparoscopic hepatectomy (iii) a brain tumor surgery. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Updated Bs-mixing constraints on new physics models for b →s ℓ+ℓ- anomalies

    NASA Astrophysics Data System (ADS)

    Di Luzio, Luca; Kirk, Matthew; Lenz, Alexander

    2018-05-01

    Many new physics models that explain the intriguing anomalies in the b-quark flavor sector are severely constrained by Bs mixing, for which the Standard Model prediction and experiment agreed well until recently. The most recent Flavour Lattice Averaging Group (FLAG) average of lattice results for the nonperturbative matrix elements points, however, in the direction of a small discrepancy in this observable. Using up-to-date inputs from standard sources such as the PDG, FLAG and one of the two leading Cabibbo-Kobayashi-Maskawa (CKM) fitting groups to determine ΔM_s^SM, we find a severe reduction of the allowed parameter space of Z' and leptoquark models explaining the B anomalies. Remarkably, in the former case the upper bound on the Z' mass approaches dangerously close to the energy scales already probed by the LHC. We finally identify some model-building directions in order to alleviate the tension with Bs mixing.

  15. Conditions of Core Formation in the Early Earth: Single Stage or Heterogeneous Accretion?

    NASA Technical Reports Server (NTRS)

    Righter, Kevin

    2010-01-01

    Since approx. 1990, high pressure and temperature (PT) experiments on metal-silicate systems have shown that partition coefficients [D(met/sil)] for siderophile (iron-loving) elements are much different from those measured at low PT conditions [1,2]. The high PT data have been used to argue for a magma ocean during growth of the early Earth [3,4]. In the ensuing decades there have been hundreds of new experiments carried out and published on a wide range of siderophile elements (> 80 experiments published for Ni, Co, Mo, W, P, Mn, V, Cr, Ga, Cu and Pd). At the same time several different models have been advanced to explain the siderophile elements in Earth's mantle: (a) a shallow magma ocean, 25-30 GPa [3,5]; (b) a deep magma ocean, up to 50 GPa [6,7]; and (c) an early reduced and later oxidized magma ocean [8,9]. Some studies have drawn conclusions based on a small subset of siderophile elements, or a set of elements that provides little leverage on the big picture (like slightly siderophile elements), and no single study has attempted to quantitatively explain more than 5 elements at a time. The purpose of this abstract is to identify issues that have led to differences in interpretation, and to present updated predictive expressions based on new experimental data. The resulting expressions will be applied to the siderophile element depletions in Earth's upper mantle.

  16. Improved Modeling of Open Waveguide Aperture Radiators for use in Conformal Antenna Arrays

    NASA Astrophysics Data System (ADS)

    Nelson, Gregory James

    Open waveguide apertures have been used as radiating elements in conformal arrays. Individual radiating element model patterns are used in constructing overall array models. The existing models for these aperture radiating elements may not accurately predict the array pattern for TEM waves which are not on boresight for each radiating element. In particular, surrounding structures can affect the far field patterns of these apertures, which ultimately affects the overall array pattern. New models of open waveguide apertures are developed here with the goal of accounting for the surrounding structure effects on the aperture far field patterns such that the new models make accurate pattern predictions. These aperture patterns (both E plane and H plane) are measured in an anechoic chamber and the manner in which they deviate from existing model patterns is studied. Using these measurements as a basis, existing models for both E and H planes are updated with new factors and terms which allow the prediction of far field open waveguide aperture patterns with improved accuracy. These new and improved individual radiator models are then used to predict overall conformal array patterns. Arrays of open waveguide apertures are constructed and measured in a similar fashion to the individual aperture measurements. These measured array patterns are compared with the newly modeled array patterns to verify the improved accuracy of the new models as compared with the performance of existing models in making array far field pattern predictions. The array pattern lobe characteristics are then studied for predicting fully circularly conformal arrays of varying radii. The lobe metrics that are tracked are angular location and magnitude as the radii of the conformal arrays are varied. A constructed, measured array that is close to conforming to a circular surface is compared with a fully circularly conformal modeled array pattern prediction, with the predicted lobe angular locations and magnitudes tracked, plotted and tabulated. The close match between the patterns of the measured array and the modeled circularly conformal array verifies the validity of the modeled circularly conformal array pattern predictions.

  17. Human spaceflight and space adaptations: Computational simulation of gravitational unloading on the spine

    NASA Astrophysics Data System (ADS)

    Townsend, Molly T.; Sarigul-Klijn, Nesrin

    2018-04-01

    Living in reduced gravitational environments for a prolonged duration, such as a flyby mission to Mars or an extended stay at the International Space Station, affects the human body - in particular, the spine. As the spine adapts to spaceflight, morphological and physiological changes cause the mechanical integrity of the spinal column to be compromised, potentially endangering internal organs, nervous health, and human body mechanical function. Therefore, a high fidelity computational model and simulation of the whole human spine was created and validated for the purpose of investigating the mechanical integrity of the spine in crew members during exploratory space missions. A spaceflight-exposed spine model has been developed through the adaptation of a three-dimensional nonlinear finite element model, with the updated Lagrangian formulation, of a healthy ground-based human spine in vivo. Simulation of the porohyperelastic response of the intervertebral disc to mechanical unloading resulted in a model capable of accurately predicting spinal swelling/lengthening, spinal motion, and internal stress distribution. The curvature of this space-adaptation-exposed spine model was compared to a control terrestrial-based finite element model, indicating how the shape changed. Finally, potential injury sites for crew members are predicted for a typical 9-day mission.

  18. A Deep Stochastic Model for Detecting Community in Complex Networks

    NASA Astrophysics Data System (ADS)

    Fu, Jingcheng; Wu, Jianliang

    2017-01-01

    Discovering community structures is an important step toward understanding the structure and dynamics of real-world networks in social science, biology and technology. In this paper, we develop a deep stochastic model based on non-negative matrix factorization to identify communities, in which there are two sets of parameters. One is the community membership matrix, in which the elements in a row correspond to the probabilities that the given node belongs to each of the given number of communities in our model; the other is the community-community connection matrix, in which the element in the i-th row and j-th column represents the probability of there being an edge between a randomly chosen node from the i-th community and a randomly chosen node from the j-th community. The parameters can be evaluated by an efficient updating rule whose convergence can be guaranteed. The community-community connection matrix in our model is more precise than that in traditional non-negative matrix factorization methods. Furthermore, the method called symmetric non-negative matrix factorization is a special case of our model. Finally, experiments on both synthetic and real-world network data demonstrate that our algorithm is highly effective in detecting communities.
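
    A minimal sketch of the special case mentioned above, symmetric non-negative matrix factorization of the adjacency matrix A ≈ H Hᵀ, using a standard damped multiplicative update (the full model, with its separate community-community connection matrix, is more involved); the small two-clique graph is a hypothetical example.

        import numpy as np

        def symnmf(A, k, iters=500, beta=0.5, seed=0):
            """Symmetric NMF: factor the (n x n) adjacency matrix A as H @ H.T with H >= 0.
            Each row of H gives the (unnormalized) community membership strengths of a node."""
            rng = np.random.default_rng(seed)
            H = rng.random((A.shape[0], k))
            for _ in range(iters):
                AH = A @ H
                HHtH = H @ (H.T @ H) + 1e-12             # avoid division by zero
                H *= (1.0 - beta) + beta * (AH / HHtH)   # damped multiplicative update
            return H

        # Two 5-node cliques joined by a single edge (nodes 0-4 and 5-9)
        A = np.zeros((10, 10))
        A[:5, :5] = 1.0; A[5:, 5:] = 1.0; np.fill_diagonal(A, 0.0)
        A[4, 5] = A[5, 4] = 1.0
        H = symnmf(A, k=2)
        print(H.argmax(axis=1))   # community label per node, e.g. [0 0 0 0 0 1 1 1 1 1]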

  19. VKCDB: voltage-gated K+ channel database updated and upgraded.

    PubMed

    Gallin, Warren J; Boutet, Patrick A

    2011-01-01

    The Voltage-gated K(+) Channel DataBase (VKCDB) (http://vkcdb.biology.ualberta.ca) makes a comprehensive set of sequence data readily available for phylogenetic and comparative analysis. The current update contains 2063 entries for full-length or nearly full-length unique channel sequences from Bacteria (477), Archaea (18) and Eukaryotes (1568), an increase from 346 solely eukaryotic entries in the original release. In addition to protein sequences for channels, corresponding nucleotide sequences of the open reading frames corresponding to the amino acid sequences are now available and can be extracted in parallel with sets of protein sequences. Channels are categorized into subfamilies by phylogenetic analysis and by using hidden Markov model analyses. Although the raw database contains a number of fragmentary, duplicated, obsolete and non-channel sequences that were collected in early steps of data collection, the web interface will only return entries that have been validated as likely K(+) channels. The retrieval function of the web interface allows retrieval of entries that contain a substantial fraction of the core structural elements of VKCs, fragmentary entries, or both. The full database can be downloaded as either a MySQL dump or as an XML dump from the web site. We have now implemented automated updates at quarterly intervals.

  20. DYCAST: A finite element program for the crash analysis of structures

    NASA Technical Reports Server (NTRS)

    Pifko, A. B.; Winter, R.; Ogilvie, P.

    1987-01-01

    DYCAST is a nonlinear structural dynamic finite element computer code developed for crash simulation. The element library contains stringers, beams, membrane skin triangles, plate bending triangles and spring elements. Changing stiffnesses in the structure are accounted for by plasticity and very large deflections. Material nonlinearities are accommodated by one of three options: elastic-perfectly plastic, elastic-linear hardening plastic, or elastic-nonlinear hardening plastic of the Ramberg-Osgood type. Geometric nonlinearities are handled in an updated Lagrangian formulation by reforming the structure into its deformed shape after small time increments while accumulating deformations, strains, and forces. The nonlinearities due to combined loadings are maintained, and stiffness variations due to structural failures are computed. Numerical time integrators available are fixed-step central difference, modified Adams, Newmark-beta, and Wilson-theta. The last three have a variable time step capability, which is controlled internally by a solution convergence error measure. Other features include: multiple time-load history tables to subject the structure to time-dependent loading; gravity loading; initial pitch, roll, yaw, and translation of the structural model with respect to the global system; a bandwidth optimizer as a pre-processor; and deformed plots and graphics as post-processors.

  1. Prediction-error variance in Bayesian model updating: a comparative study

    NASA Astrophysics Data System (ADS)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for robustness in the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
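
    A toy sketch of the third strategy, in which the prediction-error variance is updated together with the model parameter; plain Metropolis sampling and a single-parameter "model" are used here for brevity (the paper uses Transitional MCMC and a six-parameter shear building), and all numbers are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)

        def model_freq(k):
            return np.sqrt(k)     # toy "FE model": natural frequency of a unit-mass 1-DOF oscillator

        k_true, sigma_true = 4.0, 0.05
        data = model_freq(k_true) + sigma_true * rng.standard_normal(20)   # synthetic measurements

        def log_post(theta):
            k, log_s2 = theta
            if k <= 0.0:
                return -np.inf
            s2 = np.exp(log_s2)
            resid = data - model_freq(k)
            # Gaussian likelihood (maximum-entropy model given the first two moments),
            # flat priors on k and log(sigma^2)
            return -0.5 * np.sum(resid ** 2) / s2 - 0.5 * len(data) * np.log(s2)

        theta = np.array([1.0, np.log(0.1)])
        samples = []
        for _ in range(20000):
            prop = theta + 0.05 * rng.standard_normal(2)
            if np.log(rng.random()) < log_post(prop) - log_post(theta):
                theta = prop
            samples.append(theta)
        samples = np.array(samples)[5000:]
        print(samples.mean(axis=0))   # posterior means of the stiffness and log prediction-error variance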

  2. Multiobjective constraints for climate model parameter choices: Pragmatic Pareto fronts in CESM1

    NASA Astrophysics Data System (ADS)

    Langenbrunner, B.; Neelin, J. D.

    2017-09-01

    Global climate models (GCMs) are examples of high-dimensional input-output systems, where model output is a function of many variables, and an update in model physics commonly improves performance in one objective function (i.e., measure of model performance) at the expense of degrading another. Here concepts from multiobjective optimization in the engineering literature are used to investigate parameter sensitivity and optimization in the face of such trade-offs. A metamodeling technique called cut high-dimensional model representation (cut-HDMR) is leveraged in the context of multiobjective optimization to improve GCM simulation of the tropical Pacific climate, focusing on seasonal precipitation, column water vapor, and skin temperature. An evolutionary algorithm is used to solve for Pareto fronts, which are surfaces in objective function space along which trade-offs in GCM performance occur. This approach allows the modeler to visualize trade-offs quickly and identify the physics at play. In some cases, Pareto fronts are small, implying that trade-offs are minimal, optimal parameter value choices are more straightforward, and the GCM is well-functioning. In all cases considered here, the control run was found not to be Pareto-optimal (i.e., not on the front), highlighting an opportunity for model improvement through objectively informed parameter selection. Taylor diagrams illustrate that these improvements occur primarily in field magnitude, not spatial correlation, and they show that specific parameter updates can improve fields fundamental to tropical moist processes—namely precipitation and skin temperature—without significantly impacting others. These results provide an example of how basic elements of multiobjective optimization can facilitate pragmatic GCM tuning processes.
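
    The basic building block of such an analysis is extracting the non-dominated (Pareto-optimal) points from a set of objective-function evaluations. A small sketch, assuming all objectives are to be minimized and using hypothetical values:

        import numpy as np

        def pareto_front(F):
            """F: (n_points, n_objectives) array of objective values (lower is better).
            Returns a boolean mask marking the non-dominated points."""
            n = F.shape[0]
            mask = np.ones(n, dtype=bool)
            for i in range(n):
                # point j dominates i if it is <= in every objective and < in at least one
                dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
                if dominated.any():
                    mask[i] = False
            return mask

        F = np.array([[1.0, 5.0], [2.0, 2.0], [3.0, 4.0], [4.0, 1.0]])
        print(pareto_front(F))   # [ True  True False  True ]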

  3. Assessing the performance of eight real-time updating models and procedures for the Brosna River

    NASA Astrophysics Data System (ADS)

    Goswami, M.; O'Connor, K. M.; Bhattarai, K. P.; Shamseldin, A. Y.

    2005-10-01

    The flow forecasting performance of eight updating models, incorporated in the Galway River Flow Modelling and Forecasting System (GFMFS), was assessed using daily data (rainfall, evaporation and discharge) of the Irish Brosna catchment (1207 km2), considering their one to six days lead-time discharge forecasts. The Perfect Forecast of Input over the Forecast Lead-time scenario was adopted, where required, in place of actual rainfall forecasts. The eight updating models were: (i) the standard linear Auto-Regressive (AR) model, applied to the forecast errors (residuals) of a simulation (non-updating) rainfall-runoff model; (ii) the Neural Network Updating (NNU) model, also using such residuals as input; (iii) the Linear Transfer Function (LTF) model, applied to the simulated and the recently observed discharges; (iv) the Non-linear Auto-Regressive eXogenous-Input Model (NARXM), also a neural network-type structure, but having wide options of using recently observed values of one or more of the three data series, together with non-updated simulated outflows, as inputs; (v) the Parametric Simple Linear Model (PSLM), of LTF-type, using recent rainfall and observed discharge data; (vi) the Parametric Linear perturbation Model (PLPM), also of LTF-type, using recent rainfall and observed discharge data, (vii) n-AR, an AR model applied to the observed discharge series only, as a naïve updating model; and (viii) n-NARXM, a naive form of the NARXM, using only the observed discharge data, excluding exogenous inputs. The five GFMFS simulation (non-updating) models used were the non-parametric and parametric forms of the Simple Linear Model and of the Linear Perturbation Model, the Linearly-Varying Gain Factor Model, the Artificial Neural Network Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) model. As the SMAR model performance was found to be the best among these models, in terms of the Nash-Sutcliffe R2 value, both in calibration and in verification, the simulated outflows of this model only were selected for the subsequent exercise of producing updated discharge forecasts. All the eight forms of updating models for producing lead-time discharge forecasts were found to be capable of producing relatively good lead-1 (1-day ahead) forecasts, with R2 values almost 90% or above. However, for higher lead time forecasts, only three updating models, viz., NARXM, LTF, and NNU, were found to be suitable, with lead-6 values of R2 about 90% or higher. Graphical comparisons were made of the lead-time forecasts for the two largest floods, one in the calibration period and the other in the verification period.
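
    As an illustration of the simplest of these updating schemes, an autoregressive model fitted to the simulation-model residuals can be used to correct the lead-time forecasts. A minimal AR(1) sketch with hypothetical discharge arrays:

        import numpy as np

        def ar1_updated_forecast(q_obs, q_sim, q_sim_future, max_lead=6):
            """Correct simulation-model forecasts with an AR(1) model of the residuals.

            q_obs, q_sim  : observed and simulated discharge up to the forecast origin
            q_sim_future  : simulated (non-updated) forecasts for leads 1..max_lead
            """
            e = q_obs - q_sim                                    # historical residuals
            phi = np.sum(e[1:] * e[:-1]) / np.sum(e[:-1] ** 2)   # lag-1 AR coefficient
            leads = np.arange(1, max_lead + 1)
            return q_sim_future[:max_lead] + (phi ** leads) * e[-1]

        q_obs = np.array([10.0, 12.0, 15.0, 14.0, 13.0])
        q_sim = np.array([ 9.0, 11.5, 14.0, 14.5, 12.0])
        q_sim_future = np.array([11.0, 10.5, 10.0, 9.8, 9.5, 9.3])
        print(ar1_updated_forecast(q_obs, q_sim, q_sim_future))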

  4. Proceedings of the Space Surveillance Workshop (11th) Held in Lexington, Massachusetts on 30 March-1 April 1993. Volume 1

    DTIC Science & Technology

    1993-04-01

    are so close together, there is a great deal of mistagged metric data from the SPACETRACK sensors on these objects. The resulting orbital element sets ...including an attempt to combine U.S. Space Command element sets for each Lageos-2 related object in orbit with DSN angle data to determine the actual... Predict error at next observation - Maintain track to minimize reacquisition load - Estimate orbital element sets - Update time for next observation

  5. An Online Risk Monitor System (ORMS) to Increase Safety and Security Levels in Industry

    NASA Astrophysics Data System (ADS)

    Zubair, M.; Rahman, Khalil Ur; Hassan, Mehmood Ul

    2013-12-01

    The main idea of this research is to develop an Online Risk Monitor System (ORMS) based on Living Probabilistic Safety Assessment (LPSA). The article highlights the essential features and functions of ORMS. The basic models and modules such as, Reliability Data Update Model (RDUM), running time update, redundant system unavailability update, Engineered Safety Features (ESF) unavailability update and general system update have been described in this study. ORMS not only provides quantitative analysis but also highlights qualitative aspects of risk measures. ORMS is capable of automatically updating the online risk models and reliability parameters of equipment. ORMS can support in the decision making process of operators and managers in Nuclear Power Plants.
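
    A common ingredient of such a Reliability Data Update Model is conjugate Bayesian updating of component failure rates from newly accumulated operating experience. A generic sketch (gamma prior, Poisson failure counts), not the specific ORMS implementation; the numbers are hypothetical:

        from dataclasses import dataclass

        @dataclass
        class GammaFailureRate:
            """Gamma(alpha, beta) prior/posterior for a failure rate (failures per hour)."""
            alpha: float
            beta: float      # effective operating hours

            def update(self, failures: int, hours: float) -> "GammaFailureRate":
                # Poisson likelihood for the observed failure count => conjugate gamma posterior
                return GammaFailureRate(self.alpha + failures, self.beta + hours)

            @property
            def mean(self) -> float:
                return self.alpha / self.beta

        prior = GammaFailureRate(alpha=0.5, beta=1.0e5)     # hypothetical generic prior
        posterior = prior.update(failures=2, hours=4.0e4)   # plant-specific operating evidence
        print(f"updated mean failure rate: {posterior.mean:.2e} per hour")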

  6. GALACTIC CHEMICAL EVOLUTION: THE IMPACT OF THE 13C-POCKET STRUCTURE ON THE s-PROCESS DISTRIBUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bisterzo, S.; Travaglio, C.; Wiescher, M.

    2017-01-20

    The solar s-process abundances have been analyzed in the framework of a Galactic Chemical Evolution (GCE) model. The aim of this work is to implement the study by Bisterzo et al., who investigated the effect of one of the major uncertainties of asymptotic giant branch (AGB) yields, the internal structure of the 13C pocket. We present GCE predictions of s-process elements computed with additional tests in the light of suggestions provided in recent publications. The analysis is extended to different metallicities, by comparing GCE results and updated spectroscopic observations of unevolved field stars. We verify that the GCE predictions obtained with different tests may represent, on average, the evolution of selected neutron-capture elements in the Galaxy. The impact of an additional weak s-process contribution from fast-rotating massive stars is also explored.

  7. Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2016-01-01

    Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.

  8. Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frinks, Neal T.

    2016-01-01

    Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.

  9. Image updating for brain deformation compensation in tumor resection

    NASA Astrophysics Data System (ADS)

    Fan, Xiaoyao; Ji, Songbai; Olson, Jonathan D.; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2016-03-01

    Preoperative magnetic resonance images (pMR) are typically used for intraoperative guidance in image-guided neurosurgery, the accuracy of which can be significantly compromised by brain deformation. Biomechanical finite element models (FEM) have been developed to estimate whole-brain deformation and produce model-updated MR (uMR) that compensates for brain deformation at different surgical stages. Early stages of surgery, such as after craniotomy and after dural opening, have been well studied, whereas later stages after tumor resection begins remain challenging. In this paper, we present a method to simulate tumor resection by incorporating data from intraoperative stereovision (iSV). The amount of tissue resection was estimated from iSV using a "trial-and-error" approach, and the cortical shift was measured from iSV through a surface registration method using projected images and an optical flow (OF) motion tracking algorithm. The measured displacements were employed to drive the biomechanical brain deformation model, and the estimated whole-brain deformation was subsequently used to deform pMR and produce uMR. We illustrate the method using one patient example. The results show that the uMR aligned well with iSV and the overall misfit between model estimates and measured displacements was 1.46 mm. The overall computational time was ~5 min, including iSV image acquisition after resection, surface registration, modeling, and image warping, with minimal interruption to the surgical flow. Furthermore, we compare uMR against intraoperative MR (iMR) that was acquired following iSV acquisition.

  10. Planned updates and refinements to the central valley hydrologic model, with an emphasis on improving the simulation of land subsidence in the San Joaquin Valley

    USGS Publications Warehouse

    Faunt, C.C.; Hanson, R.T.; Martin, P.; Schmid, W.

    2011-01-01

    California's Central Valley has been one of the most productive agricultural regions in the world for more than 50 years. To better understand the groundwater availability in the valley, the U.S. Geological Survey (USGS) developed the Central Valley hydrologic model (CVHM). Because of recent water-level declines and renewed subsidence, the CVHM is being updated to better simulate the geohydrologic system. The CVHM updates and refinements can be grouped into two general categories: (1) model code changes and (2) data updates. The CVHM updates and refinements will require that the model be recalibrated. The updated CVHM will provide a detailed transient analysis of changes in groundwater availability and flow paths in relation to climatic variability, urbanization, stream flow, and changes in irrigated agricultural practices and crops. The updated CVHM is particularly focused on more accurately simulating the locations and magnitudes of land subsidence. The intent of the updated CVHM is to help scientists better understand the availability and sustainability of water resources and the interaction of groundwater levels with land subsidence. ?? 2011 ASCE.

  11. Planned updates and refinements to the Central Valley hydrologic model with an emphasis on improving the simulation of land subsidence in the San Joaquin Valley

    USGS Publications Warehouse

    Faunt, Claudia C.; Hanson, Randall T.; Martin, Peter; Schmid, Wolfgang

    2011-01-01

    California's Central Valley has been one of the most productive agricultural regions in the world for more than 50 years. To better understand the groundwater availability in the valley, the U.S. Geological Survey (USGS) developed the Central Valley hydrologic model (CVHM). Because of recent water-level declines and renewed subsidence, the CVHM is being updated to better simulate the geohydrologic system. The CVHM updates and refinements can be grouped into two general categories: (1) model code changes and (2) data updates. The CVHM updates and refinements will require that the model be recalibrated. The updated CVHM will provide a detailed transient analysis of changes in groundwater availability and flow paths in relation to climatic variability, urbanization, stream flow, and changes in irrigated agricultural practices and crops. The updated CVHM is particularly focused on more accurately simulating the locations and magnitudes of land subsidence. The intent of the updated CVHM is to help scientists better understand the availability and sustainability of water resources and the interaction of groundwater levels with land subsidence.

  12. Deflection Analysis of the Space Shuttle External Tank Door Drive Mechanism

    NASA Technical Reports Server (NTRS)

    Tosto, Michael A.; Trieu, Bo C.; Evernden, Brent A.; Hope, Drew J.; Wong, Kenneth A.; Lindberg, Robert E.

    2008-01-01

    Upon observing an abnormal closure of the Space Shuttle's External Tank Doors (ETD), a dynamic model was created in MSC/ADAMS to conduct deflection analyses of the Door Drive Mechanism (DDM). For a similar analysis, the traditional approach would be to construct a full finite element model of the mechanism. The purpose of this paper is to describe an alternative approach that models the flexibility of the DDM using a lumped parameter approximation to capture the compliance of individual parts within the drive linkage. This approach allows for rapid construction of a dynamic model in a time-critical setting, while still retaining the appropriate equivalent stiffness of each linkage component. As a validation of these equivalent stiffnesses, finite element analysis (FEA) was used to iteratively update the model towards convergence. Following this analysis, deflections recovered from the dynamic model can be used to calculate stress and classify each component's deformation as either elastic or plastic. Based on the modeling assumptions used in this analysis and the maximum input forcing condition, two components in the DDM show a factor of safety less than or equal to 0.5. However, to accurately evaluate the induced stresses, additional mechanism rigging information would be necessary to characterize the input forcing conditions. This information would also allow for the classification of stresses as either elastic or plastic.

  13. Collective Behaviors in Spatially Extended Systems with Local Interactions and Synchronous Updating

    NASA Astrophysics Data System (ADS)

    ChatÉ, H.; Manneville, P.

    1992-01-01

    Assessing the extent to which dynamical systems with many degrees of freedom can be described within a thermodynamics formalism is a problem that currently attracts much attention. In this context, synchronously updated regular lattices of identical, chaotic elements with local interactions are promising models for which statistical mechanics may be hoped to provide some insights. This article presents a large class of cellular automata rules and coupled map lattices of the above type in space dimensions d = 2 to 6. Such simple models can be approached by a mean-field approximation which usually reduces the dynamics to that of a map governing the evolution of some extensive density. While this approximation is exact in the d = ∞ limit, where macroscopic variables must display the time-dependent behavior of the mean-field map, basic intuition from equilibrium statistical mechanics rules out any such behavior in low-dimensional systems, since it would involve the collective motion of locally disordered elements. The models studied are chosen to be as close as possible to mean-field conditions, i.e., rather high space dimension, large connectivity, and equal-weight coupling between sites. While the mean-field evolution is never observed, a new type of non-trivial collective behavior is found, at odds with the predictions of equilibrium statistical mechanics. Both in the cellular automata models and in the coupled map lattices, macroscopic variables frequently display a non-transient, time-dependent, low-dimensional dynamics emerging out of local disorder. Striking examples are period 3 cycles in two-state cellular automata and a Hopf bifurcation for a d = 5 lattice of coupled logistic maps. An extensive account of the phenomenology is given, including a catalog of behaviors, classification tables for the cellular automata rules, and bifurcation diagrams for the coupled map lattices. The observed underlying dynamics is accompanied by an intrinsic quasi-Gaussian noise (stemming from the local disorder) which disappears in the infinite-size limit. The collective behaviors constitute a robust phenomenon, resisting external noise, small changes in the local dynamics, and modifications of the initial and boundary conditions. Synchronous updating, high space dimension and the regularity of connections are shown to be crucial ingredients in the subtle build-up of correlations giving rise to the collective motion. The discussion stresses the need for a theoretical understanding that neither equilibrium statistical mechanics nor higher-order mean-field approximations are able to provide.

  14. Iterative updating of model error for Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Calvetti, Daniela; Dunlop, Matthew; Somersalo, Erkki; Stuart, Andrew

    2018-02-01

    In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only limited full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data is finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.
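    As a rough illustration of the iteration described above, the following sketch runs the idea in a small linear Gaussian problem: the modeling error between a hypothetical accurate operator (A_full) and its reduced substitute (A_red) is re-estimated from samples of the current posterior and folded back into the observation model on each pass. It is a schematic of the general approach, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical accurate forward operator and its cheaper approximation.
n, m = 20, 15
A_full = rng.standard_normal((m, n))
A_red = A_full + 0.1 * rng.standard_normal((m, n))

x_true = rng.standard_normal(n)
noise_std = 0.05
y = A_full @ x_true + noise_std * rng.standard_normal(m)

prior_cov = np.eye(n)                      # Gaussian prior x ~ N(0, I)
noise_cov = noise_std**2 * np.eye(m)

def gaussian_posterior(A, y, obs_cov, prior_cov, shift):
    """Posterior mean/covariance for y = A x + shift + noise (all Gaussian)."""
    S = A @ prior_cov @ A.T + obs_cov
    K = prior_cov @ A.T @ np.linalg.inv(S)
    mean = K @ (y - shift)
    cov = prior_cov - K @ A @ prior_cov
    return mean, cov

# Iteratively re-estimate the modeling-error statistics from posterior samples
# and fold them back into the observation model.
err_mean, err_cov = np.zeros(m), np.zeros((m, m))
for it in range(5):
    mean, cov = gaussian_posterior(A_red, y, noise_cov + err_cov, prior_cov, err_mean)
    cov = 0.5 * (cov + cov.T)                  # symmetrize against round-off
    samples = rng.multivariate_normal(mean, cov, size=200)
    errors = samples @ (A_full - A_red).T      # modeling error on the ensemble
    err_mean = errors.mean(axis=0)
    err_cov = np.cov(errors, rowvar=False)
    print(it, np.linalg.norm(mean - x_true))
```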

  15. Estimation of beam material random field properties via sensitivity-based model updating using experimental frequency response functions

    NASA Astrophysics Data System (ADS)

    Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.

    2018-03-01

    Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
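    A minimal numerical sketch of the Karhunen-Loève step described above is given below: a 1D field (for example, bending rigidity along the beam) is discretized, an exponential covariance kernel is assumed purely for illustration, and the truncated eigen-expansion yields the discretized coefficients that a sensitivity-based updating scheme would estimate.

```python
import numpy as np

def kl_expansion(x, sigma=1.0, corr_len=0.2, n_terms=6):
    """Discrete Karhunen-Loeve expansion of a 1D random field with an
    assumed exponential covariance kernel (illustrative choice only)."""
    C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1][:n_terms]      # keep the dominant modes
    return vals[order], vecs[:, order]

# Field of, e.g., bending-rigidity fluctuations along a unit-length beam.
x = np.linspace(0.0, 1.0, 101)
vals, vecs = kl_expansion(x)

# One realization: mean value plus truncated KL series with standard normal
# coefficients xi_k (these coefficients are what model updating would estimate).
rng = np.random.default_rng(1)
xi = rng.standard_normal(len(vals))
field = 1.0 + vecs @ (np.sqrt(vals) * xi)
print(field[:5])
```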

  16. Summary of Expansions, Updates, and Results in GREET 2017 Suite of Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Michael; Elgowainy, Amgad; Han, Jeongwoo

    This report provides a technical summary of the expansions and updates to the 2017 release of Argonne National Laboratory’s Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET®) model, including references and links to key technical documents related to these expansions and updates. The GREET 2017 release includes an updated version of the GREET1 (the fuel-cycle GREET model) and GREET2 (the vehicle-cycle GREET model), both in the Microsoft Excel platform and in the GREET.net modeling platform. Figure 1 shows the structure of the GREET Excel modeling platform. The .net platform integrates all GREET modules together seamlessly.

  17. Incorporation of an Energy Equation into a Pulsed Inductive Thruster Performance Model

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A.; Reneau, Jarred P.; Sankaran, Kameshwaran

    2011-01-01

    A model for pulsed inductive plasma acceleration containing an energy equation to account for the various sources and sinks in such devices is presented. The model consists of a set of circuit equations coupled to an equation of motion and energy equation for the plasma. The latter two equations are obtained for the plasma current sheet by treating it as a one-element finite volume, integrating the equations over that volume, and then matching known terms or quantities already calculated in the model to the resulting current sheet-averaged terms in the equations. Calculations showing the time-evolution of the various sources and sinks in the system are presented to demonstrate the efficacy of the model, with two separate resistivity models employed to show an example of how the plasma transport properties can affect the calculation. While neither resistivity model is fully accurate, the demonstration shows that it is possible within this modeling framework to time-accurately update various plasma parameters.

  18. Possibilities of the particle finite element method for fluid-soil-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Oñate, Eugenio; Celigueta, Miguel Angel; Idelsohn, Sergio R.; Salazar, Fernando; Suárez, Benjamín

    2011-09-01

    We present some developments in the particle finite element method (PFEM) for analysis of complex coupled problems in mechanics involving fluid-soil-structure interaction (FSSI). The PFEM uses an updated Lagrangian description to model the motion of nodes (particles) in both the fluid and the solid domains (the latter including soil/rock and structures). A mesh connects the particles (nodes) defining the discretized domain where the governing equations for each of the constituent materials are solved as in the standard FEM. The stabilization needed to deal with an incompressible continuum is introduced via the finite calculus method. An incremental iterative scheme for the solution of the nonlinear transient coupled FSSI problem is described. The procedure to model frictional contact conditions and material erosion at fluid-solid and solid-solid interfaces is described. We present several examples of application of the PFEM to solve FSSI problems such as the motion of rocks by water streams, the erosion of a river bed adjacent to a bridge foundation, the stability of breakwaters and constructions under sea waves and the study of landslides.

  19. Iterated reaction graphs: simulating complex Maillard reaction pathways.

    PubMed

    Patel, S; Rabone, J; Russell, S; Tissen, J; Klaffke, W

    2001-01-01

    This study investigates a new method of simulating a complex chemical system including feedback loops and parallel reactions. The practical purpose of this approach is to model the actual reactions that take place in the Maillard process, a set of food browning reactions, in sufficient detail to be able to predict the volatile composition of the Maillard products. The developed framework, called iterated reaction graphs, consists of two main elements: a soup of molecules and a reaction base of Maillard reactions. An iterative process loops through the reaction base, taking reactants from and feeding products back to the soup. This produces a reaction graph, with molecules as nodes and reactions as arcs. The iterated reaction graph is updated and validated by comparing output with the main products found by classical gas-chromatographic/mass spectrometric analysis. To ensure a realistic output and convergence to desired volatiles only, the approach contains a number of novel elements: rate kinetics are treated as reaction probabilities; only a subset of the true chemistry is modeled; and the reactions are blocked into groups.
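    The iterative soup/reaction-base loop described above can be sketched in a few lines; the species, stoichiometry, and probabilities below are invented placeholders rather than real Maillard chemistry, and the reaction graph is recorded simply as a list of fired reactions.

```python
import random

# Toy reaction base: (reactants, products, firing probability). The species
# names and probabilities are purely illustrative, not real Maillard chemistry.
reaction_base = [
    (("glucose", "glycine"), ("amadori",), 0.8),
    (("amadori",), ("deoxyosone", "water"), 0.5),
    (("deoxyosone",), ("furfural", "water"), 0.3),
]

soup = {"glucose": 10, "glycine": 10}          # molecule -> count
graph = []                                     # arcs: (reactants, products)

random.seed(0)
for _ in range(50):                            # iterate over the reaction base
    for reactants, products, prob in reaction_base:
        if all(soup.get(r, 0) > 0 for r in reactants) and random.random() < prob:
            for r in reactants:                # take reactants from the soup
                soup[r] -= 1
            for p in products:                 # feed products back to the soup
                soup[p] = soup.get(p, 0) + 1
            graph.append((reactants, products))

print(soup)
print(len(graph), "reaction events recorded")
```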

  20. Luminance compensation for AMOLED displays using integrated MIS sensors

    NASA Astrophysics Data System (ADS)

    Vygranenko, Yuri; Fernandes, Miguel; Louro, Paula; Vieira, Manuela

    2017-05-01

    Active-matrix organic light-emitting diodes (AMOLEDs) are ideal for future TV applications due to their ability to faithfully reproduce real images. However, pixel luminance can be affected by instability of driver TFTs and the aging effect in OLEDs. This paper reports on a pixel driver utilizing a metal-insulator-semiconductor (MIS) sensor for luminance control of the OLED element. In the proposed pixel architecture for bottom-emission AMOLEDs, the embedded MIS sensor shares the same layer stack with back-channel-etched a-Si:H TFTs to maintain the fabrication simplicity. The pixel design for a large-area HD display is presented. The external electronics performs image processing to modify incoming video using correction parameters for each pixel in the backplane, and also sensor data processing to update the correction parameters. The luminance adjusting algorithm is based on realistic models for pixel circuit elements to predict the relation between the programming voltage and OLED luminance. SPICE modeling of the sensing part of the backplane is performed to demonstrate its feasibility. Details on the pixel circuit functionality including the sensing and programming operations are also discussed.

  1. Assessment of the Structural Conditions of the San Clemente a Vomano Abbey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedettini, Francesco; Alaggio, Rocco; Fusco, Felice

    2008-07-08

    The simultaneous use of accurate Finite Element (FE) modeling, dynamical tests, model updating and nonlinear analysis constitutes the integrated approach used by the authors to assess the structural conditions and the seismic vulnerability of an historical masonry structure: the Abbey Church of San Clemente al Vomano, situated in the Notaresco territory (TE, Italy), commissioned by Ermengarda, daughter of the Emperor Ludovico II, and built at the end of the IX century together with a monastery to host a monastic community. Dynamical tests 'in operational conditions' and modal identification have been used to perform the FE model validation. Both a simple and direct method, the kinematic analysis applied to meaningful sub-structures, and a nonlinear 3D dynamic analysis conducted using the FE model have been used to forecast the seismic performance of the Church.

  2. Study of the influence of selected anisotropic parameter in the Barlat's model on the drawpiece shape

    NASA Astrophysics Data System (ADS)

    Kaldunski, Pawel; Kukielka, Leon; Patyk, Radoslaw; Kulakowska, Agnieszka; Bohdal, Lukasz; Chodor, Jaroslaw; Kukielka, Krzysztof

    2018-05-01

    In this paper, the numerical analysis and computer simulation of the deep drawing process are presented. The incremental model of the process, in an updated Lagrangian formulation accounting for geometrical and physical nonlinearity, has been evaluated by variational and finite element methods. Frederic Barlat's model, which takes into consideration the anisotropy of materials in three principal and six tangent directions, has been used. The application developed in the Ansys/Ls-Dyna program allows a complex step-by-step analysis and prediction of the shape, dimensions, and state of stress and strain of the drawpiece. The paper presents the influence of a selected anisotropic parameter in Barlat's model on the drawpiece shape, which includes: height, sheet thickness and maximum drawing force. The important factors determining the proper formation of the drawpiece and the ways of determining them have been described.

  3. Polar View Snow Service- Operational Snow Cover Mapping for Downstream Runoff Modeling and Hydropower Predictions

    NASA Astrophysics Data System (ADS)

    Bach, Heike; Appel, Florian; Rust, Felix; Mauser, Wolfram

    2010-12-01

    Information on snow cover and snow properties is important for hydrology and runoff modelling. Frequent updates of snow cover observation, especially for areas characterized by short-term snow dynamics, can help to improve water balance and discharge calculations. Within the GMES service element Polar View, VISTA has offered a snow mapping service for Central Europe for several years [1, 2]. We outline the use of this near-real-time product for hydrological applications in the Alpine environment. In particular we discuss the integration of the Polar View product into a physically based hydrological model (PROMET). This allows not only the provision of snow water equivalent values, but also enhances river runoff modelling and its use in hydropower energy yield prediction. The GMES snow products of Polar View are thus used in a downstream service for water resources management, providing information services for renewable energy suppliers and energy traders.

  4. Updates to the Demographic and Spatial Allocation Models to ...

    EPA Pesticide Factsheets

    EPA announced the availability of the final report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2). This update furthered land change modeling by providing nationwide housing development scenarios up to 2100. This newest version includes updated population and land use data sets and addresses limitations identified in ICLUS v1 in both the migration and spatial allocation models. The companion user guide (Final Report) describes the development of ICLUS v2 and the updates that were made to the original data sets and the demographic and spatial allocation models. The GIS tool enables users to run SERGoM with the population projections developed for the ICLUS project and allows users to modify the spatial allocation of housing density across the landscape.

  5. The Dfam database of repetitive DNA families.

    PubMed

    Hubley, Robert; Finn, Robert D; Clements, Jody; Eddy, Sean R; Jones, Thomas A; Bao, Weidong; Smit, Arian F A; Wheeler, Travis J

    2016-01-04

    Repetitive DNA, especially that due to transposable elements (TEs), makes up a large fraction of many genomes. Dfam is an open access database of families of repetitive DNA elements, in which each family is represented by a multiple sequence alignment and a profile hidden Markov model (HMM). The initial release of Dfam, featured in the 2013 NAR Database Issue, contained 1143 families of repetitive elements found in humans, and was used to produce more than 100 Mb of additional annotation of TE-derived regions in the human genome, with improved speed. Here, we describe recent advances, most notably expansion to 4150 total families including a comprehensive set of known repeat families from four new organisms (mouse, zebrafish, fly and nematode). We describe improvements to coverage, and to our methods for identifying and reducing false annotation. We also describe updates to the website interface. The Dfam website has moved to http://dfam.org. Seed alignments, profile HMMs, hit lists and other underlying data are available for download. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. A User's Guide to the Zwikker-Kosten Transmission Line Code (ZKTL)

    NASA Technical Reports Server (NTRS)

    Kelly, J. J.; Abu-Khajeel, H.

    1997-01-01

    This user's guide documents updates to the Zwikker-Kosten Transmission Line Code (ZKTL). This code was developed for analyzing new liner concepts developed to provide increased sound absorption. Contiguous arrays of multi-degree-of-freedom (MDOF) liner elements serve as the model for these liner configurations, and Zwikker and Kosten's theory of sound propagation in channels is used to predict the surface impedance. Transmission matrices for the various liner elements incorporate both analytical and semi-empirical methods. This allows standard matrix techniques to be employed in the code to systematically calculate the composite impedance due to the individual liner elements. The ZKTL code consists of four independent subroutines: 1. Single channel impedance calculation - linear version (SCIC) 2. Single channel impedance calculation - nonlinear version (SCICNL) 3. Multi-channel, multi-segment, multi-layer impedance calculation - linear version (MCMSML) 4. Multi-channel, multi-segment, multi-layer impedance calculation - nonlinear version (MCMSMLNL) Detailed examples, comments, and explanations for each liner impedance computation module are included. Also contained in the guide are depictions of the interactive execution, input files and output files.
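    The transmission-matrix bookkeeping underlying such a code can be illustrated with a short sketch: each channel segment contributes a 2x2 acoustic transfer matrix, the matrices are cascaded, and the composite surface impedance follows from a rigid-backing condition. The lossless segment matrix used here is a simplification for illustration; Zwikker and Kosten's channel theory would supply complex, frequency-dependent propagation constants and characteristic impedances.

```python
import numpy as np

def segment_matrix(k, L, Z0):
    """2x2 acoustic transmission matrix of a uniform channel segment
    (lossless form used purely for illustration)."""
    return np.array([[np.cos(k * L), 1j * Z0 * np.sin(k * L)],
                     [1j * np.sin(k * L) / Z0, np.cos(k * L)]])

def surface_impedance(segments):
    """Cascade segment matrices and return the input impedance for a
    rigid (zero-velocity) backing: Z = T11 / T21."""
    T = np.eye(2, dtype=complex)
    for k, L, Z0 in segments:
        T = T @ segment_matrix(k, L, Z0)
    return T[0, 0] / T[1, 0]

# Hypothetical two-segment liner channel evaluated at one frequency.
c, rho = 343.0, 1.21
f = 1000.0
k = 2 * np.pi * f / c
Z0 = rho * c
print(surface_impedance([(k, 0.02, Z0), (k, 0.01, 0.5 * Z0)]))
```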

  7. Impeller deflection and modal finite element analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Nathan A.

    2013-10-01

    Deflections of an impeller due to centripetal forces are calculated using finite element analysis. The lateral, or out of plane, deflections are an important design consideration for this particular impeller because it incorporates an air bearing with critical gap tolerances. The target gap distance is approximately 10 microns at a rotational velocity of 2500 rpm. The centripetal forces acting on the impeller cause it to deflect in a concave fashion, decreasing the initial gap distance as a function of radial position. This deflection is characterized for a previous and updated impeller design for comparative purposes. The impact of design options such as material selection, geometry dimensions, and operating rotational velocity are also explored, followed by a sensitivity study with these parameters bounded by specific design values. A modal analysis is also performed to calculate the impeller's natural frequencies which are desired to be avoided during operation. The finite element modeling techniques continue to be exercised by the impeller design team to address specific questions and evaluate conceptual designs, some of which are included in the Appendix.

  8. Quality Analysis of Open Street Map Data

    NASA Astrophysics Data System (ADS)

    Wang, M.; Li, Q.; Hu, Q.; Zhou, M.

    2013-05-01

    Crowd-sourced geographic data are open-source geographic data contributed by many non-professionals and provided to the public. Typical crowd-sourced geographic data include GPS track data such as OpenStreetMap, collaborative map data such as Wikimapia, social websites such as Twitter and Facebook, POIs tagged by Jiepang users, and so on. After processing, these data provide canonical geographic information for the public. Compared with conventional geographic data collection and update methods, crowd-sourced geographic data from non-professionals offer large data volume, high currency, abundant information and low cost, and have become a research hotspot in international geographic information science in recent years. Large-volume crowd-sourced geographic data with high currency provide a new solution for geospatial database updating, although the quality of data obtained from non-professionals must first be assessed. In this paper, a quality analysis model for OpenStreetMap crowd-sourced geographic data is proposed. Firstly, a quality analysis framework is designed based on an analysis of the characteristics of OSM data. Secondly, a quality assessment model for OSM data is presented using three quality elements: completeness, thematic accuracy and positional accuracy. Finally, taking the OSM data of Wuhan as an example, the paper analyses and assesses the quality of OSM data against the 2011 version of a navigation map as reference. The result shows that the high-level roads and urban traffic network of the OSM data have high positional accuracy and completeness, so that these OSM data can be used for updating the urban road network database.
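    Two of the three quality elements mentioned above lend themselves to simple length- and distance-based measures. The sketch below, with toy coordinates, shows one possible way to compute a length-based completeness ratio and a mean positional offset for matched vertices; it is an illustration of the idea rather than the paper's exact assessment model.

```python
import math

def length(polyline):
    """Planar length of a polyline given as [(x, y), ...] coordinates."""
    return sum(math.dist(a, b) for a, b in zip(polyline, polyline[1:]))

def completeness(osm_roads, ref_roads):
    """Length-based completeness: total OSM length over reference length."""
    return sum(map(length, osm_roads)) / sum(map(length, ref_roads))

def positional_accuracy(osm_pts, ref_pts):
    """Mean offset between matched OSM and reference vertices (the point
    pairing is assumed to be given; RMSE-style measures are equally common)."""
    return sum(math.dist(a, b) for a, b in zip(osm_pts, ref_pts)) / len(osm_pts)

# Toy example with projected coordinates in metres (purely illustrative).
osm = [[(0, 0), (100, 0)], [(0, 50), (80, 50)]]
ref = [[(0, 0), (100, 0)], [(0, 50), (100, 50)]]
print(completeness(osm, ref))
print(positional_accuracy([(0, 0), (100, 2)], [(0, 0), (100, 0)]))
```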

  9. Theoretical Basis and Correct Explanation of the Periodic System: Review and Update

    ERIC Educational Resources Information Center

    Schwarz, W. H. Eugen; Rich, Ronald L.

    2010-01-01

    Long-standing questions on the theoretical basis of the periodic system have been answered in recent years. A specific type of periodicity is imposed on all elements by the main groups just before and after the noble gases. The upper np shells of these elements are unique because of their stabilized energies and the large gaps to the next…

  10. ARTS. Accountability Reporting and Tracking System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, J.F.; Faccio, R.M.

    ARTS is a micro based prototype of the data elements, screens, and information processing rules that apply to the Accountability Reporting Program. The system focuses on the Accountability Event. The Accountability Event is an occurrence of incurring avoidable costs. The system must be able to CRUD (Create, Retrieve, Update, Delete) instances of the Accountability Event. Additionally, the system must provide for a review committee to update the `event record` with findings and determination information. Lastly, the system must provide for financial representatives to perform a cost reporting process.

  11. Development and applications of a flat triangular element for thin laminated shells

    NASA Astrophysics Data System (ADS)

    Mohan, P.

    Finite element analysis of thin laminated shells using a three-noded flat triangular shell element is presented. The flat shell element is obtained by combining the Discrete Kirchhoff Theory (DKT) plate bending element and a membrane element similar to the Allman element, but derived from the Linear Strain Triangular (LST) element. The major drawback of the DKT plate bending element is that the transverse displacement is not explicitly defined within the interior of the element. In the present research, free vibration analysis is performed both by using a lumped mass matrix and a so called consistent mass matrix, obtained by borrowing shape functions from an existing element, in order to compare the performance of the two methods. Several numerical examples are solved to demonstrate the accuracy of the formulation for both small and large rotation analysis of laminated plates and shells. The results are compared with those available in the existing literature and those obtained using the commercial finite element package ABAQUS and are found to be in good agreement. The element is employed for two main applications involving large flexible structures. The first application is the control of thermal deformations of a spherical mirror segment, which is a segment of a multi-segmented primary mirror used in a space telescope. The feasibility of controlling the surface distortions of the mirror segment due to arbitrary thermal fields, using discrete and distributed actuators, is studied. The second application is the analysis of an inflatable structure, being considered by the US Army for housing vehicles and personnel. The updated Lagrangian formulation of the flat shell element has been developed primarily for the nonlinear analysis of the tent structure, since such a structure is expected to undergo large deformations and rotations under the action of environmental loads like the wind and snow loads. The follower effects of the pressure load have been included in the updated Lagrangian formulation of the flat shell element and have been validated using standard examples in the literature involving deformation-dependent pressure loads. The element can be used to obtain the nonlinear response of the tent structure under wind and snow loads. (Abstract shortened by UMI.)

  12. Update Money Management Units in Terms of New Legislation.

    ERIC Educational Resources Information Center

    Manzer, John P.

    1981-01-01

    Provides basic business teachers with a current analysis of the rapidly changing topic of financial institutions and related functions. Elements discussed include recent financial legislation and students' money management decisions. (CT)

  13. Rare Earth Geochemistry of Rock Core form WY Reservoirs

    DOE Data Explorer

    Quillinan, Scott; Bagdonnas, Davin; McLaughlin, J. Fred; Nye, Charles

    2016-10-01

    These data include major, minor, trace and rare earth element concentration of geologic formations in Wyoming oil and gas fields. *Note - Link below contains updated version of spreadsheet (6/14/2017)

  14. Carbonatites of the World, Explored Deposits of Nb and REE - Database and Grade and Tonnage Models

    USGS Publications Warehouse

    Berger, Vladimir I.; Singer, Donald A.; Orris, Greta J.

    2009-01-01

    This report is based on published tonnage and grade data on 58 Nb- and rare-earth-element (REE)-bearing carbonatite deposits that are mostly well explored and are partially mined or contain resources of these elements. The deposits represent only a part of the known 527 carbonatites around the world, but they are characterized by reliable quantitative data on ore tonnages and grades of niobium and REE. Grade and tonnage models are an important component of mineral resource assessments. Carbonatites present one of the main natural sources of niobium and rare-earth elements, the economic importance of which grows consistently. A purpose of this report is to update earlier publications. New information about known deposits, as well as data on new deposits published during the last decade, are incorporated in the present paper. The compiled database (appendix 1) contains 60 explored Nb- and REE-bearing carbonatite deposits - resources of 55 of these deposits are taken from publications. In the present updated grade-tonnage model we have added 24 deposits compared with the previous model of Singer (1998). Resources of most deposits are residuum ores in the upper part of carbonatite bodies. Mineral-deposit models are important in exploration planning and quantitative resource assessments for two reasons: (1) grades and tonnages among deposit types vary significantly, and (2) deposits of different types are present in distinct geologic settings that can be identified from geologic maps. Mineral-deposit models combine the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Globally based deposit models allow recognition of important features and demonstrate how common different features are. Well-designed deposit models allow geologists to deduce possible mineral-deposit types in a given geologic environment, and the grade and tonnage models allow economists to estimate the possible economic viability of these resources. Thus, mineral-deposit models play a central role in presenting geoscience information in a useful form to policy makers. The foundation of mineral-deposit models is information about known deposits. This publication presents the latest geologic information and newly developed grade and tonnage models for Nb- and REE-carbonatite deposits in digital form. The publication contains computer files with information on deposits from around the world. It also contains a text file allowing locations of all deposits to be plotted in geographic information system (GIS) programs. The data are presented in FileMaker Pro as well as in .xls and text files to make the information available to a broadly based audience. The value of this information and any derived analyses depends critically on the consistent manner of data gathering. For this reason, we first discuss the rules used in this compilation. Next, the fields of the database are explained. Finally, we provide new grade and tonnage models and analysis of the information in the file.

  15. Complementary hydro-mechanical coupled finite/discrete element and microseismic modelling to predict hydraulic fracture propagation in tight shale reservoirs

    NASA Astrophysics Data System (ADS)

    Profit, Matthew; Dutko, Martin; Yu, Jianguo; Cole, Sarah; Angus, Doug; Baird, Alan

    2016-04-01

    This paper presents a novel approach to predict the propagation of hydraulic fractures in tight shale reservoirs. Many hydraulic fracture modelling schemes assume that the fracture direction is pre-seeded in the problem domain discretisation. This is a severe limitation as the reservoir often contains large numbers of pre-existing fractures that strongly influence the direction of the propagating fracture. To circumvent these shortcomings, a new fracture modelling treatment is proposed where the introduction of discrete fracture surfaces is based on new and dynamically updated geometrical entities rather than the topology of the underlying spatial discretisation. Hydraulic fracturing is an inherently coupled engineering problem with interactions between fluid flow and fracturing when the stress state of the reservoir rock attains a failure criterion. This work follows a staggered hydro-mechanical coupled finite/discrete element approach to capture the key interplay between fluid pressure and fracture growth. In field practice, the fracture growth is hidden from the design engineer and microseismicity is often used to infer hydraulic fracture lengths and directions. Microseismic output can also be computed from changes of the effective stress in the geomechanical model and compared against field microseismicity. A number of hydraulic fracture numerical examples are presented to illustrate the new technology.

  16. A Weather Radar Simulator for the Evaluation of Polarimetric Phased Array Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrd, Andrew D.; Ivic, Igor R.; Palmer, Robert D.

    A radar simulator capable of generating time series data for a polarimetric phased array weather radar has been designed and implemented. The received signals are composed from a high-resolution numerical prediction weather model. Thousands of scattering centers, each with an independent randomly generated Doppler spectrum, populate the field of view of the radar. The moments of the scattering center spectra are derived from the numerical weather model, and the scattering center positions are updated based on the three-dimensional wind field. In order to accurately emulate the effects of the system-induced cross-polar contamination, the array is modeled using a complete set of dual-polarization radiation patterns. The simulator offers reconfigurable element patterns and positions as well as access to independent time series data for each element, resulting in easy implementation of any beamforming method. It also allows for arbitrary waveform designs and is able to model the effects of quantization on waveform performance. Simultaneous, alternating, quasi-simultaneous, and pulse-to-pulse phase coded modes of polarimetric signal transmission have been implemented. This framework allows for realistic emulation of the effects of cross-polar fields on weather observations, as well as the evaluation of possible techniques for the mitigation of those effects.
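    One common way to realize an "independent randomly generated Doppler spectrum" for a scattering center is random-phase spectral synthesis from prescribed moments (power, mean radial velocity, spectrum width). The sketch below shows that idea with illustrative radar parameters; the simulator's actual implementation may differ.

```python
import numpy as np

def scattering_center_timeseries(power, vel, width, m=64, prf=1000.0, wavelength=0.1):
    """Generate complex I/Q samples whose Doppler spectrum is Gaussian with
    prescribed moments (power, mean radial velocity, spectrum width) using
    random-phase spectral synthesis. All parameters are illustrative."""
    rng = np.random.default_rng()
    v_nyq = wavelength * prf / 4.0                      # Nyquist velocity
    v = np.linspace(-v_nyq, v_nyq, m, endpoint=False)   # velocity axis
    S = np.exp(-0.5 * ((v - vel) / width) ** 2)         # Gaussian spectrum shape
    S *= power / S.sum()                                # scale to target power
    phases = rng.uniform(0.0, 2.0 * np.pi, m)
    spectrum = np.sqrt(S) * np.exp(1j * phases)
    return np.fft.ifft(np.fft.ifftshift(spectrum)) * m  # back to slow time

iq = scattering_center_timeseries(power=1.0, vel=5.0, width=2.0)
print(np.mean(np.abs(iq) ** 2))                         # ~ prescribed power
```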

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soti, Zsolt; Magill, Joseph; Pfennig, Gerda

    Following the success of the 8. Edition of the Karlsruhe Nuclide Chart 2012, a new edition is planned for 2015. Since the 2012 edition, more than 100 nuclides have been discovered and about 1400 nuclides have been updated. In summary, the new 9. edition contains decay and radiation data on approximately 3230 ground state nuclides and 740 isomers from 118 chemical elements. The accompanying booklet provides a detailed explanation of the nuclide box structure used in the Chart. An expanded section contains many additional nuclide decay schemes to aid the user to interpret the highly condensed information in the nuclide boxes. The booklet contains - in addition to the latest values of the physical constants and physical properties - a periodic table of the elements, tables of new and updated nuclides, and a difference chart showing the main changes in the Chart graphically. (authors)

  18. A point-value enhanced finite volume method based on approximate delta functions

    NASA Astrophysics Data System (ADS)

    Xuan, Li-Jun; Majdalani, Joseph

    2018-02-01

    We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
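    A concrete instance consistent with the ADF description above (though not necessarily Huynh's particular construction) is the quadratic polynomial on [-1, 1] that reproduces point values at x_0 = 0 for all polynomial integrands up to degree 2:

```latex
% Moment-matching conditions for a degree-2 ADF on [-1,1] centered at x_0 = 0:
\int_{-1}^{1} \tilde{\delta}(x)\,x^{k}\,dx = 0^{k}, \qquad k = 0,1,2 .
% Writing \tilde{\delta}(x) = a + bx + cx^{2} gives
2a + \tfrac{2}{3}c = 1, \qquad \tfrac{2}{3}b = 0, \qquad \tfrac{2}{3}a + \tfrac{2}{5}c = 0 ,
% so that a = 9/8, b = 0, c = -15/8, and hence
\tilde{\delta}(x) = \tfrac{9}{8} - \tfrac{15}{8}x^{2}, \qquad
\int_{-1}^{1} \tilde{\delta}(x)\,p(x)\,dx = p(0)
\;\; \text{for every polynomial } p \text{ with } \deg p \le 2 .
```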

  19. Spectral-element global waveform tomography: A second-generation upper-mantle model

    NASA Astrophysics Data System (ADS)

    French, S. W.; Lekic, V.; Romanowicz, B. A.

    2012-12-01

    The SEMum model of Lekic and Romanowicz (2011a) was the first global upper-mantle VS model obtained using whole-waveform inversion with spectral element (SEM: Komatitsch and Vilotte, 1998) forward modeling of time domain three component waveforms. SEMum exhibits stronger amplitudes of heterogeneity in the upper 200km of the mantle compared to previous global models - particularly with respect to low-velocity anomalies. To make SEM-based waveform inversion tractable at global scales, SEMum was developed using: (1) a version of SEM coupled to 1D mode computation in the earth's core (C-SEM, Capdeville et al., 2003); (2) asymptotic normal-mode sensitivity kernels, incorporating multiple forward scattering and finite-frequency effects in the great-circle plane (NACT: Li and Romanowicz, 1995); and (3) a smooth anisotropic crustal layer of uniform 60km thickness, designed to match global surface-wave dispersion while reducing the cost of time integration in the SEM. The use of asymptotic kernels reduced the number of SEM computations considerably (≥ 3x) relative to purely numerical approaches (e.g. Tarantola, 1984), while remaining sufficiently accurate at the periods of interest (down to 60s). However, while the choice of a 60km crustal-layer thickness is justifiable in the continents, it can complicate interpretation of shallow oceanic upper-mantle structure. We here present an update to the SEMum model, designed primarily to address these concerns. The resulting model, SEMum2, was derived using a crustal layer that again fits global surface-wave dispersion, but with a more geologically consistent laterally varying thickness: approximately honoring Crust2.0 (Bassin, et al., 2000) Moho depth in the continents, while saturating at 30km in the oceans. We demonstrate that this approach does not bias our upper mantle model, which is constrained not only by fundamental mode surface waves, but also by overtone waveforms. We have also improved our data-selection and assimilation scheme, more readily allowing for additional and higher-quality data to be incorporated into our inversion as the model improves. Further, we have been able to refine the parameterization of the isotropic component of our model, previously limited by our ability to solve the large dense linear system that governs model updates (Tarantola and Valette, 1982). The construction of SEMum2 involved 3 additional inversion iterations away from SEMum. Overall, the combined effect of these improvements confirms and validates the general structure of the original SEMum. Model amplitudes remain an impressive feature in SEMum2, wherein peak-to-peak variation in VS can exceed 15% in close lateral juxtaposition. Further, many intriguing structures present in SEMum are now imaged with improved resolution in the updated model. In particular, the geographic extents of the anomalous oceanic cluster identified by Lekic and Romanowicz (2011b) are consistent with our findings and now allow us to further identify alternating bands of lower and higher velocities in the 200-300km depth range beneath the Pacific basin, with a characteristic spacing of ˜2000km normal to absolute plate motion. Possible dynamic interpretation of these and other features in the ocean basins is explored in a companion presentation (Romanowicz et al., this meeting).

  20. Data model, dictionaries, and desiderata for biomolecular simulation data indexing and sharing

    PubMed Central

    2014-01-01

    Background Few environments have been developed or deployed to widely share biomolecular simulation data or to enable collaborative networks to facilitate data exploration and reuse. As the amount and complexity of data generated by these simulations is dramatically increasing and the methods are being more widely applied, the need for new tools to manage and share this data has become obvious. In this paper we present the results of a process aimed at assessing the needs of the community for data representation standards to guide the implementation of future repositories for biomolecular simulations. Results We introduce a list of common data elements, inspired by previous work, and updated according to feedback from the community collected through a survey and personal interviews. These data elements integrate the concepts for multiple types of computational methods, including quantum chemistry and molecular dynamics. The identified core data elements were organized into a logical model to guide the design of new databases and application programming interfaces. Finally a set of dictionaries was implemented to be used via SQL queries or locally via a Java API built upon the Apache Lucene text-search engine. Conclusions The model and its associated dictionaries provide a simple yet rich representation of the concepts related to biomolecular simulations, which should guide future developments of repositories and more complex terminologies and ontologies. The model still remains extensible through the decomposition of virtual experiments into tasks and parameter sets, and via the use of extended attributes. The benefits of a common logical model for biomolecular simulations was illustrated through various use cases, including data storage, indexing, and presentation. All the models and dictionaries introduced in this paper are available for download at http://ibiomes.chpc.utah.edu/mediawiki/index.php/Downloads. PMID:24484917

  1. Utilizing Flight Data to Update Aeroelastic Stability Estimates

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty

    1997-01-01

    Stability analysis of high performance aircraft must account for errors in the system model. A method for computing flutter margins that incorporates flight data has been developed using robust stability theory. This paper considers applying this method to update flutter margins during a post-flight or on-line analysis. Areas of modeling uncertainty that arise when using flight data with this method are investigated. The amount of conservatism in the resulting flutter margins depends on the flight data sets used to update the model. Post-flight updates of flutter margins for an F/A-18 are presented along with a simulation of on-line updates during a flight test.

  2. Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Final Report, Version 2)

    EPA Science Inventory

    EPA announced the availability of the final report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2). This update furthered land change modeling by providing nationwide housing development scenarios up to 2100.

  3. Shift changes, updates, and the on-call architecture in space shuttle mission control.

    PubMed

    Patterson, E S; Woods, D D

    2001-01-01

    In domains such as nuclear power, industrial process control, and space shuttle mission control, there is increased interest in reducing personnel during nominal operations. An essential element in maintaining safe operations in high risk environments with this 'on-call' organizational architecture is to understand how to bring called-in practitioners up to speed quickly during escalating situations. Targeted field observations were conducted to investigate what it means to update a supervisory controller on the status of a continuous, anomaly-driven process in a complex, distributed environment. Sixteen shift changes, or handovers, at the NASA Johnson Space Center were observed during the STS-76 Space Shuttle mission. The findings from this observational study highlight the importance of prior knowledge in the updates and demonstrate how missing updates can leave flight controllers vulnerable to being unprepared. Implications for mitigating risk in the transition to 'on-call' architectures are discussed.

  4. Shift changes, updates, and the on-call architecture in space shuttle mission control

    NASA Technical Reports Server (NTRS)

    Patterson, E. S.; Woods, D. D.

    2001-01-01

    In domains such as nuclear power, industrial process control, and space shuttle mission control, there is increased interest in reducing personnel during nominal operations. An essential element in maintaining safe operations in high risk environments with this 'on-call' organizational architecture is to understand how to bring called-in practitioners up to speed quickly during escalating situations. Targeted field observations were conducted to investigate what it means to update a supervisory controller on the status of a continuous, anomaly-driven process in a complex, distributed environment. Sixteen shift changes, or handovers, at the NASA Johnson Space Center were observed during the STS-76 Space Shuttle mission. The findings from this observational study highlight the importance of prior knowledge in the updates and demonstrate how missing updates can leave flight controllers vulnerable to being unprepared. Implications for mitigating risk in the transition to 'on-call' architectures are discussed.

  5. Application of Artificial Intelligence for Bridge Deterioration Model.

    PubMed

    Chen, Zhang; Wu, Yangyang; Li, Li; Sun, Lijun

    2015-01-01

    The deterministic bridge deterioration model updating problem is well established in bridge management, while the traditional methods and approaches for this problem require manual intervention. An artificial-intelligence-based approach is presented in this paper to self-update the parameters of the bridge deterioration model. When new information and data are collected, a posterior distribution is constructed to describe the integrated result of historical information and the newly gained information according to Bayes' theorem, and this posterior is used to update the model parameters. This AI-based approach is applied to the case of updating the parameters of a bridge deterioration model using data collected from bridges in 12 districts of Shanghai from 2004 to 2013, and the results show that it is an accurate, effective, and satisfactory approach to the parameter updating problem without manual intervention.
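    As a rough sketch of the Bayesian update that drives such self-updating, the snippet below performs a conjugate normal update of a scalar deterioration-rate parameter, blending a prior from historical records with a new batch of inspection data. The numbers and the likelihood form are illustrative assumptions, not those of the cited study.

```python
import numpy as np

def update_rate(prior_mean, prior_var, observations, obs_var):
    """Conjugate normal Bayesian update of a scalar deterioration-rate
    parameter: the posterior blends historical knowledge (prior) with newly
    collected inspection data (illustrative likelihood and values only)."""
    n = len(observations)
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(observations) / obs_var)
    return post_mean, post_var

# Hypothetical yearly condition-index drops observed during new inspections.
prior_mean, prior_var = 0.8, 0.04          # from historical records
new_data = np.array([0.95, 1.05, 0.90])    # new inspection cycle
obs_var = 0.09

mean, var = update_rate(prior_mean, prior_var, new_data, obs_var)
print(f"updated rate: {mean:.3f} +/- {np.sqrt(var):.3f}")
```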

  6. Application of Artificial Intelligence for Bridge Deterioration Model

    PubMed Central

    Chen, Zhang; Wu, Yangyang; Sun, Lijun

    2015-01-01

    The deterministic bridge deterioration model updating problem is well established in bridge management, while the traditional methods and approaches for this problem require manual intervention. An artificial-intelligence-based approach is presented in this paper to self-update the parameters of the bridge deterioration model. When new information and data are collected, a posterior distribution is constructed to describe the integrated result of historical information and the newly gained information according to Bayes' theorem, and this posterior is used to update the model parameters. This AI-based approach is applied to the case of updating the parameters of a bridge deterioration model using data collected from bridges in 12 districts of Shanghai from 2004 to 2013, and the results show that it is an accurate, effective, and satisfactory approach to the parameter updating problem without manual intervention. PMID:26601121

  7. Land and Water Use Characteristics and Human Health Input Parameters for use in Environmental Dosimetry and Risk Assessments at the Savannah River Site. 2016 Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jannik, G. Tim; Hartman, Larry; Stagich, Brooke

    Operations at the Savannah River Site (SRS) result in releases of small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of applicant site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991 and 2010. They are being updated in this report. These parameters include local characteristics of meat, milk and vegetable production; river recreational activities; and meat, milk and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.

  8. Land and Water Use Characteristics and Human Health Input Parameters for use in Environmental Dosimetry and Risk Assessments at the Savannah River Site 2017 Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jannik, T.; Stagich, B.

    Operations at the Savannah River Site (SRS) result in releases of relatively small amounts of radioactive materials to the atmosphere and to the Savannah River. For regulatory compliance purposes, potential offsite radiological doses are estimated annually using computer models that follow U.S. Nuclear Regulatory Commission (NRC) regulatory guides. Within the regulatory guides, default values are provided for many of the dose model parameters, but the use of site-specific values is encouraged. Detailed surveys of land-use and water-use parameters were conducted in 1991, 2008, 2010, and 2016 and are being concurred with or updated in this report. These parameters include local characteristics of meat, milk, and vegetable production; river recreational activities; and meat, milk, and vegetable consumption rates, as well as other human usage parameters required in the SRS dosimetry models. In addition, the preferred elemental bioaccumulation factors and transfer factors (to be used in human health exposure calculations at SRS) are documented. The intent of this report is to establish a standardized source for these parameters that is up to date with existing data, and that is maintained via review of future-issued national references (to evaluate the need for changes as new information is released). These reviews will continue to be added to this document by revision.

  9. Decision-theoretic approach to data acquisition for transit operations planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, S.G.

    The most costly element of transportation planning and modeling activities in the past has usually been that of data acquisition. This is even truer today when the unit costs of data collection are increasing rapidly and at the same time budgets are severely limited by continuing policies of fiscal austerity in the public sector. The overall objectives of this research were to improve the decisions and decision-making capabilities of transit operators or planners in short-range transit planning, and to improve the quality and cost-effectiveness of associated route or corridor-level data collection and service monitoring activities. A new approach was presented for sequentially updating the parameters of both simple and multiple linear regression models with stochastic regressors, and for determining the expected value of sample information and expected net gain of sampling for associated sample designs. A new approach was also presented for estimating and updating (both spatially and temporally) the parameters of multinomial logit discrete choice models, and for determining associated optimal sample designs for attribute-based and choice-based sampling methods. The approach provides an effective framework for addressing the issue of optimal sampling method and sample size, which to date have been largely unresolved. The application of these methodologies and the feasibility of the decision-theoretic approach was illustrated with a hypothetical case study example.
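    The sequential-updating idea for regression parameters can be sketched with a conjugate Gaussian update applied batch by batch as new survey data arrive; the value-of-information and expected-net-gain calculations mentioned above are not reproduced here, and all quantities are invented for illustration.

```python
import numpy as np

def bayes_regression_update(mean, cov, X_new, y_new, noise_var):
    """One sequential (conjugate Gaussian) update of linear-regression
    coefficients as a new batch of survey data arrives. A simplified stand-in
    for the report's approach; sample-design optimization is not shown."""
    cov_new = np.linalg.inv(np.linalg.inv(cov) + X_new.T @ X_new / noise_var)
    mean_new = cov_new @ (np.linalg.inv(cov) @ mean + X_new.T @ y_new / noise_var)
    return mean_new, cov_new

rng = np.random.default_rng(2)
beta_true = np.array([2.0, -1.0])
mean, cov = np.zeros(2), 10.0 * np.eye(2)   # diffuse prior on coefficients

# Hypothetical ridership model updated with three successive small surveys.
for _ in range(3):
    X = np.column_stack([np.ones(20), rng.uniform(0, 5, 20)])
    y = X @ beta_true + rng.normal(0, 0.5, 20)
    mean, cov = bayes_regression_update(mean, cov, X, y, noise_var=0.25)
    print(mean)
```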

  10. Composite sizing and ply orientation for stiffness requirements using a large finite element structural model

    NASA Technical Reports Server (NTRS)

    Radovcich, N. A.; Gentile, D. P.

    1989-01-01

    A NASTRAN bulk dataset preprocessor was developed to facilitate the integration of filamentary composite laminate properties into composite structural resizing for stiffness requirements. The NASCOMP system generates delta stiffness and delta mass matrices for input to the flutter derivative program. The flutter baseline analysis, derivative calculations, and stiffness and mass matrix updates are controlled by engineer defined processes under an operating system called CBUS. A multi-layered design variable grid system permits high fidelity resizing without excessive computer cost. The NASCOMP system uses ply layup drawings for basic input. The aeroelastic resizing for stiffness capability was used during an actual design exercise.

  11. 2013 Advanced Environmental Health/Advanced Food Technology Standing Review Panel Final Report

    NASA Technical Reports Server (NTRS)

    Steinberg, Susan

    2014-01-01

    The 2013 Advanced Environmental Health/Advanced Food Technology (AEH/AFT) Standing Review Panel (from here on referred to as the SRP) participated in a WebEx/teleconference with members of the Space Human Factors and Habitability (SHFH) Element, representatives from the Human Research Program (HRP), and NASA Headquarters on November 22, 2013 (list of participants is in Section IX of this report). The SRP reviewed the updated research plans for the Risk of Adverse Health Effects Due to Alterations in Host-Microorganism Interactions (Host Microbe Risk) and the Risk of Performance Decrement and Crew Illness Due to an Inadequate Food System (Food Risk). The SRP also received a status update on the Risk of Adverse Health Effects of Exposure to Dust and Volatiles during Exploration of Celestial Bodies (Dust Risk). Overall, the SRP was impressed with the strong research plans presented by the scientists and staff associated with the SHFH Element. The SRP also thought that the updated research plans were thorough, well organized, and presented in a comprehensive manner. The SRP agrees with the changes made to the Host Microbe Risk and Food Risk portfolios and thinks that the targets for Gap closure are appropriate.

  12. Overview and Evaluation of the Community Multiscale Air ...

    EPA Pesticide Factsheets

    The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Computational Exposure Division (CED) of the U.S. Environmental Protection Agency develops the CMAQ model and periodically releases new versions of the model that include bug fixes and various other improvements to the modeling system. In late 2016 or early 2017, CMAQ version 5.2 will be released. This new version of CMAQ will contain important updates from the current CMAQv5.1 modeling system, along with several instrumented versions of the model (e.g. decoupled direct method and sulfur tracking). Some specific model updates include the implementation of a new wind-blown dust treatment in CMAQv5.2, a significant improvement over the treatment in v5.1, which can severely overestimate wind-blown dust under certain conditions. Several other major updates to the modeling system include an update to the calculation of aerosols; implementation of full halogen chemistry (CMAQv5.1 contains a partial implementation of halogen chemistry); the new carbon bond 6 (CB6) chemical mechanism; updates to the cloud model in CMAQ; and a new lightning assimilation scheme for the WRF model which significantly improves the placement and timing of convective precipitation in the WRF precipitation fields. Numerous other updates to the modeling system will also be available in v5.2.

  13. To Your Health: NLM update transcript - Gun safety strategies

    MedlinePlus

    ... elements that range from enforcing prohibited gun purchase laws to better crime detection, suggests a sweeping viewpoint ... Association . The authors, who are attorneys on the law faculties of Georgetown and Stanford Universities, suggest the ...

  14. Finite-element 3D simulation tools for high-current relativistic electron beams

    NASA Astrophysics Data System (ADS)

    Humphries, Stanley; Ekdahl, Carl

    2002-08-01

    The DARHT second-axis injector is a challenge for computer simulations. Electrons are subject to strong beam-generated forces. The fields are fully three-dimensional and accurate calculations at surfaces are critical. We describe methods applied in OmniTrak, a 3D finite-element code suite that can address DARHT and the full range of charged-particle devices. The system handles mesh generation, electrostatics, magnetostatics and self-consistent particle orbits. The MetaMesh program generates meshes of conformal hexahedrons to fit any user geometry. The code has the unique ability to create structured conformal meshes with cubic logic. Organized meshes offer advantages in speed and memory utilization in the orbit and field solutions. OmniTrak is a versatile charged-particle code that handles 3D electric and magnetic field solutions on independent meshes. The program can update both 3D field solutions from the calculated beam space-charge and current-density. We shall describe numerical methods for orbit tracking on a hexahedron mesh. Topics include: 1) identification of elements along the particle trajectory, 2) fast searches and adaptive field calculations, 3) interpolation methods to terminate orbits on material surfaces, 4) automatic particle generation on multiple emission surfaces to model space-charge-limited emission and field emission, 5) flexible Child law algorithms, 6) implementation of the dual potential model for 3D magnetostatics, and 7) assignment of charge and current from model particle orbits for self-consistent fields.
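    The "flexible Child law algorithms" mentioned above build on the planar Child-Langmuir expression for space-charge-limited current density. A minimal sketch of that expression follows (not OmniTrak code; the 20 kV / 1 cm example values are arbitrary).

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
Q_E  = 1.602176634e-19    # electron charge, C
M_E  = 9.1093837015e-31   # electron mass, kg

def child_langmuir_j(voltage_v: float, gap_m: float) -> float:
    """Planar space-charge-limited current density J = (4*eps0/9)*sqrt(2e/m)*V^1.5/d^2, in A/m^2."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * Q_E / M_E) * voltage_v ** 1.5 / gap_m ** 2

# Example: 20 kV across a 1 cm extraction gap
print(f"J = {child_langmuir_j(2.0e4, 0.01):.3e} A/m^2")
```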

  15. Reliability analysis of laminated CMC components through shell subelement techniques

    NASA Technical Reports Server (NTRS)

    Starlinger, A.; Duffy, S. F.; Gyekenyesi, J. P.

    1992-01-01

    An updated version of the integrated design program C/CARES (composite ceramic analysis and reliability evaluation of structures) was developed for the reliability evaluation of CMC laminated shell components. The algorithm is now split into two modules: a finite-element data interface program and a reliability evaluation algorithm. More flexibility is achieved, allowing for easy implementation with various finite-element programs. The new interface program from the finite-element code MARC also includes the option of using hybrid laminates and allows for variations in temperature fields throughout the component.

  16. Evaluation of the DRAGON code for VHTR design analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taiwo, T. A.; Kim, T. K.; Nuclear Engineering Division

    2006-01-12

    This letter report summarizes three activities that were undertaken in FY 2005 to gather information on the DRAGON code and to perform limited evaluations of the code performance when used in the analysis of the Very High Temperature Reactor (VHTR) designs. These activities include: (1) Use of the code to model the fuel elements of the helium-cooled and liquid-salt-cooled VHTR designs. Results were compared to those from another deterministic lattice code (WIMS8) and a Monte Carlo code (MCNP). (2) The preliminary assessment of the nuclear data library currently used with the code and libraries that have been provided by the IAEA WIMS-D4 Library Update Project (WLUP). (3) DRAGON workshop held to discuss the code capabilities for modeling the VHTR.

  17. Computational Simulation of the Formation and Material Behavior of Ice

    NASA Technical Reports Server (NTRS)

    Tong, Michael T.; Singhal, Surendra N.; Chamis, Christos C.

    1994-01-01

    Computational methods are described for simulating the formation and the material behavior of ice in prevailing transient environments. The methodology developed at the NASA Lewis Research Center was adopted. A three dimensional finite-element heat transfer analyzer was used to predict the thickness of ice formed under prevailing environmental conditions. A multi-factor interaction model for simulating the material behavior of time-variant ice layers is presented. The model, used in conjunction with laminated composite mechanics, updates the material properties of an ice block as its thickness increases with time. A sample case of ice formation in a body of water was used to demonstrate the methodology. The results showed that the formation and the material behavior of ice can be computationally simulated using the available composites technology.

  18. Nonlinear Shell Modeling of Thin Membranes with Emphasis on Structural Wrinkling

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Sleight, David W.; Wang, John T.

    2003-01-01

    Thin solar sail membranes of very large span are being envisioned for near-term space missions. One major design issue that is inherent to these very flexible structures is the formation of wrinkling patterns. Structural wrinkles may deteriorate a solar sail's performance and, in certain cases, structural integrity. In this paper, a geometrically nonlinear, updated Lagrangian shell formulation is employed using the ABAQUS finite element code to simulate the formation of wrinkled deformations in thin-film membranes. The restrictive assumptions of true membranes, i.e. Tension Field theory (TF), are not invoked. Two effective modeling strategies are introduced to facilitate convergent solutions of wrinkled equilibrium states. Several numerical studies are carried out, and the results are compared with recent experimental data. Good agreement is observed between the numerical simulations and experimental data.

  19. Service delivery innovation for hospital emergency management using rich organizational modelling.

    PubMed

    Dhakal, Yogit; Bhuiyan, Moshiur; Prasad, Pwc; Krishna, Aneesh

    2018-04-01

    The purpose of this article is to identify and assess service delivery issues within a hospital emergency department and propose an improved model to address them. Possible solutions and options to these issues are explored to determine the one that best fits the context. In this article, we have analysed the emergency department's organizational models through the i* strategic dependency and strategic rationale modelling techniques before proposing updated models that could potentially drive business process efficiencies. The results produced by the models, framework and improved patient journey in the emergency department were evaluated against statistical data obtained from a reputable government health organization, to ensure that the key elements of the issues, such as wait time, stay time/throughput, workload and human resources, are resolved. The result of the evaluation was taken as a basis to determine the success of the project. Based on these results, the article recommends implementing the concept in an actual scenario, where a positive result is achievable.

  20. Updates to Simulation of a Single-Element Lean-Direct Injection Combustor Using Arbitrary Polyhedral Meshes

    NASA Technical Reports Server (NTRS)

    Wey, Thomas; Liu, Nan-Suey

    2015-01-01

    This paper summarizes the procedures of (1) generating control volumes anchored at the nodes of a mesh; and (2) generating staggered control volumes via mesh reconstructions, in terms of either mesh realignment or mesh refinement, as well as presents sample results from their applications to the numerical solution of a single-element LDI combustor using a releasable edition of the National Combustion Code (NCC).

  1. Orbital operations study. Appendix A: Interactivity analysis

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Supplemental analyses conducted to verify that safe, feasible, design concepts exist for accomplishing the attendant interface activities of the orbital operations mission are presented. The data are primarily concerned with functions and concepts common to more than one of the interfacing activities or elements. Specific consideration is given to state vector update, payload deployment, communications links, jet plume impingement, attached element operations, docking and structural interface assessment, and propellant transfer.

  2. Potential SPOT-1 R/B-Cosmos 1680 R/B collision

    NASA Technical Reports Server (NTRS)

    Henize, Karl G.; Rast, Richard H.

    1989-01-01

    Detailed NORAD data have revealed updated orbital elements for the Ariane third-stage rocket body that underwent breakup on November 13, 1986, as well as for the Cosmos 1680 rocket body. Applying the maximum expected error due to the extrapolation of orbital elements to the date of the possible collision between the two bodies shows the smallest possible distance between bodies to have been 380 km, thereby precluding collision.

  3. An Embedded Statistical Method for Coupling Molecular Dynamics and Finite Element Analyses

    NASA Technical Reports Server (NTRS)

    Saether, E.; Glaessgen, E.H.; Yamakov, V.

    2008-01-01

    The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.
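    The central ESCM idea, replacing one-to-one atom/node linkage with a statistical average of atomistic displacements over a local atomic volume, can be sketched as follows. The spherical sampling volume, array layouts, and toy data are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def nodal_displacements_from_atoms(node_xyz, atom_xyz, atom_disp, radius):
    """For each interface FE node, average the displacements of all atoms within `radius`
    of the node (a stand-in for the local atomic volume used in a statistical coupling)."""
    u_nodes = np.zeros_like(node_xyz)
    for i, xn in enumerate(node_xyz):
        mask = np.linalg.norm(atom_xyz - xn, axis=1) <= radius
        if mask.any():
            u_nodes[i] = atom_disp[mask].mean(axis=0)   # statistical average, not atom-to-node linkage
    return u_nodes

# Toy data: 2 interface nodes, 500 atoms carrying thermal-like displacement fluctuations
rng = np.random.default_rng(0)
nodes = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
atoms = rng.uniform(-2.0, 12.0, size=(500, 3))
disps = 0.01 * rng.standard_normal((500, 3)) + np.array([0.05, 0.0, 0.0])
print(nodal_displacements_from_atoms(nodes, atoms, disps, radius=3.0))
```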

  4. A New Concurrent Multiscale Methodology for Coupling Molecular Dynamics and Finite Element Analyses

    NASA Technical Reports Server (NTRS)

    Yamakov, Vesselin; Saether, Erik; Glaessgen, Edward H.

    2008-01-01

    The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.

  5. Fusion of intraoperative force sensoring, surface reconstruction and biomechanical modeling

    NASA Astrophysics Data System (ADS)

    Röhl, S.; Bodenstedt, S.; Küderle, C.; Suwelack, S.; Kenngott, H.; Müller-Stich, B. P.; Dillmann, R.; Speidel, S.

    2012-02-01

    Minimally invasive surgery is medically complex and can heavily benefit from computer assistance. One way to help the surgeon is to integrate preoperative planning data into the surgical workflow. This information can be represented as a customized preoperative model of the surgical site. To use it intraoperatively, it has to be updated during the intervention due to the constantly changing environment. Hence, intraoperative sensor data has to be acquired and registered with the preoperative model. Haptic information which could complement the visual sensor data is still not established. In addition, biomechanical modeling of the surgical site can help in reflecting the changes which cannot be captured by intraoperative sensors. We present a setting where a force sensor is integrated into a laparoscopic instrument. In a test scenario using a silicone liver phantom, we register the measured forces with a reconstructed surface model from stereo endoscopic images and a finite element model. The endoscope, the instrument and the liver phantom are tracked with a Polaris optical tracking system. By fusing this information, we can transfer the deformation onto the finite element model. The purpose of this setting is to demonstrate the principles needed and the methods developed for intraoperative sensor data fusion. One emphasis lies on the calibration of the force sensor with the instrument and first experiments with soft tissue. We also present our solution and first results concerning the integration of the force sensor as well as accuracy to the fusion of force measurements, surface reconstruction and biomechanical modeling.

  6. 'DIRTMAP2': Dust and Palaeoclimate.

    NASA Astrophysics Data System (ADS)

    Maher, B.

    2008-12-01

    The influence of dust on climate, through changes in the radiative properties of the atmosphere and/or the CO2 content of the oceans and atmosphere (through iron fertilisation of high nutrient, low chlorophyll, HNLC, regions of the world's oceans), remains a poorly quantified and actively changing element of the Earth's climate system. Dust-cycle models presently employ a relatively simple representation of dust properties; these simplifications may severely limit the realism of simulations of the impact of changes in dust loading on either or both radiative forcing and biogeochemical cycling. Further, whilst state-of-the-art models achieve reasonable estimates of dust deposition in the far-field (i.e. at ocean locations), they under-estimate - by an order of magnitude - levels of dust deposition over the continents, unless glacigenic dust production is explicitly and spatially represented. The 'DIRTMAP2' working group aims to address these problems directly, through a series of explicitly interacting contributions from the international modelling and palaeo-data communities. A key aim of the project is to produce an updated version of the DIRTMAP database ('DIRTMAP2'), incorporating (a) records and age models newly available since ~ 2001, (b) longer records, and especially high-resolution records, that will target time windows also focused on by other international research programs (e.g. DO8/9, MIS5), (c) metadata to allow quality-control issues to be dealt with objectively, (d) information on mineralogy and isotopes relevant to provenancing, radiative forcing and iron bioavailability, and (e) enhanced characterisation of the aeolian component of existing records. This update will be coordinated with work (led by Karen Kohfeld) to expand the DIRTMAP database to incorporate information on marine productivity and improved sedimentation rate estimation techniques. It will also build upon a recently-developed dust model evaluation tool for current climate (e.g. Miller et al. 2006) to enable application of this and other evaluative models to palaeoclimate simulations. We invite colleagues to contribute to this update; the DIRTMAP2 database will shortly be accessible from the University of Lancaster website.

  7. Nonlinear vibrations of thin arbitrarily laminated composite plates subjected to harmonic excitations using DKT elements

    NASA Astrophysics Data System (ADS)

    Chiang, C. K.; Xue, David Y.; Mei, Chuh

    1993-04-01

    A finite element formulation is presented for determining the large-amplitude free and steady-state forced vibration response of arbitrarily laminated anisotropic composite thin plates using the Discrete Kirchhoff Theory (DKT) triangular elements. The nonlinear stiffness and harmonic force matrices of an arbitrarily laminated composite triangular plate element are developed for nonlinear free and forced vibration analyses. The linearized updated-mode method with nonlinear time function approximation is employed for the solution of the system nonlinear eigenvalue equations. The amplitude-frequency relations for convergence with gridwork refinement, triangular plates, different boundary conditions, lamination angles, number of plies, and uniform versus concentrated loads are presented.

  8. Nonlinear vibrations of thin arbitrarily laminated composite plates subjected to harmonic excitations using DKT elements

    NASA Technical Reports Server (NTRS)

    Chiang, C. K.; Xue, David Y.; Mei, Chuh

    1993-01-01

    A finite element formulation is presented for determining the large-amplitude free and steady-state forced vibration response of arbitrarily laminated anisotropic composite thin plates using the Discrete Kirchhoff Theory (DKT) triangular elements. The nonlinear stiffness and harmonic force matrices of an arbitrarily laminated composite triangular plate element are developed for nonlinear free and forced vibration analyses. The linearized updated-mode method with nonlinear time function approximation is employed for the solution of the system nonlinear eigenvalue equations. The amplitude-frequency relations for convergence with gridwork refinement, triangular plates, different boundary conditions, lamination angles, number of plies, and uniform versus concentrated loads are presented.

  9. Nanometer-sized materials for solid-phase extraction of trace elements.

    PubMed

    Hu, Bin; He, Man; Chen, Beibei

    2015-04-01

    This review presents a comprehensive update on the state-of-the-art of nanometer-sized materials in solid-phase extraction (SPE) of trace elements followed by atomic-spectrometry detection. Zero-dimensional nanomaterials (fullerene), one-dimensional nanomaterials (carbon nanotubes, inorganic nanotubes, and nanowires), two-dimensional nanomaterials (nanofibers), and three-dimensional nanomaterials (nanoparticles, mesoporous nanoparticles, magnetic nanoparticles, and dendrimers) for SPE are discussed, with their application for trace-element analysis and their speciation in different matrices. A variety of other novel SPE sorbents, including restricted-access sorbents, ion-imprinted polymers, and metal-organic frameworks, are also discussed, although their applications in trace-element analysis are relatively scarce so far.

  10. Adapting to change: The role of the right hemisphere in mental model building and updating.

    PubMed

    Filipowicz, Alex; Anderson, Britt; Danckert, James

    2016-09-01

    We recently proposed that the right hemisphere plays a crucial role in the processes underlying mental model building and updating. Here, we review the evidence we and others have garnered to support this novel account of right hemisphere function. We begin by presenting evidence from patient work that suggests a critical role for the right hemisphere in the ability to learn from the statistics in the environment (model building) and adapt to environmental change (model updating). We then provide a review of neuroimaging research that highlights a network of brain regions involved in mental model updating. Next, we outline specific roles for particular regions within the network such that the anterior insula is purported to maintain the current model of the environment, the medial prefrontal cortex determines when to explore new or alternative models, and the inferior parietal lobule represents salient and surprising information with respect to the current model. We conclude by proposing some future directions that address some of the outstanding questions in the field of mental model building and updating. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. The 2006 Cape Canaveral Air Force Station Range Reference Atmosphere Model Validation Study and Sensitivity Analysis to the National Aeronautics and Space Administration's Space Shuttle

    NASA Technical Reports Server (NTRS)

    Burns, Lee; Merry, Carl; Decker, Ryan; Harrington, Brian

    2008-01-01

    The 2006 Cape Canaveral Air Force Station (CCAFS) Range Reference Atmosphere (RRA) is a statistical model summarizing the wind and thermodynamic atmospheric variability from the surface to 70 km. Launches of the National Aeronautics and Space Administration's (NASA) Space Shuttle from Kennedy Space Center utilize CCAFS RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the CCAFS RRA was recently completed. As part of the update, a validation study on the 2006 version was conducted as well as a comparison analysis of the 2006 version to the existing CCAFS RRA database version 1983. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed to determine impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.

  12. Adaptation of clinical prediction models for application in local settings.

    PubMed

    Kappen, Teus H; Vergouwe, Yvonne; van Klei, Wilton A; van Wolfswinkel, Leo; Kalkman, Cor J; Moons, Karel G M

    2012-01-01

    When planning to use a validated prediction model in new patients, adequate performance is not guaranteed. For example, changes in clinical practice over time or a different case mix than the original validation population may result in inaccurate risk predictions. To demonstrate how clinical information can direct updating a prediction model and development of a strategy for handling missing predictor values in clinical practice. A previously derived and validated prediction model for postoperative nausea and vomiting was updated using a data set of 1847 patients. The update consisted of 1) changing the definition of an existing predictor, 2) reestimating the regression coefficient of a predictor, and 3) adding a new predictor to the model. The updated model was then validated in a new series of 3822 patients. Furthermore, several imputation models were considered to handle real-time missing values, so that possible missing predictor values could be anticipated during actual model use. Differences in clinical practice between our local population and the original derivation population guided the update strategy of the prediction model. The predictive accuracy of the updated model was better (c statistic, 0.68; calibration slope, 1.0) than the original model (c statistic, 0.62; calibration slope, 0.57). Inclusion of logistical variables in the imputation models, besides observed patient characteristics, contributed to a strategy to deal with missing predictor values at the time of risk calculation. Extensive knowledge of local, clinical processes provides crucial information to guide the process of adapting a prediction model to new clinical practices.
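    A minimal sketch of the three kinds of update described above, applied to a generic logistic prediction model rather than the authors' actual postoperative nausea and vomiting model: the original linear predictor is recalibrated, one drifting coefficient is re-estimated, and a new predictor is added. All data and coefficients below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1847                                          # size of the local updating sample

X_old = rng.standard_normal((n, 3))               # predictors of the original model
beta_orig = np.array([0.8, -0.5, 0.3])            # original (published) coefficients - hypothetical
intercept_orig = -1.0
lp = intercept_orig + X_old @ beta_orig           # linear predictor of the original model

# Local outcomes: one coefficient has drifted and a new predictor matters in this population
x_new = rng.standard_normal(n)
logit_true = -0.6 + X_old @ np.array([0.8, -0.9, 0.3]) + 0.4 * x_new
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_true)))

# Updated model: recalibrate the old linear predictor (intercept + slope),
# re-estimate the drifting predictor, and add the new predictor
Z = np.column_stack([lp, X_old[:, 1], x_new])
updated = LogisticRegression().fit(Z, y)
print("calibration slope on old linear predictor:", round(updated.coef_[0][0], 2))
print("adjustment to drifting predictor, new predictor:", np.round(updated.coef_[0][1:], 2))
```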

  13. A review and update of the Virginia Department of Transportation cash flow forecasting model.

    DOT National Transportation Integrated Search

    1996-01-01

    This report details the research done to review and update components of the VDOT cash flow forecasting model. Specifically, the study updated the monthly factors submodel used to predict payments on construction contracts. For the other submodel rev...

  14. Fluids and Combustion Facility: Combustion Integrated Rack Modal Model Correlation

    NASA Technical Reports Server (NTRS)

    McNelis, Mark E.; Suarez, Vicente J.; Sullivan, Timothy L.; Otten, Kim D.; Akers, James C.

    2005-01-01

    The Fluids and Combustion Facility (FCF) is a modular, multi-user, two-rack facility dedicated to combustion and fluids science in the US Laboratory Destiny on the International Space Station. FCF is a permanent facility that is capable of accommodating up to ten combustion and fluid science investigations per year. FCF research in combustion and fluid science supports NASA's Exploration of Space Initiative for on-orbit fire suppression, fire safety, and space system fluids management. The Combustion Integrated Rack (CIR) is one of two racks in the FCF. The CIR major structural elements include the International Standard Payload Rack (ISPR), Experiment Assembly (optics bench and combustion chamber), Air Thermal Control Unit (ATCU), Rack Door, and Lower Structure Assembly (Input/Output Processor and Electrical Power Control Unit). The load path through the rack structure is outlined. The CIR modal survey was conducted to validate the load path predicted by the CIR finite element model (FEM). The modal survey is done by experimentally measuring the CIR frequencies and mode shapes. The CIR model was test-correlated by updating the model to represent the test mode shapes. The correlated CIR model delivery is required by NASA JSC at Launch-10.5 months. The test-correlated CIR flight FEM is analytically integrated into the Shuttle for a coupled loads analysis of the launch configuration. The analysis frequency range of interest is 0-50 Hz. A coupled loads analysis is the analytical integration of the Shuttle with its cargo element, the Multi-Purpose Logistics Module (MPLM), in the Shuttle cargo bay. For each Shuttle launch configuration, a verification coupled loads analysis is performed to determine the loads in the cargo bay as part of the structural certification process.
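    Test-analysis correlation of this kind is commonly quantified with frequency differences and the modal assurance criterion (MAC). A minimal MAC sketch with hypothetical mode-shape vectors (not CIR data):

```python
import numpy as np

def mac(phi_a: np.ndarray, phi_t: np.ndarray) -> float:
    """Modal assurance criterion between an analytical and a test mode shape."""
    return abs(phi_a @ phi_t) ** 2 / ((phi_a @ phi_a) * (phi_t @ phi_t))

phi_fem  = np.array([1.00, 0.78, 0.41, -0.22])   # FEM mode shape sampled at the accelerometer DOFs
phi_test = np.array([0.98, 0.80, 0.38, -0.25])   # measured mode shape at the same DOFs
print(f"MAC = {mac(phi_fem, phi_test):.3f}")     # values near 1.0 indicate consistent shapes
```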

  15. General Separations Area (GSA) Groundwater Flow Model Update: Hydrostratigraphic Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagwell, L.; Bennett, P.; Flach, G.

    2017-02-21

    This document describes the assembly, selection, and interpretation of hydrostratigraphic data for input to an updated groundwater flow model for the General Separations Area (GSA; Figure 1) at the Department of Energy’s (DOE) Savannah River Site (SRS). This report is one of several discrete but interrelated tasks that support development of an updated groundwater model (Bagwell and Flach, 2016).

  16. Automating Phase Change Lines and Their Labels Using Microsoft Excel(R).

    PubMed

    Deochand, Neil

    2017-09-01

    Many researchers have rallied against drawn-in graphical elements and offered ways to avoid them, especially regarding the insertion of phase change lines (Deochand, Costello, & Fuqua, 2015; Dubuque, 2015; Vanselow & Bourret, 2012). However, few have offered a solution to automating the phase labels, which are often utilized in behavior analytic graphical displays (Deochand et al., 2015). Despite the fact that Microsoft Excel® is extensively utilized by behavior analysts, solutions to resolve issues in our graphing practices are not always apparent or user-friendly. Considering that the insertion of phase change lines and their labels constitutes a repetitious and laborious endeavor, any minimization in the steps to accomplish these graphical elements could offer substantial time-savings to the field. The purpose of this report is to provide an updated way (and templates in the supplemental materials) to add phase change lines with their respective labels, which stay embedded in the graph when they are moved or updated.

  17. Test and analysis procedures for updating math models of Space Shuttle payloads

    NASA Technical Reports Server (NTRS)

    Craig, Roy R., Jr.

    1991-01-01

    Over the next decade or more, the Space Shuttle will continue to be the primary transportation system for delivering payloads to Earth orbit. Although a number of payloads have already been successfully carried by the Space Shuttle in the payload bay of the Orbiter vehicle, there continues to be a need for evaluation of the procedures used for verifying and updating the math models of the payloads. The verified payload math model is combined with an Orbiter math model for the coupled-loads analysis, which is required before any payload can fly. Several test procedures were employed for obtaining data for use in verifying payload math models and for carrying out the updating of the payload math models. Research was directed at the evaluation of test/update procedures for use in the verification of Space Shuttle payload math models. The following research tasks are summarized: (1) a study of free-interface test procedures; (2) a literature survey and evaluation of model update procedures; and (3) the design and construction of a laboratory payload simulator.

  18. Build-up Approach to Updating the Mock Quiet Spike(TradeMark) Beam Model

    NASA Technical Reports Server (NTRS)

    Herrera, Claudia Y.; Pak, Chan-gi

    2007-01-01

    A crucial part of aircraft design is ensuring that the required margin for flutter is satisfied. A trustworthy flutter analysis, which begins by possessing an accurate dynamics model, is necessary for this task. Traditionally, a model was updated manually by fine tuning specific stiffness parameters until the analytical results matched test data. This is a time consuming iterative process. NASA Dryden Flight Research Center has developed a mode matching code to execute this process in a more efficient manner. Recently, this code was implemented in the F-15B/Quiet Spike(TradeMark) (Gulfstream Aerospace Corporation, Savannah, Georgia) model update. A build-up approach requiring several ground vibration test configurations and a series of model updates was implemented in order to determine the connection stiffness between aircraft and test article. The mode matching code successfully updated various models for the F-15B/Quiet Spike(TradeMark) project to within 1 percent error in frequency and the modal assurance criteria values ranged from 88.51-99.42 percent.

  19. Build-up Approach to Updating the Mock Quiet Spike(TM)Beam Model

    NASA Technical Reports Server (NTRS)

    Herrera, Claudia Y.; Pak, Chan-gi

    2007-01-01

    A crucial part of aircraft design is ensuring that the required margin for flutter is satisfied. A trustworthy flutter analysis, which begins by possessing an accurate dynamics model, is necessary for this task. Traditionally, a model was updated manually by fine tuning specific stiffness parameters until the analytical results matched test data. This is a time consuming iterative process. The NASA Dryden Flight Research Center has developed a mode matching code to execute this process in a more efficient manner. Recently, this code was implemented in the F-15B/Quiet Spike (Gulfstream Aerospace Corporation, Savannah, Georgia) model update. A build-up approach requiring several ground vibration test configurations and a series of model updates was implemented to determine the connection stiffness between aircraft and test article. The mode matching code successfully updated various models for the F-15B/Quiet Spike project to within 1 percent error in frequency and the modal assurance criteria values ranged from 88.51-99.42 percent.

  20. Updates to the Demographic and Spatial Allocation Models to ...

    EPA Pesticide Factsheets

    EPA announced the availability of the draft report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS), for a 30-day public comment period. The ICLUS version 2 (v2) modeling tool furthered land change modeling by providing nationwide housing development scenarios up to 2100. ICLUS v2 includes updated population and land use data sets and addresses limitations identified in ICLUS v1 in both the migration and spatial allocation models. The companion user guide describes the development of ICLUS v2 and the updates that were made to the original data sets and the demographic and spatial allocation models. [2017 UPDATE] Get the latest version of ICLUS and stay up-to-date by signing up to the ICLUS mailing list. The GIS tool enables users to run SERGoM with the population projections developed for the ICLUS project and allows users to modify the spatial allocation of housing density across the landscape.

  1. BOADICEA breast cancer risk prediction model: updates to cancer incidences, tumour pathology and web interface

    PubMed Central

    Lee, A J; Cunningham, A P; Kuchenbaecker, K B; Mavaddat, N; Easton, D F; Antoniou, A C

    2014-01-01

    Background: The Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm (BOADICEA) is a risk prediction model that is used to compute probabilities of carrying mutations in the high-risk breast and ovarian cancer susceptibility genes BRCA1 and BRCA2, and to estimate the future risks of developing breast or ovarian cancer. In this paper, we describe updates to the BOADICEA model that extend its capabilities, make it easier to use in a clinical setting and yield more accurate predictions. Methods: We describe: (1) updates to the statistical model to include cancer incidences from multiple populations; (2) updates to the distributions of tumour pathology characteristics using new data on BRCA1 and BRCA2 mutation carriers and women with breast cancer from the general population; (3) improvements to the computational efficiency of the algorithm so that risk calculations now run substantially faster; and (4) updates to the model's web interface to accommodate these new features and to make it easier to use in a clinical setting. Results: We present results derived using the updated model, and demonstrate that the changes have a significant impact on risk predictions. Conclusion: All updates have been implemented in a new version of the BOADICEA web interface that is now available for general use: http://ccge.medschl.cam.ac.uk/boadicea/. PMID:24346285

  2. Sparse Partial Equilibrium Tables in Chemically Resolved Reactive Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitello, P; Fried, L E; Pudliner, B

    2003-07-14

    The detonation of an energetic material is the result of a complex interaction between kinetic chemical reactions and hydrodynamics. Unfortunately, little is known concerning the detailed chemical kinetics of detonations in energetic materials. CHEETAH uses rate laws to treat species with the slowest chemical reactions, while assuming other chemical species are in equilibrium. CHEETAH supports a wide range of elements and condensed detonation products and can also be applied to gas detonations. A sparse hash table of equation of state values, called the "cache," is used in CHEETAH to enhance the efficiency of kinetic reaction calculations. For large-scale parallel hydrodynamic calculations, CHEETAH uses MPI communication to update the cache. We present here details of the sparse caching model used in CHEETAH. To demonstrate the efficiency of modeling using a sparse cache model we consider detonations in energetic materials.

  3. Sparse Partial Equilibrium Tables in Chemically Resolved Reactive Flow

    NASA Astrophysics Data System (ADS)

    Vitello, Peter; Fried, Laurence E.; Pudliner, Brian; McAbee, Tom

    2004-07-01

    The detonation of an energetic material is the result of a complex interaction between kinetic chemical reactions and hydrodynamics. Unfortunately, little is known concerning the detailed chemical kinetics of detonations in energetic materials. CHEETAH uses rate laws to treat species with the slowest chemical reactions, while assuming other chemical species are in equilibrium. CHEETAH supports a wide range of elements and condensed detonation products and can also be applied to gas detonations. A sparse hash table of equation of state values is used in CHEETAH to enhance the efficiency of kinetic reaction calculations. For large-scale parallel hydrodynamic calculations, CHEETAH uses parallel communication to update the cache. We present here details of the sparse caching model used in CHEETAH coupled to an ALE hydrocode. To demonstrate the efficiency of modeling using a sparse cache model we consider detonations in energetic materials.
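    The sparse caching idea can be illustrated with a simple memoization scheme keyed on a discretized thermodynamic state, storing only the states actually visited. This is a conceptual sketch only, not CHEETAH's data structure; the bin widths and the stand-in EOS function are arbitrary.

```python
import math

class EOSCache:
    """Memoize expensive EOS evaluations on a sparse grid of (density, energy) states."""
    def __init__(self, eos_func, d_rho=1e-3, d_e=1e-3):
        self.eos_func, self.d_rho, self.d_e = eos_func, d_rho, d_e
        self.table = {}                      # sparse: only visited states are ever stored
        self.hits = self.misses = 0

    def pressure(self, rho, e):
        key = (round(rho / self.d_rho), round(e / self.d_e))   # discretize the state
        if key in self.table:
            self.hits += 1
        else:
            self.misses += 1
            self.table[key] = self.eos_func(key[0] * self.d_rho, key[1] * self.d_e)
        return self.table[key]

def expensive_eos(rho, e):
    """Stand-in for a costly equilibrium equation-of-state evaluation."""
    return 0.4 * rho * e + 0.1 * math.sqrt(rho)

cache = EOSCache(expensive_eos)
for rho, e in [(1.60, 5.0), (1.60, 5.0), (1.6004, 5.0002)]:
    cache.pressure(rho, e)
print("hits:", cache.hits, "misses:", cache.misses, "entries:", len(cache.table))
```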

  4. How Deep and Hot was Earth's Magma Ocean? Combined Experimental Datasets for the Metal-silicate Partitioning of 11 Siderophile Elements - Ni, Co, Mo, W, P, Mn, V, Cr, Ga, Cu and Pd

    NASA Technical Reports Server (NTRS)

    Righter, Kevin

    2008-01-01

    Since approximately 1990 high pressure and temperature (PT) experiments on metal-silicate systems have shown that partition coefficients (D) for siderophile (iron-loving) elements are much different than those measured at low PT conditions. The high PT data have been used to argue for a magma ocean during growth of the early Earth. Initial conclusions were based on experiments and calculations for a small number of elements such as Ni and Co. However, for many elements only a limited number of experimental data were available then, and they only hinted at values of metal-silicate D's at high PT conditions. In the ensuing decades there have been hundreds of new experiments carried out and published on a wide range of siderophile elements. At the same time several different models have been advanced to explain the siderophile elements in the earth's mantle: a) intermediate depth magma ocean; 25-30 GPa, b) deep magma ocean; up to 50 GPa, and c) early reduced and later oxidized magma ocean. Some studies have drawn conclusions based on a small subset of siderophile elements, or a set of elements that provides little leverage on the big picture (like slightly siderophile elements), and no single study has attempted to quantitatively explain more than 5 elements at a time. The purpose of this abstract is to update the predictive expressions outlined by Righter et al. (1997) with new experimental data from the last decade, test the predictive ability of these expressions against independent datasets (there are more data now to do this properly), and to apply the resulting expressions to the siderophile element patterns in Earth's upper mantle. The predictive expressions have the form: ln D = a ln fO2 + b/T + cP/T + d(1 - Xs) + e(1 - Xc) + Σ fi Xi + g. These expressions are guided by the thermodynamics of simple metal-oxide equilibria that control each element, include terms that mimic the activity coefficients of each element in the metal and silicate, and quantify the effect of variable oxygen fugacity. Preliminary results confirm that D(Ni) and D(Co) converge at pressures near 25-30 GPa and approximately 2200 K, and show that D(Pd) and D(Cu) become too low at the PT conditions of the deepest models. Furthermore, models which force fit V and Cr mantle concentrations by metal-silicate equilibrium overlook the fact that at early Earth mantle fO2, these elements will be more compatible in Mg-perovskite and (Fe,Mg)O than in metal. Thus an intermediate depth magma ocean, at 25-30 GPa, 2200 K, and at IW-2, can explain more mantle siderophile element concentrations than other models.
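    A worked numerical sketch of evaluating a predictive expression of this form for a single element follows. Every coefficient and input value is hypothetical; actual fitted values must come from the published regressions (Righter et al. and subsequent updates).

```python
import math

def ln_D(a, b, c, d, e, f_terms, g, ln_fO2, T, P, Xs, Xc, melt_X):
    """ln D = a*ln fO2 + b/T + c*P/T + d*(1 - Xs) + e*(1 - Xc) + sum(f_i*X_i) + g."""
    return (a * ln_fO2 + b / T + c * P / T + d * (1.0 - Xs) + e * (1.0 - Xc)
            + sum(fi * Xi for fi, Xi in zip(f_terms, melt_X)) + g)

# Hypothetical coefficients and magma-ocean conditions (27 GPa, 2200 K, reduced fO2)
D = math.exp(ln_D(a=-0.5, b=3000.0, c=-80.0, d=1.5, e=0.8, f_terms=[0.2, -0.4], g=-1.0,
                  ln_fO2=-4.6, T=2200.0, P=27.0, Xs=0.05, Xc=0.02, melt_X=[0.4, 0.1]))
print(f"predicted metal-silicate D = {D:.1f}")
```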

  5. Dissociable effects of surprise and model update in parietal and anterior cingulate cortex

    PubMed Central

    O’Reilly, Jill X.; Schüffelgen, Urs; Cuell, Steven F.; Behrens, Timothy E. J.; Mars, Rogier B.; Rushworth, Matthew F. S.

    2013-01-01

    Brains use predictive models to facilitate the processing of expected stimuli or planned actions. Under a predictive model, surprising (low probability) stimuli or actions necessitate the immediate reallocation of processing resources, but they can also signal the need to update the underlying predictive model to reflect changes in the environment. Surprise and updating are often correlated in experimental paradigms but are, in fact, distinct constructs that can be formally defined as the Shannon information (IS) and Kullback–Leibler divergence (DKL) associated with an observation. In a saccadic planning task, we observed that distinct behaviors and brain regions are associated with surprise/IS and updating/DKL. Although surprise/IS was associated with behavioral reprogramming as indexed by slower reaction times, as well as with activity in the posterior parietal cortex [human lateral intraparietal area (LIP)], the anterior cingulate cortex (ACC) was specifically activated during updating of the predictive model (DKL). A second saccade-sensitive region in the inferior posterior parietal cortex (human 7a), which has connections to both LIP and ACC, was activated by surprise and modulated by updating. Pupillometry revealed a further dissociation between surprise and updating with an early positive effect of surprise and late negative effect of updating on pupil area. These results give a computational account of the roles of the ACC and two parietal saccade regions, LIP and 7a, by which their involvement in diverse tasks can be understood mechanistically. The dissociation of functional roles between regions within the reorienting/reprogramming network may also inform models of neurological phenomena, such as extinction and Balint syndrome, and neglect. PMID:23986499
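    The two quantities can be made concrete for a discrete predictive distribution, for example over saccade-target locations; the probabilities below are invented for illustration and are not taken from the study.

```python
import numpy as np

def shannon_surprise(p_outcome: float) -> float:
    """I_S = -log p(observation): how unexpected this single outcome was."""
    return -np.log(p_outcome)

def kl_divergence(p_new: np.ndarray, p_old: np.ndarray) -> float:
    """D_KL(new || old): how much the predictive model changed after the observation."""
    return float(np.sum(p_new * np.log(p_new / p_old)))

prior = np.array([0.7, 0.2, 0.1])        # predicted probabilities of three target locations
observed = 2                             # the low-probability target appears
posterior = np.array([0.5, 0.2, 0.3])    # hypothetical updated predictive model

print("surprise I_S =", shannon_surprise(prior[observed]))
print("update D_KL  =", kl_divergence(posterior, prior))
```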

  6. Chemical transport model simulations of organic aerosol in ...

    EPA Pesticide Factsheets

    Gasoline- and diesel-fueled engines are ubiquitous sources of air pollution in urban environments. They emit both primary particulate matter and precursor gases that react to form secondary particulate matter in the atmosphere. In this work, we updated the organic aerosol module and organic emissions inventory of a three-dimensional chemical transport model, the Community Multiscale Air Quality Model (CMAQ), using recent, experimentally derived inputs and parameterizations for mobile sources. The updated model included a revised volatile organic compound (VOC) speciation for mobile sources and secondary organic aerosol (SOA) formation from unspeciated intermediate volatility organic compounds (IVOCs). The updated model was used to simulate air quality in southern California during May and June 2010, when the California Research at the Nexus of Air Quality and Climate Change (CalNex) study was conducted. Compared to the Traditional version of CMAQ, which is commonly used for regulatory applications, the updated model did not significantly alter the predicted organic aerosol (OA) mass concentrations but did substantially improve predictions of OA sources and composition (e.g., POA–SOA split), as well as ambient IVOC concentrations. The updated model, despite substantial differences in emissions and chemistry, performed similarly to a recently released research version of CMAQ (Woody et al., 2016) that did not include the updated VOC and IVOC emissions and SOA data.

  7. Implementation of Free-Formulation-Based Flat Shell Elements into NASA Comet Code and Development of Nonlinear Shallow Shell Element

    NASA Technical Reports Server (NTRS)

    Barut, A.; Madenci, Erdogan; Tessler, A.

    1997-01-01

    This study presents a transient nonlinear finite element analysis within the realm of a multi-body dynamics formulation for determining the dynamic response of a moderately thick laminated shell undergoing a rapid and large rotational motion and nonlinear elastic deformations. Nonlinear strain measure and rotation, as well as 'the transverse shear deformation, are explicitly included in the formulation in order to capture the proper motion-induced stiffness of the laminate. The equations of motion are derived from the virtual work principle. The analysis utilizes a shear deformable shallow shell element along with the co-rotational form of the updated Lagrangian formulation. The shallow shell element formulation is based on the Reissner-Mindlin and Marguerre theory.

  8. Insights on multivariate updates of physical and biogeochemical ocean variables using an Ensemble Kalman Filter and an idealized model of upwelling

    NASA Astrophysics Data System (ADS)

    Yu, Liuqian; Fennel, Katja; Bertino, Laurent; Gharamti, Mohamad El; Thompson, Keith R.

    2018-06-01

    Effective data assimilation methods for incorporating observations into marine biogeochemical models are required to improve hindcasts, nowcasts and forecasts of the ocean's biogeochemical state. Recent assimilation efforts have shown that updating model physics alone can degrade biogeochemical fields while only updating biogeochemical variables may not improve a model's predictive skill when the physical fields are inaccurate. Here we systematically investigate whether multivariate updates of physical and biogeochemical model states are superior to only updating either physical or biogeochemical variables. We conducted a series of twin experiments in an idealized ocean channel that experiences wind-driven upwelling. The forecast model was forced with biased wind stress and perturbed biogeochemical model parameters compared to the model run representing the "truth". Taking advantage of the multivariate nature of the deterministic Ensemble Kalman Filter (DEnKF), we assimilated different combinations of synthetic physical (sea surface height, sea surface temperature and temperature profiles) and biogeochemical (surface chlorophyll and nitrate profiles) observations. We show that when biogeochemical and physical properties are highly correlated (e.g., thermocline and nutricline), multivariate updates of both are essential for improving model skill and can be accomplished by assimilating either physical (e.g., temperature profiles) or biogeochemical (e.g., nutrient profiles) observations. In our idealized domain, the improvement is largely due to a better representation of nutrient upwelling, which results in a more accurate nutrient input into the euphotic zone. In contrast, assimilating surface chlorophyll improves the model state only slightly, because surface chlorophyll contains little information about the vertical density structure. We also show that a degradation of the correlation between observed subsurface temperature and nutrient fields, which has been an issue in several previous assimilation studies, can be reduced by multivariate updates of physical and biogeochemical fields.
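    The mechanism by which a physical observation updates a biogeochemical variable is the ensemble cross-covariance in the Kalman gain. The sketch below uses a plain stochastic EnKF with perturbed observations rather than the DEnKF employed in the paper, and a two-variable toy state (thermocline temperature and nitrate) with made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(42)
n_ens = 50

# Ensemble of the joint state [thermocline temperature (deg C), nitrate at depth (mmol/m3)]
temp = 12.0 + 1.5 * rng.standard_normal(n_ens)
nitrate = 20.0 - 2.0 * (temp - 12.0) + 0.5 * rng.standard_normal(n_ens)   # correlated with temp
X = np.vstack([temp, nitrate])                     # state matrix, shape (2, n_ens)

H = np.array([[1.0, 0.0]])                         # only temperature is observed
y_obs, r_obs = 10.5, 0.2 ** 2                      # observation and its error variance

A = X - X.mean(axis=1, keepdims=True)              # ensemble anomalies
P = A @ A.T / (n_ens - 1)                          # sample covariance (2 x 2)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r_obs)   # Kalman gain (2 x 1)

# Stochastic EnKF analysis: perturbed observations applied to every member
y_pert = y_obs + np.sqrt(r_obs) * rng.standard_normal(n_ens)
X_a = X + K @ (y_pert - (H @ X))                   # nitrate is updated too, via the cross-covariance

print("prior means   :", X.mean(axis=1))
print("analysis means:", X_a.mean(axis=1))
```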

  9. Investigating the Impact on Modeled Ozone Concentrations Using Meteorological Fields From WRF With an Updated Four-Dimensional Data Assimilation Approach

    EPA Science Inventory

    The four-dimensional data assimilation (FDDA) technique in the Weather Research and Forecasting (WRF) meteorological model has recently undergone an important update from the original version. Previous evaluation results have demonstrated that the updated FDDA approach in WRF pr...

  10. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    NASA Technical Reports Server (NTRS)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.
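    As a toy illustration of trend identification for maneuver re-planning (not the actual Aqua/Aura model or data), a least-squares linear fit to a hypothetical maneuver performance parameter might look like:

```python
import numpy as np

# Hypothetical history of a maneuver performance parameter (e.g., achieved/planned delta-v ratio)
t_years = np.array([2010.5, 2011.6, 2012.4, 2013.5, 2014.6, 2015.5, 2016.4, 2017.3])
perf    = np.array([0.995, 0.993, 0.994, 0.991, 0.990, 0.989, 0.990, 0.987])

slope, intercept = np.polyfit(t_years, perf, deg=1)     # least-squares linear trend
predicted_2018 = slope * 2018.5 + intercept              # trend-based value for re-planning
print(f"trend: {slope:+.4e} per year, 2018 prediction: {predicted_2018:.4f}")
```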

  11. Valence-Dependent Belief Updating: Computational Validation

    PubMed Central

    Kuzmanovic, Bojana; Rigoux, Lionel

    2017-01-01

    People tend to update beliefs about their future outcomes in a valence-dependent way: they are likely to incorporate good news and to neglect bad news. However, belief formation is a complex process which depends not only on motivational factors such as the desire for favorable conclusions, but also on multiple cognitive variables such as prior beliefs, knowledge about personal vulnerabilities and resources, and the size of the probabilities and estimation errors. Thus, we applied computational modeling in order to test for valence-induced biases in updating while formally controlling for relevant cognitive factors. We compared biased and unbiased Bayesian models of belief updating, and specified alternative models based on reinforcement learning. The experiment consisted of 80 trials with 80 different adverse future life events. In each trial, participants estimated the base rate of one of these events and estimated their own risk of experiencing the event before and after being confronted with the actual base rate. Belief updates corresponded to the difference between the two self-risk estimates. Valence-dependent updating was assessed by comparing trials with good news (better-than-expected base rates) with trials with bad news (worse-than-expected base rates). After receiving bad relative to good news, participants' updates were smaller and deviated more strongly from rational Bayesian predictions, indicating a valence-induced bias. Model comparison revealed that the biased (i.e., optimistic) Bayesian model of belief updating better accounted for data than the unbiased (i.e., rational) Bayesian model, confirming that the valence of the new information influenced the amount of updating. Moreover, alternative computational modeling based on reinforcement learning demonstrated higher learning rates for good than for bad news, as well as a moderating role of personal knowledge. Finally, in this specific experimental context, the approach based on reinforcement learning was superior to the Bayesian approach. The computational validation of valence-dependent belief updating represents a novel support for a genuine optimism bias in human belief formation. Moreover, the precise control of relevant cognitive variables justifies the conclusion that the motivation to adopt the most favorable self-referential conclusions biases human judgments. PMID:28706499
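    The reinforcement-learning account described above can be reduced to a delta-rule update with separate learning rates for good and bad news; the learning rates and trial values below are illustrative, not the fitted parameters from this study.

```python
def update_self_risk(prior_estimate: float, base_rate: float,
                     lr_good: float = 0.6, lr_bad: float = 0.3) -> float:
    """Valence-dependent update: a larger learning rate when the base rate is better
    (lower) than expected, a smaller one when it is worse (higher) than expected."""
    error = base_rate - prior_estimate
    lr = lr_good if error < 0 else lr_bad          # good news = event less likely than feared
    return prior_estimate + lr * error

# One trial each of good and bad news (risk estimates in percent, hypothetical values)
print(update_self_risk(prior_estimate=40.0, base_rate=20.0))   # good news -> large shift (28.0)
print(update_self_risk(prior_estimate=40.0, base_rate=60.0))   # bad news  -> small shift (46.0)
```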

  12. Valence-Dependent Belief Updating: Computational Validation.

    PubMed

    Kuzmanovic, Bojana; Rigoux, Lionel

    2017-01-01

    People tend to update beliefs about their future outcomes in a valence-dependent way: they are likely to incorporate good news and to neglect bad news. However, belief formation is a complex process which depends not only on motivational factors such as the desire for favorable conclusions, but also on multiple cognitive variables such as prior beliefs, knowledge about personal vulnerabilities and resources, and the size of the probabilities and estimation errors. Thus, we applied computational modeling in order to test for valence-induced biases in updating while formally controlling for relevant cognitive factors. We compared biased and unbiased Bayesian models of belief updating, and specified alternative models based on reinforcement learning. The experiment consisted of 80 trials with 80 different adverse future life events. In each trial, participants estimated the base rate of one of these events and estimated their own risk of experiencing the event before and after being confronted with the actual base rate. Belief updates corresponded to the difference between the two self-risk estimates. Valence-dependent updating was assessed by comparing trials with good news (better-than-expected base rates) with trials with bad news (worse-than-expected base rates). After receiving bad relative to good news, participants' updates were smaller and deviated more strongly from rational Bayesian predictions, indicating a valence-induced bias. Model comparison revealed that the biased (i.e., optimistic) Bayesian model of belief updating better accounted for data than the unbiased (i.e., rational) Bayesian model, confirming that the valence of the new information influenced the amount of updating. Moreover, alternative computational modeling based on reinforcement learning demonstrated higher learning rates for good than for bad news, as well as a moderating role of personal knowledge. Finally, in this specific experimental context, the approach based on reinforcement learning was superior to the Bayesian approach. The computational validation of valence-dependent belief updating represents a novel support for a genuine optimism bias in human belief formation. Moreover, the precise control of relevant cognitive variables justifies the conclusion that the motivation to adopt the most favorable self-referential conclusions biases human judgments.

  13. Attentional focus affects how events are segmented and updated in narrative reading.

    PubMed

    Bailey, Heather R; Kurby, Christopher A; Sargent, Jesse Q; Zacks, Jeffrey M

    2017-08-01

    Readers generate situation models representing described events, but the nature of these representations may differ depending on the reading goals. We assessed whether instructions to pay attention to different situational dimensions affect how individuals structure their situation models (Exp. 1) and how they update these models when situations change (Exp. 2). In Experiment 1, participants read and segmented narrative texts into events. Some readers were oriented to pay specific attention to characters or space. Sentences containing character or spatial-location changes were perceived as event boundaries-particularly if the reader was oriented to characters or space, respectively. In Experiment 2, participants read narratives and responded to recognition probes throughout the texts. Readers who were oriented to the spatial dimension were more likely to update their situation models at spatial changes; all readers tracked the character dimension. The results from both experiments indicated that attention to individual situational dimensions influences how readers segment and update their situation models. More broadly, the results provide evidence for a global situation model updating mechanism that serves to set up new models at important narrative changes.

  14. Road weather management performance measures : 2012 update.

    DOT National Transportation Integrated Search

    1997-01-01

    The goal of the cost analysis of the ITS National Architecture program is twofold. First, the evaluation is to produce a high-level estimate of the expenditures associated with implementing the physical elements and the functional capabilities of ITS...

  15. SmartWay strategic plan : 2007 annual report

    DOT National Transportation Integrated Search

    2007-12-01

    This document presents an update to the ITS Strategic Plan, effective December 2007. The key plan elements are listed below: a definition of ITS, ITS user services relevant to the agency; performance measures to identify roadway segments and corridor...

  16. Meeting Environmental Requirements after a Bridge Collapse

    DOT National Transportation Integrated Search

    2000-11-01

    This document is an update to the Freeway Incident Management Handbook, published in 1991 by the FHWA. It provides detail on important elements of successful incident management programs, as well as field operations. It includes new and advanced inci...

  17. NASA GSFC Mechanical Engineering Latest Inputs for Verification Standards (GEVS) Updates

    NASA Technical Reports Server (NTRS)

    Kaufman, Daniel

    2003-01-01

    This viewgraph presentation provides information on quality control standards in mechanical engineering. The presentation addresses safety, structural loads, nonmetallic composite structural elements, bonded structural joints, externally induced shock, random vibration, acoustic tests, and mechanical function.

  18. Updating RoadHAT : collision diagram builder and HSM elements.

    DOT National Transportation Integrated Search

    2016-01-01

    In order to minimize the losses resulting from traffic crashes, Indiana developed its road safety management methods before the Highway Safety Manual and the SafetyAnalyst became available. The considerable cost of replacing the Indiana current pra...

  19. BOPACE 3-D addendum: The Boeing plastic analysis capabilities for 3-dimensional solids using isoparametric finite elements

    NASA Technical Reports Server (NTRS)

    Vos, R. G.; Straayer, J. W.

    1975-01-01

    Modifications and additions incorporated into the BOPACE 3-D program are described. Updates to the program input data formats, error messages, file usage, size limitations, and overlay schematic are included.

  20. Structural and Acoustic Numerical Modeling of a Curved Composite Honeycomb Panel

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.; Buehrle, Ralph D.; Robinson, Jay H.

    2001-01-01

    The finite and boundary element modeling of the curved section of a composite honeycomb aircraft fuselage sidewall was validated for both structural response and acoustic radiation. The curved panel was modeled in the pre-processor MSC/PATRAN. Geometry models of the curved panel were constructed based on the physical dimensions of the test article. Material properties were obtained from the panel manufacturer. Finite element models were developed to predict the modal parameters for free and supported panel boundary conditions up to a frequency of 600 Hz. Free boundary conditions were simulated by providing soft foam support under the four corners of the panel or by suspending the panel from elastic bands. Supported boundary conditions were obtained by clamping the panel between plastic tubing seated in grooves along the perimeter of a stiff and heavy frame. The frame was installed in the transmission loss window of the Structural Acoustic Loads and Transmission (SALT) facility at NASA Langley Research Center. The structural response of the curved panel due to point force excitation was predicted using MSC/NASTRAN and the radiated sound was computed with COMET/Acoustics. The predictions were compared with the results from experimental modal surveys and forced response tests on the fuselage panel. The finite element models were refined and updated to provide optimum comparison with the measured modal data. Excellent agreement was obtained between the numerical and experimental modal data for the free as well as for the supported boundary conditions. Frequency response functions (FRF) were computed relating the input force excitation at one panel location to the surface acceleration response at five panel locations. Frequency response functions were measured at the same locations on the test specimen and were compared with the calculated FRF values. Good agreement was obtained for the real and imaginary parts of the transfer functions when modal participation was allowed up to 3000 Hz. The validated finite element model was used to predict the surface velocities due to the point force excitation. Good agreement was obtained between the spatial characteristics of the predicted and measured surface velocities. The measured velocity data were input into the acoustic boundary element code to compute the sound radiated by the panel. The predicted sound pressure levels in the far-field of the panel agreed well with the sound pressure levels measured at the same location.
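
    The frequency response functions mentioned above relate a point force input to acceleration responses. A minimal sketch of how such an FRF can be estimated from measured time histories with the standard H1 estimator (cross-spectrum divided by input auto-spectrum) is shown below using SciPy; the synthetic signals and segment length are placeholders, not the test data.

        import numpy as np
        from scipy.signal import csd, welch

        def h1_frf(force, accel, fs, nperseg=4096):
            """H1 frequency response function estimate between a force input and an
            acceleration response: H1(f) = S_fa(f) / S_ff(f)."""
            f, s_ff = welch(force, fs=fs, nperseg=nperseg)        # input auto-spectrum
            _, s_fa = csd(force, accel, fs=fs, nperseg=nperseg)   # input-output cross-spectrum
            return f, s_fa / s_ff

        # Synthetic stand-ins for a measured force and acceleration time history
        fs = 8192.0
        t = np.arange(0, 10, 1 / fs)
        force = np.random.randn(t.size)
        accel = np.roll(force, 5) + 0.1 * np.random.randn(t.size)
        freq, frf = h1_frf(force, accel, fs)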

  1. Evaluation of the groundwater-flow model for the Ohio River alluvial aquifer near Carrollton, Kentucky, updated to conditions in September 2010

    USGS Publications Warehouse

    Unthank, Michael D.

    2013-01-01

    The Ohio River alluvial aquifer near Carrollton, Ky., is an important water resource for the cities of Carrollton and Ghent, as well as for several industries in the area. The groundwater of the aquifer is the primary source of drinking water in the region and a highly valued natural resource that attracts various water-dependent industries because of its quantity and quality. This report evaluates the performance of a numerical model of the groundwater-flow system in the Ohio River alluvial aquifer near Carrollton, Ky., published by the U.S. Geological Survey in 1999. The original model simulated conditions in November 1995 and was updated to simulate groundwater conditions estimated for September 2010. The files from the calibrated steady-state model of November 1995 conditions were imported into MODFLOW-2005 to update the model to conditions in September 2010. The model input files modified as part of this update were the well and recharge files. The design of the updated model and other input files are the same as the original model. The ability of the updated model to match hydrologic conditions for September 2010 was evaluated by comparing water levels measured in wells to those computed by the model. Water-level measurements were available for 48 wells in September 2010. Overall, the updated model underestimated the water levels at 36 of the 48 measured wells. The average difference between measured water levels and model-computed water levels was 3.4 feet and the maximum difference was 10.9 feet. The root-mean-square error of the simulation was 4.45 for all 48 measured water levels. The updated steady-state model could be improved by introducing more accurate and site-specific estimates of selected field parameters, refined model geometry, and additional numerical methods. Collection of field data to better estimate hydraulic parameters, together with continued review of available data and information from area well operators, could provide the model with revised estimates of conductance values for the riverbed and valley wall, hydraulic conductivities for the model layer, and target water levels for future simulations. Additional model layers, a redesigned model grid, and revised boundary conditions could provide a better framework for more accurate simulations. Additional numerical methods would identify possible parameter estimates and determine parameter sensitivities.
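
    The comparison statistics quoted above (mean difference, maximum difference, root-mean-square error) can be reproduced from paired measured and simulated heads with a few lines of NumPy; the sketch below is generic and the sample values are placeholders, not the Carrollton data.

        import numpy as np

        def head_residual_stats(measured, simulated):
            """Summary statistics comparing measured and model-computed water levels (feet)."""
            residuals = measured - simulated
            return {
                "mean_difference_ft": float(np.mean(residuals)),
                "max_abs_difference_ft": float(np.max(np.abs(residuals))),
                "rmse_ft": float(np.sqrt(np.mean(residuals ** 2))),
                "n_underestimated": int(np.sum(residuals > 0)),  # model below measurement
            }

        # The arrays would hold the 48 wells measured in September 2010; values are placeholders
        measured = np.array([431.2, 429.8, 433.5])
        simulated = np.array([428.0, 430.1, 430.2])
        print(head_residual_stats(measured, simulated))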

  2. Comparative Performance Evaluation of Rainfall-runoff Models, Six of Black-box Type and One of Conceptual Type, From The Galway Flow Forecasting System (gffs) Package, Applied On Two Irish Catchments

    NASA Astrophysics Data System (ADS)

    Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.

    The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software package developed at the Department of Engineering Hydrology, of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and conceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall-runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) Model. Comprised of the above suite of models, the system enables the user to calibrate each model individually, initially without updating, and it is capable also of producing combined (i.e. consensus) forecasts using the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of complexity of structure, with corresponding degrees of complication in objective function evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.
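
    As a rough sketch of two of the ingredients described above, the snippet below combines several model forecasts with the Simple Average Method or the Weighted Average Method and then applies a first-order autoregressive error update; the weights and AR coefficient are illustrative values, not calibrated GFFS settings.

        import numpy as np

        def combine_forecasts(forecasts, weights=None):
            """Consensus forecast: Simple Average Method (equal weights) when weights is
            None, otherwise the Weighted Average Method."""
            forecasts = np.asarray(forecasts, dtype=float)
            if weights is None:
                return forecasts.mean(axis=0)
            weights = np.asarray(weights, dtype=float)
            return weights @ forecasts / weights.sum()

        def ar1_update(simulated_now, last_error, phi=0.7):
            """Simple first-order autoregressive (AR) updating: the next forecast error
            is predicted from the last observed simulation error."""
            return simulated_now + phi * last_error

        q_models = [12.4, 10.9, 13.1]                     # three models' flows (m3/s)
        q_sam = combine_forecasts(q_models)
        q_wam = combine_forecasts(q_models, weights=[0.5, 0.2, 0.3])
        q_updated = ar1_update(q_sam, last_error=1.2)     # observed minus simulated at t-1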

  3. The 2006 Kennedy Space Center Range Reference Atmosphere Model Validation Study and Sensitivity Analysis to the Performance of the National Aeronautics and Space Administration's Space Shuttle Vehicle

    NASA Technical Reports Server (NTRS)

    Burns, Lee; Decker, Ryan; Harrington, Brian; Merry, Carl

    2008-01-01

    The Kennedy Space Center (KSC) Range Reference Atmosphere (RRA) is a statistical model that summarizes wind and thermodynamic atmospheric variability from the surface to 70 km. The National Aeronautics and Space Administration's (NASA) Space Shuttle program, which launches from KSC, utilizes the KSC RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the KSC RRA was recently completed. As part of the update, the Natural Environments Branch at NASA's Marshall Space Flight Center (MSFC) conducted a validation study and a comparison analysis against the existing KSC RRA database, version 1983. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed by the JSC/Ascent Flight Design Division to determine the impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.

  4. Sequential assimilation of volcanic monitoring data to quantify eruption potential: Application to Kerinci volcano

    NASA Astrophysics Data System (ADS)

    Zhan, Yan; Gregg, Patricia M.; Chaussard, Estelle; Aoki, Yosuke

    2017-12-01

    Quantifying the eruption potential of a restless volcano requires the ability to model parameters such as overpressure and calculate the host rock stress state as the system evolves. A critical challenge is developing a model-data fusion framework to take advantage of observational data and provide updates of the volcanic system through time. The Ensemble Kalman Filter (EnKF) uses a Monte Carlo approach to assimilate volcanic monitoring data and update models of volcanic unrest, providing time-varying estimates of overpressure and stress. Although the EnKF has been proven effective for forecasting volcanic deformation using synthetic InSAR and GPS data, until now, it has not been applied to assimilate data from an active volcanic system. In this investigation, the EnKF is used to provide a “hindcast” of the 2009 explosive eruption of Kerinci volcano, Indonesia. A two-source analytical model is used to simulate the surface deformation of Kerinci volcano observed by InSAR time-series data and to predict the system evolution. A deep, deflating dike-like source reproduces the subsiding signal on the flanks of the volcano, and a shallow spherical McTigue source reproduces the central uplift. EnKF-predicted parameters are used in finite element models to calculate the host-rock stress state prior to the 2009 eruption. Mohr-Coulomb failure models reveal that the shallow magma reservoir is trending towards tensile failure prior to 2009, which may be the catalyst for the 2009 eruption. Our results illustrate that the EnKF shows significant promise for future applications to forecasting the eruption potential of restless volcanoes and hindcasting the triggering mechanisms of observed eruptions.
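
    A generic sketch of the EnKF analysis step that underlies this kind of assimilation is given below: an ensemble of source-model parameters is corrected toward perturbed observations using the ensemble cross-covariances. This is a textbook formulation with assumed array shapes, not the authors' code.

        import numpy as np

        def enkf_analysis(ensemble, predicted_obs, observations, obs_cov):
            """One Ensemble Kalman Filter analysis step.

            ensemble:      (n_ens, n_param) prior ensemble of source-model parameters
            predicted_obs: (n_ens, n_obs) forward-model predictions for each member
            observations:  (n_obs,) data vector (e.g. InSAR line-of-sight displacements)
            obs_cov:       (n_obs, n_obs) observation-error covariance
            """
            n_ens = ensemble.shape[0]
            X = ensemble - ensemble.mean(axis=0)             # parameter anomalies
            Y = predicted_obs - predicted_obs.mean(axis=0)   # predicted-data anomalies
            p_xy = X.T @ Y / (n_ens - 1)
            p_yy = Y.T @ Y / (n_ens - 1) + obs_cov
            kalman_gain = p_xy @ np.linalg.inv(p_yy)
            # perturb the observations so the analysis ensemble keeps a consistent spread
            perturbed = observations + np.random.multivariate_normal(
                np.zeros(len(observations)), obs_cov, size=n_ens)
            return ensemble + (perturbed - predicted_obs) @ kalman_gain.T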

  5. Multistep method to deal with large datasets in asteroid family classification

    NASA Astrophysics Data System (ADS)

    Knežević, Z.; Milani, A.; Cellino, A.; Novaković, B.; Spoto, F.; Paolicchi, P.

    2014-07-01

    A fast increase in the number of asteroids with accurately determined orbits and with known physical properties makes it more and more challenging to perform, maintain, and update a classification of asteroids into families. We have therefore developed a new approach to the family classification by combining the Hierarchical Clustering Method (HCM) [1] to identify the families with an automated method to add members to already known families. This procedure makes use of the maximum available information, in particular, of that contained in the proper elements catalog [2]. The catalog of proper elements and absolute magnitudes used in our study contains 336 319 numbered asteroids with an information content of 16.31 Mb. The WISE catalog of albedos [3] and SDSS catalog of color indexes [4] contain 94 632 and 59 975 entries, respectively, with a total amount of information of 0.93 Mb. Our procedure makes use of the segmentation of the proper elements catalog by semimajor axis, to deal with a manageable number of objects in each zone, and by inclination, to account for lower density of high-inclination objects. By selecting from the catalog a much smaller number of large asteroids, in the first step, we identify a number of core families; to these, in the second step, we attribute the next layer of smaller objects. In the third step, we remove all the family members from the catalog, and reapply the HCM to the rest; this gives both satellite families which extend the core families and new independent families, consisting mainly of small asteroids. These two cases are separated in the fourth step by attribution of another layer of new members and by merging intersecting families. This leads to a classification with 128 families and 87 095 members. The list of members is updated automatically with each update of the proper elements catalog, and this represents the final and repetitive step of the procedure. Changes in the list of families are not automated.
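
    The core of the Hierarchical Clustering Method is single-linkage clustering of proper elements under a velocity metric with a cutoff. The sketch below illustrates that idea with SciPy using a simplified Zappalà-style metric; the coefficients and the 70 m/s cutoff are illustrative assumptions, and the multistep pipeline itself (zoning, layered attribution, merging) is not reproduced here.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist

        AU = 1.496e11        # m
        GM_SUN = 1.327e20    # m^3 s^-2

        def velocity_distance(u, v):
            """Simplified Zappala-style metric between two asteroids given as
            (a [AU], e, sin i); returns a distance in m/s."""
            a_mean = 0.5 * (u[0] + v[0]) * AU
            n = np.sqrt(GM_SUN / a_mean ** 3)             # mean motion, rad/s
            da, de, dsini = (u[0] - v[0]) / u[0], u[1] - v[1], u[2] - v[2]
            return n * a_mean * np.sqrt(1.25 * da ** 2 + 2.0 * de ** 2 + 2.0 * dsini ** 2)

        def hcm_like_families(proper_elements, v_cut=70.0):
            """Single-linkage ('chaining') clustering of proper elements, the core idea
            of the Hierarchical Clustering Method; v_cut (m/s) is the cutoff velocity."""
            dist = pdist(proper_elements, metric=velocity_distance)
            tree = linkage(dist, method="single")
            return fcluster(tree, t=v_cut, criterion="distance")

        # proper_elements would be an (n_asteroids, 3) array of (a, e, sin i) from the catalog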

  6. On the physics of the emergence of sensorimotor control in the absence of the brain.

    PubMed

    Matsuno, Koichiro

    2015-12-01

    The evolutionary origin of sensorimotor control requires a sort of physical durability, other than Galilean inertia being accessible in third-person description in the present tense. One candidate to address this need is the 'class property' of a material body's durability remaining invariant during the exchange of component elements. Using grammatical tense as a descriptive attribute, this durability is accessible only in the frequent update of the present perfect tense in the present progressive tense at the 'now' of the present moment. In this view, the update of the perfect tense is equated with the onset and occurrence of on/off switching behavior of physical origin underlying the phenomena of sensorimotor control. Notably, the physical update of the perfect tense is specific only to the 'now and here' that is central in the tradition of phenomenology. The phenomena upholding thermodynamics, when taken apart from its theory, are decisive in facilitating the onset of sensorimotor control. Instrumental to the emergence of both life in general and sensorimotor control in particular may be the occurrence of a 'physical and chemical affinity' of the material bodies of whatever type. Such will let the constant exchange of component elements be feasible, so that the class identity equipped with the capacity for measurement is made available within the phenomenon. Material bodies constantly exchanging such component elements would make the material world open to biology by allowing each element to experience the organizational whole from within. The internal observer responsible for the origins of life may do double duty of letting itself be durable on the material basis while observing the conditions making it durable on the linguistic ground. The origins of life appear to us a material phenomenon when they are approached with use of our linguistic tools that can get rid of the strict stipulation of an abstract nature applied to the description of dynamical laws in physics. Copyright © 2015. Published by Elsevier Ltd.

  7. Key algorithms used in GR02: A computer simulation model for predicting tree and stand growth

    Treesearch

    Garrett A. Hughes; Paul E. Sendak

    1985-01-01

    GR02 is an individual tree, distance-independent simulation model for predicting tree and stand growth over time. It performs five major functions during each run: (1) updates diameter at breast height, (2) updates total height, (3) estimates mortality, (4) determines regeneration, and (5) updates crown class.
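
    A schematic of the five-step annual loop described above might look like the following; every equation and probability in it is a placeholder standing in for GR02's actual growth, mortality, regeneration, and crown-class algorithms.

        import random

        def simulate_year(trees, site_index=60.0):
            """One simulated growth year mirroring the five-step structure described for
            GR02; all coefficients below are placeholders, not GR02's algorithms."""
            survivors = []
            for tree in trees:
                tree["dbh"] += 0.15 * site_index / 60.0           # (1) update dbh (inches)
                tree["height"] += 1.2 * site_index / 60.0         # (2) update total height (feet)
                if random.random() < 0.01:                        # (3) estimate mortality
                    continue
                survivors.append(tree)
            for _ in range(random.randint(0, 2)):                 # (4) determine regeneration
                survivors.append({"dbh": 0.1, "height": 1.0, "crown": "overtopped"})
            tallest = max((t["height"] for t in survivors), default=1.0)
            for t in survivors:                                   # (5) update crown class
                t["crown"] = "dominant" if t["height"] > 0.8 * tallest else "intermediate"
            return survivors

        stand = [{"dbh": 5.0, "height": 40.0, "crown": "intermediate"} for _ in range(10)]
        for year in range(20):
            stand = simulate_year(stand)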

  8. Extension of quality-by-design concept to the early development phase of pharmaceutical R&D processes.

    PubMed

    Csóka, Ildikó; Pallagi, Edina; Paál, Tamás L

    2018-03-27

    Here, we propose the extension of the quality-by-design (QbD) concept to also fit the early development phases of pharmaceuticals by adding elements that are currently widely applied, but not yet included in the QbD model in a structured way. These are the introduction of a 'zero' preformulation phase (i.e., selection of drug substance, possible dosage forms and administration routes based on the evaluated therapeutic need); building stakeholders' (industry, patient, and regulatory) requirements into the quality target product profile (QTPP); and the use of modern quality management tools during the composition and process design phase [collecting critical quality attributes (CQAs) and selecting critical process parameters (CPPs)] for (still laboratory-scale) design space (DS) development. Moreover, during industrial scale-up, CQAs and CPPs can be changed; however, we recommend that the existing QbD elements are reconsidered and updated after this phase. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Determination of efficiencies, loss mechanisms, and performance degradation factors in chopper controlled dc vehical motors. Section 2: The time dependent finite element modeling of the electromagnetic field in electrical machines: Methods and applications. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hamilton, H. B.; Strangas, E.

    1980-01-01

    The time dependent solution of the magnetic field is introduced as a method for accounting for the variation, in time, of the machine parameters in predicting and analyzing the performance of the electrical machines. The method of time dependent finite element was used in combination with an also time dependent construction of a grid for the air gap region. The Maxwell stress tensor was used to calculate the airgap torque from the magnetic vector potential distribution. Incremental inductances were defined and calculated as functions of time, depending on eddy currents and saturation. The currents in all the machine circuits were calculated in the time domain based on these inductances, which were continuously updated. The method was applied to a chopper controlled DC series motor used for electric vehicle drive, and to a salient pole sychronous motor with damper bars. Simulation results were compared to experimentally obtained ones.

  10. Oblique Wave-Induced Responses of A VLFS Edged with A Pair of Inclined Perforated Plates

    NASA Astrophysics Data System (ADS)

    Cheng, Yong; Ji, Chun-yan; Zhai, Gang-jun; Oleg, Gaidai

    2018-03-01

    This paper is concerned with the hydroelastic responses of a mat-like, rectangular very large floating structure (VLFS) edged with a pair of horizontal/inclined perforated anti-motion plates in the context of the direct coupling method. The updated Lagrangian formulae are applied to establish the equilibrium equations of the VLFS, and the total potential formula is employed for fluids in the numerical model, including the viscous effect of the perforated plates through Darcy's law. The hybrid finite element-boundary element (FE-BE) method is implemented to determine the response reduction of the VLFS with attached perforated plates under various oblique incident waves. Also, the numerical solutions are validated against a series of experimental tests. The effectiveness of the attached perforated plates in reducing the deflections of the VLFS can be significantly improved by selecting the proper design parameters, such as the porous parameter, submergence depth, plate width and inclination angle, for the given sea conditions.

  11. [Researcher training in the context of collaborative projects: experiences of the Instituto de Medicina Tropical "Alexander von Humboldt", Universidad Peruana Cayetano Heredia].

    PubMed

    Gotuzzo, Eduardo; González, Elsa; Verdonck, Kristien

    2010-09-01

    Research is a key element of human and social development. From this point of view, it involves particular challenges and opportunities for the so-called "developing countries". One approach to those challenges and opportunities comes from the analysis of two interrelated activities: the training of new researchers and the development of research with institutions or researchers external to one's own institution ("collaborative research"). Both activities are essential for consolidating, widening, and updating institutional capabilities for scientific production. We present here the experiences of the Instituto de Medicina Tropical "Alexander von Humboldt" of the Universidad Peruana Cayetano Heredia in relation to the training of new researchers; we discuss the four elements we consider key to this process: the promotion of stimulating research environments, the proactive identification of fellows, complementary mentoring, and the consolidation of networks; and we analyze three successful models of international collaboration for the training of new researchers under different institutional approaches.

  12. Finite element model correlation of a composite UAV wing using modal frequencies

    NASA Astrophysics Data System (ADS)

    Oliver, Joseph A.; Kosmatka, John B.; Hemez, François M.; Farrar, Charles R.

    2007-04-01

    The current work details the implementation of a meta-model based correlation technique on a composite UAV wing test piece and associated finite element (FE) model. This method involves training polynomial models to emulate the FE input-output behavior and then using numerical optimization to produce a set of correlated parameters which can be returned to the FE model. After discussions about the practical implementation, the technique is validated on a composite plate structure and then applied to the UAV wing structure, where it is furthermore compared to a more traditional Newton-Raphson technique which iteratively uses first-order Taylor-series sensitivity. The experimental testpiece wing comprises two graphite/epoxy prepreg and Nomex honeycomb co-cured skins and two prepreg spars bonded together in a secondary process. MSC.Nastran FE models of the four structural components are correlated independently, using modal frequencies as correlation features, before being joined together into the assembled structure and compared to experimentally measured frequencies from the assembled wing in a cantilever configuration. Results show that significant improvements can be made to the assembled model fidelity, with the meta-model procedure producing slightly superior results to Newton-Raphson iteration. Final evaluation of component correlation using the assembled wing comparison showed worse results for each correlation technique, with the meta-model technique worse overall. This can most likely be attributed to difficulty in correlating the open-section spars; however, there is also some question about non-unique update variable combinations in the current configuration, which lead correlation away from physically probable values.
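
    The meta-model idea, training an inexpensive surrogate of the FE modal frequencies over the update parameters and then optimizing the parameters against measured frequencies, can be sketched as below. The quadratic response surface, parameter ranges, and measured values are all illustrative; in practice the training outputs would come from FE runs rather than the toy function used here.

        import numpy as np
        from scipy.optimize import minimize

        # Each row of X is one sampled set of update parameters (e.g. facesheet modulus,
        # core shear modulus); F holds the modal frequencies the FE model returned for
        # that sample. Here a toy function stands in for the actual FE runs.
        rng = np.random.default_rng(0)
        X = rng.uniform([60e9, 40e6], [80e9, 60e6], size=(30, 2))
        F = np.column_stack([
            20 + 1e-10 * X[:, 0] + 5e-8 * X[:, 1],
            55 + 2e-10 * X[:, 0] + 9e-8 * X[:, 1],
        ])

        # Train one quadratic response surface (the meta-model) per modal frequency
        design = np.column_stack([np.ones(len(X)), X, X ** 2, X[:, :1] * X[:, 1:]])
        coeffs, *_ = np.linalg.lstsq(design, F, rcond=None)

        def surrogate(params):
            p = np.asarray(params)
            row = np.concatenate([[1.0], p, p ** 2, [p[0] * p[1]]])
            return row @ coeffs                 # predicted frequencies for all modes

        measured = np.array([29.3, 72.1])       # experimental modal frequencies (Hz), illustrative

        def misfit(params):
            return np.sum(((surrogate(params) - measured) / measured) ** 2)

        result = minimize(misfit, x0=X.mean(axis=0), method="Nelder-Mead")
        print("correlated update parameters:", result.x)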

  13. RSAT: regulatory sequence analysis tools.

    PubMed

    Thomas-Chollier, Morgane; Sand, Olivier; Turatsinze, Jean-Valéry; Janky, Rekin's; Defrance, Matthieu; Vervisch, Eric; Brohée, Sylvain; van Helden, Jacques

    2008-07-01

    The regulatory sequence analysis tools (RSAT, http://rsat.ulb.ac.be/rsat/) is a software suite that integrates a wide collection of modular tools for the detection of cis-regulatory elements in genome sequences. The suite includes programs for sequence retrieval, pattern discovery, phylogenetic footprint detection, pattern matching, genome scanning and feature map drawing. Random controls can be performed with random gene selections or by generating random sequences according to a variety of background models (Bernoulli, Markov). Beyond the original word-based pattern-discovery tools (oligo-analysis and dyad-analysis), we recently added a battery of tools for matrix-based detection of cis-acting elements, with some original features (adaptive background models, Markov-chain estimation of P-values) that do not exist in other matrix-based scanning tools. The web server offers an intuitive interface, where each program can be accessed either separately or connected to the other tools. In addition, the tools are now available as web services, enabling their integration in programmatic workflows. Genomes are regularly updated from various genome repositories (NCBI and EnsEMBL) and 682 organisms are currently supported. Since 1998, the tools have been used by several hundreds of researchers from all over the world. Several predictions made with RSAT were validated experimentally and published.

  14. User's guide to the MESOI diffusion model and to the utility programs UPDATE and LOGRVU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athey, G.F.; Allwine, K.J.; Ramsdell, J.V.

    MESOI is an interactive, Lagrangian puff trajectory diffusion model. The model is documented separately (Ramsdell and Athey, 1981); this report is intended to provide MESOI users with the information needed to successfully conduct model simulations. The user is also provided with guidance in the use of the data file maintenance and review programs, UPDATE and LOGRVU. Complete examples are given for the operation of all three programs, and an appendix documents UPDATE and LOGRVU.

  15. Soil erosion assessment - Mind the gap

    NASA Astrophysics Data System (ADS)

    Kim, Jongho; Ivanov, Valeriy Y.; Fatichi, Simone

    2016-12-01

    Accurate assessment of erosion rates remains an elusive problem because soil loss is strongly nonunique with respect to the main drivers. In addressing the mechanistic causes of erosion responses, we discriminate between macroscale effects of external factors - long studied and referred to as "geomorphic external variability", and microscale effects, introduced as "geomorphic internal variability." The latter source of erosion variations represents the knowledge gap, an overlooked but vital element of geomorphic response, significantly impacting the low predictability skill of deterministic models at field-catchment scales. This is corroborated with experiments using a comprehensive physical model that dynamically updates the soil mass and particle composition. As complete knowledge of microscale conditions for arbitrary location and time is infeasible, we propose that new predictive frameworks of soil erosion should embed stochastic components in deterministic assessments of external and internal types of geomorphic variability.

  16. 33 CFR 385.30 - Master Implementation Sequencing Plan.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... projects of the Plan, including pilot projects and operational elements, based on the best scientific... Florida Water Management District shall also consult with the South Florida Ecosystem Restoration Task...; (ii) Information obtained from pilot projects; (iii) Updated funding information; (iv) Approved...

  17. Basis for the ICRP’s updated biokinetic model for carbon inhaled as CO2

    DOE PAGES

    Leggett, Richard W.

    2017-03-02

    Here, the International Commission on Radiological Protection (ICRP) is updating its biokinetic and dosimetric models for occupational intake of radionuclides (OIR) in a series of reports called the OIR series. This paper describes the basis for the ICRP's updated biokinetic model for inhalation of radiocarbon as carbon dioxide (CO2) gas. The updated model is based on biokinetic data for carbon isotopes inhaled as carbon dioxide or injected or ingested as bicarbonate (HCO3-). The data from these studies are expected to apply equally to internally deposited (or internally produced) carbon dioxide and bicarbonate, based on comparison of excretion rates for the two administered forms and the fact that carbon dioxide and bicarbonate are largely carried in a common form (CO2–HCO3-) in blood. Compared with dose estimates based on current ICRP biokinetic models for inhaled carbon dioxide or ingested carbon, the updated model will result in a somewhat higher dose estimate for 14C inhaled as CO2 and a much lower dose estimate for 14C ingested as bicarbonate.

  18. Basis for the ICRP’s updated biokinetic model for carbon inhaled as CO2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leggett, Richard W.

    Here, the International Commission on Radiological Protection (ICRP) is updating its biokinetic and dosimetric models for occupational intake of radionuclides (OIR) in a series of reports called the OIR series. This paper describes the basis for the ICRP's updated biokinetic model for inhalation of radiocarbon as carbon dioxide (CO2) gas. The updated model is based on biokinetic data for carbon isotopes inhaled as carbon dioxide or injected or ingested as bicarbonate (HCO3-). The data from these studies are expected to apply equally to internally deposited (or internally produced) carbon dioxide and bicarbonate, based on comparison of excretion rates for the two administered forms and the fact that carbon dioxide and bicarbonate are largely carried in a common form (CO2–HCO3-) in blood. Compared with dose estimates based on current ICRP biokinetic models for inhaled carbon dioxide or ingested carbon, the updated model will result in a somewhat higher dose estimate for 14C inhaled as CO2 and a much lower dose estimate for 14C ingested as bicarbonate.

  19. The Construction of Visual-spatial Situation Models in Children's Reading and Their Relation to Reading Comprehension

    PubMed Central

    Barnes, Marcia A.; Raghubar, Kimberly P.; Faulkner, Heather; Denton, Carolyn A.

    2014-01-01

    Readers construct mental models of situations described by text to comprehend what they read, updating these situation models based on explicitly described and inferred information about causal, temporal, and spatial relations. Fluent adult readers update their situation models while reading narrative text based in part on spatial location information that is consistent with the perspective of the protagonist. The current study investigates whether children update spatial situation models in a similar way, whether there are age-related changes in children's formation of spatial situation models during reading, and whether measures of the ability to construct and update spatial situation models are predictive of reading comprehension. Typically-developing children from ages 9 through 16 years (n=81) were familiarized with a physical model of a marketplace. Then the model was covered, and children read stories that described the movement of a protagonist through the marketplace and were administered items requiring memory for both explicitly stated and inferred information about the character's movements. Accuracy of responses and response times were evaluated. Results indicated that: (a) location and object information during reading appeared to be activated and updated not simply from explicit text-based information but from a mental model of the real world situation described by the text; (b) this pattern showed no age-related differences; and (c) the ability to update the situation model of the text based on inferred information, but not explicitly stated information, was uniquely predictive of reading comprehension after accounting for word decoding. PMID:24315376

  20. A groundwater data assimilation application study in the Heihe mid-reach

    NASA Astrophysics Data System (ADS)

    Ragettli, S.; Marti, B. S.; Wolfgang, K.; Li, N.

    2017-12-01

    The present work focuses on modelling of the groundwater flow in the mid-reach of the endorheic river Heihe in the Zhangye oasis (Gansu province) in arid north-west China. In order to optimise the water resources management in the oasis, reliable forecasts of groundwater level development under different management options and environmental boundary conditions have to be produced. To this end, groundwater flow is modelled with Modflow and coupled to an Ensemble Kalman Filter programmed in Matlab. The model is updated with monthly time steps, featuring perturbed boundary conditions to account for uncertainty in model forcing. Constant biases between model and observations have been corrected prior to updating and compared to model runs without bias correction. Different options for data assimilation (states and/or parameters), updating frequency, and measures against filter inbreeding (damping factor, covariance inflation, spatial localization) have been tested against each other. Results show a high dependency of the Ensemble Kalman Filter performance on the selection of observations for data assimilation. For the present regional model, bias correction is necessary for a good filter performance. A combination of spatial localization and covariance inflation is further advisable to reduce filter inbreeding problems. Best performance is achieved if parameter updates are not large, an indication of good prior model calibration. For this groundwater system with constant or slowly changing parameter values, asynchronous updating of parameter values once every five years (with data of the past five years) combined with synchronous updating of the groundwater levels is better suited than synchronous updating of both groundwater levels and parameters at every time step with a damping factor. The filter is not able to correct time lags of signals.

  1. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    EPA Pesticide Factsheets

    The uploaded data consists of the BRACE Na aerosol observations paired with CMAQ model output, the updated model's parameterization of sea salt aerosol emission size distribution, and the model's parameterization of the sea salt emission factor as a function of sea surface temperature. This dataset is associated with the following publication: Gantt, B., J. Kelly, and J. Bash. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2. Geoscientific Model Development. Copernicus Publications, Katlenburg-Lindau, GERMANY, 8: 3733-3746, (2015).

  2. Assessing Robustness Properties in Dynamic Discovery of Ad Hoc Network Services (Briefing Charts)

    DTIC Science & Technology

    2001-10-04

    JINI entities in directed discovery mode. It is part of the SCM_Discovery Module. Sends unicast messages to SCMs on the list of SCMs to be discovered until all SCMs are found. Receives updates from the SCM DB of discovered SCMs and removes SCMs accordingly. NOTE: Failure and... For All (SM, SD, SCM): (SM, SD) IsElementOf SCM registered-services (CC1) implies SCM IsElementOf SM discovered-SCMs. For All

  3. Three-dimensional magnetotelluric inversion including topography using deformed hexahedral edge finite elements, direct solvers and data space Gauss-Newton, parallelized on SMP computers

    NASA Astrophysics Data System (ADS)

    Kordy, M. A.; Wannamaker, P. E.; Maris, V.; Cherkaev, E.; Hill, G. J.

    2014-12-01

    We have developed an algorithm for 3D simulation and inversion of magnetotelluric (MT) responses using deformable hexahedral finite elements that permits incorporation of topography. Direct solvers parallelized on symmetric multiprocessor (SMP), single-chassis workstations with large RAM are used for the forward solution, parameter jacobians, and model update. The forward simulator, jacobians calculations, as well as synthetic and real data inversion are presented. We use first-order edge elements to represent the secondary electric field (E), yielding accuracy O(h) for E and its curl (magnetic field). For very low frequency or small material admittivity, the E-field requires divergence correction. Using Hodge decomposition, correction may be applied after the forward solution is calculated. It allows accurate E-field solutions in dielectric air. The system matrix factorization is computed using the MUMPS library, which shows moderately good scalability through 12 processor cores but limited gains beyond that. The factored matrix is used to calculate the forward response as well as the jacobians of field and MT responses using the reciprocity theorem. Comparison with other codes demonstrates accuracy of our forward calculations. We consider a popular conductive/resistive double brick structure and several topographic models. In particular, the ability of finite elements to represent smooth topographic slopes permits accurate simulation of refraction of electromagnetic waves normal to the slopes at high frequencies. Run time tests indicate that for meshes as large as 150x150x60 elements, MT forward response and jacobians can be calculated in ~2.5 hours per frequency. For inversion, we implemented data space Gauss-Newton method, which offers reduction in memory requirement and a significant speedup of the parameter step versus model space approach. For dense matrix operations we use tiling approach of PLASMA library, which shows very good scalability. In synthetic inversions we examine the importance of including the topography in the inversion and we test different regularization schemes using weighted second norm of model gradient as well as inverting for a static distortion matrix following Miensopust/Avdeeva approach. We also apply our algorithm to invert MT data collected at Mt St Helens.
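
    The memory saving mentioned for the data-space Gauss-Newton step comes from solving an N x N system in the number of data rather than an M x M system in the number of model parameters. A schematic statement of the two equivalent forms, with notation assumed here (J the Jacobian, C_d and C_m the data and model covariances, lambda the regularization parameter) rather than taken from the paper, is:

        % Model-space Gauss-Newton step
        \Delta\mathbf{m} =
          \left(\mathbf{J}^{\mathsf T}\mathbf{C}_d^{-1}\mathbf{J} + \lambda\,\mathbf{C}_m^{-1}\right)^{-1}
          \left[\mathbf{J}^{\mathsf T}\mathbf{C}_d^{-1}\left(\mathbf{d}_{\mathrm{obs}}-\mathbf{d}(\mathbf{m})\right)
                - \lambda\,\mathbf{C}_m^{-1}(\mathbf{m}-\mathbf{m}_{\mathrm{ref}})\right]

        % Equivalent data-space form: only an N x N system in the number of data must be
        % factored, which is the memory and run-time advantage when N << M
        \Delta\mathbf{m} =
          \mathbf{C}_m\mathbf{J}^{\mathsf T}
          \left(\lambda\,\mathbf{C}_d + \mathbf{J}\,\mathbf{C}_m\mathbf{J}^{\mathsf T}\right)^{-1}
          \left[\mathbf{d}_{\mathrm{obs}}-\mathbf{d}(\mathbf{m}) + \mathbf{J}(\mathbf{m}-\mathbf{m}_{\mathrm{ref}})\right]
          - (\mathbf{m}-\mathbf{m}_{\mathrm{ref}})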

  4. A robust component mode synthesis method for stochastic damped vibroacoustics

    NASA Astrophysics Data System (ADS)

    Tran, Quang Hung; Ouisse, Morvan; Bouhaddi, Noureddine

    2010-01-01

    In order to reduce vibrations or sound levels in industrial vibroacoustic problems, the low-cost and efficient way consists in introducing visco- and poro-elastic materials either on the structure or on cavity walls. Depending on the frequency range of interest, several numerical approaches can be used to estimate the behavior of the coupled problem. In the context of low frequency applications related to acoustic cavities with surrounding vibrating structures, the finite elements method (FEM) is one of the most efficient techniques. Nevertheless, industrial problems lead to large FE models which are time-consuming in updating or optimization processes. A classical way to reduce calculation time is the component mode synthesis (CMS) method, whose classical formulation is not always efficient to predict dynamical behavior of structures including visco-elastic and/or poro-elastic patches. Then, to ensure an efficient prediction, the fluid and structural bases used for the model reduction need to be updated as a result of changes in a parametric optimization procedure. For complex models, this leads to prohibitive numerical costs in the optimization phase or for management and propagation of uncertainties in the stochastic vibroacoustic problem. In this paper, the formulation of an alternative CMS method is proposed and compared to classical ( u, p) CMS method: the Ritz basis is completed with static residuals associated to visco-elastic and poro-elastic behaviors. This basis is also enriched by the static response of residual forces due to structural modifications, resulting in a so-called robust basis, also adapted to Monte Carlo simulations for uncertainties propagation using reduced models.

  5. The CRISPRdb database and tools to display CRISPRs and to generate dictionaries of spacers and repeats

    PubMed Central

    Grissa, Ibtissem; Vergnaud, Gilles; Pourcel, Christine

    2007-01-01

    Background In Archaea and Bacteria, the repeated elements called CRISPRs for "clustered regularly interspaced short palindromic repeats" are believed to participate in the defence against viruses. Short sequences called spacers are stored in-between repeated elements. In the current model, motifs comprising spacers and repeats may target an invading DNA and lead to its degradation through a proposed mechanism similar to RNA interference. Analysis of intra-species polymorphism shows that new motifs (one spacer and one repeated element) are added in a polarised fashion. Although their principal characteristics have been described, a lot remains to be discovered on the way CRISPRs are created and evolve. As new genome sequences become available, it appears necessary to develop automated scanning tools to make CRISPR-related information available and to facilitate additional investigations. Description We have produced a program, CRISPRFinder, which identifies CRISPRs and extracts the repeated and unique sequences. Using this software, a database is constructed which is automatically updated monthly from newly released genome sequences. Additional tools were created to allow the alignment of flanking sequences in search of similarities between different loci and to build dictionaries of unique sequences. To date, almost six hundred CRISPRs have been identified in 475 published genomes. Two Archaea out of thirty-seven and about half of Bacteria do not possess a CRISPR. Fine analysis of repeated sequences strongly supports the current view that new motifs are added at one end of the CRISPR adjacent to the putative promoter. Conclusion It is hoped that the availability of a public database, regularly updated and queryable on the web, will help in further dissecting and understanding CRISPR structure and flanking sequence evolution. Subsequent analyses of the intra-species CRISPR polymorphism will be facilitated by CRISPRFinder and the dictionary creator. CRISPRdb is accessible at PMID:17521438

  6. Simulated and observed 2010 floodwater elevations in selected river reaches in the Pawtuxet River Basin, Rhode Island

    USGS Publications Warehouse

    Zarriello, Phillip J.; Olson, Scott A.; Flynn, Robert H.; Strauch, Kellan R.; Murphy, Elizabeth A.

    2014-01-01

    Heavy, persistent rains from late February through March 2010 caused severe flooding that set, or nearly set, peaks of record for streamflows and water levels at many long-term streamgages in Rhode Island. In response to this event, hydraulic models were updated for selected reaches covering about 56 river miles in the Pawtuxet River Basin to simulate water-surface elevations (WSEs) at specified flows and boundary conditions. Reaches modeled included the main stem of the Pawtuxet River, the North and South Branches of the Pawtuxet River, Pocasset River, Simmons Brook, Dry Brook, Meshanticut Brook, Furnace Hill Brook, Flat River, Quidneck Brook, and two unnamed tributaries referred to as South Branch Pawtuxet River Tributary A1 and Tributary A2. All the hydraulic models were updated to Hydrologic Engineering Center-River Analysis System (HEC-RAS) version 4.1.0 using steady-state simulations. Updates to the models included incorporation of new field-survey data at structures, high resolution land-surface elevation data, and updated flood flows from a related study. The models were assessed using high-water marks (HWMs) obtained in a related study following the March– April 2010 flood and the simulated water levels at the 0.2-percent annual exceedance probability (AEP), which is the estimated AEP of the 2010 flood in the basin. HWMs were obtained at 110 sites along the main stem of the Pawtuxet River, the North and South Branches of the Pawtuxet River, Pocasset River, Simmons Brook, Furnace Hill Brook, Flat River, and Quidneck Brook. Differences between the 2010 HWM elevations and the simulated 0.2-percent AEP WSEs from flood insurance studies (FISs) and the updated models developed in this study varied with most differences attributed to the magnitude of the 0.2-percent AEP flows. WSEs from the updated models generally are in closer agreement with the observed 2010 HWMs than with the FIS WSEs. The improved agreement of the updated simulated water elevations to observed 2010 HWMs provides a measure of the hydraulic model performance, which indicates the updated models better represent flooding at other AEPs than the existing FIS models.

  7. Imputation and Model-Based Updating Technique for Annual Forest Inventories

    Treesearch

    Ronald E. McRoberts

    2001-01-01

    The USDA Forest Service is developing an annual inventory system to establish the capability of producing annual estimates of timber volume and related variables. The inventory system features measurement of an annual sample of field plots with options for updating data for plots measured in previous years. One imputation and two model-based updating techniques are...

  8. Acoustic-Modal Testing of the Ares I Launch Abort System Attitude Control Motor Valve

    NASA Technical Reports Server (NTRS)

    Davis, R. Benjamin; Fischbach, Sean R.

    2010-01-01

    The Attitude Control Motor (ACM) is being developed for use in the Launch Abort System (LAS) of NASA's Ares I launch vehicle. The ACM consists of a small solid rocket motor and eight actuated pintle valves that directionally allocate thrust. It has been predicted that significant unsteady pressure fluctuations will exist inside the valves during operation. The dominant frequencies of these oscillations correspond to the lowest several acoustic natural frequencies of the individual valves. An acoustic finite element model of the fluid volume inside the valve has been critical to the prediction of these frequencies and their associated mode shapes. This work describes an effort to experimentally validate the acoustic finite element model of the valve with an acoustic modal test. The modal test involved instrumenting a flight-like valve with six microphones and then exciting the enclosed air with a loudspeaker. The loudspeaker was configured to deliver broadband noise at relatively high sound pressure levels. The acquired microphone signals were post-processed and compared to results generated from the acoustic finite element model. Initial comparisons between the test data and the model results revealed that additional model refinement was necessary. Specifically, the model was updated to implement a complex impedance boundary condition at the entrance to the valve supply tube. This boundary condition models the frequency-dependent impedance that an acoustic wave will encounter as it reaches the end of the supply tube. Upon invoking this boundary condition, significantly improved agreement between the test data and the model was realized.

  9. Seismic tomography of the southern California crust based on spectral-element and adjoint methods

    NASA Astrophysics Data System (ADS)

    Tape, Carl; Liu, Qinya; Maggi, Alessia; Tromp, Jeroen

    2010-01-01

    We iteratively improve a 3-D tomographic model of the southern California crust using numerical simulations of seismic wave propagation based on a spectral-element method (SEM) in combination with an adjoint method. The initial 3-D model is provided by the Southern California Earthquake Center. The data set comprises three-component seismic waveforms (i.e. both body and surface waves), filtered over the period range 2-30 s, from 143 local earthquakes recorded by a network of 203 stations. Time windows for measurements are automatically selected by the FLEXWIN algorithm. The misfit function in the tomographic inversion is based on frequency-dependent multitaper traveltime differences. The gradient of the misfit function and related finite-frequency sensitivity kernels for each earthquake are computed using an adjoint technique. The kernels are combined using a source subspace projection method to compute a model update at each iteration of a gradient-based minimization algorithm. The inversion involved 16 iterations, which required 6800 wavefield simulations. The new crustal model, m16, is described in terms of independent shear (VS) and bulk-sound (VB) wave speed variations. It exhibits strong heterogeneity, including local changes of +/-30 per cent with respect to the initial 3-D model. The model reveals several features that relate to geological observations, such as sedimentary basins, exhumed batholiths, and contrasting lithologies across faults. The quality of the new model is validated by quantifying waveform misfits of full-length seismograms from 91 earthquakes that were not used in the tomographic inversion. The new model provides more accurate synthetic seismograms that will benefit seismic hazard assessment.
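
    Schematically, and with notation assumed here rather than taken from the paper, the inversion minimizes a frequency-dependent traveltime misfit whose gradient is assembled from adjoint event kernels, followed by a gradient-based model update:

        % Frequency-dependent multitaper traveltime misfit over events s, stations r, windows i
        \chi(\mathbf{m}) = \frac{1}{2}\sum_{s,r,i}
          \left[\frac{\Delta T_{s,r,i}(\mathbf{m})}{\sigma_{s,r,i}}\right]^{2}

        % Gradient assembled from adjoint event kernels K_s(x), then a gradient-based update
        % with search direction d_k and step length alpha_k chosen by the minimization algorithm
        \frac{\partial\chi}{\partial\ln\beta(\mathbf{x})} = \sum_{s} K_{s}(\mathbf{x}),
        \qquad
        \mathbf{m}_{k+1} = \mathbf{m}_{k} + \alpha_{k}\,\mathbf{d}_{k}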

  10. Machine learning in updating predictive models of planning and scheduling transportation projects

    DOT National Transportation Integrated Search

    1997-01-01

    A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...

  11. Benefits of Model Updating: A Case Study Using the Micro-Precision Interferometer Testbed

    NASA Technical Reports Server (NTRS)

    Neat, Gregory W.; Kissil, Andrew; Joshi, Sanjay S.

    1997-01-01

    This paper presents a case study on the benefits of model updating using the Micro-Precision Interferometer (MPI) testbed, a full-scale model of a future spaceborne optical interferometer located at JPL.

  12. Preliminary Model of Porphyry Copper Deposits

    USGS Publications Warehouse

    Berger, Byron R.; Ayuso, Robert A.; Wynn, Jeffrey C.; Seal, Robert R.

    2008-01-01

    The U.S. Geological Survey (USGS) Mineral Resources Program develops mineral-deposit models for application in USGS mineral-resource assessments and other mineral resource-related activities within the USGS as well as for nongovernmental applications. Periodic updates of models are published in order to incorporate new concepts and findings on the occurrence, nature, and origin of specific mineral deposit types. This update is a preliminary model of porphyry copper deposits that begins an update process of porphyry copper models published in USGS Bulletin 1693 in 1986. This update includes a greater variety of deposit attributes than were included in the 1986 model as well as more information about each attribute. It also includes an expanded discussion of geophysical and remote sensing attributes and tools useful in resource evaluations, a summary of current theoretical concepts of porphyry copper deposit genesis, and a summary of the environmental attributes of unmined and mined deposits.

  13. Advanced Distribution Management Systems | Grid Modernization | NREL

    Science.gov Websites

    Advanced Distribution Management Systems. Electric utilities are investing in updated grid technologies such as advanced distribution management systems to management testbed for cyber security in power systems. The "advanced" elements of advanced

  14. 78 FR 78939 - 36(b)(1) Arms Sales Notification

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-27

    ... Quantity or Quantities of Articles or Services under Consideration for Purchase: C-130J technical, engineering and software support; software updates and patches; familiarization training for Portable Flight... and contractor technical support services; and other related elements of logistics and program support...

  15. The Essence of Montessori. Spotlight: Updating Our Agendas.

    ERIC Educational Resources Information Center

    Loeffler, Margaret H.

    2002-01-01

    Discusses the essential elements of Montessori educational philosophy and theory, focusing on the integration, development, and maintenance of the four characteristics of normalization (concentration, work, discipline, sociability) into adulthood. Discusses Montessori's view that development and retention of these positive characteristics could be…

  16. Parallel updating and weighting of multiple spatial maps for visual stability during whole body motion

    PubMed Central

    Medendorp, W. P.

    2015-01-01

    It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability in which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than based on single-frame updating mechanisms. PMID:26490289
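
    The reliability-weighted combination of eye- and body-centered estimates described above corresponds to inverse-variance weighting, sketched below; the example numbers are illustrative only.

        def combine_estimates(x_eye, var_eye, x_body, var_body):
            """Reliability-weighted (inverse-variance) fusion of eye-centered and
            body-centered estimates of the same remembered location."""
            w_eye = (1.0 / var_eye) / (1.0 / var_eye + 1.0 / var_body)
            w_body = 1.0 - w_eye
            x_combined = w_eye * x_eye + w_body * x_body
            var_combined = 1.0 / (1.0 / var_eye + 1.0 / var_body)
            return x_combined, var_combined

        # If the eye-centered estimate degrades more with body translation (larger
        # variance), the body-centered estimate dominates the combined percept.
        print(combine_estimates(x_eye=3.2, var_eye=4.0, x_body=2.5, var_body=1.0))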

  17. [Purity Detection Model Update of Maize Seeds Based on Active Learning].

    PubMed

    Tang, Jin-ya; Huang, Min; Zhu, Qi-bing

    2015-08-01

    Seed purity reflects the degree to which a seed lot shows the typical, consistent characteristics of its variety, so improving the reliability and accuracy of seed purity detection is of great importance for guaranteeing seed quality. Hyperspectral imaging can capture the internal and external characteristics of seeds at the same time and has been widely used in nondestructive detection of agricultural products. The essence of nondestructive detection of agricultural products using hyperspectral imaging is to establish a mathematical model between the spectral information and the quality of the products. Since the spectral information is easily affected by the sample growth environment, the stability and generalization of a model weaken when the test samples are harvested from a different origin or year. An active learning algorithm was investigated to add representative samples that expand the sample space of the original model, so as to rapidly update the model's predictive ability. Random selection (RS) and the Kennard-Stone algorithm (KS) were used to compare the model update effect with that of the active learning algorithm. The experimental results indicated that, for different divisions of the sample set (1:1, 3:1, 4:1), the purity detection model for maize seeds from 2010, updated with 40 samples from 2011 selected by the active learning algorithm, increased its prediction accuracy for new 2011 samples from 47%, 33.75%, and 49% to 98.89%, 98.33%, and 98.33%. For the updated 2011 detection model, the prediction accuracy for new 2010 samples increased by 50.83%, 54.58%, and 53.75% to 94.57%, 94.02%, and 94.57% after adding 56 new samples from 2010. The effect of the model updated by the active learning algorithm was also better than that of RS and KS. Therefore, updating the purity detection model of maize seeds with an active learning algorithm is feasible.
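
    A minimal sketch of the active-learning update step, selecting the new-season samples the current model is least certain about and refitting with them, is given below; the least-confidence criterion and the generic classifier interface are illustrative stand-ins for the spectral model actually used.

        import numpy as np

        def update_with_active_learning(model, X_old, y_old, X_new, y_new, n_add=40):
            """Add the n_add new-season samples the current model is least certain about
            to the training set and refit; `model` is any classifier exposing
            predict_proba and fit (e.g. a scikit-learn estimator), already trained on
            the old-season data."""
            proba = model.predict_proba(X_new)
            uncertainty = 1.0 - proba.max(axis=1)       # least-confidence criterion
            pick = np.argsort(uncertainty)[-n_add:]     # most uncertain new samples
            X_train = np.vstack([X_old, X_new[pick]])
            y_train = np.concatenate([y_old, y_new[pick]])
            return model.fit(X_train, y_train)

        # e.g. a model trained on 2010 spectra is updated with 40 selected 2011 samples;
        # X_* are spectral feature matrices and y_* purity labels (placeholders here).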

  18. Simulated and observed 2010 floodwater elevations in the Pawcatuck and Wood Rivers, Rhode Island

    USGS Publications Warehouse

    Zarriello, Phillip J.; Straub, David E.; Smith, Thor E.

    2014-01-01

    Heavy, persistent rains from late February through March 2010 caused severe flooding that set, or nearly set, peaks of record for streamflows and water levels at many long-term U.S. Geological Survey streamgages in Rhode Island. In response to this flood, hydraulic models of the Pawcatuck River (26.9 miles) and Wood River (11.6 miles) were updated from the most recent approved U.S. Department of Homeland Security-Federal Emergency Management Agency flood insurance study (FIS) to simulate water-surface elevations (WSEs) for specified flows and boundary conditions. The hydraulic models were updated to the Hydrologic Engineering Center-River Analysis System (HEC-RAS) using steady-state simulations and incorporated new field-survey data at structures, high-resolution land-surface elevation data, and updated flood flows from a related study. The models were used to simulate the 0.2-percent annual exceedance probability (AEP) flood, which is the AEP determined for the 2010 flood in the Pawcatuck and Wood Rivers. The simulated WSEs were compared to high-water mark (HWM) elevation data obtained in a related study following the March–April 2010 flood, which included 39 HWMs along the Pawcatuck River and 11 HWMs along the Wood River. The 2010 peak flow generally was larger than the 0.2-percent AEP flow, which, in part, caused the FIS and updated-model WSEs to be lower than the 2010 HWMs. The 2010 HWMs for the Pawcatuck River averaged about 1.6 feet (ft) higher than the 0.2-percent AEP WSEs simulated in the updated model and 2.5 ft higher than the WSEs in the FIS. The 2010 HWMs for the Wood River averaged about 1.3 ft higher than the WSEs simulated in the updated model and 2.5 ft higher than the WSEs in the FIS. The improved agreement of the updated simulated water elevations with the observed 2010 HWMs provides a measure of the hydraulic model performance, which indicates that the updated models better represent flooding at other AEPs than the existing FIS models.

  19. Management of undescended testis may be improved with educational updates and new transferring model.

    PubMed

    Yi, Wei; Sheng-de, Wu; Lian-Ju, Shen; Tao, Lin; Da-Wei, He; Guang-Hui, Wei

    2018-05-24

    To investigate whether management of undescended testis (UDT) may be improved with educational updates and a new transferring model among referring providers (RPs). The ages at which orchidopexies were performed in the Children's Hospital of Chongqing Medical University were reviewed. We then proposed educational updates and a new transferring model among RPs. The ages at which orchidopexies were performed after our intervention were collected. Data were represented graphically, and the Chi-square test for trend was used for statistical analysis. A total of 1543 orchidopexies were performed. The median age at orchidopexy did not match the target age of 6-12 months in any subsequent year. A survey of the RPs showed that 48.85% of them recommended an age below 12 months. However, only 25.50% of them would directly make a referral to pediatric surgery specifically at this point. After we proposed the educational updates, tracking the age at orchidopexy revealed a statistically significant downward trend. The management of undescended testis may be improved with educational updates and a new transferring model among primary healthcare practitioners.

  20. Updating the Nomographical Diagrams for Dimensioning the Beams

    NASA Astrophysics Data System (ADS)

    Pop, Maria T.

    2015-12-01

    In order to reduce the time needed for structural design it is strongly recommended to use nomographical diagrams. The basis for creating and updating the nomographical diagrams is the set of charts presented in various technical publications. The updated charts use the same algorithm and calculation elements as the former diagrams, in accordance with the latest prescriptions and European standards. The result is a chart having the same properties as the nomographical diagrams already in use. As a general conclusion, even today, the nomographical diagrams are very easy to use. Given the value of the moment, it is easy to find the necessary reinforcement area and, vice versa, given the reinforcement area, one can find the capable moment. The diagrams therefore remain a useful tool for pre-sizing and designing reinforced concrete sections.
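
    The moment-to-reinforcement relation that such charts encode can be approximated, for a singly reinforced rectangular section, by As = M / (fyd * z) with a lever arm z of roughly 0.9 d. The sketch below is only that simplified textbook relation under assumed material values, not a reproduction of the updated diagrams.

      def required_steel_area(m_ed_knm, d_mm, fyd_mpa=435.0, lever_arm_ratio=0.9):
          """Approximate reinforcement area (mm^2) for a singly reinforced
          rectangular section: As = M / (fyd * z), with z taken as ~0.9 d."""
          z = lever_arm_ratio * d_mm
          return m_ed_knm * 1e6 / (fyd_mpa * z)

      def capable_moment(as_mm2, d_mm, fyd_mpa=435.0, lever_arm_ratio=0.9):
          """Inverse reading of the chart: moment capacity (kNm) for a given As."""
          return as_mm2 * fyd_mpa * lever_arm_ratio * d_mm / 1e6

      print(required_steel_area(120.0, 450.0))   # M = 120 kNm, d = 450 mm
      print(capable_moment(681.0, 450.0))        # back to roughly 120 kNm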

  1. The Navy Precision Optical Interferometer: an update

    NASA Astrophysics Data System (ADS)

    Armstrong, J. T.; Baines, Ellyn K.; Schmitt, Henrique R.; Restaino, Sergio R.; Clark, James H.; Benson, James A.; Hutter, Donald J.; Zavala, Robert T.; van Belle, Gerard T.

    2016-08-01

    We describe the current status of the Navy Precision Optical Interferometer (NPOI), including developments since the last SPIE meeting. The NPOI group has added stations as far as 250 m from the array center and made numerous infrastructure improvements. Science programs include stellar diameters and limb darkening, binary orbits, Be star disks, exoplanet host stars, and progress toward high-resolution stellar surface imaging. Technical and infrastructure projects include on-sky demonstrations of baseline bootstrapping with six array elements and of the VISION beam combiner, control system updates, integration of the long delay lines, and updated firmware for the Classic beam combiner. Our plans to add up to four 1.8 m telescopes are no longer viable, but we have recently acquired separate funding for adding three 1 m AO-equipped telescopes and an infrared beam combiner to the array.

  2. Evolution of plastic anisotropy for high-strain-rate computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiferl, S.K.; Maudlin, P.J.

    1994-12-01

    A model for anisotropic material strength, and for changes in the anisotropy due to plastic strain, is described. This model has been developed for use in high-rate, explicit, Lagrangian multidimensional continuum-mechanics codes. The model handles anisotropies in single-phase materials, in particular the anisotropies due to crystallographic texture--preferred orientations of the single-crystal grains. Textural anisotropies, and the changes in these anisotropies, depend overwhelmingly on the crystal structure of the material and on the deformation history. The changes, particularly for complex deformations, are not amenable to simple analytical forms. To handle this problem, the material model described here includes a texture code, or micromechanical calculation, coupled to a continuum code. The texture code updates grain orientations as a function of tensor plastic strain, and calculates the yield strength in different directions. A yield function is fitted to these yield points. For each computational cell in the continuum simulation, the texture code tracks a particular set of grain orientations. The orientations will change due to the tensor strain history, and the yield function will change accordingly. Hence, the continuum code supplies a tensor strain to the texture code, and the texture code supplies an updated yield function to the continuum code. Since significant texture changes require relatively large strains--typically, a few percent or more--the texture code is not called very often, and the increase in computer time is not excessive. The model was implemented using a finite-element continuum code and a texture code specialized for hexagonal-close-packed crystal structures. The results for several uniaxial stress problems and an explosive-forming problem are shown.
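
    The coupling loop described above (the continuum code supplies accumulated strain, the texture code occasionally returns a refitted yield function) can be sketched as below. The reorientation rule and yield-fit numbers are placeholders chosen only to show the call structure, not the hexagonal-close-packed micromechanics of the actual texture code.

      import numpy as np

      class TextureModel:
          """Stand-in for the polycrystal texture code: tracks grain orientations and
          returns an (anisotropic) yield-strength fit after each large strain step."""
          def __init__(self, n_grains=500, seed=1):
              self.angles = np.random.default_rng(seed).uniform(0, np.pi, n_grains)

          def update(self, accumulated_strain):
              # Placeholder reorientation: grains drift toward the loading axis.
              self.angles *= np.exp(-abs(accumulated_strain))
              return self.fit_yield_function()

          def fit_yield_function(self):
              # Placeholder anisotropy measure derived from the orientation spread.
              spread = self.angles.std()
              return {"sigma_y_axial": 300.0 + 50.0 * (1.0 - spread),
                      "sigma_y_transverse": 300.0 - 20.0 * (1.0 - spread)}

      texture = TextureModel()
      yield_fn = texture.fit_yield_function()
      accumulated = 0.0
      for step in range(100):                      # continuum time steps
          d_eps = 0.001                            # strain increment from the continuum code
          accumulated += d_eps
          if accumulated >= 0.02:                  # call the texture code only every few % strain
              yield_fn = texture.update(accumulated)
              accumulated = 0.0
      print(yield_fn)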

  3. Retractor-induced brain shift compensation in image-guided neurosurgery

    NASA Astrophysics Data System (ADS)

    Fan, Xiaoyao; Ji, Songbai; Hartov, Alex; Roberts, David; Paulsen, Keith

    2013-03-01

    In image-guided neurosurgery, intraoperative brain shift significantly degrades the accuracy of neuronavigation that is based solely on preoperative magnetic resonance images (pMR). To compensate for brain deformation and to maintain the accuracy in image guidance achieved at the start of surgery, biomechanical models have been developed to simulate brain deformation and to produce model-updated MR images (uMR) that compensate for brain shift. To date, most studies have focused on shift compensation at early stages of surgery (i.e., updated images are only produced after craniotomy and durotomy). Simulating surgical events at later stages, such as retraction and tissue resection, is perhaps clinically more relevant because of the typically much larger magnitudes of brain deformation. However, these surgical events are substantially more complex in nature, thereby posing significant challenges for model-based brain shift compensation strategies. In this study, we present results from an initial investigation to simulate retractor-induced brain deformation through a biomechanical finite element (FE) model in which whole-brain deformation assimilated from intraoperative data was used to produce uMR for improved accuracy in image guidance. Specifically, intensity-encoded 3D surface profiles at the exposed cortical area were reconstructed from intraoperative stereovision (iSV) images before and after tissue retraction. Retractor-induced surface displacements were then derived by coregistering the surfaces and served as sparse displacement data to drive the FE model. With one patient case, we show that our technique is able to produce uMR that agrees well with the reconstructed iSV surface after retraction. The computational cost to simulate retractor-induced brain deformation was approximately 10 min. In addition, our approach introduces minimal interruption to the surgical workflow, suggesting the potential for its clinical application.

  4. Formulation of consumables management models: Consumables flight planning worksheet update. [space shuttles

    NASA Technical Reports Server (NTRS)

    Newman, C. M.

    1977-01-01

    The updated consumables flight planning worksheet (CFPWS) is documented. The update includes: (1) additional consumables: ECLSS ammonia, APU propellant, HYD water; (2) additional on orbit activity for development flight instrumentation (DFI); (3) updated use factors for all consumables; and (4) sources and derivations of the use factors.

  5. Nonequivalence of updating rules in evolutionary games under high mutation rates.

    PubMed

    Kaiping, G A; Jacobs, G S; Cox, S J; Sluckin, T J

    2014-10-01

    Moran processes are often used to model selection in evolutionary simulations. The updating rule in Moran processes is a birth-death process, i.e., selection according to fitness of an individual to give birth, followed by the death of a random individual. For well-mixed populations with only two strategies this updating rule is known to be equivalent to selecting unfit individuals for death and then selecting randomly for procreation (biased death-birth process). It is, however, known that this equivalence does not hold when considering structured populations. Here we study whether changing the updating rule can also have an effect in well-mixed populations in the presence of more than two strategies and high mutation rates. We find, using three models from different areas of evolutionary simulation, that the choice of updating rule can change model results. We show, e.g., that going from the birth-death process to the death-birth process can change a public goods game with punishment from containing mostly defectors to having a majority of cooperative strategies. From the examples given we derive guidelines indicating when the choice of the updating rule can be expected to have an impact on the results of the model.
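
    The two update rules contrasted above can be written down directly. The sketch below implements both for a well-mixed population with mutation; the two fixed fitness values and the mutation rate are illustrative stand-ins, not one of the paper's three models.

      import numpy as np

      rng = np.random.default_rng(42)
      FITNESS = {0: 1.0, 1: 1.5}           # two strategies with fixed fitness (illustrative)
      MU = 0.05                            # mutation rate

      def mutate(s):
          return rng.integers(0, 2) if rng.random() < MU else s

      def birth_death(pop):
          f = np.array([FITNESS[s] for s in pop])
          parent = rng.choice(len(pop), p=f / f.sum())     # fit individuals reproduce
          dead = rng.integers(len(pop))                    # a random individual dies
          pop[dead] = mutate(pop[parent])

      def death_birth(pop):
          f = np.array([FITNESS[s] for s in pop])
          inv = 1.0 / f
          dead = rng.choice(len(pop), p=inv / inv.sum())   # unfit individuals die
          parent = rng.integers(len(pop))                  # a random individual reproduces
          pop[dead] = mutate(pop[parent])

      for rule in (birth_death, death_birth):
          pop = list(rng.integers(0, 2, 100))
          for _ in range(20000):
              rule(pop)
          print(rule.__name__, "final fraction of strategy 1:", np.mean(pop))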

  6. Nonequivalence of updating rules in evolutionary games under high mutation rates

    NASA Astrophysics Data System (ADS)

    Kaiping, G. A.; Jacobs, G. S.; Cox, S. J.; Sluckin, T. J.

    2014-10-01

    Moran processes are often used to model selection in evolutionary simulations. The updating rule in Moran processes is a birth-death process, i.e., selection according to fitness of an individual to give birth, followed by the death of a random individual. For well-mixed populations with only two strategies this updating rule is known to be equivalent to selecting unfit individuals for death and then selecting randomly for procreation (biased death-birth process). It is, however, known that this equivalence does not hold when considering structured populations. Here we study whether changing the updating rule can also have an effect in well-mixed populations in the presence of more than two strategies and high mutation rates. We find, using three models from different areas of evolutionary simulation, that the choice of updating rule can change model results. We show, e.g., that going from the birth-death process to the death-birth process can change a public goods game with punishment from containing mostly defectors to having a majority of cooperative strategies. From the examples given we derive guidelines indicating when the choice of the updating rule can be expected to have an impact on the results of the model.

  7. Finite element implementation of a thermo-damage-viscoelastic constitutive model for hydroxyl-terminated polybutadiene composite propellant

    NASA Astrophysics Data System (ADS)

    Xu, Jinsheng; Han, Long; Zheng, Jian; Chen, Xiong; Zhou, Changsheng

    2017-11-01

    A thermo-damage-viscoelastic model for hydroxyl-terminated polybutadiene (HTPB) composite propellant, accounting for the effect of temperature, was implemented in ABAQUS. The damage evolution law of the model has the same form as the crack growth equation for viscoelastic materials, and only a single damage variable S is considered. The HTPB propellant was treated as an isotropic material, and the deviatoric and volumetric stress-strain relations are decoupled and described by the shear and bulk relaxation moduli, respectively. The stress update equations were expressed in terms of the principal stresses σ_{ii}^R and the rotation tensor M, and the Jacobian matrix in the global coordinate system, J_{ijkl}, was obtained according to the fourth-order tensor transformation rules. Two models having complex stress states were used to verify the accuracy of the constitutive model. The test results showed good agreement with the strain responses of characteristic points measured by a contactless optical deformation test system, which illustrates that the thermo-damage-viscoelastic model performs well in describing the mechanical properties of an HTPB propellant.
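
    A one-dimensional sketch of the ingredients named above: a Prony-series (hereditary) stress update combined with a single scalar damage variable whose growth rate follows a power of stress. All material constants and the damage-law exponents below are assumed placeholders, not the paper's HTPB parameters or its tensorial implementation.

      import numpy as np

      # Illustrative Prony-series parameters (not the paper's HTPB values).
      E_INF, E_I = 2.0, np.array([4.0, 1.5])          # MPa
      TAU_I = np.array([0.1, 10.0])                   # s
      C_DMG, M_DMG = 1e-3, 2.0                        # placeholder damage-evolution constants

      def simulate(strain_rate=0.05, t_end=20.0, dt=0.01):
          n = int(t_end / dt)
          q = np.zeros_like(E_I)        # internal (hereditary) stress variables
          eps, S = 0.0, 0.0             # strain and scalar damage variable
          history = []
          for _ in range(n):
              d_eps = strain_rate * dt
              eps += d_eps
              decay = np.exp(-dt / TAU_I)
              # Recursive update of the hereditary stresses (linear strain over the step).
              q = decay * q + E_I * d_eps * (1.0 - decay) * TAU_I / dt
              sigma_ve = E_INF * eps + q.sum()
              S = min(S + C_DMG * abs(sigma_ve) ** M_DMG * dt, 1.0)   # crack-growth-like law
              history.append((1.0 - S) * sigma_ve)                    # damaged (nominal) stress
          return np.array(history)

      print(simulate()[::500])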

  8. Finite Element Simulation of Compression Molding of Woven Fabric Carbon Fiber/Epoxy Composites: Part I Material Model Development

    DOE PAGES

    Li, Yang; Zhao, Qiangsheng; Mirdamadi, Mansour; ...

    2016-01-06

    Woven fabric carbon fiber/epoxy composites made through compression molding are one of the promising material choices for the vehicle light-weighting strategy. Previous studies have shown that the processing conditions can have a substantial influence on the performance of this type of material. Therefore, the optimization of the compression molding process is of great importance to manufacturing practice. An efficient way to achieve the optimized design of this process would be through conducting finite element (FE) simulations of compression molding for woven fabric carbon fiber/epoxy composites. However, performing such a simulation remains a challenging task for FE, as multiple types of physics are involved during the compression molding process, including the epoxy resin curing and the complex mechanical behavior of the woven fabric structure. In the present study, the FE simulation of the compression molding process of resin-based woven fabric composites at the continuum level is conducted, which is enabled by the implementation of an integrated material modeling methodology in LS-DYNA. Specifically, the chemo-thermo-mechanical problem of compression molding is solved through the coupling of three material models, i.e., one thermal model for the temperature history in the resin, one mechanical model to update the curing-dependent properties of the resin, and another mechanical model to simulate the behavior of the woven fabric composites. Preliminary simulations of the carbon fiber/epoxy woven fabric composites in LS-DYNA are presented as a demonstration, while validations and models with real part geometry are planned for future work.
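
    The resin-side coupling mentioned above (a chemical model advancing the degree of cure, which then feeds curing-dependent stiffness to the mechanical model) can be illustrated with a minimal nth-order Arrhenius cure law and a linear stiffness interpolation. The rate constants, modulus bounds, and temperature schedule below are assumptions for illustration, not the LS-DYNA material cards used in the paper.

      import numpy as np

      # Illustrative epoxy parameters (Arrhenius nth-order cure); not the paper's values.
      A, EA, R, N_ORD = 1.0e5, 6.0e4, 8.314, 1.5
      E_UNCURED, E_CURED = 0.01e9, 3.0e9      # resin modulus bounds in Pa

      def cure_step(alpha, temp_k, dt):
          """Advance the degree of cure with a simple nth-order rate law."""
          rate = A * np.exp(-EA / (R * temp_k)) * (1.0 - alpha) ** N_ORD
          return min(alpha + rate * dt, 1.0)

      def resin_modulus(alpha):
          """Curing-dependent stiffness handed to the mechanical model (linear mixing sketch)."""
          return E_UNCURED + alpha * (E_CURED - E_UNCURED)

      alpha, dt = 0.0, 1.0
      for step in range(600):                       # 10 minutes of molding
          temp_k = 420.0                            # temperature from the thermal model
          alpha = cure_step(alpha, temp_k, dt)      # chemical model
          E_resin = resin_modulus(alpha)            # fed to the composite mechanical model
      print(round(alpha, 3), f"{E_resin:.2e}")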

  9. Isotopic compositions of the elements 2013 (IUPAC Technical Report)

    USGS Publications Warehouse

    Meija, Juris; Coplen, Tyler B.; Berglund, Michael; Brand, Willi A.; De Bièvre, Paul; Gröning, Manfred; Holden, Norman E.; Irrgeher, Johanna; Loss, Robert D.; Walczyk, Thomas; Prohaska, Thomas

    2016-01-01

    The Commission on Isotopic Abundances and Atomic Weights (ciaaw.org) of the International Union of Pure and Applied Chemistry (iupac.org) has revised the Table of Isotopic Compositions of the Elements (TICE). The update involved a critical evaluation of the recently published literature. The new TICE 2013 includes evaluated data from the “best measurement” of the isotopic abundances in a single sample, along with a set of representative isotopic abundances and uncertainties that accommodate known variations in normal terrestrial materials.

  10. Nonapplicability of linear finite element programs to the stress analysis of tires

    NASA Technical Reports Server (NTRS)

    Durand, M.; Jankovich, E.

    1972-01-01

    A static finite element stress analysis of an inflated radial car tire was carried out. The deformed shape of the sidewall presents outward bulging. The analysis of a homogeneous isotropic toroidal shell shows that the problem is common to all solids of this type. The study suggests that the geometric stiffness due to the inflation pressure has to be taken into account. Also, the resulting large displacements make it necessary for the geometry to be updated at each load step.

  11. The diagnostic value of specific IgE to Ara h 2 to predict peanut allergy in children is comparable to a validated and updated diagnostic prediction model.

    PubMed

    Klemans, Rob J B; Otte, Dianne; Knol, Mirjam; Knol, Edward F; Meijer, Yolanda; Gmelig-Meyling, Frits H J; Bruijnzeel-Koomen, Carla A F M; Knulst, André C; Pasmans, Suzanne G M A

    2013-01-01

    A diagnostic prediction model for peanut allergy in children was recently published, using 6 predictors: sex, age, history, skin prick test, peanut specific immunoglobulin E (sIgE), and total IgE minus peanut sIgE. The aims were to validate this model and update it by adding allergic rhinitis, atopic dermatitis, and sIgE to peanut components Ara h 1, 2, 3, and 8 as candidate predictors, and to develop a new model based only on sIgE to peanut components. Validation was performed by testing discrimination (diagnostic value) with the area under the receiver operating characteristic curve and calibration (agreement between predicted and observed frequencies of peanut allergy) with the Hosmer-Lemeshow test and a calibration plot. The performance of the (updated) models was analyzed in the same way. Validation of the model in 100 patients showed good discrimination (88%) but poor calibration (P < .001). In the updating process, age, history, and the additional candidate predictors did not significantly increase discrimination, which was 94%, leaving only 4 predictors of the original model: sex, skin prick test, peanut sIgE, and total IgE minus sIgE. When building a model with sIgE to peanut components, Ara h 2 was the only predictor, with a discriminative ability of 90%. Cutoff values with 100% positive and negative predictive values could be calculated for both the updated model and sIgE to Ara h 2. In this way, the outcome of the food challenge could be predicted with 100% accuracy in 59% (updated model) and 50% (Ara h 2) of the patients. Discrimination of the validated model was good; however, calibration was poor. The discriminative ability of Ara h 2 was almost comparable to that of the updated model, containing 4 predictors. With both models, the need for peanut challenges could be reduced by at least 50%. Copyright © 2012 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.

  12. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-04-15

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.

  13. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  14. An interval model updating strategy using interval response surface models

    NASA Astrophysics Data System (ADS)

    Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin

    2015-08-01

    Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. In practice, however, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information about a structure. In that case an interval model updating procedure shows its superiority in terms of problem simplification, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be largely avoided, leading to accurate estimation of parameter intervals. Meanwhile, the establishment of an interval inverse problem is highly simplified, accompanied by a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and against a set of experimentally tested steel plates.
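
    The idea of evaluating response bounds on a fitted response surface rather than through interval arithmetic can be illustrated as follows. The quadratic surrogate, the toy two-spring "structure", and the parameter intervals are assumptions for illustration only, not the interval response surface formulation developed in the paper.

      import numpy as np
      from itertools import product

      def true_frequency(k1, k2):
          """Stand-in 'measurement': first natural frequency of a toy two-spring system."""
          return np.sqrt(k1 + 0.5 * k2)

      # Fit a quadratic response surface on samples of the two stiffness parameters.
      rng = np.random.default_rng(3)
      k_samples = rng.uniform([0.5, 0.5], [2.0, 2.0], size=(50, 2))
      y = true_frequency(k_samples[:, 0], k_samples[:, 1])
      X = np.column_stack([np.ones(50), k_samples, k_samples ** 2,
                           k_samples[:, 0] * k_samples[:, 1]])
      coeff, *_ = np.linalg.lstsq(X, y, rcond=None)

      def rs_eval(k1, k2):
          return coeff @ np.array([1.0, k1, k2, k1 ** 2, k2 ** 2, k1 * k2])

      # Response bounds over a parameter interval box, obtained by scanning the surface
      # itself instead of using interval arithmetic, so the interval width is not inflated.
      k1_int, k2_int = (0.8, 1.2), (1.0, 1.5)
      grid = [rs_eval(a, b) for a, b in product(np.linspace(*k1_int, 21),
                                                np.linspace(*k2_int, 21))]
      print("frequency interval ~", round(min(grid), 3), round(max(grid), 3))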

  15. Working paper : national costs of the metropolitan ITS infrastructure : update to the FHWA 1995 report

    DOT National Transportation Integrated Search

    2001-07-01

    This working paper has been prepared to provide new estimates of the costs to deploy Intelligent Transportation System (ITS) infrastructure elements in the largest metropolitan areas in the United States. It builds upon estimates that were distribute...

  16. Working Paper : national costs of the metropolitan ITS infrastructure : update to the FHWA 1995 report

    DOT National Transportation Integrated Search

    2000-08-01

    This working paper has been prepared to provide new estimates of the costs to deploy Intelligent Transportation System (ITS) infrastructure elements in the largest metropolitan areas in the United States. It builds upon estimates that were distribute...

  17. Maintaining tumor targeting accuracy in real-time motion compensation systems for respiration-induced tumor motion.

    PubMed

    Malinowski, Kathleen; McAvoy, Thomas J; George, Rohini; Dieterich, Sonja; D'Souza, Warren D

    2013-07-01

    To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥ 3 mm), and always (approximately once per minute). Radial tumor displacement prediction errors (mean ± standard deviation) for the four schema described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization.
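
    A minimal sketch of the surrogate-triggered update idea described above: a partial-least-squares model maps external marker positions to tumor position, and the model is refit only when the surrogate signal drifts outside simple confidence limits derived from earlier measurements. The toy breathing signal, the three-marker geometry, and the 3-sigma-style limit are assumptions for illustration, not the study's metrics.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(7)

      def snapshot(t):
          """Toy concurrent measurement: three external markers and the tumor position;
          the marker-tumor relationship drifts slowly to mimic intrafraction changes."""
          markers = np.array([np.sin(0.25 * t), np.sin(0.25 * t + 0.3), np.cos(0.25 * t)])
          markers = markers + rng.normal(0.0, 0.02, 3)
          tumor = 10.0 * markers[0] + 0.02 * t + rng.normal(0.0, 0.2)
          return markers, tumor

      # Initial model from the first six concurrent measurements.
      pairs = [snapshot(t) for t in range(6)]
      X, y = np.array([p[0] for p in pairs]), np.array([p[1] for p in pairs])
      model = PLSRegression(n_components=2).fit(X, y)
      baseline, limit = X.mean(axis=0), 3.0 * X.std(axis=0) + 0.05

      updates = 0
      for t in range(6, 300):
          markers, tumor = snapshot(t)
          X, y = np.vstack([X, markers]), np.append(y, tumor)
          if np.any(np.abs(markers - baseline) > limit):      # trigger uses external data only
              model = PLSRegression(n_components=2).fit(X, y)
              baseline, limit = X.mean(axis=0), 3.0 * X.std(axis=0) + 0.05
              updates += 1
      print("model updates triggered over the fraction:", updates)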

  18. Updating the Behavior Engineering Model.

    ERIC Educational Resources Information Center

    Chevalier, Roger

    2003-01-01

    Considers Thomas Gilbert's Behavior Engineering Model as a tool for systematically identifying barriers to individual and organizational performance. Includes a detailed case study and a performance aid that incorporates gap analysis, cause analysis, and force field analysis to update the original model. (Author/LRW)

  19. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    USGS Publications Warehouse

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  20. Information dissemination model for social media with constant updates

    NASA Astrophysics Data System (ADS)

    Zhu, Hui; Wu, Heng; Cao, Jin; Fu, Gang; Li, Hui

    2018-07-01

    With the development of social media tools and the pervasiveness of smart terminals, social media has become a significant source of information for many individuals. However, false information can spread rapidly, which may result in negative social impacts and serious economic losses. Thus, reducing the unfavorable effects of false information has become an urgent challenge. In this paper, a new competitive model called DMCU is proposed to describe the dissemination of information with constant updates in social media. In the model, we focus on the competitive relationship between the original false information and the updated information, and then define the priority of the related information. To evaluate the effectiveness of the proposed model, data sets containing actual social media activity are utilized in experiments. Simulation results demonstrate that the DMCU model can precisely describe the process of information dissemination with constant updates, and that it can be used to forecast information dissemination trends on social media.

  1. Seismic hazard in the eastern United States

    USGS Publications Warehouse

    Mueller, Charles; Boyd, Oliver; Petersen, Mark D.; Moschetti, Morgan P.; Rezaeian, Sanaz; Shumway, Allison

    2015-01-01

    The U.S. Geological Survey seismic hazard maps for the central and eastern United States were updated in 2014. We analyze results and changes for the eastern part of the region. Ratio maps are presented, along with tables of ground motions and deaggregations for selected cities. The Charleston fault model was revised, and a new fault source for Charlevoix was added. Background seismicity sources utilized an updated catalog, revised completeness and recurrence models, and a new adaptive smoothing procedure. Maximum-magnitude models and ground motion models were also updated. Broad, regional hazard reductions of 5%–20% are mostly attributed to new ground motion models with stronger near-source attenuation. The revised Charleston fault geometry redistributes local hazard, and the new Charlevoix source increases hazard in northern New England. Strong increases in mid- to high-frequency hazard at some locations—for example, southern New Hampshire, central Virginia, and eastern Tennessee—are attributed to updated catalogs and/or smoothing.

  2. Gaining ground in the modeling of land-use change greenhouse gas emissions associated with biofuel production

    NASA Astrophysics Data System (ADS)

    Dunn, J.; Mueller, S.; Kwon, H.; Wang, M.; Wander, M.

    2012-12-01

    Land-use change (LUC) resulting from biofuel feedstock production and the associated greenhouse gas (GHG) emissions are a hotly-debated aspect of biofuels. Certainly, LUC GHG emissions are one of the most uncertain elements in life cycle analyses (LCA) of biofuels. To estimate LUC GHG emissions, two sets of data are necessary. First, information on the amount and type of land that is converted to biofuel feedstock production is required. These data are typically generated through application of computable general equilibrium (CGE) models such as Purdue University's Global Trade Analysis Project (GTAP) model. Second, soil carbon content data for the affected land types is essential. Recently, Argonne National Laboratory's Carbon Calculator for Land Use Change from Biofuels Production (CCLUB) has been updated with CGE modeling results that estimate the amount and type of LUC world-wide from production of ethanol from corn, corn stover, miscanthus, and switchgrass (Mueller et al. 2012). Moreover, we have developed state-specific carbon content data, determined through modeling with CENTURY, for the two most dominant soil types in the conterminous 48 U.S. states (Kwon et al. 2012) to enable finer-resolution results for domestic LUC GHG emissions for these ethanol production scenarios. Of the feedstocks examined, CCLUB estimates that LUC GHG emissions are highest for corn ethanol (9.1 g CO2e/MJ ethanol) and lowest for miscanthus (-12 g CO2e/MJ ethanol). We will present key observations from CCLUB results incorporated into Argonne National Laboratory's Greenhouse Gases, Regulated Emissions, and Energy use in Transportation (GREET) model, which is a LCA tool for transportation fuels and advanced vehicle technologies. We will discuss selected issues in this modeling, including the sensitivity of domestic soil carbon emission factors to modeling parameters and assumptions about the fate of harvested wood products. Further, we will discuss efforts to update CCLUB with county-level soil carbon emission factors and updated international soil carbon emission factors. Finally, we will examine data needs for improved LUC GHG calculations in both the modeling of land conversion and soil carbon content. Kwon, H. Y., Wander, M. M., Mueller, S., Dunn, J. B. "Modeling state-level soil carbon emission factors under various scenarios for direct land use change associated with United States biofuel feedstock production." Biomass and Bioenergy. Under Review. Mueller, S., Dunn, J. B., Wang, M. "Carbon Calculator for Land Use Change from Biofuels Production (CCLUB) Users' Manual and Technical Documentation." May 2012. ANL/ESD/12-5. Available at http://greet.es.anl.gov/publication-cclub-manual.

  3. Disruption of the Right Temporoparietal Junction Impairs Probabilistic Belief Updating.

    PubMed

    Mengotti, Paola; Dombert, Pascasie L; Fink, Gereon R; Vossel, Simone

    2017-05-31

    Generating and updating probabilistic models of the environment is a fundamental modus operandi of the human brain. Although crucial for various cognitive functions, the neural mechanisms of these inference processes remain to be elucidated. Here, we show the causal involvement of the right temporoparietal junction (rTPJ) in updating probabilistic beliefs and we provide new insights into the chronometry of the process by combining online transcranial magnetic stimulation (TMS) with computational modeling of behavioral responses. Female and male participants performed a modified location-cueing paradigm, where false information about the percentage of cue validity (%CV) was provided in half of the experimental blocks to prompt updating of prior expectations. Online double-pulse TMS over rTPJ 300 ms (but not 50 ms) after target appearance selectively decreased participants' updating of false prior beliefs concerning %CV, reflected in a decreased learning rate of a Rescorla-Wagner model. Online TMS over rTPJ also impacted on participants' explicit beliefs, causing them to overestimate %CV. These results confirm the involvement of rTPJ in updating of probabilistic beliefs, thereby advancing our understanding of this area's function during cognitive processing. SIGNIFICANCE STATEMENT Contemporary views propose that the brain maintains probabilistic models of the world to minimize surprise about sensory inputs. Here, we provide evidence that the right temporoparietal junction (rTPJ) is causally involved in this process. Because neuroimaging has suggested that rTPJ is implicated in divergent cognitive domains, the demonstration of an involvement in updating internal models provides a novel unifying explanation for these findings. We used computational modeling to characterize how participants change their beliefs after new observations. By interfering with rTPJ activity through online transcranial magnetic stimulation, we showed that participants were less able to update prior beliefs with TMS delivered at 300 ms after target onset. Copyright © 2017 the authors 0270-6474/17/375419-10$15.00/0.
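
    The belief-updating quantity modeled above can be written as a one-line Rescorla-Wagner rule, in which the learning rate scales how strongly each trial's prediction error revises the estimated cue validity. The sketch below is generic; the trial count, true validity, and learning-rate values are illustrative, not fitted parameters from the study.

      import numpy as np

      def rescorla_wagner(outcomes, alpha=0.3, v0=0.5):
          """Trial-by-trial belief about cue validity, v <- v + alpha * (outcome - v).
          A lower learning rate alpha mimics the reduced updating reported under
          rTPJ stimulation at 300 ms."""
          v, trajectory = v0, []
          for o in outcomes:
              v += alpha * (o - v)          # prediction error weighted by the learning rate
              trajectory.append(v)
          return np.array(trajectory)

      rng = np.random.default_rng(0)
      outcomes = rng.random(80) < 0.8       # cue actually valid on 80% of trials
      print("final belief, alpha=0.3: ", rescorla_wagner(outcomes, 0.3)[-1].round(2))
      print("final belief, alpha=0.05:", rescorla_wagner(outcomes, 0.05)[-1].round(2))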

  4. FORCARB2: An updated version of the U.S. Forest Carbon Budget Model

    Treesearch

    Linda S. Heath; Michael C. Nichols; James E. Smith; John R. Mills

    2010-01-01

    FORCARB2, an updated version of the U.S. FORest CARBon Budget Model (FORCARB), produces estimates of carbon stocks and stock changes for forest ecosystems and forest products at 5-year intervals. FORCARB2 includes a new methodology for carbon in harvested wood products, updated initial inventory data, a revised algorithm for dead wood, and now includes public forest...

  5. Sequential updating of a new dynamic pharmacokinetic model for caffeine in premature neonates.

    PubMed

    Micallef, Sandrine; Amzal, Billy; Bach, Véronique; Chardon, Karen; Tourneux, Pierre; Bois, Frédéric Y

    2007-01-01

    Caffeine treatment is widely used in nursing care to reduce the risk of apnoea in premature neonates. To check the therapeutic efficacy of the treatment against apnoea, caffeine concentration in blood is an important indicator. The present study was aimed at building a pharmacokinetic model as a basis for a medical decision support tool. In the proposed model, time dependence of physiological parameters is introduced to describe rapid growth of neonates. To take into account the large variability in the population, the pharmacokinetic model is embedded in a population structure. The whole model is inferred within a Bayesian framework. To update caffeine concentration predictions as data of an incoming patient are collected, we propose a fast method that can be used in a medical context. This involves the sequential updating of model parameters (at individual and population levels) via a stochastic particle algorithm. Our model provides better predictions than the ones obtained with models previously published. We show, through an example, that sequential updating improves predictions of caffeine concentration in blood (reduce bias and length of credibility intervals). The update of the pharmacokinetic model using body mass and caffeine concentration data is studied. It shows how informative caffeine concentration data are in contrast to body mass data. This study provides the methodological basis to predict caffeine concentration in blood, after a given treatment if data are collected on the treated neonate.
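
    The sequential updating strategy described above can be sketched with a plain particle (importance-resampling) scheme around a one-compartment caffeine model. The prior ranges, dosing, sampling times, and measurement noise below are illustrative assumptions; the paper's model additionally lets the physiological parameters grow with the neonate.

      import numpy as np

      rng = np.random.default_rng(1)
      DOSE, SIGMA = 10.0, 1.0                      # mg/kg loading dose, measurement s.d. (mg/L)

      def concentration(t, clearance, volume):
          """One-compartment bolus kinetics: C(t) = D/V * exp(-CL/V * t); a placeholder
          for the paper's growth-dependent model."""
          return DOSE / volume * np.exp(-clearance / volume * t)

      # Population prior over clearance (L/h/kg) and distribution volume (L/kg).
      n = 5000
      particles = np.column_stack([rng.lognormal(np.log(0.01), 0.5, n),
                                   rng.lognormal(np.log(0.9), 0.3, n)])
      weights = np.full(n, 1.0 / n)

      # Sequential update as each blood sample of the incoming patient arrives.
      for t_obs, c_obs in [(24.0, 9.0), (72.0, 7.5), (120.0, 6.8)]:
          pred = concentration(t_obs, particles[:, 0], particles[:, 1])
          weights *= np.exp(-0.5 * ((c_obs - pred) / SIGMA) ** 2)
          weights /= weights.sum()
          idx = rng.choice(n, n, p=weights)        # resample to avoid degeneracy
          particles, weights = particles[idx], np.full(n, 1.0 / n)

      print("posterior mean CL, V:", particles.mean(axis=0).round(3))
      print("predicted concentration at 168 h:",
            concentration(168.0, particles[:, 0], particles[:, 1]).mean().round(2))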

  6. Highly efficient model updating for structural condition assessment of large-scale bridges.

    DOT National Transportation Integrated Search

    2015-02-01

    For efficiently updating models of large-scale structures, the response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying the proposed method a...

  7. UPDATE ON EPA'S URBAN WATERSHED MANAGEMENT BRANCH MODELING ACTIVITIES

    EPA Science Inventory

    This paper provides the Stormwater Management Model (SWMM) user community with a description of the Environmental Protection Agency (EPA's) Office of Research and Development (ORD) approach to urban watershed modeling research and provides an update on current ORD SWMM-related pr...

  8. GEOCAB Portal: A gateway for discovering and accessing capacity building resources in Earth Observation

    NASA Astrophysics Data System (ADS)

    Desconnets, Jean-Christophe; Giuliani, Gregory; Guigoz, Yaniss; Lacroix, Pierre; Mlisa, Andiswa; Noort, Mark; Ray, Nicolas; Searby, Nancy D.

    2017-02-01

    The discovery of and access to capacity building resources are often essential for conducting environmental projects based on Earth Observation (EO) resources, whether they are Earth Observation products, methodological tools, techniques, organizations that impart training in these techniques, or even projects that have shown practical achievements. Recognizing this opportunity and need, the European Commission, through two FP7 projects, jointly with the Group on Earth Observations (GEO), teamed up with the Committee on Earth Observation Satellites (CEOS). The Global Earth Observation CApacity Building (GEOCAB) portal aims at compiling all current capacity building efforts on the use of EO data for societal benefits into an easily updateable and user-friendly portal. GEOCAB offers a faceted search to improve the user discovery experience, with a fully interactive world map of all inventoried projects and activities. This paper focuses on the conceptual framework used to implement the underlying platform. An ISO 19115 metadata model and an associated terminological repository are the core elements that provide a semantic search application and an interoperable discovery service. The organization and the contribution of different user communities to ensure the management and updating of the content of GEOCAB are also addressed.

  9. Update schemes of multi-velocity floor field cellular automaton for pedestrian dynamics

    NASA Astrophysics Data System (ADS)

    Luo, Lin; Fu, Zhijian; Cheng, Han; Yang, Lizhong

    2018-02-01

    Modeling pedestrian movement is an interesting problem both in statistical physics and in computational physics. Update schemes of cellular automaton (CA) models for pedestrian dynamics govern the schedule of pedestrian movement. Usually, different update schemes make the models behave in different ways, which should be carefully recalibrated. Thus, in this paper, we investigated the influence of four different update schemes, namely the parallel/synchronous scheme, the random scheme, the ordered-sequential scheme and the shuffled scheme, on pedestrian dynamics. The multi-velocity floor field cellular automaton (FFCA), which considers the changes of pedestrians' moving properties along walking paths and the heterogeneity of pedestrians' walking abilities, was used. For the parallel scheme only, collision detection and resolution must be considered, which makes it greatly different from the other update schemes. For pedestrian evacuation, the evacuation time is enlarged, and the difference in pedestrians' walking abilities is better reflected, under the parallel scheme. In the face of a bottleneck, for example an exit, using the parallel scheme leads to a longer congestion period and a more dispersive density distribution. The exit flow and the space-time distribution of density and velocity show significant discrepancies under the four different update schemes when we simulate pedestrian flow with high desired velocities. Update schemes may have no influence on the simulated pedestrians' tendency to follow others, but the sequential and shuffled update schemes may enhance the effect of pedestrians' familiarity with the environment.
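
    The four schedules compared above differ only in who moves when, which the toy one-lane corridor below makes explicit: under the parallel scheme all decisions are taken on the old configuration (so a blocked cell stays blocking for one sweep), while the sequential-type schemes let later pedestrians use cells vacated earlier in the same sweep. The single-file geometry and the fixed sequential order are illustrative simplifications of the multi-velocity FFCA.

      import random

      def sweep(pos, scheme, rng):
          """One sweep of a toy single-lane corridor: every pedestrian tries to move one
          cell to the right. pos is an ascending list of occupied cells."""
          if scheme == "parallel":
              occupied = set(pos)          # all decisions are based on the old configuration
              return sorted(p + 1 if p + 1 not in occupied else p for p in pos)
          if scheme == "sequential":
              order = list(range(len(pos) - 1, -1, -1))        # fixed order, here front to back
          elif scheme == "shuffled":
              order = list(range(len(pos)))
              rng.shuffle(order)                               # new random permutation each sweep
          else:                                                # "random": draws with replacement
              order = [rng.randrange(len(pos)) for _ in pos]
          new = list(pos)
          for i in order:                                      # later moves see earlier ones
              if new[i] + 1 not in set(new):
                  new[i] += 1
          return sorted(new)

      rng = random.Random(0)
      for scheme in ("parallel", "sequential", "random", "shuffled"):
          pos = list(range(20))                                # a dense queue of 20 pedestrians
          for _ in range(50):
              pos = sweep(pos, scheme, rng)
          print(scheme, "mean position after 50 sweeps:", sum(pos) / len(pos))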

  10. Mental models at work: cognitive causes and consequences of conflict in organizations.

    PubMed

    Halevy, Nir; Cohen, Taya R; Chou, Eileen Y; Katz, James J; Panter, A T

    2014-01-01

    This research investigated the reciprocal relationship between mental models of conflict and various forms of dysfunctional social relations in organizations, including experiences of task and relationship conflicts, interpersonal hostility, workplace ostracism, and abusive supervision. We conceptualize individual differences in conflict construals as reflecting variation in people's belief structures about conflict and explore how different elements in people's associative networks (in particular, their beliefs about their best and worst strategy in conflict) relate to their personality, shape their experiences of workplace conflict, and influence others' behavioral intentions toward them. Five studies using a variety of methods (including cross-sectional surveys, a 12-week longitudinal diary study, and an experiment) show that the best strategy beliefs relate in theoretically meaningful ways to individuals' personality, shape social interactions and relationships significantly more than the worst strategy beliefs, and are updated over time as a result of individuals' ongoing experiences of conflict.

  11. Selective updating of working memory content modulates meso-cortico-striatal activity.

    PubMed

    Murty, Vishnu P; Sambataro, Fabio; Radulescu, Eugenia; Altamura, Mario; Iudicello, Jennifer; Zoltick, Bradley; Weinberger, Daniel R; Goldberg, Terry E; Mattay, Venkata S

    2011-08-01

    Accumulating evidence from non-human primates and computational modeling suggests that dopaminergic signals arising from the midbrain (substantia nigra/ventral tegmental area) mediate striatal gating of the prefrontal cortex during the selective updating of working memory. Using event-related functional magnetic resonance imaging, we explored the neural mechanisms underlying the selective updating of information stored in working memory. Participants were scanned during a novel working memory task that parses the neurophysiology underlying working memory maintenance, overwriting, and selective updating. Analyses revealed a functionally coupled network consisting of a midbrain region encompassing the substantia nigra/ventral tegmental area, caudate, and dorsolateral prefrontal cortex that was selectively engaged during working memory updating compared to the overwriting and maintenance of working memory content. Further analysis revealed differential midbrain-dorsolateral prefrontal interactions during selective updating between low-performing and high-performing individuals. These findings highlight the role of this meso-cortico-striatal circuitry during the selective updating of working memory in humans, which complements previous research in behavioral neuroscience and computational modeling. Published by Elsevier Inc.

  12. Modularized seismic full waveform inversion based on waveform sensitivity kernels - The software package ASKI

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang; Lamara, Samir; Gutt, Phillip; Paffrath, Marcel

    2015-04-01

    We present a seismic full waveform inversion concept for applications ranging from seismological to engineering contexts, based on sensitivity kernels for full waveforms. The kernels are derived from Born scattering theory as the Fréchet derivatives of linearized frequency-domain full waveform data functionals, quantifying the influence of elastic earth model parameters and density on the data values. For a specific source-receiver combination, the kernel is computed from the displacement and strain field spectrum originating from the source evaluated throughout the inversion domain, as well as the Green function spectrum and its strains originating from the receiver. By storing the wavefield spectra of specific sources/receivers, they can be re-used for kernel computation for different specific source-receiver combinations, optimizing the total number of required forward simulations. In the iterative inversion procedure, the solution of the forward problem, the computation of sensitivity kernels and the derivation of a model update are held completely separate. In particular, the model description for the forward problem and the description of the inverted model update are kept independent. Hence, the resolution of the inverted model as well as the complexity of solving the forward problem can be iteratively increased (with increasing frequency content of the inverted data subset). This may regularize the overall inverse problem and optimize the computational effort of both solving the forward problem and computing the model update. The required interconnection of arbitrary unstructured volume and point grids is realized by generalized high-order integration rules and 3D unstructured interpolation methods. The model update is inferred by solving a minimization problem in a least-squares sense, resulting in Gauss-Newton convergence of the overall inversion process. The inversion method was implemented in the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion), which provides a generalized interface to arbitrary external forward modelling codes. So far, the 3D spectral-element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995) are supported, in both Cartesian and spherical frameworks. The creation of interfaces to further forward codes is planned in the near future. ASKI is freely available under the terms of the GPL at www.rub.de/aski . Since the independent modules of ASKI must communicate via file output/input, large storage capacities need to be accessible conveniently. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion. In the presentation, we will show some aspects of the theory behind the full waveform inversion method and its practical realization in the software package ASKI, as well as synthetic and real-data applications at different scales and geometries.
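
    The least-squares model update mentioned above reduces, once the sensitivity kernels are assembled into a matrix, to a damped Gauss-Newton step. The sketch below shows that step on a toy linearized problem; the kernel matrix, damping value, and two-parameter model are illustrative assumptions, not ASKI's discretization.

      import numpy as np

      def gauss_newton_update(kernels, residuals, damping=0.1):
          """Least-squares model update dm from a stored sensitivity-kernel matrix K:
          minimize ||K dm - r||^2 + damping * ||dm||^2 (damped Gauss-Newton step)."""
          K, r = kernels, residuals
          lhs = K.T @ K + damping * np.eye(K.shape[1])
          return np.linalg.solve(lhs, K.T @ r)

      # Toy inverse problem: recover a 2-parameter perturbation from noisy linearized data.
      rng = np.random.default_rng(5)
      K = rng.normal(size=(40, 2))                      # rows: data functionals, columns: model cells
      dm_true = np.array([0.03, -0.02])
      r = K @ dm_true + rng.normal(0, 1e-3, 40)         # frequency-domain data residuals
      m = np.zeros(2)
      for _ in range(3):                                # iterate; nearly converged after one step here
          m += gauss_newton_update(K, r - K @ m)
      print(m.round(4))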

  13. NASTRAN computer system level 12.1

    NASA Technical Reports Server (NTRS)

    Butler, T. G.

    1971-01-01

    Program uses the finite element displacement method for solving the linear response of large, three-dimensional structures subject to static, dynamic, thermal, and random loadings. Program adapts to computers of different manufacture, permits updating and extension, allows interchange of output and input information between users, and is extensively documented.

  14. A Comparison of Three Online Information Retrieval Services.

    ERIC Educational Resources Information Center

    Zais, Harriet W.

    Three firms which offer online information retrieval are compared. The firms are Lockheed Information Service, System Development Corporation and the Western Research Application Center. Comparison tables provide information such as hours accessible, coverage, file update, search elements and cost figures for 15 data bases. In addition, general…

  15. Key science issues in the central and eastern United States for the next version of the USGS National Seismic Hazard Maps

    USGS Publications Warehouse

    Peterson, M.D.; Mueller, C.S.

    2011-01-01

    The USGS National Seismic Hazard Maps are updated about every six years by incorporating newly vetted science on earthquakes and ground motions. The 2008 hazard maps for the central and eastern United States region (CEUS) were updated by using revised New Madrid and Charleston source models, an updated seismicity catalog and an estimate of magnitude uncertainties, a distribution of maximum magnitudes, and several new ground-motion prediction equations. The new models resulted in significant ground-motion changes at 5 Hz and 1 Hz spectral acceleration with 5% damping compared to the 2002 version of the hazard maps. The 2008 maps have now been incorporated into the 2009 NEHRP Recommended Provisions, the 2010 ASCE-7 Standard, and the 2012 International Building Code. The USGS is now planning the next update of the seismic hazard maps, which will be provided to the code committees in December 2013. Science issues that will be considered for introduction into the CEUS maps include: 1) updated recurrence models for New Madrid sources, including new geodetic models and magnitude estimates; 2) new earthquake sources and techniques considered in the 2010 model developed by the nuclear industry; 3) new NGA-East ground-motion models (currently under development); and 4) updated earthquake catalogs. We will hold a regional workshop in late 2011 or early 2012 to discuss these and other issues that will affect the seismic hazard evaluation in the CEUS.

  16. LSD (Landing System Development) Impact Simulation

    NASA Astrophysics Data System (ADS)

    Ullio, R.; Riva, N.; Pellegrino, P.; Deloo, P.

    2012-07-01

    In the frame of the Exploration Programs, a soft landing on the planet surface is foreseen. To ensure a successful final landing phase, a landing system using tripod-design landing legs with an adequate crushable damping system was selected, capable of absorbing the residual velocities (vertical, horizontal and angular) at touchdown and ensuring stability. TAS-I developed a numerical nonlinear dynamic methodology for the landing impact simulation of the Lander system using a commercial explicit finite element analysis code (i.e. Altair RADIOSS). In this paper the most significant FE modeling approaches and results of the analytical simulation of the landing impact are reported, especially with respect to the definition of leg dimensioning loads and the design update of selected parts (if necessary).

  17. Age differences in outdated information processing during news reports reading.

    PubMed

    Maury, Pascale; Besse, Florence; Martin, Sophie

    2010-10-01

    In two experiments, the authors explored whether there are any age differences associated with the ability to process outdated information during news report comprehension. Younger and older participants (mean age: 70 years old) read passages in which a cause was first said to be responsible for the occurrence of a news event. New elements then emerged from the investigation in progress and revealed that the original cause was incorrect. Inference response times indicated that older adults, more than younger ones, took advantage of an alternative cause mentioned in the text to put the outdated information in the background, whereas younger readers probably kept both causes activated. The research tested the concepts involved in age differences in updating the situation model.

  18. TACT: A Set of MSC/PATRAN- and MSC/NASTRAN- based Modal Correlation Tools

    NASA Technical Reports Server (NTRS)

    Marlowe, Jill M.; Dixon, Genevieve D.

    1998-01-01

    This paper describes the functionality and demonstrates the utility of the Test Analysis Correlation Tools (TACT), a suite of MSC/PATRAN Command Language (PCL) tools which automate the process of correlating finite element models to modal survey test data. The initial release of TACT provides a basic yet complete set of tools for performing correlation entirely inside the PATRAN/NASTRAN environment. Features include a step-by-step menu structure, pre-test accelerometer set evaluation and selection, analysis and test result export/import in Universal File Format, calculation of frequency percent difference and cross-orthogonality correlation results using NASTRAN, creation and manipulation of mode pairs, and five different ways of viewing synchronized animations of analysis and test modal results. For the PATRAN-based analyst, TACT eliminates the repetitive, time-consuming and error-prone steps associated with transferring finite element data to a third-party modal correlation package, which allows the analyst to spend more time on the more challenging task of model updating. The usefulness of this software is presented using a case history, the correlation for a NASA Langley Research Center (LaRC) low-aspect-ratio research wind tunnel model. To demonstrate the improvements that TACT offers the MSC/PATRAN- and MSC/NASTRAN-based structural analysis community, a comparison of the modal correlation process using TACT within PATRAN versus external third-party modal correlation packages is presented.

  19. A biokinetic model for systemic technetium in adult humans

    DOE PAGES

    Leggett, Richard Wayne; Giussani, Augusto

    2015-04-10

    The International Commission on Radiological Protection (ICRP) is currently updating its biokinetic and dosimetric models for internally deposited radionuclides. Technetium (Tc), the lightest element that exists only in radioactive form, has two important isotopes from the standpoint of potential risk to humans: the long-lived isotope 99Tc (T1/2 = 2.1x10^5 y) is present in high concentration in nuclear waste, and the short-lived isotope 99mTc (T1/2 = 6.02 h) is the most commonly used radionuclide in diagnostic nuclear medicine. This paper reviews data on the biological behavior of technetium and proposes a biokinetic model for systemic technetium in the adult human body for use in radiation protection. Compared with the ICRP's current occupational model for systemic technetium, the proposed model provides a more realistic description of the paths of movement of technetium in the body; provides greater consistency with experimental and medical data; and, for most radiosensitive organs, yields substantially different estimates of cumulative activity (total radioactive decays within the organ) following uptake of 99Tc or 99mTc to blood.

  20. SysML model of exoplanet archive functionality and activities

    NASA Astrophysics Data System (ADS)

    Ramirez, Solange

    2016-08-01

    The NASA Exoplanet Archive is an online service that serves data and information on exoplanets and their host stars to support astronomical research related to the search for and characterization of extra-solar planetary systems. In order to provide the most up-to-date data sets to users, the exoplanet archive performs weekly updates that include additions to the database and updates to the services as needed. These weekly updates are complex due to interfaces within the archive. I will be presenting a SysML model that helps us perform these update activities on a weekly basis.

  1. Summary of Expansions, Updates, and Results in GREET® 2016 Suite of Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    2016-10-01

    This report documents the technical content of the expansions and updates in Argonne National Laboratory’s GREET® 2016 release and provides references and links to key documents related to these expansions and updates.

  2. High-Speed GPU-Based Fully Three-Dimensional Diffuse Optical Tomographic System

    PubMed Central

    Saikia, Manob Jyoti; Kanhirodan, Rajan; Mohan Vasu, Ram

    2014-01-01

    We have developed a graphics processor unit (GPU-) based high-speed fully 3D system for diffuse optical tomography (DOT). The reduction in execution time of 3D DOT algorithm, a severely ill-posed problem, is made possible through the use of (1) an algorithmic improvement that uses Broyden approach for updating the Jacobian matrix and thereby updating the parameter matrix and (2) the multinode multithreaded GPU and CUDA (Compute Unified Device Architecture) software architecture. Two different GPU implementations of DOT programs are developed in this study: (1) conventional C language program augmented by GPU CUDA and CULA routines (C GPU), (2) MATLAB program supported by MATLAB parallel computing toolkit for GPU (MATLAB GPU). The computation time of the algorithm on host CPU and the GPU system is presented for C and Matlab implementations. The forward computation uses finite element method (FEM) and the problem domain is discretized into 14610, 30823, and 66514 tetrahedral elements. The reconstruction time, so achieved for one iteration of the DOT reconstruction for 14610 elements, is 0.52 seconds for a C based GPU program for 2-plane measurements. The corresponding MATLAB based GPU program took 0.86 seconds. The maximum number of reconstructed frames so achieved is 2 frames per second. PMID:24891848
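
    As a rough, library-agnostic illustration of the Broyden-type rank-one Jacobian update mentioned in the abstract (independent of the authors' GPU/CUDA implementation), the sketch below refreshes an approximate Jacobian from successive parameter and residual changes; the forward model `f` is a made-up placeholder, not the DOT forward solver.

```python
import numpy as np

def broyden_update(J, dx, df):
    """Broyden (good) rank-one update: J_new = J + outer(df - J@dx, dx) / (dx.dx)."""
    return J + np.outer(df - J @ dx, dx) / np.dot(dx, dx)

# Made-up 2D forward model standing in for the DOT forward solver.
def f(x):
    return np.array([x[0]**2 + x[1], np.sin(x[0]) + x[1]**2])

x0 = np.array([0.50, 0.20])
x1 = np.array([0.52, 0.18])
J0 = np.array([[2 * x0[0], 1.0],
               [np.cos(x0[0]), 2 * x0[1]]])      # exact Jacobian at x0
J1 = broyden_update(J0, x1 - x0, f(x1) - f(x0))  # rank-one refresh, no recomputation
# The updated Jacobian satisfies the secant condition J1 @ (x1 - x0) = f(x1) - f(x0).
print(np.allclose(J1 @ (x1 - x0), f(x1) - f(x0)))
```

    Avoiding a full Jacobian recomputation at every iteration is what makes this kind of update attractive for large, repeatedly solved inverse problems such as DOT.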

  3. High-Speed GPU-Based Fully Three-Dimensional Diffuse Optical Tomographic System.

    PubMed

    Saikia, Manob Jyoti; Kanhirodan, Rajan; Mohan Vasu, Ram

    2014-01-01

    We have developed a graphics processor unit (GPU-) based high-speed fully 3D system for diffuse optical tomography (DOT). The reduction in execution time of 3D DOT algorithm, a severely ill-posed problem, is made possible through the use of (1) an algorithmic improvement that uses Broyden approach for updating the Jacobian matrix and thereby updating the parameter matrix and (2) the multinode multithreaded GPU and CUDA (Compute Unified Device Architecture) software architecture. Two different GPU implementations of DOT programs are developed in this study: (1) conventional C language program augmented by GPU CUDA and CULA routines (C GPU), (2) MATLAB program supported by MATLAB parallel computing toolkit for GPU (MATLAB GPU). The computation time of the algorithm on host CPU and the GPU system is presented for C and Matlab implementations. The forward computation uses finite element method (FEM) and the problem domain is discretized into 14610, 30823, and 66514 tetrahedral elements. The reconstruction time, so achieved for one iteration of the DOT reconstruction for 14610 elements, is 0.52 seconds for a C based GPU program for 2-plane measurements. The corresponding MATLAB based GPU program took 0.86 seconds. The maximum number of reconstructed frames so achieved is 2 frames per second.

  4. Electronic Education System Model-2

    ERIC Educational Resources Information Center

    Güllü, Fatih; Kuusik, Rein; Laanpere, Mart

    2015-01-01

    In this study we presented the new EES Model-2, extended from the EES model for more productive implementation in e-learning process design and modelling in higher education. Most of the updates related to the uppermost instructional layer. We updated the learning-process objects of that layer to adapt the educational process for young and old people,…

  5. Agile Implementation: A Blueprint for Implementing Evidence-Based Healthcare Solutions.

    PubMed

    Boustani, Malaz; Alder, Catherine A; Solid, Craig A

    2018-03-07

    To describe the essential components of an Agile Implementation (AI) process, which rapidly and effectively implements evidence-based healthcare solutions, and present a case study demonstrating its utility. Case demonstration study. Integrated, safety net healthcare delivery system in Indianapolis. Interdisciplinary team of clinicians and administrators. Reduction in dementia symptoms and caregiver burden; inpatient and outpatient care expenditures. Implementation scientists were able to implement a collaborative care model for dementia care and sustain it for more than 9 years. The model was implemented and sustained by using the elements of the AI process: proactive surveillance and confirmation of clinical opportunities, selection of the right evidence-based healthcare solution, localization (i.e., tailoring to the local environment) of the selected solution, development of an evaluation plan and performance feedback loop, development of a minimally standardized operation manual, and updating such manual annually. The AI process provides an effective model to implement and sustain evidence-based healthcare solutions. © 2018, Copyright the Authors Journal compilation © 2018, The American Geriatrics Society.

  6. Astrobiological complexity with probabilistic cellular automata.

    PubMed

    Vukotić, Branislav; Ćirković, Milan M

    2012-08-01

    The search for extraterrestrial life and intelligence constitutes one of the major endeavors in science, yet it has so far been quantitatively modeled only rarely, and then in a cursory and superficial fashion. We argue that probabilistic cellular automata (PCA) represent the best quantitative framework for modeling the astrobiological history of the Milky Way and its Galactic Habitable Zone. The relevant astrobiological parameters are to be modeled as the elements of the input probability matrix for the PCA kernel. With the underlying simplicity of the cellular automata constructs, this approach enables a quick analysis of the large and ambiguous space of input parameters. We perform a simple clustering analysis of typical astrobiological histories with a "Copernican" choice of input parameters and discuss the relevant boundary conditions of practical importance for planning and guiding empirical astrobiological and SETI projects. In addition to showing how the present framework is adaptable to more complex situations and updated observational databases from current and near-future space missions, we demonstrate how numerical results could offer a cautious rationale for continuation of practical SETI searches.
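
    A minimal probabilistic cellular automaton of the kind advocated above can be sketched in a few lines; the two-state lattice, von Neumann neighborhood, and transition probabilities are illustrative assumptions, not the parameters of the authors' Galactic Habitable Zone model.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_step(grid, p_colonize=0.05, p_extinct=0.01):
    """One update of a two-state probabilistic CA on a 2D lattice:
    an empty site becomes inhabited with probability p_colonize per inhabited
    von Neumann neighbor; an inhabited site dies out with probability p_extinct."""
    neighbors = sum(np.roll(grid, shift, axis)          # count inhabited neighbors
                    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)])
    colonize = rng.random(grid.shape) < 1 - (1 - p_colonize) ** neighbors
    extinct = rng.random(grid.shape) < p_extinct
    return np.where(grid == 1, (~extinct).astype(int), colonize.astype(int))

grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = 1                         # a single seeded site
for _ in range(200):
    grid = pca_step(grid)
print("inhabited sites after 200 steps:", grid.sum())
```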

  7. Capital update factor: a new era approaches.

    PubMed

    Grimaldi, P L

    1993-02-01

    The Health Care Financing Administration (HCFA) has constructed a preliminary model of a new capital update method which is consistent with the framework being developed to refine the update method for PPS operating costs. HCFA's eventual goal is to develop a single update framework for operating and capital costs. Initial results suggest that adopting the new capital update method would reduce capital payments substantially, which might intensify creditor's concerns about extending loans to hospitals.

  8. Proposed reporting model update creates dialogue between FASB and not-for-profits.

    PubMed

    Mosrie, Norman C

    2016-04-01

    Seeing a need to refresh the current guidelines, the Financial Accounting Standards Board (FASB) proposed an update to the financial accounting and reporting model for not-for-profit entities. In a response to solicited feedback, the board is now revisiting its proposed update and has set forth a plan to finalize its new guidelines. The FASB continues to solicit and respond to feedback as the process progresses.

  9. NODA for EPA's Updated Ozone Transport Modeling

    EPA Pesticide Factsheets

    Find EPA's NODA for the Updated Ozone Transport Modeling Data for the 2008 Ozone National Ambient Air Quality Standard (NAAQS), along with the Extension of the Public Comment Period on CSAPR for the 2008 NAAQS.

  10. Assessment of biomass burning emissions and their impacts on urban and regional PM2.5: a Georgia case study.

    PubMed

    Tian, Di; Hu, Yongtao; Wang, Yuhang; Boylan, James W; Zheng, Mei; Russell, Armistead G

    2009-01-15

    Biomass burning is a major and growing contributor to particulate matter with an aerodynamic diameter less than 2.5 µm (PM2.5). Such impacts (especially individual impacts from each burning source) are quantified using the Community Multiscale Air Quality (CMAQ) Model, a chemical transport model (CTM). Given the sensitivity of CTM results to uncertain emission inputs, simulations were conducted using three biomass burning inventories. Shortcomings in the burning emissions were also evaluated by comparing simulations with observations and results from a receptor model. Model performance improved significantly with the updated emissions and speciation profiles based on recent measurements for biomass burning: mean fractional bias is reduced from 22% to 4% for elemental carbon and from 18% to 12% for organic matter; mean fractional error is reduced from 59% to 50% for elemental carbon and from 55% to 49% for organic matter. Quantified impacts of biomass burning on PM2.5 during January, March, May, and July 2002 are 3.0, 5.1, 0.8, and 0.3 µg m⁻³ domainwide on average, with more than 80% of such impacts being from primary emissions. Impacts of prescribed burning dominate biomass burning impacts, contributing about 55% and 80% of PM2.5 in January and March, respectively, followed by land clearing and agriculture field burning. Significant impacts of wildfires in May and residential wood combustion in fireplaces and woodstoves in January are also found.

  11. Maintaining tumor targeting accuracy in real-time motion compensation systems for respiration-induced tumor motion

    PubMed Central

    Malinowski, Kathleen; McAvoy, Thomas J.; George, Rohini; Dieterich, Sonja; D’Souza, Warren D.

    2013-01-01

    Purpose: To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Methods: Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥3 mm), and always (approximately once per minute). Results: Radial tumor displacement prediction errors (mean ± standard deviation) for the four schema described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. Conclusions: The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization. PMID:23822413
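
    To make the modelling pipeline concrete, the fragment below fits a partial-least-squares map from surrogate marker positions to tumor position and refits it whenever the localization error reaches 3 mm, loosely mirroring the paper's error-based schedule; the synthetic data, marker count, and window sizes are assumptions, not the clinical datasets used in the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)

# Synthetic stand-in data: 3 surrogate markers (9 coordinates) -> 3D tumor position.
n = 200
markers = rng.normal(size=(n, 9))
true_map = rng.normal(size=(9, 3))
tumour = markers @ true_map + 0.5 * rng.normal(size=(n, 3))

model = PLSRegression(n_components=3).fit(markers[:6], tumour[:6])  # first 6 samples
updates = 0
for k in range(6, n):
    pred = model.predict(markers[k:k + 1])[0]
    error_mm = np.linalg.norm(pred - tumour[k])        # radial localization error
    if error_mm >= 3.0:                                # error-based update trigger
        model = PLSRegression(n_components=3).fit(markers[:k + 1], tumour[:k + 1])
        updates += 1
print("model refits triggered:", updates)
```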

  12. Update to the USDA-ARS fixed-wing spray nozzle models

    USDA-ARS?s Scientific Manuscript database

    The current USDA ARS Aerial Spray Nozzle Models were updated to reflect both new standardized measurement methods and systems, as well as, to increase operational spray pressure, aircraft airspeed and nozzle orientation angle limits. The new models were developed using both Central Composite Design...

  13. Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ehlert, Kurt; Loewe, Laurence, E-mail: loewe@wisc.edu; Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, Wisconsin 53715

    2014-11-28

    To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected "hubs" such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present "Lazy Updating," an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases, with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
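
    The core bookkeeping of Lazy Updating (deferring recomputation of hub-dependent propensities until the hub's count has drifted past a threshold) can be sketched as follows; the two-reaction toy system, rate constants, and 5% threshold are illustrative assumptions, not the authors' Sorting Direct Method implementation.

```python
import random

random.seed(0)

# Toy system: hub species H feeds X (r1, hub-dependent propensity),
# X degrades (r2, hub-independent propensity).
state = {"H": 100_000, "X": 0}
k1, k2 = 0.01, 0.1
THRESHOLD = 0.05                  # refresh r1 only after 5% drift of the hub count
N_STEPS = 50_000
h_ref = state["H"]
a1 = k1 * state["H"]              # lazily updated propensity
lazy_updates = 0

for step in range(N_STEPS):
    a2 = k2 * state["X"]          # cheap propensity, refreshed every step
    if random.uniform(0.0, a1 + a2) < a1:
        state["H"] -= 1; state["X"] += 1
    else:
        state["X"] -= 1
    # Lazy Updating: postpone recomputing a1 until H has drifted past the threshold.
    if abs(state["H"] - h_ref) > THRESHOLD * h_ref:
        a1 = k1 * state["H"]
        h_ref = state["H"]
        lazy_updates += 1

print("hub propensity refreshes:", lazy_updates, "out of", N_STEPS, "steps")
```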

  14. Determination of replicate composite bone material properties using modal analysis.

    PubMed

    Leuridan, Steven; Goossens, Quentin; Pastrav, Leonard; Roosen, Jorg; Mulier, Michiel; Denis, Kathleen; Desmet, Wim; Sloten, Jos Vander

    2017-02-01

    Replicate composite bones are used extensively for in vitro testing of new orthopedic devices. Contrary to tests with cadaveric bone material, which inherently exhibits large variability, they offer a standardized alternative with limited variability. Accurate knowledge of the composite's material properties is important when interpreting in vitro test results and when using them in FE models of biomechanical constructs. The cortical bone analogue material properties of three different fourth-generation composite bone models were determined by updating FE bone models using experimental and numerical modal analysis results. The influence of the cortical bone analogue material model (isotropic or transversely isotropic) and the inter- and intra-specimen variability were assessed. Isotropic cortical bone analogue material models failed to represent the experimental behavior in a satisfactory way even after updating the elastic material constants. When transversely isotropic material models were used, the updating procedure resulted in a reduction of the longitudinal Young's modulus from 16.00 GPa before updating to an average of 13.96 GPa after updating. The shear modulus was increased from 3.30 GPa to an average value of 3.92 GPa. The transverse Young's modulus was lowered from an initial value of 10.00 GPa to 9.89 GPa. Low inter- and intra-specimen variability was found. Copyright © 2016 Elsevier Ltd. All rights reserved.
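
    The updating step itself, adjusting elastic constants until computed natural frequencies match measured ones, can be sketched generically; the closed-form cantilever-beam frequency expression below stands in for the full FE bone model, the "measured" data are synthetic, and the 16 GPa starting value echoes the paper's nominal longitudinal modulus only for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Euler-Bernoulli cantilever beam natural frequencies (Hz) as a stand-in "FE model":
# f_n = (lambda_n^2 / (2 pi)) * sqrt(E I / (rho A L^4)).
LAM = np.array([1.875, 4.694, 7.855])
L, b, h, rho = 0.2, 0.02, 0.01, 1700.0        # m, m, m, kg/m^3 (assumed values)
A, I = b * h, b * h**3 / 12.0

def model_freqs(E):
    return (LAM**2 / (2 * np.pi)) * np.sqrt(E * I / (rho * A * L**4))

# "Measured" frequencies generated from a 14 GPa modulus plus small noise (synthetic).
f_meas = model_freqs(14.0e9) * (1 + 0.01 * np.array([0.3, -0.5, 0.2]))

def residual(p):
    return model_freqs(p[0] * 1e9) - f_meas    # p[0] is Young's modulus in GPa

result = least_squares(residual, x0=[16.0])    # start from the nominal 16 GPa
print("updated Young's modulus: %.2f GPa" % result.x[0])
```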

  15. Update on Antimicrobial Resistance in Clostridium difficile: Resistance Mechanisms and Antimicrobial Susceptibility Testing

    PubMed Central

    Peng, Zhong; Kim, Hyeun Bum; Stratton, Charles W.; Wu, Bin

    2017-01-01

    ABSTRACT Oral antibiotics such as metronidazole, vancomycin and fidaxomicin are therapies of choice for Clostridium difficile infection. Several important mechanisms for C. difficile antibiotic resistance have been described, including the acquisition of antibiotic resistance genes via the transfer of mobile genetic elements, selective pressure in vivo resulting in gene mutations, altered expression of redox-active proteins, iron metabolism, and DNA repair, as well as via biofilm formation. This update summarizes new information published since 2010 on phenotypic and genotypic resistance mechanisms in C. difficile and addresses susceptibility test methods and other strategies to counter antibiotic resistance of C. difficile. PMID:28404671

  16. Multi-type sensor placement and response reconstruction for building structures: Experimental investigations

    NASA Astrophysics Data System (ADS)

    Hu, Rong-Pan; Xu, You-Lin; Zhan, Sheng

    2018-01-01

    Estimation of lateral displacement and acceleration responses is essential to assess safety and serviceability of high-rise buildings under dynamic loadings including earthquake excitations. However, the measurement information from the limited number of sensors installed in a building structure is often insufficient for the complete structural performance assessment. An integrated multi-type sensor placement and response reconstruction method has thus been proposed by the authors to tackle this problem. To validate the feasibility and effectiveness of the proposed method, an experimental investigation using a cantilever beam with multi-type sensors is performed and reported in this paper. The experimental setup is first introduced. The finite element modelling and model updating of the cantilever beam are then performed. The optimal sensor placement for the best response reconstruction is determined by the proposed method based on the updated FE model of the beam. After the sensors are installed on the physical cantilever beam, a number of experiments are carried out. The responses at key locations are reconstructed and compared with the measured ones. The reconstructed responses achieve a good match with the measured ones, manifesting the feasibility and effectiveness of the proposed method. Besides, the proposed method is also examined for the cases of different excitations and unknown excitation, and the results prove the proposed method to be robust and effective. The superiority of the optimized sensor placement scheme is finally demonstrated through comparison with two other different sensor placement schemes: the accelerometer-only scheme and non-optimal sensor placement scheme. The proposed method can be applied to high-rise buildings for seismic performance assessment.

  17. Characteristics and habitat of deep vs. shallow slow slip events

    NASA Astrophysics Data System (ADS)

    Wipperfurth, S. A.; Sramek, O.; Roskovec, B.; Mantovani, F.; McDonough, W. F.

    2016-12-01

    Models integrating geophysics and geochemistry allow for characterization of the Earth's heat budget and geochemical evolution. Global lithospheric geophysical models are now constrained by surface and body wave data and are classified into several unique tectonic types. Global lithospheric geochemical models have evolved from petrological characterization of layers to a combination of petrologic and seismic constraints. Because of these advances regarding our knowledge of the lithosphere, it is necessary to create an updated chemical and physical reference model. We are developing a global lithospheric reference model based on LITHO1.0 (segmented into 1°lon x 1°lat x 9-layers) and seismological-geochemical relationships. Uncertainty assignments and correlations are assessed for its physical attributes, including layer thickness, Vp and Vs, and density. This approach yields uncertainties for the masses of the crust and lithospheric mantle. Heat producing element abundances (HPE: U, Th, and K) are ascribed to each volume element. These chemical attributes are based upon the composition of subducting sediment (sediment layers), composition of surface rocks (upper crust), a combination of petrologic and seismic correlations (middle and lower crust), and a compilation of xenolith data (lithospheric mantle). The HPE abundances are correlated within each voxel, but not vertically between layers. Efforts to provide correlation of abundances horizontally between each voxel are discussed. These models are used further to critically evaluate the bulk lithosphere heat production in the continents and the oceans. Cross-checks between our model and results from: 1) heat flux (Artemieva, 2006; Davies, 2013; Cammarano and Guerri, 2017), 2) gravity (Reguzzoni and Sampietro, 2015), and 3) geochemical and petrological models (Rudnick and Gao, 2014; Hacker et al. 2015) are performed.
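
    One quantitative ingredient of such a reference model is the radiogenic heat production of each voxel, which is commonly computed from U, Th, and K abundances with a Rybach-type relation; the sketch below applies that standard relation to a made-up voxel and is not taken from the authors' model or data.

```python
def heat_production(rho_kg_m3, c_u_ppm, c_th_ppm, c_k_wtpct):
    """Radiogenic heat production A in microW/m^3 (Rybach-type relation):
    A = 1e-5 * rho * (9.52 c_U + 2.56 c_Th + 3.48 c_K),
    with rho in kg/m^3, U and Th in ppm, and K in wt%."""
    return 1e-5 * rho_kg_m3 * (9.52 * c_u_ppm + 2.56 * c_th_ppm + 3.48 * c_k_wtpct)

# Illustrative (made-up) voxel with upper-crust-like abundances.
voxel = {"rho": 2700.0, "U": 2.7, "Th": 10.5, "K": 2.3}   # kg/m^3, ppm, ppm, wt%
A = heat_production(voxel["rho"], voxel["U"], voxel["Th"], voxel["K"])
volume_m3 = 111e3 * 111e3 * 12e3        # rough 1 deg x 1 deg x 12 km voxel volume
print("A = %.2f microW/m^3, voxel heat = %.2f GW" % (A, A * 1e-6 * volume_m3 / 1e9))
```

    Summing such per-voxel heat contributions over the lithospheric layers is one simple way to cross-check a chemical reference model against surface heat-flux compilations.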

  18. Interactive Physical Simulation of Catheter Motion within Mayor Vessel Structures and Cavities for ASD/VSD Treatment

    NASA Astrophysics Data System (ADS)

    Becherer, Nico; Hesser, Jürgen; Kornmesser, Ulrike; Schranz, Dietmar; Männer, Reinhard

    2007-03-01

    Simulation systems are becoming increasingly essential in medical education. Capturing the physical behaviour of the real world requires sophisticated modelling of the instruments within the virtual environment. Most models currently in use are not capable of user-interactive simulation because of the cost of computing the complex underlying analytical equations. Alternatives are often based on simplifying mass-spring systems, which can deliver high update rates but at the cost of less realistic motion. In addition, most techniques are limited to narrow and tubular vessel structures or restrict shape alterations to two degrees of freedom, not allowing instrument deformations like torsion. In contrast, our approach combines high update rates with highly realistic motion and can in addition be used with arbitrary structures like vessels or cavities (e.g. atrium, ventricle) without limiting the degrees of freedom. The approach is based on energy minimization: bending energies and vessel structures are treated as linear elastic elements, and energies are evaluated at regularly spaced points on the instrument while the distance between the points is fixed, i.e. we simulate an articulated structure of joints with fixed connections between them. Arbitrary tissue structures are modeled through adaptive distance fields and are connected by nodes via an undirected graph system. The instrument points are linked to nodes by a system of rules. Energy minimization uses a quasi-Newton method without preconditioning, and gradients are estimated using a combination of analytical and numerical terms. Results show a high quality in motion simulation when compared to a phantom model. The approach is also robust and fast. Simulating an instrument with 100 joints runs at 100 Hz on a 3 GHz PC.
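
    A toy version of the energy-minimization idea (an articulated chain of fixed-length segments whose bending energy is minimized with a quasi-Newton method) might look like the following; the joint count, stiffness, soft tip constraint, and target point are arbitrary illustrative choices rather than the authors' formulation.

```python
import numpy as np
from scipy.optimize import minimize

N, SEG = 20, 1.0                      # 20 joints, fixed segment length (articulated chain)
K_BEND = 5.0                          # bending stiffness (assumed)
TARGET = np.array([12.0, 8.0])        # desired tip position (assumed)

def tip_position(theta):
    """Forward kinematics: absolute segment angles are cumulative joint angles."""
    ang = np.cumsum(theta)
    return SEG * np.array([np.cos(ang).sum(), np.sin(ang).sum()])

def energy(theta):
    bending = K_BEND * np.sum(np.diff(theta) ** 2)                     # elastic bending energy
    constraint = 100.0 * np.sum((tip_position(theta) - TARGET) ** 2)   # soft tip constraint
    return bending + constraint

theta0 = np.zeros(N)                                  # straight instrument
res = minimize(energy, theta0, method="BFGS")         # quasi-Newton minimization
print("tip after minimization:", np.round(tip_position(res.x), 2))
```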

  19. The Influence of Recollection and Familiarity in the Formation and Updating of Associative Representations

    ERIC Educational Resources Information Center

    Ozubko, Jason D.; Moscovitch, Morris; Winocur, Gordon

    2017-01-01

    Prior representations affect future learning. Little is known, however, about the effects of recollective or familiarity-based representations on such learning. We investigate the ability to reuse or reassociate elements from recollection- and familiarity-based associations to form new associations. Past neuropsychological research suggests that…

  20. 47 CFR 51.321 - Methods of obtaining interconnection and access to unbundled elements under section 251 of the Act.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... modifications in the use of the space since the last report. This report must also include measures that the... site, indicating all premises that are full, and must update such a document within ten days of the...

  1. 47 CFR 51.321 - Methods of obtaining interconnection and access to unbundled elements under section 251 of the Act.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... modifications in the use of the space since the last report. This report must also include measures that the... site, indicating all premises that are full, and must update such a document within ten days of the...

  2. 47 CFR 51.321 - Methods of obtaining interconnection and access to unbundled elements under section 251 of the Act.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... modifications in the use of the space since the last report. This report must also include measures that the... site, indicating all premises that are full, and must update such a document within ten days of the...

  3. 47 CFR 51.321 - Methods of obtaining interconnection and access to unbundled elements under section 251 of the Act.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... modifications in the use of the space since the last report. This report must also include measures that the... site, indicating all premises that are full, and must update such a document within ten days of the...

  4. BOOK REVIEW: SOLUTE MOVEMENT IN THE RHIZOSPHERE BY TINKER AND NYE

    EPA Science Inventory

    After 23 years, Tinker and Nye have published an updated version of their earlier book titled "Solute Movement in the Soil-Root System" (University of California Press, Berkeley, California, 1977). The book contains many of the same elements that made the 1977 publication so use...

  5. Instructional authoring by direct manipulation of simulations: Exploratory applications of RAPIDS. RAPIDS 2 authoring manual

    NASA Technical Reports Server (NTRS)

    1990-01-01

    RAPIDS II is a simulation-based intelligent tutoring system environment. It is a system for producing computer-based training courses that are built on the foundation of graphical simulations. RAPIDS II simulations can be animated and they can have continuously updating elements.

  6. 76 FR 31416 - Federal Acquisition Regulation; Technical Amendments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-31

    ... ADMINISTRATION 48 CFR Parts 52 and 53 [FAC 2005-52; Item VI; Docket 2011-0078; Sequence 2] Federal Acquisition... publication schedules. Please cite FAC 2005-52, Technical Amendments. SUPPLEMENTARY INFORMATION: In order to update certain elements in 48 CFR parts 52 and 53, this document makes editorial changes to the Federal...

  7. Psychology Practice: Design for Tomorrow

    ERIC Educational Resources Information Center

    Goodheart, Carol D.

    2011-01-01

    This article offers a blueprint for modernizing the delivery of high-quality behavioral health care and for improving access to care by a public sorely in need of psychological services. The blueprint brings together disparate elements of psychology practice into a more unified structure, an updated house, based upon advances in the essential…

  8. Aging Water Infrastructure Research Program Update: Innovation & Research for the 21st Century

    EPA Science Inventory

    This slide presentation summarizes key elements of the EPA Office of Research and Development's (ORD) Aging Water Infrastructure (AWI) Research Program. An overview of the national problems posed by aging water infrastructure is followed by a brief description of EPA's overall...

  9. 42 CFR 51c.303 - Project elements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... accepted accounting principles, be considered income and expense items; (2) Provides for a capital... principles, be considered capital items; (3) Provides for plan review and updating at least annually; and (4... grant funds for the operation of a prepaid health care plan also must provide: (1) A marketing and...

  10. 78 FR 35078 - Self-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Notice of Filing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-11

    ... circumstances where dissemination may mislead or confuse investors and other market participants. In addition... data elements are updated.\\12\\ In disseminated data, market participants will cross-reference the RDID... are rounded or truncated. The dissemination protocol to provide an RDID that market participants will...

  11. A Business Educator's Guide to Transitioning to a Digital Curriculum

    ERIC Educational Resources Information Center

    Roberts, Scott D.; Rains, Russell E.; Perry, Gregory E.

    2012-01-01

    The authors, representing three key digital media business disciplines, present a case for how business curriculum could be updated to include a strong digital element without recreating the entire business school enterprise or spending millions on new faculty and technology. The three key disciplines are technology, law, and marketing.

  12. 29 CFR 1960.69 - Retention and updating of old forms.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ....69 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) BASIC PROGRAM ELEMENTS FOR FEDERAL EMPLOYEE OCCUPATIONAL SAFETY AND HEALTH... continue to provide access to the data as though these forms were the OSHA Form 300 Log and Form 301...

  13. 29 CFR 1960.69 - Retention and updating of old forms.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....69 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) BASIC PROGRAM ELEMENTS FOR FEDERAL EMPLOYEE OCCUPATIONAL SAFETY AND HEALTH... continue to provide access to the data as though these forms were the OSHA Form 300 Log and Form 301...

  14. Teacher Recruitment (Part 1 of a series). Spotlight: Updating Our Agendas.

    ERIC Educational Resources Information Center

    Smith, Cheryl

    2002-01-01

    Describes the teacher shortage and details characteristics of the current generation of potential teachers for private schools, including their work-to-live perspective, independence, and reliance on computers and communication technologies. Asserts that community outreach should be an essential element of recruitment efforts. Outlines aspects of…

  15. Quality Systems Manual (QSM) Version 5: Update

    DTIC Science & Technology

    2012-03-01

    Rei Mao; Charles Stoner. Air Force: John (Seb) Gillette. DOE: Joe Pardue; Todd Hardt. Contractor: Alyssa Wingard. Labs must be approved by the appropriate DOE representative; outsourced QS elements (such as data review) must comply with the standard and are subject to...

  16. The AFIS tree growth model for updating annual forest inventories in Minnesota

    Treesearch

    Margaret R. Holdaway

    2000-01-01

    As the Forest Service moves towards annual inventories, states may use model predictions of growth to update unmeasured plots. A tree growth model (AFIS) based on the scaled Weibull function and using the average-adjusted model form is presented. Annual diameter growth for four species was modeled using undisturbed plots from Minnesota's Aspen-Birch and Northern...

  17. Effective temperatures of red giants in the APOKASC catalogue and the mixing length calibration in stellar models

    NASA Astrophysics Data System (ADS)

    Salaris, M.; Cassisi, S.; Schiavon, R. P.; Pietrinferni, A.

    2018-04-01

    Red giants in the updated APOGEE-Kepler catalogue, with estimates of mass, chemical composition, surface gravity and effective temperature, have recently challenged stellar models computed under the standard assumption of solar calibrated mixing length. In this work, we critically reanalyse this sample of red giants, adopting our own stellar model calculations. Contrary to previous results, we find that the disagreement between the Teff scale of red giants and models with solar calibrated mixing length disappears when considering our models and the APOGEE-Kepler stars with scaled solar metal distribution. However, a discrepancy shows up when α-enhanced stars are included in the sample. We have found that assuming mass, chemical composition and effective temperature scale of the APOGEE-Kepler catalogue, stellar models generally underpredict the change of temperature of red giants caused by α-element enhancements at fixed [Fe/H]. A second important conclusion is that the choice of the outer boundary conditions employed in model calculations is critical. Effective temperature differences (metallicity dependent) between models with solar calibrated mixing length and observations appear for some choices of the boundary conditions, but this is not a general result.

  18. Tolerance to cadmium in plants: the special case of hyperaccumulators.

    PubMed

    Verbruggen, Nathalie; Juraniec, Michal; Baliardini, Cecilia; Meyer, Claire-Lise

    2013-08-01

    On soils highly polluted by trace metallic elements, the majority of plant species are excluders, limiting the entry and root-to-shoot translocation of trace metals. However, a rare class of plants called hyperaccumulators possesses a remarkable adaptation: these plants combine extremely high degrees of tolerance with foliar accumulation of trace elements. Hyperaccumulators have recently gained considerable interest because of their potential use in phytoremediation, phytomining and biofortification. From a more fundamental point of view, hyperaccumulators of trace metals are case studies for understanding metal homeostasis and detoxification mechanisms. Hyperaccumulation of trace metals usually depends on the enhancement of at least four processes: absorption from the soil, loading into the xylem in the roots, unloading from the xylem in the leaves, and detoxification in the shoot. Cadmium is one of the most toxic trace metallic elements for living organisms, and its accumulation in the environment is recognized as a worldwide concern. To date, only nine species have been recognized as Cd hyperaccumulators, that is to say, able to tolerate and accumulate more than 0.01% Cd in shoot dry biomass. Among these species, four belong to the Brassicaceae family, with Arabidopsis halleri and Noccaea caerulescens being considered as models. An update of our knowledge on the evolution of hyperaccumulators will be presented here.

  19. 20 Meter Solar Sail Analysis and Correlation

    NASA Technical Reports Server (NTRS)

    Taleghani, B. K.; Lively, P. S.; Banik, J.; Murphy, D. M.; Trautt, T. A.

    2005-01-01

    This paper describes finite element analyses and correlation studies to predict deformations and vibration modes/frequencies of a 20-meter solar sail system developed by ATK Space Systems. Under the programmatic leadership of NASA Marshall Space Flight Center's In-Space Propulsion activity, the 20-meter solar sail program objectives were to verify the design, to assess structural responses of the sail system, to implement lessons learned from a previous 10-meter quadrant system analysis and test program, and to mature solar sail technology to a technology readiness level (TRL) of 5. For this 20 meter sail system, static and ground vibration tests were conducted in NASA Glenn Research Center's 100 meter diameter vacuum chamber at Plum Brook station. Prior to testing, a preliminary analysis was performed to evaluate test conditions and to determine sensor and actuator locations. After testing was completed, an analysis of each test configuration was performed. Post-test model refinements included updated properties to account for the mass of sensors, wiring, and other components used for testing. This paper describes the development of finite element models (FEM) for sail membranes and masts in each of four quadrants at both the component and system levels, as well as an optimization procedure for the static test/analyses correlation.

  20. NRL/VOA Modifications to IONCAP as of 12 July 1988

    DTIC Science & Technology

    1989-08-02

    suitable for wide-area coverage studies), to incorporate a newer noise model, to improve the accuracy of some calculations, to correct a few... Contents include: use with IONANT; incorporation of an updated noise model into IONCAP; and listings of four IONCAP subroutines supporting the updated noise model.

  1. Incremental Testing of the Community Multiscale Air Quality (CMAQ) Modeling System Version 4.7

    EPA Science Inventory

    This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7 (v4.7) and points the reader to additional resources for further details. The model updates were evaluated relative to obse...

  2. Orion Absolute Navigation System Progress and Challenge

    NASA Technical Reports Server (NTRS)

    Holt, Greg N.; D'Souza, Christopher

    2012-01-01

    The absolute navigation design of NASA's Orion vehicle is described. It has undergone several iterations and modifications since its inception, and continues as a work-in-progress. This paper seeks to benchmark the current state of the design and some of the rationale and analysis behind it. There are specific challenges to address when preparing a timely and effective design for the Exploration Flight Test (EFT-1), while still looking ahead and providing software extensibility for future exploration missions. The primary onboard measurements in a Near-Earth or Mid-Earth environment consist of GPS pseudo-range and delta-range, but for future exploration missions the use of star-tracker and optical navigation sources needs to be considered. Discussions are presented for state size and composition, processing techniques, and consider states. A presentation is given for the processing technique using the computationally stable and robust UDU formulation with an Agee-Turner Rank-One update. This allows for computational savings when dealing with many parameters which are modeled as slowly varying Gauss-Markov processes. Preliminary analysis shows up to a 50% reduction in computation versus a more traditional formulation. Several state elements are discussed and evaluated, including position, velocity, attitude, clock bias/drift, and GPS measurement biases in addition to bias, scale factor, misalignment, and non-orthogonalities of the accelerometers and gyroscopes. Another consideration is the initialization of the EKF in various scenarios. Scenarios such as single-event upset, ground command, and cold start are discussed, as are strategies for whole and partial state updates as well as covariance considerations. Strategies are given for dealing with latent measurements and high-rate propagation using a multi-rate architecture. The details of the rate groups and the data flow between the elements are discussed and evaluated.
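
    For readers unfamiliar with the UDU formulation, the sketch below shows a plain NumPy version of an Agee-Turner rank-one covariance update (refactorizing U D U^T + c a a^T in place); it follows the textbook recursion, not the Orion flight software, and the test matrices are arbitrary.

```python
import numpy as np

def agee_turner_update(U, d, c, a):
    """Rank-one update of a UDU^T factorization: returns (U_new, d_new) such that
    U_new diag(d_new) U_new^T = U diag(d) U^T + c a a^T.
    U is unit upper triangular, d holds the diagonal of D, and c >= 0."""
    n = len(d)
    U, d, a = U.copy(), d.copy(), a.astype(float).copy()
    for j in range(n - 1, 0, -1):
        dj_new = d[j] + c * a[j] ** 2
        for k in range(j):
            a[k] -= a[j] * U[k, j]                 # fold column j out of the update vector
            U[k, j] += c * a[j] * a[k] / dj_new    # correct column j of U
        c = c * d[j] / dj_new                      # reduced scalar passed to leading block
        d[j] = dj_new
    d[0] = d[0] + c * a[0] ** 2
    return U, d

# Arbitrary test: verify against a dense reconstruction.
rng = np.random.default_rng(2)
n = 4
U = np.triu(rng.normal(size=(n, n)), 1) + np.eye(n)
d = rng.uniform(0.5, 2.0, n)
a = rng.normal(size=n)
c = 0.7
U2, d2 = agee_turner_update(U, d, c, a)
P_ref = U @ np.diag(d) @ U.T + c * np.outer(a, a)
print(np.allclose(U2 @ np.diag(d2) @ U2.T, P_ref))   # True
```

    Keeping the covariance in U and D factors avoids forming (and potentially losing the positive definiteness of) the full covariance matrix, which is the usual motivation for UDU filters on flight computers.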

  3. Enumeration and extension of non-equivalent deterministic update schedules in Boolean networks.

    PubMed

    Palma, Eduardo; Salinas, Lilian; Aracena, Julio

    2016-03-01

    Boolean networks (BNs) are commonly used to model genetic regulatory networks (GRNs). Due to the sensitivity of the dynamical behavior to changes in the updating scheme (the order in which the nodes of a network update their state values), it is increasingly common to use different updating rules in the modeling of GRNs to better capture an observed biological phenomenon and thus to obtain more realistic models. In Aracena et al., equivalence classes of deterministic update schedules in BNs, which yield exactly the same dynamical behavior of the network, were defined according to a certain label function on the arcs of the interaction digraph defined for each scheme. Thus, the interaction digraphs so labeled (update digraphs) encode the non-equivalent schedules. We address the problem of enumerating all non-equivalent deterministic update schedules of a given BN. First, we show that it is an intractable problem in general. To solve it, we construct an algorithm that determines the set of update digraphs of a BN. For that, we use a divide-and-conquer methodology based on the structural characteristics of the interaction digraph. Next, for each update digraph we determine an associated schedule. This algorithm also works in the case where there is partial knowledge about the relative order of the updating of the states of the nodes. We exhibit some examples of how the algorithm works on some GRNs published in the literature. An executable file of the UpdateLabel algorithm made in Java and the files with the outputs of the algorithms used with the GRNs are available at: www.inf.udec.cl/∼lilian/UDE/. CONTACT: lilisalinas@udec.cl. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. Single-Trial Event-Related Potential Correlates of Belief Updating

    PubMed Central

    Murawski, Carsten; Bode, Stefan

    2015-01-01

    Abstract Belief updating—the process by which an agent alters an internal model of its environment—is a core function of the CNS. Recent theory has proposed broad principles by which belief updating might operate, but more precise details of its implementation in the human brain remain unclear. In order to address this question, we studied how two components of the human event-related potential encoded different aspects of belief updating. Participants completed a novel perceptual learning task while electroencephalography was recorded. Participants learned the mapping between the contrast of a dynamic visual stimulus and a monetary reward and updated their beliefs about a target contrast on each trial. A Bayesian computational model was formulated to estimate belief states at each trial and was used to quantify the following two variables: belief update size and belief uncertainty. Robust single-trial regression was used to assess how these model-derived variables were related to the amplitudes of the P3 and the stimulus-preceding negativity (SPN), respectively. Results showed a positive relationship between belief update size and P3 amplitude at one fronto-central electrode, and a negative relationship between SPN amplitude and belief uncertainty at a left central and a right parietal electrode. These results provide evidence that belief update size and belief uncertainty have distinct neural signatures that can be tracked in single trials in specific ERP components. This, in turn, provides evidence that the cognitive mechanisms underlying belief updating in humans can be described well within a Bayesian framework. PMID:26473170
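
    A toy Gaussian analogue of the two model-derived quantities, belief update size and belief uncertainty, can be written compactly; this conjugate-Gaussian sketch is a generic illustration, not the authors' Bayesian computational model of the contrast-reward task, and the numbers are made up.

```python
def gaussian_belief_update(mu, var, obs, obs_var):
    """Conjugate Gaussian update of a belief N(mu, var) by a noisy observation.
    Returns the new belief plus the two quantities analogous to those regressed
    against the ERP components: update size and prior uncertainty."""
    gain = var / (var + obs_var)
    mu_new = mu + gain * (obs - mu)
    var_new = (1.0 - gain) * var
    return mu_new, var_new, abs(mu_new - mu), var

mu, var = 0.5, 0.25          # initial belief about the target contrast (assumed)
for obs in [0.62, 0.55, 0.71, 0.66]:
    mu, var, update_size, uncertainty = gaussian_belief_update(mu, var, obs, obs_var=0.04)
    print("belief %.3f  update size %.3f  prior uncertainty %.3f"
          % (mu, update_size, uncertainty))
```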

  5. Overview and Evaluation of the Community Multiscale Air Quality (CMAQ) Modeling System Version 5.2

    EPA Science Inventory

    A new version of the Community Multiscale Air Quality (CMAQ) model, version 5.2 (CMAQv5.2), is currently being developed, with a planned release date in 2017. The new model includes numerous updates from the previous version of the model (CMAQv5.1). Specific updates include a new...

  6. A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements

    PubMed Central

    Mohsenzadeh, Yalda; Dash, Suryadeep; Crawford, J. Douglas

    2016-01-01

    In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon, and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement; saccades and smooth pursuit. Our proposed model is a non-linear SSM and implemented through a recurrent radial-basis-function neural network in a dual Extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks. PMID:27242452

  7. Cardiac Arrest and Cardiopulmonary Resuscitation Outcome Reports: Update of the Utstein Resuscitation Registry Templates for Out-of-Hospital Cardiac Arrest: A Statement for Healthcare Professionals From a Task Force of the International Liaison Committee on Resuscitation (American Heart Association, European Resuscitation Council, Australian and New Zealand Council on Resuscitation, Heart and Stroke Foundation of Canada, InterAmerican Heart Foundation, Resuscitation Council of Southern Africa, Resuscitation Council of Asia); and the American Heart Association Emergency Cardiovascular Care Committee and the Council on Cardiopulmonary, Critical Care, Perioperative and Resuscitation.

    PubMed

    Perkins, Gavin D; Jacobs, Ian G; Nadkarni, Vinay M; Berg, Robert A; Bhanji, Farhan; Biarent, Dominique; Bossaert, Leo L; Brett, Stephen J; Chamberlain, Douglas; de Caen, Allan R; Deakin, Charles D; Finn, Judith C; Gräsner, Jan-Thorsten; Hazinski, Mary Fran; Iwami, Taku; Koster, Rudolph W; Lim, Swee Han; Ma, Matthew Huei-Ming; McNally, Bryan F; Morley, Peter T; Morrison, Laurie J; Monsieurs, Koenraad G; Montgomery, William; Nichol, Graham; Okada, Kazuo; Ong, Marcus Eng Hock; Travers, Andrew H; Nolan, Jerry P

    2015-11-01

    Utstein-style guidelines contribute to improved public health internationally by providing a structured framework with which to compare emergency medical services systems. Advances in resuscitation science, new insights into important predictors of outcome from out-of-hospital cardiac arrest, and lessons learned from methodological research prompted this review and update of the 2004 Utstein guidelines. Representatives of the International Liaison Committee on Resuscitation developed an updated Utstein reporting framework iteratively by meeting face to face, by teleconference, and by Web survey during 2012 through 2014. Herein are recommendations for reporting out-of-hospital cardiac arrest. Data elements were grouped by system factors, dispatch/recognition, patient variables, resuscitation/postresuscitation processes, and outcomes. Elements were classified as core or supplemental using a modified Delphi process primarily based on respondents' assessment of the evidence-based importance of capturing those elements, tempered by the challenges to collect them. New or modified elements reflected consensus on the need to account for emergency medical services system factors, increasing availability of automated external defibrillators, data collection processes, epidemiology trends, increasing use of dispatcher-assisted cardiopulmonary resuscitation, emerging field treatments, postresuscitation care, prognostication tools, and trends in organ recovery. A standard reporting template is recommended to promote standardized reporting. This template facilitates reporting of the bystander-witnessed, shockable rhythm as a measure of emergency medical services system efficacy and all emergency medical services system-treated arrests as a measure of system effectiveness. Several additional important subgroups are identified that enable an estimate of the specific contribution of rhythm and bystander actions that are key determinants of outcome. Copyright © 2014 European Resuscitation Council and American Heart Association, Inc. Published by Elsevier Ireland Ltd.. All rights reserved.

  8. Cardiac arrest and cardiopulmonary resuscitation outcome reports: update of the Utstein Resuscitation Registry Templates for Out-of-Hospital Cardiac Arrest: a statement for healthcare professionals from a task force of the International Liaison Committee on Resuscitation (American Heart Association, European Resuscitation Council, Australian and New Zealand Council on Resuscitation, Heart and Stroke Foundation of Canada, InterAmerican Heart Foundation, Resuscitation Council of Southern Africa, Resuscitation Council of Asia); and the American Heart Association Emergency Cardiovascular Care Committee and the Council on Cardiopulmonary, Critical Care, Perioperative and Resuscitation.

    PubMed

    Perkins, Gavin D; Jacobs, Ian G; Nadkarni, Vinay M; Berg, Robert A; Bhanji, Farhan; Biarent, Dominique; Bossaert, Leo L; Brett, Stephen J; Chamberlain, Douglas; de Caen, Allan R; Deakin, Charles D; Finn, Judith C; Gräsner, Jan-Thorsten; Hazinski, Mary Fran; Iwami, Taku; Koster, Rudolph W; Lim, Swee Han; Huei-Ming Ma, Matthew; McNally, Bryan F; Morley, Peter T; Morrison, Laurie J; Monsieurs, Koenraad G; Montgomery, William; Nichol, Graham; Okada, Kazuo; Eng Hock Ong, Marcus; Travers, Andrew H; Nolan, Jerry P

    2015-09-29

    Utstein-style guidelines contribute to improved public health internationally by providing a structured framework with which to compare emergency medical services systems. Advances in resuscitation science, new insights into important predictors of outcome from out-of-hospital cardiac arrest, and lessons learned from methodological research prompted this review and update of the 2004 Utstein guidelines. Representatives of the International Liaison Committee on Resuscitation developed an updated Utstein reporting framework iteratively by meeting face to face, by teleconference, and by Web survey during 2012 through 2014. Herein are recommendations for reporting out-of-hospital cardiac arrest. Data elements were grouped by system factors, dispatch/recognition, patient variables, resuscitation/postresuscitation processes, and outcomes. Elements were classified as core or supplemental using a modified Delphi process primarily based on respondents' assessment of the evidence-based importance of capturing those elements, tempered by the challenges to collect them. New or modified elements reflected consensus on the need to account for emergency medical services system factors, increasing availability of automated external defibrillators, data collection processes, epidemiology trends, increasing use of dispatcher-assisted cardiopulmonary resuscitation, emerging field treatments, postresuscitation care, prognostication tools, and trends in organ recovery. A standard reporting template is recommended to promote standardized reporting. This template facilitates reporting of the bystander-witnessed, shockable rhythm as a measure of emergency medical services system efficacy and all emergency medical services system-treated arrests as a measure of system effectiveness. Several additional important subgroups are identified that enable an estimate of the specific contribution of rhythm and bystander actions that are key determinants of outcome. © 2014 by the American Heart Association, Inc., and European Resuscitation Council.

  9. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2

    NASA Astrophysics Data System (ADS)

    Gantt, B.; Kelly, J. T.; Bash, J. O.

    2015-11-01

    Sea spray aerosols (SSAs) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. Model evaluations of SSA emissions have mainly focused on the global scale, but regional-scale evaluations are also important due to the localized impact of SSAs on atmospheric chemistry near the coast. In this study, SSA emissions in the Community Multiscale Air Quality (CMAQ) model were updated to enhance the fine-mode size distribution, include sea surface temperature (SST) dependency, and reduce surf-enhanced emissions. Predictions from the updated CMAQ model and those of the previous release version, CMAQv5.0.2, were evaluated using several coastal and national observational data sets in the continental US. The updated emissions generally reduced model underestimates of sodium, chloride, and nitrate surface concentrations for coastal sites in the Bay Regional Atmospheric Chemistry Experiment (BRACE) near Tampa, Florida. Including SST dependency to the SSA emission parameterization led to increased sodium concentrations in the southeastern US and decreased concentrations along parts of the Pacific coast and northeastern US. The influence of sodium on the gas-particle partitioning of nitrate resulted in higher nitrate particle concentrations in many coastal urban areas due to increased condensation of nitric acid in the updated simulations, potentially affecting the predicted nitrogen deposition in sensitive ecosystems. Application of the updated SSA emissions to the California Research at the Nexus of Air Quality and Climate Change (CalNex) study period resulted in a modest improvement in the predicted surface concentration of sodium and nitrate at several central and southern California coastal sites. This update of SSA emissions enabled a more realistic simulation of the atmospheric chemistry in coastal environments where marine air mixes with urban pollution.

  10. Black-boxing and cause-effect power

    PubMed Central

    Albantakis, Larissa; Tononi, Giulio

    2018-01-01

    Reductionism assumes that causation in the physical world occurs at the micro level, excluding the emergence of macro-level causation. We challenge this reductionist assumption by employing a principled, well-defined measure of intrinsic cause-effect power–integrated information (Φ), and showing that, according to this measure, it is possible for a macro level to “beat” the micro level. Simple systems were evaluated for Φ across different spatial and temporal scales by systematically considering all possible black boxes. These are macro elements that consist of one or more micro elements over one or more micro updates. Cause-effect power was evaluated based on the inputs and outputs of the black boxes, ignoring the internal micro elements that support their input-output function. We show how black-box elements can have more common inputs and outputs than the corresponding micro elements, revealing the emergence of high-order mechanisms and joint constraints that are not apparent at the micro level. As a consequence, a macro, black-box system can have higher Φ than its micro constituents by having more mechanisms (higher composition) that are more interconnected (higher integration). We also show that, for a given micro system, one can identify local maxima of Φ across several spatiotemporal scales. The framework is demonstrated on a simple biological system, the Boolean network model of the fission-yeast cell-cycle, for which we identify stable local maxima during the course of its simulated biological function. These local maxima correspond to macro levels of organization at which emergent cause-effect properties of physical systems come into focus, and provide a natural vantage point for scientific inquiries. PMID:29684020

  11. ADAPTIVE TETRAHEDRAL GRID REFINEMENT AND COARSENING IN MESSAGE-PASSING ENVIRONMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hallberg, J.; Stagg, A.

    2000-10-01

    A grid refinement and coarsening scheme has been developed for tetrahedral and triangular grid-based calculations in message-passing environments. The element adaption scheme is based on an edge bisection of elements marked for refinement by an appropriate error indicator. Hash-table/linked-list data structures are used to store nodal and element information. The grid along inter-processor boundaries is refined and coarsened consistently with the update of these data structures via MPI calls. The parallel adaption scheme has been applied to the solution of a transient, three-dimensional, nonlinear, groundwater flow problem. Timings indicate efficiency of the grid refinement process relative to the flow solver calculations.
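    A minimal serial sketch of the edge-bisection idea is given below for triangles: a hash table keyed by the sorted endpoint pair guarantees that a shared marked edge receives exactly one midpoint node. Conformity enforcement (closure of hanging nodes), coarsening, and the MPI exchange that keeps inter-processor boundary edges consistent are all omitted, and the data layout is illustrative rather than the scheme's actual hash-table/linked-list structures.

```python
def refine_marked(nodes, triangles, marked_edges):
    """nodes: list of (x, y); triangles: list of (i, j, k) node indices;
    marked_edges: set of frozenset({i, j}) edges flagged by an error indicator."""
    midpoint_of = {}                      # edge key -> new node index (hash table)

    def midpoint(i, j):
        # Create the midpoint node once per edge, even if two triangles share it.
        key = frozenset((i, j))
        if key not in midpoint_of:
            xi, yi = nodes[i]
            xj, yj = nodes[j]
            nodes.append(((xi + xj) / 2.0, (yi + yj) / 2.0))
            midpoint_of[key] = len(nodes) - 1
        return midpoint_of[key]

    refined = []
    for i, j, k in triangles:
        # For brevity only the (i, j) edge of each triangle is tested here.
        if frozenset((i, j)) in marked_edges:
            m = midpoint(i, j)
            refined.append((i, m, k))
            refined.append((m, j, k))
        else:
            refined.append((i, j, k))
    return nodes, refined

# Two triangles sharing edge (1, 2); bisecting it refines both consistently.
nodes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
tris = [(0, 1, 2), (1, 2, 3)]      # second triangle lists the shared edge first
print(refine_marked(nodes, tris, {frozenset((1, 2))}))
```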

  12. Wisconsin's forest statistics, 1987: an inventory update.

    Treesearch

    W. Brad Smith; Jerold T. Hahn

    1989-01-01

    The Wisconsin 1987 inventory update, derived by using tree growth models, reports 14.7 million acres of timberland, a decline of less than 1% since 1983. This bulletin presents findings from the inventory update in tables detailing timberland area, volume, and biomass.

  13. Stabilizing Motifs in Autonomous Boolean Networks and the Yeast Cell Cycle Oscillator

    NASA Astrophysics Data System (ADS)

    Sevim, Volkan; Gong, Xinwei; Socolar, Joshua

    2009-03-01

    Synchronously updated Boolean networks are widely used to model gene regulation. Some properties of these model networks are known to be artifacts of the clocking in the update scheme. Autonomous updating is a less artificial scheme that allows one to introduce small timing perturbations and study stability of the attractors. We argue that the stabilization of a limit cycle in an autonomous Boolean network requires a combination of motifs such as feed-forward loops and auto-repressive links that can correct small fluctuations in the timing of switching events. A recently published model of the transcriptional cell-cycle oscillator in yeast contains the motifs necessary for stability under autonomous updating [1]. [1] D. A. Orlando, et al., Nature (London), 453(7197):944-947, 2008.
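    The sketch below contrasts a clocked synchronous update with a crude stand-in for perturbed (autonomous-style) timing on a three-node network containing a feed-forward motif. The network and rules are illustrative only and are not the yeast cell-cycle model.

```python
import random

# Three-node illustrative Boolean network (not the yeast oscillator).
rules = {
    "A": lambda s: not s["C"],          # C represses A
    "B": lambda s: s["A"],              # A activates B
    "C": lambda s: s["A"] and s["B"],   # coherent feed-forward onto C
}

def synchronous_step(state):
    """Clocked scheme: every node reads the previous state and switches together."""
    return {node: bool(rule(state)) for node, rule in rules.items()}

def perturbed_step(state, skip_prob=0.2):
    """Crude stand-in for autonomous timing: each node may lag one update,
    keeping its old value, so switching events are no longer simultaneous."""
    new = synchronous_step(state)
    return {n: (state[n] if random.random() < skip_prob else new[n]) for n in new}

state = {"A": True, "B": False, "C": False}
for _ in range(6):
    state = synchronous_step(state)
    print("synchronous:", state)

random.seed(1)
state = {"A": True, "B": False, "C": False}
for _ in range(6):
    state = perturbed_step(state)
    print("perturbed:  ", state)
```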

  14. Using Four Downscaling Techniques to Characterize Uncertainty in Updating Intensity-Duration-Frequency Curves Under Climate Change

    NASA Astrophysics Data System (ADS)

    Cook, L. M.; Samaras, C.; McGinnis, S. A.

    2017-12-01

    Intensity-duration-frequency (IDF) curves are a common input to urban drainage design, and are used to represent extreme rainfall in a region. As rainfall patterns shift into a non-stationary regime as a result of climate change, these curves will need to be updated with future projections of extreme precipitation. Many regions have begun to update these curves to reflect the trends from downscaled climate models; however, few studies have compared the methods for doing so, as well as the uncertainty that results from the selection of the native grid scale and temporal resolution of the climate model. This study examines the variability in updated IDF curves for Pittsburgh using four different methods for adjusting gridded regional climate model (RCM) outputs into station-scale precipitation extremes: (1) a simple change factor applied to observed return levels, (2) a naïve adjustment of stationary and non-stationary Generalized Extreme Value (GEV) distribution parameters, (3) a transfer function of the GEV parameters from the annual maximum series, and (4) kernel density distribution mapping bias correction of the RCM time series. Return level estimates (rainfall intensities) and confidence intervals from these methods for the 1-hour to 48-hour duration are tested for sensitivity to the underlying spatial and temporal resolution of the climate ensemble from the NA-CORDEX project, as well as the future time period for updating. The first goal is to determine if uncertainty is highest for: (i) the downscaling method, (ii) the climate model resolution, (iii) the climate model simulation, (iv) the GEV parameters, or (v) the future time period examined. Initial results of the 6-hour, 10-year return level adjusted with the simple change factor method using four climate model simulations of two different spatial resolutions show that uncertainty is highest in the estimation of the GEV parameters. The second goal is to determine if complex downscaling methods and high-resolution climate models are necessary for updating, or if simpler methods and lower resolution climate models will suffice. The final results can be used to inform the most appropriate method and climate model resolutions to use for updating IDF curves for urban drainage design.
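    As a sketch of the simplest of the four approaches (the change-factor method, method 1), the code below fits GEV distributions to historical and future model annual-maximum series with scipy.stats.genextreme, takes the ratio of the modeled T-year return levels, and scales the observed return level. The data are synthetic and the function names are hypothetical.

```python
import numpy as np
from scipy.stats import genextreme

def return_level(annual_maxima, T):
    """T-year return level from a GEV fit to an annual-maximum series."""
    shape, loc, scale = genextreme.fit(annual_maxima)
    # Exceedance probability 1/T per year for annual maxima.
    return genextreme.isf(1.0 / T, shape, loc=loc, scale=scale)

def change_factor_update(obs_maxima, model_hist_maxima, model_future_maxima, T=10):
    """Simple change-factor method: scale the observed T-year return level by
    the ratio of future to historical return levels from the climate model."""
    factor = return_level(model_future_maxima, T) / return_level(model_hist_maxima, T)
    return factor * return_level(obs_maxima, T)

# Synthetic demonstration only (the 6-hour annual maxima below are made up).
obs = genextreme.rvs(-0.1, loc=30, scale=8, size=50, random_state=0)
hist = genextreme.rvs(-0.1, loc=28, scale=8, size=50, random_state=1)
futr = genextreme.rvs(-0.1, loc=33, scale=10, size=50, random_state=2)
print(change_factor_update(obs, hist, futr, T=10))
```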

  15. Analysis of Fluid Gauge Sensor for Zero or Microgravity Conditions using Finite Element Method

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar D.; Doiron, Terence A.

    2007-01-01

    In this paper the Finite Element Method (FEM) is presented for mass/volume gauging of a fluid in a tank subjected to zero or microgravity conditions. In this approach, the mutual capacitances between electrodes embedded inside the tank are first measured. Assuming the medium properties, the mutual capacitances are also estimated using the FEM approach. Using proper non-linear optimization, the assumed properties are updated by minimizing the mean square error between the estimated and measured capacitance values. Numerical results are presented to validate the present approach.
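    A sketch of the outer inverse-problem loop is given below, with the FEM electrostatic solve replaced by a linear surrogate; scipy.optimize.least_squares minimizes the misfit between measured and predicted mutual capacitances. Everything except the general structure (forward model, residual, bounded least squares) is a placeholder rather than the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def predicted_capacitances(eps_params, geometry):
    """Placeholder for the FEM forward solve that returns mutual capacitances
    between electrode pairs for assumed permittivity values. In the paper this
    would be a full finite element electrostatics solution."""
    # Illustrative linear surrogate: C_i = sum_k geometry[i, k] * eps_k
    return geometry @ eps_params

def identify_fluid_properties(measured_C, geometry, eps0_guess):
    """Update the assumed medium properties (e.g., per-region permittivity,
    which tracks the fluid fill) by minimizing the capacitance misfit."""
    def residual(eps_params):
        return predicted_capacitances(eps_params, geometry) - measured_C
    result = least_squares(residual, eps0_guess, bounds=(1.0, 100.0))
    return result.x

# Synthetic example: 6 electrode pairs, 2 material regions (gas vs. liquid).
rng = np.random.default_rng(1)
G = rng.uniform(0.1, 1.0, size=(6, 2))              # surrogate sensitivity matrix
true_eps = np.array([1.0, 20.0])
measured = G @ true_eps + rng.normal(0.0, 1e-3, size=6)
print(identify_fluid_properties(measured, G, eps0_guess=np.array([5.0, 5.0])))
```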

  16. Development of Life Support System Technologies for Human Lunar Missions

    NASA Technical Reports Server (NTRS)

    Barta, Daniel J.; Ewert, Michael K.

    2009-01-01

    With the Preliminary Design Review (PDR) for the Orion Crew Exploration Vehicle planned to be completed in 2009, Exploration Life Support (ELS), a technology development project under the National Aeronautics and Space Administration's (NASA) Exploration Technology Development Program, is focusing its efforts on needs for human lunar missions. The ELS Project's goal is to develop and mature a suite of Environmental Control and Life Support System (ECLSS) technologies for potential use on human spacecraft under development in support of U.S. Space Exploration Policy. ELS technology development is directed at three major vehicle projects within NASA's Constellation Program (CxP): the Orion Crew Exploration Vehicle (CEV), the Altair Lunar Lander and Lunar Surface Systems, including habitats and pressurized rovers. The ELS Project includes four technical elements: Atmosphere Revitalization Systems, Water Recovery Systems, Waste Management Systems and Habitation Engineering, and two cross-cutting elements, Systems Integration, Modeling and Analysis, and Validation and Testing. This paper will provide an overview of the ELS Project, connectivity with its customers and an update to content within its technology development portfolio with focus on human lunar missions.

  17. Equivalent orthotropic elastic moduli identification method for laminated electrical steel sheets

    NASA Astrophysics Data System (ADS)

    Saito, Akira; Nishikawa, Yasunari; Yamasaki, Shintaro; Fujita, Kikuo; Kawamoto, Atsushi; Kuroishi, Masakatsu; Nakai, Hideo

    2016-05-01

    In this paper, a combined numerical-experimental methodology for the identification of elastic moduli of orthotropic media is presented. Special attention is given to the laminated electrical steel sheets, which are modeled as orthotropic media with nine independent engineering elastic moduli. The elastic moduli are determined specifically for use with finite element vibration analyses. We propose a three-step methodology based on a conventional nonlinear least squares fit between measured and computed natural frequencies. The methodology consists of: (1) successive augmentations of the objective function by increasing the number of modes, (2) initial condition updates, and (3) appropriate selection of the natural frequencies based on their sensitivities to the elastic moduli. Using the results of numerical experiments, it is shown that the proposed method achieves a more accurate converged solution than a conventional approach. Finally, the proposed method is applied to measured natural frequencies and mode shapes of the laminated electrical steel sheets. It is shown that the method can successfully identify the orthotropic elastic moduli that can reproduce the measured natural frequencies and frequency response functions by using finite element analyses with reasonable accuracy.
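    The step-(3) sensitivity screening can be illustrated with finite differences, as in the hypothetical sketch below, where the finite element modal solve is replaced by a placeholder returning natural frequencies as a function of three representative moduli. The resulting matrix of d f_i / d p_j values indicates which measured frequencies usefully constrain which moduli; the forward model and constants are placeholders only.

```python
import numpy as np

def fe_natural_frequencies(moduli):
    """Placeholder for the finite element modal solve returning natural
    frequencies (Hz) as a function of three elastic moduli (GPa). A real
    implementation would assemble K(moduli) and M and solve the generalized
    eigenvalue problem."""
    E1, E2, G12 = moduli
    return np.array([10.0 * np.sqrt(E1), 16.0 * np.sqrt(E2), 7.0 * np.sqrt(G12)])

def frequency_sensitivities(moduli, rel_step=1e-3):
    """Finite-difference sensitivities d f_i / d p_j, used to select which
    natural frequencies to keep in the least squares fit."""
    f0 = fe_natural_frequencies(moduli)
    S = np.zeros((f0.size, len(moduli)))
    for j, p in enumerate(moduli):
        perturbed = np.array(moduli, dtype=float)
        perturbed[j] = p * (1.0 + rel_step)
        S[:, j] = (fe_natural_frequencies(perturbed) - f0) / (p * rel_step)
    return S

print(frequency_sensitivities([70.0, 40.0, 25.0]))   # illustrative moduli in GPa
```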

  18. Improved maize reference genome with single-molecule technologies.

    PubMed

    Jiao, Yinping; Peluso, Paul; Shi, Jinghua; Liang, Tiffany; Stitzer, Michelle C; Wang, Bo; Campbell, Michael S; Stein, Joshua C; Wei, Xuehong; Chin, Chen-Shan; Guill, Katherine; Regulski, Michael; Kumari, Sunita; Olson, Andrew; Gent, Jonathan; Schneider, Kevin L; Wolfgruber, Thomas K; May, Michael R; Springer, Nathan M; Antoniou, Eric; McCombie, W Richard; Presting, Gernot G; McMullen, Michael; Ross-Ibarra, Jeffrey; Dawe, R Kelly; Hastie, Alex; Rank, David R; Ware, Doreen

    2017-06-22

    Complete and accurate reference genomes and annotations provide fundamental tools for characterization of genetic and functional variation. These resources facilitate the determination of biological processes and support translation of research findings into improved and sustainable agricultural technologies. Many reference genomes for crop plants have been generated over the past decade, but these genomes are often fragmented and missing complex repeat regions. Here we report the assembly and annotation of a reference genome of maize, a genetic and agricultural model species, using single-molecule real-time sequencing and high-resolution optical mapping. Relative to the previous reference genome, our assembly features a 52-fold increase in contig length and notable improvements in the assembly of intergenic spaces and centromeres. Characterization of the repetitive portion of the genome revealed more than 130,000 intact transposable elements, allowing us to identify transposable element lineage expansions that are unique to maize. Gene annotations were updated using 111,000 full-length transcripts obtained by single-molecule real-time sequencing. In addition, comparative optical mapping of two other inbred maize lines revealed a prevalence of deletions in regions of low gene density and maize lineage-specific genes.

  19. Experimental Evaluation of Processing Time for the Synchronization of XML-Based Business Objects

    NASA Astrophysics Data System (ADS)

    Ameling, Michael; Wolf, Bernhard; Springer, Thomas; Schill, Alexander

    Business objects (BOs) are data containers for complex data structures used in business applications such as Supply Chain Management and Customer Relationship Management. Due to the replication of application logic, multiple copies of BOs are created which have to be synchronized and updated. This is a complex and time consuming task because BOs vary considerably in their structure according to the distribution, number and size of elements. Since BOs are internally represented as XML documents, the parsing of XML is one major cost factor which has to be considered for minimizing the processing time during synchronization. Predicting the parsing time of a BO is therefore a significant input for the selection of an efficient synchronization mechanism. In this paper, we present a method to evaluate the influence of the structure of BOs on their parsing time. The results of our experimental evaluation, incorporating four different XML parsers, examine the dependencies between the distribution of elements and the parsing time. Finally, a general cost model will be validated and simplified according to the results of the experimental setup.
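    A minimal sketch of this kind of measurement is shown below using Python's standard xml.etree.ElementTree (only one of many possible parsers; the study compared four). The document generator and the timing loop are illustrative and do not reproduce the BO structures from the paper.

```python
import time
import xml.etree.ElementTree as ET

def synthetic_bo_xml(n_items, payload="x" * 50):
    """Build a flat, business-object-like XML string with n_items child elements."""
    items = "".join(
        f"<item id='{i}'><value>{payload}</value></item>" for i in range(n_items)
    )
    return f"<businessObject>{items}</businessObject>"

def parse_time_seconds(xml_string, repeats=20):
    """Average wall-clock time to parse the document with ElementTree."""
    start = time.perf_counter()
    for _ in range(repeats):
        ET.fromstring(xml_string)
    return (time.perf_counter() - start) / repeats

# Parsing time versus element count for the synthetic documents.
for n in (100, 1_000, 10_000):
    doc = synthetic_bo_xml(n)
    print(f"{n:>6} elements: {parse_time_seconds(doc) * 1e3:.2f} ms")
```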

  20. Modeling of Rolling Element Bearing Mechanics: Computer Program Updates

    NASA Technical Reports Server (NTRS)

    Ryan, S. G.

    1997-01-01

    The Rolling Element Bearing Analysis System (REBANS) extends the capability available with traditional quasi-static bearing analysis programs by including the effects of bearing race and support flexibility. This tool was developed under contract for NASA-MSFC. The initial version delivered at the close of the contract contained several errors and exhibited numerous convergence difficulties. The program has been modified in-house at MSFC to correct the errors and greatly improve the convergence. The modifications consist of significant changes in the problem formulation and nonlinear convergence procedures. The original approach utilized sequential convergence for nested loops to achieve final convergence. This approach proved to be seriously deficient in robustness. Convergence was more the exception than the rule. The approach was changed to iterate all variables simultaneously. This approach has the advantage of using knowledge of the effect of each variable on each other variable (via the system Jacobian) when determining the incremental changes. This method has proved to be quite robust in its convergence. This technical memorandum documents the changes required for the original Theoretical Manual and User's Manual due to the new approach.
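    The change from nested sequential convergence to a simultaneous iteration can be sketched as a full-system Newton solve, as below: each step solves J(x) dx = -F(x) with the complete Jacobian, so every variable is updated with knowledge of its effect on every residual. The two-equation system is a toy stand-in for the bearing equilibrium equations, not the REBANS formulation.

```python
import numpy as np

def residual(x):
    """Toy coupled nonlinear system standing in for the bearing equilibrium
    equations; the real residuals couple race, ball, and support variables."""
    x1, x2 = x
    return np.array([x1**2 + x2 - 3.0,
                     x1 + x2**3 - 5.0])

def jacobian(x):
    """Analytic Jacobian of the toy residuals."""
    x1, x2 = x
    return np.array([[2.0 * x1, 1.0],
                     [1.0, 3.0 * x2**2]])

def newton_all_variables(x0, tol=1e-10, max_iter=50):
    """Iterate all unknowns simultaneously: solve J(x) dx = -F(x) each step,
    rather than converging one nested loop of variables at a time."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        F = residual(x)
        if np.linalg.norm(F) < tol:
            break
        x += np.linalg.solve(jacobian(x), -F)
    return x

print(newton_all_variables([1.0, 1.0]))
```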
