Hunt, R.J.; Anderson, M.P.; Kelson, V.A.
1998-01-01
This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
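A minimal sketch of the variance-based sensitivity idea described above, with a cheap quadratic response surface standing in for the expensive finite element solutions. The two-parameter response function, sample sizes, and coefficients are invented for illustration, not taken from the Ares I-X models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an expensive finite element solve: a modal frequency as a
# function of two normalized joint-stiffness parameters (hypothetical).
def fe_response(k1, k2):
    return 100.0 + 8.0 * k1 + 2.0 * k2 + 3.0 * k1 * k2

# 1) Fit a quadratic response surface to a small design of experiments.
n = 50
X = rng.uniform(-1.0, 1.0, size=(n, 2))
y = fe_response(X[:, 0], X[:, 1])
A = np.column_stack([np.ones(n), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def surrogate(x1, x2):
    return (coef[0] + coef[1] * x1 + coef[2] * x2
            + coef[3] * x1 ** 2 + coef[4] * x2 ** 2 + coef[5] * x1 * x2)

# 2) First-order ANOVA (Sobol) indices, S_i = Var(E[Y|x_i]) / Var(Y),
#    estimated on the cheap surrogate by Monte Carlo and binning.
m = 100_000
x1, x2 = rng.uniform(-1, 1, m), rng.uniform(-1, 1, m)
y_mc = surrogate(x1, x2)

def first_order_index(xi, y, nbins=20):
    edges = np.quantile(xi, np.linspace(0, 1, nbins + 1))
    idx = np.clip(np.searchsorted(edges, xi) - 1, 0, nbins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(nbins)])
    return cond_means.var() / y.var()

print("S_k1 ~", first_order_index(x1, y_mc))
print("S_k2 ~", first_order_index(x2, y_mc))
```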
Calibrated Blade-Element/Momentum Theory Aerodynamic Model of the MARIN Stock Wind Turbine: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goupee, A.; Kimball, R.; de Ridder, E. J.
2015-04-02
In this paper, a calibrated blade-element/momentum theory aerodynamic model of the MARIN stock wind turbine is developed and documented. The model is created using open-source software and calibrated to closely emulate experimental data obtained by the DeepCwind Consortium using a genetic algorithm optimization routine. The provided model will be useful for those interested in validating floating wind turbine numerical simulators that rely on experiments utilizing the MARIN stock wind turbine—for example, the International Energy Agency Wind Task 30’s Offshore Code Comparison Collaboration Continued, with Correlation project.
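The genetic-algorithm calibration loop described above can be sketched as follows. The two-parameter stand-in for a blade-element/momentum solve, the parameter bounds, and the GA settings are all hypothetical, not the actual open-source tool chain or the MARIN turbine data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a BEM solve: predicted thrust coefficient vs
# tip-speed ratio, given two tunable airfoil-polar parameters (a, b).
def bem_thrust(tsr, a, b):
    return a * tsr / (1.0 + b * tsr ** 2)

tsr = np.linspace(3, 10, 15)
measured = bem_thrust(tsr, 1.8, 0.02) + rng.normal(0, 0.01, tsr.size)

def fitness(pop):
    # Sum of squared errors between model and experiment (lower is better).
    pred = bem_thrust(tsr[None, :], pop[:, :1], pop[:, 1:])
    return ((pred - measured) ** 2).sum(axis=1)

# Minimal generational GA (no elitism, for brevity): tournament selection,
# blend crossover, Gaussian mutation.
pop = rng.uniform([0.5, 0.0], [3.0, 0.1], size=(40, 2))
for gen in range(100):
    f = fitness(pop)
    i, j = rng.integers(0, 40, (2, 40))
    parents = np.where((f[i] < f[j])[:, None], pop[i], pop[j])
    mates = parents[rng.permutation(40)]
    w = rng.uniform(size=(40, 1))
    pop = w * parents + (1 - w) * mates
    pop += rng.normal(0, 0.01, pop.shape)

best = pop[np.argmin(fitness(pop))]
print("calibrated (a, b):", best)
```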
NASA Astrophysics Data System (ADS)
Pan, S.; Liu, L.; Xu, Y. P.
2017-12-01
Physically based distributed hydrological models involve a large number of parameters representing the spatial heterogeneity of a watershed and the various processes of the hydrologic cycle. Because the Distributed Hydrology Soil Vegetation Model (DHSVM) lacks a calibration module, this study developed a multi-objective calibration module for DHSVM using the Epsilon-Dominance Non-Dominated Sorted Genetic Algorithm II (ɛ-NSGAII) and parallel computing on a Linux cluster (ɛP-DHSVM). Two key hydrologic elements, runoff and evapotranspiration, are used as objectives in the multi-objective calibration. MODIS evapotranspiration derived with SEBAL is adopted to fill the gap left by the lack of evapotranspiration observations. The results show that good runoff performance in single-objective calibration does not ensure good simulation of other key hydrologic elements. The self-developed ɛP-DHSVM makes multi-objective calibration more efficient and effective, increasing running speed by 20-30 times. In addition, runoff and evapotranspiration are simulated well simultaneously by ɛP-DHSVM, with superior values for the two efficiency coefficients (NS of 0.74 for runoff and 0.79 for evapotranspiration; PBIAS of -10.5% and -8.6% for runoff and evapotranspiration, respectively).
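For reference, the two efficiency measures quoted above (NS and PBIAS) and the non-dominated filtering at the core of an ε-NSGA-II-style search can be sketched as below. The synthetic runoff series and candidate simulations are placeholders, and the PBIAS sign convention varies between authors.

```python
import numpy as np

def nse(sim, obs):
    # Nash-Sutcliffe efficiency: 1 is a perfect fit.
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(sim, obs):
    # Percent bias: 0 is unbiased (sign convention differs between authors).
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def pareto_front(objs):
    # objs: (n_solutions, n_objectives), all to be minimized. A solution is
    # kept if no other solution is at least as good in every objective and
    # strictly better in at least one.
    n = objs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        dominates_i = (np.all(objs <= objs[i], axis=1)
                       & np.any(objs < objs[i], axis=1))
        if dominates_i.any():
            keep[i] = False
    return objs[keep]

rng = np.random.default_rng(2)
obs_q = rng.gamma(2.0, 5.0, 365)                    # hypothetical daily runoff
candidates = obs_q + rng.normal(0, 3, (50, 365))    # 50 trial simulations
objs = np.array([[1 - nse(s, obs_q), abs(pbias(s, obs_q))] for s in candidates])
print("non-dominated solutions:", pareto_front(objs).shape[0])
```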
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mathews, M.A.; Bowman, H.R.; Huang, L.H.
A low radioactivity calibration facility has been constructed at the Nevada Test Site (NTS). This facility has four calibration models of natural stone that are 3 ft in diameter and 6 ft long, with a 12 in. cored borehole in the center of each model and a lead-shielded run pipe below each model. These models have been analyzed by laboratory natural gamma ray spectroscopy (NGRS) and neutron activation analysis (NAA) for their K, U, and Th content. Also, 42 other elements were analyzed in the NAA. The ²²²Rn emanation data were collected. Calibrating the spectral gamma tool in this low radioactivity calibration facility allows the spectral gamma log to accurately aid in the recognition and mapping of subsurface stratigraphic units and alteration features associated with unusual concentrations of these radioactive elements, such as clay-rich zones.
Features calibration of the dynamic force transducers
NASA Astrophysics Data System (ADS)
Prilepko, M. Yu.; Lysenko, V. G.
2018-04-01
The article discusses calibration methods for dynamic force measuring instruments. The relevance of the work stems from the need for valid determination of the metrological characteristics of dynamic force transducers, taking their intended application into account. The aim of this work is to justify the choice of a calibration method that determines the metrological characteristics of dynamic force transducers under simulated operating conditions, establishing their suitability for use in accordance with their purpose. The following tasks are solved: the mathematical model and main measurement equation for calibrating dynamic force transducers by load weight are constructed, and the main uncertainty budget components of the calibration are defined. A new method of calibrating dynamic force transducers is proposed, using a reference “force-deformation” converter based on a calibrated elastic element whose deformation is measured by a laser interferometer. The mathematical model and main measurement equation of the proposed method are constructed. It is shown that a calibration method based on laser-interferometer measurement of the calibrated elastic element's deformation excludes, or considerably reduces, the uncertainty budget components inherent in the load-weight method.
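A minimal sketch of the reference-converter idea described above: force is recovered as stiffness times the elastic element's deformation, with the deformation obtained from interferometer fringe counts. The stiffness value and the counts below are illustrative assumptions, not data from the article.

```python
import numpy as np

LAMBDA = 632.8e-9   # He-Ne laser wavelength, m
K_ELEM = 2.0e6      # calibrated elastic-element stiffness, N/m (assumed)

# Hypothetical fringe counts from the interferometer during a force pulse;
# in a Michelson-type setup one fringe corresponds to lambda/2 of motion.
fringe_counts = np.array([0, 120, 410, 860, 1400])
deformation = fringe_counts * LAMBDA / 2.0    # m
force = K_ELEM * deformation                  # F(t) = k * delta(t)
print(force, "N")
```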
NASA Technical Reports Server (NTRS)
Schweikhard, W. G.; Singnoi, W. N.
1985-01-01
A two axis thrust measuring system was analyzed by using a finite element computer program to determine the sensitivities of the thrust vectoring nozzle system to misalignment of the load cells and applied loads, and the stiffness of the structural members. Three models were evaluated: (1) the basic measuring element and its internal calibration load cells; (2) the basic measuring element and its external load calibration equipment; and (3) the basic measuring element, external calibration load frame, and the altitude facility support structure. Alignment of calibration loads was the greatest source of error for multiaxis thrust measuring systems. Uniform increases or decreases in stiffness of the members, which might be caused by the selection of the materials, have little effect on the accuracy of the measurements. It is found that the POLO-FINITE program is a viable tool for designing and analyzing multiaxis thrust measurement systems. The response of the test stand to step inputs that might be encountered with thrust vectoring tests was determined. The dynamic analysis shows a potential problem for measuring the dynamic response characteristics of thrust vectoring systems because of the inherently light damping of the test stand.
NASA Astrophysics Data System (ADS)
Krejsa, M.; Brozovsky, J.; Mikolasek, D.; Parenica, P.; Koubova, L.
2018-04-01
The paper is focused on the numerical modeling of welded steel bearing elements using the commercial software system ANSYS, which is based on the finite element method (FEM). It is important to check and compare the results of the FEM analysis with the results of a physical verification test, in which the real behavior of the bearing element can be observed; the results of the comparison can then be used to calibrate the computational model. The article deals with the physical testing of steel supporting elements, whose main purpose is to obtain the material, geometry and strength characteristics of the fillet and butt welds, including the heat affected zone in the base material of the welded steel bearing element. A pressure test was performed during the experiment, wherein the total load value and the corresponding deformation of the specimens under load were monitored. The data obtained were used for the calibration of numerical models of the test samples and are necessary for further stress and strain analysis of steel supporting elements.
Numerical Analysis of a Radiant Heat Flux Calibration System
NASA Technical Reports Server (NTRS)
Jiang, Shanjuan; Horn, Thomas J.; Dhir, V. K.
1998-01-01
A radiant heat flux gage calibration system exists in the Flight Loads Laboratory at NASA's Dryden Flight Research Center. This calibration system must be well understood if the heat flux gages calibrated in it are to provide useful data during radiant heating ground tests or flight tests of high speed aerospace vehicles. A part of the calibration system characterization process is to develop a numerical model of the flat plate heater element and heat flux gage, which will help identify errors due to convection, heater element erosion, and other factors. A 2-dimensional mathematical model of the gage-plate system has been developed to simulate the combined problem involving convection, radiation and mass loss by chemical reaction. A fourth order finite difference scheme is used to solve the steady state governing equations and determine the temperature distribution in the gage and plate, incident heat flux on the gage face, and flat plate erosion. Initial gage heat flux predictions from the model are found to be within 17% of experimental results.
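The structure of such a steady-state finite-difference solution can be sketched with a much simpler second-order Laplace solver; the paper's model is fourth-order and includes radiation, convection, and mass loss, all omitted here, and the grid size and boundary temperatures below are arbitrary.

```python
import numpy as np

# Steady-state conduction toy problem on a plate cross-section: Laplace's
# equation with fixed edge temperatures, solved by Jacobi iteration.
nx, ny = 41, 41
T = np.zeros((ny, nx))
T[-1, :] = 1000.0            # heater-side edge held hot (illustrative BC)
T[0, :] = 300.0
T[:, 0] = T[:, -1] = 300.0

for _ in range(5000):
    # Each interior node relaxes toward the average of its four neighbors.
    T[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1]
                            + T[1:-1, 2:] + T[1:-1, :-2])

print("centre temperature:", T[ny // 2, nx // 2], "K")
```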
NASA Astrophysics Data System (ADS)
Grotti, Marco; Abelmoschi, Maria Luisa; Soggia, Francesco; Tiberiade, Christian; Frache, Roberto
2000-12-01
The multivariate effects of Na, K, Mg and Ca as nitrates on the electrothermal atomisation of manganese, cadmium and iron were studied by multiple linear regression modelling. Since the models proved to efficiently predict the effects of the considered matrix elements in a wide range of concentrations, they were applied to correct the interferences occurring in the determination of trace elements in seawater after pre-concentration of the analytes. In order to obtain a statistically significant number of samples, a large volume of the certified seawater reference materials CASS-3 and NASS-3 was treated with Chelex-100 resin; then, the chelating resin was separated from the solution, divided into several sub-samples, each of them was eluted with nitric acid and analysed by electrothermal atomic absorption spectrometry (for trace element determinations) and inductively coupled plasma optical emission spectrometry (for matrix element determinations). To minimise any other systematic error besides that due to matrix effects, accuracy of the pre-concentration step and contamination levels of the procedure were checked by inductively coupled plasma mass spectrometric measurements. Analytical results obtained by applying the multiple linear regression models were compared with those obtained with other calibration methods, such as external calibration using acid-based standards, external calibration using matrix-matched standards and the analyte addition technique. Empirical models proved to efficiently reduce interferences occurring in the analysis of real samples, allowing an improvement of accuracy better than for other calibration methods.
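A minimal sketch of the multiple-linear-regression correction strategy described above: fit the signal suppression as a linear function of the matrix-element concentrations, then divide a measured signal by the predicted factor. All concentrations, coefficients, and signals below are synthetic, not the CASS-3/NASS-3 data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical calibration set: matrix-element concentrations (Na, K, Mg, Ca)
# and the observed relative signal suppression for one analyte.
C = rng.uniform(0, 100, size=(30, 4))               # mg/L of Na, K, Mg, Ca
true_beta = np.array([-0.002, -0.001, -0.0005, -0.0015])
suppression = 1.0 + C @ true_beta + rng.normal(0, 0.01, 30)

# Multiple linear regression: suppression = b0 + sum_j b_j * C_j.
A = np.column_stack([np.ones(len(C)), C])
b, *_ = np.linalg.lstsq(A, suppression, rcond=None)

# Correct a measured signal for the matrix present in a real sample.
sample_matrix = np.array([55.0, 20.0, 60.0, 35.0])  # measured by ICP-OES
predicted_factor = b[0] + sample_matrix @ b[1:]
corrected_signal = 0.82 / predicted_factor          # 0.82 = raw absorbance
print(corrected_signal)
```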
NASA Technical Reports Server (NTRS)
Annett, Martin S.; Horta, Lucas G.; Jackson, Karen E.; Polanco, Michael A.; Littell, Justin D.
2012-01-01
Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber (DEA) under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. The presence of this energy absorbing device reduced the peak impact acceleration levels by a factor of three. Accelerations and kinematic data collected from the crash tests were compared to a system-integrated finite element model of the test article developed in parallel with the test program. In preparation for the full-scale crash test, a series of sub-scale and MD-500 mass simulator tests were conducted to evaluate the impact performance of various components and subsystems, including new crush tubes and the DEA blocks. Parameters defined for the system-integrated finite element model were determined from these tests. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purposes of predicting acceleration responses from the first crash test was inadequate when evaluating more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the full-scale crash test without the DEA. This combination of heuristic and quantitative methods identified modeling deficiencies, evaluated parameter importance, and proposed required model changes. The multidimensional calibration techniques presented here are particularly effective in identifying model adequacy. Acceleration results for the calibrated model were compared to test results and the original model results. There was a noticeable improvement in the pilot and copilot region, a slight improvement in the occupant model response, and an over-stiffening effect in the passenger region. One lesson learned was that this approach should be adopted early on, in combination with the building-block approaches that are customarily used, for model development and pretest predictions. Complete crash simulations with validated finite element models can be used to satisfy crash certification requirements, potentially reducing overall development costs.
KINEROS2-AGWA: Model Use, Calibration, and Validation
NASA Technical Reports Server (NTRS)
Goodrich, D. C.; Burns, I. S.; Unkrich, C. L.; Semmens, D. J.; Guertin, D. P.; Hernandez, M.; Yatheendradas, S.; Kennedy, J. R.; Levick, L. R.
2013-01-01
KINEROS (KINematic runoff and EROSion) originated in the 1960s as a distributed event-based model that conceptualizes a watershed as a cascade of overland flow model elements that flow into trapezoidal channel model elements. KINEROS was one of the first widely available watershed models that interactively coupled a finite difference approximation of the kinematic overland flow equations to a physically based infiltration model. Development and improvement of KINEROS continued from the 1960s on a variety of projects for a range of purposes, which has resulted in a suite of KINEROS-based modeling tools. This article focuses on KINEROS2 (K2), a spatially distributed, event-based watershed rainfall-runoff and erosion model, and the companion ArcGIS-based Automated Geospatial Watershed Assessment (AGWA) tool. AGWA automates the time-consuming tasks of watershed delineation into distributed model elements and initial parameterization of these elements using commonly available, national GIS data layers. A variety of approaches have been used to calibrate and validate K2 successfully across a relatively broad range of applications (e.g., urbanization, pre- and post-fire, hillslope erosion, erosion from roads, runoff and recharge, and manure transport). The case studies presented in this article (1) compare lumped to stepwise calibration and validation of runoff and sediment at plot, hillslope, and small watershed scales; and (2) demonstrate an uncalibrated application to address relative change in watershed response to wildfire.
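The kinematic overland-flow coupling at the heart of KINEROS can be sketched with one explicit upwind finite-difference scheme on a single plane element. The Manning roughness, slope, rainfall rate, and grid values below are illustrative, not K2 defaults, and no infiltration model is included.

```python
import numpy as np

# One explicit upwind discretization of the kinematic overland-flow equation
#   dh/dt + dq/dx = r,   with q = alpha * h**m   (Manning: m = 5/3).
n_man, slope = 0.05, 0.02
alpha = np.sqrt(slope) / n_man
m = 5.0 / 3.0

dx, dt = 1.0, 0.5                  # m, s (dt must satisfy a Courant condition)
x = np.arange(0, 100, dx)          # 100 m plane
h = np.zeros_like(x)               # flow depth, m
rain = 50.0 / 3.6e6                # 50 mm/h rainfall excess, in m/s

for step in range(int(600 / dt)):  # simulate 10 minutes
    q = alpha * h ** m
    h[1:] += dt * (rain - (q[1:] - q[:-1]) / dx)
    h[0] += dt * (rain - q[0] / dx)   # upstream boundary: no inflow

print("outlet discharge per unit width:", alpha * h[-1] ** m, "m^2/s")
```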
KINEROS2/AGWA: Model use, calibration and validation
Goodrich, D.C.; Burns, I.S.; Unkrich, C.L.; Semmens, Darius J.; Guertin, D.P.; Hernandez, M.; Yatheendradas, S.; Kennedy, Jeffrey R.; Levick, Lainie R.
2012-01-01
KINEROS (KINematic runoff and EROSion) originated in the 1960s as a distributed event-based model that conceptualizes a watershed as a cascade of overland flow model elements that flow into trapezoidal channel model elements. KINEROS was one of the first widely available watershed models that interactively coupled a finite difference approximation of the kinematic overland flow equations to a physically based infiltration model. Development and improvement of KINEROS continued from the 1960s on a variety of projects for a range of purposes, which has resulted in a suite of KINEROS-based modeling tools. This article focuses on KINEROS2 (K2), a spatially distributed, event-based watershed rainfall-runoff and erosion model, and the companion ArcGIS-based Automated Geospatial Watershed Assessment (AGWA) tool. AGWA automates the time-consuming tasks of watershed delineation into distributed model elements and initial parameterization of these elements using commonly available, national GIS data layers. A variety of approaches have been used to calibrate and validate K2 successfully across a relatively broad range of applications (e.g., urbanization, pre- and post-fire, hillslope erosion, erosion from roads, runoff and recharge, and manure transport). The case studies presented in this article (1) compare lumped to stepwise calibration and validation of runoff and sediment at plot, hillslope, and small watershed scales; and (2) demonstrate an uncalibrated application to address relative change in watershed response to wildfire.
Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays.
Itoh, Yuta; Klinker, Gudrun
2015-04-01
A critical requirement for AR applications with Optical See-Through Head-Mounted Displays (OST-HMD) is to project 3D information correctly into the current viewpoint of the user - more particularly, according to the user's eye position. Recently-proposed interaction-free calibration methods [16], [17] automatically estimate this projection by tracking the user's eye position, thereby freeing users from tedious manual calibrations. However, these methods are still prone to systematic calibration errors. Such errors stem from eye-/HMD-related factors and are not represented in the conventional eye-HMD model used for HMD calibration. This paper investigates one of these factors - the fact that optical elements of OST-HMDs distort incoming world-light rays before they reach the eye, just as corrective glasses do. Any OST-HMD requires an optical element to display a virtual screen. Each such optical element has different distortions. Since users see a distorted world through the element, ignoring this distortion degrades the projection quality. We propose a light-field correction method, based on a machine learning technique, which compensates for the world-scene distortion caused by OST-HMD optics. We demonstrate that our method reduces the systematic error and significantly increases the calibration accuracy of the interaction-free calibration.
Shape calibration of a conformal ultrasound therapy array.
McGough, R J; Cindric, D; Samulski, T V
2001-03-01
A conformal ultrasound phased array prototype with 96 elements was recently calibrated for electronic steering and focusing in a water tank. The procedure for calibrating the shape of this 2D therapy array consists of two steps. First, a least squares triangulation algorithm determines the element coordinates from a 21 x 21 grid of time delays. The triangulation algorithm also requires temperature measurements to compensate for variations in the speed of sound. Second, a Rayleigh-Sommerfeld formulation of the acoustic radiation integral is aligned to a second grid of measured pressure amplitudes in a least squares sense. This shape calibration procedure, which is applicable to a wide variety of ultrasound phased arrays, was tested on a square array panel consisting of 7- x 7-mm elements operating at 617 kHz. The simulated fields generated by an array of 96 equivalent elements are consistent with the measured data, even in the fine structure away from the primary focus and sidelobes. These two calibration steps are sufficient for the simulation model to predict successfully the pressure field generated by this conformal ultrasound phased array prototype.
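The first calibration step described above, least-squares triangulation of element coordinates from time delays, can be sketched as a linearized multilateration problem. The grid geometry, element position, and sound speed below are invented, and the temperature compensation mentioned in the abstract is reduced to a fixed sound speed.

```python
import numpy as np

rng = np.random.default_rng(4)
C_WATER = 1482.0   # speed of sound in water, m/s (temperature-dependent)

# Known hydrophone positions (a stand-in for the 21 x 21 scan grid).
grid = rng.uniform(-0.05, 0.05, size=(25, 3)) + np.array([0, 0, 0.15])
element_true = np.array([0.012, -0.007, 0.001])

# "Measured" time of flight from the element to each grid point.
tof = np.linalg.norm(grid - element_true, axis=1) / C_WATER
d = tof * C_WATER                  # convert delays to ranges

# Linearized multilateration: |x - p_k|^2 = d_k^2; subtracting the first
# range equation removes the |x|^2 term, leaving linear least squares for x.
A = 2.0 * (grid[0] - grid[1:])
b = (d[1:] ** 2 - d[0] ** 2
     - np.sum(grid[1:] ** 2, axis=1) + np.sum(grid[0] ** 2))
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("recovered element position:", x)
```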
… in Sequential Design Optimization with Concurrent Calibration-Based Model Validation
Drignei, Dorin; Mourelatos, Zissimos; Pandey, Vijitashwa
2013-08-01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clegg, Samuel M.; Barefield, James E.; Wiens, Roger C.
2008-01-01
Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, so that a series of calibration standards similar to the unknown can be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly-metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
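A PLS calibration of this kind can be sketched with scikit-learn's PLSRegression; the synthetic 18-spectrum training set, channel positions, and the single predicted composition below are placeholders for the real LIBS spectra and rock chemistries.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)

# Synthetic stand-in for LIBS spectra: 18 standards x 1024 channels, with
# composition (e.g., SiO2 wt%) driving a few emission-line intensities.
n_samples, n_channels = 18, 1024
conc = rng.uniform(40, 75, n_samples)                  # known compositions
spectra = rng.normal(0, 0.01, (n_samples, n_channels))
for ch in (100, 480, 733):                             # arbitrary line channels
    spectra[:, ch] += 0.02 * conc

pls = PLSRegression(n_components=3)
pls.fit(spectra, conc)

# Quantify an "unknown" spectrum generated at 62 wt%.
unknown = rng.normal(0, 0.01, (1, n_channels))
unknown[:, (100, 480, 733)] += 0.02 * 62.0
print("predicted composition:", pls.predict(unknown).ravel())
```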
Calibration of Airframe and Occupant Models for Two Full-Scale Rotorcraft Crash Tests
NASA Technical Reports Server (NTRS)
Annett, Martin S.; Horta, Lucas G.; Polanco, Michael A.
2012-01-01
Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. Accelerations and kinematic data collected from the crash tests were compared to a system integrated finite element model of the test article. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purposes of predicting acceleration responses from the first crash test was inadequate when evaluating more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the second full-scale crash test. This combination of heuristic and quantitative methods was used to identify modeling deficiencies, evaluate parameter importance, and propose required model changes. It is shown that the multi-dimensional calibration techniques presented here are particularly effective in identifying model adequacy. Acceleration results for the calibrated model were compared to test results and the original model results. There was a noticeable improvement in the pilot and co-pilot region, a slight improvement in the occupant model response, and an over-stiffening effect in the passenger region. This approach should be adopted early on, in combination with the building-block approaches that are customarily used, for model development and test planning guidance. Complete crash simulations with validated finite element models can be used to satisfy crash certification requirements, thereby reducing overall development costs.
Numerical simulation of damage evolution for ductile materials and mechanical properties study
NASA Astrophysics Data System (ADS)
El Amri, A.; Hanafi, I.; Haddou, M. E. Y.; Khamlichi, A.
2015-12-01
This paper presents results of numerical modelling of ductile fracture and failure of elements made of 5182H111 aluminium alloy subjected to dynamic traction. The analysis was performed using the Johnson-Cook model in the ABAQUS software. The difficulty in predicting ductile fracture arises mainly because a tremendous span of length scales separates the structural problem from the micro-mechanics problem governing the material separation process. This study used the experimental results to calibrate a simple crack propagation criterion for shell elements of the kind often used in practical analyses. The performance of the proposed model is in general good, and it is believed that the presented results and the experimental-numerical calibration procedure can be of use in practical finite element simulations.
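For reference, the Johnson-Cook flow stress named above has the standard form sigma = (A + B·eps^n)(1 + C·ln(eps_dot/eps_dot0))(1 − T*^m). The parameter values in this sketch are illustrative, not a calibrated 5182H111 data set.

```python
import numpy as np

# Johnson-Cook flow stress with illustrative (uncalibrated) parameters.
A, B, n = 110.0e6, 400.0e6, 0.35        # yield stress, hardening: Pa, Pa, -
C, eps_dot0 = 0.015, 1.0                # rate sensitivity, reference rate 1/s
m_exp, T_room, T_melt = 1.0, 293.0, 893.0   # thermal softening, K

def johnson_cook(eps, eps_dot, T):
    # Homologous temperature T* in [0, 1].
    T_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * eps ** n)
            * (1.0 + C * np.log(eps_dot / eps_dot0))
            * (1.0 - T_star ** m_exp))

print(johnson_cook(eps=0.1, eps_dot=100.0, T=350.0) / 1e6, "MPa")
```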
Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner
Yu, Chengyi; Chen, Xiaobo; Xi, Juntong
2017-01-01
A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method. PMID:28098844
Constitutive Model Calibration via Autonomous Multiaxial Experimentation (Postprint)
2016-09-17
… test machine. Experimental data is reduced and finite element simulations are conducted in parallel with the test based on experimental strain conditions. Optimization methods … be used directly in finite element simulations of more complex geometries. Keywords: axial/torsional experimentation; plasticity; constitutive model
NASA Astrophysics Data System (ADS)
Payré, V.; Fabre, C.; Cousin, A.; Sautter, V.; Wiens, R. C.; Forni, O.; Gasnault, O.; Mangold, N.; Meslin, P.-Y.; Lasue, J.; Ollila, A.; Rapin, W.; Maurice, S.; Nachon, M.; Le Deit, L.; Lanza, N.; Clegg, S.
2017-03-01
The Chemistry Camera (ChemCam) instrument onboard Curiosity can detect minor and trace elements such as lithium, strontium, rubidium, and barium. Their abundances can provide some insights about Mars' magmatic history and sedimentary processes. We focus on developing new quantitative models for these elements by using a new laboratory database (more than 400 samples) that displays diverse compositions that are more relevant for Gale crater than the previous ChemCam database. These models are based on univariate calibration curves. For each element, the best model is selected depending on the results obtained by using the ChemCam calibration targets onboard Curiosity. New quantifications of Li, Sr, Rb, and Ba in Gale samples have been obtained for the first 1000 Martian days. Comparing these data in alkaline and magnesian rocks with the felsic and mafic clasts from the Martian meteorite NWA7533—from approximately the same geologic period—we observe a similar behavior: Sr, Rb, and Ba are more concentrated in soluble- and incompatible-element-rich mineral phases (Si, Al, and alkali-rich). Correlations between these trace elements and potassium in materials analyzed by ChemCam reveal a strong affinity with K-bearing phases such as feldspars, K-phyllosilicates, and potentially micas in igneous and sedimentary rocks. However, lithium is found in comparable abundances in alkali-rich and magnesium-rich Gale rocks. This very soluble element can be associated with both alkali and Mg-Fe phases such as pyroxene and feldspar. These observations of Li, Sr, Rb, and Ba mineralogical associations highlight their substitution with potassium and their incompatibility in magmatic melts.
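A univariate calibration curve of the type described above reduces to a line fit on laboratory standards followed by inversion for unknowns; the concentrations and peak areas below are synthetic placeholders for the laboratory database.

```python
import numpy as np

# Fit peak area vs concentration on standards, then invert for an unknown.
conc_std = np.array([10., 50., 100., 250., 500., 1000.])   # ppm Li (assumed)
peak_area = np.array([0.8, 3.9, 8.2, 20.5, 40.7, 82.0])    # arbitrary units

slope, intercept = np.polyfit(conc_std, peak_area, 1)

def quantify(area):
    # Invert the calibration line: concentration from measured peak area.
    return (area - intercept) / slope

print("estimated Li:", quantify(12.4), "ppm")
```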
Calibration of 3D ALE finite element model from experiments on friction stir welding of lap joints
NASA Astrophysics Data System (ADS)
Fourment, Lionel; Gastebois, Sabrina; Dubourg, Laurent
2016-10-01
In order to support the design of a process as complex as Friction Stir Welding (FSW) for the aeronautic industry, numerical simulation software requires (1) developing an efficient and accurate Finite Element (F.E.) formulation that allows predicting welding defects, (2) properly modeling the thermo-mechanical complexity of the FSW process and (3) calibrating the F.E. model from accurate measurements from FSW experiments. This work uses a parallel ALE formulation developed in the Forge® F.E. code to model the different possible defects (flashes and worm holes), while pin and shoulder threads are modeled by a new friction law at the tool / material interface. The FSW experiments require a complex tool with a scrolled shoulder, which is instrumented to provide sensitive thermal data close to the joint. Calibration of unknown material thermal coefficients, constitutive equation parameters and the friction model from measured forces, torques and temperatures is carried out using two F.E. models, Eulerian and ALE, to reach a satisfactory agreement assessed by the proper sensitivity of the simulation to process parameters.
Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios
Banta, Edward R.
2014-01-01
Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.
Fragmentation modeling of a resin bonded sand
NASA Astrophysics Data System (ADS)
Hilth, William; Ryckelynck, David
2017-06-01
Cemented sands exhibit a complex mechanical behavior that can lead to sophisticated models with numerous parameters lacking real physical meaning. However, using a rather simple generalized critical state bonded soil model has proven to be a relevant compromise between easy calibration and good results. The constitutive model formulation considers a non-associated elasto-plastic formulation within the critical state framework. The calibration procedure, using standard laboratory tests, is complemented by the study of a uniaxial compression test observed by tomography. Using finite element simulations, this test is simulated considering a non-homogeneous 3D medium. The tomography of the compression sample gives access to 3D displacement fields by using image correlation techniques. Unfortunately these fields have missing experimental data because of the low resolution of the correlations for low displacement magnitudes. We propose a recovery method that reconstructs 3D full displacement fields and 2D boundary displacement fields. These fields are mandatory for the calibration of the constitutive parameters by using 3D finite element simulations. The proposed recovery technique is based on a singular value decomposition of the available experimental data. This calibration protocol enables an accurate prediction of the fragmentation of the specimen.
Calibration of the COBE FIRAS instrument
NASA Technical Reports Server (NTRS)
Fixsen, D. J.; Cheng, E. S.; Cottingham, D. A.; Eplee, R. E., Jr.; Hewagama, T.; Isaacman, R. B.; Jensen, K. A.; Mather, J. C.; Massa, D. L.; Meyer, S. S.
1994-01-01
The Far-Infrared Absolute Spectrophotometer (FIRAS) instrument on the Cosmic Background Explorer (COBE) satellite was designed to accurately measure the spectrum of the cosmic microwave background radiation (CMBR) in the frequency range 1-95/cm with an angular resolution of 7 deg. We describe the calibration of this instrument, including the method of obtaining calibration data, reduction of data, the instrument model, fitting the model to the calibration data, and application of the resulting model solution to sky observations. The instrument model fits well for calibration data that resemble sky conditions. The method of propagating detector noise through the calibration process to yield a covariance matrix of the calibrated sky data is described. The final uncertainties are variable both in frequency and position, but for a typical calibrated sky 2.6 deg square pixel and 0.7/cm spectral element the random detector noise limit is of the order of a few times 10⁻⁷ ergs/sq cm/s/sr cm for 2-20/cm, and the difference between the sky and the best-fit cosmic blackbody can be measured with a gain uncertainty of less than 3%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Payre, Valerie; Fabre, Cecile; Cousin, Agnes
The Chemistry Camera (ChemCam) instrument onboard Curiosity can detect minor and trace elements such as lithium, strontium, rubidium, and barium. Their abundances can provide some insights about Mars' magmatic history and sedimentary processes. We focus on developing new quantitative models for these elements by using a new laboratory database (more than 400 samples) that displays diverse compositions that are more relevant for Gale crater than the previous ChemCam database. These models are based on univariate calibration curves. For each element, the best model is selected depending on the results obtained by using the ChemCam calibration targets onboard Curiosity. New quantifications of Li, Sr, Rb, and Ba in Gale samples have been obtained for the first 1000 Martian days. Comparing these data in alkaline and magnesian rocks with the felsic and mafic clasts from the Martian meteorite NWA7533—from approximately the same geologic period—we observe a similar behavior: Sr, Rb, and Ba are more concentrated in soluble- and incompatible-element-rich mineral phases (Si, Al, and alkali-rich). Correlations between these trace elements and potassium in materials analyzed by ChemCam reveal a strong affinity with K-bearing phases such as feldspars, K-phyllosilicates, and potentially micas in igneous and sedimentary rocks. However, lithium is found in comparable abundances in alkali-rich and magnesium-rich Gale rocks. This very soluble element can be associated with both alkali and Mg-Fe phases such as pyroxene and feldspar. Here, these observations of Li, Sr, Rb, and Ba mineralogical associations highlight their substitution with potassium and their incompatibility in magmatic melts.
Payre, Valerie; Fabre, Cecile; Cousin, Agnes; ...
2017-03-20
The Chemistry Camera (ChemCam) instrument onboard Curiosity can detect minor and trace elements such as lithium, strontium, rubidium, and barium. Their abundances can provide some insights about Mars' magmatic history and sedimentary processes. We focus on developing new quantitative models for these elements by using a new laboratory database (more than 400 samples) that displays diverse compositions that are more relevant for Gale crater than the previous ChemCam database. These models are based on univariate calibration curves. For each element, the best model is selected depending on the results obtained by using the ChemCam calibration targets onboard Curiosity. New quantifications of Li, Sr, Rb, and Ba in Gale samples have been obtained for the first 1000 Martian days. Comparing these data in alkaline and magnesian rocks with the felsic and mafic clasts from the Martian meteorite NWA7533—from approximately the same geologic period—we observe a similar behavior: Sr, Rb, and Ba are more concentrated in soluble- and incompatible-element-rich mineral phases (Si, Al, and alkali-rich). Correlations between these trace elements and potassium in materials analyzed by ChemCam reveal a strong affinity with K-bearing phases such as feldspars, K-phyllosilicates, and potentially micas in igneous and sedimentary rocks. However, lithium is found in comparable abundances in alkali-rich and magnesium-rich Gale rocks. This very soluble element can be associated with both alkali and Mg-Fe phases such as pyroxene and feldspar. Here, these observations of Li, Sr, Rb, and Ba mineralogical associations highlight their substitution with potassium and their incompatibility in magmatic melts.
Comparison of Test and Finite Element Analysis for Two Full-Scale Helicopter Crash Tests
NASA Technical Reports Server (NTRS)
Annett, Martin S.; Horta, Lucas G.
2011-01-01
Finite element analyses have been performed for two full-scale crash tests of an MD-500 helicopter. The first crash test was conducted to evaluate the performance of a composite deployable energy absorber under combined flight loads. In the second crash test, the energy absorber was removed to establish the baseline loads. The use of an energy absorbing device reduced the impact acceleration levels by a factor of three. Accelerations and kinematic data collected from the crash tests were compared to analytical results. Details of the full-scale crash tests and development of the system-integrated finite element model are briefly described along with direct comparisons of acceleration magnitudes and durations for the first full-scale crash test. Because load levels were significantly different between tests, models developed for the purposes of predicting the overall system response with external energy absorbers were not adequate under more severe conditions seen in the second crash test. Relative error comparisons were inadequate to guide model calibration. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used for the second full-scale crash test. The calibrated parameter set reduced 2-norm prediction error by 51% but did not improve impact shape orthogonality.
A validated approach for modeling collapse of steel structures
NASA Astrophysics Data System (ADS)
Saykin, Vitaliy Victorovich
A civil engineering structure is faced with many hazardous conditions such as blasts, earthquakes, hurricanes, tornadoes, floods, and fires during its lifetime. Even though structures are designed for credible events that can happen during a lifetime of the structure, extreme events do happen and cause catastrophic failures. Understanding the causes and effects of structural collapse is now at the core of critical areas of national need. One factor that makes studying structural collapse difficult is the lack of full-scale structural collapse experimental test results against which researchers could validate their proposed collapse modeling approaches. The goal of this work is the creation of an element deletion strategy based on fracture models for use in validated prediction of collapse of steel structures. The current work reviews the state-of-the-art of finite element deletion strategies for use in collapse modeling of structures. It is shown that current approaches to element deletion in collapse modeling do not take into account stress triaxiality in vulnerable areas of the structure, which is important for proper fracture and element deletion modeling. The report then reviews triaxiality and its role in fracture prediction. It is shown that fracture in ductile materials is a function of triaxiality. It is also shown that, depending on the triaxiality range, different fracture mechanisms are active and should be accounted for. An approach using semi-empirical fracture models as a function of triaxiality is employed. The models to determine fracture initiation, softening and subsequent finite element deletion are outlined. This procedure allows for stress-displacement softening at an integration point of a finite element in order to subsequently remove the element. This approach avoids abrupt changes in the stress that would create dynamic instabilities, thus making the results more reliable and accurate. The calibration and validation of these models are shown. The calibration is performed using a particle swarm optimization algorithm to establish accurate parameters when calibrated to circumferentially notched tensile coupons. It is shown that consistent, accurate predictions are attained using the chosen models. The variation of triaxiality in steel material during plastic hardening and softening is reported. The range of triaxiality in steel structures undergoing collapse is investigated in detail and the accuracy of the chosen finite element deletion approaches is discussed. This is done through validation of different structural components and structural frames undergoing severe fracture and collapse.
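The triaxiality-dependent fracture criterion discussed above can be sketched as follows, using a Johnson-Cook-type fracture strain as one common semi-empirical choice; the stress state and the d1-d3 coefficients are illustrative, not the dissertation's calibrated values.

```python
import numpy as np

def triaxiality(stress):
    # Stress triaxiality eta = sigma_mean / sigma_vonMises for a 3x3 tensor.
    mean = np.trace(stress) / 3.0
    dev = stress - mean * np.eye(3)
    mises = np.sqrt(1.5 * np.sum(dev * dev))
    return mean / mises

def fracture_strain(eta, d1=0.05, d2=3.44, d3=2.12):
    # Johnson-Cook-type fracture strain: eps_f = d1 + d2 * exp(-d3 * eta).
    return d1 + d2 * np.exp(-d3 * eta)

sigma = np.diag([400.0, 150.0, 100.0])   # MPa, principal stresses (assumed)
eta = triaxiality(sigma)
print("eta =", eta, " eps_f =", fracture_strain(eta))

# Element-deletion check inside a hypothetical FE loop: delete once the
# accumulated plastic strain exceeds the triaxiality-dependent limit.
eps_plastic = 0.31
print("delete element:", eps_plastic >= fracture_strain(eta))
```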
A strain-mediated corrosion model for bioabsorbable metallic stents.
Galvin, E; O'Brien, D; Cummins, C; Mac Donald, B J; Lally, C
2017-06-01
This paper presents a strain-mediated phenomenological corrosion model, based on the discrete finite element modelling method which was developed for use with the ANSYS Implicit finite element code. The corrosion model was calibrated from experimental data and used to simulate the corrosion performance of a WE43 magnesium alloy stent. The model was found to be capable of predicting the experimentally observed plastic strain-mediated mass loss profile. The non-linear plastic strain model, extrapolated from the experimental data, was also found to adequately capture the corrosion-induced reduction in the radial stiffness of the stent over time. The model developed will help direct future design efforts towards the minimisation of plastic strain during device manufacture, deployment and in-service, in order to reduce corrosion rates and prolong the mechanical integrity of magnesium devices. The need for corrosion models that explore the interaction of strain with corrosion damage has been recognised as one of the current challenges in degradable material modelling (Gastaldi et al., 2011). A finite element based plastic strain-mediated phenomenological corrosion model was developed in this work and was calibrated based on the results of the corrosion experiments. It was found to be capable of predicting the experimentally observed plastic strain-mediated mass loss profile and the corrosion-induced reduction in the radial stiffness of the stent over time. To the author's knowledge, the results presented here represent the first experimental calibration of a plastic strain-mediated corrosion model of a corroding magnesium stent. Copyright © 2017 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
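A strain-mediated phenomenological update of this general kind can be sketched as below: each surface element accumulates corrosion damage at a rate accelerated by its plastic strain and is deleted when the damage reaches one. The rate constants and strain field are assumed, not the calibrated WE43 parameters.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical surface elements of a stent, each with a plastic strain
# accumulated during crimping and deployment.
n_elems = 500
plastic_strain = np.abs(rng.normal(0.05, 0.03, n_elems))
damage = np.zeros(n_elems)
alive = np.ones(n_elems, dtype=bool)

K_U = 0.01      # uniform corrosion rate, 1/day (assumed)
S = 15.0        # strain-acceleration factor (assumed)

dt, days = 0.5, 60.0
for _ in range(int(days / dt)):
    # Damage grows faster where plastic strain is higher.
    rate = K_U * (1.0 + S * plastic_strain)
    damage[alive] += rate[alive] * dt
    alive &= damage < 1.0     # delete fully corroded elements

print(f"mass remaining after {days:.0f} days: {alive.mean():.1%}")
```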
Yang, Hao; Xu, Xiangyang; Neumann, Ingo
2014-11-19
Terrestrial laser scanning (TLS) technology is a new technique for quickly acquiring three-dimensional information. In this paper we research the health assessment of concrete structures with a Finite Element Method (FEM) model based on TLS. The goal focuses on the benefits of 3D TLS in the generation and calibration of FEM models, in order to build a convenient, efficient and intelligent model which can be widely used for the detection and assessment of bridges, buildings, subways and other objects. After comparing the finite element simulation with surface-based measurement data from TLS, the FEM model is determined to be acceptable with an error of less than 5%. The benefit of TLS lies mainly in the possibility of a surface-based validation of results predicted by the FEM model.
Webster, Victoria A; Nieto, Santiago G; Grosberg, Anna; Akkus, Ozan; Chiel, Hillel J; Quinn, Roger D
2016-10-01
In this study, new techniques for approximating the contractile properties of cells in biohybrid devices using Finite Element Analysis (FEA) have been investigated. Many current techniques for modeling biohybrid devices use individual cell forces to simulate the cellular contraction. However, such techniques result in long simulation runtimes. In this study we investigated the effect of the use of thermal contraction on simulation runtime. The thermal contraction model was significantly faster than models using individual cell forces, making it beneficial for rapidly designing or optimizing devices. Three techniques, Stoney's Approximation, a Modified Stoney's Approximation, and a Thermostat Model, were explored for calibrating thermal expansion/contraction parameters (TECPs) needed to simulate cellular contraction using thermal contraction. The TECP values were calibrated by using published data on the deflections of muscular thin films (MTFs). Using these techniques, TECP values that suitably approximate experimental deflections can be determined by using experimental data obtained from cardiomyocyte MTFs. Furthermore, a sensitivity analysis was performed in order to investigate the contribution of individual variables, such as elastic modulus and layer thickness, to the final calibrated TECP for each calibration technique. Additionally, the TECP values are applicable to other types of biohybrid devices. Two non-MTF models were simulated based on devices reported in the existing literature. Copyright © 2016 Elsevier Ltd. All rights reserved.
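One of the three calibration routes named above, a Stoney-type approximation, can be sketched as follows: infer the cell-layer stress from the measured film curvature, convert it to an equivalent contractile strain, and divide by the prescribed temperature change to obtain the TECP. All material values below are illustrative placeholders.

```python
# Stoney-type TECP calibration sketch (illustrative values throughout).
E_s, nu_s, t_s = 1.5e6, 0.49, 10e-6    # PDMS substrate: Pa, -, m (assumed)
t_f = 5e-6                             # cell layer thickness, m (assumed)
kappa = 120.0                          # measured MTF curvature, 1/m (assumed)

# Stoney's formula: kappa = 6 * sigma_f * t_f * (1 - nu_s) / (E_s * t_s**2),
# inverted here for the film stress sigma_f.
sigma_f = E_s * t_s ** 2 * kappa / (6.0 * t_f * (1.0 - nu_s))

E_f = 50e3                             # cell layer modulus, Pa (assumed)
target_strain = -sigma_f / E_f         # contraction producing sigma_f

DELTA_T = -1.0                         # unit temperature drop in the FE model
alpha = target_strain / DELTA_T        # calibrated TECP, 1/K
print(f"sigma_f = {sigma_f:.1f} Pa, TECP = {alpha:.3e} 1/K")
```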
Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech
2015-01-01
Studies of the distribution of metallic elements in biological samples are an increasingly important issue. Many articles are dedicated to specific analytical atomic spectrometry techniques for mapping/(bio)imaging metallic elements in various kinds of biological samples, but the literature lacks articles reviewing calibration strategies and their problems, nomenclature, definitions, and the ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize analytical calibration in the (bio)imaging/mapping of metallic elements in biological samples, including (1) nomenclature; (2) definitions; and (3) selected, sophisticated examples of calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials, such as LA ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, this work aims to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize a division of calibration methods different from those hitherto used. This article is the first work in the literature that refers to and emphasizes the many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Tensorial Calibration. 2. Second Order Tensorial Calibration.
1987-10-12
If an index is repeated more than once on only one side of an equation, it implies a summation over the index's valid range. To avoid confusion of terms … and higher order tensors, the rank can be higher than the maximum dimensionality. LINEAR SECOND ORDER TENSORIAL CALIBRATION MODEL: From … these equations are valid only if all the elements of the diagonal matrix B3 are non-zero, because its inverse (B3⁻¹) must be computed. This implies that M …
A dynamic ventilation model for gravity sewer networks.
Wang, Y C; Nobi, N; Nguyen, T; Vorreiter, L
2012-01-01
To implement any effective odour and corrosion control technology in the sewer network, it is imperative that the airflow through gravity sewer airspaces be quantified. This paper presents a full dynamic airflow model for gravity sewer systems. The model, which is developed using the finite element method, is a compressible air transport model. The model has been applied to the North Head Sewerage Ocean Outfall System (NSOOS) and calibrated using the air pressure and airflow data collected during October 2008. Although the calibration is focused on forced ventilation, the model can be applied to natural ventilation as well.
NASA Astrophysics Data System (ADS)
Nielsen, Roger L.; Ustunisik, Gokce; Weinsteiger, Allison B.; Tepley, Frank J.; Johnston, A. Dana; Kent, Adam J. R.
2017-09-01
Quantitative models of petrologic processes require accurate partition coefficients. Our ability to obtain accurate partition coefficients is constrained by their dependence on pressure, temperature, and composition, and on the experimental and analytical techniques we apply. The source and magnitude of error in experimental studies of trace element partitioning may go unrecognized if one examines only the processed published data. The most important sources of error are relict crystals and analyses that include more than one phase in the analytical volume. Because data have typically been published as averages, identification of compromised data is difficult if not impossible. We addressed this problem by examining unprocessed data from plagioclase/melt partitioning experiments, by comparing models based on those data with existing partitioning models, and by evaluating the degree to which the partitioning models are dependent on the calibration data. We found that partitioning models depend on the calibration data in ways that produce erroneous model values, and that the error is systematic and dependent on the value of the partition coefficient. In effect, use of different calibration datasets will result in partitioning models whose results are systematically biased, and one can arrive at different and conflicting conclusions depending on how a model is calibrated, defeating the purpose of applying the models. Ultimately this is an experimental data problem, which can be solved if we publish individual analyses (not averages) or use a projection method wherein an independent compositional constraint is used to identify and estimate the uncontaminated composition of each phase.
AUTOMATIC CALIBRATION OF A STOCHASTIC-LAGRANGIAN TRANSPORT MODEL (SLAM)
Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...
NASA Astrophysics Data System (ADS)
Junker, Philipp; Hackl, Klaus
2016-09-01
Numerical simulations are a powerful tool to analyze the complex thermo-mechanically coupled material behavior of shape memory alloys during product engineering. The benefit of the simulations strongly depends on the quality of the underlying material model. In this contribution, we discuss a variational approach which is based solely on energetic considerations and demonstrate that a unique calibration of such a model is sufficient to predict the material behavior at varying ambient temperature. We begin by recalling the necessary equations of the material model and explaining the fundamental idea. Afterwards, we focus on the numerical implementation and provide all information that is needed for programming. Then, we show two different ways to calibrate the model and discuss the results. Furthermore, we show how this model is used during real-life industrial product engineering.
Recent modifications and calibration of the Langley low-turbulence pressure tunnel
NASA Technical Reports Server (NTRS)
Mcghee, R. J.; Beasley, W. D.; Foster, J. M.
1984-01-01
Modifications to the Langley Low-Turbulence Pressure Tunnel are presented and a calibration of the mean flow parameters in the test section is provided. Also included are the operational capability of the tunnel and typical test results for both single-element and multi-element airfoils. Modifications to the facility consisted of the following: replacement of the original cooling coils and antiturbulence screens, and addition of a tunnel-shell heating system, a two-dimensional model-support and force-balance system, a sidewall boundary layer control system, a remote-controlled survey apparatus, and a new data acquisition system. A calibration of the mean flow parameters in the test section was conducted over the complete operational range of the tunnel. The calibration included dynamic-pressure measurements, Mach number distributions, flow-angularity measurements, boundary-layer characteristics, and total-pressure profiles. In addition, test-section turbulence measurements made after the tunnel modifications have been included with these calibration data to show a comparison of existing turbulence levels with data obtained for the facility in 1941 with the original screen installation.
A reduced Iwan model that includes pinning for bolted joint mechanics
Brake, M. R. W.
2016-10-28
Bolted joints are prevalent in most assembled structures; however, predictive models for their behavior do not exist. Calibrated models, such as the Iwan model, are able to predict the response of a jointed structure over a range of excitations once calibrated at a nominal load. The Iwan model, though, is not widely adopted due to the high computational expense of implementation. To address this, an analytical solution of the Iwan model is derived under the hypothesis that for an arbitrary load reversal, there is a new distribution of dry friction elements, which are now stuck, that approximately resembles a scaled version of the original distribution of dry friction elements. The dry friction elements internal to the Iwan model do not have a uniform set of parameters and are described by a distribution of parameters, i.e., which internal dry friction elements are stuck or slipping at a given load, that ultimately governs the behavior of the joint as it transitions from microslip to macroslip. This hypothesis allows the model to require no information from previous loading cycles. Additionally, the model is extended to include the pinning behavior inherent in a bolted joint. Modifications of the resulting framework are discussed to highlight how the constitutive model for friction can be changed (in the case of an Iwan–Stribeck formulation) or how the distribution of dry friction elements can be changed (as is the case for the Iwan plasticity model). Finally, the reduced Iwan plus pinning model is then applied to the Brake–Reuß beam in order to discuss methods to deduce model parameters from experimental data.
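As a concrete illustration of the microslip-to-macroslip transition described above, the following is a minimal discrete Iwan sketch in Python: a parallel-series array of Jenkins (spring plus Coulomb slider) elements with a distribution of slip strengths. It is a toy under stated assumptions, not the paper's reduced analytical formulation or its pinning extension; the stiffness, slip-strength distribution, and load history are all illustrative.

```python
import numpy as np

def iwan_force(u_history, phi, k):
    """Discrete parallel-series Iwan model: Jenkins (spring + Coulomb
    slider) elements with slip strengths phi and common stiffness k.
    Returns the joint force history for a displacement history."""
    slider = np.zeros_like(phi)          # current slider positions
    out = []
    for u in u_history:
        f = k * (u - slider)             # trial elastic force per element
        slipping = np.abs(f) > phi
        # slipping elements saturate at +/- phi; sliders follow along
        slider[slipping] = u - np.sign(f[slipping]) * phi[slipping] / k
        out.append(np.clip(f, -phi, phi).mean())
    return np.array(out)

# uniform slip-strength distribution -> smooth microslip-to-macroslip
phi = np.linspace(0.01, 1.0, 200)
u = np.concatenate([np.linspace(0, 1.5, 100),      # loading
                    np.linspace(1.5, -1.5, 200)])  # reversal (hysteresis)
force = iwan_force(u, phi, k=5.0)
```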
Earth Radiation Budget Experiment scanner radiometric calibration results
NASA Technical Reports Server (NTRS)
Lee, Robert B., III; Gibson, M. A.; Thomas, Susan; Meekins, Jeffrey L.; Mahan, J. R.
1990-01-01
The Earth Radiation Budget Experiment (ERBE) scanning radiometers are producing measurements of the incoming solar, earth/atmosphere-reflected solar, and earth/atmosphere-emitted radiation fields with measurement precisions and absolute accuracies approaching 1 percent. ERBE uses thermistor bolometers as the detection elements in the narrow-field-of-view scanning radiometers. The scanning radiometers can sense radiation in the shortwave, longwave, and total broadband spectral regions of 0.2 to 5.0, 5.0 to 50.0, and 0.2 to 50.0 micrometers, respectively. Detailed models of the radiometers' response functions were developed in order to design the most suitable calibration techniques. These models guided the design of in-flight calibration procedures as well as the development and characterization of a vacuum-calibration chamber and the blackbody source which provided the absolute basis upon which the total and longwave radiometers were characterized. The flight calibration instrumentation for the narrow-field-of-view scanning radiometers is presented and evaluated.
Response simulation and theoretical calibration of a dual-induction resistivity LWD tool
NASA Astrophysics Data System (ADS)
Xu, Wei; Ke, Shi-Zhen; Li, An-Zong; Chen, Peng; Zhu, Jun; Zhang, Wei
2014-03-01
In this paper, responses of a new dual-induction resistivity logging-while-drilling (LWD) tool in 3D inhomogeneous formation models are simulated by the vector finite element method (VFEM), the influences of the borehole, invaded zone, surrounding strata, and tool eccentricity are analyzed, and calibration loop parameters and calibration coefficients of the LWD tool are discussed. The results show that the tool has a greater depth of investigation than that of the existing electromagnetic propagation LWD tools and is more sensitive to azimuthal conductivity. Both deep and medium induction responses have linear relationships with the formation conductivity, considering optimal calibration loop parameters and calibration coefficients. Due to the different depths of investigation and resolution, deep induction and medium induction are affected differently by the formation model parameters, thereby having different correction factors. The simulation results can provide theoretical references for the research and interpretation of the dual-induction resistivity LWD tools.
NASA Astrophysics Data System (ADS)
Zhou, Y.; Voyiadjis, G.
2017-12-01
Subsidence has caused significant wetland losses in coastal Louisiana due to various anthropogenic and geologic processes. Releveling data from the National Geodetic Survey show that one of the governing factors in coastal Louisiana is hydrocarbon production, which has led to the acceleration of spatially and temporally dependent subsidence. This work investigates the influence of hydrocarbon production on subsidence for a typical reservoir, the Valentine field in coastal Louisiana, based on finite element modeling in the framework of poroelasticity and poroplasticity. Geertsma's analytical model is first used in this work to interpret the observed subsidence, for a disc-shaped reservoir embedded in a semi-infinite homogeneous elastic medium. Based on the calibrated elastic material properties, the authors set up a 3D finite element model and validate the numerical results with Geertsma's analytical model. As the plastic deformation of a reservoir in an inhomogeneous medium plays an important role in the compaction of the reservoir and the land subsidence, the authors further adopt a modified Cam-Clay model to take into account the plastic compaction of the reservoir. The material properties in the Cam-Clay model are calibrated based on the subsidence observed in the field and that in the homogeneous elastic case. The observed trend and magnitude of subsidence in the Valentine field can be approximately reproduced through finite element modeling in both the homogeneous elastic case and the inhomogeneous plastic case, by using the calibrated material properties. The maximum compaction in the inhomogeneous plastic case is around half of that in the homogeneous elastic case, and thus the ratio of subsidence over compaction is larger in the inhomogeneous plastic case for a softer reservoir embedded in a stiffer medium.
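For readers unfamiliar with Geertsma's solution, the sketch below numerically evaluates the classical disc-shaped-reservoir subsidence integral for a homogeneous elastic half-space. The functional form is the standard Geertsma (1973) result; the reservoir geometry, compaction coefficient, and pressure drop are illustrative placeholders, not Valentine-field values.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

def geertsma_uz(r, D, R, h, cm, nu, dp, a_max=100.0):
    """Surface subsidence above a disc-shaped reservoir (depth D, radius R,
    thickness h) in a homogeneous elastic half-space (Geertsma, 1973):
      u_z(r) = 2 cm (1 - nu) dp h R * int_0^inf exp(-a D) J1(a R) J0(a r) da
    cm is the uniaxial compaction coefficient, dp the pressure drop; the
    Bessel integral is truncated at a = a_max / R."""
    integrand = lambda a: np.exp(-a * D) * j1(a * R) * j0(a * r)
    val, _ = quad(integrand, 0.0, a_max / R, limit=500)
    return 2.0 * cm * (1.0 - nu) * dp * h * R * val

# illustrative numbers only (not calibrated field values)
for r in (0.0, 1000.0, 3000.0):
    uz = geertsma_uz(r, D=2000.0, R=1000.0, h=50.0, cm=1e-9, nu=0.25, dp=1e7)
    print(f"r = {r:6.0f} m : subsidence = {uz * 100:.2f} cm")
```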
NASA Astrophysics Data System (ADS)
Sokolowski, M.; Colegate, T.; Sutinjo, A. T.; Ung, D.; Wayth, R.; Hurley-Walker, N.; Lenc, E.; Pindor, B.; Morgan, J.; Kaplan, D. L.; Bell, M. E.; Callingham, J. R.; Dwarakanath, K. S.; For, Bi-Qing; Gaensler, B. M.; Hancock, P. J.; Hindson, L.; Johnston-Hollitt, M.; Kapińska, A. D.; McKinley, B.; Offringa, A. R.; Procopio, P.; Staveley-Smith, L.; Wu, C.; Zheng, Q.
2017-11-01
The Murchison Widefield Array (MWA), located in Western Australia, is one of the low-frequency precursors of the international Square Kilometre Array (SKA) project. In addition to pursuing its own ambitious science programme, it is also a testbed for a wide range of future SKA activities, from hardware and software to data analysis. The key science programmes for the MWA and SKA require very high dynamic ranges, which challenge calibration and imaging systems. Correct calibration of the instrument and accurate measurements of source flux densities and polarisations require precise characterisation of the telescope's primary beam. Recent results from the MWA GaLactic Extragalactic All-sky Murchison Widefield Array (GLEAM) survey show that the previously implemented Average Embedded Element (AEE) model still leaves residual polarisation errors of up to 10-20% in Stokes Q. We present a new simulation-based Full Embedded Element (FEE) model, the most rigorous realisation yet of the MWA's primary beam model. It enables efficient calculation of the MWA beam response in arbitrary directions without the need for spatial interpolation. In the new model, every dipole in the MWA tile (4 × 4 bow-tie dipoles) is simulated separately, taking into account all mutual coupling, ground screen, and soil effects, and therefore accounts for the different properties of the individual dipoles within a tile. We have applied the FEE beam model to GLEAM observations at 200-231 MHz and used false Stokes parameter leakage as a metric to compare the models. We have determined that the FEE model reduces the magnitude and declination-dependent behaviour of false polarisation in Stokes Q and V while retaining low levels of false polarisation in Stokes U.
Validation of a C2-C7 cervical spine finite element model using specimen-specific flexibility data.
Kallemeyn, Nicole; Gandhi, Anup; Kode, Swathi; Shivanna, Kiran; Smucker, Joseph; Grosland, Nicole
2010-06-01
This study presents a specimen-specific C2-C7 cervical spine finite element model that was developed using multiblock meshing techniques. The model was validated using in-house experimental flexibility data obtained from the cadaveric specimen used for mesh development. The C2-C7 specimen was subjected to pure continuous moments up to +/-1.0 N m in flexion, extension, lateral bending, and axial rotation, and the motions at each level were obtained. Additionally, the specimen was divided into C2-C3, C4-C5, and C6-C7 functional spinal units (FSUs) which were tested in the intact state as well as after sequential removal of the interspinous, ligamentum flavum, and capsular ligaments. The finite element model was initially assigned baseline material properties based on the literature, but was calibrated using the experimental motion data obtained in-house, while utilizing the ranges of material property values reported in the literature. The calibrated model provided good agreement with the nonlinear experimental loading curves, and can be used to further study the response of the cervical spine in various biomechanical investigations. Copyright 2010 IPEM. Published by Elsevier Ltd. All rights reserved.
Holtschlag, David J.; Koschik, John A.
2002-01-01
The St. Clair–Detroit River Waterway connects Lake Huron with Lake Erie in the Great Lakes basin to form part of the international boundary between the United States and Canada. A two-dimensional hydrodynamic model is developed to compute flow velocities and water levels as part of a source-water assessment of public water intakes. The model, which uses the generalized finite-element code RMA2, discretizes the waterway into a mesh formed by 13,783 quadratic elements defined by 42,936 nodes. Seven steady-state scenarios are used to calibrate the model by adjusting parameters associated with channel roughness in 25 material zones in sub-areas of the waterway. An inverse modeling code is used to systematically adjust model parameters and to determine their associated uncertainty by use of nonlinear regression. Calibration results show close agreement between simulated and expected flows in major channels and water levels at gaging stations. Sensitivity analyses describe the amount of information available to estimate individual model parameters, and quantify the utility of flow measurements at selected cross sections and water-level measurements at gaging stations. Further data collection, model calibration analysis, and grid refinements are planned to assess and enhance two-dimensional flow simulation capabilities describing the horizontal flow distributions in the St. Clair and Detroit Rivers and circulation patterns in Lake St. Clair.
NASA Astrophysics Data System (ADS)
Chen, Z.; Jones, C. M.
2002-05-01
Microchemistry of fish otoliths (fish ear bones) is a very useful tool for monitoring aquatic environments and fish migration. However, determination of the elemental composition of fish otoliths by ICP-MS has been limited to either analysis of dissolved sample solutions or measurement of a limited number of trace elements by laser ablation (LA)-ICP-MS, due to low sensitivity, the lack of available calibration standards, and the complexity of polyatomic molecular interferences. In this study, a method was developed for in situ determination of trace elements in fish otoliths by laser ablation double-focusing sector-field ultra-high-sensitivity Finnigan Element 2 ICP-MS using a solution standard addition calibration method. Due to the lack of matrix-matched solid calibration standards, sixteen trace elements (Na, Mg, P, Cr, Mn, Fe, Ni, Cu, Rb, Sr, Y, Cd, La, Ba, Pb and U) were determined using a solution standard calibration with Ca as an internal standard. Flexibility, easy preparation and stable signals are the advantages of using solution calibration standards. In order to resolve polyatomic molecular interferences, medium resolution (M/delta M > 4000) was used for some elements (Na, Mg, P, Cr, Mn, Fe, Ni, and Cu). Both external calibration and standard addition quantification strategies are compared and discussed. Precision, accuracy, and limits of detection are presented.
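A minimal sketch of the standard-addition calibration with an internal standard, as described above: the analyte signal is normalized to the Ca internal-standard signal and regressed against the spiked concentration, with the x-intercept giving the unknown. All counts and concentrations below are hypothetical.

```python
import numpy as np

# standard-addition series with Ca internal standard (hypothetical counts)
added = np.array([0.0, 10.0, 20.0, 40.0])                # spike, ng/g
sr = np.array([1.52e5, 2.61e5, 3.70e5, 5.95e5])          # Sr-88 signal
ca = np.array([1.00e7, 1.01e7, 0.99e7, 1.00e7])          # Ca-43 internal std

y = sr / ca                                # normalized response
slope, intercept = np.polyfit(added, y, 1) # linear regression
c0 = intercept / slope                     # x-intercept magnitude
print(f"Sr in otolith by standard addition: {c0:.1f} ng/g")
```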
NASA Astrophysics Data System (ADS)
Klostermann, U. K.; Mülders, T.; Schmöller, T.; Lorusso, G. F.; Hendrickx, E.
2010-04-01
In this paper, we discuss the performance of EUV resist models in terms of predictive accuracy, and we assess the readiness of the corresponding model calibration methodology. The study is done on an extensive OPC data set collected at IMEC for the ShinEtsu resist SEVR-59 on the ASML EUV Alpha Demo Tool (ADT), with the data set including more than a thousand CD values. We address practical aspects such as the speed of calibration and selection of calibration patterns. The model is calibrated on 12 process window data series varying in pattern width (32, 36, 40 nm), orientation (H, V) and pitch (dense, isolated). The minimum measured feature size at nominal process condition is a 32 nm CD at a dense pitch of 64 nm. Mask metrology is applied to verify and eventually correct the nominal width of the drawn CD. Cross-sectional SEM information is included in the calibration to tune the simulated resist loss and sidewall angle. The achieved calibration RMS is ~1.0 nm. We show what elements are important to obtain a well calibrated model. We discuss the impact of 3D mask effects on the Bossung tilt. We demonstrate that a correct representation of the flare level during the calibration is important to achieve high predictability at various flare conditions. Although the model calibration is performed on a limited subset of the measurement data (one-dimensional structures only), its accuracy is validated based on a large number of OPC patterns (at nominal dose and focus conditions) not included in the calibration; validation RMS results as small as 1 nm can be reached. Furthermore, we study the model's extendibility to two-dimensional end of line (EOL) structures. Finally, we correlate the experimentally observed fingerprint of the CD uniformity to a model, where EUV tool specific signatures are taken into account.
NON-SPATIAL CALIBRATIONS OF A GENERAL UNIT MODEL FOR ECOSYSTEM SIMULATIONS. (R825792)
General Unit Models simulate system interactions aggregated within one spatial unit of resolution. For unit models to be applicable to spatial computer simulations, they must be formulated generally enough to simulate all habitat elements within the landscape. We present the d...
NON-SPATIAL CALIBRATIONS OF A GENERAL UNIT MODEL FOR ECOSYSTEM SIMULATIONS. (R827169)
General Unit Models simulate system interactions aggregated within one spatial unit of resolution. For unit models to be applicable to spatial computer simulations, they must be formulated generally enough to simulate all habitat elements within the landscape. We present the d...
Joint Calibration of 3d Laser Scanner and Digital Camera Based on Dlt Algorithm
NASA Astrophysics Data System (ADS)
Gao, X.; Li, M.; Xing, L.; Liu, Y.
2018-04-01
We design a calibration target that can be scanned by a 3D laser scanner while simultaneously photographed by a digital camera, yielding a point cloud and photographs of the same target. A method to jointly calibrate the 3D laser scanner and digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. The method adds a digital-camera distortion model to the traditional DLT algorithm; after repeated iteration, it solves for the interior and exterior orientation elements of the camera as well as the joint calibration of the 3D laser scanner and digital camera. Experiments show that the method is reliable.
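A minimal sketch of the classical DLT step that the proposed method extends; the distortion model and the iterative refinement described above are omitted here. The 11 projection parameters are recovered by linear least squares from 3D-2D correspondences.

```python
import numpy as np

def dlt_solve(X, uv):
    """Classical 11-parameter DLT from n >= 6 3D points X (n, 3) and their
    image observations uv (n, 2). Solves the linearized projection
    u = (L1 x + L2 y + L3 z + L4) / (L9 x + L10 y + L11 z + 1), and the
    analogous equation for v, by least squares."""
    rows, rhs = [], []
    for (x, y, z), (u, v) in zip(X, uv):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z])
        rows.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z])
        rhs += [u, v]
    L, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float),
                            rcond=None)
    return L            # L1..L11; camera orientation follows from these
```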
Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang
2015-10-29
Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson's ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.
MODIS Solar Calibration Simulation Assisted Refinement
NASA Technical Reports Server (NTRS)
Waluschka, Eugene; Xiaoxiong, Xiong; Guenther, Bruce; Barnes, William; Moyer, David; Salomonson, Vincent V.
2004-01-01
A detailed optical radiometric model has been created of the MODIS instrument's solar calibration process. This model takes into account the orientation and distance of the spacecraft with respect to the sun, the correlated motions of the scan mirror and the sun, all of the optical elements, the detector locations on the visible and near IR focal planes, the solar diffuser, and the attenuation screen with all of its hundreds of pinholes. An efficient computational scheme takes into account all of these factors and has produced results which reproduce the observed time-dependent intensity variations on the two focal planes with considerable fidelity. This agreement between predictions and observations has given insight into the causes of some small time-dependent variations and how to incorporate them into the overall calibration scheme. The radiometric model is described, and modeled and actual measurements are presented and compared.
Technique for Radiometer and Antenna Array Calibration - TRAAC
NASA Technical Reports Server (NTRS)
Meyer, Paul; Sims, William; Varnavas, Kosta; McCracken, Jeff; Srinivasan, Karthik; Limaye, Ashutosh; Laymon, Charles; Richeson, James
2012-01-01
Highly sensitive receivers are used to detect minute amounts of emitted electromagnetic energy. Calibration of these receivers is vital to the accuracy of the measurements. Traditional calibration techniques depend on calibration references internal to the receivers for calibrating the observed electromagnetic energy. Such methods can only calibrate measurement errors introduced by the receiver itself. Their disadvantage is that they cannot account for errors introduced by devices, such as antennas, used for capturing electromagnetic radiation. This severely limits the types of antennas that can be used to make measurements with a high degree of accuracy. Complex antenna systems, such as electronically steerable antennas (also known as phased arrays), while offering potentially significant advantages, suffer from a lack of a reliable and accurate calibration technique. The proximity of antenna elements in an array results in interaction between the electromagnetic fields radiated (or received) by the individual elements, a phenomenon called mutual coupling. The new calibration method uses a known noise source as a calibration load to determine the instantaneous characteristics of the antenna. The noise source is emitted from one element of the antenna array and received by all the other elements through mutual coupling. This received noise is used as a calibration standard to monitor the stability of the antenna electronics.
Calibration of discrete element model parameters: soybeans
NASA Astrophysics Data System (ADS)
Ghodki, Bhupendra M.; Patel, Manish; Namdeo, Rohit; Carpenter, Gopal
2018-05-01
Discrete element method (DEM) simulations are broadly used to gain insight into the flow characteristics of granular materials in complex particulate systems. DEM input parameters are a critical prerequisite for an efficient simulation. Thus, the present investigation aims to determine DEM input parameters for the Hertz-Mindlin model using soybeans as the granular material. To achieve this aim, a widely accepted calibration approach was used with a standard box-type apparatus. Further, qualitative and quantitative findings, such as particle profile, the height of kernels retained against the acrylic wall, and angle of repose, from experiments and numerical simulations were compared to obtain the parameters. The calibrated set of DEM input parameters includes (a) material properties: particle geometric mean diameter (6.24 mm); spherical shape; particle density (1220 kg m^{-3}); and (b) interaction parameters, particle-particle: coefficient of restitution (0.17), coefficient of static friction (0.26), coefficient of rolling friction (0.08); and particle-wall: coefficient of restitution (0.35), coefficient of static friction (0.30), coefficient of rolling friction (0.08). The results may adequately be used to simulate particle-scale mechanics (grain commingling, flow/motion, forces, etc.) of soybeans in post-harvest machinery and devices.
Application of the finite element groundwater model FEWA to the engineered test facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craig, P.M.; Davis, E.C.
1985-09-01
A finite element model for water transport through porous media (FEWA) has been applied to the unconfined aquifer at the Oak Ridge National Laboratory Solid Waste Storage Area 6 Engineered Test Facility (ETF). The model was developed in 1983 as part of the Shallow Land Burial Technology - Humid Task (ONL-WL14) and was previously verified using several general hydrologic problems for which an analytic solution exists. Model application and calibration, as described in this report, consisted of modeling the ETF water table for three specialized cases: a one-dimensional steady-state simulation, a one-dimensional transient simulation, and a two-dimensional transient simulation. In the one-dimensional steady-state simulation, the FEWA output accurately predicted the water table during a long period in which there were no man-induced or natural perturbations to the system. The input parameters of most importance for this case were hydraulic conductivity and aquifer bottom elevation. In the two transient cases, the FEWA output matched observed water table responses to a single rainfall event occurring in February 1983, yielding a calibrated finite element model that is useful for further study of additional precipitation events as well as contaminant transport at the experimental site.
NASA Astrophysics Data System (ADS)
Jackson-Blake, L.
2014-12-01
Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, but even in well-studied catchments, streams are often only sampled at a fortnightly or monthly frequency. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by one process-based catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the MCMC-DREAM algorithm. Using daily rather than fortnightly data resulted in improved simulation of the magnitude of peak TDP concentrations, in turn resulting in improved model performance statistics. Marginal posteriors were better constrained by the higher frequency data, resulting in a large reduction in parameter-related uncertainty in simulated TDP (the 95% credible interval decreased from 26 to 6 μg/l). The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, leading to the recommendation that parameters should not be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. Secondary study aims were to highlight the subjective elements involved in auto-calibration and suggest practical improvements that could make models such as INCA-P more suited to auto-calibration and uncertainty analyses. Two key improvements include model simplification, so that all model parameters can be included in an analysis of this kind, and better documenting of recommended ranges for each parameter, to help in choosing sensible priors.
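The study's auto-calibration uses MCMC-DREAM within INCA-P; as a self-contained stand-in, the sketch below runs a plain random-walk Metropolis sampler on a toy two-parameter "recession" model to show how daily versus fortnightly observations change posterior width. The model, priors, step sizes, and data are all invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(theta, t, obs, sigma=3.0):
    """Toy log-posterior: exponential 'TDP recession' forward model with
    Gaussian errors and flat, bounded priors. Entirely illustrative."""
    a, k = theta
    if a <= 0 or not 0.0 < k < 1.0:
        return -np.inf
    sim = a * np.exp(-k * t)
    return -0.5 * np.sum((obs - sim) ** 2) / sigma ** 2

def metropolis(t, obs, n_iter=20000):
    theta = np.array([20.0, 0.10])
    lp = log_post(theta, t, obs)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.standard_normal(2) * [0.5, 0.002]
        lp_prop = log_post(prop, t, obs)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain[n_iter // 2:])           # drop burn-in

t = np.arange(0.0, 540.0)                          # ~18 months, daily
obs = 30.0 * np.exp(-0.05 * t) + rng.normal(0, 3.0, t.size)
daily, fortnightly = metropolis(t, obs), metropolis(t[::14], obs[::14])
for name, c in [("daily", daily), ("fortnightly", fortnightly)]:
    lo, hi = np.percentile(c[:, 0], [2.5, 97.5])
    print(f"{name}: 95% credible-interval width = {hi - lo:.2f}")
```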
Jaramillo, Hector E; Gómez, Lessby; García, Jose J
2015-01-01
With the aim to study disc degeneration and the risk of injury during occupational activities, a new finite element (FE) model of the L4-L5-S1 segment of the human spine was developed based on the anthropometry of a typical Colombian worker. Beginning with medical images, the programs CATIA and SOLIDWORKS were used to generate and assemble the vertebrae and create the soft structures of the segment. The software ABAQUS was used to run the analyses, which included a detailed model calibration using the experimental step-wise reduction data for the L4-L5 component, while the L5-S1 segment was calibrated in the intact condition. The range of motion curves, the intradiscal pressure and the lateral bulging under pure moments were considered for the calibration. As opposed to other FE models that include the L5-S1 disc, the model developed in this study considered the regional variations and anisotropy of the annulus as well as a realistic description of the nucleus geometry, which allowed an improved representation of experimental data during the validation process. Hence, the model can be used to analyze the stress and strain distributions in the L4-L5 and L5-S1 discs of workers performing activities such as lifting and carrying tasks.
NASA Astrophysics Data System (ADS)
de Lera Acedo, E.; Bolli, P.; Paonessa, F.; Virone, G.; Colin-Beltran, E.; Razavi-Ghods, N.; Aicardi, I.; Lingua, A.; Maschio, P.; Monari, J.; Naldi, G.; Piras, M.; Pupillo, G.
2018-03-01
In this paper we present the electromagnetic modeling and beam pattern measurements of a 16-element ultra-wideband sparse random test array for the low frequency instrument of the Square Kilometer Array telescope. We discuss the importance of a small array test platform for the development of technologies and techniques towards the final telescope, highlighting the most relevant aspects of its design. We also describe the electromagnetic simulations and modeling work as well as the embedded-element and array pattern measurements using an Unmanned Aerial Vehicle system. The latter are helpful both for the validation of the models and the design as well as for the future instrumental calibration of the telescope, thanks to the stable, accurate and strong radio frequency signal transmitted by the UAV. At this stage of the design, these measurements have shown a general agreement between experimental results and numerical data and have revealed the localized effect of un-calibrated cable lengths in the inner side-lobes of the array pattern.
NASA Astrophysics Data System (ADS)
Salaris, M.; Cassisi, S.; Schiavon, R. P.; Pietrinferni, A.
2018-04-01
Red giants in the updated APOGEE-Kepler catalogue, with estimates of mass, chemical composition, surface gravity and effective temperature, have recently challenged stellar models computed under the standard assumption of solar calibrated mixing length. In this work, we critically reanalyse this sample of red giants, adopting our own stellar model calculations. Contrary to previous results, we find that the disagreement between the Teff scale of red giants and models with solar calibrated mixing length disappears when considering our models and the APOGEE-Kepler stars with scaled solar metal distribution. However, a discrepancy shows up when α-enhanced stars are included in the sample. We have found that assuming mass, chemical composition and effective temperature scale of the APOGEE-Kepler catalogue, stellar models generally underpredict the change of temperature of red giants caused by α-element enhancements at fixed [Fe/H]. A second important conclusion is that the choice of the outer boundary conditions employed in model calculations is critical. Effective temperature differences (metallicity dependent) between models with solar calibrated mixing length and observations appear for some choices of the boundary conditions, but this is not a general result.
NASA Astrophysics Data System (ADS)
Price, D. C.; Angus, D. A.; Garcia, A.; Fisher, Q. J.; Parsons, S.; Kato, J.
2018-03-01
Time-lapse seismic attributes are used extensively in the history matching of production simulator models. However, although proven to contain information regarding production-induced stress change, they are typically only loosely (i.e. qualitatively) used to calibrate geomechanical models. In this study we conduct a multimethod Global Sensitivity Analysis (GSA) to assess the feasibility of, and aid, the quantitative calibration of geomechanical models via near-offset time-lapse seismic data, specifically the calibration of mechanical properties of the overburden. Via the GSA, we analyse the near-offset overburden seismic traveltimes from over 4000 perturbations of a Finite Element (FE) geomechanical model of a typical High Pressure High Temperature (HPHT) reservoir in the North Sea. We find that, out of an initially large set of material properties, the near-offset overburden traveltimes are primarily affected by Young's modulus and the effective stress (i.e. Biot) coefficient. The unexpected significance of the Biot coefficient highlights the importance of modelling fluid flow and pore pressure outside of the reservoir. The FE model is complex and highly nonlinear. Multiple combinations of model parameters can yield equally possible model realizations. Consequently, numerical calibration via a large number of random model perturbations is unfeasible. However, the significant differences in traveltime results suggest that more sophisticated calibration methods could potentially be feasible for finding numerous suitable solutions. The results of the time-varying GSA demonstrate how acquiring multiple vintages of time-lapse seismic data can be advantageous. However, they also suggest that significant overburden near-offset seismic time-shifts, useful for model calibration, may take up to 3 years after the start of production to manifest. Due to the nonlinearity of the model behaviour, similar uncertainty in the reservoir mechanical properties appears to influence overburden traveltime to a much greater extent. Therefore, reservoir properties must be known to a suitable degree of accuracy before the calibration of the overburden can be considered.
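The paper's multimethod GSA is applied to FE traveltimes; as a compact illustration of variance-based sensitivity analysis, the sketch below implements a Saltelli-style pick-freeze estimator of first-order Sobol indices on a stand-in response function. The response function and parameter bounds are invented; only the estimator itself is standard.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_order_sobol(f, bounds, n=100_000):
    """Saltelli-style pick-freeze estimate of first-order Sobol indices
    for a vectorized model f taking an (n, k) array of inputs."""
    lo, hi = np.array(bounds, float).T
    k = lo.size
    A = lo + (hi - lo) * rng.random((n, k))
    B = lo + (hi - lo) * rng.random((n, k))
    fA, fB = f(A), f(B)
    total_var = np.var(np.concatenate([fA, fB]))
    S = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # freeze parameter i from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / total_var
    return S

def traveltime_shift(x):
    """Stand-in for the FE overburden traveltime response (invented)."""
    E, biot, nu = x[:, 0], x[:, 1], x[:, 2]
    return 1e10 / E + 0.5 * biot + 0.01 * nu

bounds = [(5e9, 5e10), (0.4, 1.0), (0.1, 0.4)]      # E (Pa), Biot, nu
print(first_order_sobol(traveltime_shift, bounds))  # E and Biot dominate
```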
Polarimetry With Phased Array Antennas: Theoretical Framework and Definitions
NASA Astrophysics Data System (ADS)
Warnick, Karl F.; Ivashina, Marianna V.; Wijnholds, Stefan J.; Maaskant, Rob
2012-01-01
For phased array receivers, the accuracy with which the polarization state of a received signal can be measured depends on the antenna configuration, array calibration process, and beamforming algorithms. A signal and noise model for a dual-polarized array is developed and related to standard polarimetric antenna figures of merit, and the ideal polarimetrically calibrated, maximum-sensitivity beamforming solution for a dual-polarized phased array feed is derived. A practical polarimetric beamformer solution that does not require exact knowledge of the array polarimetric response is shown to be equivalent to the optimal solution in the sense that when the practical beamformers are calibrated, the optimal solution is obtained. To provide a rough initial polarimetric calibration for the practical beamformer solution, an approximate single-source polarimetric calibration method is developed. The modeled instrumental polarization error for a dipole phased array feed with the practical beamformer solution and single-source polarimetric calibration was -10 dB or lower over the array field of view for elements with alignments perturbed by random rotations with 5 degree standard deviation.
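A minimal single-polarization sketch of the maximum-sensitivity beamforming idea referenced above: weights proportional to Rn^-1 v maximize sensitivity for a steering vector v under noise covariance Rn. The dual-polarized, polarimetrically calibrated formulation in the paper is richer; the array response and covariance here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)

n_el = 8
v = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_el))   # toy array response
G = rng.standard_normal((n_el, n_el)) + 1j * rng.standard_normal((n_el, n_el))
Rn = G @ G.conj().T + n_el * np.eye(n_el)              # SPD noise covariance

w = np.linalg.solve(Rn, v)          # max-sensitivity weights, w ~ Rn^-1 v
w /= w.conj() @ v                   # normalize to unit signal response
sensitivity = 1.0 / np.real(w.conj() @ Rn @ w)   # output SNR per unit signal
print("beamformer sensitivity:", sensitivity)
```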
NASA Technical Reports Server (NTRS)
Miller, Eric J.; Holguin, Andrew C.; Cruz, Josue; Lokos, William A.
2014-01-01
The safety-of-flight parameters for the Adaptive Compliant Trailing Edge (ACTE) flap experiment require that flap-to-wing interface loads be sensed and monitored in real time to ensure that the structural load limits of the wing are not exceeded. This paper discusses the strain gage load calibration testing and load equation derivation methodology for the ACTE interface fittings. Both the left and right wing flap interfaces were monitored; each contained four uniquely designed and instrumented flap interface fittings. The interface hardware design and instrumentation layout are discussed. Twenty-one applied test load cases were developed using the predicted in-flight loads. Pre-test predictions of strain gage responses were produced using finite element method models of the interface fittings. Predicted and measured test strains are presented. A load testing rig and three hydraulic jacks were used to apply combinations of shear, bending, and axial loads to the interface fittings. Hardware deflections under load were measured using photogrammetry and transducers. Due to deflections in the interface fitting hardware and test rig, finite element model techniques were used to calculate the reaction loads throughout the applied load range, taking into account the elastically-deformed geometry. The primary load equations were selected based on multiple calibration metrics. An independent set of validation cases was used to validate each derived equation. The 2-sigma residual errors for the shear loads were less than eight percent of the full-scale calibration load; the 2-sigma residual errors for the bending moment loads were less than three percent of the full-scale calibration load. The derived load equations for shear, bending, and axial loads are presented, with the calculated errors for both the calibration cases and the independent validation load cases.
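A toy version of the load-equation derivation described above: given applied calibration loads and measured bridge strains, each load equation is a linear least-squares fit of load onto strains, checked against independent validation cases. The dimensions and noise levels are invented, not the ACTE test values.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic calibration: 21 applied load cases (shear V, bending M, axial P)
# and responses of 4 strain-gage bridges on one interface fitting
loads = rng.uniform(-1.0, 1.0, (21, 3))              # normalized loads
sens = rng.normal(0.0, 1.0, (3, 4))                  # gage sensitivities
strains = loads @ sens + rng.normal(0.0, 0.01, (21, 4))

# one load equation per component: load = strains @ coef (least squares)
coef, *_ = np.linalg.lstsq(strains, loads, rcond=None)

# independent validation cases, 2-sigma residual error in % of full scale
val_loads = rng.uniform(-1.0, 1.0, (10, 3))
val_strains = val_loads @ sens + rng.normal(0.0, 0.01, (10, 4))
resid = val_loads - val_strains @ coef
print("2-sigma residuals (% full scale):", 2.0 * resid.std(axis=0) * 100.0)
```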
Clegg, Samuel M.; Wiens, Roger C.; Anderson, Ryan; ...
2016-12-24
The ChemCam Laser-Induced Breakdown Spectroscopy (LIBS) instrument onboard the Mars Science Laboratory (MSL) rover Curiosity has obtained > 300,000 spectra of rock and soil analysis targets since landing at Gale Crater in 2012, and the spectra represent perhaps the largest publicly-available LIBS datasets. The compositions of the major elements, reported as oxides, have been re-calibrated using a laboratory LIBS instrument, Mars-like atmospheric conditions, and a much larger set of standards (408) that span a wider compositional range than previously employed. The new calibration uses a combination of partial least squares (PLS1) and Independent Component Analysis (ICA) algorithms, together with a calibration transfer matrix to minimize differences between the conditions under which the standards were analyzed in the laboratory and the conditions on Mars. While the previous model provided good results in the compositional range near the average Mars surface composition, the new model fits the extreme compositions far better. Examples are given for plagioclase feldspars, where silicon was previously significantly over-estimated, and for calcium-sulfate veins, where silicon compositions near zero were inaccurate. Here, the uncertainties of major element abundances are described as a function of the abundances, and are overall significantly lower than the previous model, enabling important new geochemical interpretations of the data.
The Importance of Calibration in Clinical Psychology.
Lindhiem, Oliver; Petersen, Isaac T; Mentch, Lucas K; Youngstrom, Eric A
2018-02-01
Accuracy has several elements, not all of which have received equal attention in the field of clinical psychology. Calibration, the degree to which a probabilistic estimate of an event reflects the true underlying probability of the event, has largely been neglected in the field of clinical psychology in favor of other components of accuracy such as discrimination (e.g., sensitivity, specificity, area under the receiver operating characteristic curve). Although it is frequently overlooked, calibration is a critical component of accuracy with particular relevance for prognostic models and risk-assessment tools. With advances in personalized medicine and the increasing use of probabilistic (0% to 100%) estimates and predictions in mental health research, the need for careful attention to calibration has become increasingly important.
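A minimal sketch of assessing calibration in the sense defined above, using scikit-learn's reliability-curve utility and the Brier score on synthetic, deliberately miscalibrated probabilistic predictions:

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(3)

# synthetic risk predictions p with deliberately miscalibrated outcomes:
# the true event rate is compressed relative to the stated probability
p = rng.uniform(0.0, 1.0, 5000)
outcome = rng.uniform(0.0, 1.0, 5000) < (0.2 + 0.6 * p)

prob_true, prob_pred = calibration_curve(outcome, p, n_bins=10)
for pred, true in zip(prob_pred, prob_true):
    print(f"predicted {pred:.2f} -> observed {true:.2f}")  # equal iff calibrated
print("Brier score:", brier_score_loss(outcome, p))
```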
Model Calibration Efforts for the International Space Station's Solar Array Mast
NASA Technical Reports Server (NTRS)
Elliott, Kenny B.; Horta, Lucas G.; Templeton, Justin D.; Knight, Norman F., Jr.
2012-01-01
The International Space Station (ISS) relies on sixteen solar-voltaic blankets to provide electrical power to the station. Each pair of blankets is supported by a deployable boom called the Folding Articulated Square Truss Mast (FAST Mast). At certain ISS attitudes, the solar arrays can be positioned in such a way that shadowing of either one or three longerons causes an unexpected asymmetric thermal loading that if unchecked can exceed the operational stability limits of the mast. Work in this paper documents part of an independent NASA Engineering and Safety Center effort to assess the existing operational limits. Because of the complexity of the system, the problem is being worked using a building-block progression from components (longerons), to units (single or multiple bays), to assembly (full mast). The paper presents results from efforts to calibrate the longeron components. The work includes experimental testing of two types of longerons (straight and tapered), development of Finite Element (FE) models, development of parameter uncertainty models, and the establishment of a calibration and validation process to demonstrate adequacy of the models. Models in the context of this paper refer to both FE model and probabilistic parameter models. Results from model calibration of the straight longerons show that the model is capable of predicting the mean load, axial strain, and bending strain. For validation, parameter values obtained from calibration of straight longerons are used to validate experimental results for the tapered longerons.
Development of a quality assurance program for ionizing radiation secondary calibration laboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heaton, H.T. II; Taylor, A.R. Jr.
For calibration laboratories, routine calibrations of instruments meeting stated accuracy goals are important. One method of achieving the accuracy goals is to establish and follow a quality assurance program designed to monitor all aspects of the calibration program and to provide the appropriate feedback mechanism if adjustments are needed. In the United States there are a number of organizations with laboratory accreditation programs. All existing accreditation programs require that the laboratory implement a quality assurance program with essentially the same elements in all of these programs. Collectively, these elements have been designated as a Measurement Quality Assurance (MQA) program. This paper will briefly discuss the interrelationship of the elements of an MQA program. Using the Center for Devices and Radiological Health (CDRH) X-ray Calibration Laboratory (XCL) as an example, it will focus on setting up a quality control program for the equipment in a Secondary Calibration Laboratory.
PowderSim: Lagrangian Discrete and Mesh-Free Continuum Simulation Code for Cohesive Soils
NASA Technical Reports Server (NTRS)
Johnson, Scott; Walton, Otis; Settgast, Randolph
2013-01-01
PowderSim is a calculation tool that combines a discrete-element method (DEM) module, including calibrated interparticle-interaction relationships, with a mesh-free, continuum, SPH (smoothed-particle hydrodynamics) based module that utilizes enhanced, calibrated, constitutive models capable of mimicking both large deformations and the flow behavior of regolith simulants and lunar regolith under conditions anticipated during in situ resource utilization (ISRU) operations. The major innovation introduced in PowderSim is to use a mesh-free method (SPH-based) with a calibrated and slightly modified critical-state soil mechanics constitutive model to extend the ability of the simulation tool to also address full-scale engineering systems in the continuum sense. The PowderSim software maintains the ability to address particle-scale problems, like size segregation, in selected regions with a traditional DEM module, which has improved contact physics and electrostatic interaction models.
Modeling Adhesive Anchors in a Discrete Element Framework
Marcon, Marco; Vorel, Jan; Ninčević, Krešimir; Wan-Wendner, Roman
2017-01-01
In recent years, post-installed anchors are widely used to connect structural members and to fix appliances to load-bearing elements. A bonded anchor typically denotes a threaded bar placed into a borehole filled with adhesive mortar. The high complexity of the problem, owing to the multiple materials and failure mechanisms involved, requires a numerical support for the experimental investigation. A reliable model able to reproduce a system’s short-term behavior is needed before the development of a more complex framework for the subsequent investigation of the lifetime of fasteners subjected to various deterioration processes can commence. The focus of this contribution is the development and validation of such a model for bonded anchors under pure tension load. Compression, modulus, fracture and splitting tests are performed on standard concrete specimens. These serve for the calibration and validation of the concrete constitutive model. The behavior of the adhesive mortar layer is modeled with a stress-slip law, calibrated on a set of confined pull-out tests. The model validation is performed on tests with different configurations comparing load-displacement curves, crack patterns and concrete cone shapes. A model sensitivity analysis and the evaluation of the bond stress and slippage along the anchor complete the study. PMID:28786964
NASA Astrophysics Data System (ADS)
Matiatos, Ioannis; Varouhakis, Emmanouil A.; Papadopoulou, Maria P.
2015-04-01
As the sustainable use of groundwater resources is a great challenge for many countries in the world, groundwater modeling has become a very useful and well-established tool for studying groundwater management problems. Based on the various methods used to numerically solve the algebraic equations representing groundwater flow and contaminant mass transport, numerical models are mainly divided into finite difference-based and finite element-based models. The present study aims at evaluating the performance of finite difference-based (MODFLOW-MT3DMS), finite element-based (FEFLOW) and hybrid finite element-finite difference (Princeton Transport Code, PTC) groundwater numerical models simulating groundwater flow and nitrate mass transport in the alluvial aquifer of the Trizina region in NE Peloponnese, Greece. The calibration of groundwater flow in all models was performed using groundwater hydraulic head data from seven stress periods, and the validation was based on a series of hydraulic head data for two stress periods at a sufficient number of observation locations. The same periods were used for the calibration of nitrate mass transport. The calibration and validation of the three models revealed that the simulated values of hydraulic heads and nitrate mass concentrations coincide well with the observed ones. The models' performance was assessed by performing a statistical analysis of these different types of numerical algorithms. A number of metrics, such as Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Bias, Nash-Sutcliffe Model Efficiency (NSE) and Reliability Index (RI), were used, allowing the direct comparison of the models' performance. Spatiotemporal Kriging (STRK) was also applied using separable and non-separable spatiotemporal variograms to predict water table level and nitrate concentration at each sampling station for two selected hydrological stress periods. The predictions were validated using the respective measured values. Maps of water table level and nitrate concentrations were produced and compared with those obtained from the groundwater and mass transport numerical models. Preliminary results showed that the spatiotemporal geostatistical method was similar in efficiency to the numerical models; however, the data requirements of the former were significantly lower. Advantages and disadvantages of each method's performance were analysed and discussed, indicating the characteristics of the different approaches.
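For reference, the sketch below implements the cited goodness-of-fit metrics with their usual definitions (MAE, RMSE, bias, and Nash-Sutcliffe efficiency); the reliability index has several variant definitions in the literature and is omitted. The head values are invented.

```python
import numpy as np

def fit_metrics(obs, sim):
    """MAE, RMSE, bias and Nash-Sutcliffe efficiency with their usual
    definitions, for observed vs simulated heads or concentrations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    return {
        "MAE": np.mean(np.abs(err)),
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "Bias": np.mean(err),
        # NSE: 1 is perfect; below 0, the observed mean beats the model
        "NSE": 1.0 - np.sum(err ** 2) / np.sum((obs - np.mean(obs)) ** 2),
    }

print(fit_metrics([12.3, 11.8, 11.2, 10.9, 10.5],
                  [12.1, 11.9, 11.4, 10.7, 10.6]))
```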
Dong, Ren G; Welcome, Daniel E; McDowell, Thomas W; Wu, John Z
2013-11-25
The relationship between the vibration transmissibility and driving-point response functions (DPRFs) of the human body is important for understanding vibration exposures of the system and for developing valid models. This study identified their theoretical relationship and demonstrated that the sum of the DPRFs can be expressed as a linear combination of the transmissibility functions of the individual mass elements distributed throughout the system. The relationship is verified using several human vibration models. This study also clarified the requirements for reliably quantifying transmissibility values used as references for calibrating the system models. As an example application, this study used the developed theory to perform a preliminary analysis of the method for calibrating models using both vibration transmissibility and DPRFs. The results of the analysis show that the combined method can theoretically result in a unique and valid solution of the model parameters, at least for linear systems. However, the validation of the method itself does not guarantee the validation of the calibrated model, because the validation of the calibration also depends on the model structure and the reliability and appropriate representation of the reference functions. The basic theory developed in this study is also applicable to the vibration analyses of other structures.
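The identity described above, that the driving-point apparent mass equals the mass-weighted sum of the element transmissibilities, can be checked numerically on a small base-excited model. The sketch below does so for an illustrative 2-DOF chain; all masses, stiffnesses, and damping values are invented.

```python
import numpy as np

# illustrative 2-DOF base-excited chain: base -(k1,c1)- m1 -(k2,c2)- m2
m = np.array([60.0, 15.0])            # kg
k1, k2 = 4.0e4, 1.2e4                 # N/m
c1, c2 = 400.0, 120.0                 # N s/m

def transmissibility(w):
    """Absolute-motion transmissibilities X_i / X_base at frequency w."""
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    C = np.array([[c1 + c2, -c2], [-c2, c2]])
    D = -w ** 2 * np.diag(m) + 1j * w * C + K
    rhs = np.array([k1 + 1j * w * c1, 0.0])   # base-coupling terms
    return np.linalg.solve(D, rhs)

for f_hz in (2.0, 5.0, 10.0):
    w = 2.0 * np.pi * f_hz
    T = transmissibility(w)
    mass_weighted = np.sum(m * T)                  # sum_i m_i T_i(w)
    F_base = (k1 + 1j * w * c1) * (1.0 - T[0])     # reaction at the base
    apparent_mass = F_base / (-w ** 2)             # F / base acceleration
    print(f_hz, np.allclose(mass_weighted, apparent_mass))   # True
```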
NASA Technical Reports Server (NTRS)
Walker, K. P.
1981-01-01
Results of a 20-month research and development program for nonlinear structural modeling with advanced time-temperature constitutive relationships are reported. The program included: (1) the evaluation of a number of viscoplastic constitutive models in the published literature; (2) incorporation of three of the most appropriate constitutive models into the MARC nonlinear finite element program; (3) calibration of the three constitutive models against experimental data using Hastelloy-X material; and (4) application of the most appropriate constitutive model to a three dimensional finite element analysis of a cylindrical combustor liner louver test specimen to establish the capability of the viscoplastic model to predict component structural response.
An inversion-based self-calibration for SIMS measurements: Application to H, F, and Cl in apatite
NASA Astrophysics Data System (ADS)
Boyce, J. W.; Eiler, J. M.
2011-12-01
Measurements of volatile abundances in igneous apatites can provide information regarding the abundances and evolution of volatiles in magmas, with applications to terrestrial volcanism and planetary evolution. Secondary ion mass spectrometry (SIMS) measurements can produce accurate and precise measurements of H and other volatiles in many materials including apatite. SIMS standardization generally makes use of empirical linear transfer functions that relate measured ion ratios to independently known concentrations. However, this approach is often limited by the lack of compositionally diverse, well-characterized, homogeneous standards. In general, SIMS calibrations are developed for minor and trace elements, and any two are treated as independent of one another. However, in crystalline materials, additional stoichiometric constraints may apply. In the case of apatite, the sum of concentrations of the abundant volatile elements (H, Cl, and F) should closely approach 100% occupancy of their collective structural site. Here we propose and document the efficacy of a method for standardizing SIMS analyses of abundant volatiles in apatites that takes advantage of this stoichiometric constraint. The principal advantage of this method is that it is effectively self-standardizing; i.e., it requires no independently known homogeneous reference standards. We define a system of independent linear equations relating measured ion ratios (H/P, Cl/P, F/P) and unknown calibration slopes. Given sufficient range in the concentrations of the different elements among apatites measured in a single analytical session, solving this system of equations allows the calibration slope for each element to be determined without standards, using only blank-corrected ion ratios. In the case that a data set of this kind lacks sufficient range in measured compositions of one or more of the relevant ion ratios, one can employ measurements of additional apatites of a variety of compositions to increase the statistical range and make the inversion more accurate and precise. These additional non-standard apatites need only be wide-ranging in composition: They need not be homogeneous nor have known H, F, or Cl concentrations. Tests utilizing synthetic data and data generated in the laboratory indicate that this method should yield satisfactory results provided apatites meet the criteria of the model. The inversion method is able to reproduce conventional calibrations to within <2.5%, a level of accuracy comparable to or even better than the uncertainty of the conventional calibration, and one that includes both error in the inversion method as well as any true error in the independently determined values of the standards. Uncertainties in the inversion calibrations range from 0.1-1.7% (2σ), typically an order of magnitude smaller than the uncertainties in conventional calibrations (~4-5% for H2O, 1-19% for F and Cl). However, potential systematic errors stem from the model assumption of 100% occupancy of this site by the measured elements. Use of this method simplifies analysis of H, F, and Cl in apatites by SIMS, and may also be amenable to other stoichiometrically limited substitution groups, including P+As+S+Si+C in apatite, and Zr+Hf+U+Th in non-metamict zircon.
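A minimal sketch of the inversion itself: each apatite contributes one linear equation, sum_e r(e,j)/k_e = 1, where r(e,j) are its blank-corrected ion ratios; stacking the equations and solving by least squares yields the inverse sensitivity factors with no reference standards. The ratios below are hypothetical.

```python
import numpy as np

# Blank-corrected ion ratios (H/P, F/P, Cl/P), one row per apatite; the
# values are hypothetical but constructed to share one sensitivity set.
R = np.array([
    [0.005, 1.140, 0.021],
    [0.075, 0.660, 0.105],
    [0.025, 0.840, 0.140],
    [0.125, 0.480, 0.070],
    [0.038, 0.540, 0.280],
])

# Stoichiometric closure: X-site fractions sum to 1 for every grain,
# i.e. sum_e R[j, e] / k_e = 1.  Solve R x = 1 for x_e = 1 / k_e.
x, *_ = np.linalg.lstsq(R, np.ones(R.shape[0]), rcond=None)
k = 1.0 / x
print("standardless calibration slopes (H, F, Cl):", k)  # ~(0.25, 1.2, 0.7)
```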
Robust camera calibration for sport videos using court models
NASA Astrophysics Data System (ADS)
Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang
2003-12-01
We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
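A compact OpenCV sketch of two steps the algorithm describes: line-segment extraction via the Hough transform, and conversion between image and court coordinates once correspondences are established. The color thresholds, file name, and point correspondences are placeholders, and the combinatorial model-matching step is omitted.

```python
import cv2
import numpy as np

frame = cv2.imread("tennis_frame.png")                  # hypothetical input
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
white = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))   # candidate line pixels

# Hough transform: extract line segments as court-line candidates
segments = cv2.HoughLinesP(white, rho=1, theta=np.pi / 180, threshold=80,
                           minLineLength=60, maxLineGap=10)

# after model matching, 4+ correspondences between the court model (metres)
# and image pixels fix the calibration as a homography
court_pts = np.float32([[0, 0], [10.97, 0], [10.97, 23.77], [0, 23.77]])
image_pts = np.float32([[312, 540], [905, 528], [1130, 168], [118, 175]])
H, _ = cv2.findHomography(image_pts, court_pts)

# map an image position (e.g. a player's feet) to court coordinates
pos = cv2.perspectiveTransform(np.float32([[[640, 360]]]), H)
print("court coordinates (m):", pos.ravel())
```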
Finite element code development for modeling detonation of HMX composites
NASA Astrophysics Data System (ADS)
Duran, Adam V.; Sundararaghavan, Veera
2017-01-01
In this work, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable, efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well-calibrated equations of state for the solid unreacted material and the gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Grüneisen equation of state, calibrated from experiments, was employed for the unreacted HMX. The JWL form was used to model the EOS of the gaseous reaction products. It is assumed that the unreacted explosive and the reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that yields a non-oscillatory stabilized scheme at the shock front. The FE model was validated using analytical solutions for the Sod shock tube and ZND strong detonation models. Benchmark problems are presented for geometries in which a single HMX crystal is subjected to a shock condition.
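The JWL product EOS mentioned above is compact enough to state directly. A sketch follows, with parameter values of the order published for HMX detonation products; they are illustrative and are not the calibration used in the paper:

```python
import numpy as np

def jwl_pressure(V, e, A=778.3e9, B=7.07e9, R1=4.2, R2=1.0, omega=0.3):
    """JWL equation of state for detonation products.

    V : relative volume v/v0 (dimensionless)
    e : internal energy per unit initial volume (Pa)
    Parameter values are illustrative, of the order published for HMX
    products; the paper's own calibration is not reproduced here.
    """
    return (A * (1.0 - omega / (R1 * V)) * np.exp(-R1 * V)
            + B * (1.0 - omega / (R2 * V)) * np.exp(-R2 * V)
            + omega * e / V)

# Pressure near the CJ state (illustrative numbers, ~36 GPa expected):
print(jwl_pressure(V=0.75, e=10.5e9) / 1e9, "GPa")
```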
High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS
NASA Astrophysics Data System (ADS)
Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu
2017-05-01
The inertial navigation system is a core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on all three axes, improving system accuracy. However, errors caused by misalignment angles and scale factor errors cannot be eliminated through dual-axis rotation modulation, and discrete calibration methods cannot meet the requirements for high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error over one modulation period and presents a new systematic self-calibration method for a dual-axis rotation-modulating RLG-INS, together with a procedure for carrying out the self-calibration. The results of self-calibration simulation experiments show that this scheme can estimate all the errors in the calibration error model: the calibration error of the inertial sensor scale factors is less than 1 ppm and the misalignment error is less than 5″. These results validate the systematic self-calibration method and demonstrate its importance for improving the accuracy of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.
Calibration of AXAF Mirrors Using Synchrotron Radiation
NASA Astrophysics Data System (ADS)
Graessle, D. E.; Fitch, J.; Harris, B.; Hsieh, P.; Nguyen, D.; Hughes, J.; Schwartz, D.; Blake, R.
1995-12-01
Over the past five years, the SAO AXAF Mission Support Team has been developing methods and systems to provide a tunable, narrow-energy-bandwidth calibration of the reflecting efficiency of the AXAF High Resolution Mirror Assembly. A group of synchrotron beamlines at the National Synchrotron Light Source was selected for this calibration. Measurements and analysis are now available for the 2-12 keV energy range. An X-ray beam with energy purity E/Delta E ~ 5000 has been used to calibrate several witness flats which were coated simultaneously with elements of the flight mirror. In the iridium-edge range (2010-3200 eV), these may be the first such measurements ever reported. Optical constants for the iridium have been derived from a fit of reflectance versus grazing angle to a Fresnel equation model over the 2-12 keV energy range. The eight AXAF HRMA elements are being coated individually; however, reflectance results are quite consistent from coating run to coating run for the first few pieces. The measurement precision is approximately 0.2%-0.4%. Residuals of the fit are nearly always within 1.0% of the data values in the angle ranges of interest to AXAF.
Microfabricated field calibration assembly for analytical instruments
Robinson, Alex L [Albuquerque, NM; Manginell, Ronald P [Albuquerque, NM; Moorman, Matthew W [Albuquerque, NM; Rodacy, Philip J [Albuquerque, NM; Simonson, Robert J [Cedar Crest, NM
2011-03-29
A microfabricated field calibration assembly for use in calibrating analytical instruments and sensor systems. The assembly comprises a circuit board comprising one or more resistively heatable microbridge elements, an interface device that enables addressable heating of the microbridge elements, and, in some embodiments, a means for positioning the circuit board within an inlet structure of an analytical instrument or sensor system.
Clusters of Monoisotopic Elements for Calibration in (TOF) Mass Spectrometry
NASA Astrophysics Data System (ADS)
Kolářová, Lenka; Prokeš, Lubomír; Kučera, Lukáš; Hampl, Aleš; Peňa-Méndez, Eladia; Vaňhara, Petr; Havel, Josef
2017-03-01
Precise calibration in TOF MS requires suitable and reliable standards, which are not always available for high masses. We evaluated inorganic clusters of the monoisotopic elements gold and phosphorus (Au_n^+/Au_n^- and P_n^+/P_n^-) as an alternative to peptides or proteins for the external and internal calibration of mass spectra in various experimental and instrumental scenarios. Monoisotopic gold or phosphorus clusters can be easily generated in situ from suitable precursors by laser desorption/ionization (LDI) or matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). Their use offers numerous advantages, including simplicity of preparation, biological inertness, and exact mass determination even at lower mass resolution. We used citrate-stabilized gold nanoparticles to generate gold calibration clusters, and red phosphorus powder to generate phosphorus clusters. Both elements can be added to samples to perform internal calibration up to mass-to-charge ratios (m/z) of 10-15,000 without significantly interfering with the analyte. We demonstrated the use of the gold and phosphorus clusters in the MS analysis of complex biological samples, including microbial standards and total extracts of mouse embryonic fibroblasts. We believe that clusters of monoisotopic elements could be used as generally applicable calibrants for complex biological samples.
NASA Astrophysics Data System (ADS)
Campbell, J. L.; Lee, M.; Jones, B. N.; Andrushenko, S. M.; Holmes, N. G.; Maxwell, J. A.; Taylor, S. M.
2009-04-01
The detection sensitivities of the Alpha Particle X-ray Spectrometer (APXS) instruments on the Mars Exploration Rovers for a wide range of elements were experimentally determined in 2002 using spectra of geochemical reference materials. A flight spare instrument was similarly calibrated, and the calibration exercise was then continued for this unit with an extended set of geochemical reference materials together with pure elements and simple chemical compounds. The flight spare instrument data are examined in detail here using a newly developed fundamental parameters approach which takes precise account of all the physics inherent in the two X-ray generation techniques involved, namely, X-ray fluorescence and particle-induced X-ray emission. The objectives are to characterize the instrument as fully as possible, to test this new approach, and to determine the accuracy of calibration for major, minor, and trace elements. For some of the lightest elements the resulting calibration exhibits a dependence upon the mineral assemblage of the geological reference material; explanations are suggested for these observations. The results will assist in designing the overall calibration approach for the APXS on the Mars Science Laboratory mission.
Masum, M A; Pickering, M R; Lambert, A J; Scarvell, J M; Smith, P N
2017-09-06
In this paper, a novel multi-slice ultrasound (US) image calibration for an intelligent skin-marker used for soft tissue artefact compensation is proposed, aligning and orienting the image slices in an exact H-shaped pattern. Multi-slice calibration is complex; however, in the proposed method a phantom-based visual alignment followed by transform-parameter estimation greatly reduces the complexity and provides sufficient accuracy. In this approach, the Hough Transform (HT) is used to further enhance the image features originating from the feature-enhancing elements integrated into the physical phantom model, thus reducing feature detection uncertainty. Image alignment and calibration are carried out slice by slice, which makes the procedure convenient to perform manually.
Uncertainty of climate change impact on groundwater reserves - Application to a chalk aquifer
NASA Astrophysics Data System (ADS)
Goderniaux, Pascal; Brouyère, Serge; Wildemeersch, Samuel; Therrien, René; Dassargues, Alain
2015-09-01
Recent studies have evaluated the impact of climate change on groundwater resources for different geographical and climatic contexts. However, most studies have either not estimated the uncertainty around projected impacts or have limited the analysis to the uncertainty related to climate models. In this study, the uncertainties around impact projections from several sources (climate models, natural variability of the weather, hydrological model calibration) are calculated and compared for the Geer catchment (465 km2) in Belgium. We use a surface-subsurface integrated model implemented with the finite element code HydroGeoSphere, coupled with climate change scenarios (2010-2085) and the UCODE_2005 inverse model, to assess the uncertainty related to the calibration of the hydrological model. This integrated model provides a more realistic representation of the water exchanges between surface and subsurface domains and further constrains the calibration through the use of both surface and subsurface observations. Sensitivity and uncertainty analyses were performed on the predictions. The linear uncertainty analysis is approximate for this nonlinear system, but it provides some measure of uncertainty for computationally demanding models. Results show that, for the Geer catchment, the most important uncertainty is related to the calibration of the hydrological model. The total uncertainty associated with the prediction of groundwater levels remains large. By the end of the century, however, the uncertainty becomes smaller than the predicted decline in groundwater levels.
Kasmarek, Mark C.; Robinson, James L.
2004-01-01
As a part of the Texas Water Development Board Ground-Water Availability Modeling program, the U.S. Geological Survey developed and tested a numerical finite-difference (MODFLOW) model to simulate ground-water flow and land-surface subsidence in the northern part of the Gulf Coast aquifer system in Texas from predevelopment (before 1891) through 2000. The model is intended to be a tool that water-resource managers can use to address future ground-water-availability issues. From land surface downward, the Chicot aquifer, the Evangeline aquifer, the Burkeville confining unit, the Jasper aquifer, and the Catahoula confining unit are the hydrogeologic units of the Gulf Coast aquifer system. Withdrawals of large quantities of ground water have resulted in potentiometric-surface (head) declines in the Chicot, Evangeline, and Jasper aquifers and land-surface subsidence (primarily in the Houston area) from depressurization and compaction of clay layers interbedded in the aquifer sediments. In a generalized conceptual model of the aquifer system, water enters the ground-water-flow system in topographically high outcrops of the hydrogeologic units in the northwestern part of the approximately 25,000-square-mile model area. Water that does not discharge to streams flows to intermediate and deep zones of the system southeastward of the outcrop areas, where it is discharged by wells and by upward leakage in topographically low areas near the coast. The uppermost parts of the aquifer system, which include outcrop areas, are under water-table conditions. As depth increases in the aquifer system and as interbedded sand and clay accumulate, water-table conditions evolve into confined conditions. The model comprises four layers, one for each of the hydrogeologic units of the aquifer system except the Catahoula confining unit, the assumed no-flow base of the system. Each layer consists of 137 rows and 245 columns of uniformly spaced grid blocks, each block representing 1 square mile. Lateral no-flow boundaries were located on the basis of outcrop extent (northwestern), major streams (southwestern, northeastern), and the downdip limit of freshwater (southeastern). The MODFLOW general-head boundary package was used to simulate recharge and discharge in the outcrops of the hydrogeologic units. Simulation of land-surface subsidence (actually, compaction of clays) and release of water from storage in the clays of the Chicot and Evangeline aquifers was accomplished using the Interbed-Storage Package designed for use with the MODFLOW model. The model was calibrated by trial-and-error adjustment of selected model input data in a series of transient simulations until the model output (potentiometric surfaces, land-surface subsidence, and selected water-budget components) reasonably reproduced field-measured (or estimated) aquifer responses. Model calibration comprised four elements. The first was qualitative comparison of simulated and measured heads in the aquifers for 1977 and 2000, and quantitative comparison through computation and areal distribution of the root-mean-square error between simulated and measured heads. The second calibration element was comparison of simulated and measured hydrographs from wells in the aquifers in a number of counties throughout the modeled area. The third calibration element was comparison of simulated water-budget components (primarily recharge and discharge) to estimates of physically reasonable ranges of actual water-budget components.
The fourth calibration element was comparison of simulated land-surface subsidence from predevelopment to 2000 to measured land-surface subsidence from 1906 through 1995.
Glass microneedles for force measurements: a finite-element analysis model
Ayittey, Peter N.; Walker, John S.; Rice, Jeremy J.; de Tombe, Pieter P.
2010-01-01
Changes in developed force (0.1–3.0 μN) observed during contraction of single myofibrils in response to rapidly changing calcium concentrations can be measured using glass microneedles. These microneedles are calibrated for stiffness and deflect in response to developed myofibril force. The precision and accuracy of kinetic measurements are highly dependent on the structural and mechanical characteristics of the microneedles, which are generally assumed to have a linear force–deflection relationship. We present a finite-element analysis (FEA) model used to simulate the effects of measurable geometry on stiffness as a function of applied force, and validate our model against measured needle properties. In addition, we developed a simple heuristic constitutive equation that best describes the stiffness of the range of microneedles used, and define the limits of the geometry parameters within which our predictions hold true. Our model also maps a relation between the geometry parameters and the natural frequencies in air, enabling optimum parametric combinations for microneedle fabrication that yield more reliable force measurements in fluids and physiological environments. We propose the use of this model to aid in the design of microneedles to improve calibration time, reproducibility, and precision for measuring myofibrillar, cellular, and supramolecular kinetic forces. PMID:19104827
Clegg, Samuel M.; Wiens, Roger C.; Anderson, Ryan; Forni, Olivier; Frydenvang, Jens; Lasue, Jeremie; Cousin, Agnes; Payre, Valerie; Boucher, Tommy; Dyar, M. Darby; McLennan, Scott M.; Morris, Richard V.; Graff, Trevor G.; Mertzman, Stanley A; Ehlmann, Bethany L.; Belgacem, Ines; Newsom, Horton E.; Clark, Ben C.; Melikechi, Noureddine; Mezzacappa, Alissa; McInroy, Rhonda E.; Martinez, Ronald; Gasda, Patrick J.; Gasnault, Olivier; Maurice, Sylvestre
2017-01-01
The ChemCam Laser-Induced Breakdown Spectroscopy (LIBS) instrument onboard the Mars Science Laboratory (MSL) rover Curiosity has obtained >300,000 spectra of rock and soil analysis targets since landing at Gale Crater in 2012, constituting perhaps the largest publicly available LIBS dataset. The compositions of the major elements, reported as oxides (SiO2, TiO2, Al2O3, FeOT, MgO, CaO, Na2O, K2O), have been re-calibrated using a laboratory LIBS instrument, Mars-like atmospheric conditions, and a much larger set of standards (408) that span a wider compositional range than previously employed. The new calibration uses a combination of partial least squares (PLS1) and Independent Component Analysis (ICA) algorithms, together with a calibration transfer matrix to minimize differences between the conditions under which the standards were analyzed in the laboratory and the conditions on Mars. While the previous model provided good results in the compositional range near the average Mars surface composition, the new model fits extreme compositions far better. Examples are given for plagioclase feldspars, where silicon was significantly over-estimated by the previous model, and for calcium-sulfate veins, where silicon compositions near zero were inaccurate. The uncertainties of the major element abundances are described as a function of abundance and are overall significantly lower than in the previous model, enabling important new geochemical interpretations of the data.
Crossing the barrier between the laboratory working model and the practicable production model
NASA Astrophysics Data System (ADS)
Curby, William A.
1992-12-01
Transforming an apparatus that has matured into a successfully working laboratory system into one that is ready, or nearly ready, for production, distribution, and general use is not always accomplished in a cost-effective or timely fashion. Several design elements must be considered interactively during the planning, construction, use, and servicing of the final production form of the system. The basic design elements are: Operating Specifications, Reliability Factors, Safety Factors, Precision Limits, Accuracy Limits, Uniformity Factors, Cost Limits, and Calibration Requirements. Secondary elements, including Human Engineering, Documentation, Training, Maintenance, Proprietary Rights Protection, Marketing, Replacement of Parts, and Packing and Shipping, must also be considered during the transition.
NASA Astrophysics Data System (ADS)
Anderson, R. B.; Clegg, S. M.; Frydenvang, J.
2015-12-01
One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
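A minimal sketch of the submodel routing and blending described above, assuming sklearn-style PLS models and the SiO2 ranges quoted in the abstract; X and y are hypothetical training spectra and compositions:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# X: (n_spectra, n_channels) LIBS spectra; y: SiO2 wt.% of the standards.
def fit_range(X, y, lo, hi, n_components=5):
    """Train a PLS submodel on standards whose composition lies in [lo, hi]."""
    m = (y >= lo) & (y <= hi)
    return PLSRegression(n_components=n_components).fit(X[m], y[m])

def submodel_predict(X, full, low, mid, high):
    """Route each sample to a submodel based on the full-model prediction,
    blending linearly inside the overlap regions (ranges follow the
    abstract: low 0-50, mid 30-70, high 60-100 wt.%)."""
    y_full = full.predict(X).ravel()
    out = np.empty_like(y_full)
    for i, yf in enumerate(y_full):
        xi = X[i:i + 1]
        if yf < 30:
            out[i] = low.predict(xi)[0, 0]
        elif yf < 50:                      # low/mid overlap: blend
            w = (yf - 30) / 20
            out[i] = (1 - w) * low.predict(xi)[0, 0] + w * mid.predict(xi)[0, 0]
        elif yf < 60:
            out[i] = mid.predict(xi)[0, 0]
        elif yf < 70:                      # mid/high overlap: blend
            w = (yf - 60) / 10
            out[i] = (1 - w) * mid.predict(xi)[0, 0] + w * high.predict(xi)[0, 0]
        else:
            out[i] = high.predict(xi)[0, 0]
    return out

# Usage (X, y hypothetical):
# full = fit_range(X, y, 0, 100); low = fit_range(X, y, 0, 50)
# mid = fit_range(X, y, 30, 70); high = fit_range(X, y, 60, 100)
# y_hat = submodel_predict(X_unknown, full, low, mid, high)
```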
Fatigue assessment of an existing steel bridge by finite element modelling and field measurements
NASA Astrophysics Data System (ADS)
Kwad, J.; Alencar, G.; Correia, J.; Jesus, A.; Calçada, R.; Kripakaran, P.
2017-05-01
The evaluation of the fatigue life of structural details in metallic bridges is a major challenge for bridge engineers. A reliable and cost-effective approach is essential to ensure appropriate maintenance and management of these structures. Typically, local stresses predicted by a finite element model of the bridge are employed to assess the fatigue life of fatigue-prone details. This paper illustrates an approach for fatigue assessment based on measured data for a connection in an old bascule steel bridge located in Exeter (UK). A finite element model is first developed from the design information. The finite element model of the bridge is calibrated using measured responses from an ambient vibration test. The stress time histories are calculated through dynamic analysis of the updated finite element model. Stress cycles are computed with the rainflow counting algorithm, and the fatigue-prone details are evaluated using the standard S-N curve approach and Miner's rule. Results show that the proposed approach can estimate the fatigue damage of a fatigue-prone detail in a structure using measured strain data.
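The damage-accumulation step can be sketched compactly. The sketch below assumes the cycle ranges and counts have already been extracted by a rainflow count of the measured stress history, and uses a single-slope S-N curve with an illustrative detail category, not the bridge's actual detail:

```python
import numpy as np

def miner_damage(stress_ranges, counts, delta_sigma_c=71.0, m=3.0, n_c=2.0e6):
    """Palmgren-Miner damage sum for a detail of category delta_sigma_c
    (MPa at n_c cycles), with a single-slope S-N curve
    N = n_c * (delta_sigma_c / delta_sigma)^m. Category is illustrative."""
    stress_ranges = np.asarray(stress_ranges, dtype=float)
    counts = np.asarray(counts, dtype=float)
    N_allow = n_c * (delta_sigma_c / stress_ranges) ** m
    return float(np.sum(counts / N_allow))

# Cycle ranges (MPa) and counts as produced by a rainflow count of the
# measured stress history (illustrative values only):
D = miner_damage([40.0, 55.0, 80.0], [1.2e5, 3.0e4, 5.0e3])
print(f"Miner damage per monitoring period: {D:.3f}")
```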
Statistical analysis of target acquisition sensor modeling experiments
NASA Astrophysics Data System (ADS)
Deaver, Dawne M.; Moyer, Steve
2015-05-01
The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize the uncertainty of the physics-based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners which recommend the number and types of test samples required to yield a statistically significant result.
NASA Astrophysics Data System (ADS)
Seiller, G.; Roy, R.; Anctil, F.
2017-04-01
Uncertainties associated with evaluating the impacts of climate change on water resources are broad, come from multiple sources, and lead to diagnoses that are sometimes difficult to interpret. Quantification of these uncertainties is a key element in building confidence in the analyses and providing water managers with valuable information. This work specifically evaluates the influence of hydrological model calibration metrics on future water resources projections, on thirty-seven watersheds in the Province of Québec, Canada. Twelve lumped hydrologic models, representing a wide range of operational options, are calibrated with three common objective functions derived from the Nash-Sutcliffe efficiency. The hydrologic models are forced with climate simulations corresponding to two RCPs, twenty-nine GCMs from CMIP5 (Coupled Model Intercomparison Project phase 5), and two post-processing techniques, leading to future projections for the 2041-2070 period. Results show that the diagnosis of the impacts of climate change on water resources is quite affected by the selection of hydrologic models and calibration metrics. Indeed, for the four selected hydrological indicators dedicated to water management, parameters obtained with the three objective functions can yield different interpretations in terms of absolute and relative changes, as well as the direction of projected changes and the climatic ensemble consensus. The GR4J model and a multimodel approach offer the best modeling options, based on calibration performance and robustness. Overall, these results illustrate the need to provide water managers with detailed information on relative-change analyses, but also absolute change values, especially for hydrological indicators acting as security policy thresholds.
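The abstract does not list the three objective functions. The sketch below shows the plain Nash-Sutcliffe efficiency plus two transformed variants (square-root and log) that are typical choices for emphasizing mid and low flows (an assumption, not necessarily the paper's exact set):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations;
    1 is perfect, 0 means no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def nse_sqrt(obs, sim):
    """NSE on square-root-transformed flows (emphasizes mid flows)."""
    return nse(np.sqrt(obs), np.sqrt(sim))

def nse_log(obs, sim):
    """NSE on log-transformed flows (emphasizes low flows);
    assumes strictly positive flows."""
    return nse(np.log(obs), np.log(sim))

print(nse([3.0, 5.0, 2.0, 8.0], [2.8, 5.4, 2.1, 7.5]))
```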
Rosenberger, Matthew R; Chen, Sihan; Prater, Craig B; King, William P
2017-01-27
This paper reports the design, fabrication, and characterization of micromechanical devices that can present an engineered contact stiffness to an atomic force microscope (AFM) cantilever tip. These devices allow the contact stiffness between the AFM tip and a substrate to be easily and accurately measured, and can be used to calibrate the cantilever for subsequent mechanical property measurements. The contact stiffness devices are rigid copper disks of diameters 2-18 μm integrated onto a soft silicone substrate. Analytical modeling and finite element simulations predict the elastic response of the devices. Measurements of tip-sample interactions during quasi-static force measurements compare well with modeling simulation, confirming the expected elastic response of the devices, which are shown to have contact stiffnesses of 32-156 N m^-1. To demonstrate one application, we use the disk sample to calibrate three resonant modes of a U-shaped AFM cantilever actuated via Lorentz force, at approximately 220, 450, and 1200 kHz. We then use the calibrated cantilever to determine the contact stiffness and elastic modulus of three polymer samples at these modes. The overall approach allows cantilever calibration without prior knowledge of the cantilever geometry or its resonance modes, and could be broadly applied to both static and dynamic measurements that require AFM calibration against a known contact stiffness.
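A rough hand model for why disks in this size range land in the tens-of-N/m regime: treat each disk as a rigid flat circular punch on an elastic half-space. The silicone modulus below is a guessed order of magnitude, not a value from the paper, whose own analysis relies on FE simulation:

```python
def flat_punch_stiffness(radius, E, nu=0.5):
    """Contact stiffness of a rigid flat circular punch of given radius on
    an elastic half-space: k = 2*a*E/(1 - nu^2) (Sneddon). Units: m, Pa."""
    return 2.0 * radius * E / (1.0 - nu ** 2)

# Disk diameters 2-18 um on silicone; E is a guessed order of magnitude.
for d_um in (2, 10, 18):
    k = flat_punch_stiffness(radius=d_um * 1e-6 / 2, E=5e6, nu=0.5)
    print(f"disk {d_um:2d} um: k ~ {k:5.1f} N/m")
```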
Eggermont, Florieke; Derikx, Loes C; Free, Jeffrey; van Leeuwen, Ruud; van der Linden, Yvette M; Verdonschot, Nico; Tanck, Esther
2018-03-06
In a multi-center patient study, using different CT scanners, CT-based finite element (FE) models are utilized to calculate failure loads of femora with metastases. Previous studies showed that using different CT scanners can result in different outcomes. This study aims to quantify the effects of (i) different CT scanners; (ii) different CT protocols with variations in slice thickness, field of view (FOV), and reconstruction kernel; and (iii) air between calibration phantom and patient, on Hounsfield Units (HU), bone mineral density (BMD), and FE failure load. Six cadaveric femora were scanned on four CT scanners. Scans were made with multiple CT protocols and with or without an air gap between the body model and calibration phantom. HU and calibrated BMD were determined in cortical and trabecular regions of interest. Non-linear isotropic FE models were constructed to calculate failure load. Mean differences between CT scanners varied up to 7% in cortical HU, 6% in trabecular HU, 6% in cortical BMD, 12% in trabecular BMD, and 17% in failure load. Changes in slice thickness and FOV had little effect (≤4%), while reconstruction kernels had a larger effect on HU (16%), BMD (17%), and failure load (9%). Air between the body model and calibration phantom slightly decreased the HU, BMD, and failure loads (≤8%). In conclusion, this study showed that quantitative analysis of CT images acquired with different CT scanners, and particularly reconstruction kernels, can induce relatively large differences in HU, BMD, and failure loads. Additionally, if possible, air artifacts should be avoided.
GTE blade injection moulding modeling and verification of models during process approbation
NASA Astrophysics Data System (ADS)
Stepanenko, I. S.; Khaimovich, A. I.
2017-02-01
A simulation model for filling the mould was developed using Moldex3D and experimentally verified so that further optimization calculations of the moulding process conditions could be performed. The method described in the article allows the finite-element model to be adjusted by minimizing the difference between the simulated and experimental melt-front profiles over the airfoil, through differentiated changes of the power supplied to the heating elements that heat the injection mould in the simulation. As a result of calibrating the model of the injection mould for the gas-turbine engine blade, a mean difference of no more than 4% between the simulated melt-front profile and the experimental airfoil profile was achieved.
NASA Astrophysics Data System (ADS)
Qian, Guian; Lei, Wei-Sheng; Niffenegger, M.; González-Albuixech, V. F.
2018-04-01
This work addresses the effect of temperature on the model parameters in local approaches (LAs) to cleavage fracture. According to a recently developed LA model, the physical consensus that plastic deformation is a prerequisite for cleavage fracture requires any LA model of cleavage fracture to treat initial yielding of a volume element as the threshold stress state for incurring cleavage fracture, in addition to the conventional practice of confining the fracture process zone within the plastic deformation zone. The physical consistency of the new LA model with the basic LA methodology, and the differences between the new LA model and other existing models, are interpreted. This new LA model is then adopted to investigate the temperature dependence of LA model parameters using circumferentially notched round tensile specimens. With published strength data as input, finite element (FE) calculations are conducted for elastic-perfectly plastic deformation and for realistic elastic-plastic hardening, respectively, to provide stress distributions for model calibration. The calibration results in temperature-independent model parameters. This leads to the establishment of a characteristic 'master curve' that synchronises the correlation between the nominal strength and the corresponding cleavage fracture probability at different temperatures. This 'master curve' behaviour is verified with strength data from three different steels, providing a new path to calculate cleavage fracture probability with significantly reduced FE effort.
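For orientation, the generic LA machinery can be sketched in a few lines. This is the classical Beremin-style Weibull-stress formulation with the yielding threshold the abstract describes, not the paper's exact model; all parameter values are illustrative:

```python
import numpy as np

def weibull_stress(sigma_I, dV, sigma_ys, m=20.0, V0=1e-12):
    """Weibull stress over the fracture process zone. Only elements that
    have yielded (here: max principal stress at or above the yield-tied
    threshold) contribute, per the new LA model. Units: Pa, m^3."""
    sigma_I = np.asarray(sigma_I, float)
    dV = np.asarray(dV, float)
    active = sigma_I >= sigma_ys       # initial yielding as threshold state
    return (np.sum(sigma_I[active] ** m * dV[active]) / V0) ** (1.0 / m)

def cleavage_probability(sigma_w, sigma_u, m=20.0):
    """Two-parameter Weibull cumulative failure probability."""
    return 1.0 - np.exp(-((sigma_w / sigma_u) ** m))

# Element-wise maximum principal stresses and volumes from an FE solution
# (illustrative values only):
sw = weibull_stress(sigma_I=[900e6, 1200e6, 1500e6],
                    dV=[2e-12, 1e-12, 0.5e-12], sigma_ys=1000e6)
print(cleavage_probability(sw, sigma_u=2000e6))
```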
NASA Technical Reports Server (NTRS)
Hooker, Stanford B.; McClain, Charles R.; Mannino, Antonio
2007-01-01
The primary objective of this planning document is to establish a long-term capability for calibrating and validating oceanic biogeochemical satellite data. It is a pragmatic solution to a practical problem based primarily on the lessons learned from prior satellite missions. All of the plan's elements are seen to be interdependent, so a horizontal organizational scheme is anticipated wherein the overall leadership comes from the NASA Ocean Biology and Biogeochemistry (OBB) Program Manager and the entire enterprise is split into two components of equal stature: calibration and validation plus satellite data processing. The detailed elements of the activity are based on the basic tasks of the two main components plus the current objectives of the Carbon Cycle and Ecosystems Roadmap. The former is distinguished by an internal core set of responsibilities and the latter is facilitated through an external connecting-core ring of competed or contracted activities. The core elements for the calibration and validation component include a) publish protocols and performance metrics; b) verify uncertainty budgets; c) manage the development and evaluation of instrumentation; and d) coordinate international partnerships. The core elements for the satellite data processing component are e) process and reprocess multisensor data; f) acquire, distribute, and archive data products; and g) implement new data products. Both components have shared responsibilities for initializing and temporally monitoring satellite calibration. Connecting-core elements include (but are not restricted to) atmospheric correction and characterization, standards and traceability, instrument and analysis round robins, field campaigns and vicarious calibration sites, in situ databases, bio-optical algorithm (and product) validation, satellite characterization and vicarious calibration, and image processing software. The plan also includes an accountability process, the creation of a Calibration and Validation Team (to help manage the activity), and a discussion of issues associated with the plan's scientific focus.
Development of Design Analysis Methods for C/SiC Composite Structures
NASA Technical Reports Server (NTRS)
Sullivan, Roy M.; Mital, Subodh K.; Murthy, Pappu L. N.; Palko, Joseph L.; Cueno, Jacques C.; Koenig, John R.
2006-01-01
The stress-strain behavior at room temperature and at 1100 °C (2000 °F) was measured for two carbon-fiber-reinforced silicon carbide (C/SiC) composite materials: a two-dimensional plain-weave quasi-isotropic laminate and a three-dimensional angle-interlock woven composite. Micromechanics-based material models were developed for predicting the response properties of these two materials. The micromechanics-based material models were calibrated by correlating the predicted material property values with the measured values. Four-point beam bending sub-element specimens were fabricated with these two fiber architectures, and four-point bending tests were performed at room temperature and at 1100 °C. Displacements and strains were measured at various locations along the beam and recorded as a function of load magnitude. The calibrated material models were used in concert with a nonlinear finite element solution to simulate the structural response of these two materials in the four-point beam bending tests. The structural response predicted by the nonlinear analysis method compares favorably with the measured response for both materials and for both test temperatures. Results show that the material models scale up fairly well from coupon to subcomponent level.
Behavior of Industrial Steel Rack Connections
NASA Astrophysics Data System (ADS)
Shah, S. N. R.; Ramli Sulong, N. H.; Khan, R.; Jumaat, M. Z.; Shariati, M.
2016-03-01
Beam-to-column connections (BCCs) used in steel pallet racks (SPRs) play a significant role in maintaining the stability of rack structures in the down-aisle direction. The variety in the geometry of commercially available beam end connectors hampers the development of a generalized analytic design approach for SPR BCCs. Experimental prediction of the flexibility of SPR BCCs is prohibitively expensive and difficult for all types of commercially available beam end connectors. A suitable way to derive a uniform M-θ relationship for each connection type in terms of geometric parameters is through finite element (FE) modeling. This study first presents a comprehensive description of the experimental investigations that were performed and used as the calibration basis for the numerical study that constitutes its main contribution. A three-dimensional (3D) nonlinear FE model was developed and calibrated against the experimental results. The FE model took into account material nonlinearities, geometrical properties, and large displacements. Comparisons between numerical and experimental data for the observed failure modes and the M-θ relationship showed close agreement. The validated FE model was further extended to a parametric analysis to identify the effects of various parameters which may affect the overall performance of the connection.
NASA Astrophysics Data System (ADS)
Pupillo, G.; Naldi, G.; Bianchi, G.; Mattana, A.; Monari, J.; Perini, F.; Poloni, M.; Schiaffino, M.; Bolli, P.; Lingua, A.; Aicardi, I.; Bendea, H.; Maschio, P.; Piras, M.; Virone, G.; Paonessa, F.; Farooqui, Z.; Tibaldi, A.; Addamo, G.; Peverini, O. A.; Tascone, R.; Wijnholds, S. J.
2015-06-01
One of the most challenging aspects of the new-generation Low-Frequency Aperture Array (LFAA) radio telescopes is instrument calibration. The operational LOw-Frequency ARray (LOFAR) instrument and the future LFAA element of the Square Kilometre Array (SKA) require advanced calibration techniques to reach the expected outstanding performance. In this framework, a small array, called Medicina Array Demonstrator (MAD), has been designed and installed in Italy to provide a test bench for antenna characterization and calibration techniques based on a flying artificial test source. A radio-frequency tone is transmitted through a dipole antenna mounted on a micro Unmanned Aerial Vehicle (UAV) (hexacopter) and received by each element of the array. A modern digital FPGA-based back-end is responsible for both data-acquisition and data-reduction. A simple amplitude and phase equalization algorithm is exploited for array calibration owing to the high stability and accuracy of the developed artificial test source. Both the measured embedded element patterns and calibrated array patterns are found to be in good agreement with the simulated data. The successful measurement campaign has demonstrated that a UAV-mounted test source provides a means to accurately validate and calibrate the full-polarized response of an antenna/array in operating conditions, including consequently effects like mutual coupling between the array elements and contribution of the environment to the antenna patterns. A similar system can therefore find a future application in the SKA-LFAA context.
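The amplitude and phase equalization step reduces to estimating one complex gain per element against the known transmitted tone. A minimal sketch, assuming geometry and pattern effects have already been removed from the data (which the real pipeline must handle):

```python
import numpy as np

def equalize(received, reference):
    """Least-squares per-element complex gain against the known test tone,
    plus the weights that equalize the array. 'received' is an
    (n_elements, n_samples) complex array; 'reference' is the transmitted
    tone at the same samples."""
    g = received @ np.conj(reference) / np.vdot(reference, reference)
    return g, 1.0 / g                      # gains, calibration weights

# Synthetic check with three elements and known gain/phase offsets:
rng = np.random.default_rng(0)
n = 1024
s = np.exp(2j * np.pi * 0.1 * np.arange(n))
true_g = np.array([1.0, 0.8 * np.exp(1j * 0.4), 1.2 * np.exp(-1j * 0.9)])
noise = 0.01 * (rng.standard_normal((3, n)) + 1j * rng.standard_normal((3, n)))
x = true_g[:, None] * s + noise
gains, weights = equalize(x, s)
print(np.abs(gains), np.angle(gains))      # should recover true_g
```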
Java-Library for the Access, Storage and Editing of Calibration Metadata of Optical Sensors
NASA Astrophysics Data System (ADS)
Firlej, M.; Kresse, W.
2016-06-01
The standardization of the calibration of optical sensors in photogrammetry and remote sensing has been discussed for more than a decade. Projects of the German DGPF and the European EuroSDR led to the abstract International Technical Specification ISO/TS 19159-1:2014 "Calibration and validation of remote sensing imagery sensors and data - Part 1: Optical sensors". This article presents the first software interface providing read and write access to all metadata elements standardized in the ISO/TS 19159-1. The interface is based on an xml-schema that was automatically derived by ShapeChange from the UML model of the Specification. The software interface serves two use cases. First, the more than 300 standardized metadata elements are stored individually according to the xml-schema. Second, camera manufacturers use many administrative data that are not part of the ISO/TS 19159-1. The new software interface provides a mechanism for input, storage, editing, and output of both types of data. Finally, an output channel toward a conventional calibration protocol is provided. The interface is written in Java. The article also addresses observations made when analysing the ISO/TS 19159-1 and compiles a list of proposals for maturing the document, i.e., for an updated version of the Specification.
Putnam, Jacob B; Somers, Jeffrey T; Wells, Jessica A; Perry, Chris E; Untaroiu, Costin D
2015-09-01
New vehicles are currently being developed to transport humans to space. During the landing phases, crewmembers may be exposed to spinal and frontal loading. To reduce the risk of injuries during these common impact scenarios, the National Aeronautics and Space Administration (NASA) is developing new safety standards for spaceflight. The Test Device for Human Occupant Restraint (THOR) advanced multi-directional anthropomorphic test device (ATD), with the National Highway Traffic Safety Administration modification kit, has been chosen to evaluate occupant spacecraft safety because of its improved biofidelity. NASA tested the THOR ATD at Wright-Patterson Air Force Base (WPAFB) in various impact configurations, including frontal and spinal loading. A computational finite element model (FEM) of the THOR matching these latest modifications was developed in LS-DYNA software. The main goal of this study was to calibrate and validate the THOR FEM for use in future spacecraft safety studies. An optimization-based method was developed to calibrate the material models of the lumbar joints and pelvic flesh. Compression test data were used to calibrate the quasi-static material properties of the pelvic flesh, while whole-body THOR ATD kinematic and kinetic responses under spinal and frontal loading conditions were used for dynamic calibration. The performance of the calibrated THOR FEM was evaluated by simulating separate THOR ATD tests with different crash pulses along both spinal and frontal directions. The model response was compared with test data by calculating its correlation score using the CORrelation and Analysis (CORA) rating system. The biofidelity of the THOR FEM was then evaluated against tests recorded on human volunteers under three different frontal and spinal impact pulses. The calibrated THOR FEM responded with high similarity to the THOR ATD in all validation tests. The THOR FEM showed good biofidelity relative to human-volunteer data under spinal loading, but limited biofidelity under frontal loading. This may suggest a need for further improvements in both the THOR ATD and the FEM. Overall, the results presented in this study provide confidence in the THOR FEM for use in predicting THOR ATD responses under conditions such as those observed in spacecraft landing, and for use in evaluating THOR ATD biofidelity.
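The optimization-based calibration loop can be sketched as follows. The FE run is replaced here by a toy surrogate response and the CORA rating by a normalized RMS error, so everything below is a stand-in for the study's actual LS-DYNA model and scoring:

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 0.1, 200)            # time base of the response curves

def run_model(lumbar_stiffness, flesh_modulus):
    """Toy surrogate standing in for an LS-DYNA run of the THOR FEM: a
    damped-oscillation curve whose shape depends on the two calibration
    parameters. The real study runs the FE model at this step."""
    w = 2 * np.pi * (5.0 + 1e-3 * lumbar_stiffness)
    damping = 1e-7 * flesh_modulus
    return np.exp(-damping * w * t) * np.sin(w * t)

test_curve = run_model(2000.0, 3.0e6)     # pretend measured ATD response

def objective(params):
    """Normalized RMS error between simulation and test (the study scores
    agreement with the CORA rating system instead)."""
    sim = run_model(*params)
    return np.sqrt(np.mean((sim - test_curve) ** 2)) / (np.ptp(test_curve) + 1e-12)

res = minimize(objective, x0=[1500.0, 2.0e6], method="Nelder-Mead")
print(res.x)   # recovered lumbar stiffness and flesh modulus
```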
Prediction of Fracture Behavior in Rock and Rock-like Materials Using Discrete Element Models
NASA Astrophysics Data System (ADS)
Katsaga, T.; Young, P.
2009-05-01
The study of fracture initiation and propagation in heterogeneous materials such as rock and rock-like materials is of principal interest in the field of rock mechanics and rock engineering. It is crucial for investigating failure prediction and safety measures in civil and mining structures. Our work offers a practical approach to predicting fracture behaviour using discrete element models. In this approach, the microstructures of materials are represented through combinations of clusters of bonded particles with different inter-cluster particle and bond properties, and intra-cluster bond properties. The geometry of the clusters is transferred from information available in thin sections, computed tomography (CT) images, and other visual presentations of the modeled material using a customized AutoCAD built-in dialog-based Visual Basic application. Exact microstructures of the tested sample, including fractures, faults, inclusions, and void spaces, can be duplicated in the discrete element models. Although the microstructural fabrics of rocks and rock-like structures may have different scales, fracture formation and propagation through these materials are alike and follow similar mechanics. Synthetic material provides an excellent condition for validating the modelling approach, as fracture behaviours are known along with the well-defined composite properties. Calibration of the macro-properties of the matrix material and inclusions (aggregates) was followed by calibration of the overall mechanical material response through adjustment of the interfacial properties. The discrete element model predicted fracture propagation features and paths similar to those of the real sample material. The fracture paths and matrix-inclusion interactions were compared using computed tomography images. Fracture initiation and formation in the model and the real material were compared using acoustic emission (AE) data. Analysing the temporal and spatial evolution of AE events collected during sample testing, in relation to the CT images, allows precise reconstruction of the failure sequence. Our proposed modelling approach yields realistic predictions of fracture formation and growth under different loading conditions.
Ito, Masatomo; Suzuki, Tatsuya; Yada, Shuichi; Kusai, Akira; Nakagami, Hiroaki; Yonemochi, Etsuo; Terada, Katsuhide
2008-08-05
Using near-infrared (NIR) spectroscopy, an assay method was developed that is not affected by such elements of tablet design as thickness, shape, embossing, and scored lines. Tablets containing caffeine anhydrate were prepared by direct compression at various compression force levels using different shaped punches. NIR spectra were obtained from these intact tablets using the reflectance and transmittance techniques. A reference assay was performed by high-performance liquid chromatography (HPLC). Calibration models were generated by partial least-squares (PLS) regression. Changes in tablet thickness, shape, embossing, and scored lines caused NIR spectral changes in different ways, depending on the technique used. As a result, noticeable errors in drug content prediction occurred when using calibration models generated according to the conventional method. On the other hand, when the various tablet design elements which caused the NIR spectral changes were included in the model, the prediction of the drug content in the tablets was scarcely affected by those elements for either technique. A comparison of the two techniques showed higher predictive performance under tablet design variations for the transmittance technique, with better linearity and accuracy. This is probably attributable to the transmittance spectra, which sensitively reflect differences in tablet thickness or shape because they capture information from inside the tablets.
Waveguide Calibrator for Multi-Element Probe Calibration
NASA Technical Reports Server (NTRS)
Sommerfeldt, Scott D.; Blotter, Jonathan D.
2007-01-01
A calibrator, referred to as the spider design, can be used to calibrate probes incorporating multiple acoustic sensing elements. The application is an acoustic energy density probe, although the calibrator can be used for other types of acoustic probes. The calibrator relies on acoustic waveguide technology to produce the same acoustic field at each of the sensing elements. As a result, the sensing elements can be separated from each other but still calibrated through use of the acoustic waveguides. Standard calibration techniques involve placing an individual microphone in a small cavity with a known, uniform pressure. If a cavity is manufactured large enough to insert the energy density probe, it has been found that a uniform pressure field can only be created at very low frequencies: because of the size of the probe, wave effects prevent the pressure from being the same at each microphone in the cavity. The spider design is effective in calibrating multiple microphones separated from each other, because it ensures that the same wave effects exist for each microphone, each with an individual sound path. The calibrator's speaker is mounted at one end of a small plane-wave tube, 14 cm long and 4.1 cm in diameter. This length was chosen so that the first evanescent cross mode of the plane-wave tube would be attenuated by about 90 dB, leaving just the plane wave at the termination plane of the tube. The tube terminates in a small acrylic plate with five holes placed symmetrically about the axis of the speaker: four ports for the four microphones on the probe, and a fifth port for the pre-calibrated reference microphone. The ports in the acrylic plate are in turn connected to the probe sensing elements via flexible PVC tubes. These five tubes are the same length, so the acoustic wave effects are the same in each tube. The flexible nature of the tubes allows them to be positioned so that each tube terminates at one of the microphones of the energy density probe, which is mounted in the acrylic structure, or at the calibrated reference microphone. Tests verified that the pressure does not vary due to bends in the tubes: the average sound pressure level in the tubes varied by only 0.03 dB as the tubes were bent to various angles. The current calibrator design is effective up to a frequency of approximately 4.5 kHz; this upper design frequency is largely set by the diameter of the plane-wave tubes.
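A quick check of the quoted tube dimensions, assuming a circular duct and room-temperature air: the first cross mode of a 4.1-cm tube cuts on near 4.9 kHz, consistent with the stated 4.5-kHz usable limit, and below cutoff that mode decays by roughly 90-100 dB over 14 cm:

```python
import numpy as np

c = 343.0            # speed of sound in air, m/s
d = 0.041            # tube diameter, m
L = 0.14             # tube length, m

# First non-plane mode of a circular duct cuts on at
# f_c = 1.8412 * c / (pi * d); below f_c it is evanescent.
f_c = 1.8412 * c / (np.pi * d)
print(f"cutoff of first cross mode: {f_c:.0f} Hz")       # ~4.9 kHz

# Spatial decay of the evanescent mode at a working frequency f:
f = 2000.0
k_evan = (2 * np.pi / c) * np.sqrt(f_c**2 - f**2)        # Np/m
print(f"attenuation at {f:.0f} Hz over the tube: "
      f"{8.686 * k_evan * L:.0f} dB")                    # ~100 dB
```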
Prediction of Ba, Mn and Zn for tropical soils using iron oxides and magnetic susceptibility
NASA Astrophysics Data System (ADS)
Marques Júnior, José; Arantes Camargo, Livia; Reynaldo Ferracciú Alleoni, Luís; Tadeu Pereira, Gener; De Bortoli Teixeira, Daniel; Santos Rabelo de Souza Bahia, Angelica
2017-04-01
Agricultural activity is an important source of potentially toxic elements (PTEs) in soil worldwide, particularly in heavily farmed areas. Characterizing the spatial distribution of PTE contents in farming areas is crucial to assess further environmental impacts caused by soil contamination. Prediction models are quite useful for characterizing the spatial variability of continuous variables, as they allow estimation of soil attributes that might be difficult to obtain for a large number of samples through conventional methods. This study aimed to evaluate, on three geomorphic surfaces of Oxisols, the capacity for predicting PTEs (Ba, Mn, Zn) and their spatial variability using iron oxides and magnetic susceptibility (MS). Soil samples were collected from the three geomorphic surfaces and analyzed for chemical, physical, and mineralogical properties, as well as MS. PTE prediction models were calibrated by multiple linear regression (MLR). MLR calibration accuracy was evaluated using the coefficient of determination (R2). PTE spatial distribution maps were built by means of geostatistics, using the values calculated by the calibrated models that reached the best accuracy. The high correlations of the attributes clay, MS, hematite (Hm), iron oxides extracted by sodium dithionite-citrate-bicarbonate (Fed), and iron oxides extracted using acid ammonium oxalate (Feo) with the elements Ba, Mn, and Zn enabled them to be selected as predictors for the PTEs. Stepwise multiple linear regression showed that MS and Fed were the best PTE predictors individually, as considering two or more attributes together produced no significant increase in R2. The MS-calibrated models for Ba, Mn, and Zn prediction exhibited R2 values of 0.88, 0.66, and 0.55, respectively. These are promising results, since MS is a fast, cheap, and non-destructive tool, allowing the prediction of a large number of samples, which in turn enables detailed mapping of large areas. MS-predicted values enabled the characterization and understanding of the spatial variability of the studied PTEs.
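A minimal sketch of the MLR calibration step, with hypothetical MS and Fed values standing in for the measured soil data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical calibration set: magnetic susceptibility (MS) and
# dithionite-extractable Fe (Fed) as predictors of Ba content.
X = np.array([[12.1, 45.0], [8.3, 30.2], [15.7, 52.8],
              [6.9, 25.1], [11.0, 41.5]])            # MS, Fed
y = np.array([310.0, 220.0, 390.0, 180.0, 295.0])    # Ba, mg/kg

mlr = LinearRegression().fit(X, y)
print("R2 =", r2_score(y, mlr.predict(X)))

# Once calibrated, the cheap, non-destructive MS measurement can be used
# to predict Ba over a dense sampling grid for geostatistical mapping:
print("predicted Ba:", mlr.predict([[10.0, 38.0]]))
```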
A Hybrid MPI/OpenMP Approach for Parallel Groundwater Model Calibration on Multicore Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan
2010-01-01
Groundwater model calibration is becoming increasingly computationally time intensive. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multicore computers with minimal parallelization effort. First, HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for a uranium transport model with over a hundred species involving nearly a hundred reactions, and for a field-scale coupled flow and transport model. In the first application, a single parallelizable loop is identified that consumes over 97% of the total computational time. With a few lines of OpenMP compiler directives inserted into the code, the computational time is reduced about ten times on a compute node with 16 cores. The performance is further improved by selectively parallelizing a few more loops. For the field-scale application, parallelizable loops in 15 of the 174 subroutines in HGC5 are identified to take more than 99% of the execution time. By adding the preconditioned conjugate gradient solver and BiCGSTAB, and using a coloring scheme to separate the elements, nodes, and boundary sides, the subroutines for finite element assembly, soil property update, and boundary condition application are parallelized, resulting in a speedup of about 10 on a 16-core compute node. The Levenberg-Marquardt (LM) algorithm is added into HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, a number of compute nodes equal to the number of adjustable parameters (when the forward difference is used for Jacobian approximation), or twice that number (if the center difference is used), is used to reduce the calibration time from days and weeks to a few hours for the two applications. This approach can be extended to global optimization schemes and Monte Carlo analyses, where thousands of compute nodes can be efficiently utilized.
A Generic Modeling Approach to Biomass Dynamics of Sagittaria latifolia and Spartina alterniflora
2011-01-01
...ammonium nitrate pulse of the growth and elemental composition of natural stands of Spartina alterniflora and Juncus roemerianus. American Journal of... calibration values become available. This modelling approach was applied to submersed aquatic vegetation (SAV) also (Best and Boyd 2008). The approach is... the models. The DVS is dimensionless and its value increases gradually within a growing season. The development rate (DVR) has the dimension d^-1.
Polarized-pixel performance model for DoFP polarimeter
NASA Astrophysics Data System (ADS)
Feng, Bin; Shi, Zelin; Liu, Haizheng; Liu, Li; Zhao, Yaohong; Zhang, Junchao
2018-06-01
A division-of-focal-plane (DoFP) polarimeter is manufactured by placing a micropolarizer array directly onto the focal plane array (FPA) of a detector. Each element of the DoFP polarimeter is a polarized pixel. This paper proposes a performance model for a polarized pixel. The proposed model characterizes the optical and electronic performance of a polarized pixel by three parameters: the major polarization responsivity, the minor polarization responsivity, and the polarization orientation. Each parameter corresponds to an intuitive physical feature of a polarized pixel. The paper further extends this model to calibrate polarization images from a DoFP polarimeter. The calibration is evaluated quantitatively with a developed DoFP polarimeter under varying illumination intensity and angle of linear polarization. The experiment shows that our model reduces the nonuniformity of DoLP (degree of linear polarization) images to 6.79% of the uncalibrated level and significantly improves the visual quality of DoLP images.
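A sketch of the three-parameter pixel model and its calibration from a polarizer sweep; the Malus-like functional form is an assumption consistent with the abstract's three quantities, not necessarily the paper's exact parameterization:

```python
import numpy as np

def pixel_response(aolp, r_major, r_minor, phi):
    """Response of one polarized pixel to unit-intensity, fully linearly
    polarized light with angle of linear polarization 'aolp' (radians)."""
    return (r_major * np.cos(aolp - phi) ** 2
            + r_minor * np.sin(aolp - phi) ** 2)

# Simulate a calibration sweep of the polarizer angle for one pixel:
rng = np.random.default_rng(1)
angles = np.deg2rad(np.arange(0.0, 180.0, 10.0))
meas = pixel_response(angles, 0.9, 0.05, np.deg2rad(47.0)) \
       + 0.005 * rng.standard_normal(angles.size)

# The model equals A + B*cos(2a) + C*sin(2a), linear in (A, B, C), so
# ordinary least squares recovers the three parameters in closed form:
M = np.column_stack([np.ones_like(angles),
                     np.cos(2 * angles), np.sin(2 * angles)])
A, B, C = np.linalg.lstsq(M, meas, rcond=None)[0]
amp = np.hypot(B, C)
print("r_major ~", A + amp, "r_minor ~", A - amp,
      "phi ~", np.rad2deg(0.5 * np.arctan2(C, B)) % 180.0, "deg")
```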
Jaquez, Javier; Farrell, Mike; Huang, Haibo; ...
2016-08-01
In 2014/2015 at the Omega laser facility, several experiments took place to calibrate the National Ignition Facility (NIF) X-ray spectrometer (NXS), which is used for high-resolution time-resolved spectroscopic experiments at NIF. The spectrometer allows experimentalists to measure the X-ray energy emitted from high-energy targets, which is used to understand key data such as mixing of materials in highly compressed fuel. The purpose of the experiments at Omega was to obtain information on the instrument performance and to deliver an absolute photometric calibration of the NXS before it was deployed at NIF. The X-ray emission sources fabricated for instrument calibration were 1-mm fused silica spheres with precisely known alloy composition coatings of Si/Ag/Mo, Ti/Cr/Ag, Cr/Ni/Zn, and Zn/Zr, which have emission in the 2- to 18-keV range. Critical to the spectrometer calibration is a known atomic composition of elements with low uncertainty for each calibration sphere. This study discusses the setup, fabrication, and precision metrology of these spheres as well as some interesting findings on the ternary magnetron-sputtered alloy structure.
NASA Technical Reports Server (NTRS)
Rauch, T.; Rudkowski, A.; Kampka, D.; Werner, K.; Kruk, J. W.; Moehler, S.
2014-01-01
Context. In the framework of the Virtual Observatory (VO), the German Astrophysical VO (GAVO) developed the registered service TheoSSA (Theoretical Stellar Spectra Access). It provides easy access to stellar spectral energy distributions (SEDs) and is intended to ingest SEDs calculated by any model-atmosphere code, in general for all effective temperatures, surface gravities, and elemental compositions. We will establish a database of SEDs of flux standards that are easily accessible via TheoSSA's web interface. Aims. The OB-type subdwarf Feige 110 is a standard star for flux calibration. State-of-the-art non-local thermodynamic equilibrium stellar-atmosphere models that consider opacities of species up to the trans-iron elements will be used to provide a reliable synthetic spectrum to compare with observations. Methods. In the case of Feige 110, we demonstrate that the model reproduces not only its overall continuum shape from the far-ultraviolet (FUV) to the optical wavelength range but also the numerous metal lines exhibited in its FUV spectrum. Results. We present a state-of-the-art spectral analysis of Feige 110. We determined Teff = 47 250 ± 2000 K, log g = 6.00 ± 0.20, and the abundances of He, N, P, S, Ti, V, Cr, Mn, Fe, Co, Ni, Zn, and Ge. Ti, V, Mn, Co, Zn, and Ge were identified for the first time in this star. Upper abundance limits were derived for C, O, Si, Ca, and Sc. Conclusions. The TheoSSA database of theoretical SEDs of stellar flux standards guarantees that the flux calibration of astronomical data and cross-calibration between different instruments can be based on models and SEDs calculated with state-of-the-art model-atmosphere codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anastasi, A.; Basti, A.; Bedeschi, F.
We report the test of many of the key elements of the laser-based calibration system for the muon g - 2 experiment E989 at Fermilab. The test was performed at the Laboratori Nazionali di Frascati's Beam Test Facility using a 450 MeV electron beam impinging on a small subset of the final g - 2 lead-fluoride crystal calorimeter system. The calibration system was configured as planned for the E989 experiment and uses the same type of laser and most of the final optical elements. We show results regarding the calorimeter's response calibration, the maximum equivalent electron energy that can be provided by the laser, and the stability of the calibration system components.
Finite element code development for modeling detonation of HMX composites
NASA Astrophysics Data System (ADS)
Duran, Adam; Sundararaghavan, Veera
2015-06-01
In this talk, we present a hydrodynamics code for modeling shock and detonation waves in HMX. A stable, efficient solution strategy based on a Taylor-Galerkin finite element (FE) discretization was developed to solve the reactive Euler equations. In our code, well-calibrated equations of state for the solid unreacted material and the gaseous reaction products have been implemented, along with a chemical reaction scheme and a mixing rule to define the properties of partially reacted states. A linear Gruneisen equation of state, calibrated from experiments, was employed for the unreacted HMX. The JWL form was used to model the EOS of the gaseous reaction products. It is assumed that the unreacted explosive and the reaction products are in both pressure and temperature equilibrium. The overall specific volume and internal energy were computed using the rule of mixtures. An Arrhenius kinetics scheme was integrated to model the chemical reactions. A locally controlled dissipation was introduced that yields a non-oscillatory stabilized scheme at the shock front. The FE model was validated against analytical solutions for the Sod shock tube and ZND strong detonation models and then used to perform 2D and 3D shock simulations. We will present benchmark problems for geometries in which a single HMX crystal is subjected to a shock condition. Our current progress towards developing microstructural models of HMX/binder composites will also be discussed.
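The JWL product equation of state mentioned above has a standard published form, reproduced here for reference (V is the relative volume, E the internal energy per unit initial volume; A, B, R_1, R_2, and ω are constants calibrated to experiment, whose HMX values are not quoted in the abstract):

    p = A\left(1 - \frac{\omega}{R_1 V}\right)e^{-R_1 V} + B\left(1 - \frac{\omega}{R_2 V}\right)e^{-R_2 V} + \frac{\omega E}{V}

The two exponentials dominate at high compression, while the ω E/V term carries the late-time, nearly ideal-gas expansion of the products.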
NASA Astrophysics Data System (ADS)
Krüger, Harald; Stephan, Thomas; Engrand, Cécile; Briois, Christelle; Siljeström, Sandra; Merouane, Sihane; Baklouti, Donia; Fischer, Henning; Fray, Nicolas; Hornung, Klaus; Lehto, Harry; Orthous-Daunay, Francois-Régis; Rynö, Jouni; Schulz, Rita; Silén, Johan; Thirkell, Laurent; Trieloff, Mario; Hilchenbach, Martin
2015-11-01
COmetary Secondary Ion Mass Analyzer (COSIMA) is a time-of-flight secondary ion mass spectrometry (TOF-SIMS) instrument on board the Rosetta space mission. COSIMA has been designed to measure the composition of cometary dust particles. It has a mass resolution m/Δm of 1400 at mass 100 u, thus enabling the discrimination of inorganic mass peaks from organic ones in the mass spectra. We have evaluated the identification capabilities of the reference model of COSIMA for inorganic compounds using a suite of terrestrial minerals that are relevant for cometary science. Ground calibration demonstrated that the performance of the flight model was similar to that of the reference model. The list of minerals used in this study was chosen based on the mineralogy of meteorites, interplanetary dust particles, and Stardust samples. It contains anhydrous and hydrous ferromagnesian silicates, refractory silicates and oxides (present in meteoritic Ca-Al-rich inclusions), carbonates, and Fe-Ni sulfides. From the analyses of these minerals, we have calculated relative sensitivity factors for a suite of major and minor elements in order to provide a basis for element quantification and for the possible identification of major mineral classes present in the cometary particles.
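Relative sensitivity factors of the kind derived here enter SIMS quantification in a standard way; one common convention (a sketch, not necessarily COSIMA's exact normalization) relates measured secondary-ion intensities I to atomic abundances n through a chosen reference element:

    \frac{n_X}{n_{\mathrm{ref}}} = \frac{1}{\mathrm{RSF}_X}\,\frac{I_X}{I_{\mathrm{ref}}}

Once RSF_X has been fixed against mineral standards on the ground, an unknown particle's element ratios follow directly from its flight mass spectrum.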
Development and test of sets of 3D printed age-specific thyroid phantoms for 131I measurements
NASA Astrophysics Data System (ADS)
Beaumont, Tiffany; Caldeira Ideias, Pedro; Rimlinger, Maeva; Broggio, David; Franck, Didier
2017-06-01
In the case of a nuclear reactor accident, the release contains a high proportion of iodine-131, which can be inhaled or ingested by members of the public. Iodine-131 is naturally retained in the thyroid and increases the thyroid cancer risk. Since the radiation-induced thyroid cancer risk is greater for children than for adults, the thyroid dose to children should be assessed as accurately as possible. For that purpose, direct measurements should be carried out with age-specific calibration factors; currently, however, there are no age-specific thyroid phantoms allowing a robust measurement protocol. A set of age-specific thyroid phantoms for 5-, 10-, and 15-year-old children and for the adult has been designed and 3D printed. A realistic thyroid shape was selected, and material properties were taken into account to simulate the attenuation of biological tissues. The thyroid volumes follow ICRP recommendations, and the phantoms also include the trachea and a spine model. Several versions, with or without spine, with or without trachea, and with or without an age-specific neck, have been manufactured in order to study the influence of these elements on calibration factors. The calibration factor obtained with the adult phantom and that obtained with a reference phantom are in reasonable agreement. In vivo calibration experiments with germanium detectors have shown that the difference in counting efficiency, the inverse of the calibration factor, between the 5-year-old and adult phantoms is 25% for measurements at contact. It is also experimentally evidenced that the inverse of the calibration factor varies linearly with the thyroid volume. The influence of scattering elements such as the neck or spine is not evidenced by the experimental measurements.
Anzano, Jesús M; Villoria, Mark A; Ruíz-Medina, Antonio; Lasheras, Roberto J
2006-08-11
A microscopic laser-induced breakdown spectrometer was used to evaluate the analytical matrix effect commonly observed in the analysis of geological materials. Samples were analyzed in either powder or pressed-pellet form. Calibration curves for a number of iron and aluminum compounds showed a linear relationship between elemental concentration and peak intensity. A direct determination of elemental content can thus be made from these calibration curves. To investigate matrix effects, synthetic model samples were prepared from various iron and aluminum compounds spiked with SiO2 and CaCO3. The addition of these matrices had a pronounced analytical effect on the compounds prepared as pressed pellets. However, the results indicated the absence of matrix effects when the samples were presented to the laser as loose powders on tape; comparison with certified values indicated the reliability of this approach for accurate analysis, provided the sample particle diameters are greater than approximately 100 μm. Finally, the simultaneous analysis of two different elements was demonstrated using powders on tape.
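A calibration curve of the kind described, with peak intensity linear in concentration, reduces to a one-line least-squares fit; a minimal sketch with made-up numbers (any real curve would use the measured standards):

    import numpy as np

    # Hypothetical standards: concentration (wt%) vs. background-corrected peak intensity
    conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    intensity = np.array([120.0, 235.0, 470.0, 950.0, 1890.0])

    slope, intercept = np.polyfit(conc, intensity, 1)  # I = slope*C + intercept

    # Invert the fitted line to estimate an unknown from its measured intensity
    i_unknown = 700.0
    c_unknown = (i_unknown - intercept) / slope
    print(f"estimated concentration: {c_unknown:.2f} wt%")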
NASA Astrophysics Data System (ADS)
Ilehag, R.; Schenk, A.; Hinz, S.
2017-08-01
This paper presents a concept for the classification of facade elements based on the material and geometry of the elements, in addition to the thermal radiation of the facade, using a multimodal Unmanned Aerial Vehicle (UAV) system. Once the concept is finalized and functional, the workflow can be used for building energy demand estimation by exploiting existing methods for estimating the heat transfer coefficient and the transmitted heat loss. The multimodal system consists of a thermal, a hyperspectral, and an optical sensor, and can be operated from a UAV. The challenges of working with sensors that operate in different spectral ranges and have different technical specifications, such as radiometric and geometric resolution, are presented. Different approaches to data fusion are addressed, such as image registration, generation of 3D models by image matching, and classification based on either the geometry of the object or the pixel values. As a first step towards realizing the concept, the result of a geometric calibration with a purpose-designed multimodal calibration pattern is presented.
NASA Astrophysics Data System (ADS)
Li, Xun; Li, Xu; Zhu, Shanan; He, Bin
2009-05-01
Magnetoacoustic tomography with magnetic induction (MAT-MI) is a recently proposed imaging modality to image the electrical impedance of biological tissue. It combines the good contrast of electrical impedance tomography with the high spatial resolution of sonography. In this paper, a three-dimensional MAT-MI forward problem was investigated using the finite element method (FEM). The corresponding FEM formulae describing the forward problem are introduced. In the finite element analysis, magnetic induction in an object with conductivity values close to biological tissues was first carried out. The stimulating magnetic field was simulated as that generated from a three-dimensional coil. The corresponding acoustic source and field were then simulated. Computer simulation studies were conducted using both concentric and eccentric spherical conductivity models with different geometric specifications. In addition, the grid size for finite element analysis was evaluated for the model calibration and evaluation of the corresponding acoustic field.
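For reference, the MAT-MI forward problem couples the magnetically induced eddy currents J to the static field B0 through a Lorentz-force acoustic source; the pressure wave equation commonly used in the MAT-MI literature reads (a sketch, with c_s the acoustic speed in tissue):

    \nabla^2 p - \frac{1}{c_s^2}\,\frac{\partial^2 p}{\partial t^2} = \nabla\cdot\left(\mathbf{J}\times\mathbf{B}_0\right)

The FEM forward solution described above amounts to computing J from the magnetic induction problem, forming this divergence source term, and then propagating the acoustic field.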
NASA Astrophysics Data System (ADS)
Rauch, T.
2016-05-01
Theoretical spectral energy distributions (SEDs) of white dwarfs provide a powerful tool for cross-calibration and sensitivity control of instruments from the far infrared to the X-ray energy range. Such SEDs can be calculated from fully metal-line-blanketed NLTE model atmospheres, computed, e.g., by the Tübingen NLTE Model-Atmosphere Package (TMAP), which has arrived at a high level of sophistication. TMAP has been successfully employed for the reliable spectral analysis of many hot, compact post-AGB stars. High-quality stellar spectra obtained over a wide energy range establish a database with a large number of spectral lines of many successive ions of different species. Their analysis allows the determination of effective temperatures, surface gravities, and element abundances of individual (pre-)white dwarfs with very small error ranges. We present applications of TMAP SEDs to spectral analyses of hot, compact stars in the parameter range from (pre-)white dwarfs to neutron stars and demonstrate the improvement of flux calibration using white-dwarf SEDs that are available, e.g., via registered services in the Virtual Observatory.
Modeling of reservoir compaction and surface subsidence at South Belridge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, K.S.; Chan, C.K.; Prats, M.
1995-08-01
Finite-element models of depletion-induced reservoir compaction and surface subsidence have been calibrated with observed subsidence, locations of surface fissures, and regions of subsurface casing damage at South Belridge and used predictively for the evaluation of alternative reservoir-development plans. Pressure maintenance through diatomite waterflooding appears to be a beneficial means of minimizing additional subsidence and fissuring as well as reducing axial-compressive-type casing damage.
NASA Astrophysics Data System (ADS)
Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas
2018-07-01
This paper investigates the application of hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of the error functions represent the uncertainty of the calibrated model in predicting the building's response (here, its modal parameters). The focus of this paper is to determine whether the model uncertainties quantified using dynamic measurements at the building's reference (calibration) state can be used to improve the accuracy of model predictions at a different structural state, e.g., for the damaged structure. The effect of prediction error bias on the uncertainty of the predicted values is also studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate the parameters of the initial FE model as well as the error functions. Before the building was demolished, six of its exterior walls were removed, and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as at two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from the vibration tests. Moreover, it is shown that including the prediction error bias in the updating process, instead of the commonly used zero-mean error function, can significantly reduce the prediction uncertainties.
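The error functions described above can be written compactly; a sketch of the form implied by the abstract (λ_i is an identified modal parameter, λ̂_i(θ) its FE-model counterpart, and the bias μ_i the nonzero mean the authors advocate estimating):

    e_i = \lambda_i - \hat{\lambda}_i(\theta), \qquad e_i \sim \mathcal{N}(\mu_i, \sigma_i^2)

Setting μ_i = 0 recovers the commonly used zero-mean error function; estimating μ_i alongside σ_i is what reduces the prediction uncertainties at the damaged states.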
NASA Astrophysics Data System (ADS)
Hailegeorgis, Teklu T.; Alfredsen, Knut; Abdella, Yisak S.; Kolberg, Sjur
2015-03-01
Identification of proper parameterizations of spatial heterogeneity is required for precipitation-runoff models. However, relevant studies aimed specifically at hourly runoff simulation in boreal mountainous catchments are not common. We conducted calibration and evaluation of hourly runoff simulation in a boreal mountainous watershed based on six different parameterizations of the spatial heterogeneity of subsurface storage capacity, for a semi-distributed setup (subcatchments, hereafter called elements) and a distributed setup (1 × 1 km2 grid). We evaluated representations of element-to-element, grid-to-grid, and probabilistic subcatchment/subbasin, subelement, and subgrid heterogeneities. The parameterization cases satisfactorily reproduced the streamflow hydrographs, with Nash-Sutcliffe efficiency values for the calibration and validation periods of up to 0.84 and 0.86, respectively, and similarly up to 0.85 and 0.90 for the log-transformed streamflow. The parameterizations reproduced the flow duration curves, but predictive reliability in terms of quantile-quantile (Q-Q) plots indicated marked over- and under-predictions. The simple and parsimonious parameterizations with no subelement or subgrid heterogeneity provided simulation performance equivalent to the more complex cases. The results indicated that (i) identification of parameterizations requires measurements from a denser network of precipitation stations than is required for acceptable calibration of the precipitation-streamflow relationship, (ii) there are challenges in identifying parameterizations based only on calibration against catchment-integrated streamflow observations, and (iii) there is a potential preference for the simple and parsimonious parameterizations for operational forecasting, contingent on their equivalent simulation performance for the available input data. In addition, the effects of non-identifiability of parameters (interactions and equifinality) can contribute to the non-identifiability of the parameterizations.
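For reference, the Nash-Sutcliffe efficiency quoted above is defined as (Q_o observed and Q_s simulated streamflow, the overbar denoting the observed mean):

    \mathrm{NSE} = 1 - \frac{\sum_t \left(Q_{o,t} - Q_{s,t}\right)^2}{\sum_t \left(Q_{o,t} - \bar{Q}_o\right)^2}

NSE = 1 indicates a perfect fit, so values of 0.84-0.86 mean the model explains most of the observed variance.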
Rapid analysis of pharmaceutical drugs using LIBS coupled with multivariate analysis.
Tiwari, P K; Awasthi, S; Kumar, R; Anand, R K; Rai, P K; Rai, A K
2018-02-01
Type 2 diabetes drug tablets of various brands containing voglibose, with dose strengths of 0.2 and 0.3 mg, have been examined using the laser-induced breakdown spectroscopy (LIBS) technique. Statistical methods, namely principal component analysis (PCA) and partial least squares regression (PLSR), have been employed on the LIBS spectral data for classifying the drug samples and developing calibration models. We have developed a ratio-based calibration model using PLSR, in which the relative spectral intensity ratios H/C, H/N, and O/N are used. The developed model has then been employed to predict the relative concentrations of elements in unknown drug samples. The experiment was performed in air and in an argon atmosphere, and the results obtained have been compared. The present model provides a rapid spectroscopic method for drug analysis with high statistical significance for online control and measurement processes in a wide variety of pharmaceutical industrial applications.
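A ratio-based PLSR calibration of the kind described can be sketched with scikit-learn; the three-column feature matrix of H/C, H/N, and O/N intensity ratios and the concentrations below are placeholders, not the paper's data:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Hypothetical training data: rows are drug samples,
    # columns are the H/C, H/N, O/N spectral intensity ratios
    X = np.array([[1.10, 0.80, 0.60],
                  [1.30, 0.95, 0.72],
                  [1.45, 1.05, 0.81],
                  [1.62, 1.20, 0.93]])
    y = np.array([0.10, 0.15, 0.20, 0.25])  # relative analyte concentration

    pls = PLSRegression(n_components=2)
    pls.fit(X, y)

    # Predict the relative concentration in an "unknown" sample
    print(pls.predict(np.array([[1.38, 1.00, 0.77]])))

Working with intensity ratios rather than raw intensities suppresses shot-to-shot laser fluctuations, which is the design choice the abstract's H/C, H/N, and O/N features reflect.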
NASA Astrophysics Data System (ADS)
Skaugen, Thomas; Weltzien, Ingunn H.
2016-09-01
Snow is an important and complicated element in hydrological modelling. The traditional catchment hydrological model, with its many free calibration parameters, also in its snow sub-models, is not a well-suited tool for predicting conditions for which it has not been calibrated. Such conditions include prediction in ungauged basins and assessing the hydrological effects of climate change. In this study, a new model for the spatial distribution of snow water equivalent (SWE), parameterized solely from the observed spatial variability of precipitation, is compared with the snow distribution model currently used in the operational flood forecasting models in Norway. The former uses a dynamic gamma distribution and is called Snow Distribution_Gamma (SD_G), whereas the latter has a fixed, calibrated coefficient of variation, which parameterizes a log-normal model for snow distribution, and is called Snow Distribution_Log-Normal (SD_LN). The two models are implemented in the parameter-parsimonious rainfall-runoff model Distance Distribution Dynamics (DDD), and their capability for predicting runoff, SWE, and snow-covered area (SCA) is tested and compared for 71 Norwegian catchments. The calibration period is 1985-2000 and the validation period is 2000-2014. Results show that SD_G better simulates SCA when compared with MODIS satellite-derived snow cover. In addition, SWE is simulated more realistically in that seasonal snow melts out, and the building up of "snow towers", which gives spurious positive trends in SWE and is typical of SD_LN, is prevented. The precision of runoff simulations using SD_G is slightly inferior, with a reduction in the Nash-Sutcliffe and Kling-Gupta efficiency criteria of 0.01, but it is shown that the high precision in runoff prediction using SD_LN is accompanied by erroneous simulations of SWE.
Determination of Stark parameters by cross-calibration in a multi-element laser-induced plasma
NASA Astrophysics Data System (ADS)
Liu, Hao; Truscott, Benjamin S.; Ashfold, Michael N. R.
2016-05-01
We illustrate a Stark broadening analysis of the electron density Ne and temperature Te in a laser-induced plasma (LIP), using a model free of assumptions regarding local thermodynamic equilibrium (LTE). The method relies on Stark parameters determined also without assuming LTE, which are often unknown and unavailable in the literature. Here, we demonstrate that the necessary values can be obtained in situ by cross-calibration between the spectral lines of different charge states, and even different elements, given determinations of Ne and Te based on appropriate parameters for at least one observed transition. This approach enables essentially free choice between species on which to base the analysis, extending the range over which these properties can be measured and giving improved access to low-density plasmas out of LTE. Because of the availability of suitable tabulated values for several charge states of both Si and C, the example of a SiC LIP is taken to illustrate the consistency and accuracy of the procedure. The cross-calibrated Stark parameters are at least as reliable as values obtained by other means, offering a straightforward route to extending the literature in this area.
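As background, the electron-density side of such an analysis typically rests on the approximately linear scaling of the Stark line width with Ne; a sketch of the usual leading-order relation (w is the tabulated Stark half-width parameter at a reference density N_ref, commonly 10^16 or 10^17 cm^-3):

    \Delta\lambda_{\mathrm{FWHM}} \approx 2\,w\,\frac{N_e}{N_{\mathrm{ref}}}

Cross-calibration then amounts to running this relation in reverse: with Ne fixed by one well-characterized transition, the measured widths of other lines, of other charge states or elements, yield their previously unknown w values.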
Fu, Hongbo; Wang, Huadong; Jia, Junwei; Ni, Zhibo; Dong, Fengzhong
2018-01-01
Because of the self-absorption of major-element lines, the scarcity of observable spectral lines of trace elements, and the need for a relative efficiency correction of the experimental system, accurate quantitative analysis with calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is in fact not easy. To overcome these difficulties, the standard reference line (SRL) method combined with one-point calibration (OPC) is used to analyze six elements in three stainless-steel and five heat-resistant steel samples. The Stark broadening and the Saha-Boltzmann plot of Fe are used to calculate the electron density and the plasma temperature, respectively. In the present work, we tested the original SRL method, the SRL method with OPC, and the intercept method with OPC. The final results show that the latter two methods can effectively improve the overall accuracy of the quantitative analysis and the detection limits of the trace elements.
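The CF-LIBS machinery behind both the SRL and OPC variants is the Boltzmann plot; one common form, shown here as a sketch with intensities in energy units (I the integrated line intensity, λ the wavelength, g_k and A_ki the upper-level degeneracy and transition probability, E_k the upper-level energy, U_s(T) the partition function, C_s the species concentration, and F an experimental factor):

    \ln\frac{I\,\lambda}{g_k A_{ki}} = -\frac{E_k}{k_B T} + \ln\frac{F\,C_s}{U_s(T)}

A linear fit over many lines of one species gives T from the slope; the per-species intercepts carry the concentrations, which is where a one-point calibration can anchor the otherwise relative scale.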
NASA Astrophysics Data System (ADS)
Arabshahi, P.; Chao, Y.; Chien, S.; Gray, A.; Howe, B. M.; Roy, S.
2008-12-01
In many areas of Earth science, including climate change research, there is a need for near real-time integration of data from heterogeneous and spatially distributed sensors, in particular in-situ and space- based sensors. The data integration, as provided by a smart sensor web, enables numerous improvements, namely, 1) adaptive sampling for more efficient use of expensive space-based sensing assets, 2) higher fidelity information gathering from data sources through integration of complementary data sets, and 3) improved sensor calibration. The specific purpose of the smart sensor web development presented here is to provide for adaptive sampling and calibration of space-based data via in-situ data. Our ocean-observing smart sensor web presented herein is composed of both mobile and fixed underwater in-situ ocean sensing assets and Earth Observing System (EOS) satellite sensors providing larger-scale sensing. An acoustic communications network forms a critical link in the web between the in-situ and space-based sensors and facilitates adaptive sampling and calibration. After an overview of primary design challenges, we report on the development of various elements of the smart sensor web. These include (a) a cable-connected mooring system with a profiler under real-time control with inductive battery charging; (b) a glider with integrated acoustic communications and broadband receiving capability; (c) satellite sensor elements; (d) an integrated acoustic navigation and communication network; and (e) a predictive model via the Regional Ocean Modeling System (ROMS). Results from field experiments, including an upcoming one in Monterey Bay (October 2008) using live data from NASA's EO-1 mission in a semi closed-loop system, together with ocean models from ROMS, are described. Plans for future adaptive sampling demonstrations using the smart sensor web are also presented.
Wu, John Z; Pan, Christopher S; Wimer, Bryan M; Rosen, Charles L
2017-01-01
Traumatic brain injuries are among the most common severely disabling injuries in the United States. Construction helmets are considered essential personal protective equipment for reducing traumatic brain injury risks at work sites. In this study, we proposed a practical finite element modeling approach that would be suitable for engineers to optimize construction helmet design. The finite element model includes all essential anatomical structures of a human head (i.e. skin, scalp, skull, cerebrospinal fluid, brain, medulla, spinal cord, cervical vertebrae, and discs) and all major engineering components of a construction helmet (i.e. shell and suspension system). The head finite element model has been calibrated using the experimental data in the literature. It is technically difficult to precisely account for the effects of the neck and body mass on the dynamic responses, because the finite element model does not include the entire human body. An approximation approach has been developed to account for the effects of the neck and body mass on the dynamic responses of the head-brain. Using the proposed model, we have calculated the responses of the head-brain during a top impact when wearing a construction helmet. The proposed modeling approach would provide a tool to improve the helmet design on a biomechanical basis.
DETERMINATION OF ELEMENTAL COMPOSITIONS BY HIGH RESOLUTION MASS SPECTROMETRY WITHOUT MASS CALIBRANTS
Widely applicable mass calibrants, including perfluorokerosene, are available for gas-phase introduction of analytes ionized by electron impact (EI) prior to analysis using high resolution mass spectrometry. Unfortunately, no all-purpose calibrants are available for recently dev...
Establishment and correction of an Echelle cross-prism spectrogram reduction model
NASA Astrophysics Data System (ADS)
Zhang, Rui; Bayanheshig; Li, Xiaotian; Cui, Jicheng
2017-11-01
The accuracy of an echelle cross-prism spectrometer depends on the degree to which the spectrum reduction model matches the actual state of the spectrometer. Adjustment errors, however, can change the actual state of the spectrometer so that the reduction model no longer matches, producing an inaccurate wavelength calibration. The calibration of the spectrogram reduction model is therefore important for the analysis of any echelle cross-prism spectrometer. In this study, the spectrogram reduction model of an echelle cross-prism spectrometer was established. The laws governing image position as a function of the system parameters were simulated to assess the influence of changes in prism refractive index, focal length, and other parameters on the calculated results. The model was divided into different wavebands. An iterative method, the least-squares principle, and element lamps with known characteristic wavelengths were used to calibrate the spectral model in the different wavebands and obtain the actual values of the system parameters. After correction, the deviations between the actual x- and y-coordinates and those calculated by the model are less than one pixel. The model corrected by this method thus reflects the system parameters in the current spectrometer state and supports accurate wavelength extraction. Repeated model correction can also guide instrument installation and adjustment, reducing the difficulty of alignment.
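At the core of any such echelle reduction model is the grating equation evaluated at high diffraction orders, with the cross-prism supplying the order separation; for reference (a sketch: m the order, d the groove spacing, α and β the incidence and diffraction angles):

    m\,\lambda = d\,(\sin\alpha + \sin\beta)

Each free parameter the authors calibrate (prism refractive index, focal length, alignment angles) enters the mapping from (m, λ) to detector coordinates (x, y), which is why a least-squares fit against known lamp lines can recover the instrument's actual state.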
Optical eye simulator for laser dazzle events.
Coelho, João M P; Freitas, José; Williamson, Craig A
2016-03-20
An optical simulator of the human eye and its application to laser dazzle events are presented. The simulator combines optical design software (ZEMAX) with a scientific programming language (MATLAB) and allows the user to implement and analyze a dazzle scenario using practical, real-world parameters. Contrary to conventional analytical glare analysis, this work uses ray tracing together with a scattering model and parameters for each optical element of the eye. The theoretical background of each such element is presented in relation to the model. The overall simulator's calibration, validation, and performance analysis are achieved by comparison with a simpler model based upon CIE disability glare data. Results demonstrate that this kind of advanced optical eye simulation can be used to represent laser dazzle and has the potential to extend the range of applicability of analytical models.
NASA Astrophysics Data System (ADS)
Fu, Xiaoting; Bressan, Alessandro; Marigo, Paola; Girardi, Léo; Montalbán, Josefina; Chen, Yang; Nanni, Ambra
2018-05-01
Precise studies of the Galactic bulge, globular clusters, the Galactic halo, and the Galactic thick disc require stellar models with α enhancement and various values of helium content. Such models are also important for extragalactic population synthesis studies. For this purpose, we complement the existing PARSEC models, which are based on the solar partition of heavy elements, with α-enhanced partitions. We collect detailed measurements of the metal mixture and helium abundance for the two populations of 47 Tuc (NGC 104) from the literature and calculate stellar tracks and isochrones with these α-enhanced compositions. By fitting the precise colour-magnitude diagram from HST ACS/WFC data, from the lower main sequence to the horizontal branch (HB), we calibrate free parameters that are important for the evolution of low-mass stars, such as the mixing at the bottom of the convective envelope. This new calibration significantly improves the prediction of the red giant branch bump (RGBB) brightness. Comparison with the observed RGB and HB luminosity functions also shows that the evolutionary lifetimes are correctly predicted. As a further result of this calibration process, we derive the age, distance modulus, reddening, and RGB mass-loss for 47 Tuc. We apply the new calibration and the α-enhanced mixtures of the two 47 Tuc populations ([α/Fe] ˜ 0.4 and 0.2) to other metallicities. The new models reproduce the RGBB observations much better than previous models. This new PARSEC database, with the newly updated α-enhanced stellar evolutionary tracks and isochrones, will also be part of the new stellar products for Gaia.
NASA Astrophysics Data System (ADS)
Li, N.; Yue, X. Y.
2018-03-01
Macroscopic root water uptake models proportional to a root density distribution function (RDDF) are the most commonly used approach to modelling water uptake by plants. As water uptake is difficult and labor intensive to measure, these models are often calibrated by inverse modeling. Most previous inversion studies assume the RDDF to be constant in depth and time, or dependent only on depth, for simplification. Under field conditions, however, this function varies with soil type and root growth and thus changes with both depth and time. This study proposes an inverse method to calibrate a both spatially and temporally varying RDDF in unsaturated water flow modeling. To overcome the difficulty imposed by ill-posedness, the calibration is formulated as an optimization problem in the framework of Tikhonov regularization theory, adding an additional constraint to the objective function. The resulting nonlinear optimization problem is then solved numerically with an efficient algorithm based on the finite element method. The advantage of our method is that the inverse problem is translated into a Tikhonov regularization functional minimization problem and solved via a variational construction, which circumvents the computational complexity of calculating the sensitivity matrix involved in many derivative-based parameter estimation approaches (e.g., Levenberg-Marquardt optimization). Moreover, the proposed method optimizes the RDDF without any assumed prior form, making it applicable to more general root water uptake models. Numerical examples illustrate the applicability and effectiveness of the proposed method. Finally, discussions of the stability and extension of the method are presented.
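The regularized objective described above has the generic Tikhonov form (a sketch: F is the forward unsaturated-flow operator acting on the discretized RDDF β, d the observations, L a smoothing operator, and α the regularization weight):

    \min_{\beta}\; J(\beta) = \lVert F(\beta) - d \rVert^2 + \alpha\,\lVert L\beta \rVert^2

The penalty term suppresses the wild oscillations that make the unregularized inverse problem ill-posed, at the cost of a smoothing bias controlled by α.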
General Nonlinear Ferroelectric Model v. Beta
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Wen; Robbins, Josh
2017-03-14
The purpose of this software is to function as a generalized ferroelectric material model. The material model is designed to work with existing finite element packages by providing updated information on material properties that are nonlinear and dependent on loading history. The two major nonlinear phenomena this model captures are domain-switching and phase transformation. The software itself does not contain potentially sensitive material information and instead provides a framework for different physical phenomena observed within ferroelectric materials. The model is calibrated to a specific ferroelectric material through input parameters provided by the user.
Monitoring of toxic elements present in sludge of industrial waste using CF-LIBS.
Kumar, Rohit; Rai, Awadhesh K; Alamelu, Devanathan; Aggarwal, Suresh K
2013-01-01
Industrial waste is one of the main causes of environmental pollution. Laser-induced breakdown spectroscopy (LIBS) was applied to detect toxic metals in the sludge of industrial waste water. Sludge on filter paper was obtained after filtering waste water samples collected from different sections of a water treatment plant situated in an industrial area of Kanpur City. The LIBS spectra of the sludge samples were recorded in the spectral range of 200 to 500 nm by focusing the laser light on the sludge. The calibration-free laser-induced breakdown spectroscopy (CF-LIBS) technique was used for the quantitative measurement of toxic elements such as Cr and Pb present in the samples. We also used the traditional calibration-curve approach to quantify these elements. The results obtained from CF-LIBS are in good agreement with those from the calibration-curve approach. Our results thus demonstrate that CF-LIBS is an appropriate technique for quantitative analysis where reference/standard samples are not available for constructing calibration curves. The results of the present experiment are alarming for people living near areas of industrial activity, as the concentrations of toxic elements are quite high compared with their admissible limits.
An Integrated Finite Element-based Simulation Framework: From Hole Piercing to Hole Expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Xiaohua; Sun, Xin; Golovashchenko, Sergey F.
An integrated finite element-based modeling framework is developed to predict the hole expansion ratio (HER) of AA6111-T4 sheet by considering the piercing-induced damage around the hole edge. Using damage models and parameters calibrated from previously reported tensile stretchability studies, the predicted HER correlates well with experimentally measured HER values for different hole-piercing clearances. The hole-piercing model shows that burrs are not generated on the sheared surface for clearances less than 20%, which corresponds well with experimental cross-sections of pierced holes. The finite-element-calculated HER is also not especially sensitive to piercing clearances below this value. However, as clearances increase to 30% and further to 40%, the HER values are predicted to be considerably smaller, also consistent with experimental measurements. Upon validation, the integrated modeling framework is used to examine the effects of different hole-piercing and hole-expansion conditions on the critical HERs for AA6111-T4.
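For reference, the hole expansion ratio predicted by the framework is the standard measure (d_0 the initial hole diameter, d_f the diameter when the first through-thickness crack appears):

    \mathrm{HER} = \frac{d_f - d_0}{d_0} \times 100\%

Piercing damage lowers d_f by seeding edge cracks, which is why the piercing clearance feeds directly into the predicted HER.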
Contribution of crosstalk to the uncertainty of electrostatic actuator calibrations.
Shams, Qamar A; Soto, Hector L; Zuckerwar, Allan J
2009-09-01
Crosstalk in electrostatic actuator calibrations is defined as the ratio of the microphone response to the actuator excitation voltage at a given frequency with the actuator polarization voltage turned off, to the response at the excitation frequency with the polarization voltage turned on. It consequently contributes to the uncertainty of electrostatic actuator calibrations. Two sources of crosstalk are analyzed: the first is attributed to the stray capacitance between the actuator electrode and the microphone backplate, and the second to the ground resistance appearing as a common element in the actuator excitation and microphone input loops. Measurements conducted on 1/4, 1/2, and 1 in. air condenser microphones reveal that the crosstalk has no frequency dependence up to the membrane resonance frequency and that the level of crosstalk lies at about -60 dB for all three microphones, conclusions that are consistent with theory. The measurements support the stray capacitance model. The contribution of crosstalk to the measurement standard uncertainty of an electrostatic actuator calibration is therewith 0.01 dB.
A Deep Learning Approach to LIBS Spectroscopy for Planetary Applications
NASA Astrophysics Data System (ADS)
Mullen, T. H.; Parente, M.; Gemp, I.; Dyar, M. D.
2017-12-01
The ChemCam instrument on the Curiosity rover has collected >440,000 laser-induced breakdown spectroscopy (LIBS) spectra from 1500 different geological targets since 2012. The team uses a pipeline of preprocessing and partial least squares techniques to predict the compositions of surface materials [1]. Unfortunately, such multivariate techniques are plagued by hard-to-meet assumptions, requiring constant hyperparameter tuning for specific elements and depending on the amount of training data available; if the whole distribution of data is not seen, the method will overfit the training data and generalizability will suffer. The rover carries only 10 on-board calibration targets, which represent a small subset of the geochemical samples the rover is expected to investigate. Deep neural networks have been used to bypass these issues in other fields. Semi-supervised techniques allow researchers to utilize small labeled datasets together with vast amounts of unlabeled data. One example is the variational autoencoder, a semi-supervised generative model in the form of a deep neural network. The autoencoder assumes that LIBS spectra are generated from a distribution conditioned on the elemental compositions in the sample and some nuisance factors. The system is broken into two models: one that predicts elemental composition from the spectra and one that generates spectra from compositions that may or may not be seen in the training set. The synthesized spectra show strong agreement with geochemical conventions for expressing specific compositions. The composition predictions show improved generalizability relative to PLS. Deep neural networks have also been used to transfer knowledge from one dataset to another to solve unlabeled-data problems. Given that vast amounts of laboratory LIBS spectra have been acquired in the past few years, it is now feasible to train a deep net to predict elemental composition from lab spectra. Transfer learning (manifold alignment or calibration transfer) [2] is then used to fine-tune the model from terrestrial lab data to Martian field data. Neural networks and generative models provide the flexibility needed for elemental composition prediction and the synthesis of unseen spectra. [1] Clegg S. et al. (2016) Spectrochim. Acta B, 129, 64-85. [2] Boucher T. et al. (2017) J. Chemom., 31, e2877.
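The variational autoencoder described above is trained by maximizing the evidence lower bound; a sketch of the standard objective (x a LIBS spectrum, z the latent code, q the encoder, p the decoder and prior):

    \log p(x) \;\ge\; \mathbb{E}_{q(z\mid x)}\!\left[\log p(x\mid z)\right] \;-\; \mathrm{KL}\!\left(q(z\mid x)\,\middle\|\,p(z)\right)

In the semi-supervised variant, the labeled compositions steer part of the latent space while the unlabeled spectra improve the density model, which is what lets a small calibration set go further than in purely supervised regression.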
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban
2018-05-01
We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of an FDEM computer simulation for a range of “training” settings with experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.
FATE 5: A natural attenuation calibration tool for groundwater fate and transport modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nevin, J.P.; Connor, J.A.; Newell, C.J.
1997-12-31
A new groundwater attenuation modeling tool (FATE 5) has been developed to assist users in determining site-specific natural attenuation rates for organic constituents dissolved in groundwater. FATE 5 is based on, and represents an enhancement to, the Domenico analytical groundwater transport model. The enhancements include an optimization routine that matches results from the Domenico model to measured site concentrations, an extensive database of chemical property data, and calculation of an estimate of the length of time needed for a plume to reach steady-state conditions. FATE 5 was developed in Microsoft® Excel and is controlled by means of a simple, user-friendly graphical interface. Using the Solver routine built into Excel, FATE 5 calibrates the attenuation rate used by the Domenico model to match site-specific data. By calibrating the decay rate to site-specific measurements, FATE 5 can yield accurate predictions of long-term natural attenuation processes within a groundwater plume. In addition, FATE 5 includes a formulation of the transient Domenico solution to help the user determine whether the steady-state assumptions employed by the model are appropriate. The calibrated groundwater flow model can then be used either to (i) predict upper-bound constituent concentrations in groundwater based on an observed source zone concentration, or (ii) back-calculate a lower-bound SSTL value based on a user-specified exposure point concentration at the groundwater point of exposure (POE). This paper reviews the major elements of the FATE 5 model and gives results for real-world applications. Key modeling assumptions and summary guidelines regarding calculation procedures and input parameter selection are also addressed.
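The Domenico-type solution underlying FATE 5 has a compact steady-state centerline form that is easy to sketch; the version below is the common textbook/BIOSCREEN-style variant with first-order decay (an illustration under those assumptions, not the exact FATE 5 implementation; parameter names are hypothetical):

    import numpy as np
    from scipy.special import erf

    def domenico_centerline(x, c0, v, ax, ay, az, lam, Y, Z):
        """Steady-state centerline concentration downgradient of a planar source.

        x   : distance downgradient [m];  c0 : source concentration [mg/L]
        v   : seepage velocity [m/d];     ax, ay, az : dispersivities [m]
        lam : first-order decay rate [1/d];  Y, Z : source width and depth [m]
        """
        decay = np.exp((x / (2.0 * ax)) * (1.0 - np.sqrt(1.0 + 4.0 * lam * ax / v)))
        lateral = erf(Y / (4.0 * np.sqrt(ay * x)))    # transverse spreading
        vertical = erf(Z / (2.0 * np.sqrt(az * x)))   # vertical spreading
        return c0 * decay * lateral * vertical

    # Example: attenuation of a 10 mg/L source 100 m downgradient
    print(domenico_centerline(100.0, 10.0, 0.1, 10.0, 1.0, 0.1, 0.001, 20.0, 3.0))

Excel's Solver plays the role FATE 5 assigns it by adjusting lam (and, if desired, the dispersivities) until the computed curve matches observed well concentrations.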
Turk, G C; Yu, L L; Salit, M L; Guthrie, W F
2001-06-01
Multielement analyses of environmental reference materials have been performed using existing certified reference materials (CRMs) as calibration standards for inductively coupled plasma-mass spectrometry. The analyses were performed using a high-performance methodology that yields comparison measurement uncertainties significantly smaller than the uncertainties of the certified values of the calibration CRM. Consequently, the determined values have uncertainties that are very nearly equivalent to the uncertainties of the calibration CRM. Several uses of this calibration transfer are proposed, including re-certification measurements of replacement CRMs, establishing traceability of one CRM to another, and demonstrating the equivalence of two CRMs. RM 8704, a river sediment, was analyzed using SRM 2704, Buffalo River Sediment, as the calibration standard. SRM 1632c, Trace Elements in Bituminous Coal, which is a replacement for SRM 1632b, was analyzed using SRM 1632b as the standard. SRM 1635, Trace Elements in Subbituminous Coal, was also analyzed using SRM 1632b as the standard.
Peng, Jiyu; He, Yong; Ye, Lanhan; Shen, Tingting; Liu, Fei; Kong, Wenwen; Liu, Xiaodan; Zhao, Yun
2017-07-18
Fast detection of heavy metals in plant materials is crucial for environmental remediation and for ensuring food safety. However, most plant materials have a high moisture content, whose influence cannot simply be ignored. We therefore propose a method for reducing the influence of moisture in the fast detection of heavy metals using laser-induced breakdown spectroscopy (LIBS). First, we investigated the effect of moisture content on signal intensity, stability, and plasma parameters (temperature and electron density) and determined the main factors influencing the signal variations (the experimental parameters F and the change in analyte concentration). For chromium detection, the rice leaves underwent a quick drying procedure, and two further strategies were used to reduce the effects of moisture content and shot-to-shot fluctuation. An exponential model based on the background intensity was used to correct the actual element concentration in the analyte. In addition, the signal-to-background ratio was used for univariate calibration, and partial least squares regression (PLSR) was used for multivariate calibration, to compensate for prediction deviations. The PLSR calibration model obtained the best result, with a correlation coefficient of 0.9669 and a root-mean-square error of 4.75 mg/kg in the prediction set. These preliminary results indicate that the proposed method allows the detection of heavy metals in plant materials using LIBS and could be used for element mapping in future work.
NASA Technical Reports Server (NTRS)
Putnam, J. B.; Unataroiu, C. D.; Somers, J. T.
2014-01-01
The THOR anthropomorphic test device (ATD) has been developed and continuously improved by the National Highway Traffic Safety Administration to provide automotive manufacturers an advanced tool that can be used to assess the injury risk of vehicle occupants in crash tests. Recently, a series of modifications was completed to improve the biofidelity of the THOR ATD [1]. The updated THOR Modification Kit (THOR-K) ATD was employed at Wright-Patterson Air Force Base in 22 impact tests in three configurations: vertical, lateral, and spinal [2]. Although a computational finite element (FE) model of the THOR had been developed previously [3], updates to the model were needed to incorporate the recent changes in the modification kit. The main goal of this study was to develop and validate an FE model of the THOR-K ATD. The CAD drawings of the THOR-K ATD were reviewed, and FE models were developed for the updated parts. For example, the head-skin geometry was found to have changed significantly, so its model was re-meshed (Fig. 1a). A protocol was developed to calibrate each component identified as key to the kinematic and kinetic response of the THOR-K head/neck ATD FE model (Fig. 1b). The available ATD tests were divided into two groups: a) calibration tests, where the unknown material parameters of deformable parts (e.g., head skin, pelvis foam) were optimized to match the data, and b) validation tests, where the model response was only compared with test data by calculating a score using the CORrelation and Analysis (CORA) rating system. Finally, the whole ATD model was validated under horizontal-, vertical-, and lateral-loading conditions against data recorded in the Wright-Patterson tests [2]. Overall, the final THOR-K ATD model developed in this study is shown to respond similarly to the ATD in all validation tests. This good performance indicates that the optimization performed during calibration, using the CORA score as the objective function, is not test specific. This provides confidence in the ATD model for use in predicting responses under test conditions not performed in this study, such as those expected during spacecraft landing. Comparison studies with ATD and human models may also be performed to contribute to future changes in the THOR ATD design, in an effort to improve its biofidelity, which has traditionally been based on post-mortem human subject testing and designer experience.
Use of a Stanton Tube for Skin-Friction Measurements
NASA Technical Reports Server (NTRS)
Abarbanel, S. S.; Hakkinen, R. J.; Trilling, L.
1959-01-01
A small total-pressure tube resting against a flat-plate surface was used as a Stanton tube and calibrated as a skin-friction meter at various subsonic and supersonic speeds. Laminar flow was maintained for the supersonic runs at a Mach number M(sub infinity) of 2. At speeds between M(sub infinity) = 1.33 and M(sub infinity) = 1.87, the calibrations were carried out in a turbulent boundary layer. The subsonic flows were found to be in transition. The skin-friction readings of a floating-element type of balance served as the reference values against which the Stanton tube was calibrated. A theoretical model was developed which, for moderate values of the shear parameter tau, accurately predicts the performance of the Stanton tube in subsonic and supersonic flows. A "shear correction factor" was found to explain the deviations from the basic model when tau became too large. Compressibility effects were important only in the case of turbulent supersonic flows, and they did not alter the form of the calibration curve. The test Reynolds numbers, based on the distance from the leading edge and free-stream conditions, ranged from 70,000 to 875,000. The turbulent-boundary-layer Reynolds numbers, based on momentum thickness, varied between 650 and 2,300. Both laminar and turbulent velocity profiles were taken, and the effect of pressure gradient on the calibration was investigated.
PINS chemical identification software
Caffrey, Augustine J.; Krebs, Kenneth M.
2004-09-14
An apparatus and method for identifying a chemical compound. A neutron source delivers neutrons into the chemical compound. The nuclei of chemical elements constituting the chemical compound emit gamma rays upon interaction with the neutrons. The gamma rays are characteristic of the chemical elements constituting the chemical compound. A spectrum of the gamma rays is generated having a detection count and an energy scale. The energy scale is calibrated by comparing peaks in the spectrum to energies of pre-selected chemical elements in the spectrum. A least-squares fit completes the calibration. The chemical elements constituting the chemical compound can be readily determined, which then allows for identification of the chemical compound.
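The energy-scale calibration described in this patent can be sketched as a least-squares fit mapping measured peak channels to known gamma-ray energies. In the Python sketch below the reference energies are real gamma lines (40K at 1460.8 keV, hydrogen neutron capture at 2223.2 keV, carbon at 4438.9 keV), but the peak channel values are illustrative.

```python
import numpy as np

# Channels of identified peaks and known gamma-ray energies (keV) of
# pre-selected elements; channel values here are illustrative.
peak_channels = np.array([512.4, 1420.7, 2890.3])
known_energies = np.array([1460.8, 2223.2, 4438.9])

# Linear energy scale E = gain * channel + offset, via least-squares fit.
gain, offset = np.polyfit(peak_channels, known_energies, 1)

def channel_to_energy(ch):
    return gain * ch + offset

print(channel_to_energy(2000.0))  # energy assigned to channel 2000
```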
In situ calibration of inductively coupled plasma-atomic emission and mass spectroscopy
Braymen, Steven D.
1996-06-11
A method and apparatus for in situ addition calibration of an inductively coupled plasma atomic emission spectrometer or mass spectrometer using a precision gas metering valve to introduce a volatile calibration gas of an element of interest directly into an aerosol particle stream. The present in situ calibration technique is suitable for various remote, on-site sampling systems such as laser ablation or nebulization.
40 CFR 1065.330 - Exhaust-flow calibration.
Code of Federal Regulations, 2011 CFR
2011-07-01
... CONTROLS ENGINE-TESTING PROCEDURES Calibrations and Verifications Flow-Related Measurements § 1065.330... use other reference meters such as laminar flow elements, which are not commonly designed to withstand...
Method and Apparatus for Accurately Calibrating a Spectrometer
NASA Technical Reports Server (NTRS)
Youngquist, Robert C. (Inventor); Simmons, Stephen M. (Inventor)
2013-01-01
A calibration assembly for a spectrometer is provided. The assembly includes a spectrometer having n detector elements, where each detector element is assigned a predetermined wavelength value. A first source emitting first radiation is used to calibrate the spectrometer. A device is placed in the path of the first radiation to split the first radiation into a first beam and a second beam. The assembly is configured so that one of the first and second beams travels a path-difference distance longer than the other of the first and second beams. An output signal is generated by the spectrometer when the first and second beams enter the spectrometer. The assembly includes a controller operable for processing the output signal and adapted to calculate correction factors for the respective predetermined wavelength values assigned to each detector element.
Merei, Bilal; Badel, Pierre; Davis, Lindsey; Sutton, Michael A; Avril, Stéphane; Lessner, Susan M
2017-03-01
Finite element analyses using cohesive zone models (CZM) can be used to predict the fracture of atherosclerotic plaques, but this requires setting appropriate values of the model parameters. In this study, material parameters of a CZM were identified for the first time on two groups of mice (ApoE-/- and ApoE-/-Col8-/-) using the measured force-displacement curves acquired during delamination tests. To this end, a 2D finite-element model of each plaque was solved using an explicit integration scheme. Each constituent of the plaque was modeled with a neo-Hookean strain energy density function, and a CZM was used for the interface. The model parameters were calibrated by minimizing the quadratic deviation between the experimental force-displacement curves and the model predictions. The elastic parameter of the plaque and the CZM interfacial parameter were successfully identified for a cohort of 11 mice. The results revealed that only the elastic parameter was significantly different between the two groups, ApoE-/-Col8-/- plaques being less stiff than ApoE-/- plaques. Finally, this study demonstrated that a simple 2D finite element model with cohesive elements can reproduce the global plaque-peeling response fairly well. Future work will focus on understanding the main biological determinants of regional and inter-individual variations of the material parameters used in the model. Copyright © 2016 Elsevier Ltd. All rights reserved.
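The parameter-identification procedure, minimizing the quadratic deviation between measured and simulated force-displacement curves, can be sketched as a nonlinear least-squares problem. In the sketch below the closed-form "model" is a toy stand-in for the explicit FE peeling simulation, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def model_force(params, displacement):
    # Toy stand-in for the FE delamination simulation: a force that rises
    # and plateaus, governed by a stiffness-like and a cohesive-like term.
    stiffness, plateau = params
    return plateau * (1.0 - np.exp(-stiffness * displacement))

disp = np.linspace(0.0, 2.0, 50)           # displacement (mm)
f_exp = model_force([3.0, 5.0], disp)      # pretend experimental curve
f_exp += np.random.default_rng(0).normal(0.0, 0.05, disp.size)  # noise

def residuals(params):
    # Deviation between model prediction and experiment; least_squares
    # minimizes the sum of squares of these residuals.
    return model_force(params, disp) - f_exp

fit = least_squares(residuals, x0=[1.0, 1.0])
print(fit.x)  # identified parameters, close to [3, 5]
```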
3D aquifer characterization using stochastic streamline calibration
NASA Astrophysics Data System (ADS)
Jang, Minchul
2007-03-01
In this study, a new inverse approach, stochastic streamline calibration, is proposed. Using both a streamline concept and a stochastic technique, stochastic streamline calibration optimizes an identified field to fit given observation data in an exceptionally fast and stable fashion. In stochastic streamline calibration, streamlines are adopted as basic elements not only for describing fluid flow but also for identifying the permeability distribution. Based on the streamline-based inversion of Agarwal et al. [Agarwal B, Blunt MJ. Streamline-based method with full-physics forward simulation for history matching performance data of a North Sea field. SPE J 2003;8(2):171-80] and Wang and Kovscek [Wang Y, Kovscek AR. Streamline approach for history matching production data. SPE J 2000;5(4):353-62], permeability is modified along streamlines rather than at individual gridblocks. Permeabilities in the gridblocks through which a streamline passes are adjusted by a multiplicative factor chosen to match the flow and transport properties of the streamline. This enables the inverse process to achieve fast convergence. In addition, equipped with a stochastic module, the proposed technique calibrates the identified field in a stochastic manner while incorporating spatial information into the field. This prevents the inverse process from becoming stuck in local minima and helps the search for a globally optimized solution. Simulation results indicate that stochastic streamline calibration identifies an unknown permeability field exceptionally quickly. More notably, the identified permeability distribution reflects realistic geological features, which the original work of Agarwal et al. did not achieve because of its large modifications along streamlines aimed at matching production data only. The model constructed by stochastic streamline calibration forecast plume transport similar to that of a reference model. We can therefore expect the proposed approach to be applied to the construction of aquifer models and the forecasting of the aquifer performance measures of interest.
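The core update rule, multiplying permeabilities along a streamline by a common factor, can be sketched as follows; the field, the streamline path, and the damping bounds are illustrative, not taken from the paper.

```python
import numpy as np

# 2D permeability field (mD) and the gridblock indices along one streamline.
perm = np.full((50, 50), 100.0)
streamline_cells = [(i, int(20 + 0.2 * i)) for i in range(50)]

def adjust_along_streamline(perm, cells, mismatch):
    """Scale permeabilities along a streamline by a single factor.

    mismatch > 1 means the streamline's simulated travel time is too long,
    so permeability is scaled up (a simplified update rule)."""
    factor = float(np.clip(mismatch, 0.5, 2.0))  # damp extreme modifications
    for i, j in cells:
        perm[i, j] *= factor
    return perm

perm = adjust_along_streamline(perm, streamline_cells, mismatch=1.3)
print(perm[0, 20])  # 130.0: cell on the streamline was scaled up
```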
Tian, Bao-Guo; Si, Ji-Tao; Zhao, Yan; Wang, Hong-Tao; Hao, Ji-Ming
2007-01-01
This paper deals with the procedure and methodology that can be used to select the optimal treatment and disposal technology for municipal solid waste (MSW), and to provide practical and effective technical support to policy-making, on the basis of a study of solid waste management status and development trends in China and abroad. Focusing on various treatment and disposal technologies and processes for MSW, this study established a Monte Carlo mathematical model of cost minimization for MSW handling subject to environmental constraints. A new method of element stream analysis (for elements such as C, H, O, N, and S) in combination with economic stream analysis of MSW was developed. By following the streams of different treatment processes consisting of various techniques from generation, separation, transfer, transport, treatment, recycling, and disposal of the wastes, the element constitution as well as its economic distribution in terms of possibility functions was identified. Every technique step was evaluated economically. The Monte Carlo method was then applied for model calibration. Sensitivity analysis was also carried out to identify the most sensitive factors. Model calibration indicated that landfill with power generation from landfill gas was economically the optimal technology at the present stage, under the condition that more than 58% of the C, H, O, N, and S goes to landfill. Whether or not to generate electricity was the most sensitive factor. If landfilling costs increase, MSW separation treatment is recommended: screening first, followed by partial incineration and partial composting, with residue landfilling. The possibility of incineration being selected as the optimal technology was affected by city scale. For big cities and metropolitan areas with large MSW generation, the possibility of constructing large-scale incineration facilities increases, whereas for middle and small cities, the effectiveness of incinerating waste decreases.
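A Monte Carlo cost-minimization of the kind described can be sketched by sampling uncertain unit costs per technology and recording how often each option is cheapest; all cost figures and distributions below are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # Monte Carlo draws of uncertain unit costs (USD/tonne, illustrative)

# Possibility-function-like uncertainty approximated by triangular draws.
landfill_gas = rng.triangular(15, 25, 40, n)    # landfill + power generation
incineration = rng.triangular(30, 45, 70, n)
separation_mix = rng.triangular(25, 35, 60, n)  # screen, then partial
                                                # incineration/composting

costs = np.vstack([landfill_gas, incineration, separation_mix])
labels = ["landfill + LFG power", "incineration", "separation mix"]

# Probability that each option is cost-optimal across the sampled scenarios.
best = costs.argmin(axis=0)
for k, name in enumerate(labels):
    print(f"{name}: optimal in {np.mean(best == k):.1%} of draws")
```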
NASA Astrophysics Data System (ADS)
Jansen Van Rensburg, G. J.; Kok, S.; Wilke, D. N.
2017-10-01
Different roll pass reduction schedules have different effects on the through-thickness properties of hot-rolled metal slabs. In order to assess or improve a reduction schedule using the finite element method, a material model is required that captures the relevant deformation mechanisms and physics. The model should also report relevant field quantities to assess variations in material state through the thickness of a simulated rolled metal slab. In this paper, a dislocation density-based material model with recrystallization is presented and calibrated on the material response of a high-strength low-alloy steel. The model has the ability to replicate and predict material response to a fair degree thanks to the physically motivated mechanisms it is built on. An example study is also presented to illustrate the possible effect different reduction schedules could have on the through-thickness material state and the ability to assess these effects based on finite element simulations.
Assessment of the viscoelastic mechanical properties of polycarbonate urethane for medical devices.
Beckmann, Agnes; Heider, Yousef; Stoffel, Marcus; Markert, Bernd
2018-06-01
The underlying research work introduces a study of the mechanical properties of polycarbonate urethane (PCU), used in the construction of various medical devices. This comprises the discussion of a suitable material model, the application of elemental experiments to identify the related parameters, and the numerical simulation of the applied experiments in order to calibrate and validate the mathematical model. In particular, the model of choice for the simulation of the PCU response is the non-linear viscoelastic Bergström-Boyce material model, applied in the finite-element (FE) package Abaqus®. For the parameter identification, uniaxial tension and unconfined compression tests under in-laboratory physiological conditions were carried out. The geometry of the samples together with the applied loadings was simulated in Abaqus®, to ensure the suitability of the modelling approach. The obtained parameters show very good agreement between the numerical and the experimental results. Copyright © 2018 Elsevier Ltd. All rights reserved.
Mitten, H.T.; Lines, G.C.; Berenbrock, Charles; Durbin, T.J.
1988-01-01
Because of the imbalance between recharge and pumpage, groundwater levels declined as much as 100 ft in some areas of Borrego Valley, California, during 1945-80. As an aid to analyzing the effects of pumping on the groundwater system, a three-dimensional finite-element groundwater flow model was developed. The model was calibrated for both steady-state (1945) and transient-state (1946-79) conditions. For the steady-state calibration, hydraulic conductivities of the three aquifers were varied within reasonable limits to obtain an acceptable match between measured and computed hydraulic heads. Recharge from streamflow infiltration (4,800 acre-ft/yr) was balanced by computed evapotranspiration (3,900 acre-ft/yr) and computed subsurface outflow from the model area (930 acre-ft/yr). For the transient-state calibration, the volumes and distribution of net groundwater pumpage were estimated from land-use data and estimates of consumptive use for irrigated crops. The pumpage was assigned to the appropriate nodes in the model for each of seventeen 2-year time steps representing the period 1946-79. The specific yields of the three aquifers were varied within reasonable limits to obtain an acceptable match between measured and computed hydraulic heads. Groundwater pumpage input to the model was compensated by declines in both the computed evapotranspiration and the amount of groundwater in storage. (USGS)
Kumar, A.; Kalnaus, Sergiy; Simunovic, Srdjan; ...
2016-09-12
We performed finite element simulations of spherical indentation of Li-ion pouch cells. Our model fully resolves the different layers in the cell. The results of the layer-resolved models were compared to models available in the literature that treat the cell as an equivalent homogenized continuum material. Simulations were carried out for different sizes of the spherical indenter. Here, we show that calibration of a failure criterion for the cell in the homogenized model depends on the indenter size, whereas in the layer-resolved model, such dependency is greatly diminished.
NASA Astrophysics Data System (ADS)
Tuca, Silviu-Sorin; Badino, Giorgio; Gramse, Georg; Brinciotti, Enrico; Kasper, Manuel; Oh, Yoo Jin; Zhu, Rong; Rankl, Christian; Hinterdorfer, Peter; Kienberger, Ferry
2016-04-01
The application of scanning microwave microscopy (SMM) to extract calibrated electrical properties of cells and bacteria in air is presented. From the S11 images, after calibration, complex impedance and admittance images of Chinese hamster ovary cells and E. coli bacteria deposited on a silicon substrate have been obtained. The broadband capabilities of SMM have been used to characterize the bio-samples between 2 GHz and 20 GHz. The resulting calibrated cell and bacteria admittances at 19 GHz were Y_cell = 185 μS + j285 μS and Y_bacteria = 3 μS + j20 μS, respectively. A combined circuitry-3D finite element method EMPro model has been developed and used to investigate the frequency response of the complex impedance and admittance of the SMM setup. Based on a proposed parallel resistance-capacitance model, the equivalent conductance and parallel capacitance of the cells and bacteria were obtained from the SMM images. The influence of humidity and frequency on the cell conductance was experimentally studied. To compare the cell conductance with bulk water properties, we measured the imaginary part of the bulk water loss with a dielectric probe kit in the same frequency range, resulting in a high level of agreement.
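Extracting the proposed parallel resistance-capacitance parameters from a calibrated admittance is direct: the conductance G is the real part and the capacitance C is the imaginary part divided by the angular frequency. The sketch below applies this to the admittances reported at 19 GHz.

```python
import numpy as np

f = 19e9                        # measurement frequency (Hz)
w = 2 * np.pi * f
Y_cell = 185e-6 + 1j * 285e-6   # calibrated cell admittance (S), from the paper
Y_bact = 3e-6 + 1j * 20e-6      # calibrated bacteria admittance (S)

def parallel_rc(Y, w):
    G = Y.real       # equivalent parallel conductance (S)
    C = Y.imag / w   # equivalent parallel capacitance (F)
    return G, C

for name, Y in [("cell", Y_cell), ("bacteria", Y_bact)]:
    G, C = parallel_rc(Y, w)
    print(f"{name}: G = {G * 1e6:.1f} uS, C = {C * 1e15:.2f} fF")
# cell: ~2.39 fF; bacteria: ~0.17 fF
```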
Performance Report: A timeline for the synchrotron calibration of AXAF
NASA Technical Reports Server (NTRS)
Tananbaum, H. D.; Graessle, D.
1994-01-01
Presented herein are the known elements of the timeline for synchrotron reflectance calibrations of HRMA witness samples (Section 2). In Section 3, lists of measurements to be done on each witness flat are developed. The elements are then arranged into timelines for the three beamlines we expect to employ in covering the full 50-12,000 eV energy range (Section 4). Although the required AXAF operational range is only 0.1-10 keV, we must calibrate the extent to which radiation just outside this band may contaminate our in-band response. In Section 5, we describe the working relationships which exist with each of the beamlines, and estimate the time available for AXAF measurements on each. From the timelines and the available time, we calculate the number of flats which could be measured in full detail over the duration of the program for each beamline. A suggestion is made regarding a minimum required baseline of witness flats from each element coating run or qualification run to be used in the calibration. We intend that this suggestion open discussion of the issue of witness flat deployment.
Measuring the nonlinear elastic properties of tissue-like phantoms.
Erkamp, Ramon Q; Skovoroda, Andrei R; Emelianov, Stanislav Y; O'Donnell, Matthew
2004-04-01
A direct mechanical system simultaneously measuring external force and deformation of samples over a wide dynamic range is used to obtain force-displacement curves of tissue-like phantoms under plane strain deformation. These measurements, covering a wide deformation range, are then used to characterize the nonlinear elastic properties of the phantom materials. The model assumes incompressible media, in which several strain energy potentials are considered. Finite-element analysis is used to evaluate the performance of this material characterization procedure. The procedures developed allow calibration of nonlinear elastic phantoms for elasticity imaging experiments and finite-element simulations.
Large-N correlator systems for low frequency radio astronomy
NASA Astrophysics Data System (ADS)
Foster, Griffin
Low frequency radio astronomy has entered a second golden age driven by the development of a new class of large-N interferometric arrays. The Low Frequency Array (LOFAR) and a number of redshifted HI Epoch of Reionization (EoR) arrays are currently undergoing commissioning and regularly observing. Future arrays of unprecedented sensitivity and resolution at low frequencies, such as the Square Kilometre Array (SKA) and the Hydrogen Epoch of Reionization Array (HERA), are in development. The combination of advancements in specialized field programmable gate array (FPGA) hardware for signal processing, computing and graphics processing unit (GPU) resources, and new imaging and calibration algorithms has opened up the oft underused radio band below 300 MHz. These interferometric arrays require efficient implementation of digital signal processing (DSP) hardware to compute the baseline correlations. FPGA technology provides an optimal platform on which to develop new correlators. The significant growth in data rates from these systems requires automated software to reduce the correlations in real time before storing the data products to disk. Low frequency, widefield observations introduce a number of unique calibration and imaging challenges. The efficient implementation of FX correlators using FPGA hardware is presented. Two correlators have been developed, one for the 32-element BEST-2 array at Medicina Observatory and the other for the 96-element LOFAR station at Chilbolton Observatory. In addition, calibration and imaging software has been developed for each system which makes use of the radio interferometry measurement equation (RIME) to derive calibrations. A process for generating sky maps from widefield LOFAR station observations is presented. Shapelets, a method of modelling extended structures such as resolved sources and beam patterns, has been adapted for radio astronomy use to further improve system calibration. Scaling of computing technology allows for the development of larger correlator systems, which in turn allows for improvements in sensitivity and resolution. This requires new calibration techniques which account for a broad range of systematic effects.
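A minimal FX correlator, the architecture implemented here on FPGAs, channelises each element's voltage stream with an FFT (F stage) and then cross-multiplies and accumulates all antenna pairs (X stage). The NumPy sketch below illustrates the data flow only, not the FPGA implementation.

```python
import numpy as np

def fx_correlate(voltages, nchan):
    """Tiny FX correlator.

    voltages: (n_antennas, n_samples) real-valued time series.
    Returns (n_ant, n_ant, nchan) complex visibilities."""
    n_ant, n_samp = voltages.shape
    n_spectra = n_samp // nchan
    # F stage: per-antenna spectra, shape (n_ant, n_spectra, nchan).
    spectra = np.fft.fft(
        voltages[:, :n_spectra * nchan].reshape(n_ant, n_spectra, nchan),
        axis=2)
    # X stage: cross-multiply every antenna pair, accumulate over spectra.
    vis = np.einsum("ask,bsk->abk", spectra, spectra.conj()) / n_spectra
    return vis

rng = np.random.default_rng(0)
v = rng.normal(size=(4, 4096))           # 4 elements observing a noise sky
print(fx_correlate(v, nchan=64).shape)   # (4, 4, 64)
```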
Method for in-situ restoration of platinum resistance thermometer calibration
Carroll, Radford M.
1989-01-01
A method is provided for in-situ restoration of platinum resistance thermometers (PRT's) that have undergone surface oxide contamination and/or strain-related damage causing decalibration. The method, which may be automated using a programmed computer control arrangement, consists of applying a dc heating current to the resistive sensing element of the PRT of sufficient magnitude to heat the element to an annealing temperature and maintaining the temperature for a specified period to restore the element to a stress-free calibration condition. The process anneals the sensing element of the PRT without subjecting the entire PRT assembly to the annealing temperature and may be used in the periodic maintenance of installed PRT's.
Method for in-situ restoration of platinum resistance thermometer calibration
Carroll, R.M.
1987-10-23
A method is provided for in-situ restoration of platinum resistance thermometers (PRT's) that have undergone surface oxide contamination and/or strain-related damage causing decalibration. The method, which may be automated using a programmed computer control arrangement, consists of applying a dc heating current to the resistive sensing element of the PRT of sufficient magnitude to heat the element to an annealing temperature and maintaining the temperature for a specified period to restore the element to a stress-free calibration condition. The process anneals the sensing element of the PRT without subjecting the entire PRT assembly to the annealing temperature and may be used in the periodic maintenance of installed PRT's. 1 fig.
Itoh, Nobuyasu; Yamazaki, Taichi; Sato, Ayako; Numata, Masahiko; Takatsu, Akiko
2014-01-01
We examined the reliability of a certified reference material (CRM) for urea (NMIJ CRM 6006-a) as a calibrant for N, C, and H in elemental analyzers. Only the N content of this CRM is provided as an indicative value. To estimate the C and H contents of the urea CRM, we took into account the purity of the urea and the presence of other identified impurities. When we examined the use of various masses of the calibrant (0.2 to 2 mg), we unexpectedly observed low signal intensities for H and N at small masses; these plateaued at about 2 mg. We therefore analyzed four amino acid CRMs and four food CRMs on a 2-mg scale with the urea CRM as the calibrant. For the amino acid CRMs, the differences between the analytical and theoretical contents (≤0.0026 kg/kg) were acceptable, with good repeatability (≤0.0013 kg/kg in standard deviation; n = 4). For the food CRMs, repeatabilities comparable to those obtained with the amino acid CRMs (≤0.0025 kg/kg in standard deviation; n = 4) were obtained. The urea CRM can therefore be used as a reliable calibrant for C, H, and N in an elemental analyzer.
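The theoretical C, H, and N contents referred to here follow directly from the stoichiometry of urea, CO(NH2)2. The short sketch below computes the mass fractions from standard atomic weights; these are the target values an elemental-analyzer calibration would aim to reproduce.

```python
# Theoretical N, C, H mass fractions of urea, CO(NH2)2, from standard
# atomic weights (g/mol).
M = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
molar_mass = M["C"] + M["O"] + 2 * M["N"] + 4 * M["H"]   # ~60.06 g/mol

for element, count in [("N", 2), ("C", 1), ("H", 4)]:
    frac = count * M[element] / molar_mass
    print(f"{element}: {frac:.4f} kg/kg")
# N: 0.4665, C: 0.2000, H: 0.0671
```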
Sulfur Geochemical Analysis and Interpretation with ChemCam on the Curiosity Rover
NASA Astrophysics Data System (ADS)
Clegg, S. M.; Anderson, R. B.; Frydenvang, J.; Forni, O.; Newsom, H. E.; Blaney, D. L.; Maurice, S.; Wiens, R. C.
2017-12-01
The Curiosity rover has encountered many forms of sulfur including calcium sulfate veins [1], hydrated Mg sulfates, and Fe sulfates along the traverse through Gale crater. A new SO3 calibration model for the remote Laser-Induced Breakdown Spectroscopy (LIBS) technique used by the ChemCam instrument enables improved quantitative analysis of SO3, which has not been previously reported by ChemCam on a routine or quantitative basis. In this paper, the details of this new LIBS calibration model will be described and applied to many disparate Mars targets. Among them, Mavor contains a calcium sulfate vein surrounded by bedrock. In contrast, Jake M. is a float rock, Wernecke is a bedrock, Cumberland and Windjana are drill targets. In 2015 the ChemCam instrument team completed a re-calibration of major elements based on a significantly expanded set of >500 geochemical standards using the ChemCam testbed at Los Alamos National Laboratory [2]. In addition to these standards, the SO3 compositional range was recently extended with a series of doped samples containing various mixtures of Ca- and Mg-sulfate with basalt BHVO2. Spectra from these standards were processed per [4]. Calibration and Mars spectra were converted to peak-area-summed LIBS spectra that enables the SO3 calibration. These peak-area spectra were used to generate three overlapping partial least squares (PLS1) calibration sub-models as described by Anderson et al. [3, 5]. ChemCam analysis of Mavor involved a 3x3 raster in which locations 5 and 6 primarily probed Ca-sulfate material. The new ChemCam SO3 compositions for Mavor 5 and Mavor 6 are 48.6±1.2 and 50.3±1.2 wt% SO3, respectively. The LIBS spectra also recorded the presence of other elements that are likely responsible for the departure from pure Ca-sulfate chemistry. On the low-abundance side, the remaining 7 Mavor locations, Jake M., Cumberland, Windjana, and Wernecke all contain much lower SO3, between 1.4±0.5 wt% and 2.3±0.3 wt% SO3. [1] Nachon et al. J. Geophys. Res. Planets, doi:10.1002/2013JE004588. [2] Clegg et al. Spectrochimica Acta B, 2017, [3] Anderson et al. Spectrochimica Acta B, 2017. [4] Wiens et al. Spectrochimica Acta B, 2013, 82, 1-17, [5] Anderson et al. 3rd Planetary Data Workshop, abst. 7061, 2017.
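The overlapping PLS1 sub-model scheme cited above can be sketched as follows: train sub-models on restricted composition ranges, then route a spectrum to the sub-model whose range contains a first-pass estimate. The data, ranges, component count, and blending rule in this Python sketch are illustrative simplifications, not the published ChemCam pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 300))     # peak-area spectra of standards (toy data)
y = rng.uniform(0, 55, size=120)    # SO3 wt% of the standards (toy data)

full = PLSRegression(n_components=3).fit(X, y)

# Three overlapping composition sub-models, echoing the sub-model idea.
ranges = {"low": (0.0, 5.0), "mid": (2.0, 25.0), "high": (15.0, 55.0)}
submodels = {}
for name, (lo, hi) in ranges.items():
    sel = (y >= lo) & (y <= hi)
    submodels[name] = PLSRegression(n_components=3).fit(X[sel], y[sel])

def predict_so3(spectrum):
    # First-pass estimate with the full-range model, then refine with the
    # sub-model whose range contains that estimate (simplified blending).
    est = float(full.predict(spectrum.reshape(1, -1)).ravel()[0])
    for name, (lo, hi) in ranges.items():
        if lo <= est <= hi:
            return float(submodels[name].predict(spectrum.reshape(1, -1)).ravel()[0])
    return est

print(predict_so3(X[0]))
```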
Data Processing for Atmospheric Phase Interferometers
NASA Technical Reports Server (NTRS)
Acosta, Roberto J.; Nessel, James A.; Morabito, David D.
2009-01-01
This paper presents a detailed discussion of calibration procedures used to analyze data recorded from a two-element atmospheric phase interferometer (API) deployed at Goldstone, California. In addition, we describe the data products derived from those measurements that can be used for site intercomparison and atmospheric modeling. Simulated data is used to demonstrate the effectiveness of the proposed algorithm and as a means for validating our procedure. A study of the effect of block size filtering is presented to justify our process for isolating atmospheric fluctuation phenomena from other system-induced effects (e.g., satellite motion, thermal drift). A simulated 24 hr interferometer phase data time series is analyzed to illustrate the step-by-step calibration procedure and desired data products.
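Block-size filtering of the kind studied here can be sketched as a moving-average high-pass: subtract a block-averaged baseline so that slow system effects (satellite motion, thermal drift) drop out and the fast atmospheric fluctuations remain. The block length and the synthetic signal below are illustrative, not the Goldstone data.

```python
import numpy as np

def isolate_fluctuations(phase, block_size):
    """Remove slow system effects by subtracting a moving-average baseline;
    the residual is attributed to atmospheric phase fluctuations.
    block_size sets the cutoff timescale (in samples)."""
    kernel = np.ones(block_size) / block_size
    baseline = np.convolve(phase, kernel, mode="same")
    return phase - baseline

# 24 h of 1 Hz phase data: slow diurnal drift plus fast atmospheric noise.
t = np.arange(86400)
rng = np.random.default_rng(2)
phase = 5.0 * np.sin(2 * np.pi * t / 86400) + 0.3 * rng.normal(size=t.size)

fluct = isolate_fluctuations(phase, block_size=600)  # 10-minute blocks
print(fluct.std())  # close to the injected 0.3 fluctuation level
```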
Aligning physical elements with persons' attitude: an approach using Rasch measurement theory
NASA Astrophysics Data System (ADS)
Camargo, F. R.; Henson, B.
2013-09-01
Affective engineering uses mathematical models to convert the information obtained from persons' attitudes to physical elements into an ergonomic design. However, applications in the domain have in many cases not met measurement assumptions. This paper proposes a novel approach based on Rasch measurement theory to overcome the problem. The research demonstrates that if data fit the model, further variables can be added to a scale. An empirical study was designed to determine the range of compliance within which consumers could obtain an impression of a moisturizer cream when touching product containers. Persons, variables, and stimulus objects were parameterised independently on a linear continuum. The results showed that a calibrated scale preserves comparability while incorporating further variables.
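The dichotomous Rasch model underlying this approach places persons and items on one linear continuum: the probability that person n endorses item i depends only on the difference between the person parameter θ_n and the item parameter δ_i. This separability is what licenses adding further variables to a calibrated scale while preserving comparability.

```latex
P(X_{ni} = 1 \mid \theta_n, \delta_i)
  = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}
```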
The Submillimeter Wave Electron Cyclotron Emission Diagnostic for the Alcator C-Mod Tokamak.
NASA Astrophysics Data System (ADS)
Hsu, Thomas C.
This thesis describes the engineering design, construction, and operation of a high spatial resolution submillimeter wave diagnostic for electron temperature measurements on Alcator C-Mod. Alcator C-Mod is a high performance compact tokamak capable of producing diverted, shaped plasmas with a major radius of 0.67 meters, a minor radius of 0.21 meters, and a plasma current of 3 MA. The maximum toroidal field is 9 Tesla on the magnetic axis. The ECE diagnostic includes three primary components: a 10.8 meter quasioptical transmission line, a rapid scanning Michelson interferometer, and a vacuum compatible calibration source. Due to the compact size and high field of the tokamak, the ECE system was designed to have a spectral range from 100 to 1000 GHz with a frequency resolution of 5 GHz and a spatial resolution of one centimeter. The beamline uses all-reflecting optical elements, including two off-axis parabolic mirrors with diameters of 20 cm and focal lengths of 2.7 meters. Techniques are presented for grinding and finishing the mirrors to sufficient surface quality to permit optical alignment of the system. Measurements of the surface figure confirm the design goal of 1/4 wavelength accuracy at 1000 GHz. Extensive broadband tests of the spatial resolution of the ECE system are compared to a fundamental mode Gaussian beam model, a three dimensional vector diffraction model, and a geometric optics model. The Michelson interferometer is a rapid scanning polarization instrument which has an apodized frequency resolution of 5 GHz and a minimum scan period of 7.5 milliseconds. The novel features of this instrument include the use of precision linear bearings to stabilize the moving mirror and active counterbalancing to reduce vibration. Beam collimation within the instrument is done with off-axis parabolic mirrors. The Michelson also includes a 2-50 mm variable aperture and two signal attenuators constructed from crossed wire grid polarizers. To make full use of the advantages of an evacuated optical path, a dual element in-situ calibration source was designed and constructed. The calibration source operates as a thermal blackbody at temperatures from 77 K to 373 K and base pressures down to 10^-7 torr. The top element of the source serves as a room temperature reference, while the lower element can be heated or cooled by the circulation of an appropriate fluid through the internal heat transfer tubes. The submillimeter absorbing bodies of both elements are made from arrays of knife edge tiles cast from thermally conductive, alumina filled epoxy. A boundary element heat transfer model of the tiles was constructed which indicates temperature uniformity within 1.5 percent. Operation during the 1993 startup of Alcator C-Mod demonstrates the excellent potential of the new instruments.
Calibration of the MSL/ChemCam/LIBS Remote Sensing Composition Instrument
NASA Technical Reports Server (NTRS)
Wiens, R. C.; Maurice, S.; Bender, S.; Barraclough, B. L.; Cousin, A.; Forni, O.; Ollila, A.; Newsom, H.; Vaniman, D.; Clegg, S.;
2011-01-01
The ChemCam instrument suite on board the 2011 Mars Science Laboratory (MSL) Rover, Curiosity, will provide remote-sensing composition information for rock and soil samples within seven meters of the rover using a laser-induced breakdown spectroscopy (LIBS) system, and will provide context imaging with a resolution of 0.10 mradians using the remote micro-imager (RMI) camera. The high resolution is needed to image the small analysis footprint of the LIBS system, at 0.2-0.6 mm diameter. This fine scale analytical capability will enable remote probing of stratigraphic layers or other small features the size of "blueberries" or smaller. ChemCam is intended for rapid survey analyses within 7 m of the rover, with each measurement taking less than 6 minutes. Repeated laser pulses remove dust coatings and provide depth profiles through weathering layers, allowing detailed investigation of rock varnish features as well as analysis of the underlying pristine rock composition. The LIBS technique uses brief laser pulses greater than 10 MW/square mm to ablate and electrically excite material from the sample of interest. The plasma emits photons with wavelengths characteristic of the elements present in the material, permitting detection and quantification of nearly all elements, including the light elements H, Li, Be, B, C, N, O. ChemCam LIBS projects 14 mJ of 1067 nm photons on target and covers a spectral range of 240-850 nm with resolutions between 0.15 and 0.60 nm FWHM. The Nd:KGW laser is passively cooled and is tuned to provide maximum power output from -10 to 0 C, though it can operate at 20% degraded energy output at room temperature. Preliminary calibrations were carried out on the flight model (FM) in 2008. However, the detectors were replaced in 2009, and final calibrations occurred in April-June, 2010. This presentation describes the LIBS calibration and characterization procedures and results, and details plans for final analyses during rover system thermal testing, planned for early March.
Application and Analysis of Measurement Model for Calibrating Spatial Shear Surface in Triaxial Test
NASA Astrophysics Data System (ADS)
Zhang, Zhihua; Qiu, Hongsheng; Zhang, Xiedong; Zhang, Hang
2017-12-01
The discrete element method has great advantages in simulating the contacts, fractures, large displacements, and deformations between particles. In order to analyze the spatial distribution of the shear surface in the three-dimensional triaxial test, a measurement model is inserted into the numerical triaxial model, which is generated by a weighted-average assembling method. Because the internal shear surface is not visible in the laboratory, it is largely insufficient to judge its trend only from the superficial cracks of the sheared sample; the measurement model is therefore introduced. The trend of the internal shear zone is analyzed according to the variations of porosity, coordination number, and volumetric strain in each layer. As a case study at a confining stress of 0.8 MPa, the spatial shear surface is calibrated against the rotated-particle distribution and the theoretical value, and is characterized by an increase in porosity, a decrease in coordination number, and an increase in volumetric strain, which indicates that the measurement model used in the three-dimensional model is applicable.
NASA Astrophysics Data System (ADS)
Verbus, J. R.; Rhyne, C. A.; Malling, D. C.; Genecov, M.; Ghosh, S.; Moskowitz, A. G.; Chan, S.; Chapman, J. J.; de Viveiros, L.; Faham, C. H.; Fiorucci, S.; Huang, D. Q.; Pangilinan, M.; Taylor, W. C.; Gaitskell, R. J.
2017-04-01
We propose a new technique for the calibration of nuclear recoils in large noble element dual-phase time projection chambers used to search for WIMP dark matter in the local galactic halo. This technique provides an in situ measurement of the low-energy nuclear recoil response of the target media using the measured scattering angle between multiple neutron interactions within the detector volume. The low-energy reach and reduced systematics of this calibration have particular significance for the low-mass WIMP sensitivity of several leading dark matter experiments. Multiple strategies for improving this calibration technique are discussed, including the creation of a new type of quasi-monoenergetic neutron source with a minimum possible peak energy of 272 keV. We report results from a time-of-flight-based measurement of the neutron energy spectrum produced by an Adelphi Technology, Inc. DD108 neutron generator, confirming its suitability for the proposed nuclear recoil calibration.
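For context, the recoil energy reconstructed from a measured scattering angle follows from standard two-body elastic kinematics, with the centre-of-mass and lab angles nearly equal for a target as heavy as xenon. The sketch below is that textbook relation, not code from the experiment.

```python
import numpy as np

def nuclear_recoil_energy(E_n_keV, cos_theta_cm, A=131.3):
    """Elastic-scattering recoil energy (keV) for a neutron of energy
    E_n_keV off a nucleus of mass number A (xenon by default), from
    standard two-body kinematics; theta_cm is the centre-of-mass
    scattering angle."""
    return E_n_keV * (4 * A / (1 + A) ** 2) * (1 - cos_theta_cm) / 2

# 2.45 MeV D-D neutrons on xenon: maximum (back-scatter) recoil energy.
print(nuclear_recoil_energy(2450.0, cos_theta_cm=-1.0))  # ~73.5 keV
```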
NASA Technical Reports Server (NTRS)
Smith, N. J. (Inventor)
1968-01-01
A pressure switch assembly comprising a body portion and a switch mechanism having a contact element operable between opposite limit positions is described. A diaphragm chamber is provided in the body portion which mounts therein a system diaphragm and a calibration diaphragm which are of generally the same configuration and having outer faces conforming to the inner and outer walls of the diaphragm chamber. The space between the inner faces of the diaphragms defines a first chamber section and the space between the outer face of one of the diaphragms and the outer wall of the diaphragm chamber defines a second chamber section. The body portion includes a system pressure port communicating with one of the chamber sections and a calibration pressure port communicating with the other chamber section. An actuator connected to one of the diaphragms and the contact element of the switch operates upon pressure change in the diaphragm sections to move said contact element between limit positions.
NASA Astrophysics Data System (ADS)
Toman, Blaza; Nelson, Michael A.; Bedner, Mary
2017-06-01
Chemical measurement methods are designed to promote accurate knowledge of a measurand or system. As such, these methods often allow elicitation of latent sources of variability and correlation in experimental data. They typically implement measurement equations that support quantification of effects associated with calibration standards and other known or observed parametric variables. Additionally, multiple samples and calibrants are usually analyzed to assess the accuracy of the measurement procedure and repeatability by the analyst. Thus, a realistic assessment of uncertainty for most chemical measurement methods is not purely bottom-up (based on the measurement equation) or top-down (based on the experimental design), but inherently contains elements of both. Confidence in results must be rigorously evaluated for the sources of variability in all of the bottom-up and top-down elements. This type of analysis presents unique challenges due to various statistical correlations among the outputs of measurement equations. One approach is to use a Bayesian hierarchical (BH) model, which is intrinsically rigorous, thus making it a straightforward method for use with complex experimental designs, particularly when correlations among data are numerous and difficult to elucidate or explicitly quantify. In simpler cases, careful analysis using GUM Supplement 1 (MC) methods augmented with random-effects meta-analysis yields results similar to a full BH model analysis. In this article we describe both approaches to rigorous uncertainty evaluation, using as examples measurements of 25-hydroxyvitamin D3 in solution reference materials via liquid chromatography with UV absorbance detection (LC-UV) and liquid chromatography with mass spectrometric detection using isotope dilution (LC-IDMS).
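A GUM Supplement 1 style Monte Carlo evaluation of the kind mentioned can be sketched by sampling each input of the measurement equation and propagating the draws. The single-point-calibration measurement equation and all numbers below are illustrative, not the certified values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000  # Monte Carlo trials (GUM Supplement 1 style propagation)

# Illustrative measurement equation for a single-point calibration:
#   c_sample = (A_sample / A_standard) * c_standard
A_sample = rng.normal(1.523, 0.004, N)     # peak area, sample (arb. units)
A_standard = rng.normal(1.498, 0.004, N)   # peak area, calibrant
c_standard = rng.normal(25.04, 0.08, N)    # calibrant concentration (ug/g)

c_sample = A_sample / A_standard * c_standard

print(f"estimate: {c_sample.mean():.3f} ug/g")
print(f"standard uncertainty: {c_sample.std(ddof=1):.3f} ug/g")
lo, hi = np.percentile(c_sample, [2.5, 97.5])
print(f"95% coverage interval: [{lo:.3f}, {hi:.3f}]")
```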
A non-orthogonal material model of woven composites in the preforming process
Zhang, Weizhao; Ren, Huaqing; Liang, Biao; ...
2017-05-04
Woven composites are considered as a promising material choice for lightweight applications. An improved non-orthogonal material model that can decouple the strong tension and weak shear behaviour of the woven composite under large shear deformation is proposed for simulating the preforming of woven composites. The tension, shear and compression moduli in the model are calibrated using the tension, bias-extension and bending experiments, respectively. The interaction between the composite layers is characterized by a sliding test. The newly developed material model is implemented in the commercial finite element software LS-DYNA® and validated by a double dome study.
Thermal Modeling of Al-Al and Al-Steel Friction Stir Spot Welding
NASA Astrophysics Data System (ADS)
Jedrasiak, P.; Shercliff, H. R.; Reilly, A.; McShane, G. J.; Chen, Y. C.; Wang, L.; Robson, J.; Prangnell, P.
2016-09-01
This paper presents a finite element thermal model for similar and dissimilar alloy friction stir spot welding (FSSW). The model is calibrated and validated using instrumented lap joints in Al-Al and Al-Fe automotive sheet alloys. The model successfully predicts the thermal histories for a range of process conditions. The resulting temperature histories are used to predict the growth of intermetallic phases at the interface in Al-Fe welds. Temperature predictions were used to study the evolution of hardness of a precipitation-hardened aluminum alloy during post-weld aging after FSSW.
In situ calibration of inductively coupled plasma-atomic emission and mass spectroscopy
Braymen, S.D.
1996-06-11
A method and apparatus are disclosed for in situ addition calibration of an inductively coupled plasma atomic emission spectrometer or mass spectrometer using a precision gas metering valve to introduce a volatile calibration gas of an element of interest directly into an aerosol particle stream. The present in situ calibration technique is suitable for various remote, on-site sampling systems such as laser ablation or nebulization. 5 figs.
Uncertainty quantification in capacitive RF MEMS switches
NASA Astrophysics Data System (ADS)
Pax, Benjamin J.
Development of radio frequency microelectromechanical systems (RF MEMS) has led to novel approaches to implement electrical circuitry. The introduction of capacitive MEMS switches, in particular, has shown promise for low-loss, low-power devices. However, the promise of MEMS switches has not yet been completely realized. RF-MEMS switches are known to fail after only a few months of operation, and nominally similar designs show wide variability in lifetime. Modeling switch operation using nominal or as-designed parameters cannot predict the statistical spread in the number of cycles to failure, and probabilistic methods are necessary. A Bayesian framework for calibration, validation, and prediction offers an integrated approach to quantifying the uncertainty in predictions of MEMS switch performance. The objective of this thesis is to use the Bayesian framework to predict the creep-related deflection of the PRISM RF-MEMS switch over several thousand hours of operation. The PRISM switch used in this thesis is the focus of research at Purdue's PRISM center and is a capacitive contacting RF-MEMS switch. It employs a fixed-fixed nickel membrane which is electrostatically actuated by applying voltage between the membrane and a pull-down electrode. Creep plays a central role in the reliability of this switch. The focus of this thesis is on the creep model, which is calibrated against experimental data measured for a frog-leg varactor fabricated and characterized at Purdue University. Creep plasticity is modeled using plate element theory, with electrostatic forces generated either from parallel-plate approximations where appropriate or by solving for the full 3D potential field. For the latter, the structure-electrostatics interaction is determined through an immersed boundary method. A probabilistic framework using generalized polynomial chaos (gPC) is used to create surrogate models to mitigate the costly full-physics simulations, and Bayesian calibration and forward propagation of uncertainty are performed using this surrogate model. The first step in the analysis is Bayesian calibration of the creep-related parameters. A computational model of the frog-leg varactor is created, and the computed creep deflection of the device over 800 hours is used to generate a surrogate model using a polynomial chaos expansion in Hermite polynomials. Parameters related to the creep phenomenon are calibrated using Bayesian calibration with experimental deflection data from the frog-leg device. The calibrated input distributions are subsequently propagated through a surrogate gPC model for the PRISM MEMS switch to produce probability density functions of the maximum deflection of the membrane over several thousand hours. The assumptions related to the Bayesian calibration and forward propagation are analyzed to determine the sensitivity of the calibrated input distributions and the propagated output distributions of the PRISM device to these assumptions. The work is an early step in understanding the role of geometric variability, model uncertainty, numerical errors, and experimental uncertainties in the long-term performance of RF-MEMS.
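A one-dimensional version of the Hermite polynomial-chaos surrogate described here can be sketched by regressing model evaluations onto probabilists' Hermite polynomials and then propagating cheap Monte Carlo samples through the surrogate. The "expensive model" below is a toy stand-in for the full creep simulation.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def expensive_model(xi):
    # Toy stand-in for the full creep simulation, one uncertain input xi.
    return np.exp(0.3 * xi) + 0.1 * xi**2

# Build a degree-4 Hermite (probabilists') PC surrogate by regression.
rng = np.random.default_rng(0)
xi_train = rng.normal(size=200)            # standard-normal germ samples
y_train = expensive_model(xi_train)
V = hermevander(xi_train, deg=4)           # design matrix of He_k(xi)
coef, *_ = np.linalg.lstsq(V, y_train, rcond=None)

# Cheap forward propagation of uncertainty through the surrogate.
xi_mc = rng.normal(size=100_000)
y_mc = hermevander(xi_mc, deg=4) @ coef
print(y_mc.mean(), y_mc.std())             # surrogate-based output statistics
```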
8s, a numerical simulator of the challenging optical calibration of the E-ELT adaptive mirror M4
NASA Astrophysics Data System (ADS)
Briguglio, Runa; Pariani, Giorgio; Xompero, Marco; Riccardi, Armando; Tintori, Matteo; Lazzarini, Paolo; Spanò, Paolo
2016-07-01
8s stands for Optical Test TOwer Simulator (with 8 read as in the Italian 'otto'): it is a simulation tool for the optical calibration of the E-ELT deformable mirror M4 on its test facility. It has been developed to identify possible criticalities in the procedure, evaluate solutions, and estimate the sensitivity to environmental noise. The simulation system is composed of the finite element model of the tower, the analytic influence functions of the actuators, and the ray-tracing propagation of the laser beam through the optical surfaces. The tool delivers simulated phasemaps of M4, associated with the current system status: actuator commands, optics alignment and position, beam vignetting, bench temperature, and vibrations. It is possible to simulate a single step of the optical test of M4 by changing the system parameters according to a calibration procedure and to collect the associated phasemap for performance evaluation. In this paper we describe the simulation package and outline the proposed calibration procedure for M4.
NASA Astrophysics Data System (ADS)
Shih, D.; Yeh, G.
2009-12-01
This paper applies two numerical approximations, the particle tracking technique and the Galerkin finite element method, to solve the diffusive wave equation in both one-dimensional and two-dimensional flow simulations. The finite element method is one of the most common approaches in numerical problems. It can obtain accurate solutions, but calculation times may be rather extensive. The particle tracking technique, using either single-velocity or average-velocity tracks to efficiently perform advective transport, can use larger time-step sizes than the finite element method and thereby save significant computational time. Comparisons of the alternative approximations are presented in this poster. We adopt the model WASH123D for this work. WASH123D, an integrated multimedia, multi-process, physics-based computational model suitable for various spatial-temporal scales, was first developed by Yeh et al. in 1998. The model has evolved in design capability and flexibility, and has been used for model calibrations and validations over the course of many years. In order to deliver a local hydrological model for Taiwan, the Taiwan Typhoon and Flood Research Institute (TTFRI) is working with Prof. Yeh to develop the next version of WASH123D. The work of our preliminary cooperation is also sketched in this poster.
NASA Astrophysics Data System (ADS)
Bolis, A.; Cantwell, C. D.; Moxey, D.; Serson, D.; Sherwin, S. J.
2016-09-01
A hybrid parallelisation technique for distributed memory systems is investigated for a coupled Fourier-spectral/hp element discretisation of domains characterised by geometric homogeneity in one or more directions. The performance of the approach is mathematically modelled in terms of operation count and communication costs for identifying the most efficient parameter choices. The model is calibrated to target a specific hardware platform after which it is shown to accurately predict the performance in the hybrid regime. The method is applied to modelling turbulent flow using the incompressible Navier-Stokes equations in an axisymmetric pipe and square channel. The hybrid method extends the practical limitations of the discretisation, allowing greater parallelism and reduced wall times. Performance is shown to continue to scale when both parallelisation strategies are used.
NASA Technical Reports Server (NTRS)
Malina, Roger F.; Jelinsky, Patrick; Bowyer, Stuart
1986-01-01
The calibration facilities and techniques for the Extreme Ultraviolet Explorer (EUVE) from 44 to 2500 A are described. Key elements include newly designed radiation sources and a collimated monochromatic EUV beam. Sample results for the calibration of the EUVE filters, detectors, gratings, collimators, and optics are summarized.
Zhang, Man; Zhou, Zhuhuang; Wu, Shuicai; Lin, Lan; Gao, Hongjian; Feng, Yusheng
2015-12-21
This study aims at improving the accuracy of temperature simulation for temperature-controlled radio frequency ablation (RFA). We proposed a new voltage-calibration method in the simulation and investigated the feasibility of a hyperbolic bioheat equation (HBE) in the RFA simulation with longer durations and higher power. A total of 40 RFA experiments was conducted in a liver-mimicking phantom. Four mathematical models with multipolar electrodes were developed by the finite element method in COMSOL software: HBE with/without voltage calibration, and the Pennes bioheat equation (PBE) with/without voltage calibration. The temperature-varied voltage calibration used in the simulation was calculated from an experimental power output and temperature-dependent resistance of liver tissue. We employed the HBE in simulation by considering the delay time τ of 16 s. First, for simulations by each kind of bioheat equation (PBE or HBE), we compared the differences between the temperature-varied voltage-calibration and the fixed-voltage values used in the simulations. Then, the comparisons were conducted between the PBE and the HBE in the simulations with temperature-varied voltage calibration. We verified the simulation results by experimental temperature measurements on nine specific points of the tissue phantom. The results showed that: (1) the proposed voltage-calibration method improved the simulation accuracy of temperature-controlled RFA for both the PBE and the HBE, and (2) for temperature-controlled RFA simulation with the temperature-varied voltage calibration, the HBE method was 0.55 °C more accurate than the PBE method. The proposed temperature-varied voltage calibration may be useful in temperature field simulations of temperature-controlled RFA. Besides, the HBE may be used as an alternative in the simulation of long-duration high-power RFA.
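The voltage-calibration idea, driving the simulation with a voltage computed from the measured power output and the temperature-dependent tissue resistance, reduces to V = sqrt(P * R(T)). The sketch below illustrates this; the resistance coefficients are illustrative assumptions, not the paper's measured phantom values.

```python
import numpy as np

def calibrated_voltage(P_watts, T_celsius):
    """Temperature-varied voltage boundary condition for the RFA model:
    V = sqrt(P * R(T)), with an assumed linear temperature dependence of
    the liver-phantom resistance (coefficients are illustrative)."""
    R0, alpha = 80.0, -0.15          # ohm at 20 C; ohm per degree C
    R = R0 + alpha * (T_celsius - 20.0)
    return np.sqrt(P_watts * R)

# Example: 10 W output while tissue near the electrode sits at 60 C.
print(calibrated_voltage(10.0, 60.0))  # ~27.2 V applied in the FE model
```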
Williams, Brent A; Agarwal, Shikhar
2018-02-23
Prediction models such as the Seattle Heart Failure Model (SHFM) can help guide management of heart failure (HF) patients, but the SHFM has not been validated in the office environment. This retrospective cohort study assessed the predictive performance of the SHFM among patients with new or pre-existing HF in the context of an office visit. Methods and Results: SHFM elements were ascertained through electronic medical records at an office visit. The primary outcome was all-cause mortality. A "warranty period" for the baseline SHFM risk estimate was sought by examining predictive performance over time through a series of landmark analyses. Discrimination and calibration were estimated according to the proposed warranty period. Low- and high-risk thresholds were proposed based on the distribution of SHFM estimates. Among 26,851 HF patients, 14,380 (54%) died over a mean 4.7-year follow-up period. The SHFM lost predictive performance over time, with C=0.69 within 3 months and C<0.65 beyond 12 months from baseline. The diminishing predictive value was attributed to modifiable SHFM elements. Discrimination (C=0.66) and calibration for 12-month mortality were acceptable. A low-risk threshold of ∼5% mortality risk within 12 months reflects the 10% of HF patients in the office setting with the lowest risk. The SHFM has utility in the office environment.
An IEEE 1451.1 Architecture for ISHM Applications
NASA Technical Reports Server (NTRS)
Morris, Jon A.; Turowski, Mark; Schmalzel, John L.; Figueroa, Jorge F.
2007-01-01
The IEEE 1451.1 Standard for a Smart Transducer Interface defines a common network information model for connecting and managing smart elements in control and data acquisition networks using network-capable application processors (NCAPs). The Standard is a network-neutral design model that is easily ported across operating systems and physical networks for implementing complex acquisition and control applications by simply plugging in the appropriate network level drivers. To simplify configuration and tracking of transducer and actuator details, the family of 1451 standards defines a Transducer Electronic Data Sheet (TEDS) that is associated with each physical element. The TEDS contains all of the pertinent information about the physical operations of a transducer (such as operating regions, calibration tables, and manufacturer information), which the NCAP uses to configure the system to support a specific transducer. The Integrated Systems Health Management (ISHM) group at NASA's John C. Stennis Space Center (SSC) has been developing an ISHM architecture that utilizes IEEE 1451.1 as the primary configuration and data acquisition mechanism for managing and collecting information from a network of distributed intelligent sensing elements. This work has involved collaboration with other NASA centers, universities and aerospace industries to develop IEEE 1451.1 compliant sensors and interfaces tailored to support health assessment of complex systems. This paper and presentation describe the development and implementation of an interface for the configuration, management and communication of data, information and knowledge generated by a distributed system of IEEE 1451.1 intelligent elements monitoring a rocket engine test system. In this context, an intelligent element is defined as one incorporating support for the IEEE 1451.x standards and additional ISHM functions. Our implementation supports real-time collection of both measurement data (raw ADC counts and converted engineering units) and health statistics produced by each intelligent element. The handling of configuration, calibration and health information is automated by using the TEDS in combination with other electronic data sheets extensions to convey health parameters. By integrating the IEEE 1451.1 Standard for a Smart Transducer Interface with ISHM technologies, each element within a complex system becomes a highly flexible computation engine capable of self-validation and performing other measures of the quality of information it is producing.
Dunning, C.P.; Feinstein, D.T.
2004-01-01
A single-layer, steady-state analytic element model was constructed to simulate shallow ground-water flow in the Menomonee Valley, an old industrial center southwest of downtown Milwaukee, Wisconsin. Project objectives were to develop an understanding of the shallow ground-water flow system and identify primary receptors of recharge to the valley. The analytic element model simulates flow in an 18.3 m (60 ft) thick layer of estuarine and alluvial sediments and man-made fill that comprises the shallow aquifer across the valley. The thin, laterally extensive nature of the shallow aquifer suggests that horizontal flow predominates; thus the system can appropriately be modeled with the Dupuit-Forchheimer approximation in an analytic element model. The model was calibrated to the measured baseflow increase between two USGS gages on the Menomonee River, 90 head measurements taken in and around the valley during December 1999, and vertical gradients measured at five locations under the river and estuary in the valley. Recent construction of the Milwaukee Metropolitan Sewer District Inline Storage System (ISS) in the Silurian dolomite under the Menomonee Valley has locally lowered heads in the dolomite appreciably, below levels caused by historic pumping. The ISS is a regional hydraulic sink which removes water from the bedrock even during dry weather. The potential effect of dry-weather infiltration to the ISS on flow directions in the shallow aquifer was evaluated by adjusting the resistance of the line-sink strings representing the ISS in the model to allow infiltration from 0 to 100% of the reported 9,500 m3/d. The best fit to calibration targets was found between 60% (5,700 m3/d) and 80% (7,600 m3/d) of the reported dry-weather infiltration. At 60% infiltration, 65% of the recharge falling on the valley terminates at the ISS and 35% at the Menomonee River and estuary. At 80% infiltration, 73% of the recharge terminates at the ISS and 27% at the river and estuary. Model simulations suggest that the ISS has a greater influence on shallow ground-water flow in the eastern half of the valley than in the western half. Preliminary three-dimensional simulations using the numerical MODFLOW code show good agreement with the single-layer simulation and support its use in evaluating the shallow system. Copyright ASCE 2004.
A Study of Three Intrinsic Problems of the Classic Discrete Element Method Using Flat-Joint Model
NASA Astrophysics Data System (ADS)
Wu, Shunchuan; Xu, Xueliang
2016-05-01
Discrete element methods have been proven to offer a new avenue for obtaining the mechanics of geo-materials. The standard bonded-particle model (BPM), a classic discrete element method, has been applied to a wide range of problems related to rock and soil. However, three intrinsic problems are associated with using the standard BPM: (1) an unrealistically low unconfined compressive strength to tensile strength (UCS/TS) ratio, (2) an excessively low internal friction angle, and (3) a linear strength envelope, i.e., a low Hoek-Brown (HB) strength parameter m_i. After summarizing the underlying reasons for these problems through an analysis of previous researchers' work, the flat-joint model (FJM) is used to calibrate Jinping marble and is found to closely match its macro-properties. A parametric study is carried out to systematically evaluate the effect of the micro-parameters on these three macro-properties. The results indicate that (1) the UCS/TS ratio increases with increasing average coordination number (CN) and bond cohesion to tensile strength ratio, but it first decreases and then increases with increasing crack density (CD); (2) the HB strength parameter m_i has positive relationships to the crack density (CD), bond cohesion to tensile strength ratio, and local friction angle, but a negative relationship to the average coordination number (CN); (3) the internal friction angle increases as the crack density (CD), bond cohesion to tensile strength ratio, and local friction angle increase; (4) the residual friction angle has little effect on these three macro-properties and mainly influences post-peak behavior. Finally, a new calibration procedure is developed, which not only addresses these three problems but also considers the post-peak behavior.
Calibration under uncertainty for finite element models of masonry monuments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atamturktur, Sezer; Hemez, Francois; Unal, Cetin
2010-02-01
Historical unreinforced masonry buildings often include features such as load bearing unreinforced masonry vaults and their supporting framework of piers, fill, buttresses, and walls. The masonry vaults of such buildings are among the most vulnerable structural components and certainly among the most challenging to analyze. The versatility of finite element (FE) analyses in incorporating various constitutive laws, as well as practically all geometric configurations, has resulted in the widespread use of the FE method for the analysis of complex unreinforced masonry structures over the last three decades. However, an FE model is only as accurate as its input parameters, and there are two fundamental challenges while defining FE model input parameters: (1) material properties and (2) support conditions. The difficulties in defining these two aspects of the FE model arise from the lack of knowledge in the common engineering understanding of masonry behavior. As a result, engineers are unable to define these FE model input parameters with certainty, and, inevitably, uncertainties are introduced to the FE model.
NEST: a comprehensive model for scintillation yield in liquid xenon
Szydagis, M.; Barry, N.; Kazkaz, K.; ...
2011-10-03
Here, a comprehensive model for explaining scintillation yield in liquid xenon is introduced. We unify various definitions of work function which abound in the literature and incorporate all available data on electron recoil scintillation yield. This results in a better understanding of electron recoil, and facilitates an improved description of nuclear recoil. An incident gamma energy range of O(1 keV) to O(1 MeV) and electric fields between 0 and O(10 kV/cm) are incorporated into this heuristic model. We show results from a Geant4 implementation, but because the model has a few free parameters, implementation in any simulation package should be simple. We use a quasi-empirical approach, with an objective of improving detector calibrations and performance verification. The model will aid in the design and optimization of future detectors. This model is also easy to extend to other noble elements. In this paper we lay the foundation for an exhaustive simulation code which we call NEST (Noble Element Simulation Technique).
Montero-Chacón, Francisco; Cifuentes, Héctor; Medina, Fernando
2017-02-21
This work presents a lattice-particle model for the analysis of steel fiber-reinforced concrete (SFRC). In this approach, fibers are explicitly modeled and connected to the concrete matrix lattice via interface elements. The interface behavior was calibrated by means of pullout tests and a range for the bond properties is proposed. The model was validated with analytical and experimental results under uniaxial tension and compression, demonstrating the ability of the model to correctly describe the effect of fiber volume fraction and distribution on the fracture properties of SFRC. The lattice-particle model was integrated into a hierarchical homogenization-based scheme in which macroscopic material parameters are obtained from mesoscale simulations. Moreover, a representative volume element (RVE) analysis was carried out, and the results show that such an RVE does exist in the post-peak regime and until localization takes place. Finally, the multiscale upscaling strategy was successfully validated with three-point bending tests.
Accuracy and Resolution Analysis of a Direct Resistive Sensor Array to FPGA Interface
Oballe-Peinado, Óscar; Vidal-Verdú, Fernando; Sánchez-Durán, José A.; Castellanos-Ramos, Julián; Hidalgo-López, José A.
2016-01-01
Resistive sensor arrays are formed by a large number of individual sensors which are distributed in different ways. This paper proposes a direct connection between an FPGA and a resistive array distributed in M rows and N columns, without the need for analog-to-digital converters to obtain resistance values in the sensor, and where the conditioning circuit is reduced to a capacitor in each of the columns of the matrix. The circuit allows parallel measurement of the N resistors which form each of the rows of the array, eliminating the resistive crosstalk which is typical of these circuits. This is achieved by an addressing technique which does not require elements external to the FPGA. Although the typical resistive crosstalk between resistors which are measured simultaneously is eliminated, other elements that have an impact on the measurement of discharge times appear in the proposed architecture and, therefore, affect the uncertainty in resistance value measurements; these elements need to be studied. Finally, the performance of different calibration techniques is assessed experimentally on a discrete resistor array, obtaining, for a new calibration model, a maximum relative error of 0.066% in a range of resistor values which corresponds to a tactile sensor.
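As a rough illustration of the direct-interface principle described above (not the authors' exact circuit or timing scheme), the resistance of an element can be recovered from the capacitor discharge time, since the time for an RC discharge to fall from the supply voltage V_DD to the FPGA input threshold V_TH is t_d = R*C*ln(V_DD/V_TH). A minimal sketch, with all component values hypothetical; the two-point calibration at the end is one simple way to absorb the systematic delays that the paper identifies as affecting the discharge-time measurement:

```python
import math

# Hypothetical component values; the paper's actual circuit parameters differ.
C = 100e-9    # conditioning capacitor per column, farads
V_DD = 3.3    # supply voltage, volts
V_TH = 1.65   # FPGA input threshold, volts

def resistance_from_discharge_time(t_d: float) -> float:
    """Invert t_d = R*C*ln(V_DD/V_TH) for an RC discharge to the threshold."""
    return t_d / (C * math.log(V_DD / V_TH))

def two_point_calibration(t1, R1, t2, R2):
    """Fit R = a*t + b from two reference resistors to absorb systematic delays."""
    a = (R2 - R1) / (t2 - t1)
    b = R1 - a * t1
    return a, b

# Example: a 69.3 us discharge time corresponds to roughly 1 kOhm here.
print(resistance_from_discharge_time(69.3e-6))
```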
NASA Astrophysics Data System (ADS)
Williams, Ammon Ned
The primary objective of this research is to develop an applied technology and provide an assessment for remotely measuring and analyzing the real-time or near real-time concentrations of used nuclear fuel (UNF) elements in electrorefiners (ERs). Here, Laser-Induced Breakdown Spectroscopy (LIBS) in UNF pyroprocessing facilities was investigated. LIBS is an elemental analysis method based on the emission from a plasma generated by focusing a laser beam into the medium. This technology has been reported to be applicable to solids, liquids (including molten metals), and gases for detecting elements of special nuclear materials. The advantages of applying the technology in pyroprocessing facilities are: (i) rapid real-time elemental analysis; (ii) direct detection of elements and impurities in the system with low limits of detection (LOD); and (iii) little to no sample preparation. One important challenge to overcome is achieving reproducible spectral data over time while being able to accurately quantify fission products, rare earth elements, and actinides in the molten salt. Another important challenge is related to the accessibility of the molten salt, which is heated in a heavily insulated, remotely operated furnace in a high-radiation environment within an argon gas atmosphere. This dissertation addresses these challenges in the following phases, with their highlighted outcomes: 1. Aerosol-LIBS system design and aqueous testing: An aerosol-LIBS system was designed around a Collison nebulizer and tested using deionized water with Ce, Gd, and Nd concentrations from 100 ppm to 10,000 ppm. The average %RSD values between the sample repetitions were 4.4% and 3.8% for the Ce and Gd lines, respectively. The univariate calibration curve for Ce using the peak intensities of the Ce 418.660 nm line was recommended and had an R^2 value, LOD, and RMSECV of 0.994, 189 ppm, and 390 ppm, respectively. The recommended Gd calibration curve was generated using the peak areas of the Gd 409.861 nm line and had an R^2, LOD, and RMSECV of 0.992, 316 ppm, and 421 ppm, respectively. The partial least squares (PLS) calibration curves yielded similar results, with RMSECV values of 406 ppm and 417 ppm for the Ce and Gd curves, respectively. 2. High-temperature aerosol-LIBS system design and CeCl3 testing: The aerosol-LIBS system was transitioned to high-temperature operation and used to measure Ce in molten LiCl-KCl salt within a glovebox environment. The concentration range studied was from 0.1 wt% to 5 wt% Ce. Normalization was necessary due to signal degradation over time; however, with normalization the %RSD values averaged 5% for the mid and upper concentrations studied. The best univariate calibration curve was generated using the peak areas of the Ce 418.660 nm line. The LOD for this line was 148 ppm, with an RMSECV of 647 ppm. The PLS calibration curve was built using 7 latent variables (LVs), resulting in an RMSECV of 622 ppm. The LOD value was below the expected rare earth concentration within the ER. 3. Aerosol-LIBS testing using UCl3: Samples containing UCl3 with concentrations ranging from 0.3 wt% to 5 wt% were measured. The spectral response in this range was linear. The best univariate calibration curve was generated using the peak areas of the U 367.01 nm line and had an R^2 value of 0.9917. Here, the LOD was 647 ppm and the RMSECV was 2,290 ppm. The PLS model was substantially better, with an RMSECV of 1,110 ppm.
The LOD found here is below the expected U concentrations in the ER. The successful completion of this study has demonstrated the feasibility of using an aerosol-LIBS analytical technique to measure rare earth elements and actinides in the pyroprocessing salt.
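For orientation, the univariate calibration curves and figures of merit quoted above (R^2, LOD, RMSECV) can be reproduced in miniature as follows; the concentrations and peak areas below are invented, and the 3-sigma/slope LOD convention is one common choice rather than necessarily the dissertation's exact definition.

```python
import numpy as np

# Hypothetical calibration standards (ppm) and background-corrected peak areas.
conc = np.array([100.0, 500.0, 1000.0, 5000.0, 10000.0])
area = np.array([12.0, 58.0, 121.0, 596.0, 1190.0])

slope, intercept = np.polyfit(conc, area, 1)   # linear univariate calibration
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                  # residual standard deviation

lod = 3.0 * sigma / slope                      # common 3-sigma LOD estimate
r2 = 1.0 - (residuals**2).sum() / ((area - area.mean())**2).sum()
print(f"slope={slope:.4f}, R^2={r2:.4f}, LOD~{lod:.0f} ppm")
```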
NASA Astrophysics Data System (ADS)
Feltens, Joachim; Bellei, Gabriele; Springer, Tim; Kints, Mark V.; Zandbergen, René; Budnik, Frank; Schönemann, Erik
2018-06-01
Context: Calibration of radiometric tracking data for effects in the Earth's atmosphere is a crucial element in the field of deep-space orbit determination (OD). The troposphere can induce propagation delays on the order of several meters, the ionosphere up to the meter level for X-band signals and up to tens of meters, in extreme cases, for L-band ones. The use of media calibrations based on Global Navigation Satellite Systems (GNSS) measurement data can improve the accuracy of the radiometric observation modelling and, as a consequence, the quality of orbit determination solutions. Aims: ESOC Flight Dynamics employs ranging, Doppler and delta-DOR (Delta-Differential One-Way Ranging) data for the orbit determination of interplanetary spacecraft. Currently, the media calibrations for troposphere and ionosphere are either computed based on empirical models or, under mission-specific agreements, provided by external parties such as the Jet Propulsion Laboratory (JPL) in Pasadena, California. In order to become independent of external models and sources, the decision was made to establish a new in-house service to create these media calibrations based on GNSS measurements recorded at the ESA tracking sites and processed in-house by the ESOC Navigation Support Office with comparable accuracy and quality. Methods: The new service was designed to rely as much as possible on ESA's own data and resources and as little as possible on external models and data. Dedicated, robust, and simple algorithms, well suited for operational use, were worked out for the task. This paper describes the approach used to realize this new in-house media calibration service. Results: Test results collected during three months of running the new media calibrations in quasi-operational mode indicate that GNSS-based tropospheric corrections can remove systematic signatures from the Doppler observations and biases from the range ones. For the ionosphere, a direct way of verification was not possible due to the non-availability of independent third-party data for comparison. Nevertheless, the tests for ionospheric corrections also showed slight improvements in the tracking data modelling, but not to the extent seen for the tropospheric corrections. Conclusions: The validation results confirmed that the new approach meets the requirements on accuracy and operational use for the tropospheric part, while some improvement is still ongoing for the ionospheric one. Based on these test results, green light was given to put the new in-house media calibration service into full operational mode in April 2017.
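Conceptually, the tropospheric part of such a media calibration maps a GNSS-derived zenith delay to the slant direction of the spacecraft link through an elevation-dependent mapping function. The sketch below uses the crude cosecant mapping 1/sin(E) purely for illustration; the operational ESOC algorithms are more elaborate (e.g., separate hydrostatic and wet terms with dedicated mapping functions).

```python
import math

def slant_tropo_delay(zenith_delay_m: float, elevation_deg: float) -> float:
    """Map a zenith tropospheric delay to a slant path.

    Uses the simple cosecant mapping 1/sin(E); operational systems employ
    more accurate mapping functions, especially at low elevations.
    """
    return zenith_delay_m / math.sin(math.radians(elevation_deg))

# Example: a 2.3 m zenith delay observed at 20 degrees elevation -> ~6.7 m slant.
print(slant_tropo_delay(2.3, 20.0))
```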
Use of Cloud Computing to Calibrate a Highly Parameterized Model
NASA Astrophysics Data System (ADS)
Hayley, K. H.; Schumacher, J.; MacMillan, G.; Boutin, L.
2012-12-01
We present a case study using cloud computing to facilitate the calibration of a complex and highly parameterized model of regional groundwater flow. The calibration dataset consisted of many (~1500) measurements or estimates of static hydraulic head, a high-resolution time series of groundwater extraction and disposal rates at 42 locations and pressure monitoring at 147 locations, with a total of more than one million raw measurements collected over a ten-year pumping history, and base flow estimates at 5 surface water monitoring locations. This modeling project was undertaken to assess the sustainability of groundwater withdrawal and disposal plans for in situ heavy oil extraction in northeast Alberta, Canada. The geological interpretations used for model construction were based on more than 5,000 wireline logs collected throughout the 30,865 km2 regional study area (RSA), and resulted in a model with 28 slices and 28 hydrostratigraphic units (average model thickness of 700 m, with aquifers ranging from a depth of 50 to 500 m below ground surface). The finite element FEFLOW model constructed on this geological interpretation had 331,408 nodes and required 265 time steps to simulate the ten-year transient calibration period. This numerical model of groundwater flow required 3 hours to run on a server with two 2.8 GHz processors and 16 GB of RAM. Calibration was completed using PEST. Horizontal and vertical hydraulic conductivity as well as specific storage for each unit were independent parameters. For the recharge and the horizontal hydraulic conductivity in the three aquifers with the most transient groundwater use, a pilot-point parameterization was adopted. A 7x7 grid of pilot points was defined over the RSA that defined a spatially variable horizontal hydraulic conductivity or recharge field. A 7x7 grid of multiplier pilot points that perturbed the more regional field was then superimposed over the 3,600 km2 local study area (LSA). The pilot-point multipliers were implemented so that a higher resolution of spatial variability could be obtained where there was a higher density of observation data. Five geologic boundaries were modeled with a specified flux boundary condition, and the transfer rate was used as an adjustable parameter for each of these boundaries. This parameterization resulted in 448 parameters for calibration. In the project planning stage it was estimated that the calibration might require as much as 15,000 hours (1.7 years) of computing. In an effort to complete the calibration in a timely manner, the inversion was parallelized and implemented on as many as 250 computing nodes located on Amazon's EC2 servers. The results of the calibration provided a better fit to the data than previous efforts with homogeneous parameters, and the highly parameterized approach facilitated subspace Monte Carlo analysis for predictive uncertainty. This scale of cloud computing is relatively new for the hydrogeology community, and at the time of implementation it was believed to be the first implementation of a FEFLOW model at this scale. While the experience presented several challenges, the implementation was successful and provides some valuable learning for future efforts.
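The pilot-point idea can be pictured with a small interpolation sketch: a coarse grid of multipliers is interpolated onto model nodes and applied to a base conductivity field. Grid values, node coordinates, and the interpolation scheme below are all placeholders; PEST's actual parameterization and interpolation differ.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# 7x7 grid of pilot-point multipliers over a normalized study area (hypothetical).
px = np.linspace(0.0, 1.0, 7)
py = np.linspace(0.0, 1.0, 7)
multipliers = np.random.default_rng(0).lognormal(mean=0.0, sigma=0.3, size=(7, 7))

interp = RegularGridInterpolator((px, py), multipliers)

# Apply the interpolated multiplier field to a uniform base hydraulic conductivity.
nodes = np.random.default_rng(1).random((1000, 2))   # normalized node coordinates
K_base = 1e-5                                        # m/s, base value
K_field = K_base * interp(nodes)
print(K_field.min(), K_field.max())
```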
Campbell, J Q; Coombs, D J; Rao, M; Rullkoetter, P J; Petrella, A J
2016-09-06
The purpose of this study was to seek broad verification and validation of human lumbar spine finite element models created using a previously published automated algorithm. The automated algorithm takes segmented CT scans of lumbar vertebrae, automatically identifies important landmarks and contact surfaces, and creates a finite element model. Mesh convergence was evaluated by examining changes in key output variables in response to mesh density. Semi-direct validation was performed by comparing experimental results for a single specimen to the automated finite element model results for that specimen with calibrated material properties from a prior study. Indirect validation was based on a comparison of results from automated finite element models of 18 individual specimens, all using one set of generalized material properties, to a range of data from the literature. A total of 216 simulations were run and compared to 186 experimental data ranges in all six primary bending modes up to 7.8 Nm with follower loads up to 1000 N. Mesh convergence results showed less than a 5% difference in key variables when the original mesh density was doubled. The semi-direct validation results showed that the automated method produced results comparable to manual finite element modeling methods. The indirect validation results showed a wide range of outcomes due to variations in the geometry alone. The studies showed that the automated models can be used to reliably evaluate lumbar spine biomechanics, specifically within our intended context of use: in pure bending modes, under relatively low non-injurious simulated in vivo loads, to predict torque-rotation response, disc pressures, and facet forces.
NASA Astrophysics Data System (ADS)
Sarabandi, Kamal; Oh, Yisok; Ulaby, Fawwaz T.
1992-10-01
Three aspects of a polarimetric active radar calibrator (PARC) are treated: (1) experimental measurements of the magnitudes and phases of the scattering-matrix elements of a pair of PARCs operating at 1.25 and 5.3 GHz; (2) the design, construction, and performance evaluation of a PARC; and (3) the extension of the single-target-calibration technique (STCT) to a PARC. STCT has heretofore been limited to the use of reciprocal passive calibration devices, such as spheres and trihedral corner reflectors.
Delahanty, Ryan J; Kaufman, David; Jones, Spencer S
2018-06-01
Risk adjustment algorithms for ICU mortality are necessary for measuring and improving ICU performance. Existing risk adjustment algorithms are not widely adopted. Key barriers to adoption include licensing and implementation costs as well as labor costs associated with human-intensive data collection. Widespread adoption of electronic health records makes automated risk adjustment feasible. Using modern machine learning methods and open source tools, we developed and evaluated a retrospective risk adjustment algorithm for in-hospital mortality among ICU patients. The Risk of Inpatient Death score can be fully automated and relies upon data elements that are generated in the course of usual hospital processes. The setting comprised one hundred thirty-one ICUs in 53 hospitals operated by Tenet Healthcare, with a cohort of 237,173 ICU patients discharged between January 2014 and December 2016. The data were randomly split into training (36 hospitals) and validation (17 hospitals) data sets. Feature selection and model training were carried out using the training set, while the discrimination, calibration, and accuracy of the model were assessed in the validation data set. Model discrimination was evaluated based on the area under the receiver operating characteristic curve; accuracy and calibration were assessed via adjusted Brier scores and visual analysis of calibration curves. Seventeen features, including a mix of clinical and administrative data elements, were retained in the final model. The Risk of Inpatient Death score demonstrated excellent discrimination (area under the receiver operating characteristic curve = 0.94) and calibration (adjusted Brier score = 52.8%) in the validation dataset; these results compare favorably to the published performance statistics for the most commonly used mortality risk adjustment algorithms. Low adoption of ICU mortality risk adjustment algorithms impedes progress toward increasing the value of the healthcare delivered in ICUs. The Risk of Inpatient Death score has many attractive attributes that address the key barriers to adoption of ICU risk adjustment algorithms and performs comparably to existing human-intensive algorithms. Automated risk adjustment algorithms have the potential to obviate known barriers to adoption such as cost-prohibitive licensing fees and significant direct labor costs. Further evaluation is needed to ensure that the level of performance observed in this study could be achieved at independent sites.
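As a schematic of the evaluation metrics reported above (discrimination via the area under the ROC curve, calibration via a Brier-type score), the following scikit-learn sketch computes both on synthetic data; it is not the Risk of Inpatient Death model, and the 17 features are invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 17))                  # 17 synthetic features
logit = X[:, :3].sum(axis=1) - 2.0
y = rng.random(5000) < 1.0 / (1.0 + np.exp(-logit))  # synthetic mortality labels

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
p = model.predict_proba(X_va)[:, 1]

print("AUC:", roc_auc_score(y_va, p))            # discrimination
print("Brier score:", brier_score_loss(y_va, p)) # calibration-related accuracy
```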
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staib, Michael
The GlueX experiment is a new experimental facility at Jefferson Lab in Newport News, VA. The experiment aims to map out the spectrum of hybrid mesons in the light quark sector. Measurements of the spin-density matrix elements in omega photoproduction are performed with a linearly polarized photon beam on an unpolarized proton target, and presented in bins of Mandelstam t for beam energies of 8.4-9.0 GeV. The spin-density matrix elements are exclusively measured through two decays of the omega meson: omega -> pi^+ pi^- pi^0 and omega -> pi^0 gamma. A description of the experimental apparatus is presented. Several methods used in the calibration of the charged particle tracking system are described. These measurements greatly improve the world statistics in this energy range. These are the first results measured through the omega -> pi^0 gamma decay at this energy. Results are generally consistent with a theoretical model based on diffractive production with Pomeron and pseudoscalar exchange in the t-channel.
NASA Astrophysics Data System (ADS)
Wei, Haoyang
A new critical plane-energy model is proposed in this thesis for multiaxial fatigue life prediction of homogeneous and heterogeneous materials. A brief review of existing methods, especially the critical plane-based and energy-based methods, is given first. Special focus is on one critical plane approach which has been shown to work for both brittle and ductile metals. The key idea is to automatically change the critical plane orientation with respect to different materials and stress states. One potential drawback of the developed model is that it needs an empirical calibration parameter for non-proportional multiaxial loadings, since only the strain terms are used and the out-of-phase hardening cannot be considered. The energy-based model using the critical plane concept is proposed with the help of the Mroz-Garud hardening rule to explicitly include the effect of non-proportional hardening under fatigue cyclic loadings. Thus, the empirical calibration for non-proportional loading is not needed, since the out-of-phase hardening is naturally included in the stress calculation. The model predictions are compared with experimental data from the open literature, and it is shown that the proposed model can work for both proportional and non-proportional loadings without the empirical calibration. Next, the model is extended to the fatigue analysis of heterogeneous materials by integration with the finite element method. Fatigue crack initiation of a representative volume of heterogeneous material is analyzed using the developed critical plane-energy model, with special focus on the microstructure effect on multiaxial fatigue life predictions. Conclusions and future work are presented based on the proposed study.
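The critical plane search itself can be sketched compactly: sweep candidate plane orientations, resolve the strain history onto each plane, and retain the plane maximizing the chosen damage parameter. The two-dimensional sweep and simple shear-amplitude criterion below are illustrative stand-ins, not the specific damage parameter of the proposed model.

```python
import numpy as np

def critical_plane(eps_history, n_angles=180):
    """Find the plane (in 2D) maximizing the shear strain amplitude.

    eps_history: array of shape (T, 2, 2), a time series of strain tensors.
    Returns (best_angle_rad, max_shear_amplitude).
    """
    best = (0.0, -np.inf)
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        n = np.array([np.cos(theta), np.sin(theta)])    # plane normal
        t = np.array([-np.sin(theta), np.cos(theta)])   # in-plane direction
        shear = np.einsum('i,tij,j->t', t, eps_history, n)  # resolved shear
        amplitude = 0.5 * (shear.max() - shear.min())
        if amplitude > best[1]:
            best = (theta, amplitude)
    return best

# Example: proportional tension-compression history (hypothetical amplitudes).
T = np.linspace(0.0, 2.0 * np.pi, 100)
eps = np.zeros((100, 2, 2))
eps[:, 0, 0] = 1e-3 * np.sin(T)      # axial strain
eps[:, 1, 1] = -0.3e-3 * np.sin(T)   # transverse (Poisson) strain
theta, amp = critical_plane(eps)
print(np.degrees(theta), amp)
```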
Geomechanical Simulation of Bayou Choctaw Strategic Petroleum Reserve - Model Calibration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Byoung
2017-02-01
A finite element numerical analysis model has been constructed that consists of a realistic mesh capturing the geometries of the Bayou Choctaw (BC) Strategic Petroleum Reserve (SPR) site and a multi-mechanism deformation (M-D) salt constitutive model using the daily data of actual wellhead pressure and oil-brine interface. The salt creep rate is not uniform in the salt dome, and the creep test data for BC salt is limited. Therefore, model calibration is necessary to simulate the geomechanical behavior of the salt dome. The cavern volumetric closures of SPR caverns calculated from CAVEMAN are used for the field baseline measurement. The structure factor, A_2, and transient strain limit factor, K_0, in the M-D constitutive model are used for the calibration. The A_2 value obtained experimentally from the BC salt and the K_0 value of Waste Isolation Pilot Plant (WIPP) salt are used for the baseline values. To adjust the magnitude of A_2 and K_0, multiplication factors A2F and K0F are defined, respectively. The A2F and K0F values of the salt dome and salt drawdown skins surrounding each SPR cavern have been determined through a number of back-fitting analyses. The cavern volumetric closures calculated from this model correspond to the predictions from CAVEMAN for six SPR caverns. Therefore, this model is able to predict past and future geomechanical behaviors of the salt dome, caverns, caprock, and interbed layers. The geological concerns identified at the BC site will be addressed using this model in a follow-up report.
NASA Astrophysics Data System (ADS)
Mohajernia, Shiva; Mazare, Anca; Hwang, Imgon; Gaiaschi, Sofia; Chapon, Patrick; Hildebrand, Helga; Schmuki, Patrik
2018-06-01
In this work we study the depth composition of anodic TiO2 nanotube layers. We use elemental depth profiling with Glow Discharge Optical Emission Spectroscopy and calibrate the results of this technique with X-ray photoelectron spectroscopy (XPS) and energy dispersive spectroscopy (EDS). We establish optimized sputtering conditions for nanotubular structures using the pulsed RF mode, which minimizes structural damage during the depth profiling of the nanotubular structures. This allows us to obtain calibrated sputter rates that account for the nanotubular "porous" morphology. Most importantly, sputter-artifact-free compositional profiles of these high-aspect-ratio 3D structures are obtained, as well as, in combination with SEM, elegant depth-sectional imaging.
NASA Astrophysics Data System (ADS)
Dæhli, Lars Edvard Bryhni; Morin, David; Børvik, Tore; Hopperstad, Odd Sture
2017-10-01
Numerical unit cell models of an approximate representative volume element for a porous ductile solid are utilized to investigate differences in the mechanical response between a quadratic and a non-quadratic matrix yield surface. A Hershey equivalent stress measure with two distinct values of the yield surface exponent is employed as the matrix description. Results from the unit cell calculations are further used to calibrate a heuristic extension of the Gurson model which incorporates effects of the third deviatoric stress invariant. An assessment of the porous plasticity model reveals its ability to describe the unit cell response to some extent, although it underestimates the effect of the Lode parameter for the lower triaxiality ratios imposed in this study when compared to unit cell simulations. Ductile failure predictions by means of finite element simulations using a unit cell model that resembles an imperfection band are then conducted to examine how the non-quadratic matrix yield surface influences the failure strain as compared to the quadratic matrix yield surface. Further, strain localization predictions based on bifurcation analyses and imperfection band analyses are undertaken using the calibrated porous plasticity model. These simulations are then compared to the unit cell calculations in order to elucidate the differences between the various modelling strategies. The current study reveals that strain localization analyses using an imperfection band model and a spatially discretized unit cell are in reasonable agreement, while the bifurcation analyses predict higher strain levels at localization. Imperfection band analyses are finally used to calculate failure loci for the quadratic and the non-quadratic matrix yield surface under a wide range of loading conditions. The underlying matrix yield surface is demonstrated to have a pronounced influence on the onset of strain localization.
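For reference, the Hershey (Hosford-type) equivalent stress mentioned above has, in principal stresses, the closed form sigma_eq = [(|s1-s2|^a + |s2-s3|^a + |s3-s1|^a)/2]^(1/a), which reduces to von Mises for a = 2 and approaches Tresca as the exponent grows large. A small sketch:

```python
import numpy as np

def hershey_equivalent_stress(s1, s2, s3, a):
    """Hershey/Hosford equivalent stress in principal stress space.

    Reduces to von Mises for a = 2 and approaches Tresca as a -> infinity.
    """
    return ((abs(s1 - s2)**a + abs(s2 - s3)**a + abs(s3 - s1)**a) / 2.0) ** (1.0 / a)

# Uniaxial tension returns the applied stress for any exponent:
print(hershey_equivalent_stress(250.0, 0.0, 0.0, a=2))   # 250.0 (quadratic)
print(hershey_equivalent_stress(250.0, 0.0, 0.0, a=8))   # 250.0 (non-quadratic)
```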
FY 2016 Status Report on the Modeling of the M8 Calibration Series using MAMMOTH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Benjamin Allen; Ortensi, Javier; DeHart, Mark David
2016-09-01
This report provides a summary of the progress made towards validating the multi-physics reactor analysis application MAMMOTH using data from measurements performed at the Transient Reactor Test facility, TREAT. The work completed consists of a series of comparisons of TREAT element types (standard and control rod assemblies) in small geometries as well as slotted mini-cores to reference Monte Carlo simulations to ascertain the accuracy of cross section preparation techniques. After the successful completion of these smaller problems, a full core model of the half slotted core used in the M8 Calibration series was assembled. Full core MAMMOTH simulations were compared to Serpent reference calculations to assess the cross section preparation process for this larger configuration. As part of the validation process the M8 Calibration series included a steady state wire irradiation experiment and coupling factors for the experiment region. The shape of the power distribution obtained from the MAMMOTH simulation shows excellent agreement with the experiment. Larger differences were encountered in the calculation of the coupling factors, but there is also great uncertainty in how the experimental values were obtained. Future work will focus on resolving some of these differences.
Recommended Inorganic Chemicals for Calibration.
ERIC Educational Resources Information Center
Moody, John R.; And Others
1988-01-01
All analytical techniques depend on the use of calibration chemicals to relate analyte concentration to instrumental parameters. Discusses the preparation of standard solutions and provides a critical evaluation of available materials. Lists elements by group and discusses the purity and uses of each. (MVL)
Influence of Primary Gage Sensitivities on the Convergence of Balance Load Iterations
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2012-01-01
The connection between the convergence of wind tunnel balance load iterations and the existence of the primary gage sensitivities of a balance is discussed. First, basic elements of the two load iteration equations that the iterative method uses, in combination with results of a calibration data analysis, for the prediction of balance loads are reviewed. Then, the connection between the primary gage sensitivities, the load format, the gage output format, and the convergence characteristics of the load iteration equation choices is investigated. A new criterion is also introduced that may be used to objectively determine whether the primary gage sensitivity of a balance gage exists. It is then shown that both load iteration equations will converge as long as a suitable regression model is used for the analysis of the balance calibration data, the combined influence of nonlinear terms of the regression model is very small, and the primary gage sensitivities of all balance gages exist. The last requirement is fulfilled, e.g., if force balance calibration data is analyzed in force balance format. Finally, it is demonstrated that only one of the two load iteration equation choices, i.e., the iteration equation used by the primary load iteration method, converges if one or more primary gage sensitivities are missing. This situation may occur, e.g., if force balance calibration data is analyzed in direct-read format using the original gage outputs. Data from the calibration of a six-component force balance is used to illustrate the connection between the convergence of the load iteration equation choices and the existence of the primary gage sensitivities.
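The structure of such a load iteration can be illustrated with a toy two-gage example: the linear part of the regression model (the primary sensitivities) is inverted directly, while the small nonlinear contribution is lagged by one iteration on the right-hand side. This is a schematic of the general fixed-point idea, not NASA's specific iteration equations:

```python
import numpy as np

# Toy regression model: gage outputs G = B @ F + c * (F1*F2) nonlinear term.
B = np.array([[2.0, 0.1],
              [0.2, 1.5]])     # primary sensitivities (must be invertible)
c = np.array([1e-4, 2e-4])     # small cross-product coefficients (hypothetical)

def outputs(F):
    return B @ F + c * (F[0] * F[1])

F_true = np.array([120.0, 80.0])
G = outputs(F_true)

# Fixed-point load iteration: F_{k+1} = B^{-1} (G - nonlinear(F_k)).
F = np.linalg.solve(B, G)      # initial guess from the linear part only
for _ in range(20):
    F = np.linalg.solve(B, G - c * (F[0] * F[1]))
print(F)                       # converges to F_true when the nonlinearity is small
```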
NASA Astrophysics Data System (ADS)
Catinari, Federico; Pierdicca, Alessio; Clementi, Francesco; Lenci, Stefano
2017-11-01
The results of an ambient-vibration-based investigation conducted on the "Palazzo del Podestà" in Montelupone (Italy) are presented. The case-study building was damaged during the 2016 Italian earthquakes that struck the central part of Italy. The assessment procedure includes full-scale ambient vibration testing, modal identification from ambient vibration responses, finite element modeling, and dynamic-based identification of the uncertain structural parameters of the model. A very good match between theoretical and experimental modal parameters was reached, and model updating was performed to identify some structural parameters.
NEXT Single String Integration Test Results
NASA Technical Reports Server (NTRS)
Soulas, George C.; Patterson, Michael J.; Pinero, Luis; Herman, Daniel A.; Snyder, Steven John
2010-01-01
As a critical part of NASA's Evolutionary Xenon Thruster (NEXT) test validation process, a single string integration test was performed on the NEXT ion propulsion system. The objectives of this test were to verify that an integrated system of major NEXT ion propulsion system elements meets project requirements, to demonstrate that the integrated system is functional across the entire power processor and xenon propellant management system input ranges, and to demonstrate to potential users that the NEXT propulsion system is ready for transition to flight. Propulsion system elements included in this system integration test were an engineering model ion thruster, an engineering model propellant management system, an engineering model power processor unit, and a digital control interface unit simulator that acted as a test console. Project requirements that were verified during this system integration test included individual element requirements, integrated system requirements, and fault handling. This paper will present the results of these tests, which include: integrated ion propulsion system demonstrations of performance, functionality, and fault handling; a thruster re-performance acceptance test to establish baseline performance; a risk-reduction PMS-thruster integration test; and propellant management system calibration checks.
Piezo-thermal Probe Array for High Throughput Applications
Gaitas, Angelo; French, Paddy
2012-01-01
Microcantilevers are used in a number of applications including atomic-force microscopy (AFM). In this work, deflection-sensing elements along with heating elements are integrated onto micromachined cantilever arrays to increase sensitivity and reduce complexity and cost. An array of probes with 5-10 nm gold ultrathin film sensors on silicon substrates for high-throughput scanning probe microscopy is developed. The deflection sensitivity is 0.2 ppm/nm. Plots of the change in resistance of the sensing element with displacement are used to calibrate the probes and determine probe contact with the substrate. Topographical scans demonstrate high throughput and nanometer resolution. The heating elements are calibrated, and the thermal coefficient of resistance (TCR) is 655 ppm/K. The melting temperature of a material is measured by locally heating the material with the heating element of the cantilever while monitoring the bending with the deflection-sensing element. The melting point value measured with this method is in close agreement with the value reported in the literature.
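Given the reported TCR, the heater temperature rise follows from its resistance change via the usual linear model R(T) = R0[1 + alpha(T - T0)]; in the sketch below only the 655 ppm/K coefficient comes from the paper, and the resistance values are hypothetical.

```python
ALPHA = 655e-6   # thermal coefficient of resistance, 1/K (from the calibration)

def temperature_rise(R: float, R0: float) -> float:
    """Invert R = R0 * (1 + ALPHA * dT) for the temperature rise dT in kelvin."""
    return (R / R0 - 1.0) / ALPHA

# Example: a 5% resistance increase over the room-temperature value -> ~76 K rise.
print(temperature_rise(R=1.05, R0=1.0))
```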
Geomechanical Model Calibration Using Field Measurements for a Petroleum Reserve
NASA Astrophysics Data System (ADS)
Park, Byoung Yoon; Sobolik, Steven R.; Herrick, Courtney G.
2018-03-01
A finite element numerical analysis model has been constructed that consists of a mesh that effectively captures the geometries of the Bayou Choctaw (BC) Strategic Petroleum Reserve (SPR) site and a multimechanism deformation (M-D) salt constitutive model using the daily data of actual wellhead pressure and oil-brine interface location. The salt creep rate is not uniform in the salt dome, and the creep test data for BC salt are limited. Therefore, model calibration is necessary to simulate the geomechanical behavior of the salt dome. The cavern volumetric closures of SPR caverns calculated from CAVEMAN are used as the field baseline measurement. The structure factor, A_2, and transient strain limit factor, K_0, in the M-D constitutive model are used for the calibration. The value of A_2, obtained experimentally from BC salt, and the value of K_0, obtained from Waste Isolation Pilot Plant salt, are used for the baseline values. To adjust the magnitude of A_2 and K_0, multiplication factors A_2F and K_0F are defined, respectively. The A_2F and K_0F values of the salt dome and salt drawdown skins surrounding each SPR cavern have been determined through a number of back analyses. The cavern volumetric closures calculated from this model correspond to the predictions from CAVEMAN for six SPR caverns. Therefore, this model is able to predict behaviors of the salt dome, caverns, caprock, and interbed layers. The geotechnical concerns associated with the BC site from this analysis will be explained in a follow-up paper.
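The back-analysis step can be pictured as a small optimization: choose the multiplication factors so that a forward creep-closure prediction matches the baseline closure history. The forward model below is a deliberately crude steady-state-plus-transient stand-in for the M-D constitutive model and the CAVEMAN baseline, used only to show the fitting structure:

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.1, 10.0, 50)   # years

def closure_model(params, t):
    """Toy stand-in for the M-D model: steady-state plus transient creep closure."""
    a2f, k0f = params
    return a2f * 0.8 * t + k0f * 1.2 * (1.0 - np.exp(-t / 2.0))

# Baseline closure history (synthesized here; in practice it comes from CAVEMAN).
baseline = closure_model([1.3, 0.7], t) + np.random.default_rng(0).normal(0, 0.02, t.size)

fit = least_squares(lambda p: closure_model(p, t) - baseline, x0=[1.0, 1.0])
print(fit.x)   # recovered multiplication factors (analogues of A_2F and K_0F)
```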
Elemental Composition of 433 Eros: New Calibration of the NEAR-Shoemaker XRS Data
NASA Technical Reports Server (NTRS)
Lim, Lucy F.; Nittler, Larry R.
2009-01-01
We present a new calibration of the elemental-abundance data for Asteroid 433 Eros taken by the X-ray spectrometer (XRS) aboard the NEAR-Shoemaker spacecraft. (NEAR is an acronym for "Near-Earth Asteroid Rendezvous.") Quantification of the asteroid surface elemental abundance ratios depends critically on accurate knowledge of the incident solar X-ray spectrum, which was monitored simultaneously with the asteroid observations. Previously published results suffered from incompletely characterized systematic uncertainties due to an imperfect ground calibration of the NEAR gas solar monitor. The solar monitor response function and associated uncertainties have now been characterized by cross-calibration of a large sample of NEAR solar monitor flight data against contemporaneous broadband solar X-ray data from the Earth-orbiting GOES-8 (Geostationary Operational Environmental Satellite). The results have been used to analyze XRS spectra acquired from Eros during eight major solar flares (including three that have not previously been reported). The end product of this analysis is a revised set of Eros surface elemental abundance ratios with new error estimates that more accurately reflect the remaining uncertainties in the solar flare spectra: Mg/Si = 0.753 +0.078/-0.055, Al/Si = 0.069 +/- 0.055, S/Si = 0.005 +/- 0.008, Ca/Si = 0.060 +0.023/-0.024, and Fe/Si = 1.578 +0.338/-0.320. These revised abundance ratios are consistent within the cited uncertainties with the results of Nittler et al. [Nittler, L.R., and 14 colleagues, 2001. Meteorit. Planet. Sci. 36, 1673-1695] and thus support the prior conclusions that 433 Eros has a major-element composition similar to ordinary chondrites, with the exception of a strong depletion in sulfur, most likely caused by space weathering.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Philipona, J. R.; Dutton, Ellsworth G.; Stoffel, T.
2001-06-04
Because atmospheric longwave radiation is one of the most fundamental elements of an expected climate change, there has been a strong interest in improving measurements and model calculations in recent years. Important questions are how reliable and consistent atmospheric longwave radiation measurements and calculations are, and what the uncertainties are. The First International Pyrgeometer and Absolute Sky-scanning Radiometer Comparison, which was held at the Atmospheric Radiation Measurement program's Southern Great Plains site in Oklahoma, answers these questions at least for midlatitude summer conditions and reflects the state of the art for atmospheric longwave radiation measurements and calculations. The 15 participating pyrgeometers were all calibration-traced standard instruments chosen from a broad international community. Two new chopped pyrgeometers also took part in the comparison. An absolute sky-scanning radiometer (ASR), which includes a pyroelectric detector and a reference blackbody source, was used for the first time as a reference standard instrument to field-calibrate pyrgeometers during clear-sky nighttime measurements. Owner-provided and uniformly determined blackbody calibration factors were compared. Remarkable improvements and higher pyrgeometer precision were achieved with field calibration factors. Results of nighttime and daytime pyrgeometer precision and absolute uncertainty are presented for eight consecutive days of measurements, during which period downward longwave irradiance varied between 260 and 420 W m-2. Comparisons between pyrgeometers and the absolute ASR, the atmospheric emitted radiance interferometer, and the radiative transfer models LBLRTM and MODTRAN show a surprisingly good agreement of <2 W m-2 for nighttime atmospheric longwave irradiance measurements and calculations.
Hirschvogel, Marc; Bassilious, Marina; Jagschies, Lasse; Wildhirt, Stephen M; Gee, Michael W
2016-10-15
A model for patient-specific cardiac mechanics simulation is introduced, incorporating a 3-dimensional finite element model of the ventricular part of the heart, which is coupled to a reduced-order 0-dimensional closed-loop vascular system, heart valve, and atrial chamber model. The ventricles are modeled by a nonlinear orthotropic passive material law. The electrical activation is mimicked by a prescribed parameterized active stress acting along a generic muscle fiber orientation. Our activation function is constructed such that the start of ventricular contraction and relaxation as well as the active stress curve's slope are parameterized. The imaging-based patient-specific ventricular model is prestressed to low end-diastolic pressure to account for the imaged, stressed configuration. Visco-elastic Robin boundary conditions are applied to the heart base and the epicardium to account for the embedding surrounding. We treat the 3D solid-0D fluid interaction as a strongly coupled monolithic problem, which is consistently linearized with respect to 3D solid and 0D fluid model variables to allow for a Newton-type solution procedure. The resulting coupled linear system of equations is solved iteratively in every Newton step using 2 × 2 physics-based block preconditioning. Furthermore, we present novel efficient strategies for calibrating active contractile and vascular resistance parameters to experimental left ventricular pressure and stroke volume data gained in porcine experiments. Two exemplary states of cardiovascular condition are considered, namely, after application of vasodilatory beta blockers (BETA) and after injection of vasoconstrictive phenylephrine (PHEN). The parameter calibration to the specific individual and cardiovascular state at hand is performed using a 2-stage nonlinear multilevel method that uses a low-fidelity heart model to compute a parameter correction for the high-fidelity model optimization problem. We discuss 2 different low-fidelity model choices with respect to their ability to augment the parameter optimization. Because the periodic state conditions on the model (active stress, vascular pressures, and fluxes) are a priori unknown and also dependent on the parameters to be calibrated (and vice versa), we perform parameter calibration and periodic state condition estimation simultaneously. After a couple of heart beats, the calibration algorithm converges to a settled, periodic state because of conservation of blood volume within the closed-loop circulatory system. The proposed model and multilevel calibration method are cost-efficient and allow for an efficient determination of a patient-specific in silico heart model that reproduces physiological observations very well. Such an individual and state accurate model is an important predictive tool in intervention planning, assist device engineering and other medical applications.
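The flavor of the reduced-order circulatory side can be conveyed by the classical two-element Windkessel, the simplest 0-dimensional pressure-flow model: C dp/dt = Q(t) - p/R. The paper's actual 0D model is a closed-loop system with valves and atrial chambers; this sketch, with invented parameter values, only illustrates the 0D modeling idea.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 1.0   # peripheral resistance, mmHg*s/ml (hypothetical)
C = 1.5   # arterial compliance, ml/mmHg (hypothetical)

def inflow(t):
    """Pulsatile inflow: systolic ejection during the first 0.3 s of a 0.8 s beat."""
    tc = t % 0.8
    return 300.0 * np.sin(np.pi * tc / 0.3) if tc < 0.3 else 0.0

def windkessel(t, p):
    # Two-element Windkessel: C dp/dt = Q(t) - p/R
    return [(inflow(t) - p[0] / R) / C]

sol = solve_ivp(windkessel, [0.0, 8.0], [80.0], max_step=1e-3)
print(sol.y[0].min(), sol.y[0].max())   # diastolic/systolic-like pressure range
```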
NASA Astrophysics Data System (ADS)
Ortolano, Gaetano; Visalli, Roberto; Godard, Gaston; Cirrincione, Rosolino
2018-06-01
We present a new ArcGIS®-based tool developed in the Python programming language for calibrating EDS/WDS X-ray element maps, with the aim of acquiring quantitative information of petrological interest. The calibration procedure is based on a multiple linear regression technique that takes into account interdependence among elements and is constrained by the stoichiometry of minerals. The procedure requires an appropriate number of spot analyses for use as internal standards and provides several test indexes for a rapid check of calibration accuracy. The code is based on an earlier image-processing tool designed primarily for classifying minerals in X-ray element maps; the original Python code has now been enhanced to yield calibrated maps of mineral end-members or the chemical parameters of each classified mineral. The semi-automated procedure can be used to extract a dataset that is automatically stored within queryable tables. As a case study, the software was applied to an amphibolite-facies garnet-bearing micaschist. The calibrated images obtained for both anhydrous (i.e., garnet and plagioclase) and hydrous (i.e., biotite) phases show a good fit with corresponding electron microprobe analyses. This new GIS-based tool package can thus find useful application in petrology and materials science research. Moreover, the huge quantity of data extracted opens new opportunities for the development of a thin-section microchemical database that, using a GIS platform, can be linked with other major global geoscience databases.
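The core calibration step described above, relating raw X-ray count maps to concentrations by multiple linear regression against internal-standard spot analyses, can be sketched as follows; the counts, standards, and coefficients are synthetic, and the actual tool additionally enforces mineral stoichiometry and reports accuracy test indexes.

```python
import numpy as np

rng = np.random.default_rng(0)
# Raw count-rate maps for three channels, flattened to (pixels, channels).
counts = rng.random((10000, 3)) * 100.0          # hypothetical X-ray count maps

# Internal standards: pixels with known concentrations from spot analyses.
std_idx = rng.choice(10000, size=30, replace=False)
true_coeff = np.array([0.12, 0.03, -0.01])       # interdependence among channels
std_conc = counts[std_idx] @ true_coeff + 0.5    # synthetic "measured" wt%

# Multiple linear regression (with intercept) relating counts to concentration.
A = np.column_stack([counts[std_idx], np.ones(len(std_idx))])
coeff, *_ = np.linalg.lstsq(A, std_conc, rcond=None)

# Apply the calibration to every pixel to obtain a quantitative map.
calibrated_map = counts @ coeff[:3] + coeff[3]
print(calibrated_map.mean())
```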
Sutherland, J. C.
2016-07-20
Photoelastic modulators can alter the polarization state of a beam of ultraviolet, visible or infrared photons by means of periodic stress-induced differences in the refractive index of a transparent material that forms the optical element of the device and is isotropic in the absence of stress. They have found widespread application in instruments that characterize or alter the polarization state of a beam in fields as diverse as astronomy, structural biology, materials science and ultraviolet lithography for the manufacture of nano-scale integrated circuits. Measurement of circular dichroism, the differential absorption of left- and right-circularly polarized light, and of strain-induced birefringence of optical components are major applications. Instruments using synchrotron radiation and photoelastic modulators with CaF2 optical elements have extended circular dichroism measurements down to wavelengths of about 130 nm in the vacuum ultraviolet. Maintaining a constant phase shift between two orthogonal polarization states across a spectrum requires that the amplitude of the modulated stress be changed as a function of wavelength. For commercially available photoelastic modulators, the voltage that controls the amplitude of modulation required to produce a specified phase shift, which is a surrogate for the stress modulation amplitude, has been shown to be an approximately linear function of wavelength in the spectral region where the optical element is transparent. However, extrapolations of such straight lines cross zero voltage at a non-zero wavelength, not at zero wavelength. For modulators with calcium fluoride and fused silica optical elements, the zero-crossing wavelength is always in the spectral region where the optical element of the modulator strongly absorbs the incident radiation, and at a wavelength less than the longest-wavelength apparent resonance deduced from experimental values of the refractive index fit to the Sellmeier equation. Using a model that relates the refractive indices of a stressed optical element to the refractive index of its unstressed state, an expression for the modulator control voltage was derived that closely fits the experimental data. This result provides a theoretical rationale for the apparently linear constant-phase programming voltage, and thus provides theoretical backing for the calibration procedure frequently used for these modulators. Lastly, other factors that can influence the calibration of a photoelastic modulator, including temperature and atmospheric pressure, are discussed briefly.
Low-cost precision rotary index calibration
NASA Astrophysics Data System (ADS)
Ng, T. W.; Lim, T. S.
2005-08-01
The traditional method for calibrating the angular indexing repeatability of rotary axes on machine tools and measuring equipment uses a precision polygon (usually 12-sided) together with an autocollimator or angular interferometer. Such a setup is typically expensive. Here, we propose a far more cost-effective approach that uses just a laser, a diffractive optical element, and a CCD camera. We show that high accuracy can be achieved for angular index calibration.
NASA Astrophysics Data System (ADS)
Mei, Yaguang; Cheng, Yuxin; Cheng, Shusen; Hao, Zhongqi; Guo, Lianbo; Li, Xiangyou; Zeng, Xiaoyan
2017-10-01
During iron-making in a blast furnace, the Si content of the liquid pig iron is usually used to evaluate the quality of the liquid iron and the thermal state of the furnace, yet no effective method has been available for rapidly measuring the Si concentration of liquid iron. Laser-induced breakdown spectroscopy (LIBS) is an atomic emission spectrometry technique based on laser ablation. Its chief advantage is enabling rapid, in-situ, online analysis of elemental concentrations in open air without sample pretreatment. The characteristics of Si in liquid iron were analyzed from the standpoints of thermodynamic theory and metallurgical practice, and the relationships between Si and C, Mn, S, P and other alloying elements were revealed through thermodynamic calculation. Subsequently, LIBS was applied to the rapid detection of Si in pig iron. During the LIBS measurements, several groups of standard pig iron samples were employed to calibrate the Si content. Calibration methods including linear, quadratic and cubic internal-standard calibration, multivariate linear calibration and partial least squares (PLS) were compared. The comparison showed that PLS improved by normalization was the best calibration method for Si detection by LIBS.
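As a minimal sketch of the internal-standard calibration options compared above (the Si/Fe intensity ratios and certified Si contents below are invented for illustration; the actual spectra and samples are not reproduced here), polynomial calibrations of different order can be fit and compared as follows:

```python
# Hypothetical internal-standard calibration for LIBS Si determination.
# Line-intensity ratios and certified Si contents are assumed values.
import numpy as np

ratio = np.array([0.21, 0.35, 0.52, 0.66, 0.81, 0.95])  # I(Si)/I(Fe), assumed
si_wt = np.array([0.30, 0.55, 0.85, 1.10, 1.40, 1.65])  # certified Si, wt.%, assumed

# Linear, quadratic, and cubic internal-standard calibrations.
for degree in (1, 2, 3):
    coeffs = np.polyfit(ratio, si_wt, degree)
    pred = np.polyval(coeffs, ratio)
    rmse = np.sqrt(np.mean((pred - si_wt) ** 2))
    print(f"degree {degree}: calibration RMSE = {rmse:.3f} wt.%")
```

A multivariate method such as PLS would instead regress the Si content on the full (normalized) spectrum rather than on a single line ratio.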
NASA Astrophysics Data System (ADS)
Labrador, A. W.; Sollitt, L. S.; Cohen, C.; Cummings, A. C.; Leske, R. A.; Mason, G. M.; Mewaldt, R. A.; Stone, E. C.; von Rosenvinge, T. T.; Wiedenbeck, M. E.
2017-12-01
We have estimated mean high-energy ionic charge states of solar energetic particles (SEPs) using the Sollitt et al. (2008) method. The method applies to abundant elements (e.g. N, O, Ne, Mg, Si, and Fe) in SEP events at the energy ranges covered by the STEREO/LET instrument (e.g. 2.7-70 MeV/nuc for Fe) and the ACE/SIS instrument (e.g. 11-168 MeV/nuc for Fe). The method starts by fitting SEP time-intensity profiles during the decay phase of a given, large SEP event in order to obtain energy-dependent decay times. The mean charge state for each element is estimated from the relationship between the energy dependence of its decay times and that of selected calibration references. For simultaneous estimates among multiple elements, we assume a common rigidity dependence across all elements. Earlier calculations by Sollitt et al. incorporated helium time-intensity profile fits with an assumed charge state of 2. Subsequent analysis dropped helium as a reference element, for simplicity, but we have recently reincorporated He for calibration, from either STEREO/LET or ACE/SIS data, combined with C as an additional reference element with an assumed mean charge state of 5.9. Here we present validation of the reanalysis using data from the 8 March 2012 SEP event in ACE data and the 28 September 2012 event in STEREO data. We also introduce additional low-energy He from publicly available ACE/ULEIS and STEREO/SIT data, which should further constrain the charge-state calibration. Better charge-state calibration could yield more robust convergence to physical solutions for SEP events for which this method has not previously yielded results. We therefore also present analysis for additional SEP events from 2005 to 2017, and we investigate the conditions under which this method does or does not yield charge states.
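In hedged outline (our notation; the full method is given in Sollitt et al. 2008): for kinetic energy per nucleon $E$, an ion of nucleon number $A$ and mean charge $Q$ has magnetic rigidity

$$R = \frac{A}{Q}\,\sqrt{E\,\bigl(E + 2\,m_u c^2\bigr)},$$

with $m_u c^2 \approx 931.5$ MeV. If the decay time is assumed to follow one common rigidity dependence $\tau(R)$ for all species, the $Q$ that maps an element's measured $\tau(E)$ onto the curve of a reference element of known charge state is the estimated mean charge state.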
Calibrator device for the extrusion of cable coatings
NASA Astrophysics Data System (ADS)
Garbacz, Tomasz; Dulebová, Ľudmila; Spišák, Emil; Dulebová, Martina
2016-05-01
This paper presents selected results of theoretical and experimental work on a new calibration device (calibrator) used in producing coatings for electric cables. The aim of this study is to present the design of the calibration equipment and a new calibration machine, an important element of modernized extrusion lines for coating cables. As a result of the extrusion of PVC modified with blowing agents, an extrudate in the form of an electrical cable was obtained. The conditions of the extrusion process were properly selected, which made it possible to obtain a product with a solid external surface and a cellular core.
Aquarius L-Band Radiometers Calibration Using Cold Sky Observations
NASA Technical Reports Server (NTRS)
Dinnat, Emmanuel P.; Le Vine, David M.; Piepmeier, Jeffrey R.; Brown, Shannon T.; Hong, Liang
2015-01-01
An important element in the calibration plan for the Aquarius radiometers is to look at the cold sky. This involves rotating the satellite 180 degrees from its nominal Earth-viewing configuration to point the main beams at the celestial sky. At L-band, the cold sky provides a stable, well-characterized scene to be used as a calibration reference. This paper describes the cold sky calibration for Aquarius and how it is used as part of the absolute calibration. Cold sky observations helped establish the radiometer bias by correcting for an error in the spillover lobe of the antenna pattern, and helped monitor the long-term radiometer drift.
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1979-01-01
Cumulative rain-fade statistics are used by space communications engineers to establish transmitter power and receiver sensitivities for systems operating under various geometries, climates, and radio frequencies. Space-diversity performance criteria are also of interest. This work reviews the many elements involved in employing single-frequency, nonattenuating radars to arrive at the desired information. The elements examined include radar techniques and requirements, phenomenological assumptions, path-attenuation formulations and procedures, as well as error budgeting and calibration analysis. The pertinent results of previous investigators who have used radar for rain-attenuation modeling are included, and suggestions are made for improving present methods.
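For context, a common formulation of the radar-derived path attenuation referred to above (a standard textbook form, not necessarily the exact one reviewed) relates specific attenuation to the measured reflectivity through an empirical power law integrated along the path,

$$A = \int_0^{r_0} a\,Z(r)^{\,b}\,\mathrm{d}r,$$

where $Z(r)$ is the reflectivity measured at the nonattenuating frequency and $a$, $b$ are empirical coefficients that depend on the attenuated frequency and the assumed drop-size distribution.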
Multi-Frequency Harmonics Technique for HIFU Tissue Treatment
NASA Astrophysics Data System (ADS)
Rybyanets, Andrey N.; Lugovaya, Maria A.; Rybyanets, Anastasia A.
2010-03-01
A new technique for enhancing tissue lysis and enlarging the treatment volume during a single HIFU sonication is proposed. The technique consists of simultaneous or alternating (at an optimal repetition frequency) excitation of a single-element HIFU transducer at frequencies corresponding to odd natural harmonics of the piezoceramic element, at ultrasound energy levels sufficient to produce cavitational, thermal or mechanical damage of fat cells at each of the aforementioned frequencies. Calculation and FEM modeling of transducer vibrations and acoustic field patterns for different frequency sets were performed. Acoustic pressure in the focal plane was measured in water using a calibrated hydrophone and a 3D acoustic scanning system. In vitro experiments on different tissues and phantoms confirmed the advantages of the multifrequency harmonic method.
Single-Vector Calibration of Wind-Tunnel Force Balances
NASA Technical Reports Server (NTRS)
Parker, P. A.; DeLoach, R.
2003-01-01
An improved method of calibrating a wind-tunnel force balance involves the use of a unique load application system integrated with formal experimental design methodology. The Single-Vector Force Balance Calibration System (SVS) overcomes the productivity and accuracy limitations of prior calibration methods. A force balance is a complex structural spring element instrumented with strain gauges for measuring three orthogonal components of aerodynamic force (normal, axial, and side force) and three orthogonal components of aerodynamic torque (rolling, pitching, and yawing moments). Force balances remain the state-of-the-art instruments that provide these measurements on a scale model of an aircraft during wind tunnel testing. Ideally, each electrical channel of the balance would respond only to its respective component of load and have no response to other components of load. This is not entirely possible even though balance designs are optimized to minimize these undesirable interaction effects. Ultimately, a calibration experiment is performed to obtain the data necessary to generate a mathematical model and determine the force measurement accuracy. In order to set the independent variables of applied load for the calibration experiment, a high-precision mechanical system is required. Manual deadweight systems have been in use at Langley Research Center (LaRC) since the 1940s. These simple methodologies produce high-confidence results, but the process is mechanically complex and labor-intensive, requiring three to four weeks to complete. Over the past decade, automated balance calibration systems have been developed. In general, these systems were designed to automate the tedious manual calibration process, resulting in an even more complex system that degrades load application quality. The current calibration approach relies on a one-factor-at-a-time (OFAT) methodology, where each independent variable is incremented individually throughout its full-scale range while all other variables are held at a constant magnitude. This OFAT approach has been widely accepted because of its inherent simplicity and intuitive appeal to the balance engineer. LaRC has been conducting research in a "modern design of experiments" (MDOE) approach to force balance calibration. Formal experimental design techniques provide an integrated view of the entire calibration process, covering all three major aspects of an experiment: the design of the experiment, the execution of the experiment, and the statistical analyses of the data. In order to overcome the weaknesses in the available mechanical systems and to apply formal experimental techniques, a new mechanical system was required. The SVS enables the complete calibration of a six-component force balance with a series of single force vectors.
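A minimal sketch of the mathematical-model step described above (all numbers are simulated; a real calibration uses designed load schedules, not random ones): each bridge output is regressed on the applied load components plus their pairwise products, so the off-diagonal coefficients capture the interaction effects.

```python
# Hypothetical force-balance calibration fit: six bridge outputs regressed
# on six applied load components plus pairwise interaction terms.
# All data here are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_points = 60
F = rng.uniform(-1.0, 1.0, size=(n_points, 6))        # applied loads (normalized)

# Regressor matrix: linear terms plus products F_i * F_j (i <= j).
inter = [(F[:, i] * F[:, j])[:, None] for i in range(6) for j in range(i, 6)]
X = np.hstack([F] + inter)

# Simulated bridge outputs: strong diagonal sensitivity, weak interactions.
C_true = 0.01 * rng.normal(size=(X.shape[1], 6))
C_true[:6, :6] += np.eye(6)
R = X @ C_true + 1e-4 * rng.normal(size=(n_points, 6))

C_hat, *_ = np.linalg.lstsq(X, R, rcond=None)          # calibration coefficients
print("max coefficient error:", np.abs(C_hat - C_true).max())
```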
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guzik, J.A.; Swenson, F.J.
We compare the thermodynamic and helioseismic properties of solar models evolved using three different equation of state (EOS) treatments: the Mihalas, Däppen & Hummer EOS tables (MHD); the latest Rogers, Swenson, & Iglesias EOS tables (OPAL); and a new analytical EOS (SIREFF) developed by Swenson et al. All of the models include diffusive settling of helium and heavier elements. The models use updated OPAL opacity tables based on the 1993 Grevesse & Noels solar element mixture, incorporating 21 elements instead of the 14 elements used for earlier tables. The properties of solar models that are evolved with the SIREFF EOS agree closely with those of models evolved using the OPAL or MHD tables. However, unlike the MHD or OPAL EOS tables, the SIREFF in-line EOS can readily account for variations in overall Z abundance and the element mixture resulting from nuclear processing and diffusive element settling. Accounting for Z abundance variations in the EOS has a small, but non-negligible, effect on model properties (e.g., pressure or squared sound speed), as much as 0.2% at the solar center and in the convection zone. The OPAL and SIREFF equations of state include electron exchange, which produces models requiring a slightly higher initial helium abundance, and increases the convection zone depth compared to models using the MHD EOS. However, the updated OPAL opacities are as much as 5% lower near the convection zone base, resulting in a small decrease in convection zone depth. The calculated low-degree nonadiabatic frequencies for all of the models agree with the observed frequencies to within a few microhertz (0.1%). The SIREFF analytical calibrations are intended to work over a wide range of interior conditions found in stellar models of mass greater than 0.25 M⊙ and evolutionary states from pre-main-sequence through the asymptotic giant branch (AGB). It is significant that the SIREFF EOS produces solar models that both measure up to the stringent requirements imposed by solar oscillation observations and inferences, and are more versatile than EOS tables. © 1997 The American Astronomical Society
Automated Calibration of Atmospheric Oxidized Mercury Measurements.
Lyman, Seth; Jones, Colleen; O'Neil, Trevor; Allen, Tanner; Miller, Matthieu; Gustin, Mae Sexauer; Pierce, Ashley M; Luke, Winston; Ren, Xinrong; Kelley, Paul
2016-12-06
The atmosphere is an important reservoir for mercury pollution, and understanding of oxidation processes is essential to elucidating the fate of atmospheric mercury. Several recent studies have shown that a low bias exists in a widely applied method for atmospheric oxidized mercury measurements. We developed an automated, permeation-tube-based calibrator for elemental and oxidized mercury, and we integrated this calibrator with atmospheric mercury instrumentation (Tekran 2537/1130/1135 speciation systems) in Reno, Nevada and at Mauna Loa Observatory, Hawaii, U.S.A. While the calibrator has limitations, it was able to routinely inject stable amounts of HgCl2 and HgBr2 into atmospheric mercury measurement systems over periods of several months. In Reno, recovery of injected mercury compounds as gaseous oxidized mercury (as opposed to elemental mercury) decreased with increasing specific humidity, as has been shown in other studies, although this trend was not observed at Mauna Loa, likely due to differences in atmospheric chemistry at the two locations. Recovery of injected mercury compounds as oxidized mercury was greater at Mauna Loa than in Reno, and greater still for a cation-exchange membrane-based measurement system. These results show that routine calibration of atmospheric oxidized mercury measurements is both feasible and necessary.
Experimental determination and modelling of the swelling speed of a hydrogel polymer
NASA Astrophysics Data System (ADS)
Lenk, Sándor; Majoros, Tamás; Beleznai, Szabolcs; Ujhelyi, Ferenc; Péczeli, Imre; Karda, Zsolt; Barócsi, Attila
2018-03-01
When a hydrophilic intraocular lens material is immersed, its volume and mass start to increase due to the diffusion of water (or isotonic saline solution), reaching a quasi-equilibrium on a time scale of several hours. Here, we present a combination of atomic force and confocal microscopy to measure the axial swelling speed of such polymers in distilled water. The measurements are used for the experimental verification of a simplistic finite element model developed for engineering applications in the COMSOL environment. The model is calibrated with the temporal change of the sample mass. The swelling velocity is found to be inversely proportional to the square root of time.
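The reported inverse-square-root behavior is characteristic of diffusion-limited uptake; in a hedged, generic form (our notation, not the paper's equations),

$$\frac{\mathrm{d}h}{\mathrm{d}t} = \frac{k}{\sqrt{t}} \;\Longrightarrow\; h(t) = h_0 + 2k\sqrt{t},$$

where $h$ is the axial sample thickness and $k$ an effective rate constant, consistent with one-dimensional Fickian diffusion of water into the polymer.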
Headridge, J B; Smith, D R
1972-07-01
An induction-heated graphite furnace, coupled to a Unicam SP 90 atomic-absorption spectrometer, is described for the direct determination of trace elements in metals and alloys. The furnace is capable of operation at temperatures up to 2400 degrees, and has been used to obtain calibration graphs for the determination of ppm quantities of bismuth in lead-base alloys, cast irons and stainless steels, and for the determination of cadmium at the ppm level in zinc-base alloys. Milligram samples of the alloys were atomized directly. Calibration graphs for the determination of the elements in solutions were obtained for comparison. The accuracy and precision of the determination are presented and discussed.
Calibrating GPS With TWSTFT For Accurate Time Transfer
2008-12-01
Jiang, Z., et al. (40th Annual Precise Time and Time Interval (PTTI) Meeting.) The two primary time transfer techniques are GPS and TWSTFT (Two-Way Satellite Time and Frequency Transfer, TW for short); 83% of UTC time links are […]
NASA Astrophysics Data System (ADS)
Becker, Johanna Sabine
2002-12-01
Inductively coupled plasma mass spectrometry (ICP-MS) and laser ablation ICP-MS (LA-ICP-MS) have been applied as the most important inorganic mass spectrometric techniques with multielement capability for the characterization of solid samples in materials science. ICP-MS is used for the sensitive determination of trace and ultratrace elements in digested solutions of solid samples or of process chemicals (ultrapure water, acids and organic solutions) for the semiconductor industry, with detection limits down to sub-picogram-per-liter levels. Whereas ICP-MS of solid samples (e.g. high-purity ceramics) sometimes requires time-consuming sample preparation, with the risk of contamination as a serious drawback, LA-ICP-MS allows fast, direct determination of trace elements in solid materials without any sample preparation. The detection limits for the direct analysis of solid samples by LA-ICP-MS have been determined for many elements down to the nanogram-per-gram range. A deterioration of detection limits was observed for elements subject to interferences from polyatomic ions. The inherent interference problem can often be solved by applying a double-focusing sector-field mass spectrometer at higher mass resolution or by collision-induced reactions of polyatomic ions with a collision gas using an ICP-MS fitted with a collision cell. The main problem of LA-ICP-MS is quantification when no suitable standard reference materials with a similar matrix composition are available. The calibration problem in LA-ICP-MS can be solved using on-line solution-based calibration, and different procedures, such as external calibration and standard addition, are discussed with respect to their application in materials science. The application of isotope dilution in solution-based calibration for trace metal determination in small amounts of noble metals has been developed as a new calibration strategy. This review discusses new analytical developments and possible applications of ICP-MS and LA-ICP-MS for the quantitative determination of trace elements and in surface analysis for materials science.
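As a hedged aside on the isotope-dilution calibration mentioned above (a standard simplified relation, not quoted from the review; atomic-weight and blank corrections are omitted): after blending the sample with an isotopically enriched spike, the analyte amount follows from isotope ratios alone,

$$N_x = N_s\,\frac{R_s - R_m}{R_m - R_x},$$

where $N_s$ and $N_x$ are the amounts of the reference isotope in spike and sample, $R_s$ and $R_x$ are the isotope ratios of spike and sample, and $R_m$ is the measured ratio of the blend.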
Lee, J.K.; Bennett, C. S.
1981-01-01
A two-dimensional finite element surface water model was used to study the hydraulic impact of the proposed Interstate Route 326 crossing of the Congaree River near Columbia, SC. The finite element model was assessed as a potential operational tool for analyzing complex highway crossings and other modifications of river flood plains. Infrared aerial photography was used to define regions of homogeneous roughness in the flood plain. Finite element networks approximating flood plain topography were designed using elements of three roughness types. High water marks established during an 8-yr flood that occurred in October 1976 were used to calibrate the model. The maximum flood of record, an approximately 100-yr flood that occurred in August 1908, was modeled in three cases: dikes on the right bank, dikes on the left bank, and dikes on both banks. In each of the three cases, simulations were performed both without and with the proposed highway embankments in place. Detailed information was obtained about backwater effects upstream from the proposed highway embankments, changes in flow distribution resulting from the embankments, and local velocities in the bridge openings. On the basis of results from the model study, the South Carolina Department of Highways and Public Transportation changed the design of several bridge openings. A simulation incorporating the new design for the case with dikes on the left bank indicated that both velocities in the bridge openings and backwater were reduced. A major problem in applying the model was the difficulty in predicting the network detail necessary to avoid local errors caused by roughness discontinuities and large depth gradients.
Cagnazzo, M; Borio di Tigliole, A; Böck, H; Villa, M
2018-05-01
The aim of this work was the measurement of the fission-product activity distribution along the axial dimension of irradiated fuel elements (FEs) at the TRIGA Mark II research reactor of the Technische Universität (TU) Wien. The activity distribution was measured by means of a customized fuel gamma scanning device, which includes a vertical lifting system to move the fuel rod along its vertical axis. For each investigated FE, a gamma spectrum was measured along the vertical axis in steps of 1 cm in order to determine the axial distribution of the fission products. After the fuel elements underwent a relatively short cooling-down period, different fission products were detected. The activity concentration was determined by calibrating the gamma detector with a standard calibration source of known activity and by MCNP6 simulations for the evaluation of self-absorption and geometric effects. Given the specific TRIGA fuel composition, a correction procedure was developed and used in this work for the measurement of the fission product Zr-95. This measurement campaign is part of a more extensive project aiming at modelling the TU Wien TRIGA reactor with different calculation codes (MCNP6, Serpent); the experimental results presented in this paper will subsequently be used to benchmark the models developed with these codes.
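A hedged sketch of the activity-determination step (the standard gamma-spectrometry form, with our own symbols): the activity follows from the net peak counts via

$$A = \frac{N_{\mathrm{peak}}}{\varepsilon\;I_{\gamma}\;t_{\mathrm{live}}\;f_{\mathrm{corr}}},$$

where $\varepsilon$ is the detector efficiency obtained from the standard calibration source, $I_{\gamma}$ the gamma emission probability of the measured line, $t_{\mathrm{live}}$ the live measurement time, and $f_{\mathrm{corr}}$ the MCNP6-derived correction for self-absorption and geometry.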
Experimental facility for testing nuclear instruments for planetary landing missions
NASA Astrophysics Data System (ADS)
Golovin, Dmitry; Mitrofanov, Igor; Litvak, Maxim; Kozyrev, Alexander; Sanin, Anton; Vostrukhin, Andrey
2017-04-01
The experimental facility for testing and calibration of nuclear planetology instruments has been built in the frame of cooperation between JINR and the Space Research Institute (Moscow). A Martian soil model made of silicate glass, with dimensions of 3.82 x 3.21 m and a total weight of nearly 30 tons, has been assembled in the facility. The glass material was chosen to imitate dry Martian regolith. A heterogeneous model was proposed and developed to achieve the closest possible similarity with Martian soil in terms of average elemental composition, by adding layers of the necessary materials, such as iron, aluminum, and chlorine. The presence of subsurface water ice is simulated by adding layers of polyethylene at different depths inside the glass model assembly. A neutron generator was used as a neutron source to induce characteristic gamma rays for testing active neutron and gamma spectrometers designed to determine the elemental composition of the model. The instrumentation was able to detect gamma lines attributed to H, O, Na, Mg, Al, Si, Cl, K, Ca and Fe; the identified elements compose up to 95 wt.% of the total mass of the planetary soil model. These results will be used in designing scientific instruments for experiments in active neutron and gamma-ray spectroscopy on planetary surfaces during the Russian and international missions Luna-Glob, Luna-Resource and ExoMars-2020.
Predicting tidal currents in San Francisco Bay using a spectral model
Burau, Jon R.; Cheng, Ralph T.
1988-01-01
This paper describes the formulation of a spectral (or frequency based) model which solves the linearized shallow water equations. To account for highly variable basin bathymetry, spectral solutions are obtained using the finite element method which allows the strategic placement of the computation points in the specific areas of interest or in areas where the gradients of the dependent variables are expected to be large. Model results are compared with data using simple statistics to judge overall model performance in the San Francisco Bay estuary. Once the model is calibrated and verified, prediction of the tides and tidal currents in San Francisco Bay is accomplished by applying astronomical tides (harmonic constants deduced from field data) at the prediction time along the model boundaries.
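A hedged sketch of the frequency-domain formulation (generic for spectral tidal models; not the paper's exact equations): the response is expanded in tidal constituents,

$$\eta(\mathbf{x},t) = \operatorname{Re}\sum_{k}\hat{\eta}_{k}(\mathbf{x})\,e^{\,i\omega_{k}t},$$

where each complex amplitude $\hat{\eta}_{k}$ satisfies a time-independent boundary-value problem at frequency $\omega_{k}$, solved in space by the finite element method with the harmonic constants applied as open-boundary forcing.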
NASA Technical Reports Server (NTRS)
Mitchell, Alissa; Capon, Thomas; Guzek, Jeffrey; Hakun, Claef; Haney, Paul; Koca, Corina
2014-01-01
Calibration and testing of the instruments on the Integrated Science Instrument Module (ISIM) of the James Webb Space Telescope (JWST) is being performed by the use of a cryogenic, full-field, optical simulator that was constructed for this purpose. The Pupil Select Mechanism (PSM) assembly is one of several mechanisms and optical elements that compose the Optical Telescope Element SIMulator, or OSIM. The PSM allows for several optical elements to be inserted into the optical plane of OSIM, introducing a variety of aberrations, distortions, obscurations, and other calibration states into the pupil plane. The following discussion focuses on the details of the design evolution, analysis, build, and test of this mechanism along with the challenges associated with creating a sub arc-minute positioning mechanism operating in an extreme cryogenic environment. In addition, difficult challenges in the control system design will be discussed including the incorporation of closed-loop feedback control into a system that was designed to operate in an open-loop fashion.
Guillong, M.; Hametner, K.; Reusser, E.; Wilson, S.A.; Gunther, D.
2005-01-01
New glass reference materials GSA-1G, GSC-1G, GSD-1G and GSE-1G have been characterised using a prototype solid state laser ablation system capable of producing wavelengths of 193 nm, 213 nm and 266 nm. This system allowed comparison of the effects of different laser wavelengths under nearly identical ablation and ICP operating conditions. The wavelengths 213 nm and 266 nm were also used at higher energy densities to evaluate the influence of energy density on quantitative analysis. In addition, the glass reference materials were analysed using commercially available 266 nm Nd:YAG and 193 nm ArF excimer lasers. Laser ablation analysis was carried out using both single spot and scanning mode ablation. Using laser ablation ICP-MS, concentrations of fifty-eight elements were determined with external calibration against the NIST SRM 610 glass reference material. Instead of applying the more common internal standardisation procedure, the sum of all element oxide concentrations was normalised to 100%. Major element concentrations were compared with those determined by electron microprobe. In addition to NIST SRM 610 for external calibration, USGS BCR-2G was used as a more closely matrix-matched reference material in order to compare the effect of matrix-matched and non-matrix-matched calibration on quantitative analysis. The results show that the various laser wavelengths and energy densities applied produced similar results, with the exception of scanning mode ablation at 266 nm without matrix-matched calibration, where deviations up to 60% from the average were found. However, results acquired using a scanning mode with a matrix-matched calibration agreed with results obtained by spot analysis. The increased abundance of large particles produced when using a scanning ablation mode with NIST SRM 610 is responsible for elemental fractionation effects caused by incomplete vaporisation of large particles in the ICP.
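A minimal sketch of the 100% oxide-sum normalization used here in place of an internal standard (all sensitivities, intensities, and the three-element menu are assumed placeholders, not data from the paper):

```python
# Hypothetical 100% oxide-sum normalization for LA-ICP-MS quantification.
# Values are placeholders for illustration only.
sens = {"Si": 120.0, "Ca": 95.0, "Fe": 80.0}      # counts per (ug/g), from NIST SRM 610
counts = {"Si": 3.1e6, "Ca": 1.2e6, "Fe": 0.6e6}  # measured intensities in the sample
oxide_factor = {"Si": 2.139, "Ca": 1.399, "Fe": 1.286}  # element-to-oxide mass factors

raw = {el: counts[el] / sens[el] for el in sens}  # provisional concentrations, ug/g
oxide_sum = sum(raw[el] * oxide_factor[el] for el in raw)
scale = 1e6 / oxide_sum                           # force oxide total to 100 wt.% (1e6 ug/g)
conc = {el: raw[el] * scale for el in raw}
print(conc)
```

The scaling step removes the need to know an internal-standard concentration in the unknown, at the cost of assuming all major oxides are measured.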
NASA Astrophysics Data System (ADS)
Dyar, M. D.; Nelms, M.; Breves, E. A.
2012-12-01
Laser-induced breakdown spectroscopy (LIBS), as implemented on the ChemCam instrument on Mars Science Laboratory and proposed for the New Frontiers SAGE mission to Venus, can analyze elements from H to Pb from up to 7 m standoff. This study examines the capabilities of LIBS to analyze H, O, B, Be, and Li under conditions simulating Earth, the Moon, and Mars. Of these, H is a major constituent of clay minerals and a key indicator of the presence of water. Its abundance in terrestrial materials ranges from 0 ppm up to tens of wt.% H2O in hydrated sulfates and clays, with prominent emission lines occurring ca. 656.4 nm. O is an important indicator of atmospheric and magmatic coevolution, and has lines ca. 615.8, 656.2, 777.6, and 844.8 nm. Unfortunately there are very few geological samples in which O has been directly measured, but stoichiometry suggests that O varies from ca. 0 wt.% in sulfides to 21% in ferberite, 32% in ilmenite, 42% in amphiboles, 53% in quartz, 63% in melanterite, and 71% in epsomite. Li (lines at 413.3, 460.4, and 670.9 nm in vacuum), B (412.3 nm), and Be (313.1 nm) are highly mobile elements and key indicators of interaction with water. Local atmospheric composition and pressure significantly influence LIBS plasma intensity because the local atmosphere and the breakdown products from the atmospheric species interact with the ablated surface material in the plasma. Measurement of light elements with LIBS therefore requires that spectra be acquired under conditions matching the remote environment. LIBS is critically dependent on the availability of well-characterized, homogeneous reference materials that are closely matched in matrix (composition and structure) to the sample being studied. In modern geochemistry, analyses of most major, minor, and trace elements are routinely made. However, quantitative determination of light-element concentrations in geological specimens still represents a major analytical challenge. Thus standards for which hydrogen, oxygen, and other light elements are directly measured are nearly nonexistent in the 1-2 g quantities needed for LIBS analyses. For this study, we obtained two sample suites that provide the calibrations needed for accurate analyses of H, O, B, Be, and Li in geological samples. The first suite of 11 samples was analyzed for oxygen by fast neutron activation analysis. The second suite includes 11 gem-quality minerals representing the major rock-forming species for B-, Li-, and Be-rich parageneses. Light elements were directly analyzed using a combination of EMPA, XRF, ion microprobe, uranium extraction, proton-induced gamma-ray emission (PIGE), and prompt gamma-ray neutron activation analysis (PGNAA). LIBS spectra were acquired at Mount Holyoke College under air, vacuum, and CO2 to simulate terrestrial, lunar, and martian environments. The spectra were then used to develop three separate calibration models (one for each environment), enabling LIBS characterization of light elements using multivariate analyses. Results show that when direct analyses of H, O, Li, B, and Be are used rather than loss-on-ignition, inferred, or indirectly calculated values, optimal root mean squared errors of prediction result. We are actively adding samples to these calibration suites, and we expect that prediction errors (accuracies) of <1 wt.% for these elements are possible.
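For reference, the figure of merit quoted above is conventionally defined as (standard definition, our notation):

$$\mathrm{RMSEP} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(\hat{y}_{i}-y_{i}\bigr)^{2}},$$

with $\hat{y}_{i}$ the concentrations predicted by the multivariate calibration model and $y_{i}$ the directly measured reference values for the $n$ prediction samples.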
Analytical modeling of structure-soil systems for lunar bases
NASA Technical Reports Server (NTRS)
Macari-Pasqualino, Jose Emir
1989-01-01
The study of the behavior of granular materials in a reduced-gravity environment and under low effective stresses became a subject of great interest in the mid-1960s, when NASA's Surveyor missions to the Moon began the first extraterrestrial investigations and it was found that lunar soils exhibited properties quite unlike those on Earth. The subject gained interest during the years of the Apollo missions and more recently due to NASA's plans for future exploration and colonization of the Moon and Mars. It has since been clear that a good understanding of the mechanical properties of granular materials under reduced gravity and at low effective stress levels is of paramount importance for the design and construction of surface and buried structures on these bodies. In order to achieve such an understanding it is desirable to develop a set of constitutive equations that describes the response of such materials as they are subjected to tractions and displacements. This presentation examines issues associated with conducting experiments on highly nonlinear granular materials under high and low effective stresses. The friction and dilatancy properties which affect the behavior of granular soils with low cohesion values are assessed. In order to simulate the highly nonlinear strength and stress-strain behavior of soils at low as well as high effective stresses, a versatile isotropic, pressure-sensitive, third-stress-invariant-dependent, cone-cap elasto-plastic constitutive model was proposed. The integration of the constitutive relations is performed via a fully implicit Backward Euler technique known as the Closest Point Projection Method. The model was implemented into a finite element code in order to study nonlinear boundary value problems associated with homogeneous as well as nonhomogeneous deformations at low as well as high effective stresses. The effect of gravity (self-weight) on the stress-strain-strength response of these materials is evaluated. The calibration of the model is performed via three techniques: (1) physical identification, (2) optimized calibration at the constitutive level, and (3) optimized calibration at the finite element level (inverse identification). Activities are summarized in graphic and outline form.
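To make the integration scheme concrete, here is a minimal sketch of a fully implicit backward-Euler return mapping (closest point projection) for a simple von Mises material; this is a deliberately simplified analogue, not the cone-cap model described above, and all names and values are assumed.

```python
# Hedged illustration: closest point projection (backward Euler return
# mapping) for a von Mises material with no hardening. A simplified
# stand-in for the cone-cap model of the abstract.
import numpy as np

def return_map(s_trial, k, mu):
    """Project a trial deviatoric stress back onto the yield surface.

    s_trial : trial deviatoric stress (principal deviators, assumed input)
    k       : yield stress parameter (assumed value)
    mu      : elastic shear modulus (assumed value)
    """
    norm = np.linalg.norm(s_trial)
    f = norm - np.sqrt(2.0 / 3.0) * k       # yield function at the trial state
    if f <= 0.0:
        return s_trial                       # elastic step: trial state admissible
    dgamma = f / (2.0 * mu)                  # consistency condition gives multiplier
    n = s_trial / norm                       # radial return direction
    return s_trial - 2.0 * mu * dgamma * n   # projected (admissible) stress

# Example: a plastic step with assumed parameters.
print(return_map(np.array([0.4, -0.2, -0.2]), k=0.3, mu=1.0))
```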
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reedlunn, Benjamin
Room D was an in-situ, isothermal, underground experiment conducted at the Waste Isolation Pilot Plant between 1984 and 1991. The room was carefully instrumented to measure the horizontal and vertical closure immediately upon excavation and for several years thereafter. Early finite element simulations of salt creep around Room D under-predicted the vertical closure by 4.5×, causing investigators to explore a series of changes to the way Room D was modeled. Discrepancies between simulations and measurements were resolved through a series of adjustments to model parameters, which were openly acknowledged in published reports. Interest in Room D has been rekindled recently by the U.S./German Joint Project III and Project WEIMOS, which seek to improve the predictions of rock salt constitutive models. Joint Project participants calibrate their models solely against laboratory tests, and benchmark the models against underground experiments, such as Room D. This report describes updating legacy Room D simulations to today's computational standards by rectifying several numerical issues. Subsequently, the constitutive model used in previous modeling is recalibrated two different ways against a suite of new laboratory creep experiments on salt extracted from the repository horizon of the Waste Isolation Pilot Plant. Simulations with the new, laboratory-based calibrations under-predict Room D vertical closure by 3.1×. A list of potential improvements is discussed.
NASA Technical Reports Server (NTRS)
Steen, Laura E.; Ide, Robert F.; Van Zante, Judith Foss
2017-01-01
The Icing Research Tunnel (IRT) at NASA Glenn has recently switched from using the Icing Blade to the SEA Multi-Element Sensor (also known as the multi-wire) for its calibration of cloud liquid water content. In order to perform this transition, tests were completed to compare the Multi-Element Sensor to the Icing Blade, particularly with respect to liquid water content, airspeed, and drop size. The two instruments were found to compare well for the majority of Appendix C conditions. However, it was discovered that the Icing Blade under-measures when the conditions approach the Ludlam Limit. This paper also describes data processing procedures for the Multi-Element Sensor in the IRT, including collection efficiency corrections, mounting underneath a splitter plate, and correcting for a jump in the compensation wire power. Further data are presented to describe the repeatability of the IRT with the Multi-Element Sensor, health-monitoring checks for the instrument, and a sensing-element configuration comparison.
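For orientation only, a hedged, generic power-balance relation for hot-wire liquid-water-content sensing (not the IRT's actual processing equations; symbols are ours):

$$\mathrm{LWC} = \frac{P_{\mathrm{wet}} - P_{\mathrm{dry}}}{V\,\ell\,d\,\bigl[c_w\,(T_b - T_a) + L_v\bigr]},$$

where $P_{\mathrm{wet}} - P_{\mathrm{dry}}$ is the excess electrical power needed to warm and evaporate impinging water, $V$ the airspeed, $\ell$ and $d$ the sensing element's length and diameter, $c_w$ the specific heat of water, $T_b$ the evaporation temperature, $T_a$ the ambient temperature, and $L_v$ the latent heat of vaporization. The Ludlam Limit corresponds to impingement rates beyond what this power budget can fully evaporate, which is where blade-type instruments begin to under-measure.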
The Extended HANDS Characterization and Analysis of Metric Biases
NASA Astrophysics Data System (ADS)
Kelecy, T.; Knox, R.; Cognion, R.
The Extended High Accuracy Network Determination System (Extended HANDS) consists of a network of low cost, high accuracy optical telescopes designed to support space surveillance and development of space object characterization technologies. Comprising off-the-shelf components, the telescopes are designed to provide sub arc-second astrometric accuracy. The design and analysis team are in the process of characterizing the system through development of an error allocation tree whose assessment is supported by simulation, data analysis, and calibration tests. The metric calibration process has revealed 1-2 arc-second biases in the right ascension and declination measurements of reference satellite position, and these have been observed to have fairly distinct characteristics that appear to have some dependence on orbit geometry and tracking rates. The work presented here outlines error models developed to aid in development of the system error budget, and examines characteristic errors (biases, time dependence, etc.) that might be present in each of the relevant system elements used in the data collection and processing, including the metric calibration processing. The relevant reference frames are identified, and include the sensor (CCD camera) reference frame, Earth-fixed topocentric frame, topocentric inertial reference frame, and the geocentric inertial reference frame. The errors modeled in each of these reference frames, when mapped into the topocentric inertial measurement frame, reveal how errors might manifest themselves through the calibration process. The error analysis results that are presented use satellite-sensor geometries taken from periods where actual measurements were collected, and reveal how modeled errors manifest themselves over those specific time periods. These results are compared to the real calibration metric data (right ascension and declination residuals), and sources of the bias are hypothesized. In turn, the actual right ascension and declination calibration residuals are also mapped to other relevant reference frames in an attempt to validate the source of the bias errors. These results will serve as the basis for more focused investigation into specific components embedded in the system and system processes that might contain the source of the observed biases.
Quantification of trace metals in infant formula premixes using laser-induced breakdown spectroscopy
NASA Astrophysics Data System (ADS)
Cama-Moncunill, Raquel; Casado-Gavalda, Maria P.; Cama-Moncunill, Xavier; Markiewicz-Keszycka, Maria; Dixit, Yash; Cullen, Patrick J.; Sullivan, Carl
2017-09-01
Infant formula is a human milk substitute generally based upon fortified cow milk components. In order to mimic the composition of breast milk, trace elements such as copper, iron and zinc are usually added in a single operation using a premix. The correct addition of premixes must be verified to ensure that the target levels in infant formulae are achieved. In this study, a laser-induced breakdown spectroscopy (LIBS) system was assessed as a fast validation tool for trace-element premixes. LIBS is a promising emission spectroscopic technique for elemental analysis, which offers real-time analyses, little to no sample preparation and ease of use. LIBS was employed for copper and iron determinations of premix samples ranging approximately from 0 to 120 mg/kg for Cu and 0 to 1640 mg/kg for Fe. LIBS spectra are affected by several parameters, hindering subsequent quantitative analyses. This work aimed at testing three matrix-matched calibration approaches (simple-linear regression, multi-linear regression and partial least squares regression (PLS)) as means for precision and accuracy enhancement of LIBS quantitative analysis. All calibration models were first developed using a training set and then validated with an independent test set. PLS yielded the best results. For instance, the PLS model for copper provided a coefficient of determination (R²) of 0.995 and a root mean square error of prediction (RMSEP) of 14 mg/kg. Furthermore, LIBS was employed to penetrate through the samples by repetitively measuring the same spot, so that LIBS spectra can be obtained as a function of sample layers. This information was used to explore whether measuring deeper into the sample could reduce possible surface-contaminant effects and provide better quantifications.
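A hedged sketch of the train/test PLS workflow described above (all spectra and concentrations are simulated; the real study used measured premix spectra):

```python
# Hypothetical matrix-matched PLS calibration for LIBS Cu quantification,
# validated on an independent test set with RMSEP as the figure of merit.
# All data are simulated for illustration only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_samples, n_channels = 40, 300
cu = rng.uniform(0.0, 120.0, n_samples)                 # reference Cu, mg/kg
basis = rng.normal(size=n_channels)                     # stand-in spectral signature
spectra = np.outer(cu, basis) + rng.normal(scale=5.0, size=(n_samples, n_channels))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, cu, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=4).fit(X_tr, y_tr)     # calibration on training set
pred = pls.predict(X_te).ravel()                        # independent validation
rmsep = np.sqrt(np.mean((pred - y_te) ** 2))
print(f"R^2 = {pls.score(X_te, y_te):.3f}, RMSEP = {rmsep:.1f} mg/kg")
```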
Aydin, Ümit; Vorwerk, Johannes; Küpper, Philipp; Heers, Marcel; Kugel, Harald; Galka, Andreas; Hamid, Laith; Wellmer, Jörg; Kellinghaus, Christoph; Rampp, Stefan; Wolters, Carsten Hermann
2014-01-01
To increase the reliability of the non-invasive determination of the irritative zone in presurgical epilepsy diagnosis, we introduce here a new experimental and methodological source analysis pipeline that combines the complementary information in EEG and MEG, and apply it to data from a patient suffering from refractory focal epilepsy. Skull conductivity parameters in a six-compartment finite element head model with brain anisotropy, constructed from individual MRI data, are estimated in a calibration procedure using somatosensory evoked potential (SEP) and field (SEF) data. These data are measured in a single run before acquisition of further runs of spontaneous epileptic activity. Our results show that even for single interictal spikes, volume conduction effects dominate over noise and need to be taken into account for accurate source analysis. While cerebrospinal fluid and brain anisotropy influence both modalities, only EEG is sensitive to skull conductivity, and conductivity calibration significantly reduces the difference in especially the depth localization of the two modalities, emphasizing its importance for combined EEG and MEG source analysis. Localization differences that are due to the distinct sensitivity profiles of EEG and MEG nevertheless persist. In case of a moderate error in skull conductivity, combined source analysis results can still profit from the different sensitivity profiles of EEG and MEG to accurately determine the location, orientation and strength of the underlying sources. On the other hand, significant errors in skull modeling are reflected in EEG reconstruction errors and could reduce the goodness of fit to combined datasets. For combined EEG and MEG source analysis, we therefore recommend calibrating skull conductivity using additionally acquired SEP/SEF data.
Assessing the Validity of the Simplified Potential Energy Clock Model for Modeling Glass-Ceramics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jamison, Ryan Dale; Grillet, Anne M.; Stavig, Mark E.
Glass-ceramic seals may be the future of hermetic connectors at Sandia National Laboratories. They have been shown capable of surviving higher temperatures and pressures than amorphous glass seals. More advanced finite-element material models are required to enable model-based design and provide evidence that the hermetic connectors can meet design requirements. Glass-ceramics are composite materials with both crystalline and amorphous phases; the latter gives rise to (non-linearly) viscoelastic behavior. Given their complex microstructures, glass-ceramics may be thermorheologically complex, a behavior outside the scope of currently implemented constitutive models at Sandia. It was nonetheless desired to assess whether the Simplified Potential Energy Clock (SPEC) model is capable of capturing the material response. Available data for the SL 16.8 glass-ceramic were used to calibrate the SPEC model. Model accuracy was assessed by comparing model predictions with the temperature dependence of the shear moduli and with high-temperature 3-point bend creep data. It is shown that the model can predict both. Analysis of the results is presented, along with suggestions for future experiments and model development. Though further calibration is likely necessary, SPEC has been shown capable of modeling glass-ceramic behavior in the glass transition region but requires further analysis below the transition region.
The NIF x-ray spectrometer calibration campaign at Omega.
Pérez, F; Kemp, G E; Regan, S P; Barrios, M A; Pino, J; Scott, H; Ayers, S; Chen, H; Emig, J; Colvin, J D; Bedzyk, M; Shoup, M J; Agliata, A; Yaakobi, B; Marshall, F J; Hamilton, R A; Jaquez, J; Farrell, M; Nikroo, A; Fournier, K B
2014-11-01
The calibration campaign of the National Ignition Facility X-ray Spectrometer (NXS) was carried out at the Omega laser facility. Spherically symmetric, laser-driven, millimeter-scale x-ray sources of K-shell and L-shell emission from various mid-Z elements were designed for the 2-18 keV energy range of the NXS. The absolute spectral brightness was measured by two calibrated spectrometers. We compare the measured performance of the target design to radiation hydrodynamics simulations.
NASA Astrophysics Data System (ADS)
Suzuki, Yoshinari; Sato, Hikaru; Hiyoshi, Katsuhiro; Furuta, Naoki
2012-10-01
A new calibration system for real-time determination of trace elements in airborne particulates was developed. Airborne particulates were directly introduced into an inductively coupled plasma mass spectrometer, and the concentrations of 15 trace elements were determined by means of an external calibration method. External standard solutions were nebulized by an ultrasonic nebulizer (USN) coupled with a desolvation system, and the resulting aerosol was introduced into the plasma. The efficiency of sample introduction via the USN was determined by two methods: (1) introduction of a Cr standard solution via the USN was compared with introduction of a Cr(CO)6 standard gas via a standard gas generator, and (2) the aerosol generated by the USN was trapped on filters and then analyzed. The Cr introduction efficiencies obtained by the two methods were the same, and the introduction efficiencies of the other elements were equal to that of Cr. Our results indicated that our calibration of the introduction efficiency worked well for the 15 elements (Ti, V, Cr, Mn, Co, Ni, Cu, Zn, As, Mo, Sn, Sb, Ba, Tl and Pb). The real-time data and the filter-collection data agreed well for elements with low-melting oxides (V, Co, As, Mo, Sb, Tl, and Pb). In contrast, the real-time data were smaller than the filter-collection data for elements with high-melting oxides (Ti, Cr, Mn, Ni, Cu, Zn, Sn, and Ba). This result implies that the oxides of these eight elements were not completely fused, vaporized, atomized, and ionized in the initial radiation zone of the inductively coupled plasma. However, quantitative real-time monitoring can be realized after correcting for the element recoveries, which can be calculated from the ratio of real-time to filter-collection data.
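Schematically (our notation, a hedged restatement of the correction described above):

$$C_{\text{corrected}} = \frac{C_{\text{real-time}}}{f}, \qquad f = \left(\frac{C_{\text{real-time}}}{C_{\text{filter}}}\right)_{\text{calibration}},$$

where $f$ is the element-specific recovery determined once from paired real-time and filter-collection measurements and then applied to subsequent real-time data.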
Boeing infrared sensor (BIRS) calibration facility
NASA Technical Reports Server (NTRS)
Hazen, John D.; Scorsone, L. V.
1990-01-01
The Boeing Infrared Sensor (BIRS) Calibration Facility represents a major capital investment in optical and infrared technology. The facility was designed and built for the calibration and testing of the new generation large aperture long wave infrared (LWIR) sensors, seekers, and related technologies. Capability exists to perform both radiometric and goniometric calibrations of large infrared sensors under simulated environmental operating conditions. The system is presently configured for endoatmospheric calibrations with a uniform background field which can be set to simulate the expected mission background levels. During calibration, the sensor under test is also exposed to expected mission temperatures and pressures within the test chamber. Capability exists to convert the facility for exoatmospheric testing. The configuration of the system is described along with hardware elements and changes made to date are addressed.
Bernard J. Wood Receives 2013 Harry H. Hess Medal: Citation
NASA Astrophysics Data System (ADS)
Hofmann, Albrecht W.
2014-01-01
As Harry Hess recognized over 50 years ago, mantle melting is the fundamental motor for planetary evolution and differentiation. Melting generates the major divisions of crust, mantle, and core. The distribution of chemical elements between solids, melts, and gaseous phases is fundamental to understanding these differentiation processes. Bernie Wood, together with Jon Blundy, has combined experimental petrology and physicochemical theory to revolutionize the understanding of the distribution of trace elements between melts and solids in the Earth. Knowledge of these distribution laws allows the reconstruction of the source compositions of the melts (deep in Earth's interior) from their abundances in volcanic rocks. Bernie's theoretical treatment relates the elastic strain of the lattice caused by the substitution of a trace element in a crystal to the ionic radius and charge of this element. This theory, and its experimental calibrations, brought order to a literature of badly scattered, rather chaotic experimental data that allowed no satisfactory quantitative modeling of melting processes in the mantle.
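The lattice-strain relation alluded to here is usually written in the Blundy and Wood (1994) form (quoted from the general literature, not from this citation):

$$D_{i} = D_{0}\exp\!\left[\frac{-4\pi E N_{A}\left(\frac{r_{0}}{2}\bigl(r_{i}-r_{0}\bigr)^{2}+\frac{1}{3}\bigl(r_{i}-r_{0}\bigr)^{3}\right)}{RT}\right],$$

where $D_{0}$ is the strain-free partition coefficient for an ion of optimal radius $r_{0}$, $r_{i}$ the ionic radius of the substituting trace element, $E$ the effective Young's modulus of the crystal site, $N_{A}$ Avogadro's number, $R$ the gas constant, and $T$ temperature.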
Hondrogiannis, Ellen; Rotta, Kathryn; Zapf, Charles M
2013-03-01
Sixteen elements found in 37 vanilla samples from Madagascar, Uganda, India, Indonesia (all Vanilla planifolia species), and Papua New Guinea (Vanilla tahitensis species) were measured by wavelength-dispersive X-ray fluorescence (WDXRF) spectroscopy for the purpose of determining the elemental concentrations to discriminate among the origins. Pellets were prepared from the samples, and elemental concentrations were calculated based on calibration curves created using four Natl. Inst. of Standards and Technology (NIST) standards. Discriminant analysis was used to successfully classify the vanilla samples by species and geographical region. Our method allows higher-throughput rapid screening of vanilla samples in less time than currently available analytical methods. Wavelength-dispersive X-ray fluorescence spectroscopy and discriminant function analysis were used to classify vanilla from different origins, resulting in a model that could potentially serve to rapidly validate these samples before purchase from a producer.
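A minimal sketch of the discriminant-analysis step (the element values, element menu, and class labels below are invented placeholders, not the paper's data):

```python
# Hypothetical classification of vanilla origin from WDXRF elemental
# concentrations using linear discriminant analysis. Data are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Rows: samples; columns: element concentrations (e.g., K, Ca, Fe).
X = np.array([[1.2, 0.8, 0.05], [1.1, 0.9, 0.04], [2.0, 0.3, 0.09],
              [2.1, 0.4, 0.08], [0.7, 1.5, 0.02], [0.8, 1.4, 0.03]])
y = ["Madagascar", "Madagascar", "Uganda", "Uganda", "PNG", "PNG"]

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[1.15, 0.85, 0.045]]))   # classify a new, unseen sample
```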
NASA Technical Reports Server (NTRS)
Oglebay, J. C.
1977-01-01
A thermal analytic model for a 30-cm engineering-model mercury-ion thruster was developed and calibrated using experimental results from tests of a pre-engineering-model 30-cm thruster. A series of tests, performed later, simulated a wide range of thermal environments on an operating 30-cm engineering-model thruster, which was instrumented to measure the temperature distribution within it. The modified analytic model is described, and analytic and experimental results are compared for various operating conditions. Based on the comparisons, it is concluded that the analytic model can be used as a preliminary design tool to predict thruster steady-state temperature distributions for stage and mission studies and to define the thermal interface between the thruster and other elements of a spacecraft.
A theoretical model of speed-dependent steering torque for rolling tyres
NASA Astrophysics Data System (ADS)
Wei, Yintao; Oertel, Christian; Liu, Yahui; Li, Xuebing
2016-04-01
It is well known that tyre steering torque is highly dependent on the tyre rolling speed. In the limiting case of a parking manoeuvre, the steering torque approaches its maximum; as the rolling speed increases, the steering torque decreases rapidly. Accurate modelling of the speed-dependent behaviour of the tyre steering torque is a key factor in calibrating the electric power steering (EPS) system and tuning the handling performance of vehicles. However, no satisfactory theoretical model can be found in the existing literature to explain this phenomenon. This paper proposes a new theoretical framework to model this important tyre behaviour, which includes three key factors: (1) tyre three-dimensional transient rolling kinematics with turn-slip; (2) dynamic force and moment generation; and (3) the mixed Lagrange-Euler method for solving the contact deformation. A nonlinear finite-element code has been developed to implement the proposed approach. It is found that the main mechanism for the speed-dependent steering torque is turn-slip-related kinematics. This paper provides a theory to explain the complex mechanism of tyre steering torque generation, which helps in understanding the speed-dependent tyre steering torque, tyre road feel and EPS calibration.
NASA Astrophysics Data System (ADS)
Hamim, Salah Uddin Ahmed
Nanoindentation involves probing a hard diamond tip into a material, while the load and the displacement experienced by the tip are recorded continuously. These load-displacement data are a direct function of the material's innate stress-strain behavior; thus, it is theoretically possible to extract mechanical properties of a material through nanoindentation. However, due to the various nonlinearities associated with nanoindentation, interpreting load-displacement data in terms of material properties is difficult. Although simple elastic behavior can be characterized easily, a method to characterize complicated material behavior such as nonlinear viscoelasticity is still lacking. In this study, a nanoindentation-based material characterization technique is developed to characterize soft materials exhibiting nonlinear viscoelasticity. The nanoindentation experiment was modeled in finite element analysis software (ABAQUS), where the nonlinear viscoelastic behavior was incorporated using a user-defined subroutine (UMAT). The model parameters were calibrated using a process called inverse analysis; in this study, a surrogate-model-based approach was used. The different factors affecting surrogate-model performance are analyzed in order to optimize performance with respect to computational cost.
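A hedged sketch of surrogate-model-based inverse analysis (the "FE simulation" below is a cheap stand-in function, the surrogate is a deliberately simple interpolator, and all parameter names and values are assumed; the actual study used ABAQUS runs and its own surrogate form):

```python
# Hypothetical surrogate-based inverse analysis: a cheap surrogate replaces
# the expensive FE model, and material parameters are found by minimizing
# the misfit to a measured load-displacement curve. All data are synthetic.
import numpy as np
from scipy.optimize import minimize

def fe_simulation(params):
    """Stand-in for an expensive ABAQUS run: response curve from 2 parameters."""
    e0, tau = params
    t = np.linspace(0.1, 1.0, 20)
    return e0 * (1.0 - 0.5 * np.exp(-t / tau))

# Small design of "FE runs" from which the surrogate is built.
designs = [(e, tau) for e in (0.5, 1.0, 1.5) for tau in (0.2, 0.5, 0.8)]
responses = np.array([fe_simulation(d) for d in designs])

def surrogate(params):
    # Inverse-distance interpolation over the design points (a simple choice).
    d = np.array([np.hypot(params[0] - e, params[1] - tau) for e, tau in designs])
    w = 1.0 / (d + 1e-9)
    return (w[:, None] * responses).sum(axis=0) / w.sum()

measured = fe_simulation((1.2, 0.4))             # synthetic "experiment"
misfit = lambda p: np.sum((surrogate(p) - measured) ** 2)
result = minimize(misfit, x0=[1.0, 0.5], bounds=[(0.5, 1.5), (0.2, 0.8)])
print("calibrated parameters:", result.x)
```

The payoff is that the optimizer queries only the cheap surrogate, not the FE solver, which is what makes the calibration computationally tractable.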
A calibration hierarchy for risk models was defined: from utopia to empirical data.
Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W
2016-06-01
Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe the implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equal the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects would have to be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive, because it stimulates the development of overly complex models. Model development and external validation should focus on moderate calibration.
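As an illustration of what a check for "moderate calibration" looks like in practice, the following sketch bins predicted risks into deciles and compares the mean predicted risk with the observed event rate in each bin. The data are synthetic and drawn from a perfectly calibrated model, so the two columns should agree up to sampling noise; real validation data would replace them.

```python
# Minimal sketch: checking "moderate calibration" on a validation set by
# comparing observed event rates with mean predicted risks per risk decile.
import numpy as np

rng = np.random.default_rng(0)
p_pred = rng.uniform(0.01, 0.99, 5000)   # predicted risks
y = rng.binomial(1, p_pred)              # outcomes from a calibrated model

bins = np.quantile(p_pred, np.linspace(0, 1, 11))   # decile edges
idx = np.digitize(p_pred, bins[1:-1])               # bin index 0..9

for b in range(10):
    mask = idx == b
    print(f"decile {b}: mean predicted {p_pred[mask].mean():.3f}, "
          f"observed rate {y[mask].mean():.3f}")
```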
NASA Astrophysics Data System (ADS)
Lian, Ji-Jian; Li, Qin; Deng, Xi-Fei; Zhao, Gao-Feng; Chen, Zu-Yu
2018-02-01
In this work, toppling failure of a jointed rock slope is studied using the distinct lattice spring model (DLSM). The gravity increase method (GIM) with a sub-step loading scheme is implemented in the DLSM to mimic the loading conditions of a centrifuge test. A classical centrifuge test of a jointed rock slope, previously simulated by the finite element method and the discrete element model, is simulated using the GIM-DLSM. Reasonable boundary conditions are obtained through detailed comparisons of existing numerical solutions with the experimental records. With the calibrated boundary conditions, the influences of the tensile strength of the rock block, the cohesion and friction angles of the joints, and the spacing and inclination angles of the joints on flexural toppling failure of the jointed rock slope are investigated using the GIM-DLSM, providing insight into evaluating the state of flexural toppling failure for a jointed slope and effectively preventing it.
2006-05-01
[Abstract garbled during extraction; recoverable fragments follow.] Histogram analysis of Hounsfield units was performed for the image series CT10, CT20, and CT30, with a maximum difference of 250 Hounsfield units affecting only 0.01% of voxels, in connection with a finite-element model of fluid flow. CBCT imaging can cause Hounsfield unit calibration problems; while this does not seem to influence the image registration, the use of CBCT for dose calculation warrants caution.
Development and analysis of closed cycle circulator elements. Final report, 31 Jul 1978-31 May 1980
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shih, C.C.; Karr, G.R.; Perkins, J.F.
1980-05-01
A series of experiments with various flow rates of laser gas and coolants under several levels of energy input was conducted on the Army Closed Cycle Circulator for pulsed EDL to collect sufficient data for flow calibration and coefficient determination. The theoretical models depicting the function of the heat exchangers in maintaining thermal balance in the flow through steady and transient states are verified through comparison with the results of the experimental analysis.
Shake Test Results and Dynamic Calibration Efforts for the Large Rotor Test Apparatus
NASA Technical Reports Server (NTRS)
Russell, Carl R.
2014-01-01
Prior to the full-scale wind tunnel test of the UH-60A Airloads rotor, a shake test was completed on the Large Rotor Test Apparatus. The goal of the shake test was to characterize the oscillatory response of the test rig and provide a dynamic calibration of the balance to accurately measure vibratory hub loads. This paper provides a summary of the shake test results, including balance, shaft bending gauge, and accelerometer measurements. Sensitivities to hub mass and angle of attack were investigated during the shake test. Hub mass was found to have an important impact on the vibratory forces and moments measured at the balance, especially near the UH-60A 4/rev frequency. Comparisons were made between the accelerometer data and an existing finite-element model, showing agreement on mode shapes but not on natural frequencies. Finally, the results of a simple dynamic calibration are presented, showing the effects of changes in hub mass. The results show that the shake test data can be used to correct in-plane loads measurements up to 10 Hz and normal loads up to 30 Hz.
Improving a regional model using reduced complexity and parameter estimation
Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.
2002-01-01
The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.
Parametric Design of Injectors for LDI-3 Combustors
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Mongia, Hukam; Lee, Phil
2015-01-01
Application of a partially calibrated National Combustion Code (NCC) for providing guidance in the design of the 3rd-generation Lean-Direct Injection (LDI) multi-element combustion configuration (LDI-3) is summarized. NCC was used to perform non-reacting and two-phase reacting flow computations on several LDI-3 injector configurations in single-element and five-element injector arrays. All computations were performed with a consistent approach for mesh generation, turbulence, spray simulations, ignition, and chemical kinetics modeling. Both qualitative and quantitative assessment of the computed flowfield characteristics of the several design options led to selection of an optimal LDI-3 injector design that met all the requirements, including effective area, aerodynamics, and fuel-air mixing criteria. Computed LDI-3 emissions (namely NOx, CO, and UHC) will be compared with experimental data from the prior-generation LDI-2 combustor at relevant engine cycle conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Jordana R.; Gill, Gary A.; Kuo, Li-Jung
2016-04-20
Trace element determinations in seawater by inductively coupled plasma mass spectrometry are analytically challenging due to the typically very low concentrations of the trace elements and the potential interference of the salt matrix. In this study, we compared seven analytical approaches for uranium analysis by inductively coupled plasma mass spectrometry (ICP-MS) using Sequim Bay seawater samples and three seawater certified reference materials (SLEW-3, CASS-5, and NASS-6). The methods evaluated include: direct analysis, Fe/Pd reductive precipitation, standard addition calibration, online automated dilution using an external calibration with and without matrix matching, and online automated pre-concentration. The method that produced the most accurate results was standard addition calibration, recovering uranium from a Sequim Bay seawater sample at 101 ± 1.2%. The online pre-concentration method and the automated dilution with matrix-matched calibration also performed well. The two least effective methods were direct analysis and Fe/Pd reductive precipitation using sodium borohydride.
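For readers unfamiliar with standard addition, the sketch below shows why it sidesteps matrix effects: known spikes are added to aliquots of the sample itself, so the calibration line is built in the sample's own matrix, and the unknown concentration falls out as the magnitude of the x-intercept. The numbers are illustrative, not the study's measurements.

```python
# Minimal sketch of standard-addition calibration (assumed example data):
# signal vs. added concentration is fit with a line; the original sample
# concentration is intercept/slope, i.e., the x-intercept magnitude.
import numpy as np

added = np.array([0.0, 1.0, 2.0, 4.0])        # ug/L of U added to aliquots
signal = np.array([3.05, 4.02, 5.01, 6.98])   # ICP-MS response (arb. units)

slope, intercept = np.polyfit(added, signal, 1)
c_sample = intercept / slope                   # x-intercept magnitude
print(f"estimated sample concentration: {c_sample:.2f} ug/L")
```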
Parallel computing for automated model calibration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.
2002-07-29
Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, the magnitude and timing of the streamflow peak). An automated calibration process that allows real-time updating of data and models, freeing scientists to focus on improving the models themselves, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null-cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration was done against individual models for each data set; the individual model runs are relatively fast, ranging from seconds to minutes, and the process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed, cross-platform computing environment. They allow incorporation of 'smart' calibration parameter generation (using artificial intelligence processing techniques). Null-cycle computing similar to SETI@home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
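The "no inter-process communication" property makes such sweeps embarrassingly parallel. The sketch below runs a stand-in model function over many parameter sets with a process pool, keeping only a summary statistic per run, in the spirit of the workflow described above; the model function and parameter names are hypothetical.

```python
# Minimal sketch of an embarrassingly parallel calibration sweep: each
# worker runs one parameter set independently and returns an objective
# value, with no communication between runs.
import numpy as np
from multiprocessing import Pool

def run_model(params):
    k, s = params                       # e.g., conductivity, storage (toy)
    observed = 10.0                     # stand-in observation
    simulated = k * np.exp(-s)          # stand-in for a real model run
    return params, (simulated - observed) ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    trials = list(zip(rng.uniform(5, 20, 1000), rng.uniform(0.0, 1.0, 1000)))
    with Pool() as pool:
        results = pool.map(run_model, trials)
    best = min(results, key=lambda r: r[1])
    print("best parameters:", best[0], "objective:", best[1])
```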
Detailed in situ laser calibration of the infrared imaging video bolometer for the JT-60U tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parchamy, H.; Peterson, B. J.; Konoshima, S.
2006-10-15
The infrared imaging video bolometer (IRVB) in JT-60U includes a single graphite-coated gold foil with an effective area of 9x7 cm^2 and a thickness of 2.5 μm. The thermal images of the foil resulting from the plasma radiation are provided by an IR camera. The calibration technique of the IRVB gives confidence in the absolute levels of the measured values of the plasma radiation. The in situ calibration is carried out in order to obtain local foil properties such as the thermal diffusivity κ and the product k·t_f of the thermal conductivity k and the thickness t_f of the foil. These quantities are necessary for solving the two-dimensional heat diffusion equation of the foil, which is used in the experiments. These parameters are determined by comparing the measured temperature profiles (for k·t_f) and their decays (for κ) with the corresponding results of a finite element model using the measured HeNe laser power profile as a known radiation power source. The infrared camera (Indigo/Omega) is calibrated by fitting the temperature rise of a heated plate to the resulting camera data using the Stefan-Boltzmann law.
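A sketch of the forward problem behind this calibration may help: for a thin foil, the heat balance can be arranged so that only the two calibrated quantities appear, ∂T/∂t = κ(∇²T + q/(k·t_f)), with q the absorbed laser power density. The explicit finite-difference toy below marches that equation in time; the grid, time step, source strength, and material values are illustrative assumptions, not the JT-60U foil's properties.

```python
# Minimal sketch of the thin-foil heat equation used in such calibrations:
# dT/dt = kappa * (laplacian(T) + q / (k*t_f)). Explicit finite differences;
# all numbers are illustrative, not measured foil parameters.
import numpy as np

kappa, k_tf = 8e-5, 8e-4          # m^2/s and W/K (assumed values)
n, h, dt = 61, 1.5e-3, 1e-3       # grid points, spacing [m], time step [s]
T = np.zeros((n, n))              # temperature rise above ambient [K]
q = np.zeros((n, n))
q[n // 2, n // 2] = 5e3           # localized "laser" heating [W/m^2]

for _ in range(2000):             # march the transient for 2 s of foil time
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / h**2
    T += dt * kappa * (lap + q / k_tf)
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 0.0   # frame held at ambient

print(f"peak temperature rise after 2 s: {T.max():.2f} K")
```

Comparing such simulated profiles and decays against the camera images is what pins down k·t_f and κ respectively.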
ERIC Educational Resources Information Center
Arnold, Randy J.; Arndt, Brett; Blaser, Emilia; Blosser, Chris; Caulton, Dana; Chung, Won Sog; Fiorenza, Garrett; Heath, Wyatt; Jacobs, Alex; Kahng, Eunice; Koh, Eun; Le, Thao; Mandla, Kyle; McCory, Chelsey; Newman, Laura; Pithadia, Amit; Reckelhoff, Anna; Rheinhardt, Joseph; Skljarevski, Sonja; Stuart, Jordyn; Taylor, Cassie; Thomas, Scott; Tse, Kyle; Wall, Rachel; Warkentien, Chad
2011-01-01
A multivitamin tablet and liquid are analyzed for the elements calcium, magnesium, iron, zinc, copper, and manganese using atomic absorption spectrometry. Linear calibration and standard addition are used for all elements except calcium, allowing for an estimate of the matrix effects encountered for this complex sample. Sample preparation using…
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1979-01-01
In order to establish transmitter power and receiver sensitivity levels at frequencies above 10 GHz, the designers of earth-satellite telecommunication systems are interested in cumulative rain fade statistics at variable path orientations, elevation angles, climatological regions, and frequencies. They are also interested in establishing optimum space diversity performance parameters. This work examines the many elements involved in employing radars at a single non-attenuating frequency to arrive at the desired information. The elements examined include radar techniques and requirements, phenomenological assumptions, path attenuation formulations and procedures, as well as error budgeting and calibration analysis. Included are the pertinent results of previous investigators who have used radar for rain attenuation modeling. Suggestions are made for improving present methods.
Laser ablation ICP-MS applications using the timescales of geologic and biologic processes
NASA Astrophysics Data System (ADS)
Ridley, W. I.
2003-04-01
Geochemists commonly examine geologic processes on timescales of 10^4-10^9 years, and accept that age relations, e.g., chemical zoning in minerals, can often only be measured in a relative sense. The progression of a geologic process that involves geochemical changes may be assessed using trace element microbeam techniques, because the textural, and therefore spatial, context of the analytical scheme can be preserved. However, quantification requires appropriate calibration standards. Laser ablation ICP-MS (LA-ICP-MS) is proving particularly useful now that appropriate standards are becoming available. For instance, trace element zoning patterns in primary sulfides (e.g., pyrite, sphalerite, chalcopyrite, galena) and secondary phases can be inverted to examine relative changes in fluid composition during cycles of hydrothermal mineralization. In turn, such information provides insights into fluid sources, migration pathways, and depositional processes. These studies have only become possible with the development of appropriate sulfide calibration standards. Another example, made possible by the development of appropriate silicate calibration standards, is the quantitative spatial mapping of REE variations in amphibolite-grade garnets. The recognition that the trace and major elements are decoupled provides a better understanding of the various sources of elements during metamorphic re-equilibration. There is also a growing realization that LA-ICP-MS has potential in biochemical studies, and geochemists have begun to turn their attention in this direction, working closely with biologists. Unlike many geologic processes, the timescales of biologic processes are measured in years to centuries and are frequently amenable to absolute dating. Examples where LA-ICP-MS has been applied include annual trace metal variations in tree rings, corals, teeth, bones, bird feathers, and various animal vibrissae (sea lion, walrus, wolf). The aim of such studies is to correlate trace element variations with changes in environmental variables, and they are proving informative in climate change and habitat management. Again, such variations have been quantified with the availability of appropriate organic, carbonate, and phosphate calibration standards.
Dunning, Charles P.; Mueller, Gregory D.; Juckem, Paul F.
2008-01-01
An analytic element ground-water-flow model was constructed to help understand the ground-water-flow system in the vicinity of the Ho-Chunk Nation communities of Indian Mission and Sand Pillow in Jackson County, Wisconsin. Data from interpretive reports, well-drillers' construction reports, and an exploratory augering program in 2003 indicate that sand and gravel of varying thickness (0-150 feet [ft]) and porous sandstone make up a composite aquifer that overlies Precambrian crystalline rock. The geometric mean values for horizontal hydraulic conductivity were estimated from specific-capacity data to be 61.3 feet per day (ft/d) for sand and gravel, 6.6 ft/d for sandstone, and 12.0 ft/d for the composite aquifer. A ground-water flow model was constructed, the near field of which encompassed the Levis and Morrison Creeks Watershed. The flow model was coupled to the parameter-estimation program UCODE to obtain a best fit between simulated and measured values of ground-water levels and estimated Q50 flow duration (base flow). Calibration of the model with UCODE provided a ground-water recharge rate of 9 inches per year and a horizontal hydraulic conductivity of 13 ft/d for the composite aquifer. Using these calibrated parameter values, simulated heads from the model were on average within 5 ft of the measured water levels. In addition, these parameter values provided an acceptable base-flow calibration for Hay, Dickey, and Levis Creeks; the calibration was particularly close for Levis Creek, which was the most frequently measured stream in the study area. The calibrated model was used to simulate ground-water levels and to determine the direction of ground-water flow in the vicinity of the Indian Mission and Sand Pillow communities. Backward particle tracking was conducted for Sand Pillow production wells under two pumping simulations to determine their 20-year contributing areas. In the first simulation, new production wells 6, 7, and 8 were each pumped at 50 gallons per minute (gal/min). In the second simulation, new production wells 6, 7, and 8 and existing production well 5 were each pumped at 50 gal/min. The second simulation demonstrated interference between the existing production well 5 and the new production wells when all were pumping at 50 gal/min.
NASA Astrophysics Data System (ADS)
Zheng, Lijuan; Cao, Fan; Xiu, Junshan; Bai, Xueshi; Motto-Ros, Vincent; Gilon, Nicole; Zeng, Heping; Yu, Jin
2014-09-01
Laser-induced breakdown spectroscopy (LIBS) provides a technique to directly determine metals in viscous liquids, and especially in lubricating oils. A specific laser ablation configuration, a thin layer of oil applied on the surface of a pure aluminum target, was used to evaluate the analytical figures of merit of LIBS for elemental analysis of lubricating oils. The analyzed oils included a certified 75 cSt blank mineral oil, 8 virgin lubricating oils (synthetic, semi-synthetic, or mineral, from 2 different manufacturers), 5 used oils (corresponding to 5 of the 8 virgin oils), and a cooking oil. The certified blank oil and 4 virgin lubricating oils were spiked with metallo-organic standards to obtain laboratory reference samples with different oil matrices. We first established calibration curves for 3 elements (Fe, Cr, Ni) with the 5 sets of laboratory reference samples in order to evaluate the matrix effect through comparison among the different oils. Our results show that generalized calibration curves can be built for the 3 analyzed elements by merging the measured line intensities of the 5 sets of spiked oil samples. Such merged calibration curves with good correlation of the merged data are only possible if no significant matrix effect affects the measurements of the different oils. In the second step, we spiked the remaining 4 virgin oils and the cooking oil with Fe, Cr, and Ni. The accuracy and precision of the concentration determination in these prepared oils were then evaluated using the generalized calibration curves. Finally, the concentrations of metallic elements in the 5 used lubricating oils were determined.
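The merging test can be made concrete in a few lines: fit a calibration line per oil set, check that the slopes agree within their scatter, and then fit one generalized curve to the pooled data. The concentrations and intensities below are invented for illustration, not the paper's measurements.

```python
# Minimal sketch (illustrative numbers): compare per-oil calibration slopes,
# then fit a single generalized curve to the pooled spiked-oil data.
import numpy as np

conc = np.array([0, 10, 30, 60, 100.0])             # spiked concentration (ppm)
oils = {                                            # line intensity per oil set
    "mineral":   np.array([0.8, 10.5, 30.9, 61.1, 99.5]),
    "synthetic": np.array([1.1, 11.0, 29.7, 60.4, 101.2]),
}
for name, y in oils.items():
    slope, _ = np.polyfit(conc, y, 1)
    print(f"{name}: slope {slope:.3f}")             # slopes should agree

pooled_x = np.tile(conc, len(oils))
pooled_y = np.concatenate(list(oils.values()))
m, b = np.polyfit(pooled_x, pooled_y, 1)
r = np.corrcoef(pooled_x, pooled_y)[0, 1]
print(f"merged curve: y = {m:.3f}x + {b:.3f}, r = {r:.4f}")
```

A high correlation for the merged data is the signal that matrix effects are negligible across oils, which is what justifies a single generalized curve.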
NASA Technical Reports Server (NTRS)
Contreras, Michael T.; Peng, Chia-Yen; Wang, Dongdong; Chen, Jiun-Shyan
2012-01-01
A wheel experiencing sinkage and slippage events poses a high risk to rover missions, as evidenced by recent mobility challenges on the Mars Exploration Rover (MER) project. Because several factors contribute to wheel sinkage and slippage conditions, such as soil composition, large-deformation soil behavior, wheel geometry, nonlinear contact forces, and terrain irregularity, there are significant benefits to modeling these events to a sufficient degree of complexity. For the purposes of modeling wheel sinkage and slippage at an engineering scale, meshfree and finite element approaches enable simulations that capture sufficient detail of wheel-soil interaction while remaining computationally feasible. This study demonstrates some of the large-deformation modeling capability of meshfree methods and the realistic solutions obtained by accounting for the soil material properties. A benchmark wheel-soil interaction problem is developed and analyzed using a specific class of meshfree methods called the Reproducing Kernel Particle Method (RKPM). The benchmark problem is also analyzed using a commercially available finite element approach with Lagrangian meshing for comparison. RKPM results are comparable to the classical pressure-sinkage terramechanics relationships proposed by Bekker-Wong. Pending experimental calibration by future work, the meshfree modeling technique will be a viable simulation tool for trade studies assisting rover wheel design.
X-ray fluorescence analysis of K, Al and trace elements in chloroaluminate melts
NASA Astrophysics Data System (ADS)
Shibitko, A. O.; Abramov, A. V.; Denisov, E. I.; Lisienko, D. G.; Rebrin, O. I.; Bunkov, G. M.; Rychkov, V. N.
2017-09-01
Energy-dispersive x-ray fluorescence spectrometry was applied to the quantitative determination of K, Al, Cr, Fe, and Ni in chloroaluminate melts. To implement the external standard calibration method, an unconventional sample preparation procedure was suggested: a mixture of metal chlorides was melted in a quartz cell at 350-450 °C under a slightly excessive pressure of purified argon (99.999%). The composition of the calibration samples (CSs) prepared was controlled by means of inductively coupled plasma atomic emission spectrometry (ICP-AES). The optimal conditions for analytical line excitation were determined, and the calibration curves of the analytes were obtained. Matrix effects in the synthesized samples had some influence on the analytical signal of certain elements, and the CSs must be stored in an inert gas atmosphere. The precision, accuracy, and reproducibility of the quantitative chemical analysis were computed.
NASA Astrophysics Data System (ADS)
Donado-Garzon, L. D.; Pardo, Y.
2013-12-01
Fractured media are very heterogeneous systems in which complex physical and chemical processes must be modeled. One possible approach to conceptualizing this type of massif is the Discrete Fracture Network (DFN). Donado et al. modeled flow and transport in a granitic batholith based on this approach and found good fits with hydraulic and tracer tests, but the computational cost was excessive due to the enormous number of elements to model. We present in this work a methodology based on percolation theory for reducing the number of elements and, in consequence, the bandwidth of the conductance matrix and the execution time of each network. A DFN is an excellent representation of the full set of fractures in the medium, but not all fractures are part of the conductive network. Percolation theory is used to identify which nodes or fractures are non-conductive, based on the occupation probability or percolation threshold. In a fractured system, connectivity determines the flow pattern in the rock mass: the fluid is driven through connection paths formed by the fractures when the permeability of the rock is negligible compared with that of the fractures. In a population of distributed fractures, any fracture that has no intersection with a connected fracture does not contribute to the flow field; the algorithm also allows us to remove dead-end elements even though they are water-conducting, and hence to refine the backbone of the network further. We used 100 different DFN realizations, optimized in this study using percolation theory. In each network we calibrated hydrodynamic parameters, hydraulic conductivity and specific storage coefficient, for each of the five families of fractures, yielding a total of 10 parameters to estimate per realization. Since the distribution of fracture orientations changes the value of the percolation threshold, but not the universal laws of classical percolation theory, the latter remains applicable to such networks. Under these conditions, percolation theory permitted us to reduce the number of elements forming the clusters of the 100 DFNs by 90% on average, while preserving the so-called backbone. In this way, calibration runs on these networks went from several hours to about a second, with much better results.
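A minimal sketch of the pruning idea follows, with a hypothetical fracture graph: fractures are nodes, intersections are edges, only the cluster connecting the inflow and outflow boundaries can conduct, and degree-1 dead ends are stripped iteratively because they hold water but carry no flow. A real DFN would supply the edge list from geometric intersection tests.

```python
# Minimal sketch: extract the flowing backbone of a fracture network.
from collections import defaultdict, deque

def backbone(edges, inflow, outflow):
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)

    def reach(sources):                     # BFS from a boundary set
        seen, queue = set(sources), deque(sources)
        while queue:
            for nb in adj[queue.popleft()] - seen:
                seen.add(nb); queue.append(nb)
        return seen

    keep = reach(inflow) & reach(outflow)   # spanning cluster only
    changed = True
    while changed:                          # strip dead ends iteratively
        changed = False
        for node in list(keep):
            if node not in inflow and node not in outflow \
                    and len(adj[node] & keep) < 2:
                keep.discard(node); changed = True
    return keep

edges = [(0, 1), (1, 2), (2, 3), (1, 4), (5, 6)]    # fracture intersections
print(backbone(edges, inflow={0}, outflow={3}))      # {0, 1, 2, 3}; 4-6 pruned
```

Only the surviving elements need hydraulic conductivity and storage parameters, which is what shrinks the conductance matrix and the calibration time.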
Characterisation methods for the hyperspectral sensor HySpex at DLR's calibration home base
NASA Astrophysics Data System (ADS)
Baumgartner, Andreas; Gege, Peter; Köhler, Claas; Lenhard, Karim; Schwarzmaier, Thomas
2012-09-01
The German Aerospace Center's (DLR) Remote Sensing Technology Institute (IMF) operates a laboratory for the characterisation of imaging spectrometers. Originally designed as the Calibration Home Base (CHB) for the imaging spectrometer APEX, the laboratory can be used to characterise nearly any airborne hyperspectral system. Characterisation methods are demonstrated using HySpex, an airborne imaging spectrometer system from Norsk Elektro Optikk AS (NEO). Consisting of two separate devices (VNIR-1600 and SWIR-320me), the setup covers the spectral range from 400 nm to 2500 nm. Both airborne sensors have been characterised at NEO, including measurement of spectral and spatial resolution and misregistration, polarisation sensitivity, signal-to-noise ratios, and radiometric response. The same parameters have been examined at the CHB and were used to validate the NEO measurements. Additionally, the line spread functions (LSF) in the across- and along-track directions and the spectral response functions (SRF) for certain detector pixels were measured. The high degree of lab automation allows the determination of the SRFs and LSFs for a large number of sampling points. Even so, measuring these functions for every detector element would be too time-consuming, as typical detectors have on the order of 10^5 elements; with enough sampling points, however, it is possible to interpolate the attributes of the remaining pixels. Knowing these properties for every detector element allows the quantification of spectral and spatial misregistration (smile and keystone) and a better calibration of airborne data. Further laboratory measurements are used to validate the models for the spectral and spatial properties of the imaging spectrometers. Compared to the future German spaceborne hyperspectral imager EnMAP, the HySpex sensors have the same or higher spectral and spatial resolution. Therefore, airborne data will be used to prepare for and validate the spaceborne system's data.
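The interpolation step can be illustrated in a few lines: measure the SRF centre wavelength at a subset of across-track pixels, fit a smooth curve, and evaluate it at every pixel to map the smile. The sampled centres below are invented for illustration, not CHB measurements.

```python
# Minimal sketch: interpolate SRF centre wavelengths measured at a few
# pixels to all detector elements, then map the spectral smile.
import numpy as np

sampled_px = np.array([0, 200, 400, 600, 800, 1000, 1200, 1400, 1599])
srf_centre = np.array([550.02, 550.10, 550.15, 550.17, 550.16,
                       550.12, 550.06, 549.97, 549.85])   # nm, one band

coef = np.polyfit(sampled_px, srf_centre, 2)    # smooth smile curve
all_px = np.arange(1600)
centres = np.polyval(coef, all_px)
smile = centres - centres[all_px.size // 2]     # shift vs. centre pixel
print(f"max smile across track: {np.abs(smile).max():.3f} nm")
```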
Poole, Sandra; Vis, Marc; Knight, Rodney; Seibert, Jan
2017-01-01
Ecologically relevant streamflow characteristics (SFCs) of ungauged catchments are often estimated from simulated runoff of hydrologic models that were originally calibrated on gauged catchments. However, SFC estimates of the gauged donor catchments and subsequently the ungauged catchments can be substantially uncertain when models are calibrated using traditional approaches based on optimization of statistical performance metrics (e.g., Nash–Sutcliffe model efficiency). An improved calibration strategy for gauged catchments is therefore crucial to help reduce the uncertainties of estimated SFCs for ungauged catchments. The aim of this study was to improve SFC estimates from modeled runoff time series in gauged catchments by explicitly including one or several SFCs in the calibration process. Different types of objective functions were defined consisting of the Nash–Sutcliffe model efficiency, single SFCs, or combinations thereof. We calibrated a bucket-type runoff model (HBV – Hydrologiska Byråns Vattenavdelning – model) for 25 catchments in the Tennessee River basin and evaluated the proposed calibration approach on 13 ecologically relevant SFCs representing major flow regime components and different flow conditions. While the model generally tended to underestimate the tested SFCs related to mean and high-flow conditions, SFCs related to low flow were generally overestimated. The highest estimation accuracies were achieved by a SFC-specific model calibration. Estimates of SFCs not included in the calibration process were of similar quality when comparing a multi-SFC calibration approach to a traditional model efficiency calibration. For practical applications, this implies that SFCs should preferably be estimated from targeted runoff model calibration, and modeled estimates need to be carefully interpreted.
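One way to picture the proposed calibration strategy is as an objective function that mixes a statistical metric with SFC errors. The sketch below combines Nash-Sutcliffe efficiency with the relative error of one example low-flow SFC; the weight, the choice of SFC, and the synthetic flows are assumptions for illustration, not the study's configuration.

```python
# Minimal sketch: a combined NSE + SFC objective for runoff model calibration.
import numpy as np

def nse(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def low_flow_sfc(q):                 # example SFC: 5th-percentile (low) flow
    return np.percentile(q, 5)

def objective(obs, sim, w=0.5):      # smaller is better
    sfc_err = abs(low_flow_sfc(sim) - low_flow_sfc(obs)) / low_flow_sfc(obs)
    return w * (1.0 - nse(obs, sim)) + (1.0 - w) * sfc_err

rng = np.random.default_rng(2)
obs = np.abs(rng.gamma(2.0, 5.0, 365))     # synthetic daily flows
sim = obs * rng.normal(1.0, 0.1, 365)      # a fake model run
print(f"combined objective: {objective(obs, sim):.3f}")
```

Setting w = 1 recovers a traditional efficiency-only calibration, while smaller weights pull the optimizer toward reproducing the targeted SFC.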
NASA Technical Reports Server (NTRS)
Mahan, J. R.; Tira, N. E.; Lee, Robert B., III; Keynton, R. J.
1989-01-01
The Earth Radiation Budget Experiment consists of an array of radiometric instruments placed in earth orbit by the National Aeronautics and Space Administration to monitor the longwave and visible components of the earth's radiation budget. Presented is a dynamic electrothermal model of the active cavity radiometer used to measure the earth's total radiative exitance. Radiative exchange is modeled using the Monte Carlo method and transient conduction is treated using the finite element method. Also included is the feedback circuit which controls electrical substitution heating of the cavity. The model is shown to accurately predict the dynamic response of the instrument during solar calibration.
NASA Technical Reports Server (NTRS)
Pereira, J. M.; Revilock, D. M.
2004-01-01
Under the Federal Aviation Administration's Airworthiness Assurance Center of Excellence and the Aircraft Catastrophic Failure Prevention Program, National Aeronautics and Space Administration Glenn Research Center collaborated with Arizona State University, Honeywell Engines, Systems and Services, and SRI International to develop improved computational models for designing fabric-based engine containment systems. In the study described in this report, ballistic impact tests were conducted on layered dry fabric rings to provide impact response data for calibrating and verifying the improved numerical models. This report provides data on projectile velocity, impact and residual energy, and fabric deformation for a number of different test conditions.
NASA Astrophysics Data System (ADS)
Arantes Camargo, Livia; Marques Júnior, José; Reynaldo Ferracciú Alleoni, Luís; Tadeu Pereira, Gener; De Bortoli Teixeira, Daniel; Santos Rabelo de Souza Bahia, Angélica
2017-04-01
Environmental impact assessments may be assisted by the spatial characterization of potentially toxic elements (PTEs). Diffuse reflectance spectroscopy (DRS) and X-ray fluorescence spectroscopy (XRF) are rapid, non-destructive, low-cost prediction tools for the simultaneous characterization of different soil attributes. Although low concentrations of PTEs might preclude the observation of spectral features, their contents can be predicted spectroscopically by exploiting the relationship between the PTEs and soil attributes that do have spectral features. This study aimed to evaluate, in three geomorphic surfaces of Oxisols, the capacity for predicting PTEs (Ba, Co, and Ni) and their spatial variability by means of DRS (visible and near-infrared, VIS+NIR; mid-infrared, MIR) and XRF. Soil samples were collected from the three geomorphic surfaces, analyzed for chemical, physical, and mineralogical properties, and then measured on DRS and XRF equipment. PTE prediction models were calibrated using partial least squares regression (PLSR). PTE spatial distribution maps were built with geostatistics, using the values calculated by the most accurate calibrated models. Prediction models were satisfactorily calibrated using MIR DRS for Ba and Co (residual prediction deviation, RPD > 3.0), VIS DRS for Ni (RPD > 2.0), and XRF for all the studied PTEs (RPD > 1.8). The DRS- and XRF-predicted values allowed the characterization and understanding of the spatial variability of the studied PTEs.
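As an illustration of the calibration-plus-RPD workflow, the sketch below fits a PLSR model on synthetic "spectra" and scores it with the residual prediction deviation. scikit-learn's PLSRegression is a stand-in for whatever chemometrics tooling was actually used, and all data are simulated.

```python
# Minimal sketch: PLSR calibration scored with the residual prediction
# deviation (RPD = std of reference values / RMSE of predictions).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 200))                    # 200-band "spectra"
y = 3 * X[:, 10] - 2 * X[:, 50] + rng.normal(0, 0.3, 120)   # "Ba content"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()

rmse = np.sqrt(np.mean((y_te - pred) ** 2))
rpd = y_te.std(ddof=1) / rmse          # RPD thresholds like >2 or >3 gauge quality
print(f"RMSE = {rmse:.3f}, RPD = {rpd:.2f}")
```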
Multistate modelling extended by behavioural rules: An application to migration.
Klabunde, Anna; Zinn, Sabine; Willekens, Frans; Leuchter, Matthias
2017-10-01
We propose to extend demographic multistate models by adding a behavioural element: behavioural rules explain intentions and thus transitions. Our framework is inspired by the Theory of Planned Behaviour. We exemplify our approach with a model of migration from Senegal to France. Model parameters are determined using empirical data where available; parameters for which no empirical correspondence exists are determined by calibration. Age- and period-specific migration rates are used for model validation. Our approach adds to the toolkit of demographic projection by allowing for shocks and social influence, which alter behaviour in non-linear ways, while remaining within the general framework of multistate modelling. Our simulations show that higher income growth in Senegal leads to higher emigration rates in the medium term, while a decrease in fertility yields lower emigration rates.
A Comparison of Two Balance Calibration Model Building Methods
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Ulbrich, Norbert
2007-01-01
Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
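To make the contrast between the two building methods concrete, here is a minimal sketch of the stepwise-regression style: candidate terms (linear and quadratic load combinations) are added greedily while they improve adjusted R². A candidate-search method would instead score whole candidate models against one another. The loads, responses, and true model below are simulated, not balance data.

```python
# Minimal sketch of forward stepwise model building on balance-like data.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(4)
loads = rng.uniform(-1, 1, (200, 3))          # three simulated load components
resp = 2 * loads[:, 0] - 0.5 * loads[:, 1] * loads[:, 2] \
       + rng.normal(0, 0.01, 200)             # invented true response

terms = [(i,) for i in range(3)] + \
        list(combinations_with_replacement(range(3), 2))   # linear + quadratic
cols = {t: np.prod(loads[:, t], axis=1) for t in terms}

def adj_r2(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = np.sum((y - X @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    n, p = X.shape
    return 1 - (ss_res / (n - p)) / (ss_tot / (n - 1))

chosen, best = [], -np.inf
while True:
    candidates = []
    for t in terms:
        if t in chosen:
            continue
        X = np.column_stack([np.ones(len(resp))] +
                            [cols[c] for c in chosen + [t]])
        candidates.append((adj_r2(X, resp), t))
    if not candidates:
        break
    score, term = max(candidates)
    if score <= best:                 # stop when no term improves the fit
        break
    chosen.append(term); best = score

print("selected terms (load-index tuples):", chosen, f"adj R^2 = {best:.4f}")
```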
A finite element model of rigid body structures actuated by dielectric elastomer actuators
NASA Astrophysics Data System (ADS)
Simone, F.; Linnebach, P.; Rizzello, G.; Seelecke, S.
2018-06-01
This paper presents finite element (FE) modeling and simulation of dielectric elastomer actuators (DEAs) coupled with articulated structures. DEAs have proven to be an effective transduction technology for the realization of large-deformation, low-power-consumption, and fast mechatronic actuators. However, the complex dynamic behavior of the material, characterized by nonlinearities and rate-dependent phenomena, makes it difficult to accurately model and design DEA systems. The problem is further complicated when the DEA is used to activate articulated structures, which increase both the system complexity and the implementation effort of numerical simulation models. In this paper, we present a model-based tool that allows complex articulated systems actuated by DEAs to be implemented and simulated effectively. A first prototype of a compact switch actuated by DEA membranes is chosen as a reference study to introduce the methodology. The commercially available FE software COMSOL is used for implementing and coupling a physics-based dynamic model of the DEA with the external structure, i.e., the switch. The model is then experimentally calibrated and validated in both quasi-static and dynamic loading conditions. Finally, preliminary results on how to use the simulation tool to optimize the design are presented.
Input variable selection and calibration data selection for storm water quality regression models.
Sun, Siao; Bertrand-Krajewski, Jean-Luc
2013-01-01
Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data to develop models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available, and model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact, and a procedure is developed to carry out the two selection tasks in sequence. The procedure first selects model input variables using a cross-validation method; an appropriate number of variables is identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the input selection results, calibration data selection is then studied. The uncertainty in model performance due to calibration data selection is investigated with a random selection method, and an approach using a cluster method is applied to enhance calibration practice based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content of the calibration data is important in addition to its size.
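A minimal sketch of the cluster-based selection idea follows, with synthetic event descriptors and scikit-learn's KMeans as a stand-in for the clustering method: events are grouped in feature space and the event nearest each centroid is picked, so the calibration set spans the observed range of storm behaviour instead of being a random draw.

```python
# Minimal sketch: pick representative calibration events via clustering.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
events = np.column_stack([rng.gamma(2, 10, 60),      # rain depth (mm)
                          rng.uniform(0.2, 6, 60),   # duration (h)
                          rng.uniform(1, 30, 60)])   # antecedent dry days

k = 8                                                # calibration set size
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(events)
picked = [int(np.argmin(np.linalg.norm(events - c, axis=1)))
          for c in km.cluster_centers_]              # nearest event per centroid
print("calibration events:", sorted(picked))
```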
MacKinnon, D.J.; Clow, G.D.; Tigges, R.K.; Reynolds, R.L.; Chavez, P.S.
2004-01-01
The vulnerability of dryland surfaces to wind erosion depends importantly on the absence or the presence and character of surface roughness elements, such as plants, clasts, and topographic irregularities, that diminish wind speed near the surface. A model for the friction velocity ratio has been developed to account for wind sheltering by many different types of co-existing roughness elements. Such conditions typify a monitored area in the central Mojave Desert, California, that experiences frequent sand movement and dust emission. Two additional models are used to convert the friction velocity ratio to the surface roughness length (z0) for momentum. To calculate roughness lengths from these models, measurements were made at 11 sites within the monitored area to characterize the surface roughness elements. Measurements included (1) the number of roughness species (e.g., plants, small-scale topography, clasts) and their associated heights and widths, (2) the spacing among species, and (3) vegetation porosity (a measure of the spatial distribution of the woody elements of a plant). Documented or estimated values of drag coefficients for the different species were included in the modeling. At these sites, wind-speed profiles were measured during periods of neutral atmospheric stability using three 9-m towers, each carrying three or four calibrated anemometers. Modeled roughness lengths show a close correspondence (correlation coefficient, 0.84-0.86) to the aerodynamically determined values at the field sites. The geometric properties of the roughness elements in the model are amenable to measurement at much higher temporal and spatial resolution by remote-sensing techniques than can be achieved through laborious ground-based methods. A remote-sensing approach to acquiring values of the modeled roughness length is particularly important for the development of linked surface/atmosphere wind-erosion models sensitive to climate variability and land-use changes in areas such as the southwestern United States, where surface roughness has large spatial and temporal variations.
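A short sketch of the aerodynamic determination the modeled roughness lengths were compared against: under neutral stability the log wind profile u(z) = (u*/k) ln(z/z0) is a straight line in ln(z), so a fit of wind speed against ln(height) yields the friction velocity from the slope and z0 from the intercept. The tower data below are made up for illustration.

```python
# Minimal sketch: roughness length z0 from a neutral-stability log profile.
import numpy as np

kvk = 0.4                                    # von Karman constant
z = np.array([1.0, 2.0, 4.0, 9.0])           # anemometer heights [m]
u = np.array([4.1, 4.9, 5.6, 6.5])           # mean wind speeds [m/s]

slope, intercept = np.polyfit(np.log(z), u, 1)
u_star = slope * kvk                         # friction velocity [m/s]
z0 = np.exp(-intercept / slope)              # roughness length [m]
print(f"u* = {u_star:.2f} m/s, z0 = {z0:.4f} m")
```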
Pernik, Meribeth
1987-01-01
The sensitivity of a multilayer finite-difference regional flow model was tested by changing the calibrated values of five parameters in the steady-state model and one in the transient-state model. The parameters changed under the steady-state condition were those that had been routinely adjusted during the calibration process as part of the effort to match pre-development potentiometric surfaces and elements of the water budget. The tested steady-state parameters were recharge, riverbed conductance, transmissivity, confining unit leakance, and boundary location; in the transient-state model, the storage coefficient was adjusted. The sensitivity of the model to changes in the calibrated values of these parameters was evaluated with respect to the simulated response of net base flow to the rivers and the mean value of the absolute head residual. To provide a standard measure of sensitivity from one parameter to another, the standard deviation of the absolute head residual was calculated. The steady-state model was shown to be most sensitive to changes in rates of recharge. When the recharge rate was held constant, the model was more sensitive to variations in transmissivity. Near the rivers, riverbed conductance becomes the dominant parameter controlling the heads. Changes in confining unit leakance had little effect on simulated base flow but greatly affected head residuals. The model was relatively insensitive to changes in the location of no-flow boundaries and to moderate changes in the altitude of constant-head boundaries. The storage coefficient was adjusted under transient conditions to illustrate the model's sensitivity to changes in storativity; the model is less sensitive to an increase in storage coefficient than to a decrease. As the storage coefficient decreased, aquifer drawdown increased and base flow decreased; the opposite response occurred when the storage coefficient was increased. (Author's abstract)
DOE Office of Scientific and Technical Information (OSTI.GOV)
English, Shawn Allen; Nelson, Stacy Michelle; Briggs, Timothy
Presented is a model verification and validation effort using low-velocity impact (LVI) experiments on carbon fiber reinforced polymer laminates. A flat cylindrical indenter impacts the laminate with enough energy to produce delamination, matrix cracks, and fiber breaks. Included in the experimental efforts are ultrasonic scans of the damage for qualitative validation of the models; the primary quantitative metrics of validation, however, are the force time history measured through the instrumented indenter and the initial and final velocities. The simulations, which are run on Sandia's Sierra finite element codes, include all physics and material parameters of importance as determined by a sensitivity analysis conducted on the LVI simulation. A novel orthotropic damage and failure constitutive model that is capable of predicting progressive composite damage and failure is described in detail, and material properties are measured, estimated from micromechanics, or optimized through calibration. A thorough verification and calibration against the accompanying experiments are presented, with special emphasis given to the four-point bend experiment. For all simulations of interest, the mesh and material behavior are verified through extensive convergence studies. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output. The result is a quantifiable confidence in material characterization and model physics when simulating this phenomenon in structures of interest.
Coupled electromechanical model of the heart: Parallel finite element formulation.
Lafortune, Pierre; Arís, Ruth; Vázquez, Mariano; Houzeaux, Guillaume
2012-01-01
In this paper, a highly parallel coupled electromechanical model of the heart is presented and assessed. The parallel coupled model is thoroughly discussed, with scalability proven up to hundreds of cores. This work focuses on the mechanical part, including the constitutive model (proposing some modifications to pre-existing models), the numerical scheme, and the coupling strategy. The model is then assessed through two examples. First, the simulation of a small piece of cardiac tissue is used to introduce the main features of the coupled model and calibrate its parameters against experimental evidence. Then, a more realistic problem is solved using those parameters, with a mesh of the Oxford ventricular rabbit model. The results of both examples demonstrate the capability of the model to run efficiently on hundreds of processors and to reproduce some basic characteristics of cardiac deformation.
Reconstructing photorealistic 3D models from image sequence using domain decomposition method
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei
2009-11-01
In the fields of industrial design, artistic design, and heritage conservation, physical objects are usually digitalized by reverse engineering through 3D scanning. Structured light and photogrammetry are the two main methods to acquire 3D information, and both are expensive; even when these expensive instruments are used, photorealistic 3D models are seldom obtained. In this paper, a new method to reconstruct photorealistic 3D models using a single camera is proposed. A square plate glued with coded marks is used to place the objects, and a sequence of about 20 images is taken. From the coded marks, the images are calibrated, and a snake algorithm is used to segment the object from the background. A rough 3D model is obtained using a shape-from-silhouettes algorithm. The silhouettes are decomposed into combinations of convex curves, which are used to partition the rough 3D model into convex mesh patches. For each patch, the multi-view photo-consistency constraints and smoothness regularization are expressed as a finite element formulation, which can be solved locally, with information exchanged along the patch boundaries. The rough model is deformed into a fine 3D model through this domain decomposition finite element method. Textures are assigned to each mesh element, and a photorealistic 3D model is finally obtained. A toy pig is used to verify the algorithm, and the results are encouraging.
Resistive RAMs as analog trimming elements
NASA Astrophysics Data System (ADS)
Aziza, H.; Perez, A.; Portal, J. M.
2018-04-01
This work investigates the use of Resistive Random Access Memory (RRAM) as an analog trimming device. The analog storage capability of the RRAM cell is evaluated, and its ability to hold several resistance states is exploited to propose analog trim elements. To modulate the memory cell resistance, a series of short programming pulses is applied across the RRAM cell, allowing fine calibration of the RRAM resistance. Because the RRAM is non-volatile, the analog device powers up already calibrated for the system in which the trimmed structure is embedded. To validate the concept, a test structure consisting of a voltage reference is evaluated.
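A toy control loop makes the trimming procedure concrete: apply a short programming pulse, read back the resistance, and stop once the target is reached. The device response model and all values below are hypothetical; a real RRAM trim would use measured pulse responses and read-verify hardware.

```python
# Minimal sketch of pulse-train trimming toward a target resistance.
def apply_set_pulse(r_ohm):
    """Toy device response: each short pulse lowers resistance by ~3%,
    bounded below by an assumed on-state resistance."""
    return max(r_ohm * 0.97, 2_000.0)

def trim(r_start, r_target, tol=0.01, max_pulses=200):
    r, pulses = r_start, 0
    while r > r_target * (1 + tol) and pulses < max_pulses:
        r = apply_set_pulse(r)       # program
        pulses += 1                  # then read-verify (implicit here)
    return r, pulses

r_final, n = trim(r_start=50_000.0, r_target=10_000.0)
print(f"trimmed to {r_final:.0f} ohm in {n} pulses")
```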
X-Ray Fluorescence Determination of the Surface Density of Chromium Nanolayers
NASA Astrophysics Data System (ADS)
Mashin, N. I.; Chernjaeva, E. A.; Tumanova, A. N.; Ershov, A. A.
2014-01-01
An auxiliary system consisting of thin-film layers of chromium deposited on a polymer film substrate is used to construct calibration curves relating the relative intensities of the Kα lines of chromium on bulk substrates of different elements to the chromium surface density in the reference samples. Correction coefficients are calculated to account for the absorption of the primary radiation from the x-ray tube and of the analytical lines by the constituent elements of the substrate. A method is thereby developed for determining the surface density of thin chromium films when the test and calibration samples are deposited on substrates of different materials.
The Pointing Self-calibration Algorithm for Aperture Synthesis Radio Telescopes
NASA Astrophysics Data System (ADS)
Bhatnagar, S.; Cornwell, T. J.
2017-11-01
This paper is concerned with algorithms for the calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time for a number of reasons. Some factors, such as rotation of the primary beam with Parallactic Angle for Azimuth-Elevation mount antennas, are known a priori; others, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms that correct for DD effects known a priori, algorithms to solve for DD gains are required for high dynamic range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms that scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal (PSC) algorithm, which solves for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the PSC algorithm in real-time calibration systems and extensions to an antenna Shape SelfCal algorithm for real-time tracking and correction of pointing offsets and changes in antenna shape.
Effect of Pumping on Groundwater Levels: A Case Study
NASA Astrophysics Data System (ADS)
Sindhu, G.; Vijayachandran, Lekshmi
2018-03-01
Groundwater is a major source for drinking and domestic purposes. Nowadays, extensive pumping has become a major issue of concern, since pumping has led to a rapid decline in the groundwater table, thus imposing a landward gradient and leading to saline water intrusion, especially in coastal areas. Groundwater pumping has its utmost effect on coastal aquifer systems, where the seaward gradient gets disturbed by anthropogenic influences. Hence, a groundwater flow model of an aquifer system is essential for understanding the various hydro-geologic conditions and can be used to study the responses of the aquifer system under various pumping scenarios. Besides, a model helps to predict water levels for future periods under a changing environment. In this study, a finite element groundwater flow model of a coastal aquifer system at Aakulam, Trivandrum district, is developed, calibrated and simulated using the software Finite Element subsurface Flow system (FEFLOW 6.2). The model is then used to predict groundwater levels for a future 5-year period during the pre-monsoon and post-monsoon seasons.
NASA Technical Reports Server (NTRS)
Steen, Laura E.; Ide, Robert F.; Van Zante, Judith F.
2016-01-01
The Icing Research Tunnel (IRT) at NASA Glenn has recently switched from using the Icing Blade to using the SEA Multi-Element Sensor (also known as the multi-wire) for its calibration of cloud liquid water content. In order to perform this transition, tests were completed to compare the Multi-Element Sensor to the Icing Blade, particularly with respect to liquid water content, airspeed, and drop size. The two instruments were found to compare well for the majority of Appendix C conditions. However, it was discovered that the Icing Blade under-measures when the conditions approach the Ludlam Limit. This paper also describes data processing procedures for the Multi-Element Sensor in the IRT, including collision efficiency corrections, mounting underneath a splitter plate, and correcting for a jump in the compensation wire power. Further data are presented to describe the repeatability of the IRT with the Multi-Element Sensor, health-monitoring checks for the instrument, and a sensing-element configuration comparison. Ultimately these tests showed that, in the IRT, the multi-wire is a better instrument for measuring cloud liquid water content than the blade.
Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A
2005-07-01
Modelling activated sludge systems has gained an increasing momentum after the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models for full-scale systems requires essentially a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far which were mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach in performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modeling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed in view of further improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.
Juckem, Paul F.; Hunt, Randall J.
2007-01-01
A two-dimensional, steady-state ground-water-flow model of Grindstone Creek, the New Post community, and the surrounding areas was developed using the analytic element computer code GFLOW. The parameter estimation code UCODE was used to obtain a best fit of the model to measured water levels and streamflows. The calibrated model was then used to simulate the effect of ground-water pumping on base flow in Grindstone Creek. Local refinements to the regional model were subsequently added in the New Post area, and local water-level data were used to evaluate the regional model calibration. The locally refined New Post model was also used to simulate the areal extent of capture for two existing water-supply wells and two possible replacement wells. Calibration of the regional Grindstone Creek simulation resulted in horizontal hydraulic conductivity values of 58.2 feet per day (ft/d) for the regional glacial and sandstone aquifer and 7.9 ft/d for glacial thrust-mass areas. Ground-water recharge in the calibrated regional model was 10.1 inches per year. Simulation of a golf-course irrigation well, located roughly 4,000 feet away from the creek, and pumping at 46 gallons per minute (0.10 cubic feet per second, ft3/s), reduced base flow in Grindstone Creek by 0.05 ft3/s, or 0.6 percent of the median base flow during water year 2003, compared to the calibrated model simulation without pumping. A simulation of peak pumping periods (347 gallons per minute or 0.77 ft3/s) reduced base flow in Grindstone Creek by 0.4 ft3/s (4.9 percent of the median base flow). Capture zones for existing and possible replacement wells delineated by the local New Post simulation extend from the well locations to an area south of the pumping well locations. Shallow crystalline bedrock, generally located south of the community, limits the extent of the aquifer and thus the southerly extent of the capture zones. Simulated steady-state pumping at a rate of 9,600 gallons per day (gal/d) from a possible replacement well near the Chippewa Flowage induced 70 gal/d of water from the flowage to enter the aquifer. Although no water-quality samples were collected from the Chippewa Flowage or the ground-water system, surface-water leakage into the ground-water system could potentially change the local water quality in the aquifer.
A Finite-Element Method Model of Soft Tissue Response to Impulsive Acoustic Radiation Force
Palmeri, Mark L.; Sharma, Amy C.; Bouchard, Richard R.; Nightingale, Roger W.; Nightingale, Kathryn R.
2010-01-01
Several groups are studying acoustic radiation force and its ability to image the mechanical properties of tissue. Acoustic radiation force impulse (ARFI) imaging is one modality using standard diagnostic ultrasound scanners to generate localized, impulsive, acoustic radiation forces in tissue. The dynamic response of tissue is measured via conventional ultrasonic speckle-tracking methods and provides information about the mechanical properties of tissue. A finite-element method (FEM) model has been developed that simulates the dynamic response of tissues, with and without spherical inclusions, to an impulsive acoustic radiation force excitation from a linear array transducer. These FEM models were validated with calibrated phantoms. Shear wave speed, and therefore elasticity, dictates tissue relaxation following ARFI excitation, but Poisson’s ratio and density do not significantly alter tissue relaxation rates. Increased acoustic attenuation in tissue increases the relative amount of tissue displacement in the near field compared with the focal depth, but relaxation rates are not altered. Applications of this model include improving image quality, and distilling material and structural information from tissue’s dynamic response to ARFI excitation. Future work on these models includes incorporation of viscous material properties and modeling the ultrasonic tracking of displaced scatterers. PMID:16382621
Field size dependent mapping of medical linear accelerator radiation leakage
NASA Astrophysics Data System (ADS)
Vũ Bezin, Jérémi; Veres, Attila; Lefkopoulos, Dimitri; Chavaudra, Jean; Deutsch, Eric; de Vathaire, Florent; Diallo, Ibrahima
2015-03-01
The purpose of this study was to investigate the suitability of a graphics-library-based model for the assessment of linear accelerator radiation leakage. Transmission through the shielding elements was evaluated using the build-up-factor-corrected exponential attenuation law, and the contribution from the electron guide was estimated using the approximation of a linear isotropic radioactive source. Model parameters were estimated by fitting a series of thermoluminescent dosimeter leakage measurements, acquired up to 100 cm from the beam central axis along three directions. The distribution of leakage data at the patient plane reflected the architecture of the shielding elements. Thus, the maximum leakage dose was found under the collimator when only one jaw shielded the primary beam, and was about 0.08% of the dose at the isocentre. Overall, the main contributor to the leakage dose according to our model was the electron beam guide. Concerning the discrepancies between the measurements used to calibrate the model and the calculations from the model, the average difference was about 7%. Finally, graphics library modelling is a ready and suitable way to estimate the leakage dose distribution on a personal computer. Such data could be useful for dosimetric evaluations in late-effect studies.
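The following sketch illustrates, under stated assumptions, the two ingredients of the leakage model named above: build-up-corrected exponential attenuation through a shielding element, and a linear isotropic source standing in for the electron beam guide. All numeric values (attenuation coefficient, thickness, build-up factor, geometry) are placeholders, not the paper's fitted parameters.

```python
import numpy as np

mu = 0.06          # effective attenuation coefficient of the shield, 1/mm (assumed)
t = 80.0           # shield thickness along the ray, mm (assumed)
buildup = 1.5      # build-up factor for this geometry (assumed)

transmission = buildup * np.exp(-mu * t)        # build-up-corrected attenuation

# electron guide approximated as a line of N isotropic point emitters
z_src = np.linspace(0.0, 500.0, 50)             # emitter positions along guide, mm
strength = 1.0 / z_src.size                     # normalised emission per segment
x_det, z_det = 1000.0, 250.0                    # detector point in the patient plane

r2 = x_det**2 + (z_det - z_src) ** 2            # squared distance to each emitter
dose = np.sum(strength * transmission / (4 * np.pi * r2))
print(f"relative leakage dose: {dose:.3e}")
```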
Multiple-Objective Stepwise Calibration Using Luca
Hay, Lauren E.; Umemoto, Makiko
2007-01-01
This report documents Luca (Let us calibrate), a multiple-objective, stepwise, automated procedure for hydrologic model calibration and the associated graphical user interface (GUI). Luca is a wizard-style user-friendly GUI that provides an easy systematic way of building and executing a calibration procedure. The calibration procedure uses the Shuffled Complex Evolution global search algorithm to calibrate any model compiled with the U.S. Geological Survey's Modular Modeling System. This process assures that intermediate and final states of the model are simulated consistently with measured values.
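The sketch below gives the flavor of a multiple-objective, stepwise calibration of the kind Luca automates: each step calibrates one parameter group against its own objective, then freezes it before the next step. The toy model and the use of SciPy's differential evolution as a stand-in for the Shuffled Complex Evolution search are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import differential_evolution

obs_snow, obs_flow = 3.0, 7.0                  # synthetic "measured" values

def model(p):                                  # toy two-parameter hydrologic model
    snow = p[0] * 1.5                          # governed by the snow parameter
    flow = snow + p[1] * 2.0                   # governed by the runoff parameter
    return snow, flow

params = np.zeros(2)
steps = [([0], lambda s, f: (s - obs_snow) ** 2),   # step 1: snow objective
         ([1], lambda s, f: (f - obs_flow) ** 2)]   # step 2: runoff objective

for idx, objective in steps:
    def cost(x, idx=idx, objective=objective):
        trial = params.copy()
        trial[idx] = x                         # perturb only this step's group
        return objective(*model(trial))
    res = differential_evolution(cost, bounds=[(0, 10)] * len(idx))
    params[idx] = res.x                        # freeze the calibrated group

print(np.round(params, 3))
```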
NASA Technical Reports Server (NTRS)
Clayton, J. Louie; Phelps, Lisa (Technical Monitor)
2001-01-01
Carbon Fiber Rope (CFR) thermal barrier systems are being considered for use in several RSRM (Reusable Solid Rocket Motor) nozzle joints as a replacement for the current assembly gap close-out process/design. This study provides for the development and test verification of analysis methods used for flow-thermal modeling of a CFR thermal barrier subject to fault conditions such as rope combustion gas blow-by and CFR splice failure. Global model development is based on a 1-D (one-dimensional) transient volume-filling approach where the flow conditions are calculated as a function of internal 'pipe' and porous-media 'Darcy' flow correlations. Combustion gas flow rates are calculated for the CFR on a per-linear-inch basis and solved simultaneously with a detailed thermal-gas dynamic model of a local region of gas blow-by (or splice fault). Effects of gas compressibility, friction and heat transfer are accounted for in the model. Computational Fluid Dynamics (CFD) solutions of the fault regions are used to characterize the local flow field, quantify the amount of free jet spreading, and assist in the determination of impingement film coefficients on the nozzle housings. Gas-to-wall heat transfer is simulated by a large thermal finite element grid of the local structure. The employed numerical technique loosely couples the FE (finite element) solution with the gas dynamics solution of the faulted region. All free constants that appear in the governing equations are calibrated by sub-scale hot-fire tests. The calibrated model is used to make flight predictions using motor aft-end environments and timelines. Model results indicate that CFR barrier systems provide a near 'vented joint' style of pressurization. Hypothetical fault conditions considered in this study (blow-by, splice defect) are relatively benign in terms of overall heating of the nozzle metal housing structures.
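A toy numerical sketch of the 1-D transient volume-filling approach described above: a Darcy-type leak correlation feeds combustion gas into a joint cavity whose pressure is updated from an ideal-gas mass balance. The permeability, volume, and gas properties below are placeholder values, not calibrated constants from the hot-fire tests.

```python
R, T = 287.0, 2500.0        # gas constant (J/kg-K) and gas temperature (K), assumed
V = 1.0e-4                  # joint free volume, m^3 (assumed)
kA_over_muL = 2.0e-12       # lumped Darcy conductance of the rope (assumed)
P_motor = 6.0e6             # motor chamber pressure, Pa (assumed)

P = 1.0e5                   # initial cavity pressure, Pa
dt = 1.0e-3                 # time step, s
for step in range(2000):    # 2 s of transient fill
    mdot = kA_over_muL * (P_motor - P)   # Darcy-type leak mass flow, kg/s
    P += dt * mdot * R * T / V           # ideal-gas volume-filling update
print(f"cavity pressure after 2 s: {P / 1e6:.2f} MPa")
```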
NASA Astrophysics Data System (ADS)
Gabi, Yasmine; Martins, Olivier; Wolter, Bernd; Strass, Benjamin
2018-04-01
The paper considers Rockwell hardness investigation by finite element simulation of the inspection of press-hardened parts using the 3MA non-destructive testing system. The FEM model is based on a robust calculation strategy that manages the issues of multiscale geometry and time, as well as the local nonlinear hysteresis behavior of ferromagnetic materials. 3MA simulations are performed at a high-level operating point in order to saturate the microscopically soft surface layer of press-hardened steel and access mainly the bulk properties. 3MA measurements are validated by comparison with the numerical simulations. Based on the simulation outputs, a virtual calibration is run; this result constitutes the first validation, as the simulated calibration is in agreement with the conventional experimental data. As a highlight, a correlation between magnetic quantities and hardness can be described via the FEM-simulated signals and agrees closely with the measured results.
NASA Technical Reports Server (NTRS)
Verma, Savita; Lee, Hanbong; Dulchinos, Victoria L.; Martin, Lynne; Stevens, Lindsay; Jung, Yoon; Chevalley, Eric; Jobe, Kim; Parke, Bonny
2017-01-01
NASA has been working with the FAA and aviation industry partners to develop and demonstrate new concepts and technologies that integrate arrival, departure, and surface traffic management capabilities. In March 2017, NASA conducted a human-in-the-loop (HITL) simulation for integrated surface and airspace operations, modeling Charlotte Douglas International Airport, to evaluate the operational procedures and information requirements for the tactical surface metering tool, and data exchange elements between the airline controlled ramp and ATC Tower. In this paper, we focus on the calibration of the tactical surface metering tool using various metrics measured from the HITL simulation results. Key performance metrics include gate hold times from pushback advisories, taxi-in-out times, runway throughput, and departure queue size. Subjective metrics presented in this paper include workload, situational awareness, and acceptability of the metering tool and its calibration.
Modeling nuclear field shift isotope fractionation in crystals
NASA Astrophysics Data System (ADS)
Schauble, E. A.
2013-12-01
In this study nuclear field shift fractionations in solids (and chemically similar liquids) are estimated using calibrated density functional theory calculations. The nuclear field shift effect is a potential driver of mass-independent isotope fractionation (1,2), especially for elements with high atomic number such as Hg, Tl and U. This effect is caused by the different shapes and volumes of isotopic nuclei, and their interactions with electronic structures and energies. Nuclear field shift isotope fractionations can be estimated with first-principles methods, but the calculations are computationally difficult, limiting most theoretical studies so far to small gas-phase molecules and molecular clusters. Many natural materials of interest are more complex, and it is important to develop ways to estimate field shift effects that can be applied to minerals, solutions, biomolecules, and mineral-solution interfaces. Plane-wave density functional theory, in combination with the projector augmented wave method (DFT-PAW), is much more readily adapted to complex materials than the relativistic all-electron calculations that have been the focus of most previous studies. DFT-PAW is a particularly effective tool for studying crystals with periodic boundary conditions, and may also be incorporated into molecular dynamics simulations of solutions and other disordered phases. Initial calibrations of DFT-PAW calculations against high-level all-electron models of field shift fractionation suggest that the method may be broadly applicable to a variety of elements and types of materials. In addition, the close relationship between the isomer shift of Mössbauer spectroscopy and the nuclear field shift isotope effect makes it possible, at least in principle, to estimate the volume component of field shift fractionations in some species that are too complex even for DFT-PAW models, so long as there is a Mössbauer isotope for the element of interest. Initial results will be presented for calculations of liquid-vapor fractionation of cadmium and mercury, which indicate an affinity for heavy isotopes in the liquid phase. In the case of mercury the results match well with recent experiments. Mössbauer-calibrated fractionation factors will also be presented for tin and platinum species. Platinum isotope behaviour in metals appears to be particularly interesting, with very distinct isotope partitioning behaviour for iron-rich alloys relative to pure platinum metal. References: 1) Bigeleisen, J. (1996) J. Am. Chem. Soc. 118, 3676-3680. 2) Nomura, M., Higuchi, N., Fujii, Y. (1996) J. Am. Chem. Soc. 118, 9127-9130.
Odegård, M; Mansfeld, J; Dundas, S H
2001-08-01
Calibration materials for microanalysis of Ti minerals have been prepared by direct fusion of synthetic and natural materials by resistance heating in high-purity graphite electrodes. Synthetic materials were FeTiO3 and TiO2 reagents doped with minor and trace elements; CRMs for ilmenite, rutile, and a Ti-rich magnetite were used as natural materials. Problems occurred during fusion of Fe2O3-rich materials, because at atmospheric pressure Fe2O3 decomposes into Fe3O4 and O2 at 1462 degrees C. An alternative fusion technique under pressure was tested, but the resulting materials were characterized by extensive segregation and development of separate phases. Fe2O3-rich materials were therefore fused below this temperature, resulting in a form of sintering, without conversion of the materials into amorphous glasses. The fused materials were studied by optical microscopy and EPMA, and tested as calibration materials by inductively coupled plasma mass spectrometry with laser ablation for sample introduction (LA-ICP-MS). It was demonstrated that calibration curves based on materials of rutile composition, within normal analytical uncertainty, generally coincide with calibration curves based on materials of ilmenite composition. It is therefore concluded that LA-ICP-MS analysis of Ti minerals can advantageously be based exclusively on calibration materials prepared for rutile, thereby avoiding the special fusion problems related to oxide mixtures of ilmenite composition. The sintered materials were in good overall agreement with homogeneous glass materials, an observation indicating that sintered mineral concentrates might also be a useful alternative for instrument calibration in other situations, e.g. as an alternative to pressed powders.
Model of dissolution in the framework of tissue engineering and drug delivery.
Sanz-Herrera, J A; Soria, L; Reina-Romo, E; Torres, Y; Boccaccini, A R
2018-05-22
Dissolution phenomena are ubiquitously present in biomaterials in many different fields. Despite the advantages of simulation-based design of biomaterials in medical applications, additional efforts are needed to derive reliable models which describe the process of dissolution. A phenomenologically based model, available for simulation of dissolution in biomaterials, is introduced in this paper. The model turns into a set of reaction-diffusion equations implemented in a finite element numerical framework. First, a parametric analysis is conducted in order to explore the role of model parameters on the overall dissolution process. Then, the model is calibrated and validated versus a straightforward but rigorous experimental setup. Results show that the mathematical model macroscopically reproduces the main physicochemical phenomena that take place in the tests, corroborating its usefulness for design of biomaterials in the tissue engineering and drug delivery research areas.
Aircraft electric field measurements: Calibration and ambient field retrieval
NASA Technical Reports Server (NTRS)
Koshak, William J.; Bailey, Jeff; Christian, Hugh J.; Mach, Douglas M.
1994-01-01
An aircraft locally distorts the ambient thundercloud electric field. In order to determine the field in the absence of the aircraft, an aircraft calibration is required. In this work a matrix inversion method is introduced for calibrating an aircraft equipped with four or more electric field sensors and a high-voltage corona point that is capable of charging the aircraft. An analytic, closed-form solution for the estimate of a (3 x 3) aircraft calibration matrix is derived, and an absolute calibration experiment is used to improve the relative magnitudes of the elements of this matrix. To demonstrate the calibration procedure, we analyze actual calibration data derived from a Learjet 28/29 that was equipped with five shutter-type field mill sensors (each with sensitivities of better than 1 V/m) located at the top, bottom, port, starboard, and aft positions. As a test of the calibration method, we analyze computer-simulated calibration data (derived from known aircraft and ambient fields) and explicitly determine the errors involved in deriving the various calibration matrices. We extend our formalism to arrive at an analytic solution for the ambient field, and again carry all errors explicitly.
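A minimal numerical sketch of the matrix-inversion calibration described above, assuming five field mills responding linearly to a three-component ambient field: the calibration matrix is estimated by least squares from known field states, and the ambient field is then retrieved by pseudo-inversion. The mill count, geometry, and noise level are illustrative assumptions, not the Learjet configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
C_true = rng.normal(size=(5, 3))               # 5 mills responding to (Ex, Ey, Ez)

E_cal = rng.normal(size=(3, 40))               # 40 known calibration field states
M = C_true @ E_cal + 0.01 * rng.normal(size=(5, 40))   # noisy mill readings

C_est = M @ np.linalg.pinv(E_cal)              # least-squares calibration matrix

E_ambient = np.array([1.0, -2.0, 0.5])         # "unknown" ambient field for a test
m_obs = C_true @ E_ambient                     # what the mills would read
E_retrieved = np.linalg.pinv(C_est) @ m_obs    # ambient-field retrieval
print(np.round(E_retrieved, 3))                # close to (1.0, -2.0, 0.5)
```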
Calibration of a pitot-static rake
NASA Technical Reports Server (NTRS)
Stump, H. P.
1977-01-01
A five-element pitot-static rake was tested to confirm its accuracy and determine its suitability for use in low-speed tunnel calibration at Langley, primarily in the Full-Scale Tunnel. The rake was tested at a single airspeed of 74 miles per hour (33 meters per second) and at pitch and yaw angles of 0 to ±20 degrees in 4-degree increments.
Calibration Development for an Unsteady Two-Strut Store Balance
NASA Astrophysics Data System (ADS)
Schmit, Ryan; Maatz, Ian; Johnson, Rudy
2017-11-01
This paper addresses measurements of unsteady store forces and moments in and around a weapons bay cavity. The cavity dimensions are: length 8.5 inches, depth 1.5 inches, width 2.5 inches, giving an L/D ratio of 5.67. Test conditions are Mach 0.7 and 1.5 at a Reynolds number of 2.0x10^6 per foot. The 7.2-inch-long aluminum store is held in the cavity with two struts, and the strut lengths are varied to move the store to different cavity depth locations. The normal forces and pitching moments are measured with two miniature 25-pound load cells with a natural frequency of 24 kHz. The store-strut-load cell balance can also produce unwanted structural eigenfrequencies at or near the cavity's Rossiter tones. Moving these eigenfrequencies away from the Rossiter tones called for detailed design and finite element modeling (FEM) before wind tunnel testing. Also discussed are the issues in developing a calibration method for an unsteady two-strut store balance for use inside a scaled wind tunnel weapons bay cavity model.
NASA Technical Reports Server (NTRS)
Righter, K.; Pando, K.M.; Danielson, L.
2009-01-01
Shergottites have high S contents (1300 to 4600 ppm; [1]), but it is unclear if they are sulfide saturated or under-saturated. This issue has fundamental implications for determining the long term S budget of the martian surface and atmosphere (from mantle degassing), as well as evolution of the highly siderophile elements (HSE) Au, Pd, Pt, Re, Rh, Ru, Ir, and Os, since concentrations of the latter are controlled by sulfide stability. Resolution of sulfide saturation depends upon temperature, pressure, oxygen fugacity (and FeO), and magma composition [2]. Expressions derived from experimental studies allow prediction of S contents, though so far they are not calibrated for shergottitic liquids [3-5]. We have carried out new experiments designed to test current S saturation models, and then show that existing calibrations are not suitable for high FeO and low Al2O3 compositions characteristic of shergottitic liquids. The new results show that existing models underpredict S contents of sulfide saturated shergottitic liquids by a factor of 2.
NASA Astrophysics Data System (ADS)
Karssenberg, D.; Wanders, N.; de Roo, A.; de Jong, S.; Bierkens, M. F.
2013-12-01
Large-scale hydrological models are nowadays mostly calibrated using observed discharge. As a result, a large part of the hydrological system that is not directly linked to discharge, in particular the unsaturated zone, remains uncalibrated or might be modified unrealistically. Soil moisture observations from satellites have the potential to fill this gap, as these provide the closest thing to a direct measurement of the state of the unsaturated zone and are thus potentially useful in calibrating unsaturated zone model parameters. This is expected to result in a better identification of the complete hydrological system, potentially leading to improved forecasts of the hydrograph as well. Here we evaluate this added value of remotely sensed soil moisture in the calibration of large-scale hydrological models by addressing two research questions: 1) Which parameters of hydrological models can be identified by calibration with remotely sensed soil moisture? 2) Does calibration with remotely sensed soil moisture lead to an improved calibration of hydrological models compared to approaches that calibrate only with discharge, such that it leads to improved forecasts of soil moisture content and discharge as well? To answer these questions we use a dual state and parameter ensemble Kalman filter to calibrate the hydrological model LISFLOOD for the Upper Danube area. Calibration is done with discharge and with remotely sensed soil moisture acquired by AMSR-E, SMOS and ASCAT. Four scenarios are studied: no calibration (expert knowledge), calibration on discharge, calibration on remote sensing data (three satellites), and calibration on both discharge and remote sensing data. Using a split-sample approach, the model is calibrated on a period of 2 years and the calibrated model parameters are validated on a period of 10 years. Results show that calibration with discharge data improves the estimation of groundwater parameters (e.g., the groundwater reservoir constant) and routing parameters. Calibration with only remotely sensed soil moisture results in an accurate calibration of parameters related to land surface processes (e.g., the saturated conductivity of the soil), which is not possible when calibrating on discharge alone. For upstream areas up to 40,000 km2, calibration on both discharge and soil moisture reduces the RMSE of discharge simulations by 10-30% compared to calibration on discharge alone. For discharge in the downstream area, model performance with assimilation of remotely sensed soil moisture is not increased or is slightly decreased, most probably due to the larger relative importance of routing and of the groundwater contribution in downstream areas. When microwave soil moisture is used for calibration, the RMSE of soil moisture simulations decreases from 0.072 m3 m-3 to 0.062 m3 m-3. The conclusion is that remotely sensed soil moisture holds potential for the calibration of hydrological models, leading to a better simulation of soil moisture content throughout the model domain and a better simulation of discharge in upstream areas, particularly if discharge observations are sparse.
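The following is a minimal sketch of a dual state-and-parameter ensemble Kalman filter update of the kind used above, with a one-line toy model in place of LISFLOOD: the augmented ensemble holds a soil moisture state and a conductivity-like parameter, and a soil moisture observation updates both jointly. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ens = 100
theta = rng.normal(0.30, 0.05, n_ens)          # ensemble of soil moisture states
ksat = rng.normal(1.0, 0.3, n_ens)             # ensemble of parameter values

# forecast step: toy dynamics in which drainage depends on the parameter
theta = theta - 0.02 * ksat + rng.normal(0, 0.005, n_ens)

obs, obs_err = 0.25, 0.02                      # remotely sensed soil moisture
X = np.vstack([theta, ksat])                   # augmented ensemble, 2 x n_ens
A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
H = np.array([[1.0, 0.0]])                     # we observe only soil moisture

P = A @ A.T / (n_ens - 1)                      # ensemble covariance
K = P @ H.T / (H @ P @ H.T + obs_err**2)       # Kalman gain (scalar observation)
perturbed = obs + rng.normal(0, obs_err, n_ens)
X = X + K @ (perturbed - H @ X)                # joint state/parameter update

print(X[0].mean(), X[1].mean())                # updated moisture and parameter
```

Because the observed state and the parameter are correlated across the ensemble, the soil moisture observation pulls the parameter toward values consistent with it, which is what allows land-surface parameters to be identified from satellite data.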
SWAT: Model use, calibration, and validation
USDA-ARS?s Scientific Manuscript database
SWAT (Soil and Water Assessment Tool) is a comprehensive, semi-distributed river basin model that requires a large number of input parameters which complicates model parameterization and calibration. Several calibration techniques have been developed for SWAT including manual calibration procedures...
NASA Technical Reports Server (NTRS)
Hulka, J. R.; Jones, G. W.
2010-01-01
Liquid rocket engines using oxygen and methane propellants are being considered by the National Aeronautics and Space Administration (NASA) for in-space vehicles. This propellant combination has not been previously used in a flight-qualified engine system, so limited test data and analysis results are available at this stage of early development. NASA has funded several hardware-oriented activities with oxygen and methane propellants over the past several years with the Propulsion and Cryogenic Advanced Development (PCAD) project, under the Exploration Technology Development Program. As part of this effort, the NASA Marshall Space Flight Center has conducted combustion, performance, and combustion stability analyses of several of the configurations. This paper summarizes the analyses of combustion and performance as a follow-up to a paper published in the 2008 JANNAF/LPS meeting. Combustion stability analyses are presented in a separate paper. The current paper includes test and analysis results of coaxial element injectors using liquid oxygen and liquid methane or gaseous methane propellants. Several thrust chamber configurations have been modeled, including thrust chambers with multi-element swirl coax element injectors tested at NASA MSFC, and a uni-element chamber with shear and swirl coax injectors tested at The Pennsylvania State University. Configurations were modeled with two one-dimensional liquid rocket combustion analysis codes, the Rocket Combustor Interaction Design and Analysis (ROCCID) code and the Coaxial Injector Combustion Model (CICM). Significant effort was applied to show how these codes can be used to model combustion and performance with oxygen/methane propellants a priori, and what anchoring or calibrating features need to be applied or developed in the future. This paper describes the test hardware configurations, presents the results of all the analyses, and compares the results from the two analytical methods.
NASA Astrophysics Data System (ADS)
Hayley, Kevin; Schumacher, J.; MacMillan, G. J.; Boutin, L. C.
2014-05-01
Expanding groundwater datasets collected by automated sensors, and improved groundwater databases, have caused a rapid increase in the calibration data available for groundwater modeling projects. Improved methods of subsurface characterization have increased the need for model complexity to represent geological and hydrogeological interpretations. The larger calibration datasets and the need for meaningful predictive uncertainty analysis have both increased the degree of parameterization necessary during model calibration. Due to these competing demands, modern groundwater modeling efforts require a massive degree of parallelization in order to remain computationally tractable. A methodology for the calibration of highly parameterized, computationally expensive models using the Amazon EC2 cloud computing service is presented. The calibration of a regional-scale model of groundwater flow in Alberta, Canada, is provided as an example. The model covers a 30,865-km2 domain and includes 28 hydrostratigraphic units. Aquifer properties were calibrated to more than 1,500 static hydraulic head measurements and 10 years of measurements during industrial groundwater use. Three regionally extensive aquifers were parameterized (with spatially variable hydraulic conductivity fields), as was the areal recharge boundary condition, leading to 450 adjustable parameters in total. The PEST-based model calibration was parallelized on up to 250 computing nodes located on Amazon's EC2 servers.
Using the cloud to speed-up calibration of watershed-scale hydrologic models (Invited)
NASA Astrophysics Data System (ADS)
Goodall, J. L.; Ercan, M. B.; Castronova, A. M.; Humphrey, M.; Beekwilder, N.; Steele, J.; Kim, I.
2013-12-01
This research focuses on using the cloud to address computational challenges associated with hydrologic modeling. One example is the calibration of a watershed-scale hydrologic model, which can take days of execution time on typical computers. While parallel algorithms for model calibration exist and some researchers have used multi-core computers or clusters to run these algorithms, these solutions do not fully address the challenge because (i) calibration can still be too time consuming even on multicore personal computers and (ii) few in the community have the time and expertise needed to manage a compute cluster. Given this, another option that we are exploring through this work is the use of the cloud for speeding up the calibration of watershed-scale hydrologic models. The cloud used in this capacity provides a means for renting a specific number and type of machines for only the time needed to perform a calibration run. The cloud allows one to precisely balance the duration of the calibration against the financial cost so that, if the budget allows, the calibration can be performed more quickly by renting more machines. Focusing specifically on the SWAT hydrologic model and a parallel version of the DDS calibration algorithm, we show significant speed-up across a range of watershed sizes using up to 256 cores to perform a model calibration. The tool provides a simple web-based user interface and the ability to monitor job submission and progress during the calibration process. Finally, the talk concludes with initial work to leverage the cloud for other tasks associated with hydrologic modeling, including tasks related to preparing inputs for constructing place-based hydrologic models.
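A compact sketch of the Dynamically Dimensioned Search (DDS) algorithm named above, which perturbs a randomly chosen, shrinking subset of parameters as the evaluation budget is consumed. The quadratic test objective stands in for a SWAT run; in the cloud setting each objective evaluation would be farmed out to a rented node.

```python
import numpy as np

def dds(objective, lo, hi, budget=500, r=0.2, seed=0):
    rng = np.random.default_rng(seed)
    best = lo + rng.random(lo.size) * (hi - lo)       # random starting point
    f_best = objective(best)
    for k in range(1, budget):
        p = 1.0 - np.log(k) / np.log(budget)          # dimension-inclusion probability
        mask = rng.random(lo.size) < p
        if not mask.any():                            # always perturb at least one dim
            mask[rng.integers(lo.size)] = True
        cand = best.copy()
        step = rng.normal(0.0, r * (hi - lo))         # neighbourhood perturbation
        cand[mask] += step[mask]
        cand = np.clip(cand, lo, hi)                  # keep within bounds
        f = objective(cand)
        if f < f_best:                                # greedy acceptance
            best, f_best = cand, f
    return best, f_best

lo, hi = np.zeros(4), np.full(4, 10.0)
target = np.array([3.0, 7.0, 1.0, 5.0])               # "true" parameter set
print(dds(lambda x: np.sum((x - target) ** 2), lo, hi))
```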
Development of the Quality Assurance/Quality Control Procedures for a Neutron Interrogation System
NASA Astrophysics Data System (ADS)
Obhođaš, Jasmina; Sudac, Davorin; Valković, Vladivoj
2016-06-01
In order to perform Quality Assurance/Quality Control (QA/QC) procedures for a system dedicated to the neutron interrogation of objects for the presence of threat materials, one needs to perform measurements of reference materials (RM), i.e. simulants having the same (or similar) atomic ratios as the real materials. It is well known that explosives, drugs, and various other benign materials contain chemical elements such as hydrogen, oxygen, carbon and nitrogen in distinctly different quantities. For example, a high carbon-to-oxygen ratio (C/O) is characteristic of drugs. Explosives can be differentiated by measurement of both C/O and nitrogen-to-oxygen (N/O) ratios. The C/N ratio of chemical warfare agents, coupled with the measurement of elements such as fluorine and phosphorus, clearly differentiates them from conventional explosives. Here we present the RM preparation, the calibration procedure, and the correlations attained between theoretical values and experimentally obtained results in laboratory conditions for the C/O and N/C ratios of prepared hexogen (RDX), TNT, DLM2, TATP, cocaine, heroin, yperite, tetranitromethane, peroxide methylethylketone, nitromethane and ethyleneglycol dinitrate simulants. We have shown that analysis of the gamma-ray spectra using a simple unfolding model developed for this purpose gave good agreement with the chemical formulas of the created simulants; thus the calibration quality was successfully tested.
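As a small worked example of the ratio-based discrimination described above, the snippet below converts placeholder gamma-ray peak areas into relative atomic abundances and forms the C/O and N/C ratios; the counts and per-element sensitivity factors are invented for illustration.

```python
# peak areas from the unfolded gamma-ray spectrum (counts); placeholder values
counts = {"C": 1.8e4, "N": 2.5e3, "O": 9.0e3}
# per-element sensitivity folding detection efficiency and the neutron-induced
# gamma production cross section (assumed values, determined by calibration)
sens = {"C": 1.0, "N": 0.6, "O": 0.8}

atoms = {el: counts[el] / sens[el] for el in counts}   # relative atomic abundances
c_o = atoms["C"] / atoms["O"]
n_c = atoms["N"] / atoms["C"]
print(f"C/O = {c_o:.2f}, N/C = {n_c:.2f}")
# e.g. a high C/O with negligible N points to a drug rather than an explosive
```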
Simulation of crash tests for high impact levels of a new bridge safety barrier
NASA Astrophysics Data System (ADS)
Drozda, Jiří; Rotter, Tomáš
2017-09-01
The purpose is to demonstrate the potential of non-linear dynamic impact simulation and to explain the possibility of using the finite element method (FEM) for developing new designs of safety barriers. The main challenge is to determine the means to create and validate the finite element (FE) model. The results of accurate impact simulations can help to reduce the costs necessary for developing a new safety barrier. The introductory part deals with the creation of the FE model, which includes the newly designed safety barrier, and focuses on the application of an experimental modal analysis (EMA). The FE model was created in ANSYS Workbench and is formed from shell and solid elements. The experimental modal analysis, performed on a physical prototype, was employed to measure the modal frequencies and shapes. After performing the EMA, the FE mesh was calibrated by comparing the measured modal frequencies with the calculated ones. The last part describes the process of the numerical non-linear dynamic impact simulation in LS-DYNA. This simulation was validated by comparing the measured ASI index with the calculated one. The aim of the study is to improve the professional community's knowledge of dynamic non-linear impact simulations, which should ideally lead to safer, more accurate and more economical designs.
Ferragina, A; Cipolat-Gotet, C; Cecchinato, A; Pazzola, M; Dettori, M L; Vacca, G M; Bittante, G
2017-05-01
The aim of this study was to apply Bayesian models to the Fourier-transform infrared spectroscopy spectra of individual sheep milk samples to derive calibration equations to predict traditional and modeled milk coagulation properties (MCP), and to assess the repeatability of MCP measures and their predictions. Data consisted of 1,002 individual milk samples collected from Sarda ewes reared on 22 farms in the region of Sardinia (Italy), for which MCP and modeled curd-firming parameters were available. Two milk samples were taken from 87 ewes and analyzed with the aim of estimating repeatability, whereas a single sample was taken from the other 915 ewes; a total of 1,089 analyses were therefore performed. For each sample, 2 spectra in the infrared region from 5,011 to 925 cm-1 were available and were averaged before data analysis. BayesB models were used to calibrate equations for each of the traits. Prediction accuracy was estimated for each trait and model using 20 replicates of a training-testing validation procedure. The repeatability of MCP measures and their predictions were also compared. The correlations between measured and predicted traits in the external validation were always higher than 0.5 (0.88 for rennet coagulation time). We confirmed that the most important element determining prediction accuracy is the repeatability of the gold-standard analyses used for building the calibration equations. Repeatability measures of the predicted traits were generally high (≥95%), even for those traits with moderate analytical repeatability. Our results show that Bayesian models applied to Fourier-transform infrared spectra are powerful tools for cheap and rapid prediction of important traits in ovine milk and, compared with other methods, could help in the interpretation of results.
Hondrogiannis, Ellen M; Ehrlinger, Erin; Poplaski, Alyssa; Lisle, Meredith
2013-11-27
A total of 11 elements found in 25 vanilla samples from Uganda, Madagascar, Indonesia, and Papua New Guinea were measured by laser ablation-inductively coupled plasma-time-of-flight-mass spectrometry (LA-ICP-TOF-MS) for the purpose of collecting data that could be used to discriminate among the origins. Pellets were prepared from the samples, and elemental concentrations were obtained on the basis of external calibration curves created using five National Institute of Standards and Technology (NIST) standards and one Chinese standard, with (13)C internal standardization. These curves were validated using NIST 1573a (tomato leaves) as a check standard. Discriminant analysis was used to successfully classify the vanilla samples by their origin. Our method illustrates the feasibility of using LA-ICP-TOF-MS with an external calibration curve for high-throughput screening analysis of spices.
Bhandari, Ammar B; Nelson, Nathan O; Sweeney, Daniel W; Baffaut, Claire; Lory, John A; Senaviratne, Anomaa; Pierzynski, Gary M; Janssen, Keith A; Barnes, Philip L
2017-11-01
Process-based computer models have been proposed as a tool to generate data for phosphorus (P) Index assessment and development. Although models are commonly used to simulate P loss from agriculture under managements that differ from the calibration data, this use of models has not been fully tested. The objective of this study is to determine if the Agricultural Policy Environmental eXtender (APEX) model can accurately simulate runoff, sediment, total P, and dissolved P loss from 0.4- to 1.5-ha agricultural fields under managements that differ from the calibration data. The APEX model was calibrated with field-scale data from eight different managements at two locations (management-specific models). The calibrated models were then validated, either with the same management used for calibration or with different managements. Location models were also developed by calibrating APEX with data from all managements. The management-specific models resulted in satisfactory performance when used to simulate runoff, total P, and dissolved P within their respective systems, with r2 > 0.50, Nash-Sutcliffe efficiency > 0.30, and percent bias within ±35% for runoff and ±70% for total and dissolved P. When applied outside the calibration management, the management-specific models only met the minimum performance criteria in one-third of the tests. The location models had better performance when applied across all managements compared with the management-specific models. Our results suggest that models only be applied within the managements used for calibration and that data from multiple management systems be included in calibration when using models to assess management effects on P loss or to evaluate P Indices.
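For reference, the performance criteria quoted above can be computed as follows for a simulated-versus-observed series; the series here are synthetic, and the thresholds follow the abstract (Nash-Sutcliffe efficiency > 0.30, percent bias within ±35% for runoff).

```python
import numpy as np

obs = np.array([1.2, 0.8, 2.5, 3.1, 0.4, 1.9])   # observed runoff (synthetic)
sim = np.array([1.0, 0.9, 2.2, 3.5, 0.5, 1.7])   # simulated runoff (synthetic)

# Nash-Sutcliffe efficiency: 1 minus error variance over observed variance
nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
# percent bias: aggregate over- or under-prediction relative to the observations
pbias = 100.0 * np.sum(obs - sim) / np.sum(obs)
# coefficient of determination of the obs-sim regression
r2 = np.corrcoef(obs, sim)[0, 1] ** 2

print(f"NSE = {nse:.2f}, PBIAS = {pbias:.1f}%, r2 = {r2:.2f}")
```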
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby
2017-01-01
Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
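A schematic sketch of the sub-model idea, under simplifying assumptions: regressions are trained on restricted composition ranges, the full-range model routes each unknown spectrum to a sub-model, and predictions are blended near the range boundary. Synthetic spectra and ridge-regularized least squares stand in here for real LIBS spectra and PLS.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 300, 50
comp = rng.uniform(0, 100, n)                       # element concentration, wt%
X = rng.normal(size=(n, p)) + np.outer(comp, rng.random(p)) / 50.0  # toy spectra

def fit(X, y):                                      # ridge-regularised least squares
    XtX = X.T @ X + 1e-3 * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ y)

full = fit(X, comp)                                 # full-range model
low = fit(X[comp < 50], comp[comp < 50])            # low-concentration sub-model
high = fit(X[comp >= 50], comp[comp >= 50])         # high-concentration sub-model

def predict(x):
    first = x @ full                                # route using the full model
    second = x @ (low if first < 50 else high)      # refine with the sub-model
    w = np.clip(abs(first - 50.0) / 10.0, 0.0, 1.0) # linear blend near the boundary
    return w * second + (1 - w) * first

print(predict(X[0]), comp[0])                       # prediction vs. true value
```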
Acoustic-optic spectrometer. 1: Noise contributions and system consideration
NASA Technical Reports Server (NTRS)
Chin, G.
1984-01-01
An acousto-optic spectrometer (AOS) used as an IF spectrometer to a heterodyne receiver is modeled as a total power multi-channel integrating receiver. Systematic noise contributions common to all total power, time integrating receivers, as well as noise terms unique to the use of optical elements and photo-detectors in an AOS are identified and discussed. In addition, degradation of signal-to-noise ratio of an unbalanced Dicke receiver compared to a balanced Dicke receiver is found to be due to gain calibration processing and is not an instrumental effect.
Using Satellite Data for Environmental Impact Analysis in Economic Growth: the Case of Mongolia
NASA Astrophysics Data System (ADS)
Tungalag, A.; Tsolmon, R.; Ochirkhuyag, L.; Oyunjargal, J.
2016-06-01
The Mongolian economy is based on the primary and secondary economic sectors of agriculture and industry. In addition, minerals and mining have become a key sector of the economy; the main mining resources are gold, copper, coal, fluorspar and steel. The environment and the green economy are major concerns for most countries, and especially for countries like Mongolia, where mining is a major part of the economy, they are the number one problem. This work tested how environmental elements affect current Mongolian economic growth, which is driven largely by the mining sector. The starting point for any study of economic growth is the neoclassical growth model emphasizing the role of capital accumulation. Growth is analysed either in terms of models with exogenous saving rates (the Solow-Swan model) or models where consumption, and hence savings, are determined by optimizing individuals; these are the so-called optimal growth or Ramsey-Cass-Koopmans models. The study extends the Solow model and the Ramsey-Cass-Koopmans model to include environmental elements, using satellite data to determine degraded land area and vegetation values from 1995 to 2013. The degraded land area increased from 1995 (4856 m2) to 2013 (10478 m2), while the vegetation value decreased over the same period. A description of the methodology of the study follows, together with the data collected, the econometric estimations, and the calibration with environmental elements.
Calibration of the hard x-ray detectors for the FOXSI solar sounding rocket
NASA Astrophysics Data System (ADS)
Athiray, P. S.; Buitrago-Casas, Juan Camilo; Bergstedt, Kendra; Vievering, Juliana; Musset, Sophie; Ishikawa, Shin-nosuke; Glesener, Lindsay; Takahashi, Tadayuki; Watanabe, Shin; Courtade, Sasha; Christe, Steven; Krucker, Säm.; Goetz, Keith; Monson, Steven
2017-08-01
The Focusing Optics X-ray Solar Imager (FOXSI) sounding rocket experiment conducts direct imaging and spectral observation of the Sun in hard X-rays, in the energy range 4 to 20 keV. These high-sensitivity observations are used to study particle acceleration and coronal heating. FOXSI is designed with seven grazing-incidence optics modules that focus X-rays onto seven focal-plane detectors kept at a 2 m distance. FOXSI-1 was flown with seven Double-sided Si Strip Detectors (DSSD), and two of them were replaced with CdTe detectors for FOXSI-2. The upcoming FOXSI-3 flight will carry DSSD and CdTe detectors with upgraded optics for enhanced sensitivity. The detectors are calibrated using various radioactive sources. The detector's spectral response matrix was constructed with diagonal elements using a Gaussian approximation with a spread (sigma) that accounts for the energy resolution of the detector. Spectroscopic studies of past FOXSI flight data suggest that the inclusion of lower-energy X-rays could better constrain the spectral modeling and yield a more precise temperature estimate of the hot plasma. This motivates us to carry out an improved calibration to better understand the finer effects on the spectral response, especially at lower energies. Here we report our improved calibration of the FOXSI detectors using experiments and Monte Carlo simulations.
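The diagonal-Gaussian response matrix described above can be assembled as in the sketch below, where column j spreads a monochromatic input at energy E_j over the measured-energy bins with an energy-dependent resolution. The energy grid and resolution model are assumptions, not the FOXSI detector values.

```python
import numpy as np

edges = np.arange(4.0, 20.5, 0.5)               # keV bin edges (assumed grid)
centers = 0.5 * (edges[:-1] + edges[1:])

def sigma(E):
    # toy energy-resolution model in keV (assumed, not the measured resolution)
    return 0.2 + 0.02 * E

# element [i, j]: probability that a photon of energy E_j is recorded in bin i
R = np.exp(-0.5 * ((centers[:, None] - centers[None, :]) / sigma(centers)) ** 2)
R /= R.sum(axis=0, keepdims=True)               # normalise so columns conserve counts

flux = np.exp(-centers / 3.0)                   # toy incident photon spectrum
measured = R @ flux                             # expected detected spectrum
print(measured[:5])
```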
Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking
NASA Technical Reports Server (NTRS)
Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.
2005-01-01
This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to the Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised: mast camera frames are in general not parallel to the masthead base frame, and the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on the high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on the lower-resolution Hazcam for Navcam-to-Hazcam handoff.
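The sketch below shows a deliberately simplified closed-form pan/tilt solve, assuming (unlike the actual rover, as the abstract notes) that the camera frame is aligned with the masthead base frame and that the optical axis passes through the image center; the paper's contribution is precisely the exact solution without these simplifying assumptions.

```python
import numpy as np

def pan_tilt(target):
    # target: 3-D point in the masthead base frame, metres (x forward, z up)
    x, y, z = target
    pan = np.arctan2(y, x)                   # rotate about the vertical axis
    tilt = np.arctan2(z, np.hypot(x, y))     # then elevate toward the target
    return np.degrees(pan), np.degrees(tilt)

# e.g. a target 2 m ahead, 1 m to the side, 0.5 m below the masthead
print(pan_tilt(np.array([2.0, 1.0, -0.5])))
```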
NASA Astrophysics Data System (ADS)
Houtz, Derek A.; Emery, William; Gu, Dazhen; Jacob, Karl; Murk, Axel; Walker, David K.; Wylde, Richard J.
2017-08-01
A conical cavity has been designed and fabricated for use as a broadband passive microwave calibration source, or blackbody, at the National Institute of Standards and Technology. The blackbody will be used as a national primary standard for brightness temperature and will allow for the prelaunch calibration of spaceborne radiometers and the calibration of ground-based systems, providing traceability among radiometric data. The conical geometry provides performance independent of polarization and minimizes reflections and standing waves, giving a high microwave emissivity. The conical blackbody has advantages over typical pyramidal array geometries, including reduced temperature gradients and excellent broadband electromagnetic performance over more than a frequency decade. The blackbody is designed for use between 18 and 230 GHz, at temperatures between 80 and 350 K, and is vacuum compatible. To approximate theoretical blackbody behavior, the design maximizes emissivity and thus minimizes reflectivity. A newly developed microwave absorber is demonstrated that uses a cryogenically compatible, thermally conductive two-part epoxy with magnetic carbonyl iron (CBI) powder loading. We measured the complex permittivity and permeability for different CBI-loading percentages; the conical absorber was then designed and optimized with geometric optics and finite-element modeling, and finally the reflectivity of the resulting fabricated structure was measured. We demonstrated normal-incidence reflectivity considerably below -40 dB at all relevant remote sensing frequencies.
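For an opaque calibration target, the link between the measured reflectivity and the emissivity, and hence the radiometric brightness temperature, is the simple relation sketched below (Rayleigh-Jeans approximation; the numbers are illustrative, not NIST measurements).

```python
refl_db = -45.0                      # measured normal-incidence reflectivity, dB
R = 10 ** (refl_db / 10)             # linear power reflection coefficient
emissivity = 1.0 - R                 # opaque target: emissivity = 1 - R

T_phys, T_bg = 80.0, 300.0           # target and background temperatures, K
T_brightness = emissivity * T_phys + R * T_bg   # small reflected-background term
print(f"e = {emissivity:.5f}, Tb = {T_brightness:.3f} K")
```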
NASA Astrophysics Data System (ADS)
Chatterjee, Abhijit; Verma, Anurag
2016-05-01
The Advanced Wide Field Sensor (AWiFS) camera caters to the high temporal resolution requirement of the Resourcesat-2A mission, with a revisit time of 5 days. The AWiFS camera consists of four spectral bands, three in the visible and near IR and one in the short-wave infrared. The imaging concept in the VNIR bands is based on push-broom scanning that uses a linear-array silicon charge coupled device (CCD) based Focal Plane Array (FPA). An on-board calibration unit for these CCD-based FPAs is used to monitor any degradation in the FPA over the entire mission life. Four LEDs are operated in constant-current mode, and 16 different light intensity levels are generated by electronically changing the exposure of the CCD throughout the calibration cycle. This paper describes the experimental setup and characterization results of various flight-model visible LEDs (λp = 650 nm) for the development of the on-board calibration unit of the Advanced Wide Field Sensor (AWiFS) camera of RESOURCESAT-2A. Various LED configurations have been studied to cover the dynamic range of the 6000-pixel silicon CCD based focal plane array from 20% to 60% of saturation during the night pass of the satellite, in order to identify degradation of detector elements. The paper also compares simulation and experimental results for the CCD output profile at different LED combinations in constant-current mode.
Quantitative aspects of inductively coupled plasma mass spectrometry
NASA Astrophysics Data System (ADS)
Bulska, Ewa; Wagner, Barbara
2016-10-01
Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'.
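A minimal example of the external-calibration-with-internal-standard approach discussed above: analyte intensities are ratioed to an internal-standard signal before fitting the calibration line, which compensates for drift and matrix-induced sensitivity changes. All intensities and concentrations below are invented for illustration.

```python
import numpy as np

conc_std = np.array([0.0, 1.0, 5.0, 10.0, 20.0])          # standards, ng/g
I_analyte = np.array([20., 510., 2540., 5010., 9980.])    # analyte counts/s
I_istd = np.array([1000., 1010., 990., 1005., 995.])      # internal-std counts/s

ratio = I_analyte / I_istd                 # drift-corrected response
slope, intercept = np.polyfit(conc_std, ratio, 1)   # linear calibration curve

sample_ratio = 2600.0 / 1002.0             # unknown sample, same ratioing
conc_sample = (sample_ratio - intercept) / slope
print(f"sample concentration: {conc_sample:.2f} ng/g")
```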
Sandia fracture challenge 2: Sandia California's modeling approach
Karlson, Kyle N.; James W. Foulk, III; Brown, Arthur A.; ...
2016-03-09
The second Sandia Fracture Challenge illustrates that predicting the ductile fracture of Ti-6Al-4V subjected to moderate and elevated rates of loading requires thermomechanical coupling, elasto-thermo-poro-viscoplastic constitutive models with the physics of anisotropy, and regularized numerical methods for crack initiation and propagation. We detail our initial approach with an emphasis on iterative calibration and systematically increasing complexity to accommodate anisotropy in the context of an isotropic material model. Blind predictions illustrate strengths and weaknesses of our initial approach. We then revisit our findings to illustrate the importance of including anisotropy in the failure process. Furthermore, mesh-independent solutions of continuum damage models having both isotropic and anisotropic yield surfaces are obtained through nonlocality and localization elements.
Finite Element-Based Mechanical Assessment of Bone Quality on the Basis of In Vivo Images.
Pahr, Dieter H; Zysset, Philippe K
2016-12-01
Beyond bone mineral density (BMD), bone quality designates the mechanical integrity of bone tissue. In vivo images based on X-ray attenuation, such as CT reconstructions, provide size, shape, and the local BMD distribution, and may be exploited as input for finite element analysis (FEA) to assess bone fragility. Further key input parameters of FEA are the material properties of bone tissue. This review discusses the main determinants of bone mechanical properties and emphasizes the added value of FEA, as well as the important assumptions underlying it. Bone tissue is a sophisticated, multiscale composite material that undergoes remodeling but exhibits a rather narrow band of tissue mineralization. Mechanically, bone tissue behaves elastically under physiologic loads and yields by cracking beyond critical strain levels. Through adequate cell-orchestrated modeling, trabecular bone tunes its mechanical properties by volume fraction and fabric. With proper calibration, these mechanical properties may be incorporated in quantitative CT-based finite element analysis, which has been validated extensively with ex vivo experiments and is applied increasingly in clinical trials to assess treatment efficacy against osteoporosis.
Coupled hydrogeological and geomechanical modelling for the analysis of large slope instabilities.
NASA Astrophysics Data System (ADS)
Laloui, Lyesse; Ferrari, Alessio; Bonnard, Christophe
2010-05-01
Slowly moving landslides (average velocity between 2 and 10 cm/year) are quite frequent in mountainous or hilly areas, and they may display occasional crises, generally due to exceptional climatic conditions. The hazard related to these events cannot be analysed in terms of probability, as the number of recorded past events is generally very small and climate change could significantly modify the environmental setting. Quantitative relationships between climatic fluctuations and sliding velocity must therefore be pursued by taking into account the most relevant physical processes involved in landslide behaviour. Conventional stability analyses are unable to address such questions because they do not allow the velocity field to be determined. With regard to the behaviour of large slope instabilities, a methodology is presented which aims to describe the behaviour of slow-moving landslides by means of a coupled hydrogeological and geomechanical modelling framework. As is well known, the evolution of the pore water pressure within the landslide body is often recognized as the main cause of displacement accelerations. The interaction between the hydrological and mechanical responses must therefore be considered in analysing landslide behaviour, with the aim of quantitatively relating pore water pressure variations and movements. For a given case study, pore water pressure evolution in space and time is obtained from a duly calibrated finite element hydrogeological model, which can take into account the role of several key factors such as infiltration, preferential flows and vegetation. Computed groundwater pressures resulting from the hydrogeological simulations are introduced as nodal forces in a finite element geomechanical model in order to calculate stress evolution and displacements. The use of advanced constitutive models based on the generalised effective stress concept allows specific behavioural features to be taken into account, such as the effects of changes in the degree of saturation associated with fluctuation of the groundwater level. The geomechanical model is calibrated by comparing computed and measured displacements at relevant points of the slope. When appropriate, the outcomes from the geomechanical model can be used iteratively to update the hydrogeological model settings. In this way it is possible to simulate the evolution of critical factors (such as permeability or retention properties of the involved materials) associated with the cumulated displacements. Once calibrated, the coupled models can be used to assess landslide behaviour under different scenarios, including modified climatic conditions and the implementation of mitigation measures. Applications to relevant case studies are presented in order to demonstrate the adequacy and usefulness of the proposed modelling framework.
Modeling and Simulation of Nanoindentation
NASA Astrophysics Data System (ADS)
Huang, Sixie; Zhou, Caizhi
2017-11-01
Nanoindentation is a hardness test method applied to small volumes of material, where unique size effects arise that have sparked many related research activities. To fully understand the phenomena observed during nanoindentation tests, modeling and simulation methods have been developed to predict the mechanical response of materials during nanoindentation. However, challenges remain for these computational approaches in terms of length scale, predictive capability, and accuracy. This article reviews recent progress and challenges in the modeling and simulation of nanoindentation, including an overview of molecular dynamics, the quasicontinuum method, discrete dislocation dynamics, and the crystal plasticity finite element method, and discusses how to integrate multiscale modeling approaches seamlessly with experimental studies to understand length-scale effects and microstructure evolution during nanoindentation tests, creating a unique opportunity to establish new calibration procedures for the nanoindentation technique.
Agogo, George O.; van der Voet, Hilko; Veer, Pieter van’t; Ferrari, Pietro; Leenders, Max; Muller, David C.; Sánchez-Cantalejo, Emilio; Bamia, Christina; Braaten, Tonje; Knüppel, Sven; Johansson, Ingegerd; van Eeuwijk, Fred A.; Boshuizen, Hendriek
2014-01-01
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting of the two-part calibration model. Moreover, the extent of adjustment for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model. PMID:25402487
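The two-part structure described here can be sketched in a few lines. The following Python example, with simulated data and hypothetical variable names (not the EPIC data or the authors' exact specification), fits a logistic model for the probability of non-zero consumption and a linear model for the consumed amount, then multiplies the two predictions to obtain a calibrated intake.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 500
    ffq = rng.gamma(2.0, 50.0, n)   # questionnaire intake (simulated)
    p_consume = 1.0 / (1.0 + np.exp(-(ffq - 100.0) / 50.0))
    consumed = rng.random(n) < p_consume
    recall = np.where(consumed, 0.6 * ffq + rng.normal(0, 20, n), 0.0)  # 24-h recall

    X = sm.add_constant(ffq)

    # part 1: probability that the short-term reference measurement is non-zero
    part1 = sm.Logit((recall > 0).astype(float), X).fit(disp=0)

    # part 2: expected amount given consumption, fitted on non-zero recalls only
    pos = recall > 0
    part2 = sm.OLS(recall[pos], X[pos]).fit()

    # calibrated intake = P(consume) * E[amount | consume]
    calibrated = part1.predict(X) * part2.predict(X)
    print(calibrated[:5])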
Discrete Element Method Modeling of the Rheological Properties of Coke/Pitch Mixtures
Majidi, Behzad; Taghavi, Seyed Mohammad; Fafard, Mario; Ziegler, Donald P.; Alamdari, Houshang
2016-01-01
Rheological properties of pitch and pitch/coke mixtures at temperatures around 150 °C are of great interest for the carbon anode manufacturing process in the aluminum industry. In the present work, a cohesive viscoelastic contact model based on the Burgers model is developed using the discrete element method (DEM) in YADE, an open-source DEM code. A dynamic shear rheometer (DSR) is used to measure the viscoelastic properties of pitch at 150 °C. The experimental data are then used to estimate the Burgers model parameters and calibrate the DEM model. The DSR tests were then simulated by a three-dimensional model, and very good agreement was observed between the experimental data and simulation results. Coke aggregates were modeled by overlapping spheres in the DEM model. Coke/pitch mixtures were numerically created by adding 5, 10, 20, and 30 percent of coke aggregates in the size range 0.297-0.595 mm (-30 +50 mesh) to pitch. Adding up to 30% of coke aggregates to pitch can increase its complex shear modulus at 60 Hz from 273 Pa to 1557 Pa. Results also showed that adding coke particles increases both the storage and loss moduli, while it does not have a meaningful effect on the phase angle of pitch. PMID:28773459
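For readers unfamiliar with the Burgers model, the sketch below (Python, with hypothetical parameter values rather than the calibrated ones) evaluates its complex shear modulus, the quantity measured at 60 Hz in the DSR tests: a Maxwell element and a Kelvin-Voigt element act in series, so their complex compliances add.

    import numpy as np

    def burgers_G_star(omega, G_m, eta_m, G_k, eta_k):
        """Complex shear modulus of the Burgers model (Maxwell + Kelvin in series)."""
        J = 1/G_m + 1/(1j*omega*eta_m) + 1/(G_k + 1j*omega*eta_k)  # compliances add
        return 1.0 / J

    omega = 2 * np.pi * 60.0   # 60 Hz test frequency
    # hypothetical parameters of the kind estimated from DSR data on pitch at 150 °C
    G = burgers_G_star(omega, G_m=2e3, eta_m=40.0, G_k=1e3, eta_k=5.0)
    print(abs(G), np.degrees(np.angle(G)))   # |G*| (Pa) and phase angle (deg)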
Ruan, J S; Prasad, P
1995-08-01
A skull-brain finite element model of the human head has been coupled with a multilink rigid body model of the Hybrid III dummy. The coupled model is intended to represent, anatomically, a 50th-percentile human to the extent that the dummy and the skull-brain model represent a human. It has been verified by simulating several human cadaver head impact tests as well as dummy head "impacts" during barrier crashes in an automotive environment. Skull-isostress and brain-isostrain response curves were established based on model calibration against experimental human cadaver tolerance data. The skull-isostress response curve agrees with the JARI Human Head Impact Tolerance Curve for skull fracture. The brain-isostrain response curve predicts a higher G level for concussion than do the JARI concussion curve and the Wayne State Tolerance Curve in the longer-duration range. Barrier crash simulations included belted dummies impacting an airbag, a hard and a soft steering wheel hub, and a case with no head contact with vehicle interior components. Head impact force, intracranial pressures and strains, skull stress, and head center-of-gravity acceleration were investigated as injury parameters. The head injury criterion (HIC) was also calculated along with these parameters. Preliminary results of the model simulations under those impact conditions are discussed.
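Since the head injury criterion (HIC) mentioned above is a standard formula, a brute-force implementation may help: HIC maximizes (t2 - t1) times the mean acceleration raised to the power 2.5 over all sub-windows of the resultant head acceleration trace (in g). The pulse below is a hypothetical half-sine, not data from the paper.

    import numpy as np

    def hic(t, a, max_window=0.036):
        """Head Injury Criterion from an acceleration trace a(t) in g, t in s."""
        # cumulative trapezoidal integral of a(t)
        cum = np.concatenate(([0.0], np.cumsum(0.5*(a[1:] + a[:-1])*np.diff(t))))
        best = 0.0
        for i in range(len(t)):
            for j in range(i + 1, len(t)):
                dt = t[j] - t[i]
                if dt > max_window:
                    break
                avg = (cum[j] - cum[i]) / dt
                best = max(best, dt * avg**2.5)
        return best

    # hypothetical half-sine head-impact pulse: 80 g peak, 12 ms duration
    t = np.linspace(0.0, 0.012, 121)
    a = 80.0 * np.sin(np.pi * t / 0.012)
    print(hic(t, a))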
Lucio, Francesco; Calamia, Elisa; Russi, Elvio; Marchetto, Flavio
2013-01-01
When using an electronic portal imaging device (EPID) for dosimetric verifications, the calibration of the sensitive area is of paramount importance. Two calibration methods are generally adopted: one empirical, based on an external reference dosimeter or on multiple narrow-beam irradiations, and one based on simulation of the EPID response. In this paper we present an alternative approach based on an intercalibration procedure that is independent of external dosimeters and simulations, and is quick and easy to perform. Each element of a detector matrix is characterized by a different gain; the aim of the calibration procedure is to relate the gain of each element to that of a reference element. The method that we used to compute the relative gains is based on recursive acquisitions with the EPID placed in different positions, assuming constant beam fluence across subsequent deliveries. By applying an established procedure and analysis algorithm, the EPID calibration was repeated under several working conditions. The data show that both the photon energy and the presence of a medium between the source and the detector affect the calibration coefficients by less than 1%. The calibration coefficients were then applied to the acquired images, comparing the EPID dose images with film. Measurements were performed with open fields, placing the film at the level of the EPID. The standard deviation of the distribution of the point-to-point differences is 0.6%. This approach to EPID calibration has many advantages over the standard methods: it does not need an external dosimeter, it is not tied to the irradiation technique, and it is easy to implement in clinical practice. Moreover, it can be applied in the case of transit or non-transit dosimetry, solving the problem of EPID calibration independently of the dose reconstruction method. PMID:24257285
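The chaining idea behind the intercalibration can be shown in one dimension. In the Python sketch below (a toy model, not the authors' algorithm or geometry), two acquisitions are taken with the detector shifted by one element pitch under an assumed constant beam fluence; elements that sampled the same fluence in the two acquisitions yield gain ratios that are chained along the array to a reference element.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 64
    true_gain = 1.0 + 0.05 * rng.standard_normal(n)   # unknown per-element gains
    x = np.arange(n, dtype=float)

    def fluence(x):
        return np.exp(-0.5 * ((x - n/2) / 20.0)**2)   # fixed beam profile

    r0 = true_gain * fluence(x)         # acquisition at the reference position
    r1 = true_gain * fluence(x + 1.0)   # acquisition shifted by one pitch

    # element i (shifted) and element i+1 (reference) saw the same fluence,
    # so r0[i+1]/r1[i] = gain[i+1]/gain[i]; chain the ratios along the array
    rel = np.ones(n)
    for i in range(n - 1):
        rel[i + 1] = rel[i] * r0[i + 1] / r1[i]

    print(np.max(np.abs(rel - true_gain / true_gain[0])))   # ~0 in this noise-free toy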
NASA Astrophysics Data System (ADS)
Xie, Beibei; Kong, Lingfu; Kong, Deming; Kong, Weihang; Li, Lei; Liu, Xingbin; Chen, Jiliang
2017-11-01
In order to accurately measure the flow rate under low-yield horizontal well conditions, an auto-cumulative flowmeter (ACF) was proposed. Using the proposed flowmeter, the oil flow rate in horizontal oil-water two-phase segregated flow can be finely extracted. The computational fluid dynamics software Fluent was used to simulate the flow through the ACF in oil-water two-phase flow. In order to calibrate the simulated measurement of the ACF, a novel oil flow rate measurement method was further proposed. The ACF models were simulated to obtain and calibrate the oil flow rate under different total flow rates and oil cuts. Using the finite-element method, the structure of the seven conductance probes in the ACF was simulated, and the response values of the probes under oil-water segregated flow conditions were obtained. Experiments on oil-water segregated flow under different heights of oil accumulation in horizontal oil-water two-phase flow were carried out to calibrate the ACF. The validity of the oil flow rate measurement in horizontal oil-water two-phase flow was verified by the simulation and experimental results.
NASA Astrophysics Data System (ADS)
Alaoui, G.; Leger, M.; Gagne, J.; Tremblay, L.
2009-05-01
The goal of this work was to evaluate the capability of infrared reflectance spectroscopy for fast quantification of the elemental and molecular composition of sedimentary and particulate organic matter (OM). A partial least-squares (PLS) regression model was used for analysis, and values were compared to those obtained by traditional methods (i.e., elemental, humic and HPLC analyses). PLS tools are readily accessible from software such as GRAMS (Thermo-Fisher) used in spectroscopy. This spectroscopic-chemometric approach has several advantages, including its rapidity and its use of whole, unaltered samples. To predict properties, a set of infrared spectra from representative samples must first be fitted to form a PLS calibration model. In this study, a large set (180) of sediments and particles on GFF filters from the St. Lawrence estuarine system was used. These samples are very heterogeneous (e.g., various tributaries, terrigenous vs. marine, events such as landslides and floods) and thus represent a challenging test for PLS prediction. For sediments, the infrared spectra were obtained with a diffuse reflectance, or DRIFT, accessory. Sedimentary carbon, nitrogen, and humic substance contents, as well as humic substance proportions in OM and N:C ratios, were predicted by PLS. The relative root mean square error of prediction (%RMSEP) for these properties was between 5.7% (humin content) and 14.1% (total humic substance yield) using the cross-validation, or leave-one-out, approach. The %RMSEP for carbon content was lower with the PLS model (7.6%) than with an external calibration method (11.7%) (Tremblay and Gagné, 2002, Anal. Chem., 74, 2985). Moreover, the PLS approach does not require the extraction of particulate OM needed in external calibration. The results highlighted the importance of using a PLS calibration set representative of the unknown samples (e.g., same area). For filtered particles, the infrared spectra were obtained using a novel approach based on attenuated total reflectance, or ATR, allowing direct analysis of the filters. In addition to carbon and nitrogen contents, amino acid and muramic acid (a bacterial biomarker) yields were predicted using PLS. The calculated %RMSEP varied from 6.4% (total amino acid content) to 18.6% (muramic acid content) with cross-validation. PLS regression modeling does not require a priori knowledge of the spectral bands associated with the properties to be predicted. In turn, the spectral regions that give good PLS predictions provided valuable information on band assignment and geochemical processes. For instance, nitrogen and humin contents were largely determined by an absorption band caused by aluminosilicate OH groups. This supports the idea that OM-clay interactions, important in humin formation and OM preservation, are mediated by nitrogen-containing groups.
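A minimal Python counterpart of the chemometric workflow, using scikit-learn on simulated spectra rather than the St. Lawrence dataset, illustrates PLS calibration with leave-one-out cross-validation and the %RMSEP statistic reported above.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(3)
    X = rng.random((180, 400))   # stand-in for 180 DRIFT/ATR spectra
    y = X[:, 50] + 0.5 * X[:, 200] + 0.05 * rng.standard_normal(180)  # e.g. C content

    pls = PLSRegression(n_components=8)
    y_pred = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()

    rmsep = np.sqrt(np.mean((y - y_pred)**2))
    print(100.0 * rmsep / np.mean(y), "%RMSEP")   # relative RMSE of prediction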
NASA Astrophysics Data System (ADS)
Jackson-Blake, Leah; Helliwell, Rachel
2015-04-01
Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, spanning all hydrochemical conditions. However, regulatory agencies and research organisations generally only sample at a fortnightly or monthly frequency, even in well-studied catchments, often missing peak flow events. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by a process-based, semi-distributed catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher-frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, the other using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the Markov Chain Monte Carlo - DiffeRential Evolution Adaptive Metropolis (MCMC-DREAM) algorithm. Calibration to daily data resulted in improved simulation of peak TDP concentrations and improved model performance statistics. Parameter-related uncertainty in simulated TDP was large when fortnightly data were used for calibration, with a 95% credible interval of 26 μg/l. This uncertainty is comparable in size to the difference between Water Framework Directive (WFD) chemical status classes, and would therefore make it difficult to use this calibration to predict shifts in WFD status. The 95% credible interval narrowed markedly with the higher-frequency monitoring data, to 6 μg/l. The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, with a physically unrealistic TDP simulation being produced when too many parameters were allowed to vary during model calibration. Parameters should not therefore be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. This study highlights the potential pitfalls of using low-frequency time series of observed water quality to calibrate complex process-based models. For reliable model calibrations to be produced, monitoring programmes need to be designed to capture system variability, in particular nutrient dynamics during high-flow events. In addition, there is a need for simpler models, so that all model parameters can be included in auto-calibration and uncertainty analysis, and so that data needs during calibration are reduced.
NASA Astrophysics Data System (ADS)
Minunno, Francesco; Peltoniemi, Mikko; Launiainen, Samuli; Mäkelä, Annikki
2014-05-01
Biogeochemical models quantify the material and energy flux exchanges between the biosphere, atmosphere and soil, but considerable uncertainty still surrounds model structure and parametrization. The increasing availability of data from multiple sources provides useful information for model calibration and validation at different space and time scales. We calibrated the simplified ecosystem process model PRELES to data from multiple sites. Our objective was to compare a multi-site calibration with site-specific calibrations, in order to test whether PRELES is a model of general applicability and how well a single parameterization can predict ecosystem fluxes. Model calibration and evaluation were carried out by means of Bayesian methods; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. Evapotranspiration (ET) and gross primary production (GPP) measurements collected at 9 sites in Finland and Sweden were used in the study; half of the dataset was used for model calibration and half for the comparative analyses. 10 BCs were performed: the model was independently calibrated for each of the nine sites (site-specific calibrations), and a multi-site calibration was achieved using the data from all the sites in one BC. Then 9 BMCs were carried out, one for each site, using output from the multi-site and site-specific versions of PRELES. Similar estimates were obtained for the parameters to which model outputs are most sensitive. Not surprisingly, the joint posterior distribution achieved through the multi-site calibration was characterized by lower uncertainty, because more data were involved in the calibration process. No significant differences were encountered in the predictions of the multi-site and site-specific versions of PRELES, and after BMC we concluded that the model can be reliably used at regional scale to simulate the carbon and water fluxes of boreal forests. Despite being a simple model, PRELES provided good estimates of GPP and ET; only at one site did the multi-site version of PRELES underestimate water fluxes. Our study implies a convergence of GPP and water processes in the boreal zone, to the extent that their plausible prediction is possible with a simple model using a global parameterization.
Calibration of a COTS Integration Cost Model Using Local Project Data
NASA Technical Reports Server (NTRS)
Boland, Dillard; Coon, Richard; Byers, Kathryn; Levitt, David
1997-01-01
The software measures and estimation techniques appropriate to a Commercial Off-The-Shelf (COTS) integration project differ from those commonly used for custom software development. Labor and schedule estimation tools that model COTS integration are available. Like all estimation tools, they must be calibrated with the organization's local project data. This paper describes the calibration of a commercial model using data collected by the Flight Dynamics Division (FDD) of the NASA Goddard Space Flight Center (GSFC). The model calibrated is SLIM Release 4.0 from Quantitative Software Management (QSM). By adopting the SLIM reuse model and by treating configuration parameters as lines of code, we were able to establish a consistent calibration for COTS integration projects. The paper summarizes the metrics, the calibration process and results, and the validation of the calibration.
Peterson, Philip J. D.; Aujla, Amrita; Brundle, Alex G.; Thompson, Martin R.; Vande Hey, Josh; Leigh, Roland J.
2017-01-01
The potential of inexpensive Metal Oxide Semiconductor (MOS) gas sensors to be used for urban air quality monitoring has been the topic of increasing interest in the last decade. This paper discusses some of the lessons of three years of experience working with such sensors on a novel instrument platform (Small Open General purpose Sensor (SOGS)) in the measurement of atmospheric nitrogen dioxide and ozone concentrations. Analytic methods for increasing the long-term accuracy of measurements are discussed, which permit nitrogen dioxide measurements with 95% confidence intervals of 20.0 μg m−3 and ozone precision of 26.8 μg m−3, for measurements made one month away from calibration, averaged over 18 months of such calibrations. Beyond four months from calibration, sensor drift becomes significant and accuracy is substantially reduced. Successful calibration schemes are discussed, with the use of controlled artificial atmospheres complementing deployment on a reference weather station exposed to the elements. Manufacturing variation in the attributes of individual sensors is examined, an experiment made possible by the instrument being equipped with pairs of sensors of the same kind. Good repeatability (better than 0.7 correlation) between individual sensor elements is shown. The results from sensors that used fans to push air past an internal sensor element are compared with those from sensors mounted on the outside of the enclosure; the latter design increases the effective integration time to more than a day. Finally, possible paths forward are suggested for improving the reliability of this promising sensor technology for measuring pollution in an urban environment. PMID:28753910
Gradient-based model calibration with proxy-model assistance
NASA Astrophysics Data System (ADS)
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with long run times and problematic numerical behaviour is described. The methodology is general and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purpose of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on the calculation of local gradients is mitigated, thereby allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivative calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibrating a complex model and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
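The division of labour between proxy and original model can be sketched as a Gauss-Newton loop. In the toy Python example below (not PEST itself), a cheap surrogate populates the Jacobian by finite differences, while the expensive simulator is run only to evaluate residuals and to test each parameter upgrade; here the surrogate is simply the model itself, standing in for a fitted analytical approximation.

    import numpy as np

    def expensive_model(p):   # stand-in for the long-running simulator
        return np.array([p[0] * np.exp(-p[1] * t) for t in (0.0, 1.0, 2.0, 4.0)])

    def proxy_model(p):       # analytic surrogate linking outputs to parameters
        return expensive_model(p)   # identical here; a fitted proxy in practice

    obs = np.array([10.0, 6.1, 3.6, 1.4])
    p = np.array([8.0, 0.3])

    for _ in range(10):
        r = obs - expensive_model(p)        # residuals from the original model
        J = np.empty((len(obs), len(p)))    # Jacobian from cheap proxy runs
        for j in range(len(p)):
            dp = np.zeros_like(p)
            dp[j] = 1e-6
            J[:, j] = (proxy_model(p + dp) - proxy_model(p - dp)) / 2e-6
        step = np.linalg.lstsq(J, r, rcond=None)[0]
        # the upgrade itself is accepted only if the original model improves
        if np.sum((obs - expensive_model(p + step))**2) < np.sum(r**2):
            p = p + step
    print(p)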
NASA Astrophysics Data System (ADS)
Zecchetto, Stefano; De Biasio, Francesco; Umgiesser, Georg; Bajo, Marco; Vignudelli, Stefano; Papa, Alvise; Donlon, Craig; Bellafiore, Debora
2013-04-01
In the framework of the Data User Element (DUE) program, the European Space Agency is funding a project to use altimeter Total Water Level Envelope (TWLE) and scatterometer wind data to improve storm surge forecasting in the Adriatic Sea and in the city of Venice. The project will: a) select a number of storm surge events that occurred in the Venice lagoon in the period from 1999 to the present day; b) provide the available satellite Earth Observation (EO) data related to the storm surge events, mainly satellite winds and altimeter data, as well as all the available in-situ data and model forecasts; c) provide a demonstration Near Real Time service of EO data products and services in support of operational and experimental forecasting and warning services; d) run a number of re-analysis cases, both for historical and contemporary storm surge events, to demonstrate the usefulness of EO data. The re-analysis experiments, based on hindcasts performed with the finite element 2-D oceanographic model SHYFEM (https://sites.google.com/site/shyfem/), will 1. use different forcing wind fields (calibrated and not calibrated with satellite wind data) and 2. use storm surge model initial conditions determined from altimeter TWLE data. Experience gained working with scatterometer and Numerical Weather Prediction (NWP) winds in the Adriatic Sea shows that the bias between NWP and scatterometer winds is negative and is not uniform in space or time; in particular, it is well established that the bias is larger close to coasts than offshore. Therefore, NWP wind speed calibration will be carried out at each grid point in the Adriatic Sea domain over the period of a storm surge event, taking existing published methods into account. Point 2 considers two different methodologies to be used in the re-analysis tests. One is based on the use of the TWLE values from altimeter data in the Storm Surge Model (SSM), applying data assimilation methodologies and trying to optimize the initial conditions of the simulation. The second is an indirect exploitation of the altimeter TWLE data in an ensemble-like framework, obtained by slight variations of the external forcing: the wind data from NWP models will be weakly altered (shifted in phase), the drag coefficient will be modified, and the initial condition of the model will be slightly shifted in time to account for the uncertainty of these factors. This contribution will illustrate the geophysical context of the work and outline the results.
Impact analyses for negative flexural responses (hogging) in railway prestressed concrete sleepers
NASA Astrophysics Data System (ADS)
Kaewunruen, S.; Ishida, T.; Remennikov, AM
2016-09-01
By nature, ballast interacts with railway concrete sleepers to provide bearing support to the track system. Most train-track dynamic models do not consider the degradation of ballast over time. In fact, ballast degradation causes differential settlement and impact forces acting on partially supported and unsupported track. Furthermore, localised ballast breakage underneath the railseat increases the likelihood of centre-bound cracks in concrete sleepers due to the unbalanced support under the sleepers. This paper presents a dynamic finite element model of a standard-gauge concrete sleeper in a track system, taking into account the tensionless nature of ballast support. The finite element model was calibrated using static and dynamic responses obtained in previous work. In this paper, the effects of centre-bound ballast support on the impact behaviour of sleepers are highlighted. In addition, this study is the first to demonstrate the effects of sleeper length on dynamic design deficiencies in concrete sleepers. The outcome of this study will inform rail maintenance criteria for track resurfacing aimed at restoring the ballast profile and appropriate sleeper/ballast interaction.
Ultrasonic Method for Deployment Mechanism Bolt Element Preload Verification
NASA Technical Reports Server (NTRS)
Johnson, Eric C.; Kim, Yong M.; Morris, Fred A.; Mitchell, Joel; Pan, Robert B.
2014-01-01
Deployment mechanisms play a pivotal role in mission success. These mechanisms often incorporate bolt elements for which a preload within a specified range is essential for proper operation. A common practice is to torque these bolt elements to a specified value during installation. The resulting preload, however, can vary significantly with applied torque for a number of reasons. The goal of this effort was to investigate ultrasonic methods as an alternative for bolt preload verification in such deployment mechanisms. A family of non-explosive release mechanisms widely used by satellite manufacturers was chosen for the work. A willing contractor permitted measurements on a sampling of bolt elements for these release mechanisms, installed by a technician following standard practice. A variation of approximately 50% (+/- 25%) in the resultant preloads was observed. An alternative ultrasonic method to set the preloads was then developed and calibration data were accumulated. The method was demonstrated on bolt elements installed in a fixture instrumented with a calibrated load cell and designed to mimic production practice. The ultrasonic method yielded results within +/- 3% of the load cell reading. The contractor has since adopted the alternative method for its future production.
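The underlying measurement principle is that preload stretches the bolt and slows the ultrasonic pulse, so the pulse-echo time of flight grows nearly linearly with preload. A trivial hedged sketch in Python follows, with an entirely hypothetical calibration constant; the real constants would come from a load-cell calibration campaign of the kind described above.

    # hypothetical acoustoelastic calibration: preload vs. time-of-flight shift
    def preload_from_tof(dt_ns, slope_kN_per_ns=1.8, intercept_kN=0.0):
        """Preload estimated from the pulse-echo time-of-flight increase (ns)."""
        return slope_kN_per_ns * dt_ns + intercept_kN

    print(preload_from_tof(12.5))   # -> 22.5 kN for a 12.5 ns shift (illustrative)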
Fu, Hongbo; Dong, Fengzhong; Wang, Huadong; Jia, Junwei; Ni, Zhibo
2017-08-01
In this work, calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is used to analyze a certified stainless steel sample. Due to self-absorption of the spectral lines from the major element Fe and the sparse lines of trace elements, it is usually not easy to construct Boltzmann plots for all species. A standard reference line method is proposed here to overcome this difficulty under the assumption of local thermodynamic equilibrium, so that a single temperature value can be adopted for all elements present in the plasma. Based on the concentration and rich spectral lines of Fe, the Stark broadening of the Fe(I) 381.584 nm line and the Saha-Boltzmann plot of this element are used to calculate the electron density and the plasma temperature, respectively. In order to determine the plasma temperature accurately, which is otherwise seriously affected by self-absorption, a pre-selection procedure eliminating spectral lines with strong self-absorption is employed. One spectral line of each element is then selected to calculate its concentration. The results from standard reference lines with and without self-absorption of Fe are compared. This method allows trace element content to be measured while effectively avoiding the adverse effects of self-absorption.
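The Boltzmann-plot step at the heart of this procedure is easily illustrated. In the Python sketch below, line intensities, gA values and upper-level energies are hypothetical Fe(I)-like numbers (not the paper's data); the slope of ln(I*lambda/gA) against the upper-level energy yields the plasma temperature.

    import numpy as np

    k_B = 8.617e-5   # Boltzmann constant, eV/K

    # hypothetical Fe(I)-like lines judged free of strong self-absorption:
    lam = np.array([371.99, 385.99, 404.58, 438.35])    # wavelength, nm
    I   = np.array([1.9e4, 7.6e3, 1.3e4, 9.8e3])        # integrated intensity
    gA  = np.array([1.62e8, 9.7e6, 1.9e8, 1.1e8])       # g_k * A_ki, s^-1
    E_k = np.array([3.33, 3.21, 4.55, 4.31])            # upper-level energy, eV

    y = np.log(I * lam / gA)            # Boltzmann-plot ordinate
    slope, _ = np.polyfit(E_k, y, 1)    # slope = -1 / (k_B * T)
    print(-1.0 / (k_B * slope), "K")    # ~1e4 K, a typical LIBS plasma temperature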
NASA Astrophysics Data System (ADS)
Ammerlaan, B. A. J.; Holzinger, R.; Jedynska, A. D.; Henzing, J. S.
2017-09-01
Equivalent Black Carbon (EBC) and Elemental Carbon (EC) are different mass metrics to quantify the amount of combustion aerosol, and each metric has its own measurement technique. In state-of-the-art carbon analysers, optical measurements are used to correct for organic carbon that does not evolve but instead chars through pyrolysis. These optical measurements are sometimes also exploited in the manner of absorption photometers. Here, we use the transmission measurements of our carbon analyser for simultaneous determination of the elemental carbon concentration and the absorption coefficient. We use MAAP data from the CESAR observatory, the Netherlands, to correct for aerosol-filter interactions by linking the attenuation coefficient from the carbon analyser to the absorption coefficient measured by the MAAP. Application of the calibration to an independent dataset of MAAP and OC/EC observations for the same location shows that the calibration is applicable to other observation periods. Because the light absorption properties of the aerosol and the elemental carbon are measured simultaneously, variation in the mass absorption efficiency (MAE) can be studied. We further show that the absorption coefficients and MAE in this set-up are determined within precisions of 10% and 12%, respectively. These precisions could be improved to 4% and 8% when the light transmission signal in the carbon analyser is very stable.
Modelling water flow under glaciers and ice sheets.
Flowers, Gwenn E
2015-04-08
Recent observations of dynamic water systems beneath the Greenland and Antarctic ice sheets have sparked renewed interest in modelling subglacial drainage. The foundations of today's models were laid decades ago, inspired by measurements from mountain glaciers, discovery of the modern ice streams and the study of landscapes evacuated by former ice sheets. Models have progressed from strict adherence to the principles of groundwater flow, to the incorporation of flow 'elements' specific to the subglacial environment, to sophisticated two-dimensional representations of interacting distributed and channelized drainage. Although presently in a state of rapid development, subglacial drainage models, when coupled to models of ice flow, are now able to reproduce many of the canonical phenomena that characterize this coupled system. Model calibration remains generally out of reach, whereas widespread application of these models to large problems and real geometries awaits the next level of development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The worldwide semisubmersible drilling rig fleet is approaching retirement, but replacement is not an attractive option even though dayrates are reaching record highs. In 1991, Schlumberger Sedco Forex managers decided that an alternative might exist if regulators and insurers could be convinced to extend rig life expectancy through restoration. Sedco Forex chose their No. 704 semisubmersible, an 18-year North Sea veteran, to test their process. The first step was to determine what required restoration, meaning fatigue-life analysis of each weld on the huge vessel. If done by inspection, the task would have been unacceptably time-consuming and of questionable accuracy. Instead, a suite of computer programs modeled the stress seen by each weld, statistically estimated the sea states experienced by the rig throughout its North Sea service, and calibrated a beam-element model on which to run the computer simulations. The elastic stiffness of the structure and detailed stress analysis of each weld were performed with ANSYS, a commercially available finite-element analysis program. The use of computer codes to evaluate service-life extension is described.
Lead, platinum and other heavy elements in the primary cosmic radiation: HEAO-3 results
NASA Technical Reports Server (NTRS)
Waddington, C. J.; Binns, W. R.; Brewster, N. R.; Fixsen, D. J.; Garrard, T. L.; Israel, M. H.; Klarmann, J.; Newport, B. J.; Stone, E. C.
1986-01-01
An observation of the abundances of cosmic-ray lead and platinum-group nuclei using data from the HEAO-3 Heavy Nuclei Experiment (HNE), which consisted of ion chambers mounted on both sides of a plastic Cherenkov counter (Binns et al., 1981), is reported. Further analysis with more stringent selections, inclusion of additional data, and a calibration at the LBL Bevalac have allowed determination of the abundance ratio of lead to the platinum group of elements for particles with a cutoff rigidity R(c) > 5 GV. The observed Pb/Pt ratio is distinctly lower than that predicted by any of the standard models for cosmic-ray sources. It is possible that the difference is not an indication that the cosmic-ray source composition is greatly different from that of the solar system, but rather that there is less Pb in the solar system and in the r-process than is assumed in the standard models.
Simple Parametric Model for Intensity Calibration of Cassini Composite Infrared Spectrometer Data
NASA Technical Reports Server (NTRS)
Brasunas, J.; Mamoutkine, A.; Gorius, N.
2016-01-01
Accurate intensity calibration of a linear Fourier-transform spectrometer typically requires the unknown science target and the two calibration targets to be acquired under identical conditions. We present a simple model suitable for vector calibration that enables accurate calibration via adjustments of measured spectral amplitudes and phases when these three targets are recorded at different detector or optics temperatures. Our model makes calibration more accurate both by minimizing biases due to changing instrument temperatures, which are always present at some level, and by decreasing estimate variance through the incorporation of larger averages of science and calibration interferogram scans.
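The baseline that this parametric model generalizes is the standard two-point complex ("vector") calibration, which is exact when science and calibration targets share the same instrument state. The Python sketch below demonstrates that baseline on synthetic spectra with an invented complex instrument response; the paper's contribution is precisely the correction needed when the three acquisitions do not share detector and optics temperatures.

    import numpy as np

    def planck(nu_cm, T):
        """Spectral radiance at wavenumber nu (cm^-1), consistent arbitrary units."""
        c2 = 1.4388   # second radiation constant, cm*K
        return nu_cm**3 / np.expm1(c2 * nu_cm / T)

    nu = np.linspace(200.0, 1400.0, 5)      # illustrative wavenumber grid
    resp = (0.8 + 0.1j) * np.ones_like(nu)  # invented complex instrument response
    self_emission = 0.3 * planck(nu, 170.0)

    def measured(B):                        # complex raw spectrum model
        return resp * (B - self_emission)

    B_hot, B_cold, B_sci = planck(nu, 320.0), planck(nu, 77.0), planck(nu, 150.0)
    m_hot, m_cold, m_sci = measured(B_hot), measured(B_cold), measured(B_sci)

    cal = B_cold + (m_sci - m_cold) / (m_hot - m_cold) * (B_hot - B_cold)
    print(np.max(np.abs(cal.real - B_sci)))   # ~0 when conditions are identical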
Ali, Azhar A; Shalhoub, Sami S; Cyr, Adam J; Fitzpatrick, Clare K; Maletsky, Lorin P; Rullkoetter, Paul J; Shelburne, Kevin B
2016-01-25
Healthy patellofemoral (PF) joint mechanics are critical to optimal function of the knee joint. Patellar maltracking may lead to large joint reaction loads and high stresses on the articular cartilage, increasing the risk of cartilage wear and the onset of osteoarthritis. While the mechanical sources of PF joint dysfunction are not well understood, links have been established between PF tracking and abnormal kinematics of the tibiofemoral (TF) joint, specifically following cruciate ligament injury and repair. The objective of this study was to create a validated finite element (FE) representation of the PF joint in order to predict PF kinematics and quadriceps force across healthy and pathological specimens. Measurements from a series of dynamic in-vitro cadaveric experiments were used to develop finite element models of the knee for three specimens. Specimens were loaded under intact, ACL-resected, and combined ACL/PCL-resected conditions. Finite element models of each specimen were constructed and calibrated to the outputs of the intact knee condition, and subsequently used to predict PF kinematics, contact mechanics, quadriceps force, patellar tendon moment arm, and patellar tendon angle for the cruciate-resected conditions. Model results for the intact and cruciate-resected trials successfully matched experimental kinematics (avg. RMSE 4.0°, 3.1 mm) and peak quadriceps forces (avg. difference 5.6%). Cruciate resections demonstrated either increased patellar tendon loads or increased joint reaction forces. The current study advances the standard for evaluation of PF mechanics through direct validation of cruciate-resected conditions, including specimen-specific representations of PF anatomy.
NASA Astrophysics Data System (ADS)
Takahashi, Tomoko; Thornton, Blair
2017-12-01
This paper reviews methods to compensate for matrix effects and self-absorption during quantitative analysis of the composition of solids measured using Laser Induced Breakdown Spectroscopy (LIBS), and their applications to in-situ analysis. Methods to reduce matrix and self-absorption effects on calibration curves are first introduced, and the conditions under which calibration curves are applicable to quantification of the composition of solid samples, together with their limitations, are discussed. While calibration-free LIBS (CF-LIBS), which corrects matrix effects theoretically based on the Boltzmann distribution law and the Saha equation, has been applied in a number of studies, several requirements must be satisfied for the calculated chemical compositions to be valid. In addition, peaks of all elements contained in the target need to be detected, which is a bottleneck for in-situ analysis of unknown materials. Multivariate analysis techniques are gaining momentum in LIBS analysis. Among the available techniques, principal component regression (PCR) and partial least squares (PLS) regression, which can extract composition-related information from the full spectrum, are well-established methods and have been applied to various fields, including in-situ applications in air and planetary exploration. Artificial neural networks (ANNs), which can model non-linear effects, have also been investigated as a quantitative method, and their applications are introduced. The ability to make quantitative estimates from LIBS signals is seen as a key element for the technique to gain wider acceptance as an analytical method, especially in in-situ applications. To accelerate this process, it is recommended that accuracy be reported using common figures of merit expressing the overall normalised accuracy, such as the normalised root mean square error (NRMSE), when comparing the accuracy obtained from different setups and analytical methods.
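Since the review recommends the NRMSE as a common figure of merit, a one-function Python sketch fixes the convention (here normalised by the reference range; other normalisations, e.g. by the mean, are also in use). The concentrations are invented.

    import numpy as np

    def nrmse(true, pred):
        """Normalised root mean square error, range-normalised."""
        return np.sqrt(np.mean((pred - true)**2)) / (np.max(true) - np.min(true))

    c_ref  = np.array([1.2, 4.8, 9.5, 20.1, 33.0])    # reference concentrations, wt%
    c_libs = np.array([1.5, 4.1, 10.3, 19.2, 35.1])   # hypothetical LIBS estimates
    print(nrmse(c_ref, c_libs))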
An Accurate Temperature Correction Model for Thermocouple Hygrometers
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
Regression Model Term Selection for the Analysis of Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred; Volden, Thomas R.
2010-01-01
The paper discusses the selection of regression model terms for the analysis of wind tunnel strain-gage balance calibration data. Different function class combinations are presented that may be used to analyze calibration data using either a non-iterative or an iterative method. The role of the intercept term in a regression model of calibration data is reviewed. In addition, useful algorithms and metrics originating from linear algebra and statistics are recommended that will help an analyst (i) to identify and avoid both linear and near-linear dependencies between regression model terms and (ii) to make sure that the selected regression model of the calibration data uses only statistically significant terms. Three different tests are suggested that may be used to objectively assess the predictive capability of the final regression model of the calibration data. These tests use both the original data points and regression model independent confirmation points. Finally, data from a simplified manual calibration of the Ames MK40 balance is used to illustrate the application of some of the metrics and tests to a realistic calibration data set.
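One of the recommended linear-algebra checks, screening candidate regression terms for near-linear dependencies, can be illustrated with variance inflation factors. The Python sketch below uses invented gage outputs, not balance data; a term that nearly duplicates another produces a very large VIF and should be dropped from the model.

    import numpy as np

    def vifs(X):
        """Variance inflation factors; large values flag near-linear dependencies."""
        n = X.shape[0]
        out = []
        for j in range(X.shape[1]):
            others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
            beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
            resid = X[:, j] - others @ beta
            r2 = 1.0 - resid.var() / X[:, j].var()
            out.append(1.0 / (1.0 - r2))
        return np.array(out)

    rng = np.random.default_rng(0)
    g1, g2 = rng.random(100), rng.random(100)   # invented gage outputs
    X = np.column_stack([g1, g2, g1 * g2, g1 + 0.01 * rng.random(100)])
    print(vifs(X))   # the last column nearly duplicates g1 -> huge VIFs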
Ablative and transport fractionation of trace elements during laser sampling of glass and copper
NASA Astrophysics Data System (ADS)
Outridge, P. M.; Doherty, W.; Gregoire, D. C.
1997-12-01
The fractionation of trace elements due to ablation and transport processes was quantified during Q-switched infrared laser sampling of glass and copper reference materials. Filter-trapping of the ablated product at different points in the sample introduction system showed that ablation and transport sometimes caused opposing fractionation effects, leading to a confounded measure of overall (ablative + transport) fractionation. An unexpected result was the greater ablative fractionation of some elements (Au, Ag, Bi, Te in glass and Au, Be, Bi, Ni, Te in copper) at a higher laser fluence of 1.35 × 10^4 W cm^-2 than at 0.62 × 10^4 W cm^-2, which contradicted predictions from modelling studies of ablation processes. With glass, there was an inverse logarithmic relationship between the extent of ablative and overall fractionation and the element oxide melting point (OMP), with elements with OMPs below 1000 °C exhibiting overall concentration increases of 20-1340%. Fractionation during transport was quantitatively important for most certified elements in copper, and for the most volatile elements (Au, Ag, Bi, Te) in glass. Elements common to both matrices showed 50-100% higher ablative fractionation in copper, possibly because greater heat conductance away from the ablation site caused increased element volatilisation or zone refinement. These differences between matrices indicate that non-matrix-matched standardisation is likely to provide inaccurate calibration of laser ablation inductively coupled plasma-mass spectrometry analyses for at least some elements.
NASA Astrophysics Data System (ADS)
Wanders, N.; Bierkens, M. F. P.; de Jong, S. M.; de Roo, A.; Karssenberg, D.
2014-08-01
Large-scale hydrological models are nowadays mostly calibrated using observed discharge. As a result, a large part of the hydrological system, in particular the unsaturated zone, remains uncalibrated. Soil moisture observations from satellites have the potential to fill this gap. Here we evaluate the added value of remotely sensed soil moisture in the calibration of large-scale hydrological models by addressing two research questions: (1) Which parameters of hydrological models can be identified by calibration with remotely sensed soil moisture? (2) Does calibration with remotely sensed soil moisture lead to an improved calibration of hydrological models compared to calibration based only on discharge observations, such that it improves simulations of soil moisture content and discharge? A dual state and parameter Ensemble Kalman Filter is used to calibrate the hydrological model LISFLOOD for the Upper Danube. Calibration is done using discharge and remotely sensed soil moisture acquired by AMSR-E, SMOS, and ASCAT. Calibration with discharge data improves the estimation of groundwater and routing parameters. Calibration with only remotely sensed soil moisture results in an accurate identification of parameters related to land-surface processes. For Upper Danube upstream areas of up to 40,000 km2, calibration on both discharge and soil moisture reduces the RMSE of discharge simulations by 10-30% compared to calibration on discharge alone. The conclusion is that remotely sensed soil moisture holds potential for the calibration of hydrological models, leading to a better simulation of soil moisture content throughout the catchment and a better simulation of discharge in upstream areas.
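A stripped-down analogue of the dual state-parameter update is sketched below in Python: a toy soil bucket replaces LISFLOOD, a synthetic "satellite" soil moisture observation arrives every third day, and the ensemble Kalman update is applied to the state augmented with an uncertain recession parameter. Everything here is invented for illustration.

    import numpy as np

    rng = np.random.default_rng(42)
    n_ens, n_steps = 50, 60
    rain = rng.gamma(1.5, 2.0, n_steps)

    def step(s, k, p):          # toy bucket: storage s, recession k, rain p
        return np.clip(s + p - k * s, 0.0, 100.0)

    k_true, s_true = 0.35, 30.0
    k_ens = rng.uniform(0.05, 0.8, n_ens)    # uncertain parameter ensemble
    s_ens = rng.uniform(10.0, 60.0, n_ens)   # uncertain state ensemble
    obs_err = 2.0

    for t in range(n_steps):
        s_true = step(s_true, k_true, rain[t])
        s_ens = step(s_ens, k_ens, rain[t])
        if t % 3 == 0:                       # synthetic satellite overpass
            y = s_true + rng.normal(0.0, obs_err)
            A = np.vstack([s_ens, k_ens])    # augmented state [storage; parameter]
            C = np.cov(A)
            K = C[:, 0] / (C[0, 0] + obs_err**2)          # Kalman gain for s-obs
            pert_obs = y + rng.normal(0.0, obs_err, n_ens)
            A = A + np.outer(K, pert_obs - s_ens)
            s_ens, k_ens = A[0], np.clip(A[1], 0.01, 1.0)

    print(k_ens.mean(), "vs true", k_true)   # the parameter drifts toward truth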
Enns, Eva Andrea; Kao, Szu-Yu; Kozhimannil, Katy Backes; Kahn, Judith; Farris, Jill; Kulasingam, Shalini L
2017-10-01
Mathematical models are important tools for assessing prevention and management strategies for sexually transmitted infections. These models are usually developed for a single infection and require calibration to observed epidemiological trends in the infection of interest. Incorporating other outcomes of sexual behavior into the model, such as pregnancy, may better inform the calibration process. We developed a mathematical model of chlamydia transmission and pregnancy in Minnesota adolescents aged 15 to 19 years. We calibrated the model to statewide rates of reported chlamydia cases alone (chlamydia calibration) and in combination with pregnancy rates (dual calibration). We evaluated the impact of calibrating to different outcomes of sexual behavior on estimated input parameter values, predicted epidemiological outcomes, and predicted impact of chlamydia prevention interventions. The two calibration scenarios produced different estimates of the probability of condom use, the probability of chlamydia transmission per sex act, the proportion of asymptomatic infections, and the screening rate among men. These differences resulted in the dual calibration scenario predicting lower prevalence and incidence of chlamydia compared with calibrating to chlamydia cases alone. When evaluating the impact of a 10% increase in condom use, the dual calibration scenario predicted fewer infections averted over 5 years compared with chlamydia calibration alone [111 (6.8%) vs 158 (8.5%)]. While pregnancy and chlamydia in adolescents are often considered separately, both are outcomes of unprotected sexual activity. Incorporating both as calibration targets in a model of chlamydia transmission resulted in different parameter estimates, potentially impacting the intervention effectiveness predicted by the model.
Simulation of water flow in fractured porous medium by using discretized virtual internal bond
NASA Astrophysics Data System (ADS)
Peng, Shujun; Zhang, Zhennan; Li, Chunfang; He, Guofu; Miao, Guoqing
2017-12-01
The discretized virtual internal bond (DVIB) is adopted to simulate water flow in fractured porous medium. The intact porous medium is permeable because it contains numerous micro cracks and pores. These micro discontinuities constitute a fluid channel network. The representative volume of this fluid channel network is modeled as a lattice bond cell with a finite number of bonds in a statistical sense. Each bond serves as a fluid channel. In a fractured porous medium, many bond cells are cut by macro fractures. The conductivity of the fracture facet in a bond cell is taken over by the bonds parallel to the flow direction. The equivalent permeability and volumetric storage coefficient of a micro bond are calibrated based on the ideal bond cell conception, which makes it unnecessary to consider the detailed geometry of a specific element. Such a parameter calibration method is flexible and applicable to any type of element. The accuracy checks suggest this method has satisfactory accuracy in both steady and transient flow simulation. To simulate the massive fractures in rock mass, the bond cells intersected by a fracture are assigned aperture values, which are assumed to be random numbers following a certain distribution law. By this method, any number of fractures can be implicitly incorporated into the background mesh, avoiding the setup of fracture elements and mesh modification. The fracture aperture heterogeneity is well represented by this means. The simulation examples suggest that the present method is a feasible, simple and efficient approach to the numerical simulation of water flow in fractured porous medium.
NASA Technical Reports Server (NTRS)
Comber, Brian; Glazer, Stuart
2012-01-01
The James Webb Space Telescope (JWST) is an upcoming flagship observatory mission scheduled to be launched in 2018. Three of the four science instruments are passively cooled to their operational temperature range of 36K to 40K, and the fourth instrument is actively cooled to its operational temperature of approximately 6K. The requirement for multiple thermal zones results in the instruments being thermally connected to five external radiators via individual high purity aluminum heat straps. Thermal-vacuum and thermal balance testing of the flight instruments at the Integrated Science Instrument Module (ISIM) element level will take place within a newly constructed shroud cooled by gaseous helium inside Goddard Space Flight Center's (GSFC) Space Environment Simulator (SES). The flight external radiators are not available during ISIM-level thermal vacuum/thermal balance testing, so they will be replaced in test with stable and adjustable thermal boundaries with identical physical interfaces to the flight radiators. Those boundaries are provided by specially designed test hardware which also measures the heat flow within each of the five heat straps to an accuracy of less than 2 mW, which is less than 5% of the minimum predicted heat flow values. Measurement of the heat loads to this accuracy is essential to ISIM thermal model correlation, since thermal models are more accurately correlated when temperature data are supplemented by accurate knowledge of heat flows. It also provides direct verification by test of several high-level thermal requirements. Devices that measure heat flow in this manner have historically been referred to as "Q-meters". Perhaps the most important feature of the design of the JWST Q-meters is that they do not depend on the absolute accuracy of their temperature sensors, but rather on knowledge of the precise heater power required to maintain a constant temperature difference between sensors on two stages, for which a table is empirically developed during a calibration campaign in a small chamber at GSFC. This paper provides a brief review of the Q-meter design, and discusses the Q-meter calibration procedure including calibration chamber modifications and accommodations, handling of differing conditions between calibration and usage, the calibration process itself, and the results of the tests used to determine whether the calibration is successful.
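The Q-meter principle described above, inferring strap heat flow from the heater power needed to hold a fixed inter-stage temperature difference, can be sketched as inversion of an empirically calibrated table. The numbers below are hypothetical placeholders, not the flight calibration table.

```python
import numpy as np

# Hypothetical calibration table: heater power (mW) measured at known applied
# heat loads (mW), for one fixed inter-stage temperature difference.
applied_load_mW = np.array([0.0, 10.0, 20.0, 40.0, 80.0])
heater_power_mW = np.array([100.0, 90.2, 80.1, 60.3, 20.5])

def heat_flow_from_heater_power(p_mW):
    """Invert the calibrated heater-power curve to estimate the heat flowing
    through the strap; linear interpolation between table points."""
    # np.interp needs increasing x, so reverse the (decreasing) power axis.
    return np.interp(p_mW, heater_power_mW[::-1], applied_load_mW[::-1])

print(heat_flow_from_heater_power(70.0))  # load implied by 70 mW heater power
```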
Cryogenic radiometers and intensity-stabilized lasers for Eos radiometric calibrations
NASA Technical Reports Server (NTRS)
Foukal, P.; Hoyt, C.; Jauniskis, L.
1991-01-01
Liquid helium-cooled electrical substitution radiometers (ESRs) provide irradiance standards with demonstrated absolute accuracy at the 0.01 percent level, spectrally flat response between the UV and IR, and sensitivity down to 0.1 nW/sq cm. We describe an automated system developed for NASA - Goddard Space Flight Center, consisting of a cryogenic ESR illuminated by servocontrolled laser beams. This system is designed to provide calibration of single-element and array detectors over the spectral range from 257 nm in the UV to 10.6 microns in the IR. We also describe a cryogenic ESR optimized for black body calibrations that has been installed at NIST, and another that is under construction for calibrations of the CERES scanners planned for Eos.
Temperature evolution during compaction of pharmaceutical powders.
Zavaliangos, Antonios; Galen, Steve; Cunningham, John; Winstead, Denita
2008-08-01
A numerical approach to the prediction of temperature evolution in tablet compaction is presented here. It is based on a coupled thermomechanical finite element analysis and a calibrated Drucker-Prager Cap model. This approach is capable of predicting transient temperatures during compaction, which cannot be assessed by experimental techniques due to inherent test limitations. Model predictions are validated with infrared (IR) temperature measurements of the top tablet surface after ejection and match well with experiments. The dependence of temperature fields on speed and degree of compaction is naturally captured. The estimated transient temperatures are maximum at the end of compaction at the center of the tablet and close to the die wall next to the powder/die interface.
Updated radiometric calibration for the Landsat-5 thematic mapper reflective bands
Helder, D.L.; Markham, B.L.; Thome, K.J.; Barsi, J.A.; Chander, G.; Malla, R.
2008-01-01
The Landsat-5 Thematic Mapper (TM) has been the workhorse of the Landsat system. Launched in 1984, it continues collecting data through the time frame of this paper. Thus, it provides an invaluable link to the history of the land features of the Earth's surface, and it becomes imperative to provide an accurate radiometric calibration of the reflective bands to the user community. Previous calibration has been based on information obtained from prelaunch, the onboard calibrator, vicarious calibration attempts, and cross-calibration with Landsat-7. Currently, additional data sources are available to improve this calibration. Specifically, improvements in vicarious calibration methods and development of the use of pseudoinvariant sites for trending provide two additional independent calibration sources. The use of these additional estimates has resulted in a consistent calibration approach that ties together all of the available calibration data sources. Results from this analysis indicate that either a simple exponential or a constant model may be used for all bands throughout the lifetime of Landsat-5 TM. Where previously time constants for the exponential models were approximately one year, the updated model has significantly longer time constants in bands 1-3. In contrast, bands 4, 5, and 7 are shown to be best modeled by a constant. The models proposed in this paper indicate calibration knowledge of 5% or better early in life, decreasing to nearly 2% later in life. These models have been implemented at the U.S. Geological Survey Earth Resources Observation and Science (EROS) and are the default calibration used for all Landsat TM data now distributed through EROS.
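A sketch of the exponential-plus-constant response model described for bands 1-3 (the constant model used for bands 4, 5, and 7 is the a1 -> 0 limit). The trending data below are hypothetical placeholders, not Landsat-5 values.

```python
import numpy as np
from scipy.optimize import curve_fit

def gain_model(t_years, a0, a1, tau):
    # Detector response vs. time since launch: constant plus decaying exponential.
    return a0 + a1 * np.exp(-t_years / tau)

# Hypothetical trending data: years since launch vs. relative response.
t = np.array([0.5, 2.0, 5.0, 10.0, 15.0, 20.0])
resp = np.array([1.00, 0.97, 0.94, 0.92, 0.915, 0.913])

(a0, a1, tau), _ = curve_fit(gain_model, t, resp, p0=(0.9, 0.1, 3.0))
print(a0, a1, tau)  # long time constants correspond to slow degradation
```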
Evaluation of calibration efficacy under different levels of uncertainty
Heo, Yeonsook; Graziano, Diane J.; Guzowski, Leah; ...
2014-06-10
This study examines how calibration performs under different levels of uncertainty in model input data. It specifically assesses the efficacy of Bayesian calibration to enhance the reliability of EnergyPlus model predictions. A Bayesian approach can be used to update uncertain values of parameters, given measured energy-use data, and to quantify the associated uncertainty. We assess the efficacy of Bayesian calibration under a controlled virtual-reality setup, which enables rigorous validation of the accuracy of calibration results in terms of both calibrated parameter values and model predictions. Case studies demonstrate the performance of Bayesian calibration of base models developed from audit data with differing levels of detail in building design, usage, and operation.
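A minimal sketch of Bayesian calibration in this spirit, with a stand-in for the EnergyPlus model and a random-walk Metropolis sampler; the simulator, prior bounds, and noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_energy(theta):
    # Stand-in for an EnergyPlus run: monthly energy use as a simple function
    # of one uncertain parameter (e.g., an infiltration rate); illustrative only.
    months = np.arange(12)
    return 100.0 + 20.0 * theta * np.cos(2 * np.pi * months / 12)

observed = simulate_energy(0.7) + rng.normal(0.0, 2.0, 12)  # synthetic "measured" data

def log_posterior(theta, sigma=2.0):
    if not 0.0 <= theta <= 2.0:          # uniform prior on [0, 2]
        return -np.inf
    r = observed - simulate_energy(theta)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis: the posterior sample quantifies the parameter
# uncertainty remaining after calibration to the measured energy use.
theta, samples = 1.0, []
for _ in range(5000):
    proposal = theta + rng.normal(0.0, 0.05)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

print(np.mean(samples), np.std(samples))
```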
Calibration of X-Ray Observatories
NASA Technical Reports Server (NTRS)
Weisskopf, Martin C.; O'Dell, Stephen L.
2011-01-01
Accurate calibration of x-ray observatories has proved an elusive goal. Inaccuracies and inconsistencies amongst on-ground measurements, differences between on-ground and in-space performance, in-space performance changes, and the absence of cosmic calibration standards whose physics we truly understand have precluded absolute calibration better than several percent and relative spectral calibration better than a few percent. The philosophy "the model is the calibration" relies upon a complete high-fidelity model of performance and an accurate verification and calibration of this model. As high-resolution x-ray spectroscopy begins to play a more important role in astrophysics, additional issues in accurately calibrating at high spectral resolution become more evident. Here we review the challenges of accurately calibrating the absolute and relative response of x-ray observatories. On-ground x-ray testing by itself is unlikely to achieve a high-accuracy calibration of in-space performance, especially when the performance changes with time. Nonetheless, it remains an essential tool in verifying functionality and in characterizing and verifying the performance model. In the absence of verified cosmic calibration sources, we also discuss the notion of an artificial, in-space x-ray calibration standard.
Neuromusculoskeletal model self-calibration for on-line sequential bayesian moment estimation
NASA Astrophysics Data System (ADS)
Bueno, Diana R.; Montano, L.
2017-04-01
Objective. Neuromusculoskeletal models involve many subject-specific physiological parameters that need to be adjusted to adequately represent muscle properties. Traditionally, neuromusculoskeletal models have been calibrated with a forward-inverse dynamic optimization, which is time-consuming and infeasible for rehabilitation therapy. No self-calibration algorithms have been applied to these models. To the best of our knowledge, the algorithm proposed in this work is the first on-line calibration algorithm for muscle models that allows a generic model to be adjusted to different subjects in a few steps. Approach. In this paper we propose a reformulation of the traditional muscle models that is able to sequentially estimate the kinetics (net joint moments) and also perform full self-calibration (estimating the subject-specific internal parameters of the muscle from a set of arbitrary uncalibrated data), based on the unscented Kalman filter. The nonlinearity of the model as well as its calibration problem have obliged us to adopt the sum of Gaussians filter suitable for nonlinear systems. Main results. This sequential Bayesian self-calibration algorithm achieves a complete muscle model calibration using as input only a dataset of uncalibrated sEMG and kinematics data. The approach is validated experimentally using data from the upper limbs of 21 subjects. Significance. The results show the feasibility of neuromusculoskeletal model self-calibration. This study will contribute to a better understanding of the generalization of muscle models for subject-specific rehabilitation therapies. Moreover, this work is very promising for rehabilitation devices such as electromyography-driven exoskeletons or prostheses.
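The unscented Kalman filter at the core of such a self-calibration propagates sigma points through the nonlinear muscle model; in the self-calibration setting, the state vector is augmented with the subject-specific parameters so both are estimated jointly. Below is a minimal sketch of standard Merwe-style sigma-point generation, not the authors' code.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, kappa=0.0):
    """Merwe-style sigma points and mean weights for the unscented transform."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)          # matrix square root
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    w[0] = lam / (n + lam)
    return np.array(pts), w

# Toy usage with a 2D state; an augmented state [joint state, muscle parameters]
# would be handled the same way, just with larger n.
pts, w = sigma_points(np.zeros(2), np.eye(2))
print(pts.shape, w.sum())  # (5, 2), weights sum to 1
```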
Narukawa, Tomohiro; Inagaki, Kazumi; Zhu, Yanbei; Kuroiwa, Takayoshi; Narushima, Izumi; Chiba, Koichi; Hioki, Akiharu
2012-02-01
A certified reference material, NMIJ CRM 7405-a, for the determination of trace elements and As(V) in algae was developed from the edible marine hijiki (Hizikia fusiforme) and certified by the National Metrology Institute of Japan (NMIJ), the National Institute of Advanced Industrial Science and Technology (AIST). Hijiki was collected from the Pacific coast in the Kanto area of Japan, and was washed, dried, powdered, and homogenized. The hijiki powder was placed in 400 bottles (ca. 20 g each). The concentrations of 18 trace elements and As(V) were determined by two to four independent analytical techniques, including (ID)ICP-(HR)MS, ICP-OES, GFAAS, and HPLC-ICP-MS using calibration solutions prepared from the elemental standard solution of Japan calibration service system (JCSS) and the NMIJ CRM As(V) solution, and whose concentrations are certified and SI traceable. The uncertainties of all the measurements and preparation procedures were evaluated. The values of 18 trace elements and As(V) in the CRM were certified with uncertainty (k = 2).
Wan, Xiong; Wang, Peng
2014-01-01
Laser-induced breakdown spectroscopy (LIBS) is a feasible remote sensing technique used for mineral analysis in some unapproachable places where in situ probing is needed, such as analysis of radioactive elements in a nuclear leak or the detection of elemental compositions and contents of minerals on planetary and lunar surfaces. Here a compact custom 15 m focus optical component, combining a six-times beam expander with a telescope, has been built, with which the laser beam of a 1064 nm Nd:YAG laser is focused on remote minerals. The excited LIBS signals that reveal the elemental compositions of minerals are collected by another compact single lens-based signal acquisition system. In our remote LIBS investigations, the LIBS spectra of an unknown ore have been detected, from which the metal compositions are obtained. In addition, a multi-spectral line calibration (MSLC) method is proposed for the quantitative analysis of elements. The feasibility of the MSLC and its superiority over a single-wavelength determination have been confirmed by comparison with traditional chemical analysis of the copper content in the ore.
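One plausible reading of the MSLC idea is to fit a linear response per emission line and average the per-line concentration estimates, so that no single line dominates the determination. The sketch below uses hypothetical line intensities and is not the authors' exact formulation.

```python
import numpy as np

# Hypothetical example: intensities of three Cu lines measured on three
# reference samples of known concentration.
I = np.array([[120.0,  80.0,  45.0],   # sample 1
              [240.0, 165.0,  92.0],   # sample 2
              [365.0, 245.0, 138.0]])  # sample 3
c_known = np.array([1.0, 2.0, 3.0])    # wt% Cu in the references

# One linear calibration (slope, intercept) per spectral line.
fits = np.array([np.polyfit(c_known, I[:, k], 1) for k in range(I.shape[1])])

def predict_concentration(intensities):
    """Average the per-line inversions; the multi-line mean damps outliers."""
    preds = [(i - b) / a for i, (a, b) in zip(intensities, fits)]
    return np.mean(preds)

print(predict_concentration([300.0, 205.0, 115.0]))  # ~2.5 wt%
```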
NASA Astrophysics Data System (ADS)
Becker, R.; Usman, M.
2017-12-01
A SWAT (Soil Water Assessment Tool) model is applied in the semi-arid Punjab region in Pakistan. The physically based hydrological model is set up to simulate hydrological processes and water resources demands under future land use, climate change and irrigation management scenarios. In order to successfully run the model, detailed focus is laid on the calibration procedure of the model. The study deals with the following calibration issues: (i) lack of reliable calibration/validation data, (ii) the difficulty of accurately modeling a highly managed system with a physically based hydrological model, and (iii) the use of alternative and spatially distributed data sets for model calibration. In our study area field observations are rare, and the entirely human-controlled irrigation system renders central calibration parameters (e.g. runoff/curve number) unsuitable, as it cannot be assumed that they represent the natural behavior of the hydrological system. From evapotranspiration (ET), however, principal hydrological processes can still be inferred. Usman et al. (2015) derived satellite-based monthly ET data for our study area based on SEBAL (Surface Energy Balance Algorithm) and created a reliable ET data set which we use in this study to calibrate our SWAT model. The initial SWAT model performance is evaluated with respect to the SEBAL results using correlation coefficients, RMSE, Nash-Sutcliffe efficiencies and mean differences. Particular focus is laid on the spatial patterns, investigating the potential of a spatially differentiated parameterization instead of just using spatially uniform calibration data. A sensitivity analysis reveals the most sensitive parameters with respect to changes in ET, which are then selected for the calibration process. Using the SEBAL-ET product we calibrate the SWAT model for the time period 2005-2006 using a dynamically dimensioned global search algorithm to minimize RMSE. The model improvement after the calibration procedure is finally evaluated, based on the previously chosen evaluation criteria, for the time period 2007-2008. The study reveals the sensitivity of SWAT model parameters to changes in ET in a semi-arid and human-controlled system and the potential of calibrating those parameters using satellite-derived ET data.
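The evaluation criteria named above are standard; a compact sketch of how they might be computed for simulated versus SEBAL-derived ET (the array interface is an assumption):

```python
import numpy as np

def evaluation_metrics(sim, obs):
    """Correlation, RMSE, Nash-Sutcliffe efficiency, and mean difference (bias)
    between simulated and reference (e.g., SEBAL-derived) ET series."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    bias = np.mean(sim - obs)
    return {"r": r, "rmse": rmse, "nse": nse, "bias": bias}

print(evaluation_metrics([3.1, 4.0, 2.7, 5.2], [3.0, 4.2, 2.5, 5.0]))
```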
Comparison of global optimization approaches for robust calibration of hydrologic model parameters
NASA Astrophysics Data System (ADS)
Jung, I. W.
2015-12-01
Robustness of the calibrated parameters of hydrologic models is necessary to provide a reliable prediction of future performance of watershed behavior under varying climate conditions. This study investigated calibration performances according to the length of calibration period, objective functions, hydrologic model structures and optimization methods. To do this, the combination of three global optimization methods (i.e. SCE-UA, Micro-GA, and DREAM) and four hydrologic models (i.e. SAC-SMA, GR4J, HBV, and PRMS) was tested with different calibration periods and objective functions. Our results showed that the three global optimization methods provided close calibration performances under different calibration periods, objective functions, and hydrologic models. However, using the index of agreement, normalized root mean square error, or Nash-Sutcliffe efficiency as the objective function showed better performance than using correlation coefficient and percent bias. Calibration performances according to different calibration periods from one year to seven years were hard to generalize because the four hydrologic models have different levels of complexity and different years have different information content in the hydrological observations. Acknowledgements: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.
Model Calibration with Censored Data
Cao, Fang; Ba, Shan; Brenneman, William A.; ...
2017-06-28
Here, the purpose of model calibration is to make the model predictions closer to reality. The classical Kennedy-O'Hagan approach is widely used for model calibration, which can account for the inadequacy of the computer model while simultaneously estimating the unknown calibration parameters. In many applications, the phenomenon of censoring occurs when the exact outcome of the physical experiment is not observed, but is only known to fall within a certain region. In such cases, the Kennedy-O'Hagan approach cannot be used directly, and we propose a method to incorporate the censoring information when performing model calibration. The method is applied to study the compression phenomenon of liquid inside a bottle. The results show significant improvement over the traditional calibration methods, especially when the number of censored observations is large.
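The key modeling ingredient is a likelihood in which censored outcomes contribute probability mass over their known region rather than a density. Below is a minimal Gaussian sketch with right-censored observations; the paper's actual formulation sits inside the Kennedy-O'Hagan framework and may differ.

```python
import numpy as np
from scipy.stats import norm

def censored_loglik(mu, sigma, exact, censored_above):
    """Gaussian log-likelihood where some outcomes are only known to exceed a
    threshold: exact data use the density, censored data the survival function."""
    ll = np.sum(norm.logpdf(exact, mu, sigma))
    ll += np.sum(norm.logsf(censored_above, mu, sigma))
    return ll

# Toy usage: three exact measurements and two observations censored at 2.5.
print(censored_loglik(2.0, 0.5, np.array([1.8, 2.1, 2.4]), np.array([2.5, 2.5])))
```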
Calibration of CORSIM models under saturated traffic flow conditions.
DOT National Transportation Integrated Search
2013-09-01
This study proposes a methodology to calibrate microscopic traffic flow simulation models. The proposed methodology has the capability to calibrate simultaneously all the calibration parameters as well as demand patterns for any network topology.
Numerical Simulation of the Fluid-Structure Interaction of a Surface Effect Ship Bow Seal
NASA Astrophysics Data System (ADS)
Bloxom, Andrew L.
Numerical simulations of fluid-structure interaction (FSI) problems were performed in an effort to verify and validate a commercially available FSI tool. This tool uses an iterative partitioned coupling scheme between CD-adapco's STAR-CCM+ finite volume fluid solver and Simulia's Abaqus finite element structural solver to simulate the FSI response of a system. Preliminary verification and validation (V&V) work was carried out to understand the numerical behavior of the codes individually and together as an FSI tool. The V&V work that was completed included code order verification of the respective fluid and structural solvers with Couette-Poiseuille flow and Euler-Bernoulli beam theory. These results confirmed the 2nd-order accuracy of the spatial discretizations used. Following that, a mixture of solution verifications and model calibrations was performed with the inclusion of the physics models implemented in the solution of the FSI problems. Solution verifications were completed for fluid and structural stand-alone models as well as for the coupled FSI solutions. These results re-confirmed the spatial order of accuracy, but for more complex flows and physics models, as well as the order of accuracy of the temporal discretizations. Where a good material definition was lacking, model calibration was performed to reproduce the experimental results. This work used model calibration for both instances of hyperelastic materials that were presented in the literature as validation cases, because those materials were defined only as linear elastic. Calibrated, three-dimensional models of the bow seal on the University of Michigan bow seal test platform showed the ability to reproduce the experimental results qualitatively through averaging of the forces and seal displacements. These simulations represent the only current 3D results for this case. One significant result of this study is the ability to visualize the flow around the seal and to directly measure the seal resistances at varying cushion pressures, seal immersions, forward speeds, and seal materials. SES design analysis could greatly benefit from the inclusion of flexible seals in simulations, and this work is a positive step in that direction. In future work, the inclusion of more complex seal geometries and contact will further enhance the capability of this tool.
Analysis of the Best-Fit Sky Model Produced Through Redundant Calibration of Interferometers
NASA Astrophysics Data System (ADS)
Storer, Dara; Pober, Jonathan
2018-01-01
21 cm cosmology provides unique insights into the formation of stars and galaxies in the early universe, and particularly the Epoch of Reionization. Detection of the 21 cm line is challenging because it is generally 4-5 orders of magnitude weaker than the emission from foreground sources, and therefore the instruments used for detection must be carefully designed and calibrated. 21 cm cosmology is primarily conducted using interferometers, which are difficult to calibrate because of their complex structure. Here I explore the relationship between sky-based calibration, which relies on an accurate and comprehensive sky model, and redundancy-based calibration, which makes use of redundancies in the orientation of the interferometer's dishes. In addition to producing calibration parameters, redundant calibration also produces a best-fit model of the sky. In this work I examine that sky model and explore the possibility of using that best-fit model as an additional input to improve on sky-based calibration.
Evaluation of “Autotune” calibration against manual calibration of building energy models
Chaudhary, Gaurav; New, Joshua; Sanyal, Jibonananda; ...
2016-08-26
Our paper demonstrates the application of Autotune, a methodology aimed at automatically producing calibrated building energy models using measured data, in two case studies. In the first case, a building model is de-tuned by deliberately injecting faults into more than 60 parameters. This model was then calibrated using Autotune and its accuracy with respect to the original model was evaluated in terms of the industry-standard normalized mean bias error and coefficient of variation of root mean squared error metrics set forth in ASHRAE Guideline 14. In addition to whole-building energy consumption, outputs including lighting, plug load profiles, HVAC energy consumption, zone temperatures, and other variables were analyzed. In the second case, Autotune calibration is compared directly to experts' manual calibration of an emulated-occupancy, full-size residential building with comparable calibration results in much less time. Lastly, our paper concludes with a discussion of the key strengths and weaknesses of auto-calibration approaches.
NASA Astrophysics Data System (ADS)
Minunno, F.; Peltoniemi, M.; Launiainen, S.; Aurela, M.; Lindroth, A.; Lohila, A.; Mammarella, I.; Minkkinen, K.; Mäkelä, A.
2015-07-01
The problem of model complexity has been lively debated in environmental sciences as well as in the forest modelling community. Simple models are less input demanding and their calibration involves fewer parameters, but they might be suitable only at local scale. In this work we calibrated a simplified ecosystem process model (PRELES) to data from multiple sites and we tested whether PRELES can be used at regional scale to estimate the carbon and water fluxes of Boreal conifer forests. We compared a multi-site (M-S) calibration with site-specific (S-S) calibrations. Model calibrations and evaluations were carried out by means of Bayesian methods; Bayesian calibration (BC) and Bayesian model comparison (BMC) were used to quantify the uncertainty in model parameters and model structure. To evaluate model performance, BMC results were combined with a more classical analysis of model-data mismatch (M-DM). Evapotranspiration (ET) and gross primary production (GPP) measurements collected at 10 sites in Finland and Sweden were used in the study. Calibration results showed that similar estimates were obtained for the parameters to which model outputs are most sensitive. No significant differences were encountered in the predictions of the multi-site and site-specific versions of PRELES, with the exception of a site with agricultural history (Alkkia). Although PRELES predicted GPP better than evapotranspiration, we concluded that the model can be reliably used at regional scale to simulate the carbon and water fluxes of Boreal forests. Our analyses also underlined the importance of using long and carefully collected flux datasets in model calibration. In fact, even a single site can provide model calibrations that can be applied at a wider spatial scale, since it covers a wide range of variability in climatic conditions.
Emery, John M.; Field, Richard V.; Foulk, James W.; ...
2015-05-26
Laser welds are prevalent in complex engineering systems and they frequently govern failure. The weld process often results in partial penetration of the base metals, leaving sharp crack-like features with a high degree of variability in the geometry and material properties of the welded structure. Furthermore, accurate finite element predictions of the structural reliability of components containing laser welds require the analysis of a large number of finite element meshes with very fine spatial resolution, where each mesh has different geometry and/or material properties in the welded region to address variability. We found that traditional modeling approaches could not be efficiently employed. Consequently, a method is presented for constructing a surrogate model, based on stochastic reduced-order models, and is proposed to represent the laser welds within the component. Here, the uncertainty in weld microstructure and geometry is captured by calibrating plasticity parameters to experimental observations of necking because, owing to the ductility of the welds, necking, and thus peak load, plays the pivotal role in structural failure. The proposed method is exercised for a simplified verification problem and compared with traditional Monte Carlo simulation, with rather remarkable results.
Thermal analysis of the cryostat feed through for the ITER Tokamak TF feeder
NASA Astrophysics Data System (ADS)
Zhang, Shanwen; Song, Yuntao; Lu, Kun; Wang, Zhongwei; Zhang, Jianfeng; Qin, Yongfa
2017-04-01
In Tokamaks, the toroidal field (TF) coil feeder is an important component that is used to supply the cryogens and electrical power for the TF coils. As a part of the TF feeder, the cryostat feed-through (CFT) is subject to low temperatures of 9 and 80 K inside and room temperature of 300 K outside. Based on the features of the International Thermonuclear Experimental Reactor TF feeder, the thermal performance of the CFT under the nominal conditions is studied. Taking into account the conductive, convective and radiation heat transfer, the finite element model of the CFT is built. Transient thermal analysis is performed to determine the temperatures of the CFT on the 9th day of cooldown. The model is assessed by comparing the cooling curves of the CFT after 9 days. If the simulation and experimental results are the same, the finite element model can be considered calibrated. The model predicts that the cooling time will be approximately 26 days, and the temperature distribution and heat load of the main components are obtained when the CFT reaches thermal equilibrium. This study provides a valid quantitative characterization of the CFT design.
Verstraelen, Toon; Van Speybroeck, Veronique; Waroquier, Michel
2009-07-28
An extensive benchmark of the electronegativity equalization method (EEM) and the split charge equilibration (SQE) model on a very diverse set of organic molecules is presented. These models efficiently compute atomic partial charges and are used in the development of polarizable force fields. The predicted partial charges depend on empirical parameters, which are calibrated to reproduce results from quantum mechanical calculations. Recently, SQE was presented as an extension of the EEM to obtain the correct size dependence of the molecular polarizability. In this work, 12 parametrization protocols are applied to each model and the optimal parameters are benchmarked systematically. The training data for the empirical parameters comprise MP2/Aug-CC-pVDZ calculations on 500 organic molecules containing the elements H, C, N, O, F, S, Cl, and Br. These molecules have been selected by an ingenious and autonomous protocol from an initial set of almost 500,000 small organic molecules. It is clear that the SQE model outperforms the EEM in all benchmark assessments. When using Hirshfeld-I charges for the calibration, the SQE model optimally reproduces the molecular electrostatic potential from the ab initio calculations. Applications on chain molecules, i.e., alkanes, alkenes, and alpha alanine helices, confirm that the EEM gives rise to a divergent behavior for the polarizability, while the SQE model shows the correct trends. We conclude that the SQE model is an essential component of a polarizable force field, showing several advantages over the original EEM.
NASA Technical Reports Server (NTRS)
Doty, Keith L
1992-01-01
The author has formulated a new, general model for specifying the kinematic properties of serial manipulators. The new model's kinematic parameters do not suffer discontinuities when nominally parallel adjacent axes deviate from exact parallelism. From this new theory the author develops a first-order, lumped-parameter calibration model for the ARID manipulator. Next, the author develops a calibration methodology for the ARID based on visual and acoustic sensing. A sensor platform, consisting of a camera and four sonars attached to the ARID end frame, performs calibration measurements. A calibration measurement consists of processing one visual frame of an accurately placed calibration image and recording four acoustic range measurements. A minimum of two measurement protocols determine the kinematic calibration model of the ARID for a particular region, assuming the joint displacements are accurately measured, the calibration surface is planar, and the kinematic parameters do not vary rapidly in the region. No theoretical or practical limitations appear to contraindicate the feasibility of the calibration method developed here.
Estimation of k-ε parameters using surrogate models and jet-in-crossflow data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan
2014-11-01
We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds Averaged Navier Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDF), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently a quick-running surrogate is used instead of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (Cμ, Cε2, Cε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. Thus the primary reason for the poor predictive skill of RANS, when using nominal values of the turbulence model parameters, was parametric uncertainty, which was rectified by calibration. Post-calibration, the dominant contribution to model inaccuracies is due to structural errors in RANS.
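Conceptually, the classifier acts as an indicator prior on the parameter space: outside the well-behaved region the posterior is zero, and inside it the surrogate supplies the likelihood. Below is a minimal sketch with toy stand-ins for the treed-linear-model classifier and the surrogate; all names and numbers are hypothetical.

```python
import numpy as np

def log_posterior(theta, surrogate, in_region, data, sigma=1.0):
    """Posterior for (C_mu, C_eps2, C_eps1): the classifier zeroes out the
    prior outside the well-behaved, physically realistic region."""
    if not in_region(theta):
        return -np.inf
    r = data - surrogate(theta)
    return -0.5 * np.sum((r / sigma) ** 2)

# Toy stand-ins (the paper uses a treed-linear-model classifier and a
# surrogate trained on RANS runs; these are placeholders only).
in_region = lambda th: bool(np.all((0.05 < th) & (th < 3.0)))
surrogate = lambda th: th[0] + 0.5 * th[1] - 0.2 * th[2]

print(log_posterior(np.array([0.09, 1.92, 1.44]), surrogate, in_region, 1.3))
```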
Quantitative aspects of inductively coupled plasma mass spectrometry
Wagner, Barbara
2016-01-01
Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644971
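For instance, external calibration with internal standardization can be sketched as a straight-line fit of drift-corrected count ratios against standard concentrations. The numbers below are hypothetical, not from the review.

```python
import numpy as np

# Hypothetical external calibration: analyte counts are ratioed to an
# internal-standard element to correct for instrumental drift.
std_conc = np.array([0.0, 1.0, 5.0, 10.0])      # standards, in ug/L
ratio = np.array([0.002, 0.101, 0.498, 1.003])  # analyte/IS count ratios

slope, intercept = np.polyfit(std_conc, ratio, 1)

def quantify(sample_ratio):
    """Concentration (ug/L) implied by a sample's drift-corrected ratio."""
    return (sample_ratio - intercept) / slope

print(quantify(0.75))  # ~7.5 ug/L
```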
Robust radio interferometric calibration using the t-distribution
NASA Astrophysics Data System (ADS)
Kazemi, S.; Yatawatta, S.
2013-10-01
A major stage of radio interferometric data processing is calibration, or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration, such as the loss in flux or coherence, and the appearance of spurious sources, can be attributed to deviations from the assumed noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
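In EM-style treatments of Student's t noise, each datum receives a weight that shrinks for large residuals, which is what confers robustness to outliers. Below is a minimal sketch of the standard formulas; it is not the authors' ECME implementation, and the linear data model is a simplifying assumption (the paper uses Levenberg-Marquardt for the nonlinear calibration problem).

```python
import numpy as np

def t_weights(residuals, sigma2, nu=3.0):
    """E-step weights under Student's t noise: large residuals (outliers due
    to interference or sky-model errors) are automatically down-weighted."""
    return (nu + 1.0) / (nu + residuals**2 / sigma2)

def weighted_lsq(A, y, w):
    """M-step for a linear data model y = A @ x: weighted least squares."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

# Toy usage: one gross outlier in y barely perturbs the robust solution.
A = np.vander(np.linspace(0, 1, 8), 2)
y = A @ np.array([2.0, 1.0]); y[3] += 10.0
x = np.array([0.0, 0.0])
for _ in range(10):                      # alternate E and M steps
    w = t_weights(y - A @ x, sigma2=0.1)
    x = weighted_lsq(A, y, w)
print(x)  # close to [2, 1]
```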
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.
Wesolowski, Edwin A.
2000-01-01
This report presents a proposal for conducting a water-quality modeling study at drought streamflow, a detailed comprehensive plan for collecting the data, and an annual drought-formation monitoring plan. A 30.8 mile reach of the Red River of the North receives treated wastewater from plants at Fargo, North Dakota, and Moorhead, Minnesota, and streamflow from the Sheyenne River. The water-quality modeling study will evaluate the effects of continuous treated-wastewater discharges to the study reach at drought streamflow. The study will define hydraulic characteristics and reaeration and selected reaction coefficients and will calibrate and verify a model. The study includes collecting synoptic water-quality samples for various types of analyses at a number of sites in the study reach. Dye and gas samples will be collected for traveltime and reaeration measurements. Using the Lagrangian reference frame, synoptic water-quality samples will be collected for analysis of nutrients, chlorophyll a, alkalinity, and carbonaceous biochemical oxygen demand. Field measurements will be made of specific conductance, pH, air and water temperature, dissolved oxygen, and sediment oxygen demand. Two sets of water-quality data will be collected. One data set will be used to calibrate the model, and the other data set will be used to verify the model. The DAFLOW/BLTM models will be used to evaluate the effects of the treated wastewater on the water quality of the river. The model will simulate specific conductance, temperature, dissolved oxygen, carbonaceous biochemical oxygen demand, total nitrogen (organic, ammonia, nitrite, nitrate), total orthophosphorus, total phosphorus, and phytoplankton as chlorophyll a. The work plan identifies and discusses the work elements needed for accomplishing the data collection for the study. The work elements specify who will provide personnel, vehicles, instruments, and supplies needed during data collection. The work plan contains instructions for data collection; inventory lists of needed personnel, vehicles, instruments, and supplies; and examples of computations for determining quantities of tracer to be injected into the stream. The work plan also contains an annual drought-formation monitoring plan that includes a 9-month time line that specifies when essential planning actions must occur before actual project start up. Drought streamflows are rare. The annual drought-formation monitoring plan is presented to assist project planning by providing early warning that conditions are favorable to produce drought streamflow. The plan to monitor drought-forming conditions discusses the drought indices to be monitored. To establish a baseline, historic values for some of the drought indices for selected years were reviewed. An annual review of the drought indices is recommended.
Performance and Stability Analyses of Rocket Thrust Chambers with Oxygen/Methane Propellants
NASA Technical Reports Server (NTRS)
Hulka, James R.; Jones, Gregg W.
2010-01-01
Liquid rocket engines using oxygen and methane propellants are being considered by the National Aeronautics and Space Administration (NASA) for future in-space vehicles. This propellant combination has not been previously used in flight-qualified engine systems developed by NASA, so limited test data and analysis results are available at this stage of early development. As part of activities for the Propulsion and Cryogenic Advanced Development (PCAD) project funded under the Exploration Technology Development Program, the NASA Marshall Space Flight Center (MSFC) has been evaluating capability to model combustion performance and stability for oxygen and methane propellants. This activity has been proceeding for about two years and this paper is a summary of results to date. Hot-fire test results of oxygen/methane propellant rocket engine combustion devices for the modeling investigations have come from several sources, including multi-element injector tests with gaseous methane from the 1980s, single element tests with gaseous methane funded through the Constellation University Institutes Program, and multi-element injector tests with both gaseous and liquid methane conducted at the NASA MSFC funded by PCAD. For the latter, test results of both impinging and coaxial element injectors using liquid oxygen and liquid methane propellants are included. Configurations were modeled with two one-dimensional liquid rocket combustion analysis codes, the Rocket Combustor Interactive Design and Analysis code and the Coaxial Injector Combustion Model. Special effort was focused on how these codes can be used to model combustion and performance with oxygen/methane propellants a priori, and what anchoring or calibrating features need to be applied, improved or developed in the future. Low frequency combustion instability (chug) occurred, with frequencies ranging from 150 to 250 Hz, with several multi-element injectors with liquid/liquid propellants, and was modeled using techniques from Wenzel and Szuch. High-frequency combustion instability also occurred at the first tangential (1T) mode, at about 4500 Hz, with several multi-element injectors with liquid/liquid propellants. Analyses of the transverse mode instability were conducted by evaluating injector resonances and empirical methods developed by Hewitt.
NASA Astrophysics Data System (ADS)
Colombo, Ivo; Porta, Giovanni M.; Ruffo, Paolo; Guadagnini, Alberto
2017-03-01
This study illustrates a procedure conducive to a preliminary risk analysis of overpressure development in sedimentary basins characterized by alternating depositional events of sandstone and shale layers. The approach rests on two key elements: (1) forward modeling of fluid flow and compaction, and (2) application of a model-complexity reduction technique based on a generalized polynomial chaos expansion (gPCE). The forward model considers a one-dimensional vertical compaction process. The gPCE model is then used in an inverse modeling context to obtain efficient model parameter estimation and uncertainty quantification. The methodology is applied to two field settings considered in previous literature works, i.e. the Venture Field (Scotian Shelf, Canada) and the Navarin Basin (Bering Sea, Alaska, USA), relying on available porosity and pressure information for model calibration. It is found that the best result is obtained when porosity and pressure data are considered jointly in the model calibration procedure. Uncertainty propagation from unknown input parameters to model outputs, such as the pore pressure vertical distribution, is investigated and quantified. This modeling strategy enables one to quantify the relative importance of key phenomena governing the feedback between sediment compaction and fluid flow processes and driving the buildup of fluid overpressure in stratified sedimentary basins characterized by the presence of low-permeability layers. The results illustrated here (1) allow for diagnosis of the critical role played by the parameters of quantitative formulations linking porosity and permeability in compacted shales and (2) provide an explicit and detailed quantification of the effects of their uncertainty in field settings.
NASA Astrophysics Data System (ADS)
Schneider, M.; Müller, R.; Krawzcyk, H.; Bachmann, M.; Storch, T.; Mogulsky, V.; Hofer, S.
2012-07-01
The German Aerospace Center DLR - namely the Earth Observation Center EOC and the German Space Operations Center GSOC - is responsible for the establishment of the ground segment of the future German hyperspectral satellite mission EnMAP (Environmental Mapping and Analysis Program). The Earth Observation Center has long-standing experience with air- and spaceborne acquisition, processing, and analysis of hyperspectral image data. In the first part of this paper, an overview of the radiometric in-flight calibration concept including dark value measurements, deep space measurements, internal lamps measurements and sun measurements is presented. Complemented by pre-launch calibration and characterization, these analyses will deliver a detailed and quantitative assessment of possible changes of spectral and radiometric characteristics of the hyperspectral instrument, e.g. due to degradation of single elements. A geometric accuracy of 100 m, improved to 30 m with respect to a reference image where one exists, will be achieved by ground processing. Therefore, and to achieve the required co-registration accuracy between SWIR and VNIR channels, a geometric calibration is necessary in addition to the radiometric calibration. In the second part of this paper, the concept of the geometric calibration is presented in detail. The geometric processing of EnMAP scenes will be based on laboratory calibration results. During repeated passes over selected calibration areas images will be acquired. The update of geometric camera model parameters will be done by an adjustment using ground control points, which will be extracted by automatic image matching. In the adjustment, the improvements of the attitude angles (boresight angles), the improvements of the interior orientation (view vector) and the improvements of the position data are estimated. In this paper, the improvement of the boresight angles is presented in detail as an example. The other values and combinations follow the same rules. The geometric calibration will mainly be executed during the commissioning phase; later in the mission it is only executed if required, i.e. if the geometric accuracy of the produced images is close to or exceeds the requirements of 100 m or 30 m respectively, whereas the radiometric calibration will be executed periodically during the mission, with a higher frequency during the commissioning phase.
Cunningham, J C; Sinka, I C; Zavaliangos, A
2004-08-01
In this first of two articles on the modeling of tablet compaction, the experimental inputs related to the constitutive model of the powder and the powder/tooling friction are determined. The continuum-based analysis of tableting makes use of an elasto-plastic model, which incorporates the elements of yield, plastic flow potential, and hardening, to describe the mechanical behavior of microcrystalline cellulose over the range of densities experienced during tableting. Specifically, a modified Drucker-Prager/cap plasticity model, which includes material parameters such as cohesion, internal friction, and hydrostatic yield pressure that evolve with the internal state variable relative density, was applied. Linear elasticity is assumed with the elastic parameters, Young's modulus, and Poisson's ratio dependent on the relative density. The calibration techniques were developed based on a series of simple mechanical tests including diametrical compression, simple compression, and die compaction using an instrumented die. The friction behavior is measured using an instrumented die and the experimental data are analyzed using the method of differential slices. The constitutive model and frictional properties are essential experimental inputs to the finite element-based model described in the companion article. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 93:2022-2039, 2004
NASA Astrophysics Data System (ADS)
Smith, K. A.; Barker, L. J.; Harrigan, S.; Prudhomme, C.; Hannaford, J.; Tanguy, M.; Parry, S.
2017-12-01
Earth and environmental models are relied upon to investigate system responses that cannot otherwise be examined. In simulating physical processes, models have adjustable parameters which may, or may not, have a physical meaning. Determining the values to assign to these model parameters is an enduring challenge for earth and environmental modellers. Selecting different error metrics by which the model results are compared to observations will lead to different sets of calibrated model parameters, and thus different model results. Furthermore, models may exhibit 'equifinal' behaviour, where multiple combinations of model parameters lead to equally acceptable model performance against observations. These decisions in model calibration introduce uncertainty that must be considered when model results are used to inform environmental decision-making. This presentation focusses on the uncertainties that derive from the calibration of a four-parameter lumped catchment hydrological model (GR4J). The GR models contain an inbuilt automatic calibration algorithm that can satisfactorily calibrate against four error metrics in only a few seconds. However, a single, deterministic model result does not provide information on parameter uncertainty. Furthermore, a modeller interested in extreme events, such as droughts, may wish to calibrate against more low-flow-specific error metrics. In a comprehensive assessment, the GR4J model has been run with 500,000 Latin hypercube sampled parameter sets across 303 catchments in the United Kingdom. These parameter sets have been assessed against six error metrics, including two drought-specific metrics. This presentation compares the two approaches, and demonstrates that the inbuilt automatic calibration can outperform the Latin hypercube experiment approach in single-metric assessed performance. However, it is also shown that there are many merits to the more comprehensive assessment, which allows for probabilistic model results, multi-objective optimisation, and better tailoring to calibrate the model for specific applications such as drought event characterisation. Modellers and decision-makers may be constrained in their choice of calibration method, so it is important that they recognise the strengths and limitations of their chosen approach.
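A minimal sketch of Latin hypercube sampling over four GR4J-style parameters; the bounds below are illustrative only, as actual ranges vary by study.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=42):
    """Stratified (Latin hypercube) sample of parameter sets within bounds:
    each parameter's range is split into n_samples strata, one draw per stratum,
    with the strata randomly paired across parameters."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    strata = np.tile(np.arange(n_samples), (d, 1))
    u = (rng.permuted(strata, axis=1).T + rng.uniform(size=(n_samples, d))) / n_samples
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    return lo + u * (hi - lo)

# Illustrative GR4J parameter bounds (x1..x4).
sets = latin_hypercube(500_000, [(1, 2000), (-10, 10), (1, 300), (0.5, 10)])
print(sets.shape)  # (500000, 4): one row per candidate parameter set
```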
NASA Astrophysics Data System (ADS)
Lazic, V.; De Ninno, A.
2017-11-01
Laser-induced plasma spectroscopy was applied to particles attached to a substrate, a silica wafer covered with a thin oil film. The substrate itself interacts weakly with a ns Nd:YAG laser (1064 nm), while the presence of particles strongly enhances the plasma emission, detected here by a compact spectrometer array. Variations of the sample mass from one laser spot to another exceed one order of magnitude, as estimated by on-line photography and an initial image calibration for different sample loadings. Consequently, the spectral lines from particles show extreme intensity fluctuations from one sampling point to another, between the detection threshold and the detector's saturation in some cases. In such conditions the common calibration approach based on averaged spectra, also when considering ratios of the element lines, i.e. concentrations, produces errors too large for measuring the sample compositions. On the other hand, intensities of an analytical and a reference line from single-shot spectra are linearly correlated. The corresponding slope depends on the concentration ratio and is weakly sensitive to fluctuations of the plasma temperature inside the data set. Using the slopes to construct the calibration graphs significantly reduces the error bars, but it does not eliminate the point scattering caused by the matrix effect, which is also responsible for large differences in the average plasma temperatures among the samples. Well-aligned calibration points were obtained after identifying the couples of transitions least sensitive to variations of the plasma temperature, which was achieved by simple theoretical simulations. Such a selection of the analytical lines minimizes the matrix effect and, together with the chosen calibration approach, allows the relative element concentrations to be measured even in highly unstable laser-induced plasmas.
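A sketch of the slope-based calibration point: for each reference sample, regress single-shot analyte-line intensities on the reference-line intensities and keep the slope. The data below are hypothetical and this is not the authors' processing chain.

```python
import numpy as np

def shotwise_slope(analyte_I, reference_I):
    """Slope of analyte-line vs. reference-line intensities across single
    shots; used in place of averaged spectra to build the calibration graph."""
    A = np.vstack([reference_I, np.ones_like(reference_I)]).T
    slope, _intercept = np.linalg.lstsq(A, analyte_I, rcond=None)[0]
    return slope

# Toy single-shot intensities (arbitrary units) for one reference sample.
ref = np.array([1.0, 2.5, 0.7, 4.0, 3.1])
ana = 0.8 * ref + np.array([0.02, -0.05, 0.01, 0.04, -0.03])
print(shotwise_slope(ana, ref))  # ~0.8; one such slope per reference sample
```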
Fast Estimation of Strains for Cross-Beams Six-Axis Force/Torque Sensors by Mechanical Modeling
Ma, Junqing; Song, Aiguo
2013-01-01
Strain distributions are crucial criteria of cross-beams six-axis force/torque sensors. The conventional method for calculating the criteria is to utilize Finite Element Analysis (FEA) to get numerical solutions. This paper aims to obtain analytical solutions of strains under the effect of external force/torque in each dimension. Generic mechanical models for cross-beams six-axis force/torque sensors are proposed, in which deformable cross elastic beams and compliant beams are modeled as quasi-static Timoshenko beams. A detailed description of model assumptions, model idealizations, application scope and model establishment is presented. The results are validated by both numerical FEA simulations and calibration experiments, and test results are found to be compatible with each other for a wide range of geometric properties. The proposed analytical solutions are demonstrated to be an accurate estimation algorithm with higher efficiency. PMID:23686144
Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.
2015-01-01
While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726
Analysis of Trace Siderophile Elements at High Spatial Resolution Using Laser Ablation ICP-MS
NASA Astrophysics Data System (ADS)
Campbell, A. J.; Humayun, M.
2006-05-01
Laser ablation inductively coupled plasma mass spectrometry is an increasingly important method of performing spatially resolved trace element analyses. Over the last several years we have applied this technique to measure siderophile element distributions at the ppm level in a variety of natural and synthetic samples, especially metallic phases in meteorites and experimental run products intended for trace element partitioning studies. These samples frequently require trace element analyses to be made at a finer spatial resolution (25 microns or better) than is typically attained using LA-ICP-MS. In this presentation we review analytical protocols that were developed to optimize the LA-ICP-MS measurements for high spatial resolution. Particular attention is paid to the trade-offs involving sensitivity, ablation pit depth and diameter, background levels, and number of elements measured. To maximize signal/background ratios and avoid difficulties associated with ablating to depths greater than the ablation pit diameter, measurement involved integration of rapidly varying, transient but well-behaved signals. The abundances of platinum group elements and other siderophile elements in ferrous metals were calibrated against well-characterized standards, including iron meteorites and NIST certified steels. The calibrations can be set against the known abundance of an independently determined element, but normalization to 100 percent can also be employed, and was more useful in many circumstances. Evaluation of uncertainties incorporated counting statistics as well as a measure of instrumental uncertainty, determined by replicate analyses of the standards. These methods have led to a number of insights into the formation and chemical processing of metal in the early solar system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anh Bui; Nam Dinh; Brian Williams
In addition to the validation data plan, development of advanced techniques for calibration and validation of complex multiscale, multiphysics nuclear reactor simulation codes is a main objective of the CASL VUQ plan. Advanced modeling of LWR systems normally involves a range of physico-chemical models describing multiple interacting phenomena, such as thermal hydraulics, reactor physics, coolant chemistry, etc., which occur over a wide range of spatial and temporal scales. To a large extent, the accuracy of (and uncertainty in) overall model predictions is determined by the correctness of various sub-models, which are not conservation-laws based, but empirically derived from measurement data. Such sub-models normally require extensive calibration before the models can be applied to analysis of real reactor problems. This work demonstrates a case study of calibration of a common model of subcooled flow boiling, which is an important multiscale, multiphysics phenomenon in LWR thermal hydraulics. The calibration process is based on a new strategy of model-data integration, in which all sub-models are simultaneously analyzed and calibrated using multiple sets of data of different types. Specifically, both data on large-scale distributions of void fraction and fluid temperature and data on small-scale physics of wall evaporation were simultaneously used in this work's calibration. In a departure from the traditional (or common-sense) practice of tuning/calibrating complex models, a modern calibration technique based on statistical modeling and Bayesian inference was employed, which allowed simultaneous calibration of multiple sub-models (and related parameters) using different datasets. Quality of data (relevancy, scalability, and uncertainty) could be taken into consideration in the calibration process. This work presents a step forward in the development and realization of the "CIPS Validation Data Plan" at the Consortium for Advanced Simulation of LWRs to enable quantitative assessment of the CASL modeling of the Crud-Induced Power Shift (CIPS) phenomenon, in particular, and the CASL advanced predictive capabilities, in general. This report is prepared for the Department of Energy's Consortium for Advanced Simulation of LWRs program's VUQ Focus Area.
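The simultaneous, multi-dataset flavor of Bayesian calibration described above can be sketched with a toy Metropolis sampler; this is not the CASL implementation, and the two "datasets", sub-models, and uncertainties below are invented:

```python
# Toy Metropolis sampler: two parameters calibrated jointly against two datasets
# of different type and uncertainty. Models and sigmas are assumptions.
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([0.8, 2.0])
x1, x2 = np.linspace(0, 1, 30), np.linspace(0, 1, 12)
y1 = theta_true[0] * x1 + rng.normal(0, 0.02, x1.size)                 # large-scale data
y2 = theta_true[0] * np.exp(-theta_true[1] * x2) + rng.normal(0, 0.05, x2.size)

def log_post(theta):
    a, b = theta
    if not (0 < a < 5 and 0 < b < 10):
        return -np.inf                                                  # flat box prior
    ll1 = -0.5 * np.sum(((y1 - a * x1) / 0.02) ** 2)                    # dataset 1
    ll2 = -0.5 * np.sum(((y2 - a * np.exp(-b * x2)) / 0.05) ** 2)       # dataset 2
    return ll1 + ll2                                                    # joint likelihood

theta = np.array([1.0, 1.0])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:                             # accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
post = np.array(samples[5000:])                                         # drop burn-in
print("posterior mean:", post.mean(axis=0), "posterior sd:", post.std(axis=0))
```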
Wilson, S.A.; Ridley, W.I.; Koenig, A.E.
2002-01-01
The requirements of standard materials for LA-ICP-MS analysis have been difficult to meet for the determination of trace elements in sulfides. We describe a method for the production of synthetic sulfides by precipitation from solution. The method is detailed by the production of approximately 200 g of a material, PS-1, with a suite of chalcophilic trace elements in an Fe-Zn-Cu-S matrix. Preliminary composition data, together with an evaluation of the homogeneity for individual elements, suggest that this type of material meets the requirements for a sulfide calibration standard that allows for quantitative analysis. Contamination of the standard with Na suggests that H2S gas may prove a better sulfur source for future experiments. We recommend that calibration data be collected in whatever mode is closest to that employed for the analysis of the unknown material, because of variable fractionation effects as a function of analytical mode. For instance, if individual spot analyses are attempted on an unknown sample, then a raster of several individual spot analyses, not a continuous scan, should be collected and averaged for the standard. Hg and Au are exceptions to the above, and calibration data for them should always be collected in a scanning mode. Au is more heterogeneously distributed than other trace metals, and large-area scans are required to provide an average value for calibration purposes. We emphasize that the values given in Table 1 are preliminary values. Further chemical characterization of this standard, through a round-robin analysis program, will allow the USGS to provide both certified and recommended values for individual elements. The USGS has developed PS-1 as a potential new LA-ICP-MS standard for use by the analytical community, and requests for this material should be addressed to S. Wilson. However, it is stressed that an important aspect of the method described here is the flexibility for individual investigators to produce sulfides with a wide range of trace metals in variable matrices. For example, PS-1 is not well suited to the analysis of galena, and it would be relatively straightforward for other standards to be developed with Pb present in the matrix as a major constituent. These standards can be made easily and cheaply in a standard wet chemistry laboratory using equipment and chemicals that are readily available.
Cierkens, Katrijn; Plano, Salvatore; Benedetti, Lorenzo; Weijers, Stefan; de Jonge, Jarno; Nopens, Ingmar
2012-01-01
Application of activated sludge models (ASMs) to full-scale wastewater treatment plants (WWTPs) is still hampered by the problem of model calibration of these over-parameterised models. This either requires expert knowledge or global methods that explore a large parameter space. However, a better balance in structure between the submodels (ASM, hydraulic, aeration, etc.) and improved quality of influent data result in much smaller calibration efforts. In this contribution, a methodology is proposed that links data frequency and model structure to calibration quality and output uncertainty. It is composed of defining the model structure, the input data, an automated calibration, confidence interval computation and uncertainty propagation to the model output. Apart from the last step, the methodology is applied to an existing WWTP using three models differing only in the aeration submodel. A sensitivity analysis was performed on all models, allowing the ranking of the most important parameters to select in the subsequent calibration step. The aeration submodel proved very important to get good NH(4) predictions. Finally, the impact of data frequency was explored. Lowering the frequency resulted in larger deviations of parameter estimates from their default values and larger confidence intervals. Autocorrelation due to high frequency calibration data has an opposite effect on the confidence intervals. The proposed methodology opens doors to facilitate and improve calibration efforts and to design measurement campaigns.
NASA Astrophysics Data System (ADS)
Ruiz-Pérez, Guiomar; Koch, Julian; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix
2017-12-01
Ecohydrological modeling studies in developing countries, such as sub-Saharan Africa, often face the problem of extensive parametrical requirements and limited available data. Satellite remote sensing data may be able to fill this gap, but require novel methodologies to exploit their spatio-temporal information that could potentially be incorporated into model calibration and validation frameworks. The present study tackles this problem by suggesting an automatic calibration procedure, based on the empirical orthogonal function, for distributed ecohydrological daily models. The procedure is tested with the support of remote sensing data in a data-scarce environment - the upper Ewaso Ngiro river basin in Kenya. In the present application, the TETIS-VEG model is calibrated using only NDVI (Normalized Difference Vegetation Index) data derived from MODIS. The results demonstrate that (1) satellite data of vegetation dynamics can be used to calibrate and validate ecohydrological models in water-controlled and data-scarce regions, (2) the model calibrated using only satellite data is able to reproduce both the spatio-temporal vegetation dynamics and the observed discharge at the outlet and (3) the proposed automatic calibration methodology works satisfactorily and it allows for a straightforward incorporation of spatio-temporal data into the calibration and validation framework of a model.
Davari, Seyyed Ali; Hu, Sheng; Mukherjee, Dibyendu
2017-03-01
Intermetallic nanoalloys (NAs) and nanocomposites (NCs) have increasingly gained prominence as efficient catalytic materials in electrochemical energy conversion and storage systems. But their morphology and chemical compositions play a critical role in tuning their catalytic activities and precious metal contents. While advanced microscopy techniques facilitate morphological characterizations, traditional chemical characterizations are either qualitative or extremely involved. In this study, we apply Laser Induced Breakdown Spectroscopy (LIBS) for quantitative compositional analysis of NAs and NCs synthesized with varied elemental ratios by our in-house-built pulsed laser ablation technique. Specifically, elemental ratios of binary PtNi, PdCo (NAs) and PtCo (NCs) of different compositions are determined from LIBS measurements employing an internal calibration scheme using the bulk matrix species as internal standards. Morphology and qualitative elemental compositions of the aforesaid NAs and NCs are confirmed from Transmission Electron Microscopy (TEM) images and Energy Dispersive X-ray Spectroscopy (EDX) measurements. LIBS experiments are carried out in ambient conditions with the NA and NC samples drop cast on silicon wafers after centrifugation to increase their concentrations. The technique does not call for cumbersome sample preparations, including acid digestions and external calibration standards commonly required in Inductively Coupled Plasma-Optical Emission Spectroscopy (ICP-OES) techniques. Yet the quantitative LIBS results are in good agreement with the results from ICP-OES measurements. Our results indicate the feasibility of using LIBS in the future for rapid and in-situ quantitative chemical characterizations of wide classes of synthesized NAs and NCs. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Garavaglia, F.; Seyve, E.; Gottardi, F.; Le Lay, M.; Gailhard, J.; Garçon, R.
2014-12-01
MORDOR is a conceptual hydrological model extensively used in Électricité de France (EDF, French electric utility company) operational applications: (i) hydrological forecasting, (ii) flood risk assessment, (iii) water balance and (iv) climate change studies. MORDOR is a lumped, reservoir-based, elevation-dependent model with hourly or daily areal rainfall and air temperature as the driving input data. The principal hydrological processes represented are evapotranspiration, direct and indirect runoff, ground water, snow accumulation and melt, and routing. The model has been intensively used at EDF for more than 20 years, in particular for modeling French mountainous watersheds. Regarding parameter calibration, we propose and test alternative multi-criteria techniques based on two specific approaches: automatic calibration using single-objective functions and a priori parameter calibration founded on hydrological watershed features. The automatic calibration approach uses single-objective functions, based on Kling-Gupta efficiency, to quantify the agreement between the simulated and observed runoff, focusing on four different runoff samples: (i) time-series sample, (ii) annual hydrological regime, (iii) monthly cumulative distribution functions and (iv) recession sequences. The primary purpose of this study is to analyze the definition and sensitivity of MORDOR parameters by testing different calibration techniques in order to: (i) simplify the model structure, (ii) increase the calibration-validation performance of the model and (iii) reduce the equifinality problem of the calibration process. We propose an alternative calibration strategy that reaches these goals. The analysis is illustrated by calibrating the MORDOR model to daily data for 50 watersheds located in French mountainous regions.
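For reference, the Kling-Gupta efficiency in its standard form (Gupta et al., 2009); whether EDF's implementation matches this exact variant is an assumption:

```python
# Kling-Gupta efficiency as commonly defined; KGE = 1 is a perfect fit.
import numpy as np

def kge(sim, obs):
    r = np.corrcoef(sim, obs)[0, 1]          # linear correlation
    alpha = np.std(sim) / np.std(obs)        # variability ratio
    beta = np.mean(sim) / np.mean(obs)       # bias ratio
    return 1 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

obs = np.array([2.0, 3.5, 5.0, 4.2, 3.1, 2.5])   # invented daily flows
sim = np.array([2.2, 3.3, 4.6, 4.5, 3.0, 2.4])
print(f"KGE = {kge(sim, obs):.3f}")
```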
NASA Astrophysics Data System (ADS)
Tapia Gutierrez, Patricio Enrique
Whitetopping (WT) is a rehabilitation method to resurface deteriorated asphalt pavements. While some of these composite pavements have performed very well carrying heavy loads, others have shown poor performance with early cracking. With the objective of analyzing the applicability of WT pavements under Florida conditions, a total of nine full-scale WT test sections were constructed and tested using a Heavy Vehicle Simulator (HVS) in the APT facility at the FDOT Material Research Park. The test sections were instrumented to monitor both strain and temperature. A 3-D finite element model was developed to analyze the WT test sections. The model was calibrated and verified using measured FWD deflections and HVS load-induced strains from the test sections. The model was then used to evaluate the potential performance of these test sections under critical temperature-load conditions in Florida. Six of the WT pavement test sections had a bonded concrete-asphalt interface achieved by milling, cleaning and spraying the asphalt surface with water. This method produced excellent bonding at the interface, with shear strengths of 195 to 220 psi. Three of the test sections were intended to have an unbonded concrete-asphalt interface by applying a debonding agent to the asphalt surface. However, shear strengths between 119 and 135 psi and a careful analysis of the strain and temperature data indicated a partial bond condition. The computer model was able to satisfactorily model the behavior of the composite pavement by mainly considering material properties from standard laboratory tests and calibrating the spring elements used to model the interface. Reasonable matches between the measured and the calculated strains were achieved when a temperature-dependent AC elastic modulus was included in the analytical model. The expected numbers of repetitions of the 24-kip single axle loads at the critical thermal condition were computed for the nine test sections based on maximum tensile stresses and fatigue theory. The results showed that 4" slabs can be used for heavy loads only for low-volume traffic. To withstand the critical load without fear of fatigue failure, 6" slabs and 8" slabs would be needed for joint spacings of 4' and 6', respectively.
The Scottish way - getting results in soil spectroscopy without spending money
NASA Astrophysics Data System (ADS)
Aitkenhead, Matt; Cameron, Clare; Gaskin, Graham; Choisy, Bastien; Coull, Malcolm; Black, Helaina
2016-04-01
Achieving soil characterisation using spectroscopy requires several things. These include soil data to develop or train a calibration model, a method of capturing spectra, the ability to actually develop a calibration model and also additional data to reinforce the model by introducing some form of stratification or site-specific information. Each of these steps requires investment in both time and money. Here we present an approach developed at the James Hutton Institute that achieves the end goal with minimal cost, by making as much use as possible of existing soil and environmental datasets for Scotland. The spectroscopy device that has been developed is PHYLIS (Prototype HYperspectral Low-cost Imaging System) that was constructed using inexpensive optical components, and uses a basic digital camera to produce visible-range spectra. The results show that for a large number of soil parameters, it is possible to estimate values either very well (RSQ > 0.9) (LOI, C, exchangeable H), well (RSQ > 0.75) (N, pH) or moderately (RSQ > 0.5) (Mg, Na, K, Fe, Al, sand, silt, clay). The methods used to achieve these results are described. A number of additional parameters were not well estimated (elemental concentrations), and we describe how work is ongoing to improve our ability to estimate these using similar technology and data.
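A common route from visible-range spectra to soil properties is partial least squares regression; the abstract does not say which regression method is used with PHYLIS, so the sketch below (on synthetic spectra) is illustrative only:

```python
# Illustrative PLS calibration of a soil property from spectra; the data are
# synthetic and the choice of PLS is an assumption, not the Institute's method.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
n_samples, n_bands = 200, 120
spectra = rng.random((n_samples, n_bands))                     # fake reflectance spectra
loi = spectra[:, 40:60].mean(axis=1) * 50 + rng.normal(0, 1.0, n_samples)  # fake LOI

X_tr, X_te, y_tr, y_te = train_test_split(spectra, loi, random_state=0)
model = PLSRegression(n_components=8).fit(X_tr, y_tr)
rsq = r2_score(y_te, model.predict(X_te).ravel())
print("RSQ on held-out samples:", round(rsq, 3))
```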
Miura, Michiaki; Nakamura, Junichi; Matsuura, Yusuke; Wako, Yasushi; Suzuki, Takane; Hagiwara, Shigeo; Orita, Sumihisa; Inage, Kazuhide; Kawarai, Yuya; Sugano, Masahiko; Nawata, Kento; Ohtori, Seiji
2017-12-16
Finite element analysis (FEA) of the proximal femur has previously been validated with large mesh sizes, but these were insufficient to simulate models with the small implants used in recent studies. This study aimed to validate a proximal femoral computed tomography (CT)-based specimen-specific FEA model with smaller mesh size using fresh frozen cadavers. Twenty proximal femora from 10 cadavers (mean age, 87.1 years) were examined. CT was performed on all specimens with a calibration phantom. Nonlinear FEA prediction with stance configuration was performed using Mechanical Finder (mesh, 1.5 mm tetrahedral elements; shell thickness, 0.2 mm; Poisson's coefficient, 0.3), in comparison with mechanical testing. Force was applied at a fixed vertical displacement rate, and the magnitude of the applied load and displacement were continuously recorded. The fracture load and stiffness were calculated from the force-displacement curve, and the correlation between mechanical testing and FEA prediction was examined. A pilot study with one femur revealed that the equations proposed by Keller for vertebra were the most reproducible for calculating the Young's modulus and yield stress of elements of the proximal femur. There was a good linear correlation between the fracture loads of mechanical testing and FEA prediction (R² = 0.6187) and between the stiffness of mechanical testing and FEA prediction (R² = 0.5499). There was a good linear correlation between fracture load and stiffness (R² = 0.6345) in mechanical testing and an excellent correlation between these (R² = 0.9240) in FEA prediction. The CT-based specimen-specific FEA model of the proximal femur with small element size was validated using fresh frozen cadavers. The equations proposed by Keller for vertebra were found to be the most reproducible for the proximal femur in elderly people.
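The density-to-property step this abstract describes can be sketched as below; the phantom calibration slope and the power-law coefficients are placeholders, not Keller's published values, and must be replaced with the real constants before any use:

```python
# Pipeline sketch: CT value -> calibrated ash density (via the phantom) ->
# element-wise Young's modulus and yield stress through Keller-type power laws
# E = a*rho^b, sigma_y = c*rho^d. All coefficients below are PLACEHOLDERS.
import numpy as np

def hu_to_density(hu, slope=0.0007, intercept=0.0):   # assumed phantom calibration
    """Map CT Hounsfield units to ash density [g/cm^3]."""
    return np.maximum(slope * hu + intercept, 1e-6)

def keller_type(rho, a=10.0, b=2.3, c=100.0, d=2.0):  # placeholder coefficients
    """Per-element modulus and yield stress from density (units follow a, c)."""
    return a * rho**b, c * rho**d

hu_elements = np.array([150.0, 400.0, 900.0, 1300.0])  # one HU value per FE element
rho = hu_to_density(hu_elements)
E, sigma_y = keller_type(rho)
print(np.c_[hu_elements, rho, E, sigma_y])
```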
NASA Astrophysics Data System (ADS)
Ercan, Mehmet Bulent
Watershed-scale hydrologic models are used for a variety of applications, from flood prediction, to drought analysis, to water quality assessments. A particular challenge in applying these models is calibration of the model parameters, many of which are difficult to measure at the watershed scale. A primary goal of this dissertation is to contribute new computational methods and tools for calibration of watershed-scale hydrologic models and the Soil and Water Assessment Tool (SWAT) model, in particular. SWAT is a physically-based, watershed-scale hydrologic model developed to predict the impact of land management practices on water quality and quantity. The dissertation follows a manuscript format, meaning it comprises three separate but interrelated research studies. The first two research studies focus on SWAT model calibration, and the third research study presents an application of the new calibration methods and tools to study climate change impacts on water resources in the Upper Neuse Watershed of North Carolina using SWAT. The objective of the first two studies is to overcome computational challenges associated with calibration of SWAT models. The first study evaluates a parallel SWAT calibration tool built using the Windows Azure cloud environment and a parallel version of the Dynamically Dimensioned Search (DDS) calibration method modified to run in Azure. The calibration tool was tested for six model scenarios constructed using three watersheds of increasing size (the Eno, Upper Neuse, and Neuse) for both a 2 year and 10 year simulation duration. Leveraging the cloud as an on-demand computing resource allowed for a significantly reduced calibration time, such that calibration of the Neuse watershed went from taking 207 hours on a personal computer to only 3.4 hours using 256 cores in the Azure cloud. The second study aims at increasing SWAT model calibration efficiency by creating an open source, multi-objective calibration tool using the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). This tool was demonstrated through an application for the Upper Neuse Watershed in North Carolina, USA. The objective functions used for the calibration were Nash-Sutcliffe efficiency (E) and Percent Bias (PB), and the objective sites were the Flat, Little, and Eno watershed outlets. The results show that the use of multi-objective calibration algorithms for SWAT calibration improved model performance, especially in terms of minimizing PB, compared to single objective model calibration. The third study builds upon the first two studies by leveraging the new calibration methods and tools to study future climate impacts on the Upper Neuse watershed. Statistically downscaled outputs from eight Global Circulation Models (GCMs) were used for both low and high emission scenarios to drive a well calibrated SWAT model of the Upper Neuse watershed. The objective of the study was to understand the potential hydrologic response of the watershed, which serves as a public water supply for the growing Research Triangle Park region of North Carolina, under projected climate change scenarios. The future climate change scenarios, in general, indicate an increase in precipitation and temperature for the watershed in coming decades. The SWAT simulations using the future climate scenarios, in general, suggest an increase in soil water and water yield, and a decrease in evapotranspiration within the Upper Neuse watershed.
In summary, this dissertation advances the field of watershed-scale hydrologic modeling by (i) providing some of the first work to apply cloud computing for the computationally-demanding task of model calibration; (ii) providing a new, open source library that can be used by SWAT modelers to perform multi-objective calibration of their models; and (iii) advancing understanding of climate change impacts on water resources for an important watershed in the Research Triangle Park region of North Carolina. The third study leveraged the methodological advances presented in the first two studies. Therefore, the dissertation contains three independent but interrelated studies that collectively advance the field of watershed-scale hydrologic modeling and analysis.
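The two objective functions named in the second study, in their standard forms; the sign conventions and any transformation applied inside the dissertation's NSGA-II tool are assumptions:

```python
# Nash-Sutcliffe efficiency and percent bias in their textbook definitions;
# an NSGA-II search would typically minimize, e.g., (1 - NSE, |PBIAS|).
import numpy as np

def nse(sim, obs):
    sim, obs = np.asarray(sim), np.asarray(obs)
    return 1 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

def pbias(sim, obs):
    sim, obs = np.asarray(sim), np.asarray(obs)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = [12.0, 30.0, 25.0, 18.0, 9.0]     # invented streamflow values
sim = [10.5, 28.0, 27.5, 16.0, 9.5]
print(f"NSE = {nse(sim, obs):.3f}, PBIAS = {pbias(sim, obs):+.1f}%")
```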
A Combined Experimental and Computational Approach to Subject-Specific Analysis of Knee Joint Laxity
Harris, Michael D.; Cyr, Adam J.; Ali, Azhar A.; Fitzpatrick, Clare K.; Rullkoetter, Paul J.; Maletsky, Lorin P.; Shelburne, Kevin B.
2016-01-01
Modeling complex knee biomechanics is a continual challenge, which has resulted in many models of varying levels of quality, complexity, and validation. Beyond modeling healthy knees, accurately mimicking pathologic knee mechanics, such as after cruciate rupture or meniscectomy, is difficult. Experimental tests of knee laxity can provide important information about ligament engagement and overall contributions to knee stability for development of subject-specific models to accurately simulate knee motion and loading. Our objective was to provide combined experimental tests and finite-element (FE) models of natural knee laxity that are subject-specific, have one-to-one experiment to model calibration, simulate ligament engagement in agreement with literature, and are adaptable for a variety of biomechanical investigations (e.g., cartilage contact, ligament strain, in vivo kinematics). Calibration involved perturbing ligament stiffness, initial ligament strain, and attachment location until model-predicted kinematics and ligament engagement matched experimental reports. Errors between model-predicted and experimental kinematics averaged <2 deg during varus–valgus (VV) rotations, <6 deg during internal–external (IE) rotations, and <3 mm of translation during anterior–posterior (AP) displacements. Engagement of the individual ligaments agreed with literature descriptions. These results demonstrate the ability of our constraint models to be customized for multiple individuals and simultaneously call attention to the need to verify that ligament engagement is in good general agreement with literature. To facilitate further investigations of subject-specific or population based knee joint biomechanics, data collected during the experimental and modeling phases of this study are available for download by the research community. PMID:27306137
Five-Hole Flow Angle Probe Calibration for the NASA Glenn Icing Research Tunnel
NASA Technical Reports Server (NTRS)
Gonsalez, Jose C.; Arrington, E. Allen
1999-01-01
A spring 1997 test section calibration program is scheduled for the NASA Glenn Research Center Icing Research Tunnel following the installation of new water-injecting spray bars. A set of new five-hole flow angle pressure probes was fabricated to properly calibrate the test section for total pressure, static pressure, and flow angle. The probes have nine pressure ports: five total pressure ports on a hemispherical head and four static pressure ports located 14.7 diameters downstream of the head. The probes were calibrated in the NASA Glenn 3.5-in.-diameter free-jet calibration facility. After completing calibration data acquisition for two probes, two data prediction models were evaluated. Prediction errors from a linear discrete model proved to be no worse than those from a full third-order multiple regression model. The linear discrete model only required calibration data acquisition according to an abridged test matrix, thus saving considerable time and financial resources over the multiple regression model that required calibration data acquisition according to a more extensive test matrix. Uncertainties in calibration coefficients and predicted values of flow angle, total pressure, static pressure, Mach number, and velocity were examined. These uncertainties reflect the instrumentation that will be available in the Icing Research Tunnel for future test section calibration testing.
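As an illustration of the regression side of such a probe calibration (not the report's actual linear discrete model), the sketch below fits a full third-order surface of flow angle against two pressure coefficients on synthetic calibration points; the coefficient layout and data are invented:

```python
# Third-order multiple-regression calibration surface alpha(Cp_pitch, Cp_yaw),
# fitted by ordinary least squares on invented calibration data.
import numpy as np

rng = np.random.default_rng(3)
cp_p = rng.uniform(-1, 1, 200)                 # pitch-plane pressure coefficient
cp_y = rng.uniform(-1, 1, 200)                 # yaw-plane pressure coefficient
alpha = 8.0 * cp_p - 1.5 * cp_p * cp_y + 0.7 * cp_p**3 + rng.normal(0, 0.05, 200)

# design matrix with all terms cp_p^i * cp_y^j for i + j <= 3 (10 terms)
cols = [cp_p**i * cp_y**j for i in range(4) for j in range(4 - i)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, alpha, rcond=None)

alpha_hat = A @ coef
print("fit RMS error [deg]:", np.sqrt(np.mean((alpha - alpha_hat)**2)))
```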
NASA Technical Reports Server (NTRS)
Eskins, Jonathan
1988-01-01
The problem of determining the forces and moments acting on a wind tunnel model suspended in a Magnetic Suspension and Balance System is addressed. Two calibration methods were investigated for three types of model cores, i.e., Alnico, Samarium-Cobalt, and a superconducting solenoid. Both methods involve calibrating the currents in the electromagnetic array against known forces and moments. The first is a static calibration method using calibration weights and a system of pulleys. The other method, dynamic calibration, involves oscillating the model and using its inertia to provide calibration forces and moments. Static calibration data, found to produce the most reliable results, is presented for three degrees of freedom at 0, 15, and -10 deg angle of attack. Theoretical calculations are hampered by the inability to represent iron-cored electromagnets. Dynamic calibrations, despite being quicker and easier to perform, are not as accurate as static calibrations. Data for dynamic calibrations at 0 and 15 deg is compared with the relevant static data acquired. Distortion of oscillation traces is cited as a major source of error in dynamic calibrations.
USDA-ARS?s Scientific Manuscript database
Information to support application of hydrologic and water quality (H/WQ) models abounds, yet modelers commonly use arbitrary, ad hoc methods to conduct, document, and report model calibration, validation, and evaluation. Consistent methods are needed to improve model calibration, validation, and e...
Multi-proxy experimental calibration in cold water corals for high resolution paleoreconstructions
NASA Astrophysics Data System (ADS)
Pelejero, C.; Martínez-Dios, A.; Ko, S.; Sherrell, R. M.; Kozdon, R.; López-Sanz, À.; Calvo, E.
2017-12-01
Cold-water corals (CWCs) display an almost cosmopolitan distribution over a wide range of depths. Similar to their tropical counterparts, they can provide continuous, high-resolution records of up to a century or more. Several CWC elemental and isotopic ratios have been suggested as useful proxies, but robust calibrations under controlled conditions in aquaria are needed. Whereas a few such calibrations have been performed for tropical corals, they are still pending for CWCs. This reflects the technical challenges involved in maintaining these slow-growing animals alive during the long-term experiments required to achieve sufficient skeletal growth for geochemical analyses. We will show details of the set up and initial stages of a long-term experiment being run at the ICM (Barcelona), where live specimens (>150) of Desmophyllum dianthus sampled in Comau Fjord (Chile) are kept under controlled and manipulated physical chemistry (temperature, pH, phosphate, barium, cadmium) and feeding conditions. With this set up, we aim to calibrate experimentally several specific elemental ratios, including P/Ca, Ba/Ca, Cd/Ca, B/Ca, U/Ca and Mg/Li, as proxies of nutrient dynamics, pH, carbonate ion concentration and temperature. For the trace element analysis, we are analyzing coral skeletons using Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS), running quantitative analyses on spot sizes of tens of microns, and comparing to micromilling and solution ICP-MS. Preliminary data obtained using these techniques will be presented, as well as measurements of calcification rate. Since cold-water corals are potentially vulnerable to ocean acidification, the same experiment is being exploited to assess potential effects of the pH stressor on D. dianthus; the main findings to date will be summarized.
NASA Astrophysics Data System (ADS)
Jiang, Sanyuan; Jomaa, Seifeddine; Büttner, Olaf; Rode, Michael
2014-05-01
Hydrological water quality modeling is increasingly used for investigating runoff and nutrient transport processes as well as watershed management, but it is mostly unclear how data availability determines model identification. In this study, the HYPE (HYdrological Predictions for the Environment) model, which is a process-based, semi-distributed hydrological water quality model, was applied in two different mesoscale catchments (Selke (463 km²) and Weida (99 km²)) located in central Germany to simulate discharge and inorganic nitrogen (IN) transport. PEST and DREAM(ZS) were combined with the HYPE model to conduct parameter calibration and uncertainty analysis. A split-sample test was used for model calibration (1994-1999) and validation (1999-2004). IN concentration and daily IN load were found to be highly correlated with discharge, indicating that IN leaching is mainly controlled by runoff. Both dynamics and balances of water and IN load were well captured, with NSE greater than 0.83 during the validation period. Multi-objective calibration (calibrating hydrological and water quality parameters simultaneously) was found to outperform step-wise calibration in terms of model robustness. Multi-site calibration was able to improve model performance at internal sites and decrease parameter posterior uncertainty and prediction uncertainty. Nitrogen-process parameters calibrated using continuous daily averages of nitrate-N concentration observations produced better and more robust simulations of IN concentration and load, lower posterior parameter uncertainty and lower IN concentration prediction uncertainty compared to calibration against discontinuous biweekly nitrate-N concentration measurements. Both PEST and DREAM(ZS) are efficient in parameter calibration. However, DREAM(ZS) is more sound in terms of parameter identification and uncertainty analysis than PEST because of its capability to evolve parameter posterior distributions and estimate prediction uncertainty based on global search and Bayesian inference schemes.
Error-in-variables models in calibration
NASA Astrophysics Data System (ADS)
Lira, I.; Grientschnig, D.
2017-12-01
In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
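A minimal frequentist EIV example using orthogonal distance regression, where both the stimulus and the response carry uncertainty; this mirrors the paper's setting in spirit only, and the flow-rate numbers are made up:

```python
# Error-in-variables straight-line calibration via scipy's ODR: unlike ordinary
# least squares, measurement errors on BOTH x (stimulus) and y (response) are
# modeled. Data and sigmas are invented.
import numpy as np
from scipy import odr

rng = np.random.default_rng(4)
x_true = np.linspace(1.0, 10.0, 15)                     # "true" stimuli (flow rates)
x_meas = x_true + rng.normal(0, 0.10, x_true.size)      # imperfectly known standards
y_meas = 2.5 * x_true + 1.0 + rng.normal(0, 0.20, x_true.size)

def line(beta, x):
    return beta[0] * x + beta[1]

data = odr.RealData(x_meas, y_meas, sx=0.10, sy=0.20)   # errors on both variables
out = odr.ODR(data, odr.Model(line), beta0=[1.0, 0.0]).run()
print("slope, intercept:", out.beta, "+/-", out.sd_beta)
```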
Realistic kinetic loading of the jaw system during single chewing cycles: a finite element study.
Martinez Choy, S E; Lenz, J; Schweizerhof, K; Schmitter, M; Schindler, H J
2017-05-01
Although knowledge of short-range kinetic interactions between antagonistic teeth during mastication is of essential importance for ensuring interference-free fixed dental reconstructions, little information is available. In this study, the forces on and displacements of the teeth during kinetic molar biting simulating the power stroke of a chewing cycle were investigated by use of a finite-element model that included all the essential components of the human masticatory system, including an elastic food bolus. We hypothesised that the model can approximate the loading characteristics of the dentition found in previous experimental studies. The simulation was a transient analysis, that is, it considered the dynamic behaviour of the jaw. In particular, the reaction forces on the teeth and joints arose from contact, rather than nodal forces or constraints. To compute displacements of the teeth, the periodontal ligament (PDL) was modelled by use of an Ogden material model calibrated on the basis of results obtained in previous experiments. During the initial holding phase of the power stroke, bite forces were aligned with the roots of the molars until substantial deformation of the bolus occurred. The forces tilted the molars in the bucco-lingual and mesio-distal directions, but as the intrusive force increased the teeth returned to their initial configuration. The Ogden material model used for the PDL enabled accurate prediction of the displacements observed in experimental tests. In conclusion, the comprehensive kinetic finite element model reproduced the kinematic and loading characteristics of previous experimental investigations. © 2017 John Wiley & Sons Ltd.
New temperature model of the Netherlands from new data and novel modelling methodology
NASA Astrophysics Data System (ADS)
Bonté, Damien; Struijk, Maartje; Békési, Eszter; Cloetingh, Sierd; van Wees, Jan-Diederik
2017-04-01
Deep geothermal energy has grown in interest in Western Europe in the last decades, for direct use but also, as the knowledge of the subsurface improves, for electricity generation. In the Netherlands, where the sector took off with the first system in 2005, geothermal energy is seen has a key player for a sustainable future. The knowledge of the temperature subsurface, together with the available flow from the reservoir, is an important factor that can determine the success of a geothermal energy project. To support the development of deep geothermal energy system in the Netherlands, we have made a first assessment of the subsurface temperature based on thermal data but also on geological elements (Bonté et al, 2012). An outcome of this work was ThermoGIS that uses the temperature model. This work is a revision of the model that is used in ThermoGIS. The improvement from the first model are multiple, we have been improving not only the dataset used for the calibration and structural model, but also the methodology trough an improved software (called b3t). The temperature dataset has been updated by integrating temperature on the newly accessible wells. The sedimentary description in the basin has been improved by using an updated and refined structural model and an improved lithological definition. A major improvement in from the methodology used to perform the modelling, with b3t the calibration is made not only using the lithospheric parameters but also using the thermal conductivity of the sediments. The result is a much more accurate definition of the parameters for the model and a perfected handling of the calibration process. The result obtain is a precise and improved temperature model of the Netherlands. The thermal conductivity variation in the sediments associated with geometry of the layers is an important factor of temperature variations and the influence of the Zechtein salt in the north of the country is important. In addition, the radiogenic heat production in the crust shows a significant impact. From the temperature values, also identify in the lower part of the basin, deep convective systems that could be major geothermal energy target in the future.
Walsh, Colin G; Sharman, Kavya; Hripcsak, George
2017-12-01
Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are not frequently discussed. Calibration is a model's reflection of actual outcome prevalence in its predictions. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision analytic approach to calibrating and selecting an optimal intervention threshold may help maximize the impact of readmission risk and other preventive interventions. To select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and that performs well in practice. To evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses. Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed. C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. Logistic Calibration and Platt Scaling performed best and this difference required analyzing multiple metrics of calibration simultaneously, in particular Calibration Slopes and Intercepts. Clinical usefulness analyses provided optimal risk thresholds, which varied by reason for readmission, outcome prevalence, and calibration algorithm. Utility analyses also suggested maximum tolerable intervention costs, e.g., $1720 for all-cause readmissions based on a published cost of readmission of $11,862. Choice of calibration method depends on availability of validation data and on performance. Improperly calibrated models may contribute to higher costs of intervention as measured via clinical usefulness. Decision-makers must understand underlying utilities or costs inherent in the use-case at hand to assess usefulness and will obtain the optimal risk threshold to trigger intervention with intervention cost limits as a result. Copyright © 2017 Elsevier Inc. All rights reserved.
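A sketch of Platt scaling plus a cost-based intervention threshold follows. The simple decision rule below (intervene when the calibrated probability exceeds the intervention-to-readmission cost ratio, i.e. assuming a fully effective intervention) is a simplification of the paper's utility analysis; the dollar figures echo the abstract, but the rule itself is an assumption:

```python
# Platt scaling: refit a logistic map from raw model scores to probabilities on
# validation data, then pick an intervention threshold from costs. Data synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
scores = rng.normal(0, 2, 2000)                              # raw validation scores
y = (rng.random(2000) < 1 / (1 + np.exp(-(0.8 * scores - 1.0)))).astype(int)

platt = LogisticRegression().fit(scores.reshape(-1, 1), y)   # Platt scaling
p = platt.predict_proba(scores.reshape(-1, 1))[:, 1]         # calibrated probabilities

cost_intervention, cost_readmission = 1720.0, 11862.0        # figures from the abstract
threshold = cost_intervention / cost_readmission             # simplified decision rule
print(f"intervene when p >= {threshold:.3f}; flagged = {(p >= threshold).mean():.1%}")
```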
Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.
2013-01-01
DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional “trial and error” approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residual difference between measured and modelled outputs by up to 67 %. For the calibration period, simulation with the default model parameter values underestimated mean daily N2O flux by 98 %. After parameter estimation, the model underestimated the mean daily fluxes by 35 %. During the validation period, the calibrated model reduced sum of weighted squared residuals by 20 % relative to the default simulation. Sensitivity analysis performed provides important insights into the model structure providing guidance for model improvement.
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens
We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "submodel" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
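A minimal stand-in for the sub-model idea: train separate PLS regressions on overlapping composition ranges and blend their predictions using a full-range model's first-pass estimate. ChemCam's actual blending is more involved; the ranges, weights, and data here are invented:

```python
# Sub-model blending sketch: a full-range PLS picks the blending weight, then
# low- and high-range sub-models are combined. Everything here is synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
X = rng.random((300, 50))
y = 100 * X[:, 0] + 5 * rng.standard_normal(300)     # fake composition (wt%)
lo, hi = y < 40, y >= 25                             # overlapping composition ranges

full   = PLSRegression(n_components=5).fit(X, y)
sub_lo = PLSRegression(n_components=5).fit(X[lo], y[lo])
sub_hi = PLSRegression(n_components=5).fit(X[hi], y[hi])

def blended_predict(Xnew):
    first = full.predict(Xnew).ravel()               # first pass selects the weights
    w_hi = np.clip((first - 25) / (40 - 25), 0, 1)   # linear ramp across the overlap
    return ((1 - w_hi) * sub_lo.predict(Xnew).ravel()
            + w_hi * sub_hi.predict(Xnew).ravel())

print(blended_predict(rng.random((5, 50))))
```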
NASA Technical Reports Server (NTRS)
Clayton, J. Louie
2002-01-01
This study provides development and verification of analysis methods used to assess performance of a carbon fiber rope (CFR) thermal barrier system that is currently being qualified for use in Reusable Solid Rocket Motor (RSRM) nozzle joint-2. Modeled geometry for flow calculations considers the joint to be vented with the porous CFR barriers placed in the 'open' assembly gap. Model development is based on a 1-D volume filling approach where flow resistances (assembly gap and CFRs) are defined by serially connected internal flow and the porous media 'Darcy' relationships. Combustion gas flow rates are computed using the volume filling code by assuming a lumped distribution total joint fill volume on a per linear circumferential inch basis. Gas compressibility, friction and heat transfer are included in the modeling. Gas-to-wall heat transfer is simulated by concurrent solution of the compressible flow equations and a large thermal 2-D finite element (FE) conduction grid. The derived numerical technique loosely couples the FE conduction matrix with the compressible gas flow equations. Free constants that appear in the governing equations are calibrated by parametric model comparison to hot fire subscale test results. The calibrated model is then used to make full-scale motor predictions using RSRM aft dome environments. Model results indicate that CFR thermal barrier systems will provide a thermally benign and controlled pressurization environment for the RSRM nozzle joint-2 primary seal activation.
NASA Technical Reports Server (NTRS)
Clayton, J. Louie; Phelps, Lisa (Technical Monitor)
2001-01-01
This study provides for development and verification of analysis methods used to assess performance of a carbon fiber rope (CFR) thermal barrier system that is currently being qualified for use in Reusable Solid Rocket Motor (RSRM) nozzle joint-2. Modeled geometry for flow calculations considers the joint to be vented with the porous CFR barriers placed in the 'open' assembly gap. Model development is based on a 1-D volume filling approach where flow resistances (assembly gap and CFRs) are defined by serially connected internal flow and the porous media 'Darcy' relationships. Combustion gas flow rates are computed using the volume filling code by assuming a lumped distribution total joint fill volume on a per linear circumferential inch basis. Gas compressibility, friction and heat transfer are included in the modeling. Gas-to-wall heat transfer is simulated by concurrent solution of the compressible flow equations and a large thermal 2-D finite element (FE) conduction grid. The derived numerical technique loosely couples the FE conduction matrix with the compressible gas flow equations. Free constants that appear in the governing equations are calibrated by parametric model comparison to hot fire subscale test results. The calibrated model is then used to make full-scale motor predictions using RSRM aft dome environments. Model results indicate that CFR thermal barrier systems will provide a thermally benign and controlled pressurization environment for the RSRM nozzle joint-2 primary seal activation.
Chen, Xuanzhen; Peng, Yong; Peng, Shan; Yao, Song; Chen, Chao; Xu, Ping
2017-01-01
This study aims to investigate the flow and fracture behavior of aluminum alloy 6082-T6 (AA6082-T6) at different strain rates and triaxialities. Two groups of Charpy impact tests were carried out to further investigate its dynamic impact fracture properties. A series of tensile tests and numerical simulations based on finite element analysis (FEA) were performed. Experimental data on smooth specimens under various strain rates ranging from 0.0001 to 3400 s-1 show that AA6082-T6 is rather insensitive to strain rates in general. However, clear rate sensitivity was observed in the range of 0.001~1 s-1, while such a characteristic is counteracted by the adiabatic heating of specimens under high strain rates. A Johnson-Cook constitutive model was proposed based on tensile tests at different strain rates. In this study, the average stress triaxiality and equivalent plastic strain at fracture obtained from numerical simulations were used for the calibration of the J-C fracture model. Both the J-C constitutive model and the fracture model were employed in numerical simulations, and the results were compared with experimental results. The calibrated J-C fracture model exhibits higher accuracy than the J-C fracture model obtained by the common method in predicting the fracture behavior of AA6082-T6. Finally, Scanning Electron Microscope (SEM) fractographs of fractured specimens with different initial stress triaxialities were analyzed. The magnified fractographs indicate that high initial stress triaxiality likely results in dimple fracture.
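For orientation, the Johnson-Cook flow stress in its standard form; the constants below are placeholders, not the calibrated AA6082-T6 values from this study:

```python
# Johnson-Cook flow stress: sigma = (A + B*eps^n)(1 + C*ln(eps_dot*))(1 - T*^m).
# A, B, n, C, m, reference rate, and temperatures are PLACEHOLDER values.
import numpy as np

def johnson_cook(eps_p, eps_dot, T, A=250.0, B=200.0, n=0.3, C=0.01, m=1.0,
                 eps_dot0=1e-3, T_room=293.0, T_melt=855.0):
    """Flow stress [MPa] for plastic strain eps_p, strain rate eps_dot [1/s], T [K]."""
    rate = 1.0 + C * np.log(np.maximum(eps_dot / eps_dot0, 1e-12))  # rate hardening
    T_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)    # homologous temp.
    return (A + B * eps_p**n) * rate * (1.0 - T_star**m)

eps = np.linspace(0.0, 0.2, 5)
print(johnson_cook(eps, eps_dot=3400.0, T=293.0))   # high-rate, room temperature
```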
A Method to Test Model Calibration Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, Ron; Polly, Ben; Neymark, Joel
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
High temperature blackbody BB2000/40 for calibration of radiation thermometers and thermocouples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogarev, S. A.; Khlevnoy, B. B.; Samoylov, M. L.
2013-09-11
The cavity-type high temperature blackbody (HTBB) models of the BB3200/3500 series are the most widespread among metrological institutes worldwide as sources for radiometry and radiation thermometry, due to their ultra-high working temperatures, high emissivity and stability. The materials of the radiating cavities are graphite, pyrolytic graphite (PG) and their combination. The paper describes the BB2000/40 blackbody with a graphite-tube cavity that was developed for calibration of radiation thermometers at SCEI (Singapore). The peculiarity of the BB2000/40 is the possibility to use it, besides calibration of pyrometers, as an instrument for thermocouple calibration. Operating within the temperature range from 900 °C to 2000 °C, the blackbody has a wide cavity opening of 40 mm. Emissivity of the cavity, with PG heater rings replaced partly by graphite elements, was estimated as 0.998 ± 0.0015 in the spectral range from 350 nm to 2000 nm. The uniformity along the cavity axis, amounting to 10 °C, was measured using a B-type thermocouple at 1500 °C. The BB2000/40, if necessary, can be easily modified, by replacing the graphite radiator with a set of PG rings, to reach temperatures as high as 3200 °C. The HTBB utilizes an optical feedback system which allows temperature stabilization within 0.1 °C. This rear-view feedback allows the whole HTBB aperture to be used for measurements.
NASA Astrophysics Data System (ADS)
Bau, Sébastien; Toussaint, André; Payet, Raphaël; Witschger, Olivier
2017-06-01
Strategies for measuring occupational exposure to aerosols composed of nanoparticles and/or ultrafine particles highlight the use of techniques for determining airborne-particle number concentration as well as number size distribution. The objective of the present work was to set up a system for conducting laboratory verification campaigns of condensation particle counters (CPCs). Providing intercomparison data as well as calibrating and checking CPCs are among the key elements in ensuring reliable laboratory or field measurement campaigns. For this purpose, the reproducible aerosol source “Calibration Tool”, initially developed by the Fraunhofer ITEM, was acquired by the Laboratory of Aerosol Metrology at INRS. As a first part of this study, a detailed characterization of the Calibration Tool developed at the laboratory is the subject of the parametric study presented here. The complete installation is named the “DCC” for “Device for Counter Check”. Used in combination with a reference counter, the DCC can now be used for routine laboratory measurements. Unlike that used for primary calibration of a CPC, the proposed protocol allows a wide range of number concentrations and particle sizes to be investigated and reproduced. The second part of this work involves comparison of the number concentrations measured by several models of CPC in parallel at the exit of a flow splitter, with respect to a reference.
A new method to calibrate Lagrangian model with ASAR images for oil slick trajectory.
Tian, Siyu; Huang, Xiaoxia; Li, Hongga
2017-03-15
Since Lagrangian model coefficients vary with conditions, it is necessary to calibrate the model to obtain the optimal coefficient combination for a specific oil spill accident. This paper proposes a new method to calibrate a Lagrangian model with a time series of Envisat ASAR images. Oil slicks extracted from the time-series images form a detected trajectory of the oil slick. The Lagrangian model is calibrated by minimizing the difference between the simulated trajectory and the detected trajectory. The mean center position distance difference (MCPD) and the rotation difference (RD) of the oil slicks' or particles' standard deviational ellipses (SDEs) are calculated as two evaluation metrics. These two metrics are used to evaluate the performance of the Lagrangian transport model with different coefficient combinations. The method is applied to the Penglai 19-3 oil spill accident. The simulation result with the calibrated model agrees well with the related satellite observations, suggesting that the new method is effective for calibrating Lagrangian models.
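A sketch of how the two metrics might be computed from 2-D point sets (slick pixels or model particles); the eigenvector-based SDE orientation and all names here are illustrative assumptions, not the authors' code:

```python
import numpy as np

def sde(points):
    """Mean center and orientation (radians) of the standard deviational
    ellipse of a 2-D point set."""
    pts = np.asarray(points, float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)
    # Major-axis direction = eigenvector of the largest eigenvalue.
    w, v = np.linalg.eigh(cov)
    major = v[:, np.argmax(w)]
    return center, np.arctan2(major[1], major[0])

def mcpd_rd(slick_obs, slick_sim):
    """Mean center position distance (MCPD) and rotation difference (RD)
    between the observed and simulated slick SDEs."""
    c_obs, th_obs = sde(slick_obs)
    c_sim, th_sim = sde(slick_sim)
    mcpd = np.linalg.norm(c_obs - c_sim)
    rd = abs(th_obs - th_sim) % np.pi      # axes are orientation-symmetric
    return mcpd, min(rd, np.pi - rd)
```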
Method and apparatus for multiple-projection, dual-energy x-ray absorptiometry scanning
NASA Technical Reports Server (NTRS)
Feldmesser, Howard S. (Inventor); Magee, Thomas C. (Inventor); Charles, Jr., Harry K. (Inventor); Beck, Thomas J. (Inventor)
2007-01-01
Methods and apparatuses for advanced, multiple-projection, dual-energy X-ray absorptiometry scanning systems include combinations of a conical collimator; a high-resolution two-dimensional detector; a portable, power-capped, variable-exposure-time power supply; an exposure-time control element; calibration monitoring; a three-dimensional anti-scatter-grid; and a gantry-gantry base assembly that permits up to seven projection angles for overlapping beams. Such systems are capable of high precision bone structure measurements that can support three dimensional bone modeling and derivations of bone strength, risk of injury, and efficacy of countermeasures among other properties.
Automatic Calibration of a Semi-Distributed Hydrologic Model Using Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Bekele, E. G.; Nicklow, J. W.
2005-12-01
Hydrologic simulation models need to be calibrated and validated before they are used for operational predictions. Spatially-distributed hydrologic models generally have a large number of parameters to capture the various physical characteristics of a hydrologic system. Manual calibration of such models is a tedious and daunting task, and its success depends on the subjective assessment of a particular modeler, which includes knowledge of the basic approaches and interactions in the model. In order to alleviate these shortcomings, an automatic calibration model, which employs an evolutionary optimization technique known as the Particle Swarm Optimizer (PSO) for parameter estimation, is developed. PSO is a heuristic search algorithm inspired by the social behavior of bird flocking and fish schooling. The newly-developed calibration model is integrated with the U.S. Department of Agriculture's Soil and Water Assessment Tool (SWAT). SWAT is a physically-based, semi-distributed hydrologic model that was developed to predict the long-term impacts of land management practices on water, sediment and agricultural chemical yields in large complex watersheds with varying soils, land use, and management conditions. SWAT was calibrated for streamflow and sediment concentration. The calibration process involves parameter specification, whereby sensitive model parameters are identified, and parameter estimation. In order to reduce the number of parameters to be calibrated, parameterization was performed. The methodology is applied to a demonstration watershed known as Big Creek, located in southern Illinois. Application results show the effectiveness of the approach, and model predictions are significantly improved.
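For illustration, a generic particle swarm optimizer of the kind described (not the authors' implementation; all parameter values are placeholders) can be written compactly. The objective would wrap a SWAT run and return, e.g., the RMSE between simulated and observed streamflow:

```python
import numpy as np

def pso_calibrate(objective, lower, upper, n_particles=30, n_iter=200,
                  w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for parameter estimation.

    `objective(theta)` is any black-box calibration error for parameter
    vector theta; `lower`/`upper` bound the search space."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    x = rng.uniform(lower, upper, (n_particles, dim))      # positions
    v = np.zeros_like(x)                                   # velocities
    pbest = x.copy()                                       # personal bests
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()                   # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lower, upper)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()
```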
Study on on-machine defects measuring system on high power laser optical elements
NASA Astrophysics Data System (ADS)
Luo, Chi; Shi, Feng; Lin, Zhifan; Zhang, Tong; Wang, Guilin
2017-10-01
Surface defects on high power laser optical elements harm the performance of the imaging system, including energy consumption and damage to the film layer. To improve the detection of surface defects on high power laser optical elements, an on-machine defects measuring system was investigated. First, the selection and design of the system were completed through a working-condition analysis of the on-machine defects detection system. Image-processing algorithms were designed to realize the classification, recognition, and evaluation of surface defects. A calibration experiment for scratches was performed using a self-made standard alignment plate. Finally, the detection and evaluation of surface defects on a large-diameter semi-cylindrical silicon mirror were realized. The calibration results show that the size deviation is less than 4%, which meets the precision requirement for defect detection. Through image analysis, the on-machine defects detection system can accurately identify surface defects.
LIBS coupled with ICP/OES for the spectral analysis of betel leaves
NASA Astrophysics Data System (ADS)
Rehan, I.; Rehan, K.; Sultana, S.; Khan, M. Z.; Muhammad, R.
2018-05-01
A laser-induced breakdown spectroscopy (LIBS) system was optimized and applied for elemental analysis and detection of heavy metals in betel leaves in air. A pulsed Nd:YAG laser (1064 nm) in conjunction with a suitable detector (LIBS 2000+, Ocean Optics, Inc.) having an optical resolution of 0.06 nm was used to record the emission spectra from 200 to 720 nm. Elements including Al, Ba, Ca, Cr, Cu, Fe, K, Mg, Mn, Na, P, S, Sr, and Zn were found to be present in the samples. The abundances of the observed elements were calculated through the normalized calibration curve method, the integrated intensity ratio method, and the calibration-free LIBS approach. Quantitative analyses were accomplished under the assumption of local thermodynamic equilibrium (LTE) and optically thin plasma. The LIBS findings were validated by comparison with results obtained using the standard analytical technique of inductively coupled plasma-optical emission spectroscopy (ICP/OES). The limit of detection (LOD) of the LIBS system was also estimated for the heavy metals.
Design and Lessons Learned on the Development of a Cryogenic Pupil Select Mechanism (PSM)
NASA Technical Reports Server (NTRS)
Mitchell, Alissa L.; Capon, Thomas L.; Hakun, Claef; Haney, Paul; Koca, Corina; Guzek, Jeffrey
2014-01-01
Calibration and testing of the instruments on the Integrated Science Instrument Module (ISIM) of the James Webb Space Telescope (JWST) is being performed by the use of a cryogenic, full-field, optical simulator that was constructed for this purpose. The Pupil Select Mechanism (PSM) assembly is one of several mechanisms and optical elements that compose the Optical Telescope Element SIMulator, or OSIM. The PSM allows for several optical elements to be inserted into the optical plane of OSIM, introducing a variety of aberrations, distortions, obscurations, and other calibration states into the pupil plane. The following discussion focuses on the details of the design evolution, analysis, build, and test of this mechanism along with the challenges associated with creating a sub arc-minute positioning mechanism operating in an extreme cryogenic environment. In addition, difficult challenges in the control system design will be discussed including the incorporation of closed-loop feedback control into a system that was designed to operate in an open-loop fashion.
Boundary-Layer Instability Measurements in a Mach-6 Quiet Tunnel
NASA Technical Reports Server (NTRS)
Berridge, Dennis C.; Ward, Christopher, A. C.; Luersen, Ryan P. K.; Chou, Amanda; Abney, Andrew D.; Schneider, Steven P.
2012-01-01
Several experiments have been performed in the Boeing/AFOSR Mach-6 Quiet Tunnel at Purdue University. A 7 degree half angle cone at 6 degree angle of attack with temperature-sensitive paint (TSP) and PCB pressure transducers was tested under quiet flow. The stationary crossflow vortices appear to break down to turbulence near the lee ray for sufficiently high Reynolds numbers. Attempts to use roughness elements to control the spacing of hot streaks on a flared cone in quiet flow did not succeed. Roughness was observed to damp the second-mode waves in areas influenced by the roughness, and wide roughness spacing allowed hot streaks to form between the roughness elements. A forward-facing cavity was used for proof-of-concept studies for a laser perturber. The lowest density at which the freestream laser perturbations could be detected was 1.07 x 10(exp -2) kilograms per cubic meter. Experiments were conducted to determine the transition characteristics of a streamwise corner flow at hypersonic velocities. Quiet flow resulted in a delayed onset of hot streak spreading. Under low Reynolds number flow hot streak spreading did not occur along the model. A new shock tube has been built at Purdue. The shock tube is designed to create weak shocks suitable for calibrating sensors, particularly PCB-132 sensors. PCB-132 measurements in another shock tube show the shock response and a linear calibration over a moderate pressure range.
A Temperature-Based Gain Calibration Technique for Precision Radiometry
NASA Astrophysics Data System (ADS)
Parashare, Chaitali Ravindra
Detecting extremely weak signals in radio astronomy demands high sensitivity and stability of the receivers. The gain of a typical radio astronomy receiver is extremely large, and therefore even very small gain instabilities can dominate the received noise power and degrade the instrument sensitivity. Hence, receiver stabilization is of prime importance. Gain variations occur mainly due to ambient temperature fluctuations. We take a new approach to receiver stabilization, which makes use of active temperature monitoring and corrects for the gain fluctuations in post-processing. This approach is purely passive and does not include noise injection or switching for calibration. This system is to be used for the Precision Array for Probing the Epoch of Reionization (PAPER), which is being developed to detect the extremely faint neutral hydrogen (HI) signature of the Epoch of Reionization (EoR). The epoch of reionization refers to the period in the history of the Universe when the first stars and galaxies started to form. When there are N antenna elements in a large-scale array, all elements may not be subjected to the same environmental conditions at a given time. Hence, we expect to mitigate the gain variations by monitoring the physical temperature of each element of the array. This stabilization approach will also benefit experiments like EDGES (Experiment to Detect the Global EoR Signature) and DARE (Dark Ages Radio Explorer), which involve a direct measurement of the global 21 cm signal using a single antenna element and hence require an extremely stable system. This dissertation focuses on the development and evaluation of a calibration technique that compensates for the gain variations caused by temperature fluctuations of the RF components. It carefully examines the temperature dependence of the components in the receiver chain. The results from the first-order field instrument, called a Gainometer (GoM), highlight the issue with the cable temperature, which varies significantly with different climatic conditions. The model used to correct for gain variations is presented, and we describe the measurements performed to verify it. RFI is a major issue at low frequencies, which makes these kinds of measurements extremely challenging. We discuss the careful measures required to mitigate the errors due to unwanted interference. In the laboratory measurements, the model follows the measured power closely and shows an improvement in the gain stability by a factor of ~46 when the corrections are applied; the gain stability (rms to mean) improves from 1 part in 32 to 1 part in 1500. The field measurements suggest that correcting for cable temperature variations is challenging: the improvement in gain stability is by a factor of ~4.3 when the RF front-end components are situated out in the field. The results are analyzed using statistical methods such as the standard error of the mean, the run test, skewness, and kurtosis. These tests demonstrate the normal distribution of the process when the corrections are applied and confirm an effective gain bias removal. The results obtained from a sky observation using a single antenna element are compared before and after applying the corrections. Several days' data verify that the power fluctuations are significantly reduced after the gain corrections are applied.
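A minimal sketch of the post-processing idea, assuming a simple linear power-versus-temperature model; the dissertation's actual gain model is more detailed, and all inputs here are hypothetical time series:

```python
import numpy as np

def gain_correct(power, temps):
    """Remove temperature-driven gain drift from radiometer power readings."""
    power = np.asarray(power, float)
    temps = np.asarray(temps, float)
    slope, intercept = np.polyfit(temps, power, 1)   # linear gain-vs-T model
    model = intercept + slope * temps
    corrected = power / (model / model.mean())       # divide out the drift
    stability = lambda p: p.std() / p.mean()         # rms-to-mean metric
    return corrected, stability(power), stability(corrected)
```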
NASA Astrophysics Data System (ADS)
Łazarek, Łukasz; Antończak, Arkadiusz J.; Wójcik, Michał R.; Drzymała, Jan; Abramski, Krzysztof M.
2014-07-01
Laser-induced breakdown spectroscopy (LIBS), like many other spectroscopic techniques, is a comparative method. Typically, a synthetic certified standard with a well-known elemental composition is used to calibrate the system. Nevertheless, in all laser-induced techniques, such calibration can affect the accuracy through differences in the overall composition of the chosen standard. There are also some intermediate factors which can cause imprecision in measurements, such as optical absorption, surface structure and thermal conductivity. In this work, the calibration performed for the LIBS technique utilizes pellets made directly from the tested materials (old, well-characterized samples). This choice produces a considerable improvement in the accuracy of the method. The technique was adopted for the determination of trace elements in industrial copper concentrates, standardized by conventional atomic absorption spectroscopy with a flame atomizer. A series of copper flotation concentrate samples was analyzed for three elements: silver, cobalt and vanadium. We also propose a method of post-processing the measurement data to minimize matrix effects and permit reliable analysis. It has been shown that the described technique can be used in qualitative and quantitative analyses of complex inorganic materials, such as copper flotation concentrates. It was noted that the final validation of such a methodology is limited mainly by the accuracy of the characterization of the standards.
NASA Astrophysics Data System (ADS)
Łazarek, Łukasz; Antończak, Arkadiusz J.; Wójcik, Michał R.; Kozioł, Paweł E.; Stepak, Bogusz; Abramski, Krzysztof M.
2014-08-01
Laser-induced breakdown spectroscopy (LIBS) is a fast, fully optical method that needs little or no sample preparation. In this technique, qualitative and quantitative analysis is based on comparison. The determination of composition is generally based on the construction of a calibration curve, namely the LIBS signal versus the concentration of the analyte. Typically, certified reference materials with known elemental composition are used to calibrate the system. Nevertheless, such samples, due to differences in overall composition with respect to the complex inorganic materials being analyzed, can significantly affect the accuracy. There are also some intermediate factors which can cause imprecision in measurements, such as optical absorption, surface structure, and thermal conductivity. This paper presents a calibration procedure performed with specially prepared pellets made from the tested materials, whose composition was previously defined. We also propose post-processing methods which allow mitigation of the matrix effects and a reliable and accurate analysis. The technique was implemented for the determination of trace elements in industrial copper concentrates standardized by conventional atomic absorption spectroscopy with a flame atomizer. A series of copper flotation concentrate samples was analyzed for the contents of three elements: silver, cobalt and vanadium. It has been shown that the described technique can be used for qualitative and quantitative analyses of complex inorganic materials, such as copper flotation concentrates.
Trace analysis of high-purity graphite by LA-ICP-MS.
Pickhardt, C; Becker, J S
2001-07-01
Laser-ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) has been established as a very efficient and sensitive technique for the direct analysis of solids. In this work the capability of LA-ICP-MS was investigated for the determination of trace elements in high-purity graphite. Synthetic laboratory standards with a graphite matrix were prepared for the purpose of quantifying the analytical results. Doped trace elements, at a concentration of 0.5 microg g(-1), in a laboratory standard were determined with an accuracy of ±1% to ±7% and a relative standard deviation (RSD) of 2-13%. Solution-based calibration was also used for quantitative analysis of high-purity graphite. It was found that such calibration led to analytical results for trace-element determination in graphite with accuracy similar to that obtained by use of synthetic laboratory standards. Results from quantitative determination of trace impurities in a real reactor-graphite sample, using both quantification approaches, were in good agreement. Detection limits for all elements of interest were in the low ng g(-1) concentration range. A tenfold improvement in detection limits was achieved for analyses of high-purity graphite by LA-ICP-MS under wet plasma conditions, because of the lower background signal and increased element sensitivity.
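For illustration, a 3-sigma detection limit from a linear calibration curve might be computed as below; the numbers and units are placeholders, not the paper's data:

```python
import numpy as np

def detection_limit(conc, intensity, blank_intensities):
    """3-sigma detection limit from a linear calibration curve.

    `conc`/`intensity` are calibration points (e.g., doped laboratory
    standards); `blank_intensities` are repeat measurements of the
    undoped matrix. All values below are illustrative only."""
    slope, _ = np.polyfit(conc, intensity, 1)
    return 3.0 * np.std(blank_intensities, ddof=1) / slope

# Hypothetical calibration (conc in microg/g, intensities in counts):
lod = detection_limit([0.1, 0.25, 0.5, 1.0], [210, 520, 1015, 2050],
                      [4.8, 5.3, 5.1, 4.6, 5.0])
print(f"LOD = {lod:.4f} microg/g")
```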
Multi-injector modeling of transverse combustion instability experiments
NASA Astrophysics Data System (ADS)
Shipley, Kevin J.
Concurrent simulations and experiments are used to study combustion instabilities in a multiple-injector combustion chamber. The experiments employ a linear array of seven coaxial injector elements positioned atop a rectangular chamber. Different levels of instability are driven in the combustor by varying the operating and geometry parameters of the outer driving injector elements located near the chamber end-walls. The objectives of the study are to apply a reduced three-injector model to generate a computational test bed for the evaluation of injector response to transverse instability, to apply a full seven-injector model to investigate the inter-element coupling between injectors in response to transverse instability, and to further develop this integrated approach as a key element in a predictive methodology that relies heavily on subscale test and simulation. To measure the effects of the transverse wave on a central study injector element, two opposing windows are placed in the chamber to allow optical access. The chamber is extensively instrumented with high-frequency pressure transducers. High-fidelity computational fluid dynamics simulations, specifically three-dimensional detached eddy simulations (DES), are used to model the experiment. Two computational approaches are investigated. The first approach models the combustor with the three center injectors and forces transverse waves in the chamber with a wall velocity function at the chamber side walls. Different levels of pressure oscillation amplitude are possible by varying the amplitude of the forcing function. The purpose of this method is to focus on the combustion response of the study element. In the second approach, all seven injectors are modeled and self-excited combustion instability is achieved. This realistic model of the chamber allows the study of inter-element flow dynamics, e.g., how the resonant motions in the injector tubes are coupled through the transverse pressure waves in the chamber. The computational results are analyzed and compared with experimental results in the time, frequency and modal domains. Results from the three-injector model show how different velocity forcing amplitudes change the amplitude and spatial location of heat release from the center injector. The instability amplitudes in the simulation could be tuned to the experiments and produce similar modal combustion responses of the center injector. The reaction model was found to play an important role in the spatial and temporal heat release response; only when the model was calibrated to ignition delay measurements did the heat release response reflect measurements in the experiment. While insightful, the simulations are not truly predictive, because the driving frequency and forcing-function amplitude are inputs to the simulation. However, the use of this approach as a tool to investigate combustion response is demonstrated. Results from the seven-injector simulations provide an insightful look at the mechanisms driving the instability in the combustor. The instability was studied over a range of pressure fluctuations, up to 70% of mean chamber pressure in the self-excited simulation. At low amplitudes the transverse instability was found to be supported by both flame impingement on the side wall and vortex shedding at the primary acoustic frequency.
As the instability level grew, the primary supporting mechanism shifted to vortex impingement on the side walls, and the greatest growth was seen as additional vortices began impinging between injector elements at the primary acoustic frequency. This research reveals the advantages and limitations of applying these two modeling techniques to simulate multiple-injector experiments. The advantage of the three-injector model is a simplified geometry, which results in faster model development and the ability to more rapidly study the injector response under varying velocity amplitudes. The possibly faster run time is offset, though, by the need to run multiple cases to calibrate the model to the experiment. The model is also limited to studying the central-injector effect and lacks heat release sources from the outer injectors and the additional vortex interactions shown in the seven-injector simulation. The advantage of the seven-injector model is that the whole domain can be explored to provide a better understanding of influential processes, but it does require longer development and run times due to the extensive gridding requirement. Both simulations have proven useful in exploring transverse combustion instability and show the need to further develop subscale experiments and companion simulations toward a full-scale combustion instability prediction capability.
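As an aside, the modal-domain comparison mentioned above typically reduces to extracting mode amplitudes from transducer traces; a sketch under assumed inputs (sample rate, expected transverse-mode frequency), not the author's analysis code:

```python
import numpy as np

def mode_amplitude(p, fs, f_mode, half_width=50.0):
    """Pressure amplitude of one acoustic mode from a transducer trace.

    `p` is a high-frequency pressure signal (Pa), `fs` the sample rate (Hz),
    `f_mode` the expected mode frequency; all values are hypothetical."""
    p = np.asarray(p, float) - np.mean(p)        # fluctuating part p'
    w = np.hanning(p.size)
    spec = np.fft.rfft(p * w)
    freq = np.fft.rfftfreq(p.size, d=1.0 / fs)
    band = (freq > f_mode - half_width) & (freq < f_mode + half_width)
    # For a windowed sinusoid the peak magnitude is ~A*sum(w)/2,
    # so rescale the band peak back to a physical amplitude A.
    return 2.0 * np.abs(spec[band]).max() / w.sum()
```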
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Why Bother and Calibrate? Model Consistency and the Value of Prior Information.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.
2014-12-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Parameter Calibration and Numerical Analysis of Twin Shallow Tunnels
NASA Astrophysics Data System (ADS)
Paternesi, Alessandra; Schweiger, Helmut F.; Scarpelli, Giuseppe
2017-05-01
Prediction of displacements and lining stresses in underground openings represents a challenging task. The main reason is primarily related to the complexity of this ground-structure interaction problem and secondly to the difficulties in obtaining a reliable geotechnical characterisation of the soil or the rock. In any case, especially when class A predictions fail in forecasting the system behaviour, performing class B or C predictions, which rely on a higher level of knowledge of the surrounding ground, can represent a useful resource for identifying and reducing model deficiencies. The case study presented in this paper deals with the construction works of twin-tube shallow tunnels excavated in a stiff and fine-grained deposit. The work initially focuses on the ground parameter calibration against experimental data, which together with the choice of an appropriate constitutive model plays a major role in the assessment of tunnelling-induced deformations. Since two-dimensional analyses imply initial assumptions to take into account the effect of the 3D excavation, three-dimensional finite element analyses were preferred. Comparisons between monitoring data and results of numerical simulations are provided. The available field data include displacements and deformation measurements regarding both the ground and tunnel lining.
Hydrological processes and model representation: impact of soft data on calibration
J.G. Arnold; M.A. Youssef; H. Yen; M.J. White; A.Y. Sheshukov; A.M. Sadeghi; D.N. Moriasi; J.L. Steiner; Devendra Amatya; R.W. Skaggs; E.B. Haney; J. Jeong; M. Arabi; P.H. Gowda
2015-01-01
Hydrologic and water quality models are increasingly used to determine the environmental impacts of climate variability and land management. Due to differing model objectives and differences in monitored data, there are currently no universally accepted procedures for model calibration and validation in the literature. In an effort to develop accepted model calibration...
An evaluation of the Johnson-Cook model to simulate puncture of 7075 aluminum plates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corona, Edmundo; Orient, George Edgar
The objective of this project was to evaluate the use of the Johnson-Cook strength and failure models in an adiabatic finite element model to simulate the puncture of 7075-T651 aluminum plates that were studied as part of an ASC L2 milestone by Corona et al. (2012). The Johnson-Cook model parameters were determined from material test data. The results show a marked improvement, in particular in the calculated threshold velocity between no puncture and puncture, over those obtained in 2012. The threshold velocity calculated using a baseline model is just 4% higher than the mean value determined from experiment, in contrast to 60% in the 2012 predictions. Sensitivity studies showed that the threshold velocity predictions were improved by calibrating the relations between the equivalent plastic strain at failure and stress triaxiality, strain rate and temperature, as well as by the inclusion of adiabatic heating.
Modeling of long-term fatigue damage of soft tissue with stress softening and permanent set effects
Martin, Caitlin; Sun, Wei
2012-01-01
One of the major failure modes of bioprosthetic heart valves is non-calcific structural deterioration due to fatigue of the tissue leaflets. Experimental methods to characterize tissue fatigue properties are complex and time-consuming. A constitutive fatigue model that could be calibrated by isolated material tests would be ideal for investigating the effects of more complex loading conditions. However, there is a lack of tissue fatigue damage models in the literature. To address these limitations, in this study, a phenomenological constitutive model was developed to describe the stress softening and permanent set effects of tissue subjected to long-term cyclic loading. The model was used to capture characteristic uniaxial fatigue data for glutaraldehyde-treated bovine pericardium and was then implemented into finite element software. The simulated fatigue response agreed well with the experimental data and thus demonstrates feasibility of this approach.
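The paper's own constitutive formulation is not reproduced in the abstract; as a generic illustration of a stress-softening law, the classical Ogden-Roxburgh pseudo-elastic softening factor looks like this (parameter values are placeholders, and this is not the authors' model):

```python
import numpy as np
from scipy.special import erf

def softening_factor(W, W_max, r=2.0, m=0.5):
    """Ogden-Roxburgh pseudo-elastic softening factor (illustrative).

    W is the current strain-energy density and W_max the maximum reached
    over the loading history; stress is scaled as sigma = eta * dW/dE.
    r and m are material parameters fitted to cyclic test data."""
    return 1.0 - (1.0 / r) * erf((W_max - W) / m)

# Softening is strongest on unloading far below the historical maximum:
print(softening_factor(W=0.1, W_max=1.0))   # eta < 1
print(softening_factor(W=1.0, W_max=1.0))   # eta = 1 on the primary path
```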
Finite element modelling of AA6063T52 thin-walled tubes under quasi-static axial loading
NASA Astrophysics Data System (ADS)
Othman, A.; Ismail, AE
2018-04-01
The behavior of aluminum alloy 6063T52 thin-walled tubes under quasi-static axial loading is presented in this paper to determine absorbed energy. Experimental and finite element analysis results were correlated and compared. Wall thicknesses of 1.6 and 1.9 mm were selected, and all specimens were tested at room temperature. The length of each specimen was fixed at 125 mm, and the width and diameter of the tubes at 50.8 mm. Two types of tubular cross-section were examined: round and square thin-walled profiles. The specific absorbed energy (SEA) and crush force efficiency (CFE) were analyzed for each specimen and model to assess the behavior leading to failure under progressive collapse. Results showed a difference of less than 5% between the experimental and finite element results. It was found that the thin-walled round tube absorbs more energy than the square profile in terms of specific energy, by 23.93% and 35.36% for the 1.6 and 1.9 mm wall thicknesses, respectively. Overall, the crush force efficiency (CFE) of each tube profile ranged from about 0.42 to 0.58, indicating that all specimens failed by progressive collapse. The correspondence between the deformed models and the experimental specimens was examined and discussed, and a similar failure mechanism was observed for each thin-walled profile.
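A minimal sketch of the two reported metrics from a force-displacement record; the inputs and the assumption that displacement starts at zero are illustrative:

```python
import numpy as np

def crush_metrics(force_N, disp_m, mass_kg):
    """Specific absorbed energy (SEA) and crush force efficiency (CFE)
    from a quasi-static force-displacement record of one tube specimen."""
    force_N = np.asarray(force_N, float)
    disp_m = np.asarray(disp_m, float)
    energy = np.trapz(force_N, disp_m)        # absorbed energy, J
    sea = energy / mass_kg                    # J/kg
    mean_force = energy / disp_m[-1]          # average crush force (disp from 0)
    cfe = mean_force / force_N.max()          # ~0.4-0.6 for these tubes
    return sea, cfe
```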
NASA Astrophysics Data System (ADS)
Meyer, Rena; Engesgaard, Peter; Høyer, Anne-Sophie; Jørgensen, Flemming; Vignoli, Giulio; Sonnenborg, Torben O.
2018-07-01
Low-lying coastal regions are often highly populated, constitute sensitive habitats and are at the same time exposed to challenging hydrological environments due to surface flooding from storm events and saltwater intrusion, which both may affect drinking water supply from shallow and deeper aquifers. Near the Wadden Sea at the border of Southern Denmark and Northern Germany, the hydraulic system (connecting groundwater, river water, and the sea) was altered over centuries (until the 19th century) by e.g. the construction of dikes and drains to prevent flooding and allow agricultural use. Today, massive saltwater intrusions extend up to 20 km inland. In order to understand the regional flow, a methodological approach was developed that combined: (1) a highly-resolved voxel geological model, (2) a ∼1 million node groundwater model with 46 hydrofacies coupled to rivers, drains and the sea, (3) Tikhonov regularization calibration using hydraulic heads and average stream discharges as targets and (4) parameter uncertainty analysis. It is relatively new to use voxel models for constructing geological models that often have been simplified to stacked, pseudo-3D layer geology. The study is therefore one of the first to combine a voxel geological model with state-of-the-art flow calibration techniques. The results show that voxel geological modelling, where lithofacies information are transferred to each volumetric element, is a useful method to preserve 3D geological heterogeneity on a local scale, which is important when distinct geological features such as buried valleys are abundant. Furthermore, it is demonstrated that simpler geological models and simpler calibration methods do not perform as well. The proposed approach is applicable to many other systems, because it combines advanced and flexible geological modelling and flow calibration techniques. This has led to new insights in the regional flow patterns and especially about water cycling in the marsh area near the coast based on the ability to define six predictive scenarios from the linear analysis of parameter uncertainty. The results show that the coastal system near the Danish-German border is mainly controlled by flow in the two aquifers separated by a thick clay layer, and several deep high-permeable buried valleys that connect the sea with the interior and the two aquifers. The drained marsh area acts like a huge regional sink limiting submarine groundwater discharge. With respect to water balance, the greatest sensitivity to parameter uncertainty was observed in the drained marsh area, where some scenarios showed increased flow of sea water into the interior and increased drainage. We speculate that the massive salt water intrusion may be caused by a combination of the preferential pathways provided by the buried valleys, the marsh drainage and relatively high hydraulic conductivities in the two main aquifers as described by one of the scenarios. This is currently under investigation by using a salt water transport model.
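The voxel-to-model transfer described above amounts to a facies-code lookup per volumetric element; a toy sketch follows, in which the grid size, facies count, and conductivity values are invented placeholders (the study used 46 hydrofacies and ~1 million nodes):

```python
import numpy as np

# Hypothetical voxel geological model: one integer facies code per cell.
facies = np.random.default_rng(1).integers(0, 4, size=(60, 80, 25))

# Hypothetical calibrated hydraulic conductivities (m/s) per hydrofacies,
# e.g. from regularized inversion against heads and stream discharges.
k_of_facies = np.array([1e-3, 1e-5, 1e-8, 5e-4])   # sand, silt, clay, gravel

# Fancy indexing carries the 3-D heterogeneity straight into the flow grid.
K = k_of_facies[facies]        # conductivity field, same shape as the grid
```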
Step wise, multiple objective calibration of a hydrologic model for a snowmelt dominated basin
Hay, L.E.; Leavesley, G.H.; Clark, M.P.; Markstrom, S.L.; Viger, R.J.; Umemoto, M.
2006-01-01
The ability to apply a hydrologic model to large numbers of basins for forecasting purposes requires a quick and effective calibration strategy. This paper presents a step wise, multiple objective, automated procedure for hydrologic model calibration. This procedure includes the sequential calibration of a model's simulation of solar radiation (SR), potential evapotranspiration (PET), water balance, and daily runoff. The procedure uses the Shuffled Complex Evolution global search algorithm to calibrate the U.S. Geological Survey's Precipitation Runoff Modeling System in the Yampa River basin of Colorado. This process assures that intermediate states of the model (SR and PET on a monthly mean basis), as well as the water balance and components of the daily hydrograph, are simulated consistently with measured values.
Bricklemyer, Ross S; Brown, David J; Turk, Philip J; Clegg, Sam M
2013-10-01
Laser-induced breakdown spectroscopy (LIBS) provides a potential method for rapid, in situ soil C measurement. In previous research on the application of LIBS to intact soil cores, we hypothesized that ultraviolet (UV) spectrum LIBS (200-300 nm) might not provide sufficient elemental information to reliably discriminate between soil organic C (SOC) and inorganic C (IC). In this study, using a custom complete spectrum (245-925 nm) core-scanning LIBS instrument, we analyzed 60 intact soil cores from six wheat fields. Predictive multi-response partial least squares (PLS2) models using full and reduced spectrum LIBS were compared for directly determining soil total C (TC), IC, and SOC. Two regression shrinkage and variable selection approaches, the least absolute shrinkage and selection operator (LASSO) and sparse multivariate regression with covariance estimation (MRCE), were tested for soil C predictions and the identification of wavelengths important for soil C prediction. Using complete spectrum LIBS for PLS2 modeling reduced the calibration standard error of prediction (SEP) 15 and 19% for TC and IC, respectively, compared to UV spectrum LIBS. The LASSO and MRCE approaches provided significantly improved calibration accuracy and reduced SEP 32-55% over UV spectrum PLS2 models. We conclude that (1) complete spectrum LIBS is superior to UV spectrum LIBS for predicting soil C for intact soil cores without pretreatment; (2) LASSO and MRCE approaches provide improved calibration prediction accuracy over PLS2 but require additional testing with increased soil and target analyte diversity; and (3) measurement errors associated with analyzing intact cores (e.g., sample density and surface roughness) require further study and quantification.
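As a generic illustration of the LASSO step (not the study's code or data), sparse selection of spectral channels for soil C prediction could look like this; the spectra and reference values below are synthetic stand-ins for the 60-core data set:

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Synthetic stand-ins: X is (n_cores x n_wavelengths) LIBS spectra,
# y the reference total C (%) from laboratory analysis.
rng = np.random.default_rng(0)
X = rng.random((60, 3000))
y = 0.5 + 2.0 * X[:, 100] + rng.normal(0, 0.05, 60)

# Cross-validated L1 penalty: most channel coefficients shrink to zero,
# leaving the wavelengths most useful for predicting soil C.
model = LassoCV(cv=5).fit(X, y)
important = np.flatnonzero(model.coef_)
print(len(important), "channels selected")
```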
Reducing equifinality of hydrological models by integrating Functional Streamflow Disaggregation
NASA Astrophysics Data System (ADS)
Lüdtke, Stefan; Apel, Heiko; Nied, Manuela; Carl, Peter; Merz, Bruno
2014-05-01
A universal problem in the calibration of hydrological models is the equifinality of different parameter sets derived from calibrating models against total runoff values. This is an intrinsic problem stemming from the quality of the calibration data and the simplified process representation of the model. However, discharge data contain additional information which can be extracted by signal-processing methods. An analysis specifically developed for the disaggregation of runoff time series into flow components is the Functional Streamflow Disaggregation (FSD; Carl & Behrendt, 2008). This method is used in the calibration of an implementation of the hydrological model SWIM in a medium-sized watershed in Thailand. FSD is applied to disaggregate the discharge time series into three flow components, which are interpreted as base flow, inter-flow and surface runoff. In addition to total runoff, the model is calibrated against these three components in a modified GLUE analysis, with the aim to identify structural model deficiencies, assess the internal process representation and tackle equifinality. We developed a model-dependent approach (MDA) calibrating the model runoff components against the FSD components, and a model-independent approach (MIA) comparing the FSD of the model results and the FSD of the calibration data. The results indicate that the decomposition provides valuable information for the calibration. In particular, MDA highlights and discards a number of standard GLUE behavioural models that underestimate the contribution of soil water to river discharge. Both MDA and MIA yield a reduction of the parameter ranges by a factor of up to 3 in comparison to standard GLUE. Based on these results, we conclude that the developed calibration approach is able to reduce the equifinality of hydrological model parameterizations. The effect on the uncertainty of the model predictions is strongest when applying MDA and shows only minor reductions for MIA. Besides further validation of FSD, the next steps include an extension of the study to different catchments and other hydrological models with a similar structure.
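A sketch of the component-wise behavioural test in the modified GLUE analysis, with an invented NSE threshold standing in for whatever criterion the study actually used:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of a simulated series against observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def behavioural(q_sim_components, q_fsd_components, threshold=0.5):
    """Keep a parameter set only if every modelled flow component (base
    flow, inter-flow, surface runoff) matches its FSD-derived counterpart.
    The 0.5 threshold is a placeholder, not the study's value."""
    return all(nse(obs, sim) >= threshold
               for obs, sim in zip(q_fsd_components, q_sim_components))
```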
Improved electron probe microanalysis of trace elements in quartz
Donovan, John J.; Lowers, Heather; Rusk, Brian G.
2011-01-01
Quartz occurs in a wide range of geologic environments throughout the Earth's crust. The concentration and distribution of trace elements in quartz provide information such as temperature and other physical conditions of formation. Trace element analyses with modern electron-probe microanalysis (EPMA) instruments can achieve 99% confidence detection of ~100 ppm with fairly minimal effort for many elements in samples of low to moderate average atomic number such as many common oxides and silicates. However, trace element measurements below 100 ppm in many materials are limited, not only by the precision of the background measurement, but also by the accuracy with which background levels are determined. A new "blank" correction algorithm has been developed and tested on both Cameca and JEOL instruments, which applies a quantitative correction to the emitted X-ray intensities during the iteration of the sample matrix correction based on a zero level (or known trace) abundance calibration standard. This iterated blank correction, when combined with improved background fit models, and an "aggregate" intensity calculation utilizing multiple spectrometer intensities in software for greater geometric efficiency, yields a detection limit of 2 to 3 ppm for Ti and 6 to 7 ppm for Al in quartz at 99% t-test confidence with similar levels for absolute accuracy.
NASA Astrophysics Data System (ADS)
Voytek, E. B.; Drenkelfuss, A.; Day-Lewis, F. D.; Healy, R. W.; Lane, J. W.; Werkema, D. D.
2012-12-01
Temperature is a naturally occurring tracer, which can be exploited to infer the movement of water through the vadose and saturated zones, as well as the exchange of water between aquifers and surface-water bodies, such as estuaries, lakes, and streams. One-dimensional (1D) vertical temperature profiles commonly show thermal amplitude attenuation and increasing phase lag of diurnal or seasonal temperature variations with propagation into the subsurface. This behavior is described by the heat-transport equation (i.e., the convection-conduction-dispersion equation), which can be solved analytically in 1D under certain simplifying assumptions (e.g., sinusoidal or steady-state boundary conditions and homogeneous hydraulic and thermal properties). Analysis of 1D temperature profiles using analytical models provides estimates of vertical groundwater/surface-water exchange. The utility of these estimates can be diminished when the model assumptions are violated, as is common in field applications. Alternatively, analysis of 1D temperature profiles using numerical models allows for consideration of more complex and realistic boundary conditions. However, such analyses commonly require model calibration and the development of input files for finite-difference or finite-element codes. To address the calibration and input file requirements, a new computer program, 1DTempPro, is presented that facilitates numerical analysis of vertical 1D temperature profiles. 1DTempPro is a graphical user interface (GUI) to the USGS code VS2DH, which numerically solves the flow- and heat-transport equations. Pre- and post-processor features within 1DTempPro allow the user to calibrate VS2DH models to estimate groundwater/surface-water exchange and hydraulic conductivity in cases where hydraulic head is known. This approach improves groundwater/ surface-water exchange-rate estimates for real-world data with complexities ill-suited for examination with analytical methods. Additionally, the code allows for time-varying temperature and hydraulic boundary conditions. Here, we present the approach and include examples for several datasets from stream/aquifer systems.
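For reference, the simplified analytical case mentioned above (sinusoidal surface forcing, homogeneous medium, no water flux) has a classical closed form; this is a sketch for orientation, not part of 1DTempPro, and departures of field data from this no-flow solution are what the numerical VS2DH calibration resolves:

```python
import numpy as np

def conductive_profile(z, t, A0, period_s, kappa):
    """Pure-conduction 1-D temperature signal at depth z (m) and time t (s):
        T(z,t) = A0 * exp(-z/d) * sin(2*pi*t/P - z/d),  d = sqrt(2*kappa/omega)
    A0 is the surface amplitude (deg C), P the forcing period (s), and
    kappa the thermal diffusivity (m^2/s); values are user-supplied."""
    omega = 2.0 * np.pi / period_s
    d = np.sqrt(2.0 * kappa / omega)           # damping depth, m
    return A0 * np.exp(-z / d) * np.sin(omega * t - z / d)
```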
1997-09-01
Illinois Institute of Technology Research Institute (IITRI) calibrated seven parametric models including SPQR/20, the forerunner of CHECKPOINT. ... SPQR/20 was calibrated using SLOC sizing data (IITRI, 1989: 3-4). The results showed only slight overall improvements in accuracy ... even when validating the calibrated models with the same data sets. The IITRI study demonstrated SPQR/20 to be one of two models that were most ...
NASA Astrophysics Data System (ADS)
Brown, Staci R.; Akpovo, Charlemagne A.; Martinez, Jorge; Ford, Alan; Herbert, Kenley; Johnson, Lewis
2014-03-01
Laser Induced Breakdown Spectroscopy (LIBS) is a spectroscopic technique that is used for the qualitative and quantitative analysis of materials in the liquid, solid, or gas phase. LIBS can also be used for the detection of isotopic shifts in atomic and diatomic species via Laser-Ablation Molecular Isotopic Spectroscopy (LAMIS). However, any elements entrained into the plasma other than the element of interest can affect the extent of ablation and the quality of the spectra, and hence potentially obscure or aid the relative abundance assessment for a given element. To address the importance of matrix effects, an isotopic analysis of boron obtained from boron oxide (BO) emission originating from different boron-containing compounds, such as boron nitride (BN), boric acid (H3BO3), and borax (Na2B4O7·10H2O), via LIBS has been performed here. Each of these materials has different physical properties and elemental composition, in order to illustrate possible challenges for the LAMIS method. A calibration-free model similar to that of the original LAMIS work is used to determine properties of the plasma as the matrix is changed.
NASA Astrophysics Data System (ADS)
Mauerhofer, E.; Havenith, A.; Carasco, C.; Payan, E.; Kettler, J.; Ma, J. L.; Perot, B.
2013-04-01
The Forschungszentrum Jülich GmbH (FZJ), together with the Aachen University Rheinisch-Westfaelische Technische Hochschule (RWTH) and the French Alternative Energies and Atomic Energy Commission (CEA Cadarache) are involved in a cooperation aiming at characterizing toxic and reactive elements in radioactive waste packages by means of Prompt Gamma Neutron Activation Analysis (PGNAA) [1]. The French and German waste management agencies have indeed defined acceptability limits concerning these elements in view of their projected geological repositories. A first measurement campaign was performed in the new Prompt Gamma Neutron Activation Analysis (PGNAA) facility called MEDINA, at FZJ, to assess the capture gamma-ray signatures of some elements of interest in large samples up to waste drums with a volume of 200 liter. MEDINA is the acronym for Multi Element Detection based on Instrumental Neutron Activation. This paper presents MCNP calculations of the MEDINA facility and quantitative comparison between measurement and simulation. Passive gamma-ray spectra acquired with a high purity germanium detector and calibration sources are used to qualify the numerical model of the crystal. Active PGNAA spectra of a sodium chloride sample measured with MEDINA then allow for qualifying the global numerical model of the measurement cell. Chlorine indeed constitutes a usual reference with reliable capture gamma-ray production data. The goal is to characterize the entire simulation protocol (geometrical model, nuclear data, and postprocessing tools) which will be used for current measurement interpretation, extrapolation of the performances to other types of waste packages or other applications, as well as for the study of future PGNAA facilities.
The Influence of Oxygen and Sulfur on Uranium Partitioning Into the Core
NASA Astrophysics Data System (ADS)
Moore, R. D., Jr.; Van Orman, J. A.; Hauck, S. A., II
2017-12-01
Uranium, along with K and Th, may provide substantial long-term heating in planetary cores, depending on the magnitude of their partitioning into the metal during differentiation. In general, non-metallic light elements are known to have a large influence on the partitioning of trace elements, and the presence of sulfur is known to enhance the partitioning of uranium into the metal. Data from the steelmaking literature indicate that oxygen also enhances the solubility of uranium in liquid iron alloys. Here we present experimental data on the partitioning of U between immiscible liquids in the Fe-S-O system, and use these data along with published metal-silicate partitioning data to calibrate a quantitative activity model for U in the metal. We also determined partition coefficients for Th, K, Nb, Nd, Sm, and Yb, but were unable to fully constrain activity models for these elements with the available data. A Monte Carlo fitting routine was used to calculate U-S, U-O, and U-S-O interaction coefficients and their associated uncertainties. We find that the combined interaction of uranium with sulfur and oxygen is predominant, with S and O together enhancing the solubility of uranium to a far greater degree than either element in isolation. This suggests that uranium complexes with sulfite or sulfate species in the metal. For a model Mars core composition containing 14 at% S and 5 at% O, the metal/silicate partition coefficient for U is predicted to be an order of magnitude larger than for a pure Fe-Ni core.
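The abstract does not give the functional form of the activity model; a generic interaction-parameter (epsilon-formalism) sketch with invented coefficients illustrates how a combined S-O cross term can dominate:

```python
import numpy as np

def ln_gamma_U(x_S, x_O, eps_S, eps_O, eps_SO):
    """Generic interaction-parameter form for the activity coefficient of
    U in Fe-S-O liquid metal (illustrative; the coefficients are
    placeholders, not the paper's fitted values):
        ln(gamma_U) = eps_S*x_S + eps_O*x_O + eps_SO*x_S*x_O
    The cross term eps_SO carries the combined S-O enhancement."""
    return eps_S * x_S + eps_O * x_O + eps_SO * x_S * x_O

# Model-Mars core composition (14 at% S, 5 at% O), invented coefficients:
print(np.exp(ln_gamma_U(0.14, 0.05, -5.0, -4.0, -60.0)))
```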
Gaudiuso, Rosalba; Dell’Aglio, Marcella; De Pascale, Olga; Senesi, Giorgio S.; De Giacomo, Alessandro
2010-01-01
Analytical applications of Laser Induced Breakdown Spectroscopy (LIBS), namely optical emission spectroscopy of laser-induced plasmas, have been constantly growing thanks to its intrinsic conceptual simplicity and versatility. Qualitative and quantitative analysis can be performed by LIBS both by drawing calibration lines and by using calibration-free methods, and some of its features, such as fast multi-elemental response, micro-destructiveness, and instrumentation portability, have rendered it particularly suitable for analytical applications in the fields of environmental science, space exploration and cultural heritage. This review reports and discusses LIBS achievements in these areas and results obtained for soils and aqueous samples, meteorites and terrestrial samples simulating extraterrestrial planets, and cultural heritage samples, including buildings and objects of various kinds.
The calibration of an HF radar used for ionospheric research
NASA Astrophysics Data System (ADS)
From, W. R.; Whitehead, J. D.
1984-02-01
The HF radar on Bribie Island, Australia, uses crossed fan beams produced by crossed linear transmitter and receiver arrays of 10 elements each to simulate a pencil beam. The beam points vertically when all the array elements are in phase, and is steerable by up to 20 deg off vertical at the central one of the three operating frequencies. Phase and gain changes within the transmitters and receivers are compensated for by an automatic system of adjustment. The 10 transmitting antennas are, as nearly as possible, physically identical, as are the 10 receiving antennas. Antenna calibration using high-flying aircraft or satellites is not possible, so a method is described for using ionospheric reflections to measure the polar diagram and also to correct for errors in the pointing direction.
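For context, steering a linear array off vertical reduces to applying a progressive phase across the elements; a sketch with illustrative parameters, not the Bribie Island values:

```python
import numpy as np

def element_phases(n_elements, spacing_m, freq_hz, steer_deg):
    """Progressive phase (radians) across a linear array that tilts the
    beam `steer_deg` off vertical: phi_n = 2*pi*n*d*sin(theta)/lambda."""
    lam = 3e8 / freq_hz                       # free-space wavelength, m
    n = np.arange(n_elements)
    return 2.0 * np.pi * n * spacing_m * np.sin(np.radians(steer_deg)) / lam

# e.g. a 10-element array at 5 MHz with hypothetical half-wave spacing:
print(element_phases(10, 30.0, 5e6, 20.0))
```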
Calibrating Detailed Chemical Analysis of M dwarfs
NASA Astrophysics Data System (ADS)
Veyette, Mark; Muirhead, Philip Steven; Mann, Andrew; Brewer, John; Allard, France; Homeier, Derek
2018-01-01
The ability to perform detailed chemical analysis of Sun-like F-, G-, and K-type stars is a powerful tool with many applications, including studying the chemical evolution of the Galaxy, assessing membership in stellar kinematic groups, and constraining planet formation theories. Unfortunately, complications in modeling cooler stellar atmospheres have hindered similar analysis of M-dwarf stars. Large surveys of FGK abundances play an important role in developing methods to measure the compositions of M dwarfs by providing benchmark FGK stars that have widely-separated M dwarf companions. These systems allow us to empirically calibrate metallicity-sensitive features in M dwarf spectra. However, current methods to measure metallicity in M dwarfs from moderate-resolution spectra are limited to overall metallicity and largely rely on astrophysical abundance correlations in stellar populations. In this talk, I will discuss how large, homogeneous catalogs of precise FGK abundances are crucial to advancing chemical analysis of M dwarfs beyond overall metallicity to direct measurements of individual elemental abundances. I will present a new method to analyze high-resolution NIR spectra of M dwarfs that employs an empirical calibration of synthetic M dwarf spectra to infer effective temperature, Fe abundance, and Ti abundance. This work is a step toward detailed chemical analysis of M dwarfs at a precision similar to that achieved for FGK stars.
S. Wang; Z. Zhang; G. Sun; P. Strauss; J. Guo; Y. Tang; A. Yao
2012-01-01
Model calibration is essential for hydrologic modeling of large watersheds in a heterogeneous mountain environment. Little guidance is available for model calibration protocols for distributed models that aim at capturing the spatial variability of hydrologic processes. This study used the physically-based distributed hydrologic model, MIKE SHE, to contrast a lumped...
Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates
NASA Astrophysics Data System (ADS)
Todorovic, Andrijana; Plavsic, Jasna
2015-04-01
A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method, and the calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question that is addressed is: are there any changes in optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by one year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts with the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows and for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperature in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters. Correlation coefficients among optimised model parameters and total precipitation P, mean temperature T and mean flow Q are calculated to give insight into parameter dependence on the hydrometeorological drivers. The results reveal high sensitivity of almost all model parameters to the calibration period. The highest variability is displayed by the refreezing coefficient, water holding capacity, and temperature gradient. The only statistically significant (decreasing) trend is detected in the evapotranspiration reduction threshold. Statistically significant correlation is detected between the precipitation gradient and precipitation depth, and between the time-area histogram base and flows. All other correlations are not statistically significant, implying that changes in optimised parameters cannot generally be linked to changes in P, T or Q. As for model performance, the model reproduces the observed runoff satisfactorily, though runoff is slightly overestimated in wet periods. The Nash-Sutcliffe efficiency coefficient (NSE) ranges from 0.44 to 0.79. Higher NSE values are obtained over wetter periods, which is supported by a statistically significant correlation between NSE and flows. Overall, no systematic variations in parameters or in model performance are detected. Parameter variability may therefore rather be attributed to errors in the data or inadequacies in the model structure.
Further research is required to examine the impact of the calibration strategy or model structure on the variability in optimised parameters in time.
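As an illustration of the composite objective used in this study, here is a minimal Python sketch assuming textbook definitions of the Nash-Sutcliffe efficiency and the volumetric error, with exactly equal weights; the function names, the log-offset eps, and the scaling of the volume term are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def composite_objective(obs, sim, eps=0.01):
    """Equally weighted blend of NSE on flows, NSE on log flows,
    and a volume-error term, all scaled so 1.0 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    nse_q = nse(obs, sim)
    nse_logq = nse(np.log(obs + eps), np.log(sim + eps))  # eps guards zero flows
    vol_err = 1.0 - abs(np.sum(sim) - np.sum(obs)) / np.sum(obs)
    return (nse_q + nse_logq + vol_err) / 3.0
```

The log-flow term emphasises low-flow fit, the plain NSE term emphasises floods, and the volume term penalises systematic over- or under-estimation, which is why such blends are popular for all-regime calibration.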
INFLUENCE OF MATERIAL MODELS ON PREDICTING THE FIRE BEHAVIOR OF STEEL COLUMNS.
Choe, Lisa; Zhang, Chao; Luecke, William E; Gross, John L; Varma, Amit H
2017-01-01
Finite-element (FE) analysis was used to compare the high-temperature responses of steel columns with two different stress-strain models: the Eurocode 3 model and the model proposed by the National Institute of Standards and Technology (NIST). The comparisons were made in three phases. The first phase compared the critical buckling temperatures predicted using forty-seven column data sets from five different laboratories. The slenderness ratios varied from 34 to 137, and the applied axial load was 20-60 % of the room-temperature capacity. The results showed that the NIST model predicted the buckling temperature as or more accurately than the Eurocode 3 model for four of the five data sets. In the second phase, thirty unique FE models were developed to analyze the W8×35 and W14×53 column specimens with slenderness ratios of about 70. The column specimens were tested under steady-heating conditions with target temperatures in the range of 300-600 °C. The models were developed by combining the material model, the temperature distributions in the specimens, and the numerical scheme for nonlinear analyses. Overall, the models with the NIST material properties and the measured temperature variations gave results comparable to the test data. The deviations between the results from the two numerical approaches (modified Newton-Raphson vs. arc-length) were negligible. The Eurocode 3 model made conservative predictions of the behavior of the column specimens because its retained elastic moduli are smaller than those of the NIST model at elevated temperatures. In the third phase, the column curves calibrated using the NIST model were compared with those prescribed in ANSI/AISC-360 Appendix 4. The calibrated curve deviated significantly from the current design equation with increasing temperature, especially for slenderness ratios from 50 to 100.
Quantitative Electron Probe Microanalysis: State of the Art
NASA Technical Reports Server (NTRS)
Carpenter, P. K.
2005-01-01
Quantitative electron-probe microanalysis (EPMA) has improved due to better instrument design and X-ray correction methods. Design improvements in the electron column and X-ray spectrometers have resulted in measurement precision that exceeds analytical accuracy. Wavelength-dispersive spectrometers (WDS) have layered-dispersive diffraction crystals with improved light-element sensitivity. Newer energy-dispersive spectrometers (EDS) have Si-drift detector elements, thin-window designs, and digital processing electronics with X-ray throughput approaching that of WDS systems. Using these systems, digital X-ray mapping coupled with spectrum imaging is a powerful compositional mapping tool. Improvements in analytical accuracy are due to better X-ray correction algorithms, mass absorption coefficient data sets, and analysis methods for complex geometries. ZAF algorithms have been superseded by Phi(pz) algorithms that better model the depth distribution of primary X-ray production. Complex thin-film and particle geometries are treated using Phi(pz) algorithms, and results agree well with Monte Carlo simulations. For geological materials, X-ray absorption dominates the corrections and depends on the accuracy of mass absorption coefficient (MAC) data sets. However, few MACs have been experimentally measured, and the use of fitted coefficients continues due to the general success of the analytical technique. A polynomial formulation of the Bence-Albee alpha-factor technique, calibrated using Phi(pz) algorithms, is used to critically evaluate accuracy issues; analytical accuracy approaches 2% relative and is limited by measurement precision for ideal cases, but for many elements the analytical accuracy is unproven. The EPMA technique has improved to the point where it is frequently used instead of the petrographic microscope for reconnaissance work. Examples of stagnant research areas are: WDS detector design, characterization of calibration standards, and the need for a more complete treatment of the continuum X-ray fluorescence correction.
Todorov, Todor I.; Wolf, Ruth E.; Adams, Monique
2014-01-01
Typically, 27 major, minor, and trace elements are determined in natural waters, acid mine drainage, extraction fluids, and leachates of geological and environmental samples by inductively coupled plasma-optical emission spectrometry (ICP-OES). At the discretion of the analyst, additional elements may be determined after suitable method modifications and performance data are established. Samples are preserved in 1–2 percent nitric acid (HNO3) at sample collection or as soon as possible after collection. The aqueous samples are aspirated into the ICP-OES discharge, where the elemental emission signals are measured simultaneously for 27 elements. Calibration is performed with a series of matrix-matched, multi-element solution standards.
Lichte, F.E.
1995-01-01
A new method of analysis for rocks and soils is presented using laser ablation inductively coupled plasma mass spectrometry. It is based on a lithium borate fusion and the free-running mode of a Nd/YAG laser. An Ar/N2 sample gas improves sensitivity 7× for most elements. Sixty-three elements are characterized for the fusion, and 49 elements can be quantified. Internal standards and isotopic spikes ensure accurate results. Limits of detection are 0.01 µg/g for many trace elements. Accuracy approaches 5% for all elements. A new quality assurance procedure is presented that uses fundamental parameters to test relative response factors for the calibration.
Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers
NASA Technical Reports Server (NTRS)
Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.
2010-01-01
This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
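A minimal sketch of the two-stage idea behind MRR, assuming a polynomial as the predetermined parametric model and LOWESS as the nonparametric stage; the fixed mixing fraction lam is an illustrative assumption, whereas the actual method selects the contribution of the residual fit in a principled, data-driven way:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def model_robust_fit(x, y, degree=2, lam=0.5, frac=0.3):
    """Model Robust Regression sketch: a parametric fit augmented by a
    fraction lam of a nonparametric (LOWESS) fit to its residuals."""
    coeffs = np.polyfit(x, y, degree)          # parametric stage
    y_par = np.polyval(coeffs, x)
    resid = y - y_par
    smooth = lowess(resid, x, frac=frac, return_sorted=False)  # nonparametric stage
    return y_par + lam * smooth                # MRR prediction at the data points

# Example: a quadratic calibration curve with a local bump the polynomial misses
x = np.linspace(0.0, 1.0, 200)
y = 1.0 + 2.0 * x + 0.5 * x**2 + 0.05 * np.exp(-((x - 0.6) / 0.05) ** 2)
y_hat = model_robust_fit(x, y)
```

With lam = 0 this reduces to the purely parametric calibration and with lam = 1 to a fully augmented fit; intermediate values hedge between reliance on the assumed model and fidelity to the residual structure.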
NASA Astrophysics Data System (ADS)
Verardo, E.; Atteia, O.; Rouvreau, L.
2015-12-01
In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is challenging due to uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with an ensemble of null-space parameters, creating sets of calibration-constrained parameters used as input to follow-on predictions of remedial efficiency. The first step in the NSMC methodology is model calibration. The model was calibrated by matching simulated BTEX concentrations to a total of 48 observations from historical data collected before implementation of treatment. Two bioremediation designs were then implemented in the calibrated model: the first consists of pumping/injection wells, and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two models. Several variants of the method were implemented to investigate their effect on its efficiency. The first variant is based on a single calibrated model. In the second, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, for a nonlinear model, the second variant avoids underestimating parameter uncertainty, which may otherwise lead to poor quantification of predictive uncertainty. Application of the proposed approach to manage bioremediation of groundwater at a real site shows that it provides effective support for management of in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology for applying model predictive uncertainty methods in environmental management.
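A schematic of the core NSMC step, assuming a linearized model whose Jacobian J (sensitivities of observations to parameters) is available; the perturbation scale and the plain SVD-based null space are simplifying assumptions for illustration, not the PEST implementation typically used in such studies:

```python
import numpy as np

def nsmc_parameter_sets(p_cal, J, n_sets=1000, scale=0.1, tol=1e-8, seed=0):
    """Null-space Monte Carlo sketch: perturb a calibrated parameter vector
    p_cal only along directions the observations cannot constrain, i.e. the
    (numerical) null space of the Jacobian J of model outputs w.r.t. parameters."""
    rng = np.random.default_rng(seed)
    _, s, Vt = np.linalg.svd(J)
    rank = np.sum(s > tol * s[0])      # dimension of the solution space
    V_null = Vt[rank:].T               # basis for the null space
    sets = []
    for _ in range(n_sets):
        z = rng.standard_normal(V_null.shape[1])
        sets.append(p_cal + scale * V_null @ z)  # stays (nearly) calibrated
    return np.array(sets)
```

Because the perturbations live in the null space, each generated parameter set reproduces the calibration observations to first order, which is what makes the ensemble usable for predictive uncertainty analysis.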
DEM Calibration Approach: design of experiment
NASA Astrophysics Data System (ADS)
Boikov, A. V.; Savelev, R. V.; Payor, V. A.
2018-05-01
The problem of calibrating DEM models is considered in the article. It is proposed to divide the model input parameters into those that require iterative calibration and those that are better measured directly. A new calibration method based on a designed experiment over the iteratively calibrated parameters is proposed. The experiment is conducted using a specially designed stand, and the results are processed with machine vision algorithms. Approximating functions are obtained, and the error of the implemented software and hardware complex is estimated. The prospects of the obtained results are discussed.
Xiu, Junshan; Liu, Shiming; Sun, Meiling; Dong, Lili
2018-01-20
The photoelectric performance of metal-ion-doped TiO2 film can be improved by changing the compositions and concentrations of the additive elements. In this work, TiO2 films doped with different Sn concentrations were prepared by the hydrothermal method. Qualitative and quantitative analysis of the Sn content in the TiO2 films was achieved with laser-induced breakdown spectroscopy (LIBS), with calibration curves plotted accordingly. The photoelectric characteristics of the TiO2 films doped with different Sn contents were characterized by UV-visible absorption spectra and J-V curves. All results showed that Sn doping red-shifts the optical absorption and improves the photoelectric properties of the TiO2 films. When the Sn doping concentration, calculated from the LIBS calibration curves, was 11.89 mmol/L, the current density of the film was largest, indicating the best photoelectric performance. This shows that LIBS is a feasible method for qualitative and quantitative analysis of additive elements in metal oxide nanofilms.
Existing methods for improving the accuracy of digital-to-analog converters
NASA Astrophysics Data System (ADS)
Eielsen, Arnfinn A.; Fleming, Andrew J.
2017-09-01
The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
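To make the element-mismatch mechanism concrete, the following sketch simulates a unary (thermometer-coded) DAC with 1% element mismatch and contrasts a fixed element selection with dynamic element matching; the array sizes and mismatch level are illustrative assumptions, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                           # unit elements in a unary DAC
elements = 1.0 + 0.01 * rng.standard_normal(N)   # 1% element mismatch

def dac_fixed(code):
    """Deterministic element selection: the same elements always represent a
    given code, so mismatch is code-dependent and appears as harmonic distortion."""
    return elements[:code].sum()

def dac_dem(code):
    """Dynamic element matching: a random subset of 'code' elements, so the
    mismatch is decorrelated from the signal and appears as noise instead."""
    return elements[rng.choice(N, size=code, replace=False)].sum()

codes = (N / 2 * (1 + np.sin(2 * np.pi * np.arange(512) / 64))).astype(int)
out_fixed = np.array([dac_fixed(c) for c in codes])
out_dem = np.array([dac_dem(c) for c in codes])
```

Comparing the spectra of out_fixed and out_dem shows the characteristic trade: fixed selection concentrates mismatch energy into harmonics, while random selection spreads it into a broadband noise floor.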
Root zone water quality model (RZWQM2): Model use, calibration and validation
Ma, Liwang; Ahuja, Lajpat; Nolan, B.T.; Malone, Robert; Trout, Thomas; Qi, Z.
2012-01-01
The Root Zone Water Quality Model (RZWQM2) has been used widely for simulating agricultural management effects on crop production and soil and water quality. Although it is a one-dimensional model, it has many desirable features for the modeling community. This article outlines the principles of calibrating the model component by component with one or more datasets and validating the model with independent datasets. Users should consult the RZWQM2 user manual distributed along with the model and a more detailed protocol on how to calibrate RZWQM2 provided in a book chapter. Two case studies (or examples) are included in this article. One is from an irrigated maize study in Colorado to illustrate the use of field and laboratory measured soil hydraulic properties on simulated soil water and crop production. It also demonstrates the interaction between soil and plant parameters in simulated plant responses to water stresses. The other is from a maize-soybean rotation study in Iowa to show a manual calibration of the model for crop yield, soil water, and N leaching in tile-drained soils. Although the commonly used trial-and-error calibration method works well for experienced users, as shown in the second example, an automated calibration procedure is more objective, as shown in the first example. Furthermore, the incorporation of the Parameter Estimation Software (PEST) into RZWQM2 made the calibration of the model more efficient than a grid (ordered) search of model parameters. In addition, PEST provides sensitivity and uncertainty analyses that should help users in selecting the right parameters to calibrate.
A New Calibration Method for Commercial RGB-D Sensors.
Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu
2017-05-24
Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high-quality 3D data are required, e.g., 3D building models of centimeter-level accuracy, accurate and reliable calibration of these sensors is needed. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including the internal calibration parameters of all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, the new model shows a significant improvement in depth precision for both near and far ranges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallimore, David L.
2012-06-13
The measurement uncertainty estimation associated with trace element analysis of impurities in U and Pu was evaluated using the Guide to the Expression of Uncertainty in Measurement (GUM). In this evaluation the uncertainty sources were identified, and the standard uncertainties of the components were categorized as either Type A or Type B. The combined standard uncertainty was calculated, and a coverage factor k = 2 was applied to obtain the expanded uncertainty, U. The ICP-AES and ICP-MS methods used were developed for the multi-element analysis of U and Pu samples. A typical analytical run consists of standards, process blanks, samples, matrix-spiked samples, post-digestion spiked samples, and independent calibration verification standards. The uncertainty estimation was performed on U and Pu samples that had been analyzed previously as part of the U and Pu Sample Exchange Programs. Control chart results and data from the U and Pu metal exchange programs were combined with the GUM into a concentration-dependent estimate of the expanded uncertainty. Trace element uncertainties obtained using this model were compared to those obtained for trace element results as part of the Exchange programs. This process was completed for all trace elements determined to be above the detection limit in the U and Pu samples.
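A minimal sketch of the GUM roll-up described here, assuming independent uncertainty components so they combine in quadrature; the component values and their labels are hypothetical, chosen only to show the arithmetic:

```python
import numpy as np

def expanded_uncertainty(type_a, type_b, k=2.0):
    """GUM-style roll-up sketch: combine independent standard uncertainties
    in quadrature, then apply coverage factor k (k = 2 gives ~95% coverage)."""
    u_c = np.sqrt(np.sum(np.square(type_a)) + np.sum(np.square(type_b)))
    return k * u_c

# Hypothetical standard uncertainties for one trace element (same units as result)
u_type_a = [0.012, 0.008]           # e.g. repeatability, reproducibility
u_type_b = [0.005, 0.010, 0.004]    # e.g. calibration standard, blank, dilution
U = expanded_uncertainty(u_type_a, u_type_b)
```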
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jamshidian, M.; Thamburaja, P.
A previously-developed finite-deformation- and crystal-elasticity-based constitutive theory for stressed grain growth in cubic polycrystalline bodies has been augmented to include a description of excess surface energy and grain-growth stagnation mechanisms through the use of surface-effect state variables in a thermodynamically-consistent manner. The constitutive theory was also implemented into a multiscale coupled finite-element and phase-field computational framework. With the material parameters in the constitutive theory suitably calibrated, our three-dimensional numerical simulations show that the constitutive model is able to accurately predict the experimentally-determined evolution of crystallographic texture and grain size statistics in polycrystalline copper thin films deposited on polyimide substrate and annealed at high-homologous temperatures. In particular, our numerical analyses show that the broad texture transition observed in the annealing experiments of polycrystalline thin films is caused by grain growth stagnation mechanisms. Highlights:
• Developing a theory for stressed grain growth in polycrystalline thin films.
• Implementation into a multiscale coupled finite-element and phase-field framework.
• Quantitative reproduction of the experimental grain growth data by simulations.
• Revealing the cause of texture transition to be due to the stagnation mechanisms.
Mammographic x-ray unit kilovoltage test tool based on k-edge absorption effect.
Napolitano, Mary E; Trueblood, Jon H; Hertel, Nolan E; David, George
2002-09-01
A simple tool to determine the peak kilovoltage (kVp) of a mammographic x-ray unit has been designed. Tool design is based on comparing the effect of k-edge discontinuity of the attenuation coefficient for a series of element filters. Compatibility with the mammography accreditation phantom (MAP) to obtain a single quality control film is a second design objective. When the attenuation of a series of sequential elements is studied simultaneously, differences in the absorption characteristics due to the k-edge discontinuities are more evident. Specifically, when the incident photon energy is higher than the k-edge energy of a number of the elements and lower than the remainder, an inflection may be seen in the resulting attenuation data. The maximum energy of the incident photon spectra may be determined based on this inflection point for a series of element filters. Monte Carlo photon transport analysis was used to estimate the photon transmission probabilities for each of the sequential k-edge filter elements. The photon transmission corresponds directly to optical density recorded on mammographic x-ray film. To observe the inflection, the element filters chosen must have k-edge energies that span a range greater than the expected range of the end point energies to be determined. For the design, incident x-ray spectra ranging from 25 to 40 kVp were assumed to be from a molybdenum target. Over this range, the k-edge energy changes by approximately 1.5 keV between sequential elements. For this design 21 elements spanning an energy range from 20 to 50 keV were chosen. Optimum filter element thicknesses were calculated to maximize attenuation differences at the k-edge while maintaining optical densities between 0.10 and 3.00. Calculated relative transmission data show that the kVp could be determined to within +/-1 kV. To obtain experimental data, a phantom was constructed containing 21 different elements placed in an acrylic holder. MAP images were used to determine appropriate exposure techniques for a series of end point energies from 25 to 35 kVp. The average difference between the kVp determination and the calibrated dial setting was 0.8 and 1.0 kV for a Senographe 600 T and a Senographe DMR, respectively. Since the k-edge absorption energies of the filter materials are well known, independent calibration or a series of calibration curves is not required.
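A sketch of the read-out logic, assuming transmission values (or optical densities) measured behind the filters ordered by increasing k-edge energy; locating the strongest inflection with a discrete second difference is an illustrative simplification of the authors' analysis, not their exact procedure:

```python
import numpy as np

def endpoint_energy(k_edges_keV, transmission):
    """Locate the inflection in transmission vs. filter k-edge energy; in this
    sketch, the k-edge energy at the steepest change in slope approximates the
    spectrum endpoint energy, and hence the kVp."""
    t = np.asarray(transmission, float)
    d2 = np.abs(np.diff(t, n=2))       # discrete second difference
    i = int(np.argmax(d2)) + 1         # index of strongest curvature change
    return k_edges_keV[i]
```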
DOE Office of Scientific and Technical Information (OSTI.GOV)
Addair, Travis; Barno, Justin; Dodge, Doug
CCT is a Java-based application for calibrating 10 shear wave coda measurement models to observed data using a much smaller set of reference moment magnitudes (MWs) calculated by other means (waveform modeling, etc.). These calibrated measurement models can then be used in other tools to generate coda moment magnitude measurements, source spectra, estimated stress drops, and other useful measurements for any additional events and any new data collected in the calibrated region.
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.; Lock, James A.
1993-01-01
Scattering calculations using a detailed model of the multimode laser beam in the forward-scattering spectrometer probe (FSSP) were carried out using a recently developed extension to Mie scattering theory. From this model, new calibration curves for the FSSP were calculated. The difference between the old calibration curves and the new ones is small for droplet diameters less than 10 microns, but the difference increases to approximately 10 percent at diameters of 50 microns. When using glass beads to calibrate the FSSP, calibration errors can be minimized by using glass beads of many different diameters, over the entire range of the FSSP. If the FSSP is calibrated using one-diameter glass beads, then the new formalism is necessary to extrapolate the calibration over the entire range.
Application of firefly algorithm to the dynamic model updating problem
NASA Astrophysics Data System (ADS)
Shabbir, Faisal; Omenzetter, Piotr
2015-04-01
Model updating can be considered a branch of optimization in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with those of the FE predictions. The attainment of a global solution in a multi-dimensional search space is a challenging problem. Nature-inspired algorithms have gained increasing attention over the past decade for solving such complex optimization problems. This study applies the Firefly Algorithm (FA), a global optimization search technique, to a dynamic model updating problem; to the authors' best knowledge, this is the first time the FA has been applied to model updating. The working of the FA is inspired by the flashing characteristics of fireflies: each firefly represents a randomly generated solution which is assigned a brightness according to the value of the objective function. The physical structure under consideration is a full-scale cable-stayed pedestrian bridge with a composite bridge deck. Data from dynamic testing of the bridge were used to correlate and update the initial model using the FA. The algorithm aimed to minimize the difference between the natural frequencies and mode shapes of the structure and those of the model. The performance of the algorithm in finding the optimal solution in a multi-dimensional search space is analyzed. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built original structure.
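A compact sketch of the FA loop described above, with a stand-in objective built from hypothetical frequency residuals; the bounds, the coefficients beta0, gamma and alpha, and the target frequencies are illustrative assumptions, not values from the study:

```python
import numpy as np

def firefly_minimize(f, bounds, n_fireflies=25, n_iter=200,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Firefly Algorithm sketch: each firefly is a candidate parameter vector;
    dimmer fireflies (worse objective) move toward brighter ones with an
    attractiveness that decays with distance, plus a small random walk."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    X = lo + rng.random((n_fireflies, lo.size)) * (hi - lo)
    F = np.array([f(x) for x in X])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if F[j] < F[i]:                      # j is brighter (lower cost)
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(lo.size) - 0.5)
                    X[i] = np.clip(X[i], lo, hi)
                    F[i] = f(X[i])
    best = int(np.argmin(F))
    return X[best], F[best]

# Hypothetical FE updating objective: normalized frequency residuals
target = np.array([1.8, 5.3, 11.2])   # "measured" natural frequencies, Hz
cost = lambda p: np.sum(((p - target) / target) ** 2)
p_opt, c_opt = firefly_minimize(cost, bounds=[(0.0, 20.0)] * 3)
```

In a real updating run the parameter vector would hold stiffness or mass correction factors and the cost would call the FE solver; the loop structure is unchanged.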
NASA Astrophysics Data System (ADS)
Turnbull, Heather; Omenzetter, Piotr
2017-04-01
The recent shift towards development of clean, sustainable energy sources has created a new challenge in terms of structural safety and reliability: with aging, manufacturing defects, harsh environmental and operational conditions, and extreme events such as lightning strikes, wind turbines can become damaged, resulting in production losses and environmental degradation. To monitor the current structural state of a turbine, structural health monitoring (SHM) techniques are beneficial. Physics-based SHM, in the form of calibration of finite element models (FEMs) by inverse techniques, is adopted in this research. Fuzzy finite element model updating (FFEMU) techniques for damage severity assessment of a small-scale wind turbine blade are discussed and implemented. The main advantage of FFEMU is its ability to account, in a simple way, for uncertainty within the model updating problem. Uncertainty quantification techniques such as fuzzy sets enable a convenient mathematical representation of the various uncertainties. Experimental frequencies obtained from modal analysis of a small-scale wind turbine blade were described by fuzzy numbers to model measurement uncertainty. During this investigation, damage severity estimation was investigated through the addition of small masses of varying magnitude to the trailing edge of the structure. This structural modification, intended to be in lieu of damage, enabled non-destructive experimental simulation of structural change. A numerical model was constructed with multiple variable additional masses simulated on the blade's trailing edge and used as updating parameters. Objective functions for updating were constructed and minimized using both the particle swarm optimization algorithm and the firefly algorithm. FFEMU was able to predict the baseline material properties of the blade while also successfully predicting, with sufficient accuracy, a larger magnitude of structural alteration and its location.
NASA Technical Reports Server (NTRS)
Kurucz, Robert L.
1996-01-01
I discuss errors in theory and in interpreting observations that are produced by the failure to consider resolution in space, time, and energy. I discuss convection in stellar model atmospheres and in stars. Large errors in abundances are possible, such as the factor-of-ten error in the Li abundance for extreme Population II stars. Finally, I discuss the variation of microturbulent velocity with depth, effective temperature, gravity, and abundance. These variations must be dealt with in computing models and grids and in any type of photometric calibration. I have also developed a new opacity-sampling version of my model atmosphere program called ATLAS12. It recognizes more than 1000 atomic and molecular species, each in up to 10 isotopic forms. It can treat all ions of the elements up through Zn and the first 5 ions of heavier elements up through Es. The elemental and isotopic abundances are treated as variables with depth. The fluxes predicted by ATLAS12 are not accurate in intermediate or narrow bandpass intervals because the sample size is too small, so a special stripped version of the spectrum synthesis program SYNTHE is used to generate the surface flux for the converged model using the line data on CD-ROMs 1 and 15. ATLAS12 can be used to produce improved models for Am and Ap stars. It should be very useful for investigating diffusion effects in atmospheres. It can be used to model exciting stars for H II regions with abundances consistent with those of the H II region. These programs and line files will be distributed on CD-ROMs.
Wavelength calibration with PMAS at 3.5 m Calar Alto Telescope using a tunable astro-comb
NASA Astrophysics Data System (ADS)
Chavez Boggio, J. M.; Fremberg, T.; Bodenmüller, D.; Sandin, C.; Zajnulina, M.; Kelz, A.; Giannone, D.; Rutowska, M.; Moralejo, B.; Roth, M. M.; Wysmolek, M.; Sayinc, H.
2018-05-01
On-sky tests of an astro-comb conducted with the Potsdam Multi-Aperture Spectrograph (PMAS) at the 3.5 m Calar Alto Telescope are reported. The proposed astro-comb approach is based on cascaded four-wave mixing between two lasers propagating through dispersion-optimized nonlinear fibers. This approach allows a line spacing that can be continuously tuned over a broad range (from tens of GHz to beyond 1 THz), making it suitable for calibration of low-, medium- and high-resolution spectrographs. The astro-comb provides 300 calibration lines, and its line spacing is tracked with a wavemeter having 0.3 pm absolute accuracy. First, we assess the accuracy of the neon calibration by measuring the astro-comb lines with the (neon-calibrated) PMAS. The results are compared with the line positions expected from the wavemeter measurement, showing an offset of ∼5-20 pm (4%-16% of one resolution element), which may be the footprint of the accuracy limits of the actual neon calibration. Then, the astro-comb performance as a calibrator is assessed through measurements of the Ca triplet from the stellar objects HD3765 and HD219538 as well as of the sky line spectrum, showing the advantage of the proposed astro-comb for wavelength calibration at any resolution.
USDA-ARS?s Scientific Manuscript database
The progressive improvement of computer science and development of auto-calibration techniques means that calibration of simulation models is no longer a major challenge for watershed planning and management. Modelers now increasingly focus on challenges such as improved representation of watershed...
A diffusion-limited reaction model for self-propagating Al/Pt multilayers with quench limits
NASA Astrophysics Data System (ADS)
Kittell, D. E.; Yarrington, C. D.; Hobbs, M. L.; Abere, M. J.; Adams, D. P.
2018-04-01
A diffusion-limited reaction model was calibrated for Al/Pt multilayers ignited on oxidized silicon, sapphire, and tungsten substrates, as well as for some Al/Pt multilayers ignited as free-standing foils. The model was implemented in a finite element analysis code and used to match experimental burn front velocity data collected from several years of testing at Sandia National Laboratories. Moreover, both the simulations and experiments reveal well-defined quench limits in the total Al + Pt layer (i.e., bilayer) thickness. At these limits, the heat generated from atomic diffusion is insufficient to support a self-propagating wave front on top of the substrates. Quench limits for reactive multilayers are seldom reported and are found to depend on the thermal properties of the individual layers. Here, the diffusion-limited reaction model is generalized to allow for temperature- and composition-dependent material properties, phase change, and anisotropic thermal conductivity. Utilizing this increase in model fidelity, excellent overall agreement is shown between the simulations and experimental results with a single calibrated parameter set. However, the burn front velocities of Al/Pt multilayers ignited on tungsten substrates are over-predicted. Possible sources of error are discussed and a higher activation energy (from 41.9 kJ/mol.at. to 47.5 kJ/mol.at.) is shown to bring the simulations into agreement with the velocity data observed on tungsten substrates. This higher activation energy suggests an inhibited diffusion mechanism present at lower heating rates.
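The sensitivity to activation energy reported here follows directly from Arrhenius kinetics; a small sketch, with a placeholder prefactor D0 and an arbitrary front temperature of 900 K chosen only for illustration (neither value is from the paper):

```python
import numpy as np

R = 8.314e-3  # gas constant, kJ/(mol.K)

def arrhenius(T_K, Ea_kJ_per_mol, D0=1.0):
    """Diffusion coefficient with Arrhenius temperature dependence; D0 is a
    placeholder prefactor, not a calibrated value from the paper."""
    return D0 * np.exp(-Ea_kJ_per_mol / (R * T_K))

# Raising Ea from 41.9 to 47.5 kJ/mol.at. slows the diffusion-limited reaction;
# at an assumed 900 K front temperature the rate drops by this factor:
ratio = arrhenius(900.0, 47.5) / arrhenius(900.0, 41.9)
```

Because the rate enters the energy balance of the propagating front, even this modest factor is enough to shift predicted burn front velocities toward the values observed on the high-conductivity tungsten substrates.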
Unnikrishnan, Ginu U.; Morgan, Elise F.
2011-01-01
Inaccuracies in the estimation of material properties and errors in the assignment of these properties into finite element models limit the reliability, accuracy, and precision of quantitative computed tomography (QCT)-based finite element analyses of the vertebra. In this work, a new mesh-independent, material mapping procedure was developed to improve the quality of predictions of vertebral mechanical behavior from QCT-based finite element models. In this procedure, an intermediate step, called the material block model, was introduced to determine the distribution of material properties based on bone mineral density, and these properties were then mapped onto the finite element mesh. A sensitivity study was first conducted on a calibration phantom to understand the influence of the size of the material blocks on the computed bone mineral density. It was observed that varying the material block size produced only marginal changes in the predictions of mineral density. Finite element (FE) analyses were then conducted on a square column-shaped region of the vertebra and also on the entire vertebra in order to study the effect of material block size on the FE-derived outcomes. The predicted values of stiffness for the column and the vertebra decreased with decreasing block size. When these results were compared to those of a mesh convergence analysis, it was found that the influence of element size on vertebral stiffness was less than that of the material block size. This mapping procedure allows the material properties in a finite element study to be determined based on the block size required for an accurate representation of the material field, while the size of the finite elements can be selected independently and based on the required numerical accuracy of the finite element solution. The mesh-independent, material mapping procedure developed in this study could be particularly helpful in improving the accuracy of finite element analyses of vertebroplasty and spine metastases, as these analyses typically require mesh refinement at the interfaces between distinct materials. Moreover, the mapping procedure is not specific to the vertebra and could thus be applied to many other anatomic sites. PMID:21823740
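A minimal sketch of the two mapping stages, assuming voxel densities on a regular grid, a cubic block size, and a hypothetical density-modulus power law; none of the coefficients below come from the paper:

```python
import numpy as np

def block_average_density(voxel_rho, block=4):
    """Material-block step of the mapping sketch: pool QCT voxel densities into
    coarser blocks before assigning properties, so the material field resolution
    is decoupled from the FE mesh resolution."""
    nx, ny, nz = (s // block for s in voxel_rho.shape)
    v = voxel_rho[:nx * block, :ny * block, :nz * block]   # crop to full blocks
    return v.reshape(nx, block, ny, block, nz, block).mean(axis=(1, 3, 5))

def modulus_from_density(rho, a=8920.0, b=1.83):
    """Hypothetical density-modulus power law E = a * rho**b (MPa); the
    coefficients are illustrative placeholders."""
    return a * rho ** b
```

Each finite element would then be assigned the modulus of the block containing its centroid, which is what lets block size and element size be refined independently.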
Measurement and calibration of differential Mueller matrix of distributed targets
NASA Technical Reports Server (NTRS)
Sarabandi, Kamal; Oh, Yisok; Ulaby, Fawwaz T.
1992-01-01
A rigorous method for calibrating polarimetric backscatter measurements of distributed targets is presented. By characterizing the radar distortions over the entire mainlobe of the antenna, the differential Mueller matrix is derived from the measured scattering matrices with a high degree of accuracy. It is shown that the radar distortions can be determined by measuring the polarimetric response of a metallic sphere over the main lobe of the antenna. Comparison of results obtained with the new algorithm against results derived from the old calibration method shows that the discrepancy between the two methods is less than 1 dB for the backscattering coefficients. The discrepancy is more pronounced for the phase-difference statistics, indicating that removal of the radar distortions from the cross products of the scattering matrix elements cannot be accomplished with the traditional calibration methods.
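Schematically, once the sphere measurement yields receive and transmit distortion matrices R and T, distortion removal is a matrix sandwich inversion; a sketch assuming the simple model S_meas = R @ S_true @ T with invertible distortions (the model form is a common textbook idealization, not necessarily the authors' exact formulation):

```python
import numpy as np

def correct_scattering_matrix(S_meas, R, T):
    """Distortion-removal sketch: with receive/transmit distortion matrices R
    and T estimated from a metallic-sphere calibration, recover the true 2x2
    scattering matrix from S_meas = R @ S_true @ T."""
    return np.linalg.inv(R) @ S_meas @ np.linalg.inv(T)
```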
Wu, Defeng; Chen, Tianfei; Li, Aiguo
2016-08-30
A robot-based three-dimensional (3D) measurement system is presented in which a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system, so a novel sensor calibration approach is proposed to improve the calibration accuracy of the structured light vision sensor. The approach is based on a number of fixed concentric circles manufactured on a calibration target; the concentric circles are employed to determine the true projected centres of the circles. A calibration point generation procedure is then used with the help of the calibrated robot. When enough calibration points are available, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals remaining after application of the RAC method, so the hybrid of the pinhole model and the MLPNN represents the real camera model. A standard ball was used to validate the effectiveness of the presented technique, and the experimental results demonstrate that the proposed calibration approach achieves a highly accurate model of the structured light vision sensor.
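A toy sketch of the hybrid residual-modelling idea, with a linear least-squares stage standing in for the calibrated pinhole model and an MLP fitted to its systematic residuals; the synthetic data, network size, and variable names are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hybrid camera-model sketch: fit a parametric ("pinhole") stage first, then
# train an MLP on its residuals; predictions sum both parts.
rng = np.random.default_rng(0)
X = rng.random((500, 2))                       # normalized image coordinates (u, v)
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * np.sin(8 * X[:, 0])  # synthetic target

A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)      # parametric stage (stand-in pinhole)
resid = y - A @ w                              # systematic residuals it cannot fit

mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
mlp.fit(X, resid)                              # nonparametric residual model
y_hat = A @ w + mlp.predict(X)                 # hybrid prediction
```

The design choice mirrors the paper's: keep an interpretable physical model as the backbone and let the network absorb only the structured error it leaves behind.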
NASA Astrophysics Data System (ADS)
Wang, Mi; Fang, Chengcheng; Yang, Bo; Cheng, Yufeng
2016-06-01
The low-frequency error is a key factor affecting the uncontrolled geometric processing accuracy of high-resolution optical imagery. To guarantee the geometric quality of the imagery, this paper presents an on-orbit calibration method for the low-frequency error based on a geometric calibration field. First, we introduce the overall flow of on-orbit analysis and calibration of the low-frequency error, which includes detection of optical-axis angle variations of the star sensors, relative calibration among star sensors, multi-star-sensor information fusion, and low-frequency error model construction and verification. Second, we use the optical-axis angle change detection method to analyse how the low-frequency error varies. Third, we use relative calibration and information fusion among star sensors to achieve a unified datum and high-precision attitude output. Finally, we construct the low-frequency error model and optimally estimate its parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a satellite of this type are used. Test results demonstrate that the calibration model describes the low-frequency error variation well, and the uncontrolled geometric positioning accuracy of the high-resolution optical imagery in the WGS-84 coordinate system is clearly improved after the step-wise calibration.
Zhan, Xue-yan; Zhao, Na; Lin, Zhao-zhou; Wu, Zhi-sheng; Yuan, Rui-juan; Qiao, Yan-jiang
2014-12-01
The appropriate algorithm for calibration set selection was one of the key technologies for a good NIR quantitative model. There are different algorithms for calibration set selection, such as Random Sampling (RS) algorithm, Conventional Selection (CS) algorithm, Kennard-Stone(KS) algorithm and Sample set Portioning based on joint x-y distance (SPXY) algorithm, et al. However, there lack systematic comparisons between two algorithms of the above algorithms. The NIR quantitative models to determine the asiaticoside content in Centella total glucosides were established in the present paper, of which 7 indexes were classified and selected, and the effects of CS algorithm, KS algorithm and SPXY algorithm for calibration set selection on the accuracy and robustness of NIR quantitative models were investigated. The accuracy indexes of NIR quantitative models with calibration set selected by SPXY algorithm were significantly different from that with calibration set selected by CS algorithm or KS algorithm, while the robustness indexes, such as RMSECV and |RMSEP-RMSEC|, were not significantly different. Therefore, SPXY algorithm for calibration set selection could improve the predicative accuracy of NIR quantitative models to determine asiaticoside content in Centella total glucosides, and have no significant effect on the robustness of the models, which provides a reference to determine the appropriate algorithm for calibration set selection when NIR quantitative models are established for the solid system of traditional Chinese medcine.