DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartig, Kyle C.; Ghebregziabher, Isaac; Jovanovic, Igor
The ability to perform not only elementally but also isotopically sensitive detection and analysis at standoff distances is important for remote sensing applications in diverse areas, such as nuclear nonproliferation, environmental monitoring, geophysics, and planetary science. We demonstrate isotopically sensitive real-time standoff detection of uranium by the use of femtosecond filament-induced laser ablation molecular isotopic spectrometry. A uranium oxide molecular emission isotope shift of 0.05 ± 0.007 nm is reported at 593.6 nm. We implement both spectroscopic and acoustic diagnostics to characterize the properties of uranium plasma generated at different filament-uranium interaction points. The resulting uranium oxide emission exhibits a nearly constant signal-to-background ratio over the length of the filament, unlike the uranium atomic and ionic emission, for which the signal-to-background ratio varies significantly along the filament propagation. This is explained by the different rates of increase of plasma density and uranium oxide density along the filament length resulting from the spectral and temporal evolution of the filament along its propagation. Lastly, the results provide a basis for the optimal use of filaments for standoff detection and analysis of uranium isotopes and indicate the potential of the technique for a wider range of remote sensing applications that require isotopic sensitivity.
Hartig, Kyle C.; Ghebregziabher, Isaac; Jovanovic, Igor
2017-01-01
The ability to perform not only elementally but also isotopically sensitive detection and analysis at standoff distances is important for remote sensing applications in diverse areas, such as nuclear nonproliferation, environmental monitoring, geophysics, and planetary science. We demonstrate isotopically sensitive real-time standoff detection of uranium by the use of femtosecond filament-induced laser ablation molecular isotopic spectrometry. A uranium oxide molecular emission isotope shift of 0.05 ± 0.007 nm is reported at 593.6 nm. We implement both spectroscopic and acoustic diagnostics to characterize the properties of uranium plasma generated at different filament-uranium interaction points. The resulting uranium oxide emission exhibits a nearly constant signal-to-background ratio over the length of the filament, unlike the uranium atomic and ionic emission, for which the signal-to-background ratio varies significantly along the filament propagation. This is explained by the different rates of increase of plasma density and uranium oxide density along the filament length resulting from the spectral and temporal evolution of the filament along its propagation. The results provide a basis for the optimal use of filaments for standoff detection and analysis of uranium isotopes and indicate the potential of the technique for a wider range of remote sensing applications that require isotopic sensitivity. PMID:28272450
NASA Astrophysics Data System (ADS)
Hartig, Kyle C.; Ghebregziabher, Isaac; Jovanovic, Igor
2017-03-01
The ability to perform not only elementally but also isotopically sensitive detection and analysis at standoff distances is important for remote sensing applications in diverse areas, such as nuclear nonproliferation, environmental monitoring, geophysics, and planetary science. We demonstrate isotopically sensitive real-time standoff detection of uranium by the use of femtosecond filament-induced laser ablation molecular isotopic spectrometry. A uranium oxide molecular emission isotope shift of 0.05 ± 0.007 nm is reported at 593.6 nm. We implement both spectroscopic and acoustic diagnostics to characterize the properties of uranium plasma generated at different filament-uranium interaction points. The resulting uranium oxide emission exhibits a nearly constant signal-to-background ratio over the length of the filament, unlike the uranium atomic and ionic emission, for which the signal-to-background ratio varies significantly along the filament propagation. This is explained by the different rates of increase of plasma density and uranium oxide density along the filament length resulting from the spectral and temporal evolution of the filament along its propagation. The results provide a basis for the optimal use of filaments for standoff detection and analysis of uranium isotopes and indicate the potential of the technique for a wider range of remote sensing applications that require isotopic sensitivity.
Simulated fissioning of uranium and testing of the fission-track dating method
McGee, V.E.; Johnson, N.M.; Naeser, C.W.
1985-01-01
A computer program (FTD-SIM) faithfully simulates the fissioning of 238U with time and of 235U with neutron dose. The simulation is based on first principles of physics, where the fissioning of 238U over time is described by Ns = λf(238U)t and the fissioning of 235U with neutron fluence is described by Ni = σ(235U)Φ (λf: spontaneous-fission decay constant; σ: 235U fission cross section; Φ: neutron fluence). The Poisson law is used to set the stochastic variation of fissioning within the uranium population. The life history of a given crystal can thus be traced under an infinite variety of age and irradiation conditions. A single dating attempt or up to 500 dating attempts on a given crystal population can be simulated by specifying the age of the crystal population, the size and variation of the areas to be counted, the amount and distribution of uranium, the neutron dose to be used and its variation, and the desired ratio of 238U to 235U. A variety of probability distributions can be applied to uranium and counting area. The Price and Walker age equation is used to estimate age. The output of FTD-SIM includes the tabulated results of each individual dating attempt (sample) on demand and/or the summary statistics and histograms for multiple dating attempts (samples), including the sampling age. An analysis of the results from FTD-SIM shows that: (1) The external detector method is intrinsically more precise than the population method. (2) For the external detector method, a correlation between spontaneous track count, Ns, and induced track count, Ni, results when the population of grains has a stochastic uranium content and/or when the counting areas between grains are stochastic. For the population method no such correlation can exist. (3) In the external detector method the sampling distribution of age is independent of the number of grains counted. In the population method the sampling distribution of age is highly dependent on the number of grains counted. (4) Grains with zero-track counts, either in Ns or Ni, are an integral part of fissioning theory and under certain circumstances must be included in any estimate of age. (5) In estimating the standard error of age, the standard errors of Ns, Ni, and Φ must be accurately estimated and propagated through the age equation. Several statistical models are presently available to do so. © 1985.
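The core of such a simulation can be sketched in a few lines (a toy illustration in the spirit of FTD-SIM, not the original program; the decay constant, cross-section-fluence product, and uranium amounts below are arbitrary):

```python
import math
import random

def sample_poisson(rng, mean):
    """Knuth's method for Poisson sampling; adequate for the modest track counts here."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_dating_attempts(age, lam_f, sigma_phi, u238, ratio, n_attempts, seed=0):
    """Toy fission-track dating Monte Carlo: spontaneous (Ns) and induced (Ni)
    track counts are Poisson-distributed, and each attempt estimates age from
    the Ns/Ni ratio via the linearized age relation."""
    rng = random.Random(seed)
    u235 = u238 / ratio
    ages = []
    for _ in range(n_attempts):
        ns = sample_poisson(rng, lam_f * u238 * age)  # E[Ns] = lambda_f * 238U * t
        ni = sample_poisson(rng, sigma_phi * u235)    # E[Ni] = (sigma * Phi) * 235U
        if ni == 0:
            continue  # zero-count grains need special treatment (point 4 above)
        ages.append((ns / ni) * sigma_phi / (lam_f * ratio))
    return ages
```

Averaging many attempts recovers the input age, while the spread of the per-attempt ages gives the sampling distribution discussed in points (1)-(3).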
Error Propagation Made Easy--Or at Least Easier
ERIC Educational Resources Information Center
Gardenier, George H.; Gui, Feng; Demas, James N.
2011-01-01
Complex error propagation is reduced to formula and data entry into a Mathcad worksheet or an Excel spreadsheet. The Mathcad routine uses both symbolic calculus analysis and Monte Carlo methods to propagate errors in a formula of up to four variables. Graphical output is used to clarify the contributions to the final error of each of the…
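The Monte Carlo side of that approach is easy to reproduce outside of Mathcad or Excel. Here is a minimal sketch for a two-variable formula f(x, y) = x/y, comparing sampling against the first-order analytic propagation (the function and uncertainties are arbitrary examples):

```python
import math
import random

def analytic_error(x, sx, y, sy):
    """First-order (Gaussian) error propagation for f(x, y) = x / y."""
    f = x / y
    return abs(f) * math.sqrt((sx / x) ** 2 + (sy / y) ** 2)

def monte_carlo_error(x, sx, y, sy, n=200_000, seed=1):
    """Propagate errors by sampling normally distributed inputs through the formula."""
    rng = random.Random(seed)
    samples = [rng.gauss(x, sx) / rng.gauss(y, sy) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return math.sqrt(var)
```

For small relative uncertainties the two estimates agree closely; Monte Carlo additionally exposes each variable's contribution by letting you zero the other uncertainties.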
Error Propagation in a System Model
NASA Technical Reports Server (NTRS)
Schloegel, Kirk (Inventor); Bhatt, Devesh (Inventor); Oglesby, David V. (Inventor); Madl, Gabor (Inventor)
2015-01-01
Embodiments of the present subject matter can enable the analysis of signal value errors for system models. In an example, signal value errors can be propagated through the functional blocks of a system model to analyze possible effects as the signal value errors impact incident functional blocks. This propagation of the errors can be applicable to many models of computation including avionics models, synchronous data flow, and Kahn process networks.
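The idea of pushing a signal-value error through incident functional blocks can be illustrated with a toy dataflow graph (the block names and topology are invented for illustration; the patented method is considerably more sophisticated):

```python
from collections import deque

def propagate_errors(blocks, source):
    """Breadth-first propagation of a signal-value error from `source` through
    the directed graph of functional blocks. `blocks` maps each block to the
    blocks its outputs feed. Returns every block whose inputs can be affected."""
    affected = set()
    queue = deque(blocks.get(source, []))
    while queue:
        b = queue.popleft()
        if b not in affected:
            affected.add(b)
            queue.extend(blocks.get(b, []))
    return affected

# A hypothetical avionics-flavored dataflow: sensor -> filter -> {autopilot, display}
model = {
    "sensor": ["filter"],
    "filter": ["autopilot", "display"],
    "autopilot": ["actuator"],
    "display": [],
    "actuator": [],
}
```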
Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd
2016-08-01
Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
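A one-dimensional analogue shows the mechanism: an error imposed on a Dirichlet boundary value propagates into the interior of the Poisson solution. This is only a finite-difference sketch; the PIV problem is two- or three-dimensional, but the superposition argument is the same.

```python
import numpy as np

def solve_poisson_1d(f, left, right, n=101):
    """Finite-difference solution of p'' = f on [0, 1] with Dirichlet boundary
    values `left` and `right` (a 1-D stand-in for the pressure Poisson equation)."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = left, right
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = 1.0 / h ** 2
        A[i, i] = -2.0 / h ** 2
        b[i] = f(x[i])
    return x, np.linalg.solve(A, b)

# Perturb the left boundary value by eps: by linearity, the change in the
# solution solves Laplace's equation with boundary data (eps, 0), so the
# boundary error spreads linearly across the whole domain rather than
# staying local to the boundary.
eps = 0.01
x, p = solve_poisson_1d(lambda t: np.sin(np.pi * t), 0.0, 0.0)
_, p_err = solve_poisson_1d(lambda t: np.sin(np.pi * t), eps, 0.0)
diff = p_err - p
```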
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, Michael K.; O'Rourke, Patrick E.
An SRNL H-Canyon Test Bed performance evaluation project was completed jointly by SRNL and LANL on a prototype monochromatic energy-dispersive x-ray fluorescence instrument, the hiRX. A series of uncertainty propagations were generated based upon plutonium and uranium measurements performed using the alpha-prototype hiRX instrument. The data reduction and uncertainty modeling provided in this report were performed by the SRNL authors. Observations and lessons learned from this evaluation were also used to predict the uncertainties that should be achievable at multiple plutonium and uranium concentration levels, provided that the instrument hardware and software upgrades recommended by LANL and SRNL are performed.
Uncertainty Propagation in an Ecosystem Nutrient Budget.
New aspects and advancements in classical uncertainty propagation methods were used to develop a nutrient budget with associated error for a northern Gulf of Mexico coastal embayment. Uncertainty was calculated for budget terms by propagating the standard error and degrees of fr...
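Assuming independent errors, the classical propagation for a budget that sums several flux terms is a quadrature sum of the standard errors (the flux values below are invented placeholders):

```python
import math

def budget_with_uncertainty(terms):
    """Sum budget terms and propagate their standard errors in quadrature,
    which assumes the term errors are independent."""
    total = sum(value for value, _ in terms)
    se = math.sqrt(sum(err ** 2 for _, err in terms))
    return total, se

# (value, standard error) pairs for hypothetical nutrient fluxes, arbitrary units:
fluxes = [(120.0, 15.0), (-40.0, 8.0), (-55.0, 12.0)]
net, net_se = budget_with_uncertainty(fluxes)
```

Correlated terms would require the covariance cross-terms as well; the quadrature sum is the independent-error special case.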
USSR and Eastern Europe Scientific Abstracts. Physics and Mathematics, Number 31
1976-12-30
recorded by the method of photon counting. Based on the resultant, the optimal experimental conditions can be judged for investigation of the propagation...zero-power thermal heavy-water reactor with glazed ceramic fuel elements of honeycomb type with natural uranium. By examining the variation in radius R...ultracold neutron registration of 50 and 25% respectively. The radiator in the detectors is a uranium-titanium layer. Both detectors are practically
Magnani, N; Caciuffo, R; Lander, G H; Hiess, A; Regnault, L-P
2010-03-24
The anisotropy of magnetic fluctuations propagating along the [1 1 0] direction in the ordered phase of uranium antimonide has been studied using polarized inelastic neutron scattering. The observed polarization behavior of the spin waves is a natural consequence of the longitudinal 3-k magnetic structure; together with recent results on the 3-k-transverse uranium dioxide, these findings establish this technique as an important tool to study complex magnetic arrangements. Selected details of the magnon excitation spectra of USb have also been reinvestigated, indicating the need to revise the currently accepted theoretical picture for this material.
Environmental Survey of the B-3 and Ford’s Farm Ranges,
1983-08-01
reported have an estimated analytical error of ±35% unless noted otherwise. 14 Isotopic Analysis The isotopic uranium analysis procedure used by UST...sulfate buffer and electrodeposited on a stainless steel disc, and isotopes of uranium (234U, 235U, and 238U) were determined by pulse height analysis...measurements and some environmental sampling. Several special studies were also conducted, including analyses of the isotopic composition of uranium in
Lead and uranium group abundances in cosmic rays
NASA Technical Reports Server (NTRS)
Yadav, J. S.; Perelygin, V. P.
1985-01-01
The importance of lead and uranium group abundances in cosmic rays for understanding their evolution and propagation is discussed. Electronic detectors can provide good charge resolution but poor statistics. Plastic detectors can provide somewhat better statistics, but the charge resolution deteriorates. Extraterrestrial crystals can provide good statistics but poor charge resolution. Recent studies of extraterrestrial crystals regarding their calibration with accelerated uranium ion beams and track-etch kinetics are discussed. It is hoped that a charge resolution of two charge units can be achieved provided an additional parameter is taken into account. The prospects for studying the abundances of the lead group, the uranium group, and superheavy elements in extraterrestrial crystals are discussed, and the usefulness of these studies in light of studies with electronic and plastic detectors is assessed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.
An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high-multiplication uranium objects.
Simulation of wave propagation in three-dimensional random media
NASA Astrophysics Data System (ADS)
Coles, Wm. A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1995-04-01
Quantitative error analyses for the simulation of wave propagation in three-dimensional random media, when narrow angular scattering is assumed, are presented for plane-wave and spherical-wave geometry. This includes the errors that result from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive indices of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared with the spatial spectra of
Scout trajectory error propagation computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1982-01-01
Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consists of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in computer program Scout Trajectory Error Propagation (STEP) and is described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
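The covariance step that STEP mechanizes looks roughly like this (a sketch with synthetic error sets; the real program works with about 50 observed Scout burnout errors and a mission-specific transition matrix):

```python
import numpy as np

def error_covariance(error_sets):
    """Covariance matrix of observed burnout-parameter errors
    (rows = flights, columns = trajectory parameters)."""
    return np.cov(np.asarray(error_sets), rowvar=False)

def propagate_covariance(cov, A):
    """Propagate the covariance through a linear state-transition matrix A:
    C' = A C A^T."""
    return A @ cov @ A.T

# Synthetic stand-in for flight-observed errors in
# (altitude, velocity, flight-path angle); scales are illustrative.
rng = np.random.default_rng(0)
errors = rng.normal(scale=[100.0, 5.0, 0.1], size=(50, 3))
C0 = error_covariance(errors)
```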
Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue
2018-01-01
One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently, given the energy limitation of sensor nodes. Network coding can increase the network throughput of a WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this special property of error propagation in network coding, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated error is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. Moreover, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on the theory of social networks, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of the social characteristic, coordinate with each other and can correct propagated errors whose fraction is exactly 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments. PMID:29401668
Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation
Scellier, Benjamin; Bengio, Yoshua
2017-01-01
We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) toward a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged toward their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal “back-propagated” during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. We also show experimentally that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST task. PMID:28522969
NASA Technical Reports Server (NTRS)
Lahti, G. P.; Mueller, R. A.
1973-01-01
Measurements of MeV neutron spectra were made at the surface of a lithium hydride and depleted uranium shielded reactor. Four shield configurations were considered: these were assembled progressively with cylindrical shells of 5-centimeter-thick depleted uranium, 13-centimeter-thick lithium hydride, 5-centimeter-thick depleted uranium, 13-centimeter-thick lithium hydride, 5-centimeter-thick depleted uranium, and 3-centimeter-thick depleted uranium. Measurements were made with a NE-218 scintillation spectrometer; proton pulse height distributions were differentiated to obtain neutron spectra. Calculations were made using the two-dimensional discrete ordinates code DOT and ENDF/B (version 3) cross sections. Good agreement between measured and calculated spectral shape was observed. Absolute measured and calculated fluxes were within 50 percent of one another; the observed discrepancies in absolute flux may be due to cross section errors.
Wind power error estimation in resource assessments.
Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment time return. The implementation of this method increases the reliability of techno-economic resource assessment studies.
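A minimal Monte Carlo version of the idea can be sketched as follows (the power curve is a generic cubic-ramp stand-in, not one of the 28 fitted manufacturer curves, and all turbine parameters are illustrative):

```python
import random

def power_curve(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2000.0):
    """Generic turbine power curve in kW: zero below cut-in and above cut-out,
    a cubic ramp between cut-in and rated speed, flat at rated power above."""
    if v < v_in or v > v_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * ((v - v_in) / (v_rated - v_in)) ** 3

def propagated_power_error(speeds, rel_speed_error, seed=2):
    """Compare power estimated from true speeds with power from speeds carrying
    a multiplicative measurement error, returning the relative power error."""
    rng = random.Random(seed)
    p_true = sum(power_curve(v) for v in speeds)
    p_meas = sum(power_curve(v * (1.0 + rng.uniform(-rel_speed_error, rel_speed_error)))
                 for v in speeds)
    return abs(p_meas - p_true) / p_true
```

Because the per-sample speed errors partially cancel over a long record, the aggregate power error comes out well below the naive 3x amplification a cubic curve would suggest for a single reading.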
An automated workflow for patient-specific quality control of contour propagation
NASA Astrophysics Data System (ADS)
Beasley, William J.; McWilliam, Alan; Slevin, Nicholas J.; Mackay, Ranald I.; van Herk, Marcel
2016-12-01
Contour propagation is an essential component of adaptive radiotherapy, but current contour propagation algorithms are not yet sufficiently accurate to be used without manual supervision. Manual review of propagated contours is time-consuming, making routine implementation of real-time adaptive radiotherapy unrealistic. Automated methods of monitoring the performance of contour propagation algorithms are therefore required. We have developed an automated workflow for patient-specific quality control of contour propagation and validated it on a cohort of head and neck patients, on which parotids were outlined by two observers. Two types of error were simulated—mislabelling of contours and introducing noise in the scans before propagation. The ability of the workflow to correctly predict the occurrence of errors was tested, taking both sets of observer contours as ground truth, using receiver operator characteristic analysis. The area under the curve was 0.90 and 0.85 for the observers, indicating good ability to predict the occurrence of errors. This tool could potentially be used to identify propagated contours that are likely to be incorrect, acting as a flag for manual review of these contours. This would make contour propagation more efficient, facilitating the routine implementation of adaptive radiotherapy.
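The receiver operator characteristic analysis used here reduces to a rank statistic. A small self-contained implementation (the scores are hypothetical, with label 1 marking a propagated contour that truly contains an error):

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability that a randomly chosen true error outscores a randomly
    chosen error-free case, counting ties as 1/2."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

An AUC near 0.9, as reported above, means the workflow's score ranks a genuinely erroneous contour above a correct one about nine times in ten.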
Using Least Squares for Error Propagation
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2015-01-01
The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
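In the straight-line case the recipe is compact; a sketch using the standard OLS covariance s²(XᵀX)⁻¹ with synthetic data:

```python
import numpy as np

def fit_line_with_ses(x, y):
    """Ordinary least squares for y = a + b*x. The parameter standard errors
    are the square roots of the diagonal of the covariance matrix
    s^2 (X^T X)^{-1}, where s^2 is the residual variance."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = float(resid @ resid) / (len(x) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))
```

Propagated errors for a derived quantity then follow by reparameterizing the fit so the quantity of interest appears as one of the adjustable parameters, which is the trick the article describes.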
[Can the scattering of differences from the target refraction be avoided?].
Janknecht, P
2008-10-01
We wanted to check how the stochastic error is affected by two lens formulae. The power of the intraocular lens was calculated using the SRK-II formula and the Haigis formula after eye-length measurement with ultrasound and the IOL Master. Both lens formulae were partially differentiated and Gauss error analysis was used to examine the propagated error. 61 patients with a mean age of 73.8 years were analysed. The postoperative refraction differed from the calculated refraction after ultrasound biometry using the SRK-II formula by 0.05 D (-1.56 to +1.31, S.D.: 0.59 D; 92% within +/-1.0 D), after IOL Master biometry using the SRK-II formula by -0.15 D (-1.18 to +1.25, S.D.: 0.52 D; 97% within +/-1.0 D), and after IOL Master biometry using the Haigis formula by -0.11 D (-1.14 to +1.14, S.D.: 0.48 D; 95% within +/-1.0 D). The results did not differ from one another. The propagated error of the Haigis formula can be calculated according to ΔP = sqrt((ΔL × (-4.206))² + (ΔVK × 0.9496)² + (ΔDC × (-1.4950))²) (ΔL: error in measuring axial length, ΔVK: error in measuring anterior chamber depth, ΔDC: error in measuring corneal power), and the propagated error of the SRK-II formula according to ΔP = sqrt((ΔL × (-2.5))² + (ΔDC × (-0.9))²). The propagated error of the Haigis formula is always larger than the propagated error of the SRK-II formula. Scattering of the postoperative difference from the expected refraction cannot be avoided completely. It is possible to limit the systematic error by developing complicated formulae like the Haigis formula. However, increasing the number of parameters which need to be measured increases the dispersion of the calculated postoperative refraction. A compromise has to be found, and therefore the SRK-II formula is not outdated.
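The two quoted propagation formulae translate directly into code (coefficients taken from the abstract; length errors in mm, corneal-power errors in diopters):

```python
import math

def haigis_propagated_error(dL, dVK, dDC):
    """Gauss error propagation for the Haigis formula, using the partial
    derivatives quoted in the abstract (dL: axial-length error, dVK:
    anterior-chamber-depth error, dDC: corneal-power error)."""
    return math.sqrt((dL * -4.206) ** 2 + (dVK * 0.9496) ** 2 + (dDC * -1.4950) ** 2)

def srk2_propagated_error(dL, dDC):
    """Gauss error propagation for the SRK-II formula."""
    return math.sqrt((dL * -2.5) ** 2 + (dDC * -0.9) ** 2)
```

For equal measurement errors the Haigis result always exceeds the SRK-II result, as the abstract states, since every SRK-II term is matched or exceeded and the anterior-chamber term is extra.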
Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...
2015-12-10
We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares the performance of the nonlinear technique to that of the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. Lastly, we find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
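The two fitting routes can be sketched with a hypothetical one-pole Padé form y = ax/(1 + bx); the actual UNCL calibration equation is not reproduced in the abstract, so this form and all numbers are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical Pade calibration y = a*x / (1 + b*x). We compare a direct
# nonlinear fit with a fit of the transformed model 1/y = (1/a)*(1/x) + b/a,
# which is linear in 1/x.
rng = np.random.default_rng(1)
a_true, b_true = 2.0, 0.05
x = np.linspace(1.0, 20.0, 30)
y = a_true * x / (1.0 + b_true * x) + rng.normal(0.0, 0.05, x.size)

def pade(x, a, b):
    return a * x / (1.0 + b * x)

(a_nl, b_nl), _ = curve_fit(pade, x, y, p0=(1.0, 0.01))

# Linearized route: regress 1/y on 1/x. Slope = 1/a, intercept = b/a.
slope, intercept = np.polyfit(1.0 / x, 1.0 / y, 1)
a_lin = 1.0 / slope
b_lin = intercept * a_lin

print(a_nl, b_nl, a_lin, b_lin)
```

Note that the transformation re-weights the noise (errors in 1/y are not the errors in y), which is one reason the paper finds the linearization inadvisable when predictor errors are large.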
Error propagation of partial least squares for parameters optimization in NIR modeling.
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-05
A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameters optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. Error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was characterized by both type I and type II error. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%, respectively. The results demonstrate how, and to what extent, the different modeling parameters affect error propagation of PLS for parameters optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials established a robust process for developing PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters of other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
The Propagation of Errors in Experimental Data Analysis: A Comparison of Pre- and Post-Test Designs
ERIC Educational Resources Information Center
Gorard, Stephen
2013-01-01
Experimental designs involving the randomization of cases to treatment and control groups are powerful and under-used in many areas of social science and social policy. This paper reminds readers of the pre- and post-test, and the post-test only, designs, before explaining briefly how measurement errors propagate according to error theory. The…
Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.
2009-01-01
The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
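In the spirit of REPTool's approach (though not its actual API), Latin Hypercube Sampling can propagate input error through a toy geospatial model; the model, names, and error magnitudes below are all illustrative assumptions:

```python
import numpy as np
from scipy.stats import qmc, norm

# Toy model z = c1*r1 + c2*r2 for one raster cell, where the inputs r1, r2
# carry spatially invariant Gaussian error. Latin Hypercube Sampling draws
# stratified samples of the input errors; the spread of the model output
# quantifies its prediction uncertainty.
n_samples = 500
sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n_samples)                      # stratified uniform [0,1)
eps = norm.ppf(u) * np.array([0.1, 0.2])           # map to N(0, sigma) errors

r1, r2 = 5.0, 3.0                                  # raster cell values
c1, c2 = 1.5, -0.5                                 # model coefficients
z = c1 * (r1 + eps[:, 0]) + c2 * (r2 + eps[:, 1])  # output distribution

print(z.mean(), z.std())  # uncertainty of the model output for this cell
```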
NASA Technical Reports Server (NTRS)
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
Evaluating concentration estimation errors in ELISA microarray experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; White, Amanda M.; Varnum, Susan M.
Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process, and must be automated to realize a reliable high-throughput ELISA microarray system. In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
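A sketch of propagation of error through a standard curve, simplified here to a linear curve rather than the logistic curves typical of ELISA; all numbers are illustrative:

```python
import numpy as np

# First-order (delta-method) propagation of error for a concentration
# predicted from a linear standard curve A = m*c + b, so c = (A - b)/m:
# var(c) = (dc/dA)^2 var(A) + (dc/db)^2 var(b) + (dc/dm)^2 var(m)
def concentration_error(A, m, b, var_A, var_m, var_b):
    c = (A - b) / m
    dc_dA = 1.0 / m
    dc_db = -1.0 / m
    dc_dm = -(A - b) / m**2
    var_c = dc_dA**2 * var_A + dc_db**2 * var_b + dc_dm**2 * var_m
    return c, np.sqrt(var_c)

c, se = concentration_error(A=1.2, m=0.4, b=0.1,
                            var_A=1e-4, var_m=1e-4, var_b=1e-4)
print(c, se)  # predicted concentration and its propagated standard error
```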
From the Lab to the real world: sources of error in UF₆ gas enrichment monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lombardi, Marcie L.
2012-03-01
Safeguarding uranium enrichment facilities is a serious concern for the International Atomic Energy Agency (IAEA). Safeguards methods have changed over the years, most recently switching to an improved safeguards model that calls for new technologies to help keep up with the increasing size and complexity of today's gas centrifuge enrichment plants (GCEPs). One of the primary goals of the IAEA is to detect the production of uranium at levels greater than those an enrichment facility may have declared. In order to accomplish this goal, new enrichment monitors need to be as accurate as possible. This dissertation will look at the Advanced Enrichment Monitor (AEM), a new enrichment monitor designed at Los Alamos National Laboratory. Specifically explored are various factors that could potentially contribute to errors in a final enrichment determination delivered by the AEM. There are many factors that can cause errors in the determination of uranium hexafluoride (UF₆) gas enrichment, especially during the period when the enrichment is being measured in an operating GCEP. To measure enrichment using the AEM, a passive 186-keV (kiloelectronvolt) measurement is used to determine the ²³⁵U content in the gas, and a transmission measurement or a gas pressure reading is used to determine the total uranium content. A transmission spectrum is generated using an x-ray tube and a "notch" filter. In this dissertation, changes that could occur in the detection efficiency and the transmission errors that could result from variations in pipe-wall thickness will be explored. Additional factors that could contribute to errors in the enrichment measurement will also be examined, including changes in gas pressure, ambient and UF₆ temperature, instrumental errors, and the effects of uranium deposits on the inside of the pipe walls. The sensitivity of the enrichment calculation to these various parameters will then be evaluated.
Previously, UF₆ gas enrichment monitors have required empty pipe measurements to accurately determine the pipe attenuation (the pipe attenuation is typically much larger than the attenuation in the gas). This dissertation reports on a method for determining the thickness of a pipe in a GCEP when obtaining an empty pipe measurement may not be feasible. This dissertation studies each of the components that may add to the final error in the enrichment measurement, and the factors that were taken into account to mitigate these issues are also detailed and tested. The use of an x-ray generator as a transmission source and the attending stability issues are addressed. Both analytical calculations and experimental measurements have been used. For completeness, some real-world analysis results from the URENCO Capenhurst enrichment plant have been included, where the final enrichment error has remained well below 1% for approximately two months.
Simulation of wave propagation in three-dimensional random media
NASA Technical Reports Server (NTRS)
Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1993-01-01
Quantitative error analysis for simulation of wave propagation in three-dimensional random media, assuming narrow angular scattering, is presented for the plane wave and spherical wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
Stoliker, Deborah L.; Kent, Douglas B.; Zachara, John M.
2011-01-01
Uranium adsorption-desorption on sediment samples collected from the Hanford 300-Area, Richland, WA varied extensively over a range of field-relevant chemical conditions, complicating assessment of possible differences in equilibrium adsorption properties. Adsorption equilibrium was achieved in 500-1000 h although dissolved uranium concentrations increased over thousands of hours owing to changes in aqueous chemical composition driven by sediment-water reactions. A nonelectrostatic surface complexation reaction, >SOH + UO₂²⁺ + 2CO₃²⁻ = >SOUO₂(CO₃HCO₃)²⁻, provided the best fit to experimental data for each sediment sample, resulting in a range of conditional equilibrium constants (logKc) from 21.49 to 21.76. Potential differences in uranium adsorption properties could be assessed in plots based on the generalized mass-action expressions, yielding linear trends displaced vertically by differences in logKc values. Using this approach, logKc values for seven sediment samples were not significantly different. However, a significant difference in adsorption properties between one sediment sample and the fines (<0.063 mm) of another could be demonstrated despite the fines requiring a different reaction stoichiometry. Estimates of logKc uncertainty were improved by capturing all data points within experimental errors. The mass-action expression plots demonstrate that applying models outside the range of conditions used in model calibration greatly increases potential errors.
The first Australian gravimetric quasigeoid model with location-specific uncertainty estimates
NASA Astrophysics Data System (ADS)
Featherstone, W. E.; McCubbine, J. C.; Brown, N. J.; Claessens, S. J.; Filmer, M. S.; Kirby, J. F.
2018-02-01
We describe the computation of the first Australian quasigeoid model to include error estimates as a function of location that have been propagated from uncertainties in the EGM2008 global model, land and altimeter-derived gravity anomalies and terrain corrections. The model has been extended to include Australia's offshore territories and maritime boundaries using newer datasets comprising an additional ~280,000 land gravity observations, a newer altimeter-derived marine gravity anomaly grid, and terrain corrections at 1′ × 1′ resolution. The error propagation uses a remove-restore approach, where the EGM2008 quasigeoid and gravity anomaly error grids are augmented by errors propagated through a modified Stokes integral from the errors in the altimeter gravity anomalies, land gravity observations and terrain corrections. The gravimetric quasigeoid errors (one sigma) are 50-60 mm across most of the Australian landmass, increasing to ~100 mm in regions of steep horizontal gravity gradients or the mountains, and are commensurate with external estimates.
Constrained motion estimation-based error resilient coding for HEVC
NASA Astrophysics Data System (ADS)
Guo, Weihan; Zhang, Yongfei; Li, Bo
2018-04-01
Unreliable communication channels might lead to packet losses and bit errors in the videos transmitted through them, which will cause severe video quality degradation. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove the inter-frame dependency and thus improve the coding efficiency. Once a Motion Vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study the problem of encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate the problem of error propagation to subsequent frames. The approach is achieved by cutting off MV dependencies and limiting the block regions which are predicted by temporal motion vectors. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors in motion vectors and can improve the robustness of the stream in bit-error channels. When the bit error probability is 10⁻⁵, an increase of the decoded video quality (PSNR) by up to 1.310 dB and on average 0.762 dB can be achieved, compared to the reference HEVC.
Autonomous Navigation Error Propagation Assessment for Lunar Surface Mobility Applications
NASA Technical Reports Server (NTRS)
Welch, Bryan W.; Connolly, Joseph W.
2006-01-01
The NASA Vision for Space Exploration is focused on the return of astronauts to the Moon. While navigation systems have already been proven in the Apollo missions to the moon, the current exploration campaign will involve more extensive and extended missions requiring new concepts for lunar navigation. In this document, the results of an autonomous navigation error propagation assessment are provided. The analysis is intended to be the baseline error propagation analysis for which Earth-based and Lunar-based radiometric data are added to compare these different architecture schemes, and quantify the benefits of an integrated approach, in how they can handle lunar surface mobility applications when near the Lunar South pole or on the Lunar Farside.
Establishing the traceability of a uranyl nitrate solution to a standard reference material
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, C.H.; Clark, J.P.
1978-01-01
A uranyl nitrate solution for use as a Working Calibration and Test Material (WCTM) was characterized using a statistically designed procedure to document traceability to a National Bureau of Standards Standard Reference Material (SRM-960). A Reference Calibration and Test Material (RCTM) was prepared from SRM-960 uranium metal to approximate the acid and uranium concentration of the WCTM. This solution was used in the characterization procedure. Details of preparing, handling, and packaging these solutions are covered. Two outside laboratories, each having measurement expertise using a different analytical method, were selected to measure both solutions according to the procedure for characterizing the WCTM. Two different methods were also used for the in-house characterization work. All analytical results were tested for statistical agreement before the WCTM concentration and limit-of-error values were calculated. A concentration value was determined with a relative limit of error (RLE) of approximately 0.03%, which was better than the target RLE of 0.08%. The use of this working material eliminates the expense of using SRMs to fulfill traceability requirements for uranium measurements on this type of material. Several years' supply of uranyl nitrate solution with NBS traceability was produced. The cost of this material was less than 10% of an equal quantity of SRM-960 uranium metal.
An introduction of component fusion extend Kalman filtering method
NASA Astrophysics Data System (ADS)
Geng, Yue; Lei, Xusheng
2018-05-01
In this paper, the Component Fusion Extended Kalman Filtering (CFEKF) algorithm is proposed, assuming that each component of the propagated error is independent and Gaussian distributed. The CFEKF is obtained through maximum likelihood of the propagated error, which allows the state transition matrix and the measurement matrix to be adjusted adaptively. By minimizing the linearization error, CFEKF can effectively improve the state estimation accuracy of a nonlinear system. The computation of CFEKF is similar to that of the EKF, which makes it easy to apply.
An experimental study of fault propagation in a jet-engine controller. M.S. Thesis
NASA Technical Reports Server (NTRS)
Choi, Gwan Seung
1990-01-01
An experimental analysis of the impact of transient faults on a microprocessor-based jet engine controller, used in the Boeing 747 and 757 aircraft, is described. A hierarchical simulation environment which allows the injection of transients during run-time and the tracing of their impact is described. Verification of the accuracy of this approach is also provided. A determination is made of the probability that a transient results in latch, pin or functional errors. Given a transient fault, there is approximately an 80 percent chance that there is no impact on the chip. An empirical model to depict the process of error exploration and degeneration in the target system is derived. The model shows that, if no latch errors occur within eight clock cycles, no significant damage is likely to happen. Thus, the overall impact of a transient is well contained. A state transition model is also derived from the measured data, to describe the error propagation characteristics within the chip, and to quantify the impact of transients on the external environment. The model is used to identify and isolate the critical fault propagation paths, the module most sensitive to fault propagation and the module with the highest potential of causing external pin errors.
Numerical ‘health check’ for scientific codes: the CADNA approach
NASA Astrophysics Data System (ADS)
Scott, N. S.; Jézéquel, F.; Denis, C.; Chesneaux, J.-M.
2007-04-01
Scientific computation has unavoidable approximations built into its very fabric. One important source of error that is difficult to detect and control is round-off error propagation which originates from the use of finite precision arithmetic. We propose that there is a need to perform regular numerical 'health checks' on scientific codes in order to detect the cancerous effect of round-off error propagation. This is particularly important in scientific codes that are built on legacy software. We advocate the use of the CADNA library as a suitable numerical screening tool. We present a case study to illustrate the practical use of CADNA in scientific codes that are of interest to the Computer Physics Communications readership. In doing so we hope to stimulate a greater awareness of round-off error propagation and present a practical means by which it can be analyzed and managed.
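CADNA itself is a library for Fortran/C/C++ codes; the round-off sensitivity it is designed to detect can nevertheless be illustrated in a few lines by evaluating an algebraically identical expression in two different orders:

```python
# Two algebraically equivalent evaluation orders of 1e16 + 1.0 - 1e16.
# In IEEE-754 double precision the spacing between representable numbers
# near 1e16 is 2, so adding 1.0 to 1e16 is rounded away entirely.
a = (1e16 + 1.0) + -1e16   # the 1.0 is absorbed into 1e16 first
b = (1e16 + -1e16) + 1.0   # the large terms cancel first; exact result

print(a, b)  # the disagreement between a and b is pure round-off error
```

Tools like CADNA automate this idea (discrete stochastic arithmetic) by running each operation several times with randomized rounding and reporting how many result digits the runs share.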
Uncertainty Propagation in OMFIT
NASA Astrophysics Data System (ADS)
Smith, Sterling; Meneghini, Orso; Sung, Choongki
2017-10-01
A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
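The covariant propagation the abstract describes can be sketched by hand with the delta method, which is the calculation the python `uncertainties` package automates; the parameter values and covariance matrix below are made up for illustration:

```python
import numpy as np

# Delta-method propagation of covariant fit-parameter uncertainty through a
# derived quantity f(a, b) = a*b (standing in for, e.g., a profile gradient).
# With gradient g = (df/da, df/db) = (b, a), var(f) = g @ cov @ g.
nominal = np.array([1.0, 0.5])          # fitted parameter values (illustrative)
cov = np.array([[0.04, 0.01],           # covariance from the profile fit
                [0.01, 0.09]])
a, b = nominal
g = np.array([b, a])                    # gradient of f = a*b
var_f = g @ cov @ g                     # includes the off-diagonal covariance

print(a * b, np.sqrt(var_f))            # nominal value and propagated sigma
```

Ignoring the off-diagonal covariance term here would understate the uncertainty, which is why covariant (not merely independent) propagation matters for fitted profiles.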
Amiralizadeh, Siamak; Nguyen, An T; Rusch, Leslie A
2013-08-26
We investigate the performance of digital filter back-propagation (DFBP) using coarse parameter estimation for mitigating SOA nonlinearity in coherent communication systems. We introduce a simple, low-overhead method of parameter estimation for DFBP based on error vector magnitude (EVM) as a figure of merit. The bit error rate (BER) achieved with this method has negligible penalty compared to DFBP with fine parameter estimation. We examine different bias currents for two commercial SOAs used as booster amplifiers in our experiments to find optimum operating points and experimentally validate our method. The coarse-parameter DFBP efficiently compensates SOA-induced nonlinearity for both SOA types over 80 km propagation of a 16-QAM signal at 22 Gbaud.
A wide-angle high Mach number modal expansion for infrasound propagation.
Assink, Jelle; Waxler, Roger; Velea, Doru
2017-03-01
The use of modal expansions to solve the problem of atmospheric infrasound propagation is revisited. A different form of the associated modal equation is introduced, valid for wide-angle propagation in atmospheres with high Mach number flow. The modal equation can be formulated as a quadratic eigenvalue problem for which there are simple and efficient numerical implementations. A perturbation expansion for the treatment of attenuation, valid for stratified media with background flow, is derived as well. Comparisons are carried out between the proposed algorithm and a modal algorithm assuming an effective sound speed, including a real data case study. The comparisons show that the effective sound speed approximation overestimates the effect of horizontal wind on sound propagation, leading to errors in traveltime, propagation path, trace velocity, and absorption. The error is found to be dependent on propagation angle and Mach number.
Error propagation in eigenimage filtering.
Soltanian-Zadeh, H; Windham, J P; Jenkins, J M
1990-01-01
Mathematical derivation of error (noise) propagation in eigenimage filtering is presented. Based on the mathematical expressions, a method for decreasing the propagated noise given a sequence of images is suggested. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the final composite image are compared to the SNRs and CNRs of the images in the sequence. The consistency of the assumptions and accuracy of the mathematical expressions are investigated using sequences of simulated and real magnetic resonance (MR) images of an agarose phantom and a human brain.
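Since eigenimage filtering forms the composite image as a weighted linear combination of the images in the sequence, the noise propagation can be checked empirically; the weights and noise levels below are illustrative, not taken from the paper:

```python
import numpy as np

# For a composite formed as sum_i w_i * image_i with independent zero-mean
# noise of standard deviation sigma_i in each image, propagation of error
# predicts composite noise variance sum_i w_i^2 * sigma_i^2.
rng = np.random.default_rng(2)
w = np.array([0.6, -0.3, 0.1])          # eigenimage filter weights (illustrative)
sigma = np.array([1.0, 1.5, 2.0])       # per-image noise s.d. (illustrative)

predicted = np.sqrt(np.sum(w**2 * sigma**2))

# Empirical check on pure-noise "images" (flattened to 1-D samples):
noise = rng.normal(0.0, sigma[:, None], (3, 100000))
composite = (w[:, None] * noise).sum(axis=0)

print(predicted, composite.std())       # the two should agree closely
```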
Thermal properties of nonstoichiometry uranium dioxide
NASA Astrophysics Data System (ADS)
Kavazauri, R.; Pokrovskiy, S. A.; Baranov, V. G.; Tenishev, A. V.
2016-04-01
In this paper, a method was developed for oxidizing pure uranium dioxide to a predetermined deviation from stoichiometry. Oxidation was carried out using the thermogravimetric method on a NETZSCH STA 409 CD with a solid-electrolyte galvanic cell for controlling the oxygen potential of the environment. Four uranium oxide samples were obtained with different oxygen-to-metal ratios: O/U = 2.002, 2.005, 2.015, and 2.033. For the obtained samples, the basic thermal characteristics were determined: heat capacity, thermal diffusivity, and thermal conductivity. The error of the heat capacity determination is 5%. Thermal diffusivity and thermal conductivity of the samples decreased with increasing deviation from stoichiometry. For the sample with O/M = 2.033, the difference of both values from those of stoichiometric uranium dioxide is close to 50%.
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1981-01-01
Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
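A minimal interval-arithmetic sketch of the idea (the article's examples use INTLAB, a MATLAB toolbox; this toy class also omits the outward rounding a rigorous implementation needs):

```python
from dataclasses import dataclass

# Every operation widens the interval just enough to enclose all results
# attainable from any combination of operand values.
@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

# x = 2 +/- 0.1 and y = 3 +/- 0.2 as intervals; evaluate x*y + x.
x = Interval(1.9, 2.1)
y = Interval(2.8, 3.2)
z = x * y + x

print(z.lo, z.hi)  # bounds enclosing every possible value of x*y + x
```

The interval bounds play the role of the propagated error: no separate derivative bookkeeping is needed, which is the "much less effort" the abstract refers to.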
On the error propagation of semi-Lagrange and Fourier methods for advection problems
Einkemmer, Lukas; Ostermann, Alexander
2015-01-01
In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley–Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme. PMID:25844018
Stoliker, Deborah L; Kent, Douglas B; Zachara, John M
2011-10-15
A practical method of estimating standard error of age in the fission track dating method
Johnson, N.M.; McGee, V.E.; Naeser, C.W.
1979-01-01
A first-order approximation formula for the propagation of error in the fission track age equation is given by P_A = C[P_s² + P_i² + P_Φ² − 2rP_sP_i]^(1/2), where P_A, P_s, P_i and P_Φ are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method.
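As a hedged illustration, the first-order formula above can be turned into a small helper. The function name and the default C = 1 are assumptions for this sketch, not part of the original method.

```python
import math

def fission_track_age_error(p_s, p_i, p_phi, r, c=1.0):
    """Percentage error of a fission-track age, first-order propagation.

    p_s, p_i, p_phi: percentage errors of spontaneous track density,
    induced track density, and neutron dose; r: correlation between
    spontaneous and induced track densities; c: proportionality constant.
    """
    return c * math.sqrt(p_s**2 + p_i**2 + p_phi**2 - 2.0 * r * p_s * p_i)
```

Note how a positive correlation r reduces the propagated age error, consistent with the abstract's observation that r acts generally to improve the standard error of age.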
Error Analysis and Validation for Insar Height Measurement Induced by Slant Range
NASA Astrophysics Data System (ADS)
Zhang, X.; Li, T.; Fan, W.; Geng, X.
2018-04-01
InSAR is an important technique for large-area DEM extraction. Several factors have a significant influence on the accuracy of height measurement. In this research, the effect of slant range measurement error on InSAR height measurement was analyzed and discussed. Based on the theory of InSAR height measurement, an error propagation model was derived assuming no coupling among the different factors, which directly characterises the relationship between slant range error and height measurement error. A theoretical analysis in combination with TanDEM-X parameters was then carried out to quantitatively evaluate the influence of slant range error on height measurement. In addition, a simulation validation of the InSAR error model induced by slant range was performed on the basis of SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation rule of InSAR height measurement are further discussed and evaluated.
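The abstract does not reproduce the derived propagation model, but a much-simplified flat-geometry sketch conveys how a slant range error maps into a height error: with h = H − r cos θ (θ the look angle), a first-order perturbation gives |δh| ≈ cos θ · |δr|. This is an illustrative assumption only; the actual TanDEM-X model involves baseline and interferometric terms not shown here.

```python
import math

def height_error_from_slant_range(delta_r, look_angle_deg):
    # Simplified flat-geometry sensitivity: h = H - r*cos(theta),
    # so a slant range error delta_r maps to cos(theta)*delta_r in height.
    theta = math.radians(look_angle_deg)
    return math.cos(theta) * delta_r
```

Under this toy model, steeper look angles attenuate the height error contributed by a given slant range error.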
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
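A minimal sketch of the Monte Carlo approach described above, assuming a simple straight-line model in place of the nasal spray DOE models: noisy responses are simulated many times, the model is refit each time, and the spread of the fitted coefficient gives its propagated uncertainty. All names and numbers are illustrative.

```python
import random
import statistics

def fit_line(xs, ys):
    # Ordinary least squares for y = b0 + b1*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

def monte_carlo_coeff_sd(xs, b0, b1, noise_sd, n_trials=2000, seed=1):
    # Propagate response-measurement noise into slope uncertainty
    rng = random.Random(seed)
    b1_samples = []
    for _ in range(n_trials):
        ys = [b0 + b1 * x + rng.gauss(0.0, noise_sd) for x in xs]
        b1_samples.append(fit_line(xs, ys)[1])
    return statistics.stdev(b1_samples)
```

For a linear model the Monte Carlo spread should approach the analytic value noise_sd / sqrt(Sxx), which offers a quick sanity check of the simulation.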
NASA Technical Reports Server (NTRS)
Goodrich, John W.
2009-01-01
In this paper we show by means of numerical experiments that the error introduced in a numerical domain because of a Perfectly Matched Layer or Damping Layer boundary treatment can be controlled. These experimental demonstrations are for acoustic propagation with the Linearized Euler Equations with both uniform and steady jet flows. The propagating signal is driven by a time harmonic pressure source. Combinations of Perfectly Matched and Damping Layers are used with different damping profiles. These layer and profile combinations allow the relative error introduced by a layer to be kept as small as desired, in principle. Tradeoffs between error and cost are explored.
Running coupling constant from lattice studies of gluon and ghost propagators
NASA Astrophysics Data System (ADS)
Cucchieri, A.; Mendes, T.
2004-12-01
We present a numerical study of the running coupling constant in four-dimensional pure-SU(2) lattice gauge theory. The running coupling is evaluated by fitting data for the gluon and ghost propagators in minimal Landau gauge. Following Refs. [1, 2], the fitting formulae are obtained by a simultaneous integration of the β function and of a function coinciding with the anomalous dimension of the propagator in the momentum subtraction scheme. We consider these formulae at three and four loops. The fitting method works well, especially for the ghost case, for which statistical error and hyper-cubic effects are very small. Our present result for Λ_MS is 200 (+60, −40) MeV, where the error is purely systematic. We are currently extending this analysis to five loops in order to reduce this systematic error.
Moore, Michael D; Shi, Zhenqi; Wildfong, Peter L D
2010-12-01
To develop a method for drawing statistical inferences from differences between multiple experimental pair distribution function (PDF) transforms of powder X-ray diffraction (PXRD) data. The appropriate treatment of initial PXRD error estimates using traditional error propagation algorithms was tested using Monte Carlo simulations on amorphous ketoconazole. An amorphous felodipine:polyvinyl pyrrolidone:vinyl acetate (PVPva) physical mixture was prepared to define an error threshold. Co-solidified products of felodipine:PVPva and terfenadine:PVPva were prepared using a melt-quench method and subsequently analyzed using PXRD and PDF. Differential scanning calorimetry (DSC) was used as an additional characterization method. The appropriate manipulation of initial PXRD error estimates through the PDF transform was confirmed using the Monte Carlo simulations for amorphous ketoconazole. The felodipine:PVPva physical mixture PDF analysis determined ±3σ to be an appropriate error threshold. Using the PDF and error propagation principles, the felodipine:PVPva co-solidified product was determined to be completely miscible, and the terfenadine:PVPva co-solidified product, although having the appearance of an amorphous molecular solid dispersion by DSC, was determined to be phase-separated. Statistically based inferences were successfully drawn from PDF transforms of PXRD patterns obtained from composite systems. The principles applied herein may be universally adapted to many different systems and provide a fundamentally sound basis for drawing structural conclusions from PDF studies.
NASA Astrophysics Data System (ADS)
Vicent, Jorge; Alonso, Luis; Sabater, Neus; Miesch, Christophe; Kraft, Stefan; Moreno, Jose
2015-09-01
The uncertainties in the knowledge of the Instrument Spectral Response Function (ISRF), barycenter of the spectral channels, and bandwidth / spectral sampling (spectral resolution) are important error sources in the processing of satellite imaging spectrometers within narrow atmospheric absorption bands. Exhaustive laboratory spectral characterization is a costly engineering process, and the resulting characterization differs from the instrument configuration in-flight given the harsh space environment and harmful launching phase. The retrieval schemes at Level-2 commonly assume a Gaussian ISRF, leading to uncorrected spectral stray-light effects and incorrect characterization and correction of the spectral shift and smile. These effects produce inaccurate atmospherically corrected data and are propagated to the final Level-2 mission products. Within ESA's FLEX satellite mission activities, the impact of the ISRF knowledge error and spectral calibration on Level-1 products and its propagation to Level-2 retrieved chlorophyll fluorescence has been analyzed. A spectral recalibration scheme has been implemented at Level-2, reducing the errors in Level-1 products below the 10% error threshold in retrieved fluorescence within the oxygen absorption bands and enhancing the quality of the retrieved products. The work presented here shows how the minimization of the spectral calibration errors requires an effort both for the laboratory characterization and for the implementation of specific algorithms at Level-2.
Spaceborne Differential GPS Applications
2000-02-17
passive vehicle to the relative filter. The Clohessy-Wiltshire equations are used for state and error propagation. This filter has been designed using...such as the satellite clock error. Furthermore, directly estimating a relative state allows the use of the Clohessy-Wiltshire (CW) equations...allows the use of the Clohessy-Wiltshire (CW) equations for state and error propagation. In fact, in its current form the relative filter requires no
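The Clohessy-Wiltshire equations mentioned in the excerpt have a well-known closed-form solution for relative motion about a circular reference orbit. The sketch below uses a common axis convention (x radial, y along-track, z cross-track), which may differ from the conventions of the filter described above; it is a generic textbook solution, not the document's implementation.

```python
import math

def cw_propagate(state, n, t):
    """Propagate a relative state with the closed-form CW solution.

    state: (x, y, z, vx, vy, vz); n: mean motion of the reference
    orbit (rad/s); t: propagation time (s).
    """
    x, y, z, vx, vy, vz = state
    s, c = math.sin(n * t), math.cos(n * t)
    xt = (4 - 3 * c) * x + (s / n) * vx + (2 / n) * (1 - c) * vy
    yt = (6 * (s - n * t) * x + y
          + (2 / n) * (c - 1) * vx + (1 / n) * (4 * s - 3 * n * t) * vy)
    zt = c * z + (s / n) * vz
    vxt = 3 * n * s * x + c * vx + 2 * s * vy
    vyt = 6 * n * (c - 1) * x - 2 * s * vx + (4 * c - 3) * vy
    vzt = -n * s * z + c * vz
    return (xt, yt, zt, vxt, vyt, vzt)
```

The radial and cross-track components are periodic with the orbit, while the along-track component drifts secularly, which is why relative-navigation filters can use these equations for cheap state and covariance propagation.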
Estimation and Simulation of Slow Crack Growth Parameters from Constant Stress Rate Data
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Weaver, Aaron S.
2003-01-01
Closed form, approximate functions for estimating the variances and degrees-of-freedom associated with the slow crack growth parameters n, D, B, and A* as measured using constant stress rate ('dynamic fatigue') testing were derived by using propagation of errors. Estimates made with the resulting functions and slow crack growth data for a sapphire window were compared to the results of Monte Carlo simulations. The functions for estimation of the variances of the parameters were derived both with and without logarithmic transformation of the initial slow crack growth equations. The transformation was performed to make the functions both more linear and more normal. Comparison of the Monte Carlo results and the closed form expressions derived with propagation of errors indicated that linearization is not required for good estimates of the variances of parameters n and D by the propagation of errors method. However, good estimates of the variances of the parameters B and A* could only be made when the starting slow crack growth equation was transformed and the coefficients of variation of the input parameters were not too large. This was partially a result of the skewed distributions of B and A*. Parametric variation of the input parameters was used to determine an acceptable range for using closed form approximate equations derived from propagation of errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, S; Chao, C; Columbia University, NY, NY
2014-06-01
Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize that the calibration is more vulnerable to positioning error for flattening-filter-free (FFF) beams than for conventional flattened beams. Methods: MapCheck2 was calibrated with 10 MV conventional and FFF beams, with careful alignment and with 1 cm positioning error during calibration, respectively. Open fields of 37 cm x 37 cm were delivered to gauge the impact of the resultant calibration errors. The local calibration error was modeled as a detector-independent multiplication factor, with which the propagation error was estimated for positioning errors from 1 mm to 1 cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on beam type. Results: The 1 cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams, respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagation error increases almost linearly with the positioning error. The difference in sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that the positioning error is not handled by the current commercial calibration algorithm of MapCheck. In particular, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and a small 1 mm positioning error might lead to up to 8% calibration error.
Since the sensitivities are only slightly dependent on the beam type and the conventional beam is less affected by the positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect potential calibration errors due to inaccurate positioning. This work was partially supported by DOD Grant No. DOD W81XWH1010862.
Mironov, Vladislav P; Matusevich, Janna L; Kudrjashov, Vladimir P; Boulyga, Sergei F; Becker, J Sabine
2002-12-01
This work presents experimental results on the distribution of irradiated reactor uranium from fallout after the accident at Chernobyl Nuclear Power Plant (NPP) in comparison to natural uranium distribution in different soil types. Oxidation processes and vertical migration of irradiated uranium in soils typical of the 30 km relocation area around Chernobyl NPP were studied using 236U as the tracer for irradiated reactor uranium and inductively coupled plasma mass spectrometry as the analytical method for uranium isotope ratio measurements. Measurements of natural uranium yielded significant variations of its concentration in upper soil layers from 2 x 10⁻⁷ g g⁻¹ to 3.4 x 10⁻⁶ g g⁻¹. Concentrations of irradiated uranium in the upper 0-10 cm soil layers at the investigated sampling sites varied from 5 x 10⁻¹² g g⁻¹ to 2 x 10⁻⁶ g g⁻¹ depending on the distance from Chernobyl NPP. In the majority of investigated soil profiles 78% to 97% of irradiated "Chernobyl" uranium is still contained in the upper 0-10 cm soil layers. The physical and chemical characteristics of the soil do not have any significant influence on processes of fuel particle destruction. Results obtained using carbonate leaching of 236U confirmed that more than 60% of irradiated "Chernobyl" uranium is still in a tetravalent form, i.e., it is included in the fuel matrix (non-oxidized fuel UO₂). The average values of the destruction rate of fuel particles determined for the Western radioactive trace (k = 0.030 ± 0.005 yr⁻¹) and for the Northern radioactive trace (k = 0.035 ± 0.009 yr⁻¹) coincide within experimental errors. Use of leaching of fission products in comparison to leaching of uranium for studying the destruction rate of fuel particles yielded poor coincidence, due to the fact that the use of fission products does not take into account differences in the chemical properties of the fission products and the fuel matrix (uranium).
A neural fuzzy controller learning by fuzzy error propagation
NASA Technical Reports Server (NTRS)
Nauck, Detlef; Kruse, Rudolf
1992-01-01
In this paper, we describe a procedure to integrate techniques for the adaptation of membership functions in a linguistic-variable-based fuzzy control environment by using neural network learning principles. This is an extension of our previous work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its amount of error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. By this we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task by knowing just about the global state and the fuzzy error.
NASA Astrophysics Data System (ADS)
Kung, Wei-Ying; Kim, Chang-Su; Kuo, C.-C. Jay
2004-10-01
A multi-hypothesis motion compensated prediction (MHMCP) scheme, which predicts a block from a weighted superposition of more than one reference blocks in the frame buffer, is proposed and analyzed for error resilient visual communication in this research. By combining these reference blocks effectively, MHMCP can enhance the error resilient capability of compressed video as well as achieve a coding gain. In particular, we investigate the error propagation effect in the MHMCP coder and analyze the rate-distortion performance in terms of the hypothesis number and hypothesis coefficients. It is shown that MHMCP suppresses the short-term effect of error propagation more effectively than the intra refreshing scheme. Simulation results are given to confirm the analysis. Finally, several design principles for the MHMCP coder are derived based on the analytical and experimental results.
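A toy sketch of the MHMCP idea described above: each block is predicted as a weighted superposition of reference blocks, so an error confined to one hypothesis is attenuated by that hypothesis's weight, which is the mechanism behind the suppressed short-term error propagation. The weights and block values below are invented for illustration.

```python
def mhmcp_predict(blocks, weights):
    # Predict each pixel as a weighted superposition of reference blocks
    return [sum(w * b[i] for w, b in zip(weights, blocks))
            for i in range(len(blocks[0]))]
```

With two equally weighted hypotheses, a +10 corruption in one reference shows up as only +5 in the prediction, instead of propagating at full strength as it would with a single reference.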
Kotasidis, F A; Mehranian, A; Zaidi, H
2016-05-07
Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. 
Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.
The importance of robust error control in data compression applications
NASA Technical Reports Server (NTRS)
Woolley, S. I.
1993-01-01
Data compression has become an increasingly popular option as advances in information technology have placed further demands on data storage capabilities. With compression ratios as high as 100:1 the benefits are clear; however, the inherent intolerance of many compression formats to error events should be given careful consideration. If we consider that efficiently compressed data will ideally contain no redundancy, then the introduction of a channel error must result in a change of understanding from that of the original source. While the prefix property of codes such as Huffman enables resynchronisation, this is not sufficient to arrest propagating errors in an adaptive environment. Arithmetic, Lempel-Ziv, discrete cosine transform (DCT) and fractal methods are similarly prone to error-propagating behaviour. It is, therefore, essential that compression implementations provide sufficiently robust error control in order to maintain data integrity. Ideally, this control should be derived from a full understanding of the prevailing error mechanisms and their interaction with both the system configuration and the compression schemes in use.
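The error-propagation hazard described above can be demonstrated with a toy prefix code in the Huffman style: a single flipped bit changes the decoded message from the flip onward, even though the prefix property eventually allows the decoder to resynchronise. The code table is invented for illustration, not taken from any particular compressor.

```python
CODE = {'a': '0', 'b': '10', 'c': '11'}   # toy prefix (Huffman-style) code
DECODE = {v: k for k, v in CODE.items()}

def encode(text):
    return ''.join(CODE[ch] for ch in text)

def decode(bits):
    # Greedy prefix decoding: emit a symbol as soon as the buffer matches
    out, buf = [], ''
    for b in bits:
        buf += b
        if buf in DECODE:
            out.append(DECODE[buf])
            buf = ''
    return ''.join(out)

def flip(bits, i):
    # Simulate a single channel bit error at position i
    return bits[:i] + ('1' if bits[i] == '0' else '0') + bits[i + 1:]
```

An error-free round trip recovers the message exactly, while one corrupted bit yields a decoded string that differs from the source, illustrating why compressed streams need their own error control.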
Propagation of coherent light pulses with PHASE
NASA Astrophysics Data System (ADS)
Bahrdt, J.; Flechsig, U.; Grizzoli, W.; Siewert, F.
2014-09-01
The current status of the software package PHASE for the propagation of coherent light pulses along a synchrotron radiation beamline is presented. PHASE is based on an asymptotic expansion of the Fresnel-Kirchhoff integral (stationary phase approximation) which is usually truncated at the 2nd order. The limits of this approximation as well as possible extensions to higher orders are discussed. The accuracy is benchmarked against a direct integration of the Fresnel-Kirchhoff integral. Long range slope errors of optical elements can be included by means of 8th order polynomials in the optical element coordinates w and l. Only recently, a method for the description of short range slope errors has been implemented. The accuracy of this method is evaluated and examples for realistic slope errors are given. PHASE can be run either from a built-in graphical user interface or from any script language. The latter method provides substantial flexibility. Optical elements including apertures can be combined. Complete wave packages can be propagated, as well. Fourier propagators are included in the package, thus, the user may choose between a variety of propagators. Several means to speed up the computation time were tested - among them are the parallelization in a multi core environment and the parallelization on a cluster.
Sittig, D. F.; Orr, J. A.
1991-01-01
Various methods have been proposed in an attempt to solve problems in artifact and/or alarm identification including expert systems, statistical signal processing techniques, and artificial neural networks (ANN). ANNs consist of a large number of simple processing units connected by weighted links. To develop truly robust ANNs, investigators are required to train their networks on huge training data sets, requiring enormous computing power. We implemented a parallel version of the backward error propagation neural network training algorithm in the widely portable parallel programming language C-Linda. A maximum speedup of 4.06 was obtained with six processors. This speedup represents a reduction in total run-time from approximately 6.4 hours to 1.5 hours. We conclude that use of the master-worker model of parallel computation is an excellent method for obtaining speedups in the backward error propagation neural network training algorithm. PMID:1807607
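A serial sketch of the master-worker gradient split behind the parallel backward-error-propagation training described above, reduced to a single linear unit for brevity. The C-Linda implementation distributes the shards to real workers; here the workers are emulated in one process, and all names are illustrative.

```python
def gradient(w, b, shard):
    # Gradient of summed squared error for a linear unit y = w*x + b
    gw = gb = 0.0
    for x, t in shard:
        err = (w * x + b) - t
        gw += 2 * err * x
        gb += 2 * err
    return gw, gb

def master_worker_gradient(w, b, data, n_workers):
    # Master splits the batch; each "worker" computes a partial gradient
    # on its shard; the master sums the partials.
    shards = [data[i::n_workers] for i in range(n_workers)]
    parts = [gradient(w, b, s) for s in shards]
    return sum(p[0] for p in parts), sum(p[1] for p in parts)
```

Because the batch gradient is a sum over training examples, the sharded computation reproduces the full-batch gradient exactly, which is what makes the master-worker model attractive for speeding up backpropagation training.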
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors to detect faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
Plume propagation direction determination with SO2 cameras
NASA Astrophysics Data System (ADS)
Klein, Angelika; Lübcke, Peter; Bobrowski, Nicole; Kuhn, Jonas; Platt, Ulrich
2017-03-01
SO2 cameras are becoming an established tool for measuring sulfur dioxide (SO2) fluxes in volcanic plumes with good precision and high temporal resolution. The primary result of SO2 camera measurements are time series of two-dimensional SO2 column density distributions (i.e. SO2 column density images). However, it is frequently overlooked that, in order to determine the correct SO2 fluxes, not only the SO2 column density, but also the distance between the camera and the volcanic plume, has to be precisely known. This is because cameras only measure angular extents of objects while flux measurements require knowledge of the spatial plume extent. The distance to the plume may vary within the image array (i.e. the field of view of the SO2 camera) since the plume propagation direction (i.e. the wind direction) might not be parallel to the image plane of the SO2 camera. If the wind direction and thus the camera-plume distance are not well known, this error propagates into the determined SO2 fluxes and can cause errors exceeding 50 %. This is a source of error which is independent of the frequently quoted (approximate) compensation of apparently higher SO2 column densities and apparently lower plume propagation velocities at non-perpendicular plume observation angles. Here, we propose a new method to estimate the propagation direction of the volcanic plume directly from SO2 camera image time series by analysing apparent flux gradients along the image plane. From the plume propagation direction and the known location of the SO2 source (i.e. volcanic vent) and camera position, the camera-plume distance can be determined. Besides being able to determine the plume propagation direction and thus the wind direction in the plume region directly from SO2 camera images, we additionally found that it is possible to detect changes of the propagation direction at a time resolution of the order of minutes.
In addition to theoretical studies we applied our method to SO2 flux measurements at Mt Etna and demonstrate that we obtain considerably more precise (up to a factor of 2 error reduction) SO2 fluxes. We conclude that studies on SO2 flux variability become more reliable by excluding the possible influences of propagation direction variations.
Determination of uranium in natural waters
Barker, Franklin Butt; Johnson, J.O.; Edwards, K.W.; Robinson, B.P.
1965-01-01
A method is described for the determination of very low concentrations of uranium in water. The method is based on the fluorescence of uranium in a pad prepared by fusion of the dried solids from the water sample with a flux of 10 percent NaF, 45.5 percent Na2CO3, and 45.5 percent K2CO3. This flux permits use of a low fusion temperature and yields pads which are easily removed from the platinum fusion dishes for fluorescence measurements. Uranium concentrations of less than 1 microgram per liter can be determined on a sample of 10 milliliters, or less. The sensitivity and accuracy of the method are dependent primarily on the purity of reagents used, the stability and linearity of the fluorimeter, and the concentration of quenching elements in the water residue. A purification step is recommended when the fluorescence is quenched by more than 30 percent. Equations are given for the calculation of standard deviations of analyses by this method. Graphs of error functions and representative data are also included.
Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error
NASA Astrophysics Data System (ADS)
Jung, Insung; Koo, Lockjo; Wang, Gi-Nam
2008-11-01
The objective of this paper was to design a human bio-signal data prediction system that decreases prediction error using a two-states-mapping-based time series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm are widely applied in industry for time series prediction. However, a residual error remains between the real value and the prediction result. Therefore, we designed a two-state neural network model to compensate for this residual error, which could be used in the prevention of sudden death and of metabolic syndrome diseases such as hypertension and obesity. We determined that most of the simulation cases were satisfied by the two-states-mapping-based time series prediction model. In particular, for small time series sample sizes it was more accurate than the standard MLP model.
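The two-stage idea above — fit a first model, then fit a second model to its residuals and add the two predictions — can be sketched with simple polynomial fits standing in for the back-propagation networks; the data and model choices are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
y = np.sin(t) + 0.1 * rng.standard_normal(200)  # noisy "bio-signal"

# Stage 1: a deliberately limited model (a cubic polynomial standing in for
# the first BP network) leaves a residual error.
stage1 = np.polyval(np.polyfit(t, y, 3), t)
residual = y - stage1

# Stage 2: a second model fitted to the residuals compensates stage 1.
stage2 = np.polyval(np.polyfit(t, residual, 9), t)
combined = stage1 + stage2

err1 = np.mean((y - stage1) ** 2)
err2 = np.mean((y - combined) ** 2)
print(bool(err2 < err1))  # True: the compensated prediction has lower error
```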
Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude
NASA Technical Reports Server (NTRS)
Sedlak, J.
1994-01-01
Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
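The roughly-halved covariance follows from inverse-variance weighting of the forward and backward filter estimates; a scalar sketch with toy numbers (not EUVE data):

```python
def combine(x_fwd, p_fwd, x_bwd, p_bwd):
    """Optimally weighted average of forward and backward Kalman estimates."""
    w_fwd = p_bwd / (p_fwd + p_bwd)            # inverse-variance weight
    x_smooth = w_fwd * x_fwd + (1 - w_fwd) * x_bwd
    p_smooth = 1.0 / (1.0 / p_fwd + 1.0 / p_bwd)
    return x_smooth, p_smooth

x, p = combine(1.2, 0.04, 0.8, 0.04)
print(round(x, 6), round(p, 6))  # 1.0 0.02 -- equal variances: covariance halved
```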
Hoffmann, Sabine; Rage, Estelle; Laurier, Dominique; Laroche, Pierre; Guihenneuc, Chantal; Ancelet, Sophie
2017-02-01
Many occupational cohort studies on underground miners have demonstrated that radon exposure is associated with an increased risk of lung cancer mortality. However, despite the deleterious consequences of exposure measurement error on statistical inference, these analyses traditionally do not account for exposure uncertainty. This might be due to the challenging nature of measurement error resulting from imperfect surrogate measures of radon exposure. Indeed, we are typically faced with exposure uncertainty in a time-varying exposure variable where both the type and the magnitude of error may depend on period of exposure. To address the challenge of accounting for multiplicative and heteroscedastic measurement error that may be of Berkson or classical nature, depending on the year of exposure, we opted for a Bayesian structural approach, which is arguably the most flexible method to account for uncertainty in exposure assessment. We assessed the association between occupational radon exposure and lung cancer mortality in the French cohort of uranium miners and found the impact of uncorrelated multiplicative measurement error to be of marginal importance. However, our findings indicate that the retrospective nature of exposure assessment that occurred in the earliest years of mining of this cohort as well as many other cohorts of underground miners might lead to an attenuation of the exposure-risk relationship. More research is needed to address further uncertainties in the calculation of lung dose, since this step will likely introduce important sources of shared uncertainty.
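The distinction between classical and Berkson error, and the attenuation described above, can be illustrated by simulation; the linear model, normal errors, and seed below are illustrative assumptions, not the cohort's actual exposure-risk model.

```python
import random
random.seed(1)

n, beta = 20000, 2.0

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return sxy / sum((a - mx) ** 2 for a in xs)

# Classical error: the surrogate scatters around the true exposure,
# attenuating the regression slope toward zero.
x_true = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [beta * x + random.gauss(0.0, 0.5) for x in x_true]
x_classical = [x + random.gauss(0.0, 1.0) for x in x_true]
slope_classical = slope(x_classical, y)

# Berkson error: the true exposure scatters around the assigned value,
# leaving the slope approximately unbiased.
w = [random.gauss(0.0, 1.0) for _ in range(n)]
y_berkson = [beta * (wi + random.gauss(0.0, 1.0)) + random.gauss(0.0, 0.5)
             for wi in w]
slope_berkson = slope(w, y_berkson)

print(slope_classical < 1.5 < slope_berkson)  # True: attenuation only for classical error
```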
Radiometric analysis of the longwave infrared channel of the Thematic Mapper on LANDSAT 4 and 5
NASA Technical Reports Server (NTRS)
Schott, John R.; Volchok, William J.; Biegel, Joseph D.
1986-01-01
The first objective was to evaluate the postlaunch radiometric calibration of the LANDSAT Thematic Mapper (TM) band 6 data. The second objective was to determine to what extent surface temperatures could be computed from the TM band 6 data using atmospheric propagation models. To accomplish this, ground truth data were compared to a single TM-4 band 6 data set. This comparison indicated satisfactory agreement over a narrow temperature range. The atmospheric propagation model (modified LOWTRAN 5A) was used to predict surface temperature values based on the radiance at the spacecraft. The aircraft data were calibrated using a multi-altitude profile calibration technique which had been extensively tested in previous studies. This aircraft calibration permitted measurement of surface temperatures based on the radiance reaching the aircraft. When these temperature values are evaluated, an error in the satellite's ability to predict surface temperatures can be estimated. This study indicated that by carefully accounting for various sensor calibration and atmospheric propagation effects, an expected error (1 standard deviation) in surface temperature would be 0.9 K. This assumes no error in surface emissivity and no sampling error due to target location. These results indicate that the satellite calibration is within nominal limits, to within this study's ability to measure error.
Toward Error Analysis of Large-Scale Forest Carbon Budgets
Quantification of forest carbon sources and sinks is an important part of national inventories of net greenhouse gas emissions. Several such forest carbon budgets have been constructed, but little effort has been made to analyse the sources of error and how these errors propagate...
SR-XFA of uranium-containing materials. A case of Bazhenov formation rocks exploration
NASA Astrophysics Data System (ADS)
Phedorin, M. A.; Bobrov, V. A.; Tchebykin, Ye. P.; Melgunov, M. S.
2000-06-01
When X-ray fluorescence analysis (XFA) is carried out, errors are possible because fluorescent K-lines of "light" elements and L-lines of some "heavy" elements can overlap in the energy domain. With certain contents of these elements and insufficient resolution of the spectrometer, this leads to considerable errors of determination. An example is the overlapping of a large number of uranium (U) L-lines with Rb, Nb and Mo K-lines. In this paper a procedure is suggested to correct for such overlapping. It was tested on uranium-containing rock samples. These samples represent the oil-producing Bazhenov rock formation, which is characterized by organic matter accumulated in abundance and accompanied by "organophile" elements, including U. The procedure is based on scanning the energy of the initial exciting X-radiation. This may be regarded as advisable only in the XFA versions that use synchrotron radiation — SR-XFA. As a result of this investigation, geochemical characteristics of the Bazhenov formation rocks are demonstrated and the efficiency of the energy scanning procedure in determining Rb, Nb, Mo and U contents is revealed (using comparison with other methods). The energy scanning procedure also works in the presence of L-lines of some other heavy elements (Pb, Th, etc.) in the energy domain of the K-lines of As-Mo.
Sensor Analytics: Radioactive gas Concentration Estimation and Error Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale N.; Fagan, Deborah K.; Suarez, Reynold
2007-04-15
This paper develops the mathematical statistics of a radioactive gas quantity measurement and the associated error propagation. The probabilistic development is a different approach to deriving attenuation equations and offers easy extensions to more complex gas analysis components through simulation. The mathematical development assumes a sequential process of three components: (I) the collection of an environmental sample, (II) component gas extraction from the sample through the application of gas separation chemistry, and (III) the estimation of the radioactivity of the component gases.
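The sequential three-component structure lends itself to the simulation-based error propagation the abstract mentions; a minimal Monte Carlo sketch with hypothetical stage efficiencies (not the paper's attenuation equations):

```python
import random
random.seed(7)

def measured_activity():
    """Hypothetical three-stage chain: sample collection, chemical extraction,
    activity counting, each with an uncertain efficiency."""
    collection = random.gauss(0.90, 0.03)
    extraction = random.gauss(0.80, 0.05)
    counting = random.gauss(0.95, 0.02)
    return 100.0 * collection * extraction * counting  # true activity = 100 units

samples = [measured_activity() for _ in range(50000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
print(round(mean, 1), round(var ** 0.5, 1))  # mean near 68.4 = 100*0.90*0.80*0.95
```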
NASA Astrophysics Data System (ADS)
Pichardo, Samuel; Moreno-Hernández, Carlos; Drainville, Robert Andrew; Sin, Vivian; Curiel, Laura; Hynynen, Kullervo
2017-09-01
A better understanding of ultrasound transmission through the human skull is fundamental to developing optimal imaging and therapeutic applications. In this study, we present global attenuation values and functions that correlate apparent density calculated from computed tomography scans to shear speed of sound. For this purpose, we used a model for sound propagation based on the viscoelastic wave equation (VWE) assuming isotropic conditions. The model was validated using a series of measurements with plates of different plastic materials and angles of incidence of 0°, 15° and 50°. The optimal functions for transcranial ultrasound propagation were established using the VWE, scan measurements of transcranial propagation with an angle of incidence of 40° and a genetic optimization algorithm. Ten (10) locations over three (3) skulls were used for ultrasound frequencies of 270 kHz and 836 kHz. Results with plastic materials demonstrated that the viscoelastic modeling predicted both longitudinal and shear propagation with an average (±s.d.) error of 9(±7)% of the wavelength in the predicted delay and an error of 6.7(±5)% in the estimation of transmitted power. Using the new optimal functions of speed of sound and global attenuation for the human skull, the proposed model predicted the transcranial ultrasound transmission for a frequency of 270 kHz with an expected error in the predicted delay of 5(±2.7)% of the wavelength. The model predicted the sound propagation accurately regardless of whether shear or longitudinal transmission dominated. For 836 kHz, the model predicted the delay with an average error of 17(±16)% of the wavelength. Results indicated the importance of the specificity of the information at a voxel level to better understand ultrasound transmission through the skull.
These results and new model will be very valuable tools for the future development of transcranial applications of ultrasound therapy and imaging.
NASA Technical Reports Server (NTRS)
Mallinckrodt, A. J.
1977-01-01
Data from an extensive array of collocated instrumentation at the Wallops Island test facility were intercompared in order to (1) determine the practical achievable accuracy limitations of various tropospheric and ionospheric correction techniques; (2) examine the theoretical bases and derivation of improved refraction correction techniques; and (3) estimate internal systematic and random error levels of the various tracking stations. The GEOS 2 satellite was used as the target vehicle. Data were obtained regarding the ionospheric and tropospheric propagation errors, the theoretical and data analysis of which was documented in some 30 separate reports over the last 6 years. An overview of project results is presented.
Continued investigation of potential application of Omega navigation to civil aviation
NASA Technical Reports Server (NTRS)
Baxa, E. G., Jr.
1978-01-01
Major attention is given to an analysis of receiver repeatability in measuring OMEGA phase data. Repeatability is defined as the ability of two like receivers which are co-located to achieve the same LOP phase readings. Specific data analysis is presented. A propagation model is described which has been used in the analysis of propagation anomalies. Composite OMEGA analysis is presented in terms of carrier phase correlation analysis and the determination of carrier phase weighting coefficients for minimizing composite phase variation. Differential OMEGA error analysis is presented for receiver separations. Three-frequency analysis includes LOP error and position error based on three and four OMEGA transmissions. Results of phase-amplitude correlation studies are presented.
Frequency-domain Green's functions for radar waves in heterogeneous 2.5D media
Ellefsen, K.J.; Croize, D.; Mazzella, A.T.; McKenna, J.R.
2009-01-01
Green's functions for radar waves propagating in heterogeneous 2.5D media might be calculated in the frequency domain using a hybrid method. The model is defined in the Cartesian coordinate system, and its electromagnetic properties might vary in the x- and z-directions, but not in the y-direction. Wave propagation in the x- and z-directions is simulated with the finite-difference method, and wave propagation in the y-direction is simulated with an analytic function. The absorbing boundaries on the finite-difference grid are perfectly matched layers that have been modified to make them compatible with the hybrid method. The accuracy of these numerical Green's functions is assessed by comparing them with independently calculated Green's functions. For a homogeneous model, the magnitude errors range from -4.16% through 0.44%, and the phase errors range from -0.06% through 4.86%. For a layered model, the magnitude errors range from -2.60% through 2.06%, and the phase errors range from -0.49% through 2.73%. These numerical Green's functions might be used for forward modeling and full waveform inversion. © 2009 Society of Exploration Geophysicists. All rights reserved.
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Auger, Ludovic
2003-01-01
A suboptimal Kalman filter system which evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. This scheme projects the discretized covariance propagation equations and covariance matrix onto an orthogonal set of compactly supported wavelets. The wavelet representation is localized in both location and scale, which allows for efficient representation of the inherently anisotropic structure of the error covariances. The truncation is carried out in such a way that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days and using different degrees of truncation were carried out. These reduced the covariance size by 90, 97 and 99 % and the computational cost of covariance propagation by 80, 93 and 96 %, respectively. The differences in both the error covariance and the tracer field between the truncated and full systems over this period were found not to grow in the first case, and to grow relatively slowly in the latter two cases. The largest errors in the tracer fields were found to occur in regions of largest zonal gradients in the constituent field. These results indicate that propagation of error covariances for a global two-dimensional data assimilation system is currently feasible. Recommendations for further reduction in computational cost are made with the goal of extending this technique to three-dimensional global assimilation systems.
NASA Technical Reports Server (NTRS)
Greatorex, Scott (Editor); Beckman, Mark
1996-01-01
Several future, and some current missions, use an on-board computer (OBC) force model that is very limited. The OBC geopotential force model typically includes only the J(2), J(3), J(4), C(2,2) and S(2,2) terms to model non-spherical Earth gravitational effects. The Tropical Rainfall Measuring Mission (TRMM), Wide-field Infrared Explorer (WIRE), Transition Region and Coronal Explorer (TRACE), Submillimeter Wave Astronomy Satellite (SWAS), and X-ray Timing Explorer (XTE) all plan to use this geopotential force model on-board. The Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) is already flying this geopotential force model. Past analysis has shown that one of the leading sources of error in the OBC propagated ephemeris is the omission of the higher order geopotential terms. However, these same analyses have shown a wide range of accuracies for the OBC ephemerides. Analysis was performed using EUVE state vectors that showed the EUVE four day OBC propagated ephemerides varied in accuracy from 200 m to 45 km depending on the initial vector used to start the propagation. The vectors used in the study were from a single EUVE orbit at one minute intervals in the ephemeris. Since each vector propagated practically the same path as the others, the differences seen had to be due to differences in the initial state vector only. An algorithm was developed that will optimize the epoch of the uploaded state vector. Proper selection can reduce the previous errors of anywhere from 200 m to 45 km to generally less than 1 km over four days of propagation. This would enable flight projects to minimize state vector uploads to the spacecraft. Additionally, this method is superior to other methods in that no additional orbit estimates need be done. The definitive ephemeris generated on the ground can be used as long as the proper epoch is chosen.
This algorithm can be easily coded in software that would pick the epoch within a specified time range that would minimize the OBC propagation error. This technique should greatly improve the accuracy of the OBC propagation on-board future spacecraft such as TRMM, WIRE, SWAS, and XTE without increasing complexity in the ground processing.
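The epoch-selection idea — propagate each candidate state vector and keep the one whose trajectory best matches the definitive ephemeris — can be sketched with a toy propagator (the real OBC geopotential force model and EUVE ephemerides are not reproduced here):

```python
def propagate(state, steps):
    """Toy stand-in for the OBC propagator (not a geopotential model)."""
    x, v = state
    out = []
    for _ in range(steps):
        x, v = x + v, v * 0.999
        out.append(x)
    return out

definitive = propagate((0.0, 1.0), 100)  # plays the definitive ground ephemeris

# Candidate uploaded state vectors at different epochs (epoch 0 is exact).
candidates = {0: (0.0, 1.0), 1: (0.05, 1.0), 2: (0.0, 1.01)}

def rms_error(epoch):
    traj = propagate(candidates[epoch], 100)
    return (sum((a - b) ** 2 for a, b in zip(traj, definitive)) / 100) ** 0.5

best = min(candidates, key=rms_error)
print(best)  # 0: the epoch whose propagation best matches is selected
```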
Propagation of stage measurement uncertainties to streamflow time series
NASA Astrophysics Data System (ADS)
Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary
2016-04-01
Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric errors and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results contrast markedly depending on the site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are discussed in closing.
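A minimal Monte Carlo sketch of propagating systematic and non-systematic stage errors through a power-law rating curve; the curve parameters and error magnitudes are hypothetical, not values from the study sites:

```python
import random
random.seed(3)

def rating_curve(h, a=30.0, b=0.2, c=1.6):
    """Hypothetical power-law rating curve Q = a*(h-b)^c (h in m, Q in m^3/s)."""
    return a * max(h - b, 0.0) ** c

h_obs = 1.2          # observed stage
sigma_nonsys = 0.01  # instrument resolution / non-stationary waves
sigma_sys = 0.02     # gauge-calibration offset uncertainty

flows = sorted(
    rating_curve(h_obs + random.gauss(0.0, sigma_nonsys)
                 + random.gauss(0.0, sigma_sys))
    for _ in range(20000)
)
q_lo, q_hi = flows[500], flows[19500]  # ~95% streamflow interval
print(q_lo < rating_curve(h_obs) < q_hi)  # True: interval brackets the nominal flow
```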
Applying Metrological Techniques to Satellite Fundamental Climate Data Records
NASA Astrophysics Data System (ADS)
Woolliams, Emma R.; Mittaz, Jonathan PD; Merchant, Christopher J.; Hunt, Samuel E.; Harris, Peter M.
2018-02-01
Quantifying long-term environmental variability, including climatic trends, requires decadal-scale time series of observations. The reliability of such trend analysis depends on the long-term stability of the data record and on understanding the sources of uncertainty in historic, current and future sensors. We give a brief overview of how metrological techniques can be applied to historical satellite data sets. In particular, we discuss the implications of error correlation at different spatial and temporal scales and the forms of such correlation, and consider how uncertainty is propagated with partial correlation. We give a form of the Law of Propagation of Uncertainties that considers the propagation of uncertainties associated with common errors to give the covariance associated with Earth observations in different spectral channels.
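For two channels combined by simple addition, the Law of Propagation of Uncertainties with an error-correlation coefficient r reduces to u_y² = u1² + u2² + 2·r·u1·u2; a small numerical check with illustrative values:

```python
import math

def combined_uncertainty(u1, u2, r):
    """u_y for y = x1 + x2 with error correlation r between the channels."""
    return math.sqrt(u1 ** 2 + u2 ** 2 + 2.0 * r * u1 * u2)

print(round(combined_uncertainty(0.3, 0.4, 0.0), 3))  # 0.5: independent errors
print(round(combined_uncertainty(0.3, 0.4, 1.0), 3))  # 0.7: common errors add linearly
```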
Land mobile satellite propagation measurements in Japan using ETS-V satellite
NASA Technical Reports Server (NTRS)
Obara, Noriaki; Tanaka, Kenji; Yamamoto, Shin-Ichi; Wakana, Hiromitsu
1993-01-01
Propagation characteristics of land mobile satellite communications channels have been investigated actively in recent years. Information of propagation characteristics associated with multipath fading and shadowing is required to design commercial land mobile satellite communications systems, including protocol and error correction method. CRL (Communications Research Laboratory) has carried out propagation measurements using the Engineering Test Satellite-V (ETS-V) at L band (1.5 GHz) through main roads in Japan by a medium gain antenna with an autotracking capability. This paper presents the propagation statistics obtained in this campaign.
Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors
NASA Technical Reports Server (NTRS)
Boussalis, Dhemetrios; Bayard, David S.
2013-01-01
G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the desired fast turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run. This is in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, masscons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams).
G-CAT is a standalone MATLAB- based tool intended to run on any engineer's desktop computer.
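The core of such a covariance analysis is propagating statistics rather than trajectories: one linearized step is P_next = F·P·Fᵀ + Q. A two-state (position, velocity) sketch of that single step, not G-CAT's 120-state formulation:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition matrix
Q = np.diag([0.0, 1e-4])               # process noise on velocity
P = np.diag([1.0, 0.01])               # initial knowledge covariance

for _ in range(100):
    P = F @ P @ F.T + Q                # linearized covariance propagation

print(bool(P[0, 0] > 1.0))  # True: position uncertainty grows without measurements
```

A Monte Carlo analysis would instead simulate thousands of perturbed trajectories and estimate this covariance empirically; the single propagation above yields the envelope in one run.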
Fourth-order self-energy contribution to the Lamb shift
NASA Astrophysics Data System (ADS)
Mallampalli, S.; Sapirstein, J.
1998-03-01
Two-loop self-energy contributions to the fourth-order Lamb shift of ground-state hydrogenic ions are treated to all orders in Zα by using exact Dirac-Coulomb propagators. A rearrangement of the calculation into four ultraviolet finite parts, the M, P, F, and perturbed orbital (PO) terms, is made. Reference-state singularities present in the M and P terms are shown to cancel. The most computationally intensive part of the calculation, the M term, is evaluated for hydrogenlike uranium and bismuth, the F term is evaluated for a range of Z values, but the P term is left for a future calculation. For hydrogenlike uranium, previous calculations of the PO term give -0.971 eV: the contributions from the M and F terms calculated here sum to -0.325 eV.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Kleeck, M.; Chemical Sciences and Engineering Division, Argonne National Laboratory, Argonne, IL 60439; Willit, J.
A monolithic uranium molybdenum alloy clad in zirconium has been proposed as a low enriched uranium (LEU) fuel option for research and test reactors, as part of the Reduced Enrichment for Research and Test Reactors program. Scrap from the fuel's manufacture will contain a significant portion of recoverable LEU. Pyroprocessing has been identified as an option to perform this recovery. A model of a pyroprocessing recovery procedure has been developed to assist in refining the LEU recovery process and designing the facility. Corrosion theory and a two-mechanism transport model were implemented on a MATLAB platform to perform the modeling. In developing this model, improved anodic behavior prediction became necessary since a dense uranium-rich salt film was observed at the anode surface during electrorefining experiments. Experiments were conducted on uranium metal to determine the film's character and the conditions under which it forms. The electrorefiner salt used in all the experiments was eutectic LiCl/KCl containing UCl3. The anodic film material was analyzed with ICP-OES to determine its composition. Both cyclic voltammetry and potentiodynamic scans were conducted at operating temperatures between 475 and 575 degrees C to interrogate the electrochemical behavior of the uranium. The results show that an anodic film was produced on the uranium electrode. The film initially passivated the surface of the uranium on the working electrode. At high overpotentials, after a trans-passive region, the current observed was nearly equal to the current observed at the initial active level. Analytical results support the presence of K2UCl6 at the uranium surface, within the error of the analytical method.
Error analysis in stereo vision for location measurement of 3D point
NASA Astrophysics Data System (ADS)
Li, Yunting; Zhang, Jun; Tian, Jinwen
2015-12-01
Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two fields of view of a pixel, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by rotation transformation. Then we use the fast midpoint-method algorithm to derive the mathematical relationships between the target point and the parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we trace the error propagation of the primitive input errors in the stereo system throughout the whole analysis process, from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our methods.
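For intuition on how a pixel-level error propagates into 3D location, the rectified-stereo depth case admits a closed first-order form: Z = fB/d gives σ_Z = (Z²/fB)·σ_d. This is a simplified special case for illustration, not the paper's five-parameter model.

```python
def depth_and_sigma(f_px, baseline_m, disparity_px, sigma_d_px):
    """Depth Z = f*B/d and its first-order uncertainty |dZ/dd| * sigma_d."""
    z = f_px * baseline_m / disparity_px
    sigma_z = (z ** 2) / (f_px * baseline_m) * sigma_d_px
    return z, sigma_z

z, s = depth_and_sigma(1000.0, 0.1, 20.0, 0.5)
print(z, round(s, 3))  # 5.0 0.125: depth error grows with the square of depth
```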
An advanced SEU tolerant latch based on error detection
NASA Astrophysics Data System (ADS)
Xu, Hui; Zhu, Jianwei; Lu, Xiaoping; Li, Jingzhao
2018-05-01
This paper proposes a latch that can mitigate SEUs via an error detection circuit. The error detection circuit is hardened by a C-element and a stacked PMOS. In the hold state, a particle strike on the latch or the error detection circuit may cause a faulty logic state. The error detection circuit can detect the upset node in the latch, and the faulty output will be corrected. An upset node in the error detection circuit itself can be corrected by the C-element. The power dissipation and propagation delay of the proposed latch are analyzed by HSPICE simulations. The proposed latch consumes about 77.5% less energy and has 33.1% less propagation delay than the triple modular redundancy (TMR) latch. Simulation results demonstrate that the proposed latch can mitigate SEUs effectively. Project supported by the National Natural Science Foundation of China (Nos. 61404001, 61306046), the Anhui Province University Natural Science Research Major Project (No. KJ2014ZD12), the Huainan Science and Technology Program (No. 2013A4011), and the National Natural Science Foundation of China (No. 61371025).
NASA Technical Reports Server (NTRS)
James, R.; Brownlow, J. D.
1985-01-01
A study is performed under NASA contract to evaluate data from an AN/FPS-16 radar installed for support of flight programs at Dryden Flight Research Facility of NASA Ames Research Center. The purpose of this study is to provide information necessary for improving post-flight data reduction and knowledge of accuracy of derived radar quantities. Tracking data from six flights are analyzed. Noise and bias errors in raw tracking data are determined for each of the flights. A discussion of an altitude bias error during all of the tracking missions is included. This bias error is defined by utilizing pressure altitude measurements made during survey flights. Four separate filtering methods, representative of the most widely used optimal estimation techniques for enhancement of radar tracking data, are analyzed for suitability in processing both real-time and post-mission data. Additional information regarding the radar and its measurements, including typical noise and bias errors in the range and angle measurements, is also presented. This report is in two parts. This is part 2, a discussion of the modeling of propagation path errors.
Using special functions to model the propagation of airborne diseases
NASA Astrophysics Data System (ADS)
Bolaños, Daniela
2014-06-01
Some special functions of mathematical physics are used to obtain a mathematical model of the propagation of airborne diseases. In particular, we study the propagation of tuberculosis in closed rooms and model the propagation using the error function and the Bessel function. In the model, infected individuals emit pathogens to the environment, and these infect other individuals who absorb them. The evolution in time of the concentration of pathogens in the environment is computed in terms of error functions. The evolution in time of the number of susceptible individuals is expressed by a differential equation that contains the error function, and it is solved numerically for different parametric simulations. The evolution in time of the number of infected individuals is plotted for each numerical simulation. On the other hand, the spatial distribution of the pathogen around the source of infection is represented by the Bessel function K0. The spatial and temporal distribution of the number of infected individuals is computed and plotted for some numerical simulations. All computations were made using computer algebra software, specifically Maple. It is expected that the analytical results obtained will allow the design of treatment rooms and ventilation systems that reduce the risk of spread of tuberculosis.
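A toy version of the model described above: an error-function concentration build-up drives a simple susceptible-decay ODE integrated by forward Euler. All parameter values are illustrative assumptions, not the paper's fitted values.

```python
import math

def concentration(t, c_inf=1.0, tau=10.0):
    """Pathogen concentration builds up as an error-function profile."""
    return c_inf * math.erf(t / tau) if t > 0 else 0.0

# Forward-Euler integration of dS/dt = -k * C(t) * S for the susceptibles.
S, k, dt = 100.0, 0.05, 0.01
for step in range(int(60 / dt)):
    S -= k * concentration(step * dt) * S * dt

print(0.0 < S < 100.0)  # True: some, but not all, individuals become infected
```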
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Sirui, E-mail: siruitan@hotmail.com; Huang, Lianjie, E-mail: ljh@lanl.gov
For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial, but not temporal, dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling to control dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than the optimized schemes using the standard stencil to achieve similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent to achieve similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
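For context, the phase-velocity error being minimized can be computed exactly for the standard second-order scheme in 1D, where the discrete dispersion relation is sin(ω·Δt/2) = C·sin(k·h/2) with Courant number C = c·Δt/h. The optimized schemes in the paper reduce this error further, which is not shown here.

```python
import math

def phase_velocity_ratio(points_per_wavelength, courant):
    """Numerical/true phase-velocity ratio of the standard 2nd-order scheme
    in 1D, from sin(w*dt/2) = C*sin(k*h/2)."""
    kh = 2.0 * math.pi / points_per_wavelength
    omega_dt = 2.0 * math.asin(courant * math.sin(kh / 2.0))
    return omega_dt / (courant * kh)

print(round(phase_velocity_ratio(4, 0.5), 3))   # coarse sampling: noticeable dispersion
print(round(phase_velocity_ratio(16, 0.5), 4))  # finer sampling: ratio approaches 1
```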
Modified Redundancy based Technique—a New Approach to Combat Error Propagation Effect of AES
NASA Astrophysics Data System (ADS)
Sarkar, B.; Bhunia, C. T.; Maulik, U.
2012-06-01
The advanced encryption standard (AES) is a subject of great research interest. It was developed to replace the data encryption standard (DES), but it suffers from a major limitation: the error propagation effect. Two methods are available to tackle this limitation: the redundancy-based technique and the bit-based parity technique. The first has a significant advantage over the second in that it can definitively correct any error, but at the cost of higher overhead and hence lower processing speed. In this paper, a new approach based on the redundancy-based technique is proposed that speeds up reliable encryption and hence secured communication.
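The abstract does not specify the redundancy scheme; a minimal sketch of one generic variant (bytewise majority voting over three transmitted copies of a ciphertext block, purely an assumption for illustration) shows how an error confined to a single copy is corrected before decryption, so it never propagates through the cipher:

```python
def majority_vote(copies):
    """Bytewise majority vote over an odd number of redundant copies of a
    ciphertext block; any error confined to a single copy is corrected."""
    out = bytearray()
    for column in zip(*copies):
        out.append(max(set(column), key=column.count))
    return bytes(out)

block = bytes([0x3A, 0x7F, 0x00, 0xC2])        # stand-in for an AES ciphertext block
copies = [bytearray(block) for _ in range(3)]  # transmit three redundant copies
copies[1][2] ^= 0xFF                           # a channel error corrupts one copy
recovered = majority_vote(copies)              # equals the original block
```

The overhead (here 3x bandwidth) is exactly the cost that the paper's proposed approach aims to reduce while keeping the correction guarantee.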
Measurement of the Total Cross Section of Uranium-Uranium Collisions at √(s_NN) = 192.8 GeV
NASA Astrophysics Data System (ADS)
Baltz, A. J.; Fischer, W.; Blaskiewicz, M.; Gassner, D.; Drees, K. A.; Luo, Y.; Minty, M.; Thieberger, P.; Wilinski, M.; Pshenichnov, I. A.
2014-03-01
The total cross section of Uranium-Uranium collisions at √(s_NN) = 192.8 GeV has been measured to be 515 ± 13 (stat) ± 22 (sys) barn, which agrees with the calculated theoretical value of 487.3 barn within experimental error. That this total cross section is more than an order of magnitude larger than the geometric ion-ion cross section is primarily due to Bound-Free Pair Production (BFPP) and Electro-Magnetic Dissociation (EMD). Nearly all beam losses were due to geometric, BFPP and EMD collisions. This allowed the determination of the total cross section from the measured beam loss rates and luminosity. The beam loss rate is calculated from a time-dependent measurement of the total beam intensity. The luminosity is measured via the detection of neutron pairs in time-coincidence in the Zero Degree Calorimeters. Apart from a general interest in verifying the calculations experimentally, an accurate prediction of the losses created in the heavy ion collisions is of practical interest for the LHC, where collision products have the potential to quench cryogenically cooled magnets.
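The determination of the total cross section from the measured loss rate and luminosity, σ = R/L, with the statistical uncertainties propagated in quadrature, can be sketched as follows. The numerical inputs are illustrative placeholders chosen only to land on the quoted 515-barn scale, not the actual RHIC loss rates or luminosities:

```python
import math

def cross_section(loss_rate, lumi, d_loss, d_lumi):
    """sigma = R / L; relative uncertainties combined in quadrature."""
    sigma = loss_rate / lumi
    d_sigma = sigma * math.sqrt((d_loss / loss_rate) ** 2 + (d_lumi / lumi) ** 2)
    return sigma, d_sigma

# Placeholder inputs: loss rate in ions/s, luminosity in 1/(barn*s).
sigma, d_sigma = cross_section(loss_rate=0.515, lumi=1.0e-3,
                               d_loss=0.013, d_lumi=4.0e-5)
```

In practice the loss rate comes from differentiating the measured beam intensity in time, and systematic uncertainties (e.g., in the ZDC luminosity calibration) are tracked separately from the statistical ones combined here.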
Corrigendum and addendum. Modeling weakly nonlinear acoustic wave propagation
Christov, Ivan; Christov, C. I.; Jordan, P. M.
2014-12-18
This article presents errors, corrections, and additions to the research outlined in the following citation: Christov, I., Christov, C. I., & Jordan, P. M. (2007). Modeling weakly nonlinear acoustic wave propagation. The Quarterly Journal of Mechanics and Applied Mathematics, 60(4), 473-495.
Consistency and convergence for numerical radiation conditions
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1990-01-01
The problem of imposing radiation conditions at artificial boundaries for the numerical simulation of wave propagation is considered. Emphasis is on the behavior and analysis of the error which results from the restriction of the domain. The theory of error estimation is briefly outlined for boundary conditions. Use is made of the asymptotic analysis of propagating wave groups to derive and analyze boundary operators. For dissipative problems this leads to local, accurate conditions, but falls short in the hyperbolic case. A numerical experiment on the solution of the wave equation with cylindrical symmetry is described. A unified presentation of a number of conditions which have been proposed in the literature is given and the time dependence of the error which results from their use is displayed. The results are in qualitative agreement with theoretical considerations. It was found, however, that for this model problem it is particularly difficult to force the error to decay rapidly in time.
Accounting for uncertainty in DNA sequencing data.
O'Rawe, Jason A; Ferson, Scott; Lyon, Gholson J
2015-02-01
Science is defined in part by an honest exposition of the uncertainties that arise in measurements and propagate through calculations and inferences, so that the reliabilities of its conclusions are made apparent. The recent rapid development of high-throughput DNA sequencing technologies has dramatically increased the number of measurements made at the biochemical and molecular level. These data come from many different DNA-sequencing technologies, each with their own platform-specific errors and biases, which vary widely. Several statistical studies have tried to measure error rates for basic determinations, but there are no general schemes to project these uncertainties so as to assess the surety of the conclusions drawn about genetic, epigenetic, and more general biological questions. We review here the state of uncertainty quantification in DNA sequencing applications, describe sources of error, and propose methods that can be used for accounting and propagating these errors and their uncertainties through subsequent calculations. Copyright © 2014 Elsevier Ltd. All rights reserved.
An error analysis perspective for patient alignment systems.
Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann
2013-09-01
This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
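One component of such an error-propagation analysis, the amplification of a tracker's rotational error by the lever arm between tracker and probe, combined with its translational error by root-sum-of-squares, might be sketched as follows (all numbers are hypothetical, not values from the study):

```python
import math

def target_error(trans_err_mm, rot_err_deg, lever_arm_mm):
    """Combine a tracker's translational error with the translational effect
    of its rotational error at lever arm d (small-angle approximation
    d*theta), using a root-sum-of-squares of independent contributions."""
    rot_component = lever_arm_mm * math.radians(rot_err_deg)
    return math.sqrt(trans_err_mm ** 2 + rot_component ** 2)

# Hypothetical budget: 0.3 mm translational error, 0.2 deg rotational error,
# probe located 150 mm from the tracked reference.
e_probe = target_error(0.3, 0.2, 150.0)
```

The lever-arm term shows why the rotational error of the tracking system can dominate: at 150 mm, a 0.2-degree rotation already contributes more than the 0.3 mm translational error itself.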
Measurements of Aperture Averaging on Bit-Error-Rate
NASA Technical Reports Server (NTRS)
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.;
2005-01-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
Measurements of aperture averaging on bit-error-rate
NASA Astrophysics Data System (ADS)
Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert
2005-08-01
We report on measurements made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on Bit Error Rate. Simultaneous measurements were also made with a scintillometer instrument and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
Propagation of angular errors in two-axis rotation systems
NASA Astrophysics Data System (ADS)
Torrington, Geoffrey K.
2003-10-01
Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
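The weighted root-sum-of-squares budget described above can be sketched directly. The tolerances and sensitivity weights below are hypothetical (they are not the paper's tabulated factors), and the k=2 value simply doubles the k=1 value under a Gaussian assumption:

```python
import math

def error_budget(tolerances, weights):
    """Weighted root-sum-of-squares error budget:
    sigma_total = sqrt(sum_i (w_i * tol_i)^2)."""
    return math.sqrt(sum((w * t) ** 2 for w, t in zip(weights, tolerances)))

# Hypothetical mechanical tolerances (arcmin) and sensitivity weights: a fully
# correctable error source gets weight 0, an uncorrectable one weight 1, and a
# partially correctable one something in between.
total_k1 = error_budget([2.0, 1.0, 0.5], [1.0, 0.7, 0.0])
total_k2 = 2.0 * total_k1   # ~95% confidence under a Gaussian model
```

Zeroing out the fully correctable source before summing is exactly what distinguishes the correctable, partially correctable, and uncorrectable categories in the budget.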
Prediction of transmission distortion for wireless video communication: analysis.
Chen, Zhifeng; Wu, Dapeng
2012-03-01
Transmitting video over wireless is a challenging problem since video may be seriously distorted due to packet errors caused by wireless channels. The capability of predicting transmission distortion (i.e., video distortion caused by packet errors) can assist in designing video encoding and transmission schemes that achieve maximum video quality or minimum end-to-end video distortion. This paper is aimed at deriving formulas for predicting transmission distortion. The contribution of this paper is twofold. First, we identify the governing law that describes how the transmission distortion process evolves over time and analytically derive the transmission distortion formula as a closed-form function of video frame statistics, channel error statistics, and system parameters. Second, we identify, for the first time, two important properties of transmission distortion. The first property is that the clipping noise, which is produced by nonlinear clipping, causes decay of propagated error. The second property is that the correlation between motion-vector concealment error and propagated error is negative and has dominant impact on transmission distortion, compared with other correlations. Due to these two properties and elegant error/distortion decomposition, our formula provides not only more accurate prediction but also lower complexity than the existing methods.
The Drag-based Ensemble Model (DBEM) for Coronal Mass Ejection Propagation
NASA Astrophysics Data System (ADS)
Dumbović, Mateja; Čalogović, Jaša; Vršnak, Bojan; Temmer, Manuela; Mays, M. Leila; Veronig, Astrid; Piantschitsch, Isabell
2018-02-01
The drag-based model for heliospheric propagation of coronal mass ejections (CMEs) is a widely used analytical model that can predict CME arrival time and speed at a given heliospheric location. It is based on the assumption that the propagation of CMEs in interplanetary space is solely under the influence of magnetohydrodynamical drag, where CME propagation is determined based on CME initial properties as well as the properties of the ambient solar wind. We present an upgraded version, the drag-based ensemble model (DBEM), that performs ensemble modeling to produce a distribution of possible ICME arrival times and speeds. Multiple runs using uncertainty ranges for the input values can be performed in almost real-time, within a few minutes. This allows us to define the most likely ICME arrival times and speeds, quantify prediction uncertainties, and determine forecast confidence. The performance of the DBEM is evaluated and compared to that of the ensemble WSA-ENLIL+Cone model (ENLIL) using the same sample of events. It is found that the mean error is ME = ‑9.7 hr, mean absolute error MAE = 14.3 hr, and root mean square error RMSE = 16.7 hr, which is somewhat higher than, but comparable to, the ENLIL errors (ME = ‑6.1 hr, MAE = 12.8 hr and RMSE = 14.4 hr). Overall, DBEM and ENLIL show a similar performance. Furthermore, we find that in both models fast CMEs are predicted to arrive earlier than observed, most likely owing to the physical limitations of models, but possibly also related to an overestimation of the CME initial speed for fast CMEs.
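The skill metrics quoted above are the standard ones; a minimal sketch of how ME, MAE, and RMSE are computed from predicted and observed arrival times (the sample values are hypothetical, not the paper's event list):

```python
import math

def forecast_errors(predicted_hr, observed_hr):
    """ME, MAE, and RMSE of predicted vs. observed arrival times (hours).
    A negative ME means arrivals are predicted earlier than observed."""
    d = [p - o for p, o in zip(predicted_hr, observed_hr)]
    me = sum(d) / len(d)
    mae = sum(abs(x) for x in d) / len(d)
    rmse = math.sqrt(sum(x * x for x in d) / len(d))
    return me, mae, rmse

# Hypothetical transit times (hours) for four CMEs.
me, mae, rmse = forecast_errors([50, 70, 38, 90], [60, 75, 45, 88])
```

A negative ME with MAE well above |ME|, as reported for both DBEM and ENLIL, indicates an early-arrival bias superposed on substantial event-to-event scatter.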
Method for validating cloud mask obtained from satellite measurements using ground-based sky camera.
Letu, Husi; Nagao, Takashi M; Nakajima, Takashi Y; Matsumae, Yoshiaki
2014-11-01
Error propagation in Earth's atmospheric, oceanic, and land surface parameters of the satellite products caused by misclassification of the cloud mask is a critical issue for improving the accuracy of satellite products. Thus, characterizing the accuracy of the cloud mask is important for investigating the influence of the cloud mask on satellite products. In this study, we propose a method for validating cloud masks derived from multiwavelength satellite data using ground-based sky camera (GSC) data. First, a cloud cover algorithm for GSC data was developed using a sky index and a brightness index. Then, cloud masks derived from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data by two cloud-screening algorithms (i.e., MOD35 and CLAUDIA) were validated using the GSC cloud mask. The results indicate that MOD35 is likely to classify ambiguous pixels as "cloudy," whereas CLAUDIA is likely to classify them as "clear." Furthermore, the influence of error propagation caused by misclassification of the MOD35 and CLAUDIA cloud masks on MODIS-derived reflectance, brightness temperature, and normalized difference vegetation index (NDVI) in clear and cloudy pixels was investigated using sky camera data. The results show that the influence of the error propagation by the MOD35 cloud mask on the MODIS-derived monthly mean reflectance, brightness temperature, and NDVI for clear pixels is significantly smaller than for the CLAUDIA cloud mask; the influence of the error propagation by the CLAUDIA cloud mask on MODIS-derived monthly mean cloud products for cloudy pixels is significantly smaller than that by the MOD35 cloud mask.
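At its core, validating a satellite cloud mask against a ground-based one reduces to pixel-wise agreement counting over co-located observations. A toy sketch (the masks below are hypothetical, and the real study works with imagery rather than flat boolean lists):

```python
def mask_agreement(reference, test):
    """Pixel-wise agreement between a ground-based reference cloud mask and a
    satellite-derived test mask (True = cloudy)."""
    pairs = list(zip(reference, test))
    tp = sum(1 for r, t in pairs if r and t)
    tn = sum(1 for r, t in pairs if not r and not t)
    fp = sum(1 for r, t in pairs if not r and t)   # over-calls cloud ("cloudy" bias)
    fn = sum(1 for r, t in pairs if r and not t)   # misses cloud ("clear" bias)
    return {"tp": tp, "tn": tn, "fp": fp, "fn": fn,
            "accuracy": (tp + tn) / len(pairs)}

# Toy co-located pixels: a conservative screen (MOD35-like behavior on
# ambiguous pixels) shows up as extra false positives.
ground = [True, True, False, False, False, True]
satellite = [True, True, True, False, False, False]
stats = mask_agreement(ground, satellite)
```

The fp/fn asymmetry is exactly what distinguishes the two screening algorithms in the abstract: a "cloudy"-biased screen inflates fp, a "clear"-biased one inflates fn, and each bias propagates differently into clear-sky and cloudy products.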
Concurrent remote entanglement with quantum error correction against photon losses
NASA Astrophysics Data System (ADS)
Roy, Ananda; Stone, A. Douglas; Jiang, Liang
2016-09-01
Remote entanglement of distant, noninteracting quantum entities is a key primitive for quantum information processing. We present a protocol to remotely entangle two stationary qubits by first entangling them with propagating ancilla qubits and then performing a joint two-qubit measurement on the ancillas. Subsequently, single-qubit measurements are performed on each of the ancillas. We describe two continuous variable implementations of the protocol using propagating microwave modes. The first implementation uses propagating Schrödinger cat states as the flying ancilla qubits, a joint-photon-number-modulo-2 measurement of the propagating modes for the two-qubit measurement, and homodyne detections as the final single-qubit measurements. The presence of inefficiencies in realistic quantum systems limits the success rate of generating high-fidelity Bell states. This motivates us to propose a second continuous variable implementation, where we use quantum error correction to suppress the decoherence due to photon loss to first order. To that end, we encode the ancilla qubits in superpositions of Schrödinger cat states of a given photon-number parity, use a joint-photon-number-modulo-4 measurement as the two-qubit measurement, and homodyne detections as the final single-qubit measurements. We demonstrate the resilience of our quantum-error-correcting remote entanglement scheme to imperfections. Further, we describe a modification of our error-correcting scheme by incorporating additional individual photon-number-modulo-2 measurements of the ancilla modes to improve the success rate of generating high-fidelity Bell states. Our protocols can be straightforwardly implemented in state-of-the-art superconducting circuit-QED systems.
Yuan, Shen-fang; Jin, Xin; Qiu, Lei; Huang, Hong-mei
2015-03-01
To improve the safety of repaired aircraft structures, a method for monitoring crack propagation in repaired structures is put forward in this article, based on characteristics of Fiber Bragg Grating (FBG) reflected spectra. Under cyclic loading, cracks in the repaired structure propagate, and a non-uniform strain field appears near the crack tip, which deforms the FBG sensors' reflected spectra. Crack propagation can be monitored by extracting the characteristics of these spectral deformations. A finite element model (FEM) of the specimen is established, and the strain distributions produced by cracks of different angles and lengths are obtained. Characteristics such as the main peak wavelength shift, the area of the reflected spectrum, and the second and third peak values are extracted from the FBG reflected spectra, which are calculated by the transfer-matrix algorithm. An artificial neural network is built to model the relationship between the spectral characteristics and the crack propagation. As a result, the crack propagation of repaired structures is monitored accurately, with a crack-length error of less than 0.5 mm and a crack-angle error of less than 5 degrees. This method solves the problem of accurately monitoring crack propagation in repaired structures and is significant for improving aircraft safety and reducing maintenance costs.
NASA Astrophysics Data System (ADS)
Dangelmayr, Martin A.; Reimus, Paul W.; Johnson, Raymond H.; Clay, James T.; Stone, James J.
2018-06-01
This research assesses the ability of a generalized composite surface complexation model (GC SCM) to simulate uranium transport under the variable geochemical conditions typically encountered at uranium in-situ recovery (ISR) sites. Sediment was taken from a monitoring well at the Smith-Ranch Highland (SRH) site at depths of 192 and 193 m below ground and characterized by XRD, XRF, TOC, and BET. Duplicate column studies on the different sediment depths were flushed with synthesized restoration waters at two different alkalinities (160 mg/l CaCO3 and 360 mg/l CaCO3) to study the effect of alkalinity on uranium mobility. Uranium breakthrough occurred 25%-30% earlier in columns with 360 mg/l CaCO3 than in columns fed with 160 mg/l CaCO3 influent water. A parameter estimation program (PEST) was coupled to PHREEQC to derive site densities from experimental data. Significant parameter fittings were produced for all models, demonstrating that the GC SCM approach can model the impact of carbonate on uranium in flow systems. Derived site densities for the two sediment depths were between 141 and 178 μmol-sites/kg-soil, demonstrating similar sorption capacities despite heterogeneity in sediment mineralogy. Model sensitivity to alkalinity and pH was shown to be moderate compared to fitted site densities when calcite saturation was allowed to equilibrate. Calcite kinetics emerged as a potential source of error when fitting parameters under flow conditions. Fitted results were compared to data from previous batch and column studies completed on sediments from the SRH site to assess variability in derived parameters. Parameters from batch experiments were lower by a factor of 1.1 to 3.4 compared to column studies completed on the same sediments. The difference was attributed to errors in solid-solution ratios and the impact of calcite dissolution in batch experiments.
Column studies conducted at two different laboratories showed almost an order of magnitude difference in fitted site densities suggesting that experimental methodology may play a bigger role in column sorption behavior than actual sediment heterogeneity. Our results demonstrate the necessity for ISR sites to remove residual pCO2 and equilibrate restoration water with background geochemistry to reduce uranium mobility. In addition, the observed variability between fitted parameters on the same sediments highlights the need to provide standardized guidelines and methodology for regulators and industry when the GC SCM approach is used for ISR risk assessments.
NASA Astrophysics Data System (ADS)
Lechtenberg, Travis; McLaughlin, Craig A.; Locke, Travis; Krishna, Dhaval Mysore
2013-01-01
This paper examines atmospheric density estimated using precision orbit ephemerides (POE) from the CHAMP and GRACE satellites during short periods of greater atmospheric density variability. The results of the calibration of CHAMP densities derived using POEs with those derived using accelerometers are examined for three types of density perturbations (traveling atmospheric disturbances (TADs), geomagnetic cusp phenomena, and midnight density maxima) in order to determine the temporal resolution of POE solutions. In addition, the densities are compared to High-Accuracy Satellite Drag Model (HASDM) densities to compare temporal resolution for both types of corrections. The resolution of these models of thermospheric density was found to be inadequate to sufficiently characterize the short-term density variations examined here. This paper also examines the effect of differing density estimation schemes by propagating an initial orbit state forward in time and examining the induced errors. The propagated POE-derived densities incurred errors of smaller magnitude than the empirical models, and errors on the same scale as or better than those incurred using the HASDM model.
NASA Astrophysics Data System (ADS)
Tissot, François L. H.; Dauphas, Nicolas
2015-10-01
The 238U/235U isotopic composition of uranium in seawater can provide important insights into the modern U budget of the oceans. Using the double spike technique and a new data reduction method, we analyzed an array of seawater samples and 41 geostandards covering a broad range of geological settings relevant to low and high temperature geochemistry. Analyses of 18 seawater samples from geographically diverse sites from the Atlantic and Pacific oceans, Mediterranean Sea, Gulf of Mexico, Persian Gulf, and English Channel, together with literature data (n = 17), yield a δ238U value for modern seawater of -0.392 ± 0.005‰ relative to CRM-112a. Measurements of the uranium isotopic compositions of river water, lake water, evaporites, modern coral, shales, and various igneous rocks (n = 64), together with compilations of literature data (n = 380), allow us to estimate the uranium isotopic compositions of the various reservoirs involved in the modern oceanic uranium budget, as well as the fractionation factors associated with U incorporation into those reservoirs. Because the incorporation of U into anoxic/euxinic sediments is accompanied by large isotopic fractionation (ΔAnoxic/Euxinic-SW = +0.6‰), the size of the anoxic/euxinic sink strongly influences the δ238U value of seawater. Keeping all other fluxes constant, the flux of uranium in the anoxic/euxinic sink is constrained to be 7.0 ± 3.1 Mmol/yr (or 14 ± 3% of the total flux out of the ocean). This translates into an areal extent of anoxia into the modern ocean of 0.21 ± 0.09% of the total seafloor. This agrees with independent estimates and rules out a recent uranium budget estimate by Henderson and Anderson (2003). Using the mass fractions and isotopic compositions of various rock types in Earth's crust, we further calculate an average δ238U isotopic composition for the continental crust of -0.29 ± 0.03‰ corresponding to a 238U/235U isotopic ratio of 137.797 ± 0.005. 
We discuss the implications of the variability of the 238U/235U ratio on Pb-Pb and U-Pb ages and provide analytical formulas to calculate age corrections as a function of the age and isotopic composition of the sample. The crustal ratio may be used in calculation of Pb-Pb and U-Pb ages of continental crust rocks and minerals when the U isotopic composition is unknown. In cosmochemistry, the search for 247Cm (t1/2 = 15.6 Myr), an extinct short-lived radionuclide that decays into 235U, is important for understanding how r-process nuclides were synthesized in stars and learning about the astrophysical context of solar system formation (Chen and Wasserburg, 1981; Wasserburg et al., 1996; Nittler and Dauphas, 2006; Brennecka et al., 2010b; Tissot et al., 2015). In both terrestrial and extraterrestrial samples, variations in the 238U/235U ratio affect Pb-Pb ages (and depending on the analytical protocols, U-Pb ages). Therefore, samples dated by these techniques need to have their U isotopic compositions measured (Stirling et al., 2005, 2006; Weyer et al., 2008; Amelin et al., 2010; Brennecka et al., 2010b; Brennecka and Wadhwa, 2012; Connelly et al., 2012; Goldmann et al., 2015) or uncertainties on the U isotopic composition should be propagated into age calculations. In low temperature aqueous geochemistry, U isotopic fractionation between U4+ and U6+ (driven in part by nuclear field shift effects; Bigeleisen, 1996; Schauble, 2007; Abe et al., 2008), makes U isotopes potential tracers of paleoredox conditions (Montoya-Pino et al., 2010; Brennecka et al., 2011a; Kendall et al., 2013, 2015; Asael et al., 2013; Andersen et al., 2014; Dahl et al., 2014; Goto et al., 2014; Noordmann et al., 2015). The present paper aims at constraining some aspects of the global budget of uranium in the modern oceans using 238U/235U isotope variations, which involves characterizing the U isotopic composition of seawater and several reservoirs involved in the uranium oceanic budget. 
Uranium can exist in two oxidation states in terrestrial surface environments: U4+ is insoluble in seawater while U6+ is soluble (Langmuir, 1978). The contrasting behaviors of the two oxidation states of uranium explain why the disappearance of detrital uraninite after the Archean marks the rise of oxygen in Earth's atmosphere/hydrosphere (Ramdohr, 1958; Rasmussen and Buick, 1999; Frimmel, 2005). More recently, significant effort has focused on using U isotopes to constrain the past extents of anoxic/euxinic vs. oxic or suboxic sediments in modern and ancient oceans (Montoya-Pino et al., 2010; Brennecka et al., 2011a; Asael et al., 2013; Kendall et al., 2013, 2015; Andersen et al., 2014; Dahl et al., 2014; Goto et al., 2014; Noordmann et al., 2015). A virtue of this system is that it can potentially reflect the global redox state of Earth's oceans. At the same time, several difficulties have been encountered in applying U isotopes as paleo-redox indicators. For example, detrital contributions can blur the authigenic signal and have to be corrected for (Asael et al., 2013; Andersen et al., 2014; Noordmann et al., 2015), uranium isotopes can be affected by diagenesis and exchange with porewater (Romaniello et al., 2013; Andersen et al., 2014), and the exact isotopic fractionation factors relevant to various conditions of deposition are uncertain. While significant progress has already been made to address these difficulties (Asael et al., 2013; Romaniello et al., 2013; Andersen et al., 2014; Noordmann et al., 2015), this system and others are missing some of the groundwork studies on modern environments that are needed to gain trust in their applications to ancient sediments. In the modern ocean, water-soluble uranium behaves conservatively (i.e., U concentration correlates linearly with water salinity; Ku et al., 1977; Owens et al., 2011) and has a long residence time of ∼400 kyr (Ku et al., 1977).
The ocean is therefore a large repository of uranium, exceeding the total inventory of land-based deposits (Lu, 2014). The riverine input (40-46 Mmol/yr) is balanced by several sinks, including suboxic sediments, anoxic/euxinic sediments, carbonates, altered oceanic crust, salt marshes and Fe-Mn nodules. Barnes and Cochran (1990), Morford and Emerson (1999), Dunk et al. (2002), and Henderson and Anderson (2003) each proposed estimates for the oceanic uranium budget that differ substantially in the fluxes that they use. Uranium isotopes are sensitive to ocean redox conditions because uranium removal in anoxic/euxinic sediments imparts large uranium isotopic fractionation, so that the areal extent of this sink greatly influences the U isotopic composition of seawater relative to the riverine input. In the present paper, we report double-spike uranium isotopic measurements of 18 seawater samples, 18 continental crust lithologies, 7 individual minerals, 6 oyster samples, 3 modern evaporites samples, 2 lake water samples, 1 large river water sample and 1 coral sample. These measurements are supplemented by compilations of literature data. With this large data set (n = 444), we are able to constrain the flux of uranium into anoxic/euxinic sediments, as well as the global extent of anoxia in the modern ocean (percent of seafloor covered by anoxic/euxinic sediments). Our findings compare well with independent estimates and rule out the most recent U budget of Henderson and Anderson (2003). As part of our effort, we also present a data reduction method for double-spike measurements that is both comprehensive in the way the errors are propagated and simple to implement.
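A two-sink steady-state isotope mass balance of the kind used to constrain the anoxic/euxinic sink can be sketched as follows. The seawater δ238U (-0.392‰) and the fractionation Δ(anoxic-SW) = +0.6‰ come from the text above; the riverine value and the lumped offset of all other sinks are illustrative assumptions:

```python
def anoxic_sink_fraction(d_river, d_seawater, delta_anoxic, delta_other):
    """Two-sink steady-state mass balance for d238U:
    d_river = f*(d_sw + D_anox) + (1 - f)*(d_sw + D_other); solve for f,
    the fraction of the U outflux removed to anoxic/euxinic sediments."""
    return (d_river - d_seawater - delta_other) / (delta_anoxic - delta_other)

# Seawater value and D(anoxic - SW) are taken from the abstract; the riverine
# d238U and the lumped 'other sinks' offset are assumed for illustration.
f_anoxic = anoxic_sink_fraction(d_river=-0.29, d_seawater=-0.392,
                                delta_anoxic=0.6, delta_other=0.02)
```

With these inputs the anoxic/euxinic sink carries roughly 14% of the total outflux, consistent with the 14 ± 3% quoted in the abstract; the full budget partitions the outflux across more than two sinks, each with its own fractionation factor.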
NASA Technical Reports Server (NTRS)
Goodrich, John W.
2017-01-01
This paper presents results from numerical experiments for controlling the error caused by a damping layer boundary treatment when simulating the propagation of an acoustic signal from a continuous pressure source. The computations are with the 2D Linearized Euler Equations (LEE) for both a uniform mean flow and a steady parallel jet. The numerical experiments are with algorithms that are third, fifth, seventh and ninth order accurate in space and time. The numerical domain is enclosed in a damping layer boundary treatment. The damping is implemented in a time accurate manner, with simple polynomial damping profiles of second, fourth, sixth and eighth power. At the outer boundaries of the damping layer the propagating solution is uniformly set to zero. The complete boundary treatment is remarkably simple and intrinsically independent of the dimension of the spatial domain. The reported results show the relative effect on the error from the boundary treatment by varying the damping layer width, damping profile power, damping amplitude, propagation time, grid resolution and algorithm order. The issue that is being addressed is not the accuracy of the numerical solution when compared to a mathematical solution, but the effect of the complete boundary treatment on the numerical solution, and to what degree the error in the numerical solution from the complete boundary treatment can be controlled. We report maximum relative absolute errors from just the boundary treatment that range from O(10^-2) to O(10^-7).
Accounting for apparent deviations between calorimetric and van't Hoff enthalpies.
Kantonen, Samuel A; Henriksen, Niel M; Gilson, Michael K
2018-03-01
In theory, binding enthalpies directly obtained from calorimetry (such as ITC) and the temperature dependence of the binding free energy (van't Hoff method) should agree. However, previous studies have often found them to be discrepant. Experimental binding enthalpies (both calorimetric and van't Hoff) are obtained for two host-guest pairs using ITC, and the discrepancy between the two enthalpies is examined. Modeling of artificial ITC data is also used to examine how different sources of error propagate to both types of binding enthalpies. For the host-guest pairs examined here, good agreement, to within about 0.4 kcal/mol, is obtained between the two enthalpies. Additionally, using artificial data, we find that different sources of error propagate to either enthalpy uniquely, with concentration error and heat error propagating primarily to calorimetric and van't Hoff enthalpies, respectively. With modern calorimeters, good agreement between van't Hoff and calorimetric enthalpies should be achievable, barring issues due to non-ideality or unanticipated measurement pathologies. Indeed, disagreement between the two can serve as a flag for error-prone datasets. A review of the underlying theory supports the expectation that these two quantities should be in agreement. We address and arguably resolve long-standing questions regarding the relationship between calorimetric and van't Hoff enthalpies. In addition, we show that comparison of these two quantities can be used as an internal consistency check of a calorimetry study. Copyright © 2017 Elsevier B.V. All rights reserved.
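The van't Hoff route mentioned in this abstract extracts ΔH from the slope of ln K versus 1/T. A minimal sketch of that extraction, using synthetic values for an assumed temperature-independent enthalpy and entropy (not the study's hosts, guests, or data):

```python
import numpy as np

# van't Hoff relation: ln K = -dH/(R*T) + dS/R, so the slope of
# ln K against 1/T is -dH/R.
R = 1.987e-3                       # gas constant, kcal/(mol K)
dH_true, dS_true = -10.0, -0.015   # assumed enthalpy (kcal/mol), entropy (kcal/mol/K)
T = np.array([278.0, 288.0, 298.0, 308.0, 318.0])
lnK = -dH_true / (R * T) + dS_true / R

slope, intercept = np.polyfit(1.0 / T, lnK, 1)
dH_vantHoff = -slope * R
print(f"recovered van't Hoff enthalpy: {dH_vantHoff:.2f} kcal/mol")
```

With noise-free synthetic data the fit recovers the input enthalpy exactly; the abstract's point is that real heat-measurement error enters through the fitted slope, whereas concentration error mostly distorts the directly measured calorimetric heat.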
Bao, Chen; Wu, Hongfei; Li, Li; Newcomer, Darrell; Long, Philip E; Williams, Kenneth H
2014-09-02
We aim to understand the scale-dependent evolution of uranium bioreduction during a field experiment at a former uranium mill site near Rifle, Colorado. Acetate was injected to stimulate Fe-reducing bacteria (FeRB) and to immobilize uranium by reducing aqueous U(VI) to insoluble U(IV). Bicarbonate was coinjected in half of the domain to mobilize sorbed U(VI). We used reactive transport modeling to integrate hydraulic and geochemical data and to quantify rates at the grid-block (0.25 m) and experimental field scale (tens of meters). Although local rates varied by orders of magnitude in conjunction with biostimulation fronts propagating downstream, field-scale rates were dominated by rates that were orders of magnitude higher at a few hot spots where Fe(III), U(VI), and FeRB were at their maxima in the vicinity of the injection wells. The timing of maximum rates ("hot moments") at particular locations correlated inversely with distance from the injection wells. Although bicarbonate injection enhanced local rates near the injection wells by a maximum of 39.4%, its effect at the field scale was limited to a maximum of 10.0%. We propose a rate-versus-measurement-length relationship (log R' = -0.63 log L - 2.20, with R' in μmol/mg cell protein/day and L in meters) for order-of-magnitude estimation of uranium bioreduction rates across scales.
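The proposed rate-versus-length relationship is a simple power law, so it is one line of code. A sketch evaluating it at the two scales named in the abstract (grid block and field) plus an intermediate one:

```python
import math

# Reported scaling: log10 R' = -0.63 * log10 L - 2.20
# (R' in umol/mg cell protein/day, L in meters).
def bioreduction_rate(length_m):
    return 10 ** (-0.63 * math.log10(length_m) - 2.20)

for L in (0.25, 1.0, 10.0):
    print(f"L = {L:6.2f} m -> R' = {bioreduction_rate(L):.2e} umol/mg protein/day")
```

The negative exponent encodes the abstract's main finding: apparent rates decrease as the measurement length grows, because large-scale averages dilute the hot spots that dominate local rates.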
Optimal strategies for throwing accurately
NASA Astrophysics Data System (ADS)
Venkadesan, M.; Mahadevan, L.
2017-04-01
The accuracy of throwing in games and sports is governed by how errors in planning and initial conditions are propagated by the dynamics of the projectile. In the simplest setting, the projectile path is typically described by a deterministic parabolic trajectory which has the potential to amplify noisy launch conditions. By analysing how parabolic trajectories propagate errors, we show how to devise optimal strategies for a throwing task demanding accuracy. Our calculations explain observed speed-accuracy trade-offs, preferred throwing style of overarm versus underarm, and strategies for games such as dart throwing, despite having left out most biological complexities. As our criteria for optimal performance depend on the target location, shape and the level of uncertainty in planning, they also naturally suggest an iterative scheme to learn throwing strategies by trial and error. PMID:28484641
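The core idea of this abstract, that the projectile dynamics either amplify or suppress launch-condition noise, can be illustrated with a drastically simplified drag-free model (this is not the authors' analysis, and all numbers are assumed). For range Rg = v² sin(2θ)/g, the sensitivity to angle error, dRg/dθ = 2v² cos(2θ)/g, vanishes at θ = 45°, so Monte Carlo sampling of a noisy release angle shows very different landing spreads for different launch angles at the same speed:

```python
import numpy as np

# Landing range of a drag-free parabolic throw launched at speed v and
# angle theta, with Gaussian noise on the release angle.
rng = np.random.default_rng(0)
g, v, sigma_theta = 9.81, 10.0, np.radians(2.0)   # assumed values

def landing_range(theta, n=100_000):
    th = theta + sigma_theta * rng.standard_normal(n)
    return v ** 2 * np.sin(2 * th) / g

spread_30 = landing_range(np.radians(30.0)).std()
spread_45 = landing_range(np.radians(45.0)).std()
print(f"range spread at 30 deg: {spread_30:.3f} m; at 45 deg: {spread_45:.3f} m")
```

Near 45° the first-order sensitivity is zero and only a much smaller second-order spread survives, a toy version of the error-propagation calculation that underlies the paper's optimal-strategy results.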
NASA Technical Reports Server (NTRS)
Carreno, Victor A.; Choi, G.; Iyer, R. K.
1990-01-01
A simulation study is described which predicts the susceptibility of an advanced control system to electrical transients resulting in logic errors, latched errors, error propagation, and digital upset. The system is based on a custom-designed microprocessor and it incorporates fault-tolerant techniques. The system under test and the method to perform the transient injection experiment are described. Results for 2100 transient injections are analyzed and classified according to charge level, type of error, and location of injection.
Evaluation of Acoustic Doppler Current Profiler measurements of river discharge
Morlock, S.E.
1996-01-01
The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.
Evaluation of a wave-vector-frequency-domain method for nonlinear wave propagation
Jing, Yun; Tao, Molei; Clement, Greg T.
2011-01-01
A wave-vector-frequency-domain method is presented to describe one-directional forward or backward acoustic wave propagation in a nonlinear homogeneous medium. Starting from a frequency-domain representation of the second-order nonlinear acoustic wave equation, an implicit solution for the nonlinear term is proposed by employing the Green’s function. Its approximation, which is more suitable for numerical implementation, is used. An error study is carried out to test the efficiency of the model by comparing the results with the Fubini solution. It is shown that the error grows as the propagation distance and step-size increase. However, for the specific case tested, even at a step size as large as one wavelength, sufficient accuracy for plane-wave propagation is observed. A two-dimensional steered transducer problem is explored to verify the nonlinear acoustic field directional independence of the model. A three-dimensional single-element transducer problem is solved to verify the forward model by comparing it with an existing nonlinear wave propagation code. Finally, backward-projection behavior is examined. The sound field over a plane in an absorptive medium is backward projected to the source and compared with the initial field, where good agreement is observed. PMID:21302985
Verifying Parentage and Confirming Identity in Blackberry with a Fingerprinting Set
USDA-ARS?s Scientific Manuscript database
Parentage and identity confirmation is an important aspect of managing clonally propagated, outcrossing crops. Potential errors resulting in misidentification include off-type pollination events, labeling errors, or sports of clones. DNA fingerprinting sets are an excellent solution to quickly identify off-type ...
NASA Technical Reports Server (NTRS)
Borgia, Andrea; Spera, Frank J.
1990-01-01
This work discusses the propagation of errors for the recovery of the shear rate from wide-gap concentric cylinder viscometric measurements of non-Newtonian fluids. A least-squares regression of stress on angular velocity data to a system of arbitrary functions is used to propagate the errors for the series solution to the viscometric flow developed by Krieger and Elrod (1953) and Pawlowski (1953) ('power-law' approximation) and for the first term of the series developed by Krieger (1968). A numerical experiment shows that, for measurements affected by significant errors, the first term of the Krieger-Elrod-Pawlowski series ('infinite radius' approximation) and the power-law approximation may recover the shear rate with the same accuracy as the full Krieger-Elrod-Pawlowski solution. An experiment on a clay slurry indicates that the clay has a larger yield stress at rest than during shearing, and that, for the range of shear rates investigated, a four-parameter constitutive equation approximates its rheology reasonably well. The error analysis presented is useful for studying the rheology of fluids such as particle suspensions, slurries, foams, and magma.
Contribution of uranium to gross alpha radioactivity in some environmental samples in Kuwait
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bou-Rabee, F.; Bakir, Y.; Bem, H.
1995-08-01
This study was done in connection with the use of uranium-tipped antitank shells during the Gulf War and possible contamination of the environment of Kuwait. It was found that uranium concentrations in the soil samples ranged from 0.3 μg/g to 1.85 μg/g. The average value of 0.7 μg/g was lower than the world average value of 2.1 μg/g for surface soils. Its contribution to the total natural alpha radioactivity (excluding Rn and its short-lived daughters) varied from 1.1% to 14%. The solid fall-out samples showed higher uranium concentrations, which varied from 0.35 μg/g to 1.73 μg/g (average 1.47 μg/g), but their contribution to the gross alpha radioactivity was in the same range, from 1.1% to 13.2%. The concentration of uranium in suspended air matter samples was 2.0 μg/g during the summer of 1993 and 1.0 μg/g during the winter of 1994. The uranium contribution to the natural alpha radioactivity in these samples was in the same range, but lower for the winter period. The 235U/238U isotopic ratio of the measured samples was, within the experimental error of ±0.001, close to the theoretical value of 0.007. The calculated total annual intake of uranium via inhalation for the Kuwait population was 0.07 Bq, i.e., 0.2% of the annual limit on intake. 13 refs., 1 fig., 3 tabs.
Bland, D; Rona, R; Coggon, D; Anderson, J; Greenberg, N; Hull, L; Wessely, S
2007-01-01
Objectives To assess the distribution and risk factors of depleted uranium uptake in military personnel who had taken part in the invasion of Iraq in 2003. Methods Sector field inductively coupled plasma-mass spectrometry (SF-ICP-MS) was used to determine the uranium concentration and 238U/235U isotopic ratio in spot urine samples. The authors collected urine samples from four groups identified a priori as having different potential for exposure to depleted uranium. These groups were: combat personnel (n = 199); non-combat personnel (n = 96); medical personnel (n = 22); and “clean-up” personnel (n = 24) who had been involved in the maintenance, repair or clearance of potentially contaminated vehicles in Iraq. A short questionnaire was used to ascertain individual experience of circumstances in which depleted uranium exposure might have occurred. Results There was no statistically significant difference in the 238U/235U ratio between groups. Mean ratios by group varied from 138.0 (95% CI 137.3 to 138.7) for clean-up personnel to 138.2 (95% CI 138.0 to 138.5) for combat personnel, and were close to the ratio of 137.9 for natural uranium. The two highest individual ratios (146.9 and 147.7) were retested using more accurate multiple collector inductively coupled plasma-mass spectrometry (MC-ICP-MS) and found to be within measurement error of that for natural uranium. There were no significant differences in isotope ratio between participants according to self-reported circumstances of potential depleted uranium exposure. Conclusions Based on measurements using a SF-ICP-MS apparatus, this study provides reassurance following concern about potential widespread depleted uranium uptake in the UK military. The rare occurrence of elevated ratios may reflect the limits of accuracy of the SF-ICP-MS apparatus and not a real increase from the natural proportions of the isotopes. 
Any uptake of depleted uranium among participants in this study sample would be very unlikely to have any implications for health. PMID:17609224
Forward and backward uncertainty propagation: an oxidation ditch modelling example.
Abusam, A; Keesman, K J; van Straten, G
2003-01-01
In the field of water technology, forward uncertainty propagation is frequently used, whereas backward uncertainty propagation is rarely used. In forward uncertainty analysis, one moves from a given (or assumed) parameter subspace towards the corresponding distribution of the output or objective function. In backward uncertainty propagation, one moves in the reverse direction, from the distribution function towards the parameter subspace. Backward uncertainty propagation, which is a generalisation of parameter estimation error analysis, gives information essential for designing experimental or monitoring programmes, and for tighter bounding of parameter uncertainty intervals. The procedure for carrying out backward uncertainty propagation is illustrated in this technical note by a working example for an oxidation ditch wastewater treatment plant. The results demonstrate that essential information can be obtained by carrying out backward uncertainty propagation analysis.
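The forward direction described in this abstract, sampling a parameter subspace and observing the induced output distribution, can be sketched generically. The plant model below is a made-up stand-in, not the note's oxidation ditch model, and all parameter distributions are assumed:

```python
import numpy as np

# Forward uncertainty propagation by Monte Carlo: sample parameters from
# assumed distributions, push each sample through the model, and summarize
# the resulting output distribution.
rng = np.random.default_rng(1)

def effluent_model(mu_max, Ks, hrt=0.5, S_in=200.0):
    # crude Monod-style stand-in for an effluent substrate concentration
    S = Ks / (mu_max * hrt * 10.0)
    return min(S, S_in)

mu = rng.normal(4.0, 0.4, 10_000)    # max growth rate, 1/d (assumed)
Ks = rng.normal(10.0, 1.0, 10_000)   # half-saturation constant, mg/L (assumed)
out = np.array([effluent_model(m, k) for m, k in zip(mu, Ks)])
lo_, hi_ = np.percentile(out, [2.5, 97.5])
print(f"95% interval for effluent substrate: [{lo_:.2f}, {hi_:.2f}] mg/L")
```

Backward propagation, as the note describes, runs the other way: starting from a required output distribution, one asks which region of (mu_max, Ks) space is consistent with it, which is what makes it useful for experiment design.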
Space-Borne Laser Altimeter Geolocation Error Analysis
NASA Astrophysics Data System (ADS)
Wang, Y.; Fang, J.; Ai, Y.
2018-05-01
This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESat satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error and range measurement error, on the geolocation accuracy of the laser spot is analysed by simulated experiments. The reasons for the different influences on geolocation accuracy in different directions are discussed, and to satisfy the accuracy requirements for laser control points, a design index for each error source is put forward.
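The flavor of such an error budget can be shown with a back-of-the-envelope sketch (illustrative numbers, not the paper's derivation or values): the horizontal geolocation error contributed by a pointing-angle error ε at range R is approximately R·ε, and independent error sources combine in quadrature.

```python
import math

# Simplified laser-altimeter geolocation error budget.
R = 600e3                       # range to ground, m (assumed ICESat-like orbit)
sigma_point = 1.5 * 4.8481e-6   # assumed 1.5 arcsec pointing error, in radians
sigma_pos = 0.05                # assumed platform position error, m
sigma_range = 0.10              # assumed range error, m (mostly vertical)

pointing_contrib = R * sigma_point            # small-angle projection to ground
horiz = math.hypot(pointing_contrib, sigma_pos)
print(f"pointing contribution: {pointing_contrib:.2f} m, total horizontal: {horiz:.2f} m")
```

Even this toy budget reproduces the paper's qualitative conclusion: at orbital ranges the pointing-angle term (meters per arcsecond) dwarfs centimeter-level platform position and range errors in the horizontal direction, while range error dominates vertically.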
Factoring uncertainty into restoration modeling of in-situ leach uranium mines
Johnson, Raymond H.; Friedel, Michael J.
2009-01-01
Postmining restoration is one of the greatest concerns for uranium in-situ leach (ISL) mining operations. The ISL-affected aquifer needs to be returned to conditions specified in the mining permit (either premining or other specified conditions). When uranium ISL operations are completed, postmining restoration is usually achieved by injecting reducing agents into the mined zone. The objective of this process is to restore the aquifer to premining conditions by reducing the solubility of uranium and other metals in the ground water. Reactive transport modeling is a potentially useful method for simulating the effectiveness of proposed restoration techniques. While reactive transport models can be useful, they are a simplification of reality that introduces uncertainty through the model conceptualization, parameterization, and calibration processes. For this reason, quantifying the uncertainty in simulated temporal and spatial hydrogeochemistry is important for postremedial risk evaluation of metal concentrations and mobility. Quantifying the range of uncertainty in key predictions (such as uranium concentrations at a specific location) can be achieved using forward Monte Carlo or other inverse modeling techniques (trial-and-error parameter sensitivity, calibration-constrained Monte Carlo). These techniques provide simulated values of metal concentrations at specified locations that can be presented as nonlinear uncertainty limits or probability density functions. Decision makers can use these results to better evaluate environmental risk, since future metal concentrations are presented as a limited range of possibilities based on a scientific evaluation of uncertainty.
Improvement in error propagation in the Shack-Hartmann-type zonal wavefront sensors.
Pathak, Biswajit; Boruah, Bosanta R
2017-12-01
Estimation of the wavefront from measured slope values is an essential step in a Shack-Hartmann-type wavefront sensor. Using an appropriate estimation algorithm, these measured slopes are converted into wavefront phase values. Hence, accuracy in wavefront estimation lies in proper interpretation of these measured slope values using the chosen estimation algorithm. There are two important sources of errors associated with the wavefront estimation process, namely, the slope measurement error and the algorithm discretization error. The former type is due to the noise in the slope measurements or to the detector centroiding error, and the latter is a consequence of solving equations of a basic estimation algorithm adopted onto a discrete geometry. These errors deserve particular attention, because they decide the preference of a specific estimation algorithm for wavefront estimation. In this paper, we investigate these two important sources of errors associated with the wavefront estimation algorithms of Shack-Hartmann-type wavefront sensors. We consider the widely used Southwell algorithm and the recently proposed Pathak-Boruah algorithm [J. Opt. 16, 055403 (2014)] and perform a comparative study between the two. We find that the latter algorithm is inherently superior to the Southwell algorithm in terms of the error propagation performance. We also conduct experiments that further establish the correctness of the comparative study between the said two estimation algorithms.
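The slope-to-phase step common to all such zonal estimators can be sketched in one dimension. This is a simplified stand-in for Southwell-type least-squares estimation, not either paper's implementation: finite-difference slope equations are solved in the least-squares sense, and the unobservable piston term is fixed afterwards.

```python
import numpy as np

# Reconstruct a 1-D wavefront from node-to-node slope measurements by
# least-squares inversion of the difference operator A, where
# (A phi)[i] = (phi[i+1] - phi[i]) / h = measured slope between nodes.
n, h = 32, 1.0
x = np.arange(n) * h
phase_true = 0.05 * x ** 2 - 0.3 * x          # known test wavefront

slopes = np.diff(phase_true) / h              # noise-free "measurements"

A = np.zeros((n - 1, n))
idx = np.arange(n - 1)
A[idx, idx] = -1.0 / h
A[idx, idx + 1] = 1.0 / h
phi, *_ = np.linalg.lstsq(A, slopes, rcond=None)
phi -= phi.mean() - phase_true.mean()         # remove piston ambiguity

err = np.max(np.abs(phi - phase_true))
print(f"max reconstruction error: {err:.2e}")
```

With noise-free slopes the reconstruction is exact up to piston; the error-propagation comparison in the abstract concerns how slope noise is amplified through this inversion, which differs between discretization geometries.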
Bias Reduction and Filter Convergence for Long Range Stereo
NASA Technical Reports Server (NTRS)
Sibley, Gabe; Matthies, Larry; Sukhatme, Gaurav
2005-01-01
We are concerned here with improving long range stereo by filtering image sequences. Traditionally, measurement errors from stereo camera systems have been approximated as 3-D Gaussians, where the mean is derived by triangulation and the covariance by linearized error propagation. However, there are two problems that arise when filtering such 3-D measurements. First, stereo triangulation suffers from a range dependent statistical bias; when filtering this leads to over-estimating the true range. Second, filtering 3-D measurements derived via linearized error propagation leads to apparent filter divergence; the estimator is biased to under-estimate range. To address the first issue, we examine the statistical behavior of stereo triangulation and show how to remove the bias by series expansion. The solution to the second problem is to filter with image coordinates as measurements instead of triangulated 3-D coordinates.
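The range-dependent triangulation bias described in this abstract follows from the convexity of 1/x: with zero-mean noise on the disparity d, the mean of fb/(d + noise) exceeds the true range fb/d. A Monte Carlo illustration with made-up camera numbers (not the paper's system), including the second-order series term that predicts the bias:

```python
import numpy as np

# Stereo range from disparity: z = f * b / d. Gaussian disparity noise
# biases the mean triangulated range upward, and a series expansion
# predicts the bias as z_true * (sigma/d)^2 to second order.
rng = np.random.default_rng(2)
f_pix, baseline = 500.0, 0.3           # assumed focal length (px), baseline (m)
true_range = 30.0                      # m
d = f_pix * baseline / true_range      # true disparity = 5 px
sigma_d = 0.2                          # assumed disparity noise, px

noise = sigma_d * rng.standard_normal(1_000_000)
ranges = f_pix * baseline / (d + noise)
bias = ranges.mean() - true_range
predicted_bias = true_range * (sigma_d / d) ** 2
print(f"Monte Carlo bias: {bias:+.3f} m, series prediction: {predicted_bias:+.3f} m")
```

The agreement between the sampled and series-predicted bias mirrors the paper's remedy: characterize the triangulation statistics by series expansion and subtract the bias, or filter in image coordinates where the noise is closer to Gaussian.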
Skeletal Mechanism Generation of Surrogate Jet Fuels for Aeropropulsion Modeling
NASA Astrophysics Data System (ADS)
Sung, Chih-Jen; Niemeyer, Kyle E.
2010-05-01
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with skeletal reductions of two important hydrocarbon components, n-heptane and n-decane, relevant to surrogate jet fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each previous method, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal.
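The DRGEP half of the method can be shown on a toy graph. The species names and edge weights below are invented for illustration, not from any real mechanism: each edge weight is a direct interaction coefficient, and a species' overall importance to the target is the maximum product of coefficients over any path from the target, with species below a cutoff removed.

```python
# Toy DRGEP pass: propagate importance coefficients from a target species
# through a directed relation graph by maximum path product, then retain
# only species above an error-controlled cutoff.
graph = {
    "fuel": {"A": 0.9, "B": 0.3},   # hypothetical species and couplings
    "A": {"C": 0.5},
    "B": {"C": 0.05, "D": 0.01},
    "C": {},
    "D": {},
}

def drgep_coefficients(graph, target):
    best = {s: 0.0 for s in graph}
    best[target] = 1.0
    stack = [target]
    while stack:                      # relax until no path improves a coefficient
        s = stack.pop()
        for nbr, w in graph[s].items():
            cand = best[s] * w
            if cand > best[nbr]:
                best[nbr] = cand
                stack.append(nbr)
    return best

coeffs = drgep_coefficients(graph, "fuel")
skeletal = {s for s, cval in coeffs.items() if cval >= 0.1}
print(coeffs)
print("retained species:", sorted(skeletal))
```

Here "C" survives via the strong fuel→A→C path even though its direct coupling through "B" is weak, which is exactly the error-propagation refinement DRGEP adds over a plain directed relation graph; the subsequent sensitivity-analysis stage (the SA in DRGEPSA) would then test borderline species individually.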
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunn, Floyd E.; Hu, Lin-wen; Wilson, Erik
The STAT code was written to automate many of the steady-state thermal hydraulic safety calculations for the MIT research reactor, both for conversion of the reactor from high enrichment uranium fuel to low enrichment uranium fuel and for future fuel re-loads after the conversion. A Monte-Carlo statistical propagation approach is used to treat uncertainties in important parameters in the analysis. These safety calculations are ultimately intended to protect against high fuel plate temperatures due to critical heat flux or departure from nucleate boiling or onset of flow instability; but additional margin is obtained by basing the limiting safety settings on avoiding onset of nucleate boiling. STAT7 can simultaneously analyze all of the axial nodes of all of the fuel plates and all of the coolant channels for one stripe of a fuel element. The stripes run the length of the fuel, from the bottom to the top. Power splits are calculated for each axial node of each plate to determine how much of the power goes out each face of the plate. By running STAT7 multiple times, full core analysis has been performed by analyzing the margin to ONB for each axial node of each stripe of each plate of each element in the core.
Detecting and preventing error propagation via competitive learning.
Silva, Thiago Christiano; Zhao, Liang
2013-05-01
Semisupervised learning is a machine learning approach which is able to employ both labeled and unlabeled samples in the training process. It is an important mechanism for autonomous systems due to its ability to exploit already acquired information while exploring new knowledge in the learning space at the same time. In these cases, the reliability of the labels is a crucial factor, because mislabeled samples may propagate wrong labels to a portion of or even the entire data set. This paper addresses the error propagation problem originated by these mislabeled samples by presenting a mechanism embedded in a network-based (graph-based) semisupervised learning method. Such a procedure is based on a combined random-preferential walk of particles in a network constructed from the input data set. Particles of the same class cooperate with each other, while particles of different classes compete to propagate class labels to the whole network. Computer simulations conducted on synthetic and real-world data sets reveal the effectiveness of the model. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Glade, Thomas
2016-12-01
There is unanimous agreement that a precise spatial representation of past landslide occurrences is a prerequisite to produce high quality statistical landslide susceptibility models. Even though perfectly accurate landslide inventories rarely exist, investigations of how landslide inventory-based errors propagate into subsequent statistical landslide susceptibility models are scarce. The main objective of this research was to systematically examine whether and how inventory-based positional inaccuracies of different magnitudes influence modelled relationships, validation results, variable importance and the visual appearance of landslide susceptibility maps. The study was conducted for a landslide-prone site located in the districts of Amstetten and Waidhofen an der Ybbs, eastern Austria, where an earth-slide point inventory was available. The methodological approach comprised an artificial introduction of inventory-based positional errors into the present landslide data set and an in-depth evaluation of subsequent modelling results. Positional errors were introduced by artificially changing the original landslide position by a mean distance of 5, 10, 20, 50 and 120 m. The resulting differently precise response variables were separately used to train logistic regression models. Odds ratios of predictor variables provided insights into modelled relationships. Cross-validation and spatial cross-validation enabled an assessment of predictive performances and permutation-based variable importance. All analyses were additionally carried out with synthetically generated data sets to further verify the findings under rather controlled conditions. The results revealed that an increasing positional inventory-based error was generally related to increasing distortions of modelling and validation results. However, the findings also highlighted that interdependencies between inventory-based spatial inaccuracies and statistical landslide susceptibility models are complex. 
The systematic comparisons of 12 models provided valuable evidence that the respective error-propagation was not only determined by the degree of positional inaccuracy inherent in the landslide data, but also by the spatial representation of landslides and the environment, landslide magnitude, the characteristics of the study area, the selected classification method and an interplay of predictors within multiple variable models. Based on the results, we deduced that a direct propagation of minor to moderate inventory-based positional errors into modelling results can be partly counteracted by adapting the modelling design (e.g. generalization of input data, opting for strongly generalizing classifiers). Since positional errors within landslide inventories are common and subsequent modelling and validation results are likely to be distorted, the potential existence of inventory-based positional inaccuracies should always be considered when assessing landslide susceptibility by means of empirical models.
Mutual optical intensity propagation through non-ideal mirrors
Meng, Xiangyu; Shi, Xianbo; Wang, Yong; ...
2017-08-18
The mutual optical intensity (MOI) model is extended to include the propagation of partially coherent radiation through non-ideal mirrors. The propagation of the MOI from the incident to the exit plane of the mirror is realised by local ray tracing. The effects of figure errors can be expressed as phase shifts obtained by either the phase projection approach or the direct path length method. Using the MOI model, the effects of figure errors are studied for diffraction-limited cases using elliptical cylinder mirrors. Figure errors with low spatial frequencies can vary the intensity distribution, redistribute the local coherence function and distort the wavefront, but have no effect on the global degree of coherence. The MOI model is benchmarked against HYBRID and the multi-electron Synchrotron Radiation Workshop (SRW) code. The results show that the MOI model gives accurate results under different coherence conditions of the beam. Other than intensity profiles, the MOI model can also provide the wavefront and the local coherence function at any location along the beamline. The capability of tuning the trade-off between accuracy and efficiency makes the MOI model an ideal tool for beamline design and optimization.
Uncertainty quantification in (α,n) neutron source calculations for an oxide matrix
Pigni, M. T.; Croft, S.; Gauld, I. C.
2016-04-25
Here we present a methodology to propagate nuclear data covariance information in neutron source calculations from (α,n) reactions. The approach is applied to estimate the uncertainty in the neutron generation rates for uranium oxide fuel types due to uncertainties in 1) 17,18O(α,n) reaction cross sections and 2) uranium and oxygen stopping power cross sections. The procedure to generate reaction cross section covariance information is based on the Bayesian fitting method implemented in the R-matrix SAMMY code. The evaluation methodology uses the Reich-Moore approximation to fit the 17,18O(α,n) reaction cross sections in order to derive a set of resonance parameters and a related covariance matrix that is then used to calculate the energy-dependent cross section covariance matrix. The stopping power cross sections and related covariance information for uranium and oxygen were obtained by fitting stopping power data in the energy range of 1 keV up to 12 MeV. Cross section perturbation factors based on the covariance information relative to the evaluated 17,18O(α,n) reaction cross sections, as well as uranium and oxygen stopping power cross sections, were used to generate a varied set of nuclear data libraries used in SOURCES4C and ORIGEN for inventory and source term calculations. The set of randomly perturbed output (α,n) source responses provides the mean values and standard deviations of the calculated responses, reflecting the uncertainties in the nuclear data used in the calculations. Lastly, the results and related uncertainties are compared with experimental thick-target (α,n) yields for uranium oxide.
Corrigendum to “Thermophysical properties of U3Si2 to 1773 K”
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Joshua Taylor; Nelson, Andrew Thomas; Dunwoody, John Tyler
2016-12-01
An error was discovered by the authors in the calculation of thermal diffusivity in “Thermophysical properties of U3Si2 to 1773 K”. The error was caused by operator error in the entry of parameters used to fit the temperature rise versus time model necessary to calculate the thermal diffusivity. This error propagated to the calculation of thermal conductivity, leading to values that were 18%–28% larger, along with correspondingly altered calculated Lorenz values.
NASA Astrophysics Data System (ADS)
Li, H. J.; Wei, F. S.; Feng, X. S.; Xie, Y. Q.
2008-09-01
This paper investigates methods to improve the predictions of Shock Arrival Time (SAT) of the original Shock Propagation Model (SPM). According to the classical blast wave theory adopted in the SPM, the shock propagation speed is determined by the total energy of the original explosion together with the background solar wind speed. Noting that there exists an intrinsic limit to the transit times computed by the SPM predictions for a specified ambient solar wind, we present a statistical analysis of the forecasting capability of the SPM using this intrinsic property. Two facts about the SPM are found: (1) the error in shock energy estimation is not the only cause of the prediction errors, and we should not expect the accuracy of the SPM to be improved drastically by an exact shock energy input; and (2) there are systematic differences in prediction results both for strong shocks propagating into a slow ambient solar wind and for weak shocks into a fast medium. Statistical analyses indicate the physical details of shock propagation and thus clearly point out directions for future improvement of the SPM. A simple modification is presented here, which shows that there is room for improvement of the SPM and thus that the original SPM is worthy of further development.
Temporal scaling in information propagation.
Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi
2014-06-18
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
Temporal scaling in information propagation
NASA Astrophysics Data System (ADS)
Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi
2014-06-01
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
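The scaling law described above can be sketched as a simple decay model. The baseline probability `p0` and exponent `alpha` below are hypothetical placeholders chosen for illustration, not the values fitted in the study:

```python
def propagation_probability(tau_days, p0=0.2, alpha=1.0):
    """Probability that a message propagates between two individuals as a
    function of the latency tau since their latest interaction.
    Power-law decay: p(tau) = p0 * tau**(-alpha), saturating at p0.
    p0 and alpha are hypothetical illustrative values, not fitted ones."""
    if tau_days < 1.0:
        return p0  # saturate below one day of latency
    return p0 * tau_days ** (-alpha)

# Probability decays with latency since the latest interaction
probs = [propagation_probability(t) for t in (1, 10, 100)]
```

The power-law form means a pair that interacted recently is far more likely to relay a message than one whose last interaction is months old, which is what makes the temporal model outperform a static propagation probability.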
QUANTIFYING UNCERTAINTY IN NET PRIMARY PRODUCTION MEASUREMENTS
Net primary production (NPP; e.g., g m⁻² yr⁻¹), a key ecosystem attribute, is estimated from a combination of other variables, e.g., standing crop biomass at several points in time, each of which is subject to errors in their measurement. These errors propagate as the variables a...
NASA Astrophysics Data System (ADS)
Dubovsky, O. A.; Semenov, V. A.; Orlov, A. V.; Sudarev, V. V.
2014-09-01
The microdynamics of large-amplitude nonlinear vibrations of uranium nitride diatomic lattices has been investigated using the computer simulation and neutron scattering methods at temperatures T = 600-2500°C near the thresholds of the dissociation and destruction of the reactor fuel materials. It has been found using the computer simulation that, in the spectral gap between the frequency bands of acoustic and optical phonons in crystals with an open surface, there are resonances of new-type harmonic surface vibrations and a gap-filling band of their genetic successors, i.e., nonlinear surface vibrations. Experimental measurements of the slow neutron scattering spectra of uranium nitride on the DIN-2PI neutron spectrometer have revealed resonances and bands of these surface vibrations in the spectral gap, as well as higher optical vibration overtones. It has been shown that the solitons and bisolitons initiate the formation and collapse of dynamic pores with the generation of surface vibrations at the boundaries of the cavities, evaporation of atoms and atomic clusters, formation of cracks, and destruction of the material. It has been demonstrated that the mass transfer of nitrogen in cracks and along grain boundaries can occur through the revealed microdynamics mechanism of the surfing diffusion of light nitrogen atoms at large-amplitude soliton waves propagating in the stabilizing sublattice of heavy uranium atoms and in the nitrogen sublattice.
Architecture Fault Modeling and Analysis with the Error Model Annex, Version 2
2016-06-01
outgoing error propagation condition declarations (see Section 5.2.2). The declaration consists of a source error behavior state, possibly annotated...2012. [Feiler 2013] Feiler, P. H.; Goodenough, J. B.; Gurfinkel, A.; Weinstock, C. B.; & Wrage, L. Four Pillars for Improving the Quality of...May 2002. [Paige 2009] Paige, Richard F.; Rose, Louis M.; Ge, Xiaocheng; Kolovos, Dimitrios S.; & Brooke, Phillip J. FPTC: Automated Safety
TIME SIGNALS, *SYNCHRONIZATION (ELECTRONICS), NETWORKS, FREQUENCY, STANDARDS, RADIO SIGNALS, ERRORS, VERY LOW FREQUENCY, PROPAGATION, ACCURACY, ATOMIC CLOCKS, CESIUM, RADIO STATIONS, NAVAL SHORE FACILITIES
An improved empirical model for diversity gain on Earth-space propagation paths
NASA Technical Reports Server (NTRS)
Hodge, D. B.
1981-01-01
An empirical model was generated to estimate diversity gain on Earth-space propagation paths as a function of Earth terminal separation distance, link frequency, elevation angle, and angle between the baseline and the path azimuth. The resulting model reproduces the entire experimental data set with an RMS error of 0.73 dB.
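A model of this general shape can be sketched as below. The separable form and every coefficient are illustrative assumptions made for this sketch, not Hodge's fitted parameters:

```python
import math

def diversity_gain_db(d_km, f_ghz, elev_deg, baseline_deg, a=3.0, b=0.2):
    """Illustrative sketch of an empirical diversity-gain model of the kind
    described: gain as a function of terminal separation d, link frequency f,
    elevation angle, and the angle between baseline and path azimuth.
    Every coefficient here is a hypothetical placeholder."""
    g_d = a * (1.0 - math.exp(-b * d_km))   # saturating growth with separation
    g_f = math.exp(-0.02 * f_ghz)           # mild decrease with frequency (assumed)
    g_e = 1.0 + 0.005 * elev_deg            # mild increase with elevation (assumed)
    g_b = 1.0 + 0.002 * baseline_deg        # weak baseline-angle dependence (assumed)
    return g_d * g_f * g_e * g_b
```

The saturating separation term captures the dominant physical effect: once the two terminals are far enough apart that rain cells decorrelate, additional separation buys little extra gain.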
A Recovery-Oriented Approach to Dependable Services: Repairing Past Errors with System-Wide Undo
2003-12-01
54 4.5.3 Handling propagating paradoxes: the squash interface ... 54 4.6 Discussion ... 84 6.3.3 Compensating for paradoxes ... 84 6.3.4 Squashing propagating...the service and comparing the behavior of the replicas to detect and squash misbehaving replicas. While on paper Byzantine fault tolerance may seem to
Pole of rotation analysis of present-day Juan de Fuca plate motion
NASA Technical Reports Server (NTRS)
Nishimura, C.; Wilson, D. S.; Hey, R. N.
1984-01-01
Convergence rates between the Juan de Fuca and North American plates are calculated by means of their relative, present-day pole of rotation. A method of calculating the propagation of errors in addition to the instantaneous poles of rotation is also formulated and applied to determine the Euler pole for Pacific-Juan de Fuca. This pole is vectorially added to previously published poles for North America-Pacific and 'hot spot'-Pacific to obtain North America-Juan de Fuca and 'hot spot'-Juan de Fuca, respectively. The errors associated with these resultant poles are determined by propagating the errors of the two summed angular velocity vectors. Under the assumption that hot spots are fixed with respect to a mantle reference frame, the average absolute velocity of the Juan de Fuca plate is computed at approximately 15 mm/yr, thereby making it the slowest-moving of the oceanic plates.
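The vector addition of angular velocity vectors and the accompanying error propagation can be sketched as follows, assuming independent errors so the covariances add. All pole vectors and covariances below are made-up illustrative values, not the published poles:

```python
import numpy as np

# Angular velocity vectors (deg/Myr, Cartesian) for two plate pairs;
# these numbers are hypothetical, for illustration only.
w_pac_jdf = np.array([0.40, -0.60, 0.30])    # Pacific-Juan de Fuca (hypothetical)
w_na_pac  = np.array([-0.10, 0.30, -0.55])   # North America-Pacific (hypothetical)

# 3x3 covariances of each angular velocity vector (hypothetical)
C_pac_jdf = np.diag([0.02, 0.03, 0.01]) ** 2
C_na_pac  = np.diag([0.01, 0.02, 0.02]) ** 2

# Vector addition gives the North America-Juan de Fuca pole ...
w_na_jdf = w_na_pac + w_pac_jdf
# ... and, for independent errors, the covariances of the summed
# angular velocity vectors simply add.
C_na_jdf = C_na_pac + C_pac_jdf
```

The resultant covariance is what the abstract refers to as "propagating the errors of the two summed angular velocity vectors"; correlated errors would require a cross-covariance term, omitted here.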
Error begat error: design error analysis and prevention in social infrastructure projects.
Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M
2012-09-01
Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed error causation in construction projects, design errors still remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in combination to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.
ERIC Educational Resources Information Center
Markstrom, Carol A.; Charley, Perry H.
2003-01-01
Disasters can be defined as catastrophic events that challenge the normal range of human coping ability. The technological/human-caused disaster, a classification of interest in this article, is attributable to human error or misjudgment. Lower socioeconomic status and race intersect in the heightened risk for technological/human-caused disasters…
A variational regularization of Abel transform for GPS radio occultation
NASA Astrophysics Data System (ADS)
Wee, Tae-Kwon
2018-04-01
In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The measurement error propagation is detrimental to the refractivity in lower altitudes. In particular, it builds up negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not incur the integration of error-possessing measurement and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's considered accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors compared to AI. 
A noteworthy finding is that in the heights and areas where the measurement bias is supposedly small, VR follows AI very closely in the mean refractivity, departing from the first guess. In the lowest few kilometers, where AI produces a large negative refractivity bias, VR reduces the refractivity bias substantially with the aid of the background, which in this study is the operational forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). It is concluded, based on the results presented in this study, that VR offers a definite advantage over AI in the quality of refractivity.
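The optimization problem VR solves can be illustrated with a small linear sketch: a cost with a measurement term weighted by the error covariance R and a background term weighted by B. The closed-form solution below stands in for the iterative adjoint solution used in the study, and `H` is only a placeholder for a linearized forward Abel operator:

```python
import numpy as np

def variational_solution(H, y, R, xb, B):
    """Closed-form minimizer of the variational cost
        J(x) = (y - Hx)^T R^-1 (y - Hx) + (x - xb)^T B^-1 (x - xb),
    i.e.  x = xb + (H^T R^-1 H + B^-1)^-1 H^T R^-1 (y - H xb).
    In the paper the problem is solved iteratively with the adjoint
    technique rather than in closed form; this is a small-dimensional
    sketch of the same regularized least-squares structure."""
    Ri = np.linalg.inv(R)
    Bi = np.linalg.inv(B)
    A = H.T @ Ri @ H + Bi
    return xb + np.linalg.solve(A, H.T @ Ri @ (y - H @ xb))
```

The two covariance weights reproduce the behavior noted above: where the measurement error R is small the solution follows the measurement (as VR follows AI), and where it is large the background term pulls the solution toward xb.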
Measurement configuration optimization for dynamic metrology using Stokes polarimetry
NASA Astrophysics Data System (ADS)
Liu, Jiamin; Zhang, Chuanwei; Zhong, Zhicheng; Gu, Honggang; Chen, Xiuguo; Jiang, Hao; Liu, Shiyuan
2018-05-01
As dynamic loading experiments such as shock compression tests are usually characterized by short duration, unrepeatability, and high costs, high temporal resolution and precise accuracy of the measurements are required. Owing to a temporal resolution down to the ten-nanosecond scale, a Stokes polarimeter with six parallel channels has been developed in this paper to capture such instantaneous changes in optical properties. Since the measurement accuracy heavily depends on the configuration of the probing beam incident angle and the polarizer azimuth angle, it is important to select an optimal combination from the numerous options. In this paper, a systematic error-propagation-based measurement configuration optimization method corresponding to the Stokes polarimeter was proposed. The maximal Frobenius norm of the combinatorial matrix of the configuration error propagating matrix and the intrinsic error propagating matrix is introduced to assess the measurement accuracy. The optimal configuration for thickness measurement of a SiO2 thin film deposited on a Si substrate has been achieved by minimizing the merit function. Simulation and experimental results show a good agreement between the optimal measurement configuration achieved experimentally using the polarimeter and the theoretical prediction. In particular, the experimental result shows that the relative error in the thickness measurement can be reduced from 6% to 1% by using the optimal polarizer azimuth angle when the incident angle is 45°. Furthermore, the optimal configuration for the dynamic metrology of a nickel foil under quasi-dynamic loading is investigated using the proposed optimization method.
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions require the representation of the full probability density function (PDF) of the random orbit state. Through representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e. Kalman Filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors that are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of a full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
Image reduction pipeline for the detection of variable sources in highly crowded fields
NASA Astrophysics Data System (ADS)
Gössl, C. A.; Riffeser, A.
2002-01-01
We present a reduction pipeline for CCD (charge-coupled device) images which was built to search for variable sources in highly crowded fields like the M 31 bulge and to handle extensive databases due to large time series. We describe all steps of the standard reduction in detail with emphasis on the realisation of per-pixel error propagation: bias correction, treatment of bad pixels, flatfielding, and filtering of cosmic rays. The problems of conservation of the PSF (point spread function) and error propagation in our image alignment procedure as well as the detection algorithm for variable sources are discussed: we build difference images via image convolution with a technique called OIS (optimal image subtraction; Alard & Lupton 1998), proceed with an automatic detection of variable sources in noise-dominated images, and finally apply a PSF-fitting, relative photometry to the sources found. For the WeCAPP project (Riffeser et al. 2001) we achieve 3σ detections for variable sources with an apparent brightness of e.g. m = 24.9 mag at their minimum and a variation of Δm = 2.4 mag (or m = 21.9 mag brightness minimum and a variation of Δm = 0.6 mag) on a background signal of 18.1 mag/arcsec² based on a 500 s exposure with 1.5 arcsec seeing at a 1.2 m telescope. The complete per-pixel error propagation allows us to give accurate errors for each measurement.
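Per-pixel error propagation through the early reduction steps can be sketched as below. The noise model (Poisson noise plus read noise, with a noiseless normalized flat) is a simplifying assumption for illustration, not the pipeline's exact treatment:

```python
import numpy as np

def calibrate_with_errors(raw, bias, flat, read_noise, gain):
    """Per-pixel error propagation through bias subtraction and
    flatfielding.  All image inputs are 2-D arrays; the flat is assumed
    normalized and noiseless (a simplifying assumption).  read_noise is
    in ADU, gain in electrons per ADU; the Poisson term uses the
    bias-subtracted signal."""
    signal = (raw - bias) / flat
    # variance in ADU^2: Poisson term (counts/gain) plus read noise,
    # then scaled by the flatfield division
    var = (np.maximum(raw - bias, 0.0) / gain + read_noise**2) / flat**2
    return signal, np.sqrt(var)
```

Carrying the error array alongside the signal array from the first step onward is what lets every later stage (alignment, difference imaging, photometry) report an accurate per-measurement uncertainty.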
A study for systematic errors of the GLA forecast model in tropical regions
NASA Technical Reports Server (NTRS)
Chen, Tsing-Chang; Baker, Wayman E.; Pfaendtner, James; Corrigan, Martin
1988-01-01
From the sensitivity studies performed with the Goddard Laboratory for Atmospheres (GLA) analysis/forecast system, it was revealed that the forecast errors in the tropics affect the ability to forecast midlatitude weather in some cases. Apparently, the forecast errors occurring in the tropics can propagate to midlatitudes. Therefore, the systematic error analysis of the GLA forecast system becomes a necessary step in improving the model's forecast performance. The major effort of this study is to examine the possible impact of the hydrological-cycle forecast error on dynamical fields in the GLA forecast system.
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1980-01-01
Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.
The Importance of Semi-Major Axis Knowledge in the Determination of Near-Circular Orbits
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Schiesser, Emil R.
1998-01-01
Modern orbit determination has mostly been accomplished using Cartesian coordinates. This usage has carried over in recent years to the use of GPS for satellite orbit determination. The unprecedented positioning accuracy of GPS has tended to focus attention more on the system's capability to locate the spacecraft at a particular epoch than on its accuracy in determination of the orbit, per se. As is well known, the latter depends on a coordinated knowledge of position, velocity, and the correlation between their errors. Failure to determine a properly coordinated position/velocity state vector at a given epoch can lead to an epoch state that does not propagate well, and/or may not be usable for the execution of orbit adjustment maneuvers. For the quite common case of near-circular orbits, the degree to which position and velocity estimates are properly coordinated is largely captured by the error in semi-major axis (SMA) they jointly produce. Figure 1 depicts the relationships among radius error, speed error, and their correlation which exist for a typical low altitude Earth orbit. Two familiar consequences of the relationships Figure 1 shows are the following: (1) downrange position error grows at the per-orbit rate of 3π times the SMA error; (2) a velocity change imparted to the orbit will have an error of π divided by the orbit period times the SMA error. A less familiar consequence occurs in the problem of initializing the covariance matrix for a sequential orbit determination filter. An initial covariance consistent with orbital dynamics should be used if the covariance is to propagate well. Properly accounting for the SMA error of the initial state in the construction of the initial covariance accomplishes half of this objective, by specifying the partition of the covariance corresponding to down-track position and radial velocity errors.
The remainder of the in-plane covariance partition may be specified in terms of the flight path angle error of the initial state. Figure 2 illustrates the effect of properly and not properly initializing a covariance. This figure was produced by propagating the covariance shown on the plot, without process noise, in a circular low Earth orbit whose period is 5828.5 seconds. The upper subplot, in which the proper relationships among position, velocity, and their correlation has been used, shows overall error growth, in terms of the standard deviations of the inertial position coordinates, of about half of the lower subplot, whose initial covariance was based on other considerations.
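The two familiar consequences quoted above are easy to evaluate numerically. The 10 m SMA error below is an arbitrary example value; the period is the one quoted for the circular low Earth orbit:

```python
import math

def downrange_error_per_orbit(sma_error_m):
    """Downrange position error grows at 3*pi times the SMA error per orbit."""
    return 3.0 * math.pi * sma_error_m

def delta_v_error(sma_error_m, period_s):
    """A velocity change imparted to the orbit has an error of
    (pi / orbit period) times the SMA error."""
    return math.pi * sma_error_m / period_s

# Example: a 10 m SMA error in the quoted low Earth orbit (period 5828.5 s)
drift = downrange_error_per_orbit(10.0)   # downrange drift per orbit, in meters
dv_err = delta_v_error(10.0, 5828.5)      # maneuver execution error, in m/s
```

Even a modest 10 m SMA error thus produces roughly 94 m of downrange drift every revolution, which is why SMA knowledge, not raw position accuracy, governs how well an epoch state propagates.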
Distribution of Pd, Ag & U in the SiC Layer of an Irradiated TRISO Fuel Particle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas M. Lillo; Isabella J. van Rooyen
2014-08-01
The distribution of silver, uranium and palladium in the silicon carbide (SiC) layer of an irradiated TRISO fuel particle was studied using samples extracted from the SiC layer by focused ion beam (FIB) techniques. Transmission electron microscopy in conjunction with energy dispersive x-ray spectroscopy was used to identify the presence of the specific elements of interest at grain boundaries, triple junctions and precipitates in the interior of SiC grains. Details on sample fabrication, errors associated with measurements of elemental migration distances, and the distances migrated by silver, palladium and uranium in the SiC layer of an irradiated TRISO particle from the AGR-1 program are reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond, J.W.
1988-01-01
Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine if data-compression codes could be utilized to provide message compression in a channel with up to a 0.10 bit error rate. The data-compression capabilities of codes were investigated by estimating the average number of bits per character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average numbers of characters decoded in error and of characters printed in error per bit error. Results were obtained by encoding four narrative files, which were resident on an IBM-PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through the use of comma-free code word assignments based on conditional probabilities of character occurrence.
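The error-propagation behavior of variable-length prefix codes can be demonstrated with a small experiment: build a Huffman code, flip one channel bit, and decode greedily. This is a generic illustration of how a single bit error desynchronizes a Huffman decoder, not the study's specific codes or files:

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman (prefix-free) code for the characters of text."""
    heap = [(n, i, ch) for i, (ch, n) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        n1, _, a = heapq.heappop(heap)
        n2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (n1 + n2, next_id, (a, b)))
        next_id += 1
    code = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            code[node] = prefix or "0"
    walk(heap[0][2], "")
    return code

def decode(bits, code):
    """Greedy prefix decode; a single bit error can desynchronize the
    parser and corrupt several following characters (error propagation)."""
    inv = {v: k for k, v in code.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return "".join(out)

msg = "this is a short narrative test message"
code = huffman_code(msg)
bits = "".join(code[c] for c in msg)
# Flip one bit near the start and observe downstream corruption
corrupted = ("1" if bits[0] == "0" else "0") + bits[1:]
garbled = decode(corrupted, code)
```

A comma-free code limits this damage by letting the decoder resynchronize at the next codeword boundary, which is the resilience property the study exploits.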
NASA Technical Reports Server (NTRS)
Ingels, F.; Schoggen, W. O.
1981-01-01
The various methods of high bit transition density encoding are presented, and their relative performance is compared with respect to error propagation characteristics, transition properties, and system constraints. A computer simulation of the system using the specific PN code recommended is included.
The statistical fluctuation study of quantum key distribution in means of uncertainty principle
NASA Astrophysics Data System (ADS)
Liu, Dunwei; An, Huiyao; Zhang, Xiaoyu; Shi, Xuemei
2018-03-01
Laser defects in emitting single photons, photon signal attenuation, and propagation of error have long caused serious difficulties in practical long-distance quantum key distribution (QKD) experiments. In this paper, we study the uncertainty principle in metrology and use this tool to analyze the statistical fluctuation of the number of received single photons, the yield of single photons, and the quantum bit error rate (QBER). After that we calculate the error between the measured value and the real value of every parameter, and consider the propagation of error among all the measured values. We paraphrase the Gottesman-Lo-Lütkenhaus-Preskill (GLLP) formula in consideration of those parameters and generate the QKD simulation result. In this study, the safe distribution distance increases with the coding photon length. When the coding photon length is N = 10^{11}, the safe distribution distance can reach almost 118 km, a lower bound on the safe transmission distance compared with the 127 km obtained without the uncertainty principle. Our study is thus in line with established theory, while making it more realistic.
Impact of device level faults in a digital avionic processor
NASA Technical Reports Server (NTRS)
Suk, Ho Kim
1989-01-01
This study describes an experimental analysis of the impact of gate and device-level faults in the processor of a Bendix BDX-930 flight control system. Via mixed mode simulation, faults were injected at the gate (stuck-at) and at the transistor levels, and their propagation through the chip to the output pins was measured. The results show that there is little correspondence between a stuck-at and a device-level fault model, as far as error activity or detection within a functional unit is concerned. In so far as error activity outside the injected unit and at the output pins is concerned, the stuck-at and device models track each other. The stuck-at model, however, overestimates, by over 100 percent, the probability of fault propagation to the output pins. An evaluation of the Mean Error Durations and the Mean Time Between Errors at the output pins shows that the stuck-at model significantly underestimates (by 62 percent) the impact of an internal chip fault on the output pins. Finally, the study also quantifies the impact of device faults by location, both internally and at the output pins.
Error propagation in energetic carrying capacity models
Pearse, Aaron T.; Stafford, Joshua D.
2014-01-01
Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Two case studies suggest variation in bias that, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
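Applying the foraging threshold at the patch level, as recommended, can be sketched as follows. Densities, areas, units, and the threshold are hypothetical numbers chosen only to show how patch-level and landscape-mean adjustments diverge:

```python
def usable_food_kg(patches, threshold_kg_per_ha):
    """Food available to foragers summed over patches, applying the
    foraging threshold at the patch level: density at or below the
    threshold is unprofitable to exploit and contributes nothing,
    patch by patch.  patches is a list of (density_kg_per_ha, area_ha)
    tuples; all values are hypothetical."""
    return sum(max(0.0, density - threshold_kg_per_ha) * area
               for density, area in patches)

# Two landscapes with the same mean density (50 kg/ha over 20 ha)
uniform = [(50.0, 10.0), (50.0, 10.0)]
patchy  = [(90.0, 10.0), (10.0, 10.0)]
# Subtracting the threshold from the landscape mean gives 600 kg for
# both, but the patch-level calculation gives 700 kg for the patchy
# landscape -- the bias the hypothetical example in the paper describes.
```

Because max(0, density − threshold) is convex, a landscape-mean adjustment systematically misestimates usable food whenever food is patchily distributed, and that error scales with the spatial extent of the plan.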
Shamp, Donald D.
2001-01-01
Over the past several decades investigators have extensively examined the 238U-234U-230Th systematics of a variety of geologic materials using alpha spectroscopy. Analytical uncertainty for 230Th by alpha spectroscopy has been limited to about 2% (2σ). The advantage of thermal ionization mass spectrometry (TIMS), introduced by Edwards and co-workers in the late 1980s, is the increased detectability of these isotopes by a factor of ~200 and a decrease in the uncertainty for 230Th to about 5‰ (2σ). This report is a procedural manual for using the USGS-Stanford Finnigan MAT 262 TIMS to collect and isolate uranium and thorium isotopic ratio data. Chemical separation of uranium and thorium from the sample media is accomplished using acid dissolution, followed by processing with anion exchange resins. The Finnigan MAT 262 thermal ionization mass spectrometer utilizes a surface ionization technique in which nitrates of uranium and thorium are placed on a source filament. Upon heating, positive ion emission occurs. The ions are then accelerated and focused into a beam which passes through a curved magnetic field, dispersing the ions by mass. Faraday cups and/or an ion counter capture the ions and allow for quantitative analysis of the various isotopes.
An algorithm for propagating the square-root covariance matrix in triangular form
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Choe, C. Y.
1976-01-01
A method for propagating the square root of the state error covariance matrix in lower triangular form is described. The algorithm can be combined with any triangular square-root measurement update algorithm to obtain a triangular square-root sequential estimation algorithm. The triangular square-root algorithm compares favorably with the conventional sequential estimation algorithm with regard to computation time.
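One standard way to realize such a propagation, sketched below, keeps the factor triangular by re-triangularizing with a QR factorization. This is a generic square-root technique under that assumption, not necessarily the exact algorithm of the paper:

```python
import numpy as np

def propagate_sqrt_cov(S, Phi):
    """Propagate a lower-triangular covariance square root S (P = S S^T)
    through a state transition Phi, returning a lower-triangular S' with
    S' S'^T = Phi P Phi^T (no process noise, for simplicity)."""
    A = Phi @ S
    # QR of A^T: A^T = Q R  =>  A A^T = R^T R, so R^T is a valid lower factor
    _, R = np.linalg.qr(A.T)
    S_new = R.T
    # Fix signs so the diagonal is non-negative (QR sign ambiguity);
    # flipping a column's sign leaves S' S'^T unchanged.
    signs = np.sign(np.diag(S_new))
    signs[signs == 0] = 1.0
    return S_new * signs
```

Propagating the triangular factor instead of the full covariance preserves symmetry and positive semi-definiteness by construction, which is the numerical advantage such square-root formulations offer over the conventional sequential algorithm.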
CADNA: a library for estimating round-off error propagation
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie
2008-06-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input, and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.
Program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 53 420
No. of bytes in distributed program, including test data, etc.: 566 495
Distribution format: tar.gz
Programming language: Fortran
Computer: PC running LINUX with an i686 or an ia64 processor; UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 4.14, 6.5, 20
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
References:
[1] The CADNA library, http://www.lip6.fr/cadna.
[2] J.-M. Chesneaux, L'arithmétique stochastique et le logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995.
[3] J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261.
[4] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
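The random-rounding idea behind Discrete Stochastic Arithmetic can be illustrated in a few lines. This is a toy sketch of the principle only, not CADNA's API: the perturbation model, function names, and the digit estimate are our own simplifications.

```python
import math
import random

def perturb(x, eps=2**-52):
    # Random-rounding model: apply one last-bit's worth of relative error
    # with a random sign, mimicking a random rounding mode.
    return x * (1.0 + random.choice((-1.0, 1.0)) * eps)

def harmonic_sum(n, perturbed=False):
    # Round-off accumulates at every operation, as in a real simulation code.
    s = 0.0
    for i in range(1, n + 1):
        s = s + 1.0 / i
        if perturbed:
            s = perturb(s)
    return s

def exact_significant_digits(samples):
    # Digits on which the randomly rounded runs agree.
    mean = sum(samples) / len(samples)
    spread = max(abs(x - mean) for x in samples)
    if spread == 0.0:
        return 15  # agreement to full double precision
    return max(0, int(math.log10(abs(mean) / spread)))

# Run the computation several times with random rounding, then estimate
# how many significant digits of the standard result are reliable.
runs = [harmonic_sum(10000, perturbed=True) for _ in range(3)]
digits = exact_significant_digits(runs)
```

Here the spread among runs stays tiny, so most digits of the sum are exact; a numerically unstable code would show a much larger spread and fewer common digits.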
Electron Beam Propagation Through a Magnetic Wiggler with Random Field Errors
1989-08-21
Another quantity of interest is the vector potential δA(z) associated with the field error δB(z). Defining the normalized vector potentials δa = eδA... then follows that the correlation of the normalized vector potential errors is given by ⟨δa_x(z_1)δa_x(z_2)⟩ = a_w k_w ∫ dz′ ∫ dz″ ⟨δB_x(z′)δB_x(z″)⟩ ... Throughout the following, terms of order O(z_1/z) will be neglected. Similarly, for the y-component of the normalized vector potential errors, one
Rigorous covariance propagation of geoid errors to geodetic MDT estimates
NASA Astrophysics Data System (ADS)
Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.
2012-04-01
The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, the ocean surface velocities, an important component of global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length-scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, for the geoid component a realistic statistical error estimate is available, while the error description of the altimetric component is still an open issue and is, if at all, attacked empirically. In this study we attempt to perform rigorous error propagation, based on the full gravity VCM, to the derived geostrophic surface velocities, thus also considering all correlations. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we will investigate the velocity errors resulting from the geoid component as a function of the harmonic degree, and the impact of using or not using covariances on the MDT errors and their correlations. When deriving an MDT, it is spectrally filtered to a certain maximum degree, which is usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering also acts on the geoid component, this filter process shall be consistently integrated into the covariance propagation, and its impact shall be quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.
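For any derived quantity that is linear in the spherical harmonic coefficients, the rigorous propagation described above is the standard covariance rule C_y = A C_x Aᵀ. A minimal numpy sketch with invented numbers (a 3-coefficient "field" and a 2-row linear operator, not real geoid data) shows why the off-diagonal covariances matter:

```python
import numpy as np

# Hypothetical variance-covariance matrix of three geoid coefficients
C_x = np.array([[4.0, 1.0, 0.0],
                [1.0, 9.0, 2.0],
                [0.0, 2.0, 1.0]])

# Linear operator mapping coefficients to two derived quantities
# (purely illustrative numbers)
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 1.0]])

# Rigorous covariance propagation, keeping all correlations: C_y = A C_x A^T
C_y = A @ C_x @ A.T

# Ignoring correlations (diagonal-only VCM) gives different error estimates
C_y_diag = A @ np.diag(np.diag(C_x)) @ A.T
```

Comparing `C_y` and `C_y_diag` quantifies the impact of using or not using the covariances, which is exactly the comparison the study performs for MDT and velocity errors.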
NASA Astrophysics Data System (ADS)
Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.
2016-12-01
Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance of individual variation as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty at scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
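The two error sources combine in a simple way: residual variation among individuals averages out over the n trees in a plot, while uncertainty in the mean is shared by (perfectly correlated across) all trees. A toy sketch with made-up numbers, not the study's data:

```python
import math

# Illustrative values only: spread among individual trees (SD) and
# uncertainty in the regression/mean estimate (SE).
sd_individual = 2.0
se_mean = 0.5

def plot_uncertainty(n_trees):
    """SE of a per-tree plot mean: the individual term shrinks as 1/sqrt(n),
    while the uncertainty-in-the-mean term does not shrink with n."""
    individual_term = sd_individual / math.sqrt(n_trees)
    mean_term = se_mean
    return math.hypot(individual_term, mean_term)  # quadrature sum

few = plot_uncertainty(4)    # individual term (1.0) dominates
many = plot_uncertainty(100) # mean term (0.5) dominates over 0.2
```

This reproduces the qualitative result above: with few trees the individual (prediction-interval) uncertainty dominates, while beyond a few tens of trees the confidence-interval term takes over.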
Scene Text Recognition using Similarity and a Lexicon with Sparse Belief Propagation
Weinman, Jerod J.; Learned-Miller, Erik; Hanson, Allen R.
2010-01-01
Scene text recognition (STR) is the recognition of text anywhere in the environment, such as signs and store fronts. Relative to document recognition, it is challenging because of font variability, minimal language context, and uncontrolled conditions. Much information available to solve this problem is frequently ignored or used sequentially. Similarity between character images is often overlooked as useful information. Because of language priors, a recognizer may assign different labels to identical characters. Directly comparing characters to each other, rather than only a model, helps ensure that similar instances receive the same label. Lexicons improve recognition accuracy but are used post hoc. We introduce a probabilistic model for STR that integrates similarity, language properties, and lexical decision. Inference is accelerated with sparse belief propagation, a bottom-up method for shortening messages by reducing the dependency between weakly supported hypotheses. By fusing information sources in one model, we eliminate unrecoverable errors that result from sequential processing, improving accuracy. In experimental results recognizing text from images of signs in outdoor scenes, incorporating similarity reduces character recognition error by 19%, the lexicon reduces word recognition error by 35%, and sparse belief propagation reduces the lexicon words considered by 99.9% with a 12X speedup and no loss in accuracy. PMID:19696446
Implementation of neural network for color properties of polycarbonates
NASA Astrophysics Data System (ADS)
Saeed, U.; Ahmad, S.; Alsadi, J.; Ross, D.; Rizvi, G.
2014-05-01
In the present paper, the applicability of artificial neural networks (ANN) is investigated for color properties of plastics. The neural networks toolbox of Matlab 6.5 is used to develop and test the ANN model on a personal computer. An optimal design is completed for 10, 12, 14, 16, 18, and 20 hidden neurons on a single hidden layer with five different algorithms: batch gradient descent (GD), batch variable learning rate (GDX), resilient back-propagation (RP), scaled conjugate gradient (SCG), and Levenberg-Marquardt (LM) in the feed-forward back-propagation neural network model. The training data for the ANN are obtained from experimental measurements. There were twenty-two inputs, including resins, additives, and pigments, while the three tristimulus color values L*, a*, and b* were used as the output layer. Statistical analysis in terms of root-mean-squared (RMS) error, absolute fraction of variance (R squared), and mean square error is used to investigate the performance of the ANN. The LM algorithm with fourteen neurons in the hidden layer of the feed-forward back-propagation ANN model showed the best results in the present study. The degree of accuracy of the ANN model in reducing errors proved acceptable in all statistical analyses, as shown in the results. It is concluded that ANN provides a feasible method for error reduction in specific color tristimulus values.
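The feed-forward back-propagation setup described above can be sketched compactly in numpy. This toy uses plain batch gradient descent only (the paper's best performer was LM); the synthetic data, learning rate, and everything except the 22-input/14-hidden/3-output shape are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the formulation data: 22 inputs (resins, additives,
# pigments) mapped to 3 outputs (L*, a*, b*). Real data came from experiments.
X = rng.random((40, 22))
Y = X @ rng.normal(size=(22, 3))

# One hidden layer with 14 neurons (the paper's best size), tanh activation
W1 = rng.normal(scale=0.1, size=(22, 14))
W2 = rng.normal(scale=0.1, size=(14, 3))
lr = 0.01
losses = []
for _ in range(300):
    H = np.tanh(X @ W1)            # hidden-layer activations
    P = H @ W2                     # predicted tristimulus values
    E = P - Y                      # output error
    losses.append(float(np.mean(E**2)))
    # Back-propagate the error through both layers (batch GD step)
    gW2 = H.T @ E / len(X)
    gH = (E @ W2.T) * (1.0 - H**2)  # tanh derivative
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2
```

The recorded mean-square-error trajectory (`losses`) is the same statistic the paper uses to compare the five training algorithms.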
Embedded Model Error Representation and Propagation in Climate Models
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.
2017-12-01
Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In particular, e.g. in climate models, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than added as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Besides, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.
Multiple description distributed image coding with side information for mobile wireless transmission
NASA Astrophysics Data System (ADS)
Wu, Min; Song, Daewon; Chen, Chang Wen
2005-03-01
Multiple description coding (MDC) is a source coding technique that involves coding the source information into multiple descriptions and then transmitting them over different channels in a packet network or error-prone wireless environment, to achieve graceful degradation if parts of the descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zero-tree image coding system for mobile wireless transmission. We provide two innovations to achieve excellent error resilience. First, when MDC is applied to wavelet subband based image coding, it is possible to introduce correlation between the descriptions in each subband. We use such correlation, together with a potentially error-corrupted description, as side information in the decoding, formulating MDC decoding as a Wyner-Ziv decoding problem. If part of the descriptions is lost but their correlation information is still available, the proposed Wyner-Ziv decoder can recover the lost description by using the correlation information and the error-corrupted description as side information. Secondly, within each description, a single-bitstream wavelet zero-tree coder is very vulnerable to channel errors: the first bit error may cause the decoder to discard all subsequent bits, whether or not they are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with a multiple wavelet tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to the parent-child relationship and then code them separately with the SPIHT algorithm to form multiple bitstreams. Such decomposition reduces error propagation and therefore improves the error-correcting capability of the Wyner-Ziv decoder. Experimental results show that the proposed scheme not only exhibits excellent error-resilient performance but also demonstrates graceful degradation with the packet loss rate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Chen; Wu, Hongfei; Li, Li
2014-09-02
We aim to understand the scale-dependent evolution of uranium bioreduction during a field experiment at a former uranium mill site near Rifle, Colorado. Acetate was injected to stimulate Fe-reducing bacteria (FeRB) and to immobilize uranium by reducing aqueous U(VI) to insoluble U(IV). Bicarbonate was coinjected in half of the domain to mobilize sorbed U(VI). We used reactive transport modeling to integrate hydraulic and geochemical data and to quantify rates at the grid block (0.25 m) and experimental field scale (tens of meters). Although local rates varied by orders of magnitude in conjunction with biostimulation fronts propagating downstream, field-scale rates were dominated by the orders-of-magnitude-higher rates at a few selected hot spots where Fe(III), U(VI), and FeRB were at their maxima in the vicinity of the injection wells. At particular locations, the timing of the hot moments with maximum rates corresponded inversely to their distance from the injection wells. Although bicarbonate injection enhanced local rates near the injection wells by a maximum of 39.4%, its effect at the field scale was limited to a maximum of 10.0%. We propose a rate-versus-measurement-length relationship (log R' = -0.63
Modeling Single-Event Transient Propagation in a SiGe BiCMOS Direct-Conversion Receiver
NASA Astrophysics Data System (ADS)
Ildefonso, Adrian; Song, Ickhyun; Tzintzarov, George N.; Fleetwood, Zachary E.; Lourenco, Nelson E.; Wachter, Mason T.; Cressler, John D.
2017-08-01
The propagation of single-event transient (SET) signals in a silicon-germanium direct-conversion receiver carrying modulated data is explored. A theoretical analysis of transient propagation, verified by simulation, is presented. A new methodology to characterize and quantify the impact of SETs in communication systems carrying modulated data is proposed. The proposed methodology uses a pulsed radiation source to induce distortions in the signal constellation. The error vector magnitude due to SETs can then be calculated to quantify errors. Two different modulation schemes were simulated: QPSK and 16-QAM. The distortions in the constellation diagram agree with the presented circuit theory. Furthermore, the proposed methodology was applied to evaluate the improvements in the SET response due to a known radiation-hardening-by-design (RHBD) technique, where the common-base device of the low-noise amplifier was operated in inverse mode. The proposed methodology can be a valid technique to determine the most sensitive parts of a system carrying modulated data.
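The error vector magnitude (EVM) metric used above to quantify SET-induced distortion has a standard definition: the RMS length of the error vectors between received and ideal constellation points, normalized by the RMS ideal amplitude. A minimal sketch with a hypothetical unit-energy QPSK constellation and an invented single-symbol distortion:

```python
import numpy as np

def evm_percent(received, ideal):
    # RMS error vector normalized by RMS ideal constellation amplitude
    received, ideal = np.asarray(received), np.asarray(ideal)
    err = received - ideal
    return 100.0 * np.sqrt(np.mean(np.abs(err)**2) / np.mean(np.abs(ideal)**2))

# Unit-energy QPSK constellation; a transient perturbs one symbol
ideal = np.array([1+1j, -1+1j, -1-1j, 1-1j]) / np.sqrt(2)
received = ideal.copy()
received[0] += 0.1 + 0.05j  # SET-like distortion (illustrative magnitude)

evm = evm_percent(received, ideal)
```

Averaging this statistic over many pulsed-laser or radiation events gives a single scalar figure of merit for comparing, e.g., the forward-mode and inverse-mode LNA configurations.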
NASA Astrophysics Data System (ADS)
Kemp, Z. D. C.
2018-04-01
Determining the phase of a wave from intensity measurements has many applications in fields such as electron microscopy, visible light optics, and medical imaging. Propagation based phase retrieval, where the phase is obtained from defocused images, has shown significant promise. There are, however, limitations in the accuracy of the retrieved phase arising from such methods. Sources of error include shot noise, image misalignment, and diffraction artifacts. We explore the use of artificial neural networks (ANNs) to improve the accuracy of propagation based phase retrieval algorithms applied to simulated intensity measurements. We employ a phase retrieval algorithm based on the transport-of-intensity equation to obtain the phase from simulated micrographs of procedurally generated specimens. We then train an ANN with pairs of retrieved and exact phases, and use the trained ANN to process a test set of retrieved phase maps. The total error in the phase is significantly reduced using this method. We also discuss a variety of potential extensions to this work.
Fuzzy Counter Propagation Neural Network Control for a Class of Nonlinear Dynamical Systems
Sakhre, Vandana; Jain, Sanjeev; Sapkal, Vilas S.; Agarwal, Dev P.
2015-01-01
A Fuzzy Counter Propagation Neural Network (FCPN) controller design is developed for a class of nonlinear dynamical systems. In this process, the weights connecting the instar and outstar, that is, the input-hidden and hidden-output layers, respectively, are adjusted by using Fuzzy Competitive Learning (FCL). The FCL paradigm adopts the principle of competitive learning to calculate the proposed Best Matched Node (BMN). This strategy offers robust control of nonlinear dynamical systems. FCPN is compared with existing networks such as the Dynamic Network (DN) and Back Propagation Network (BPN) on the basis of Mean Absolute Error (MAE), Mean Square Error (MSE), Best Fit Rate (BFR), and so forth. The results show that the proposed FCPN gives better results than DN and BPN. The effectiveness of the proposed FCPN algorithms is demonstrated through simulations of four nonlinear dynamical systems, multiple input and single output (MISO) systems, and a single input and single output (SISO) gas furnace Box-Jenkins time series data set. PMID:26366169
Geodesy by radio interferometry - Water vapor radiometry for estimation of the wet delay
NASA Technical Reports Server (NTRS)
Elgered, G.; Davis, J. L.; Herring, T. A.; Shapiro, I. I.
1991-01-01
An important source of error in VLBI estimates of baseline length is unmodeled variations of the refractivity of the neutral atmosphere along the propagation path of the radio signals. This paper presents and discusses the method of using data from a water vapor radiometer (WVR) to correct for the propagation delay caused by atmospheric water vapor, the major cause of these variations. Data from different WVRs are compared with estimated propagation delays obtained by Kalman filtering of the VLBI data themselves. The consequences of using either WVR data or Kalman filtering to correct for atmospheric propagation delay at the Onsala VLBI site are investigated by studying the repeatability of estimated baseline lengths from Onsala to several other sites. The repeatability obtained for baseline length estimates shows that the methods of water vapor radiometry and Kalman filtering offer comparable accuracies when applied to VLBI observations obtained in the climate of the Swedish west coast. For the most frequently measured baseline in this study, the use of WVR data yielded a 13 percent smaller weighted-root-mean-square (WRMS) scatter of the baseline length estimates compared to the use of a Kalman filter. It is also clear that the 'best' minimum elevation angle for VLBI observations depends on the accuracy of the determinations of the total propagation delay to be used, since the error in this delay increases with increasing air mass.
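The WRMS repeatability statistic used above is the weighted root-mean-square scatter of the estimates about their weighted mean, with weights 1/sigma². A short sketch with invented baseline numbers (not the Onsala data):

```python
import math

def wrms(values, sigmas):
    """Weighted RMS scatter about the weighted mean, weights 1/sigma^2,
    as used for baseline-length repeatability comparisons."""
    w = [1.0 / s**2 for s in sigmas]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    return math.sqrt(sum(wi * (v - mean)**2 for wi, v in zip(w, values)) / sum(w))

# Hypothetical baseline-length deviations (mm) and their formal errors (mm)
lengths = [10.0, 12.0, 9.0, 11.0]
errors = [1.0, 1.0, 2.0, 1.0]
scatter = wrms(lengths, errors)
```

Computing this scatter once with WVR-corrected solutions and once with Kalman-filtered solutions gives the kind of percentage comparison quoted in the abstract.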
Measuring contraction propagation and localizing pacemaker cells using high speed video microscopy
Akl, Tony J.; Nepiyushchikh, Zhanna V.; Gashev, Anatoliy A.; Zawieja, David C.; Coté, Gerard L.
2011-01-01
Previous studies have shown the ability of many lymphatic vessels to contract phasically to pump lymph. Every lymphangion can act like a heart with pacemaker sites that initiate the phasic contractions. The contractile wave propagates along the vessel to synchronize the contraction. However, determining the location of the pacemaker sites within these vessels has proven to be very difficult. A high speed video microscopy system with an automated algorithm to detect pacemaker location and calculate the propagation velocity, speed, duration, and frequency of the contractions is presented in this paper. Previous methods for determining the contractile wave propagation velocity manually were time consuming and subject to errors and potential bias. The presented algorithm is semiautomated giving objective results based on predefined criteria with the option of user intervention. The system was first tested on simulation images and then on images acquired from isolated microlymphatic mesenteric vessels. We recorded contraction propagation velocities around 10 mm∕s with a shortening speed of 20.4 to 27.1 μm∕s on average and a contraction frequency of 7.4 to 21.6 contractions∕min. The simulation results showed that the algorithm has no systematic error when compared to manual tracking. The system was used to determine the pacemaker location with a precision of 28 μm when using a frame rate of 300 frames per second. PMID:21361700
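The core of the velocity measurement above can be sketched simply: fit contraction onset time against axial position along the vessel, and take the inverse slope as the propagation speed, with the earliest-onset site marking the pacemaker. This is an illustrative reconstruction with invented onset times, not the paper's algorithm or data:

```python
import numpy as np

def propagation_velocity(positions_um, onset_times_s):
    # Fit onset time vs. axial position; the inverse slope is the propagation
    # speed. The earliest-onset site marks the pacemaker; the sign of the
    # slope gives the direction of the contractile wave.
    slope, _ = np.polyfit(positions_um, onset_times_s, 1)
    return 1.0 / slope

# Hypothetical onset delays at four sites 100 um apart for a wave traveling
# at ~10 mm/s (10,000 um/s), the order of magnitude reported in the study
x = np.array([0.0, 100.0, 200.0, 300.0])
t = x / 10000.0
v = propagation_velocity(x, t)
```

A regression over many sites, rather than a two-point difference, is what makes an automated estimate robust against the frame-timing noise that plagued manual tracking.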
2008-09-30
propagation effects by splitting apart the longer period surface waves from the shorter period, depth-sensitive Pnl waves. Problematic, or high-error, stations and paths were further analyzed to identify systematic errors associated with unknown sensor responses and ... frequency Pnl components and slower, longer period surface waves. All cut windows are fit simultaneously, allowing equal weighting of phases that may be
NASA Technical Reports Server (NTRS)
Hasler, A. F.; Rodgers, E. B.
1977-01-01
An advanced Man-Interactive image and data processing system (AOIPS) was developed to extract basic meteorological parameters from satellite data and to perform further analyses. The errors in the satellite derived cloud wind fields for tropical cyclones are investigated. The propagation of these errors through the AOIPS system and their effects on the analysis of horizontal divergence and relative vorticity are evaluated.
Cross Section Sensitivity and Propagated Errors in HZE Exposures
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Wilson, John W.; Blatnig, Steve R.; Qualls, Garry D.; Badavi, Francis F.; Cucinotta, Francis A.
2005-01-01
It has long been recognized that galactic cosmic rays are of such high energy that they tend to pass through available shielding materials, resulting in exposure of astronauts and equipment within space vehicles and habitats. Any protection provided by shielding materials results not so much from stopping such particles as from changing their physical character in interactions with shielding-material nuclei, forming, hopefully, less dangerous species. Clearly, the fidelity of the nuclear cross sections is essential to correct specification of shield design, and sensitivity to cross-section error is important in guiding experimental validation of cross-section models and databases. We examine the Boltzmann transport equation, which is used to calculate dose equivalent during solar minimum, in units of cSv/yr, associated with various depths of shielding materials. The dose equivalent is a weighted sum of contributions from neutrons, protons, light ions, medium ions, and heavy ions. We investigate the sensitivity of dose equivalent calculations to errors in nuclear fragmentation cross sections. We perform this error analysis for all possible projectile-fragment combinations (14,365 such combinations) to estimate the sensitivity of the shielding calculations to errors in the nuclear fragmentation cross sections. Numerical differentiation with respect to the cross sections will be evaluated for a broad class of materials including polyethylene, aluminum, and copper. We will identify the most important cross sections for further experimental study and evaluate their impact on propagated errors in shielding estimates.
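The numerical-differentiation step described above can be sketched with a toy stand-in for the transport calculation; the real model solves the Boltzmann equation, whereas here the dose is just a weighted sum with invented weights, so only the sensitivity mechanics are illustrated:

```python
def dose_equivalent(cross_sections, weights):
    # Toy stand-in for the Boltzmann-transport dose calculation: dose as a
    # weighted combination of fragmentation cross sections (illustrative).
    return sum(w * x for w, x in zip(weights, cross_sections))

def sensitivity(f, xs, i, h=1e-6):
    # Forward-difference numerical derivative of the dose with respect to
    # cross section i, the quantity evaluated for all 14,365 combinations.
    bumped = list(xs)
    bumped[i] += h
    return (f(bumped) - f(xs)) / h

weights = [2.0, 0.5, 1.5]        # hypothetical importance of three channels
xs = [1.0, 1.0, 1.0]             # nominal cross sections (arbitrary units)
f = lambda v: dose_equivalent(v, weights)
sens = [sensitivity(f, xs, i) for i in range(3)]
```

Ranking the entries of `sens` is the toy analogue of identifying which cross sections most deserve further experimental study.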
Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts
NASA Astrophysics Data System (ADS)
Gingrich, Mark
Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100 member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalence, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.
NASA Model of "Threat and Error" in Pediatric Cardiac Surgery: Patterns of Error Chains.
Hickey, Edward; Pham-Hung, Eric; Nosikova, Yaroslavna; Halvorsen, Fredrik; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Van Arsdell, Glen
2017-04-01
We introduced the National Aeronautics and Space Administration threat-and-error model to our surgical unit. All admissions are considered flights, which should pass through stepwise deescalations in risk during surgical recovery. We hypothesized that errors significantly influence risk deescalation and contribute to poor outcomes. Patient flights (524) were tracked in real time for threats, errors, and unintended states by full-time performance personnel. Expected risk deescalation was weaning from mechanical support, sternal closure, extubation, intensive care unit (ICU) discharge, and discharge home. Data were accrued from clinical charts, bedside data, reporting mechanisms, and staff interviews. Infographics of flights were openly discussed weekly for consensus. In 12% (64 of 524) of flights, the child failed to deescalate sequentially through expected risk levels; unintended increments instead occurred. Failed deescalations were highly associated with errors (426; 257 flights; p < 0.0001). Consequential errors (263; 173 flights) were associated with a 29% rate of failed deescalation versus 4% in flights with no consequential error (p < 0.0001). The most dangerous errors were apical errors, typically (84%) occurring in the operating room, which caused chains of propagating unintended states (n = 110): these had a 43% (47 of 110) rate of failed deescalation (versus 4%; p < 0.0001). Chains of unintended state were often (46%) amplified by additional (up to 7) errors in the ICU that would worsen clinical deviation. Overall, failed deescalations in risk were extremely closely linked to brain injury (n = 13; p < 0.0001) or death (n = 7; p < 0.0001). Deaths and brain injury after pediatric cardiac surgery almost always occur from propagating error chains that originate in the operating room and are often amplified by additional ICU errors. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-01-01
A sophisticated non-linear multiparameter fitting program has been used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the Chi-Squared Matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 sec.
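The key idea above, treating the standard masses as fit parameters weighted by their own errors alongside the calibration parameters, is an errors-in-variables fit. A minimal numpy sketch of that structure (not VA02A, and with invented masses, responses, and error magnitudes), alternating a weighted linear fit for the calibration line with a closed-form update of each mass:

```python
import numpy as np

# Hypothetical gravimetric standards (mg) with 0.2% mass uncertainty, and
# analyzer responses (counts) with a constant measurement error (illustrative)
m0 = np.array([0.1, 0.3, 0.5, 0.8, 1.0])
sigma_m = 0.002 * m0
y = np.array([10.2, 30.5, 50.1, 80.4, 100.3])
sigma_y = np.full_like(y, 0.3)

# Fit y = a + b*m, with the masses m themselves adjustable, minimizing
# chi^2 = sum(((y - a - b*m)/sigma_y)^2) + sum(((m - m0)/sigma_m)^2)
m = m0.copy()
for _ in range(50):
    # Step 1: weighted linear least squares for (a, b) at the current masses
    W = 1.0 / sigma_y**2
    A = np.vstack([np.ones_like(m), m]).T
    a, b = np.linalg.solve(A.T @ (W[:, None] * A), A.T @ (W * y))
    # Step 2: closed-form mass update balancing system and mass errors
    m = (b * (y - a) / sigma_y**2 + m0 / sigma_m**2) / (b**2 / sigma_y**2 + 1.0 / sigma_m**2)
```

Because the mass errors are tiny (0.2%), the fitted masses move only slightly from the gravimetric values, but the resulting calibration-parameter errors now consistently include both error sources.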
NASA Astrophysics Data System (ADS)
Brown, L. D.; Abdulaziz, R.; Tjaden, B.; Inman, D.; Brett, D. J. L.; Shearing, P. R.
2016-11-01
Reprocessing of spent nuclear fuels using molten salt media is an attractive alternative to liquid-liquid extraction techniques. Pyroelectrochemical processing utilizes direct, selective, electrochemical reduction of uranium dioxide, followed by selective electroplating of uranium metal. Thermodynamic prediction of the electrochemical reduction of UO2 to U in LiCl-KCl eutectic has been shown to be a function of the oxide ion activity. The pO2- of the salt may be affected by the microstructure of the UO2 electrode. A uranium dioxide filled "micro-bucket" electrode has been partially electroreduced to uranium metal in molten lithium chloride-potassium chloride eutectic. This partial electroreduction resulted in two distinct microstructures, a dense UO2 region and a porous U metal structure, which were characterised by energy dispersive X-ray spectroscopy. Focused ion beam tomography was performed on five regions of this electrode, which revealed an overall porosity ranging from 17.36% at the outer edge to 3.91% towards the centre, commensurate with the expected extent of reaction in each location. The pore connectivity was also seen to decrease from 88.32% to 17.86% across the same regions, and the tortuosity through the sample was modelled along the axis of propagation of the electroreduction, where it was seen to increase from a value of 4.42 to a value of infinity (disconnected pores). These microstructural characteristics could impede the transport of O2- ions, changing the local pO2- and ultimately preventing the electroreduction.
The use of propagation path corrections to improve regional seismic event location in western China
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steck, L.K.; Cogbill, A.H.; Velasco, A.A.
1999-03-01
In an effort to improve the ability to locate seismic events in western China using only regional data, the authors have developed empirical propagation path corrections (PPCs) and applied such corrections using both traditional location routines as well as a nonlinear grid search method. Thus far, the authors have concentrated on corrections to observed P arrival times for shallow events using travel-time observations available from the USGS EDRs, the ISC catalogs, their own travel-time picks from regional data, and data from other catalogs. They relocate events with the algorithm of Bratt and Bache (1988) from a region encompassing China. For individual stations having sufficient data, they produce a map of the regional travel-time residuals from all well-located teleseismic events. From these maps, interpolated PPC surfaces have been constructed using both surface fitting under tension and modified Bayesian kriging. The latter method offers the advantage of providing well-behaved interpolants, but requires that the authors have adequate error estimates associated with the travel-time residuals. To improve error estimates for kriging and event location, they separate measurement error from modeling error. The modeling error is defined as the travel-time variance of a particular model as a function of distance, while the measurement error is defined as the picking error associated with each phase. They estimate measurement errors for arrivals from the EDRs based on roundoff or truncation, and use signal-to-noise ratios for the travel-time picks from the waveform data set.
NASA Astrophysics Data System (ADS)
Gao, X.; Li, T.; Zhang, X.; Geng, X.
2018-04-01
In this paper, we propose a stochastic model of InSAR height measurement that accounts for the interferometric geometry. The model directly describes the relationship between baseline error and height measurement error. A simulation analysis using TanDEM-X parameters was then implemented to quantitatively evaluate the influence of baseline error on height measurement. Furthermore, a full emulation validation of the InSAR stochastic model was performed on the basis of the SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation behavior of InSAR height measurement were fully evaluated.
Radio propagation through solar and other extraterrestrial ionized media
NASA Technical Reports Server (NTRS)
Smith, E. K.; Edelson, R. E.
1980-01-01
The present S- and X-band communications needs in deep space are addressed to illustrate the aspects which are affected by propagation through extraterrestrial plasmas. The magnitude, critical threshold, and frequency dependence of some eight propagation effects for an S-band propagation path passing within 4 solar radii of the Sun are described. The theory and observation of propagation in extraterrestrial plasmas are discussed and the various plasma states along a near solar propagation path are illustrated. Classical magnetoionic theory (cold anisotropic plasma) is examined for its applicability to the path in question. The characteristics of the plasma states found along the path are summarized and the errors in some of the standard approximations are indicated. Models of extraterrestrial plasmas are included. Modeling the electron density in the solar corona and solar wind is emphasized, but some cursory information on the terrestrial planets plus Jupiter is included.
Latin hypercube approach to estimate uncertainty in ground water vulnerability
Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.
2007-01-01
A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. © 2007 National Ground Water Association.
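The coupled sampling of model error and data error described above can be sketched in Python. This is a minimal illustration, not the study's implementation: the logistic coefficients, their standard errors, and the input uncertainties below are invented for the example.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)

def latin_hypercube(n_samples, n_dims, rng):
    """One sample per equal-probability stratum in each dimension, shuffled."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        u[:, j] = rng.permutation(u[:, j])
    return u

# Hypothetical 2-predictor logistic model of vulnerability; coefficient
# means/SDs ("model error") and input values/SDs ("data error") are assumed.
coef_mean = np.array([-1.0, 0.8, 0.5])   # intercept, b1, b2
coef_sd   = np.array([0.30, 0.20, 0.15])
x_mean    = np.array([1.2, 0.7])         # explanatory variables at one cell
x_sd      = np.array([0.10, 0.20])

n = 1000
u = latin_hypercube(n, 5, rng)
z = np.vectorize(NormalDist().inv_cdf)(u)   # uniform -> standard normal

coefs = coef_mean + coef_sd * z[:, :3]      # sampled model error
xs    = x_mean + x_sd * z[:, 3:]            # sampled data error
logit = coefs[:, 0] + coefs[:, 1] * xs[:, 0] + coefs[:, 2] * xs[:, 1]
p = 1.0 / (1.0 + np.exp(-logit))            # vulnerability probabilities

p_lo, p_hi = np.percentile(p, [5, 95])      # 90% prediction interval
```

The spread of `p` across draws is the prediction uncertainty for that grid cell; mapping `p_hi - p_lo` over all cells gives the spatial deconstruction the abstract describes.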
Markstrom, Carol A; Charley, Perry H
2003-01-01
Disasters can be defined as catastrophic events that challenge the normal range of human coping ability. The technological/human-caused disaster, a classification of interest in this article, is attributable to human error or misjudgment. Lower socioeconomic status and race intersect in the heightened risk for technological/human-caused disasters among people of color. The experience of the Navajo with the uranium industry is argued to specifically be this type of a disaster with associated long-standing psychological impacts. The history of the Navajo with uranium mining and milling is reviewed with a discussion of the arduous efforts for compensation. The psychological impacts of this long-standing disaster among the Navajo are organized around major themes of: (a) human losses and bereavement, (b) environmental losses and contamination, (c) feelings of betrayal by government and mining and milling companies, (d) fears about current and future effects, (e) prolonged duration of psychological effects, (f) anxiety and depression, and (g) complicating factors of poverty and racism. The paper concludes with suggestions for culturally-appropriate education and intervention.
NASA Astrophysics Data System (ADS)
Griffiths, Trevor R.; Volkovich, Vladimir A.
An extensive review of the literature on the high temperature reactions (both in melts and in the solid state) of uranium oxides (UO₂, U₃O₈ and UO₃) resulting in the formation of insoluble alkali metal (Li to Cs) uranates is presented. Their uranate(VI) and uranate(V) compounds are examined, together with mixed and oxygen-deficient uranates. The reactions of uranium oxides with carbonates, oxides, per- and superoxides, chlorides, sulfates, nitrates and nitrites under both oxidising and non-oxidising conditions are critically examined and systematised, and the established compositions of a range of uranate(VI) and (V) compounds formed are discussed. Alkali metal uranates(VI) are examined in detail and their structural, physical, thermodynamic and spectroscopic properties considered. Chemical properties of alkali metal uranates(VI), including various methods for their reduction, are also reported. Errors in the current theoretical treatment of uranate(VI) spectra are identified and the need to develop routes for the preparation of single crystals is stressed.
Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations
NASA Astrophysics Data System (ADS)
Wyszkowska, Patrycja
2017-12-01
The determination of the accuracy of functions of measured or adjusted values can be a problem in geodetic computations. The general law of covariance propagation, or, for uncorrelated observations, the law of variance propagation (the Gaussian formula), is commonly used for that purpose. That approach is theoretically justified for linear functions. For non-linear functions, a first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of this study is to determine the applicability of the general variance propagation law to the non-linear functions used in basic geodetic computations. The paper presents the errors that result from neglecting the higher-order terms and determines the range over which such simplification is acceptable. The basis of the analysis is a comparison of the results obtained by the law of propagation of variance and a probabilistic approach, namely Monte Carlo simulation. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances derived from Cartesian coordinates, and height differences in trigonometric and geometric levelling. The simulations and the analysis of the results confirm that the general law of variance propagation can be applied in basic geodetic computations even when the functions are non-linear, provided that the accuracy of the observations is not too low. Generally, this is not a problem with present geodetic instruments.
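The comparison at the heart of the study, first-order variance propagation against Monte Carlo simulation for a non-linear function, can be illustrated with a minimal sketch for the distance between two points with uncertain coordinates. All numerical values are assumptions for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two survey points with independent 1 cm coordinate uncertainties (assumed).
x1, y1, x2, y2 = 100.0, 200.0, 400.0, 600.0
s = 0.01  # standard deviation of every coordinate, metres

dx, dy = x2 - x1, y2 - y1
d = float(np.hypot(dx, dy))  # 500 m

# First-order (Gaussian) propagation: each of the four coordinates
# contributes through the square of its partial derivative.
sigma_lin = np.sqrt(2 * (dx / d) ** 2 * s**2 + 2 * (dy / d) ** 2 * s**2)

# Monte Carlo check: perturb all four coordinates and recompute the distance.
n = 200_000
dists = np.hypot(
    (x2 + s * rng.standard_normal(n)) - (x1 + s * rng.standard_normal(n)),
    (y2 + s * rng.standard_normal(n)) - (y1 + s * rng.standard_normal(n)),
)
sigma_mc = dists.std()
```

With centimetre-level observation errors the two estimates agree to well under a percent, matching the paper's conclusion that the linearized law suffices when observation accuracy is not too low.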
Deviation diagnosis and analysis of hull flat block assembly based on a state space model
NASA Astrophysics Data System (ADS)
Zhang, Zhiying; Dai, Yinfang; Li, Zhen
2012-09-01
Dimensional control is one of the most important challenges in the shipbuilding industry. In order to predict assembly dimensional variation in hull flat block construction, this paper presents a variation stream model based on state space which can be further applied to accuracy control in shipbuilding. Part accumulative error, locating error, and welding deformation were taken into consideration in this model, and variation propagation mechanisms and the accumulation rule in the assembly process were analyzed. Then, a model was developed to describe the variation propagation throughout the assembly process. Finally, an example of flat block construction from an actual shipyard was given. The result shows that this method is effective and useful.
Skylab water balance error analysis
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1977-01-01
Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
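The variance decomposition described above can be sketched numerically. The per-term variances and the weak correlation below are hypothetical, chosen only to reproduce the qualitative finding (body-mass errors dominate; covariance terms contribute under 10%).

```python
# Hypothetical one-sigma daily errors (ml) for three water-balance terms.
# Only the structure mirrors the analysis: the balance error variance is the
# sum of term variances plus twice the covariances between terms.
var = {"intake": 40.0**2, "urine": 30.0**2, "mass_change": 150.0**2}
cov_urine_mass = 0.05 * var["mass_change"]   # assumed weak correlation

var_independent = sum(var.values())
var_total = var_independent + 2.0 * cov_urine_mass   # variance of the balance
cov_share = 2.0 * cov_urine_mass / var_total         # covariance contribution
mass_share = var["mass_change"] / var_total          # body-mass contribution
```

With these assumed numbers the covariance terms contribute about 8% of the total error variance and body-mass changes over 80%, the same pattern the Skylab analysis reports.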
NASA Astrophysics Data System (ADS)
Ramanjaneyulu, P. S.; Sayi, Y. S.; Ramakumar, K. L.
2008-08-01
Quantification of boron in diverse materials of relevance in nuclear technology is essential in view of its high thermal neutron absorption cross section. A simple and sensitive method has been developed for the determination of boron in uranium-aluminum-silicon alloy, based on leaching of boron with 6 M HCl and H₂O₂, its selective separation by solvent extraction with 2-ethyl-1,3-hexanediol, and quantification by spectrophotometry using curcumin. The method has been evaluated by the standard addition method and validated by inductively coupled plasma atomic emission spectroscopy. The relative standard deviation and absolute detection limit of the method are 3.0% (at the 1σ level) and 12 ng, respectively. All possible sources of uncertainty in the methodology have been individually assessed, following International Organization for Standardization guidelines. The combined uncertainty is calculated employing uncertainty propagation formulae. The expanded uncertainty in the measurement at the 95% confidence level (coverage factor 2) is 8.840%.
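The combination of uncertainty components can be sketched as follows. The individual component values are hypothetical; only the root-sum-square combination of independent relative uncertainties and the coverage factor k = 2 follow the ISO (GUM-style) procedure the abstract describes.

```python
import math

# Hypothetical relative standard uncertainties (%) for steps of the analysis.
components = {
    "leaching": 2.0,
    "solvent_extraction": 2.5,
    "spectrophotometry": 3.0,
    "calibration": 1.0,
}

# Combined standard uncertainty: root sum of squares of independent components.
u_c = math.sqrt(sum(u**2 for u in components.values()))

# Expanded uncertainty at ~95% confidence, coverage factor k = 2.
U = 2.0 * u_c
```

For these assumed components the combined standard uncertainty is 4.5% and the expanded uncertainty 9.0%, the same order as the 8.840% reported.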
NASA Astrophysics Data System (ADS)
Roman, D. R.; Smith, D. A.
2017-12-01
In 2022, the National Geodetic Survey will replace all three NAD 83 reference frames with four new terrestrial reference frames. Each frame will be named after a tectonic plate (North American, Pacific, Caribbean, and Mariana), and each will be related to the IGS frame through three Euler pole parameters (EPPs). This talk will focus on three main areas of error propagation when defining coordinates in these four frames: (1) use of the small-angle approximation to relate true rotation about an Euler pole to small rotations about three Cartesian axes, (2) the current state of the art in determining the Euler poles of these four plates, and (3) the combination of both IGS Cartesian coordinate uncertainties and EPP uncertainties into coordinate uncertainties in the four new frames. Discussion will also include recent efforts at improving the Euler poles for these frames and the expected dates when errors in the EPPs will cause an unacceptable level of uncertainty in the four new terrestrial reference frames.
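Point (1) can be checked numerically by comparing the exact rotation about an Euler pole (Rodrigues' formula) with its small-angle linearization. The pole direction, rotation rate, and time span below are illustrative assumptions, not NGS values.

```python
import numpy as np

def skew(axis):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -axis[2], axis[1]],
                     [axis[2], 0.0, -axis[0]],
                     [-axis[1], axis[0], 0.0]])

def rodrigues(axis, angle):
    """Exact rotation matrix about a unit axis (Rodrigues' formula)."""
    K = skew(axis)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# Hypothetical plate motion: ~0.2 deg/Myr about an assumed pole, over 50 years.
axis = np.array([0.3, -0.5, 0.81])
axis /= np.linalg.norm(axis)
angle = np.deg2rad(0.2e-6 * 50.0)          # accumulated rotation, radians

R_exact = rodrigues(axis, angle)
R_small = np.eye(3) + angle * skew(axis)   # small-angle approximation

x = np.array([6.378e6, 0.0, 0.0])          # an Earth-radius position vector (m)
disp = np.linalg.norm(R_exact @ x - x)         # actual plate displacement
err = np.linalg.norm((R_exact - R_small) @ x)  # approximation error
```

Over decades the plate displacement is of order a metre while the small-angle error is sub-micrometre, which is why the approximation is standard; the error grows quadratically with the accumulated angle.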
NASA Astrophysics Data System (ADS)
Liu, Jianjun; Kan, Jianquan
2018-04-01
In this paper, a new method for identifying genetically modified material from terahertz spectra is proposed, using a support vector machine (SVM) combined with affinity propagation clustering. The algorithm uses affinity propagation clustering to cluster and label unlabeled training samples, and the SVM training data are continuously updated during the iterative process. Because building the identification model does not require manually labeled training samples, the error caused by human labeling is reduced and the identification accuracy of the model is greatly improved.
Numerical study of signal propagation in corrugated coaxial cables
Li, Jichun; Machorro, Eric A.; Shields, Sidney
2017-01-01
This article focuses on high-fidelity modeling of signal propagation in corrugated coaxial cables. Taking advantage of the axisymmetry, the authors reduce the 3-D problem to a 2-D problem by solving time-dependent Maxwell's equations in cylindrical coordinates. They then develop a nodal discontinuous Galerkin method for solving the model equations and prove stability and error estimates for the semi-discrete scheme. The numerical results demonstrate that the algorithm not only converges as the theoretical analysis predicts, but is also very effective for a variety of signal propagation problems in practical corrugated coaxial cables.
Kevin Schaefer; Christopher R. Schwalm; Chris Williams; M. Altaf Arain; Alan Barr; Jing M. Chen; Kenneth J. Davis; Dimitre Dimitrov; Timothy W. Hilton; David Y. Hollinger; Elyn Humphreys; Benjamin Poulter; Brett M. Raczka; Andrew D. Richardson; Alok Sahoo; Peter Thornton; Rodrigo Vargas; Hans Verbeeck; Ryan Anderson; Ian Baker; T. Andrew Black; Paul Bolstad; Jiquan Chen; Peter S. Curtis; Ankur R. Desai; Michael Dietze; Danilo Dragoni; Christopher Gough; Robert F. Grant; Lianhong Gu; Atul Jain; Chris Kucharik; Beverly Law; Shuguang Liu; Erandathie Lokipitiya; Hank A. Margolis; Roser Matamala; J. Harry McCaughey; Russ Monson; J. William Munger; Walter Oechel; Changhui Peng; David T. Price; Dan Ricciuto; William J. Riley; Nigel Roulet; Hanqin Tian; Christina Tonitto; Margaret Torn; Ensheng Weng; Xiaolu Zhou
2012-01-01
Accurately simulating gross primary productivity (GPP) in terrestrial ecosystem models is critical because errors in simulated GPP propagate through the model to introduce additional errors in simulated biomass and other fluxes. We evaluated simulated, daily average GPP from 26 models against estimated GPP at 39 eddy covariance flux tower sites across the United States...
An Evaluation of the Measurement Requirements for an In-Situ Wake Vortex Detection System
NASA Technical Reports Server (NTRS)
Fuhrmann, Henri D.; Stewart, Eric C.
1996-01-01
Results of a numerical simulation are presented to determine the feasibility of estimating the location and strength of a wake vortex from imperfect in-situ measurements. These estimates could be used to provide information to a pilot on how to avoid a hazardous wake vortex encounter. An iterative algorithm based on the method of secants was used to solve the four simultaneous equations describing the two-dimensional flow field around a pair of parallel counter-rotating vortices of equal and constant strength. The flow field information used by the algorithm could be derived from measurements from flow angle sensors mounted on the wing-tip of the detecting aircraft and an inertial navigation system. The study determined the propagated errors in the estimated location and strength of the vortex which resulted from random errors added to theoretically perfect measurements. The results are summarized in a series of charts and a table which make it possible to estimate these propagated errors for many practical situations. The situations include several generator-detector airplane combinations, different distances between the vortex and the detector airplane, as well as different levels of total measurement error.
Helmholtz and parabolic equation solutions to a benchmark problem in ocean acoustics.
Larsson, Elisabeth; Abrahamsson, Leif
2003-05-01
The Helmholtz equation (HE) describes wave propagation in applications such as acoustics and electromagnetics. For realistic problems, solving the HE is often too expensive. Instead, approximations like the parabolic wave equation (PE) are used. For low-frequency shallow-water environments, one persistent problem is to assess the accuracy of the PE model. In this work, a recently developed HE solver that can handle a smoothly varying bathymetry, variable material properties, and layered materials, is used for an investigation of the errors in PE solutions. In the HE solver, a preconditioned Krylov subspace method is applied to the discretized equations. The preconditioner combines domain decomposition and fast transform techniques. A benchmark problem with upslope-downslope propagation over a penetrable lossy seamount is solved. The numerical experiments show that, for the same bathymetry, a soft and slow bottom gives very similar HE and PE solutions, whereas the PE model is far from accurate for a hard and fast bottom. A first attempt to estimate the error is made by computing the relative deviation from the energy balance for the PE solution. This measure gives an indication of the magnitude of the error, but cannot be used as a strict error bound.
Integrated analysis of error detection and recovery
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1985-01-01
An integrated modeling and analysis of error detection and recovery is presented. When fault latency and/or error latency exist, the system may suffer from multiple faults or error propagations which seriously deteriorate the fault-tolerant capability. Several detection models that enable analysis of the effect of detection mechanisms on the subsequent error handling operations and the overall system reliability were developed. Following detection of the faulty unit and reconfiguration of the system, the contaminated processes or tasks have to be recovered. The strategies of error recovery employed depend on the detection mechanisms and the available redundancy. Several recovery methods including the rollback recovery are considered. The recovery overhead is evaluated as an index of the capabilities of the detection and reconfiguration mechanisms.
A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection
NASA Astrophysics Data System (ADS)
Ju, Kuanyu; Xiong, Hongkai
2014-11-01
To compensate for the deficit of 3D content, 2D-to-3D video conversion has recently attracted more attention from both industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the depth of non-key-frames from key-frames, is more desirable owing to its advantage of balancing labor cost and 3D effects. The location of key-frames plays a role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to keep temporal continuity more reliable and reduce the depth propagation errors caused by occlusion. The potential key-frames are localized in terms of clustered color variation and motion intensity. The distance of the key-frame interval is also taken into account to keep the accumulated propagation errors under control and guarantee minimal user interaction. Once their depth maps are aligned with user interaction, the non-key-frame depth maps are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom in/out effects, a bi-directional depth propagation scheme is adopted in which a non-key frame is interpolated from two adjacent key frames. The experimental results show that the proposed scheme performs better than existing 2D-to-3D schemes with a fixed key-frame interval.
NASA Astrophysics Data System (ADS)
González, Pablo J.; Fernández, José
2011-10-01
Interferometric Synthetic Aperture Radar (InSAR) is a reliable technique for measuring crustal deformation. However, despite its long application in geophysical problems, its error estimation has been largely overlooked. Currently, the largest problem with InSAR is still the atmospheric propagation errors, which is why multitemporal interferometric techniques have been successfully developed using a series of interferograms. However, neither of the standard multitemporal interferometric techniques, namely PS or SB (Persistent Scatterers and Small Baselines, respectively), provides an estimate of its precision. Here, we present a method to compute reliable estimates of the precision of the deformation time series. We implement it for the SB multitemporal interferometric technique (a favorable technique for natural terrains, the most usual target of geophysical applications). The method uses a properly weighted scheme that allows us to compute estimates for all interferogram pixels, enhanced by a Monte Carlo resampling technique that properly propagates the interferogram errors (variance-covariances) into the unknown parameters (estimated errors for the displacements). We apply the multitemporal error estimation method to Lanzarote Island (Canary Islands), where no active magmatic activity has been reported in recent decades. We detect deformation around Timanfaya volcano (lengthening of the line of sight, i.e., subsidence), where the last eruption occurred in 1730-1736. Deformation closely follows the surface temperature anomalies, indicating that magma crystallization (cooling and contraction) of the 300-year-old shallow magmatic body under Timanfaya volcano is still ongoing.
NASA Technical Reports Server (NTRS)
Smith, G. A.
1975-01-01
The attitude of a spacecraft is determined by specifying independent parameters which relate the spacecraft axes to an inertial coordinate system. Sensors which measure angles between spin axis and other vectors directed to objects or fields external to the spacecraft are discussed. For the spin-stabilized spacecraft considered, the spin axis is constant over at least an orbit, but separate solutions based on sensor angle measurements are different due to propagation of errors. Sensor-angle solution methods are described which minimize the propagated errors by making use of least squares techniques over many sensor angle measurements and by solving explicitly (in closed form) for the spin axis coordinates. These methods are compared with star observation solutions to determine if satisfactory accuracy is obtained by each method.
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables.
This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
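The Monte Carlo scheme with correlated interpolation errors can be sketched for a single grid cell. The kriging SDs echo the reported seasonal averages, but the input means, the error correlations, and the PET surrogate function are assumptions for illustration, not the study's model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Kriged input estimates and kriging SDs for one grid cell; SDs echo the
# reported averages (2.6 degC, 8.7% RH, 0.38 m/s), means are assumed.
mean = np.array([12.0, 55.0, 3.0])        # temperature, RH, wind
sd   = np.array([2.6, 8.7, 0.38])
corr = np.array([[ 1.0, -0.4,  0.1],      # assumed error correlations
                 [-0.4,  1.0, -0.2],
                 [ 0.1, -0.2,  1.0]])
L = np.linalg.cholesky(corr * np.outer(sd, sd))

n = 100                                   # 100 Monte Carlo draws, as in the study
draws = mean + rng.standard_normal((n, 3)) @ L.T   # correlated input errors

def pet(t, rh, w):
    """Toy PET surrogate (not the study's model): warmer, drier, windier -> more PET."""
    return np.maximum(0.0, 0.3 * t * (1.0 - rh / 100.0) * (1.0 + 0.2 * w))

vals = pet(draws[:, 0], draws[:, 1], draws[:, 2])
cv = vals.std() / vals.mean()             # coefficient of variation for this cell
```

Repeating this at every grid point, with each point's own kriging SDs, yields the maps of PET means and CVs the abstract describes; the Cholesky factor is what carries the correlations among the three interpolation errors into the draws.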
Analysis of the Effect of UT1-UTC on High Precision Orbit Propagation
NASA Astrophysics Data System (ADS)
Shin, Dongseok; Kwak, Sunghee; Kim, Tag-Gon
1999-12-01
As the spatial resolution of remote sensing satellites becomes higher, very accurate determination of the position of a LEO (Low Earth Orbit) satellite is in demand more than ever. Non-symmetric Earth gravity is the major perturbation force on LEO satellites. Since the orbit propagation is performed in the celestial frame while Earth gravity is defined in the terrestrial frame, it is required to convert the coordinates of the satellite from one frame to the other accurately. Unless the coordinate conversion between the two frames is performed accurately, the orbit propagation calculates an incorrect Earth gravitational force at a specific time instant and, hence, causes errors in orbit prediction. The coordinate conversion between the two frames involves precession, nutation, Earth rotation, and polar motion. Among these factors, the unpredictability and uncertainty of Earth rotation, called UT1-UTC, is the largest error source. In this paper, the effect of UT1-UTC on the accuracy of LEO propagation is introduced, tested, and analyzed. Considering the maximum unpredictability of UT1-UTC, 0.9 seconds, the meaningful order of non-spherical Earth harmonic functions is derived.
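The order of magnitude of the UT1-UTC effect can be estimated directly: a 0.9 s error in Earth rotation mis-rotates the terrestrial frame, shifting where the gravity field is evaluated relative to the satellite. A minimal back-of-the-envelope sketch (the 700 km altitude and equatorial geometry are assumed examples, not values from the paper):

```python
OMEGA_E = 7.2921150e-5   # Earth rotation rate, rad/s
R_EQ = 6378137.0         # WGS-84 equatorial radius, m

dut1 = 0.9               # worst-case unpredicted UT1-UTC, seconds
dtheta = OMEGA_E * dut1  # terrestrial-frame mis-rotation angle, rad

# Arc-length shift of an equatorial evaluation point at an assumed 700 km
# LEO altitude: the scale of the error in where the gravity field is sampled.
alt = 700e3
arc_error = dtheta * (R_EQ + alt)   # metres
```

The mis-rotation is about 6.6e-5 rad, shifting the gravity-field evaluation point by roughly half a kilometre, which is why UT1-UTC dominates the frame-conversion error budget and bounds the harmonic order worth retaining.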
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.
2010-09-15
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to well reproduce the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions.
The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article.
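The "error propagation" in DRGEP refers to propagating direct interaction coefficients multiplicatively along graph paths from a target species, then pruning species whose best path coefficient falls below a threshold. This can be sketched on a toy graph; the species names and coefficient values are invented for illustration, and real coefficients come from reaction-rate analysis.

```python
# Direct interaction coefficients r[i][j] in [0, 1] for a toy 5-species graph.
r = {
    "fuel": {"A": 0.9, "B": 0.3},
    "A":    {"C": 0.8, "B": 0.5},
    "B":    {"C": 0.05},
    "C":    {"D": 0.01},
    "D":    {},
}

def drgep_coefficients(graph, target):
    """Max-over-paths product of direct coefficients, via a Dijkstra-style
    search (valid because products of values in [0, 1] only shrink)."""
    best = {target: 1.0}
    frontier = [(1.0, target)]
    while frontier:
        frontier.sort(reverse=True)       # expand the strongest path first
        val, node = frontier.pop(0)
        if val < best.get(node, 0.0):     # stale entry, already beaten
            continue
        for nbr, rij in graph.get(node, {}).items():
            cand = val * rij
            if cand > best.get(nbr, 0.0):
                best[nbr] = cand
                frontier.append((cand, nbr))
    return best

R = drgep_coefficients(r, "fuel")
threshold = 0.1
skeletal = {s for s, v in R.items() if v >= threshold}   # species kept
```

Species D is reachable only through the weak C-to-D link, so its propagated coefficient (0.0072) falls below the threshold and it is pruned; B is kept because the indirect path fuel-A-B (0.45) beats the direct link (0.3).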
Comparison of actinide production in traveling wave and pressurized water reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osborne, A.G.; Smith, T.A.; Deinert, M.R.
The geopolitical problems associated with civilian nuclear energy production arise in part from the accumulation of transuranics in spent nuclear fuel. A traveling wave reactor is a type of breed-burn reactor that could, if feasible, reduce the overall production of transuranics. In one possible configuration, a cylinder of natural or depleted uranium would be subjected to a fast neutron flux at one end. The neutrons would transmute the uranium, producing plutonium and higher actinides. Under the right conditions, the reactor could become critical, at which point a self-stabilizing fission wave would form and propagate down the length of the reactor cylinder. The neutrons from the fission wave would burn the fissile nuclides and transmute uranium ahead of the wave to produce additional fuel. Fission waves in uranium are driven largely by the production and fission of 239Pu. Simulations have shown that the fuel burnup can reach values greater than 400 MWd/kgIHM, before fission products poison the reaction. In this work we compare the production of plutonium and minor actinides produced in a fission wave to that of a UOX fueled light water reactor, both on an energy-normalized basis. The nuclide concentrations in the spent traveling wave reactor fuel are computed using a one-group diffusion model and are verified using Monte Carlo simulations. In the case of the pressurized water reactor, a multi-group collision probability model is used to generate the nuclide quantities. We find that the traveling wave reactor produces about 0.187 g/MWd/kgIHM of transuranics compared to 0.413 g/MWd/kgIHM for a pressurized water reactor running fuel enriched to 4.95% and burned to 50 MWd/kgIHM. (authors)
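The energy-normalized comparison in the last sentence reduces to simple arithmetic; the sketch below just makes the implied reduction explicit (the two production figures are from the abstract, the percentage is derived, not quoted):

```python
twr = 0.187   # g/MWd/kgIHM, transuranics from the traveling wave reactor
pwr = 0.413   # g/MWd/kgIHM, PWR at 4.95% enrichment, 50 MWd/kgIHM burnup

# Fractional reduction in energy-normalized transuranic production.
reduction = 1.0 - twr / pwr    # roughly a 55% reduction
```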
Huang, Yishun; Fang, Luting; Zhu, Zhi; Ma, Yanli; Zhou, Leiji; Chen, Xi; Xu, Dunming; Yang, Chaoyong
2016-11-15
Due to uranium's increasing exploitation in nuclear energy and its toxicity to human health, it is of great significance to detect uranium contamination. In particular, development of a rapid, sensitive and portable method is important for personal health care for those who frequently come into contact with uranium ore mining or who investigate leaks at nuclear power plants. The most stable form of uranium in water is the uranyl ion (UO2(2+)). In this work, a UO2(2+) responsive smart hydrogel was designed and synthesized for rapid, portable, sensitive detection of UO2(2+). A UO2(2+) dependent DNAzyme complex composed of a substrate strand and an enzyme strand was utilized to crosslink DNA-grafted polyacrylamide chains to form a DNA hydrogel. Colorimetric analysis was achieved by encapsulating gold nanoparticles (AuNPs) in the DNAzyme-crosslinked hydrogel to indicate the concentration of UO2(2+). Without UO2(2+), the enzyme strand is not active. The presence of UO2(2+) in the sample activates the enzyme strand and triggers the cleavage of the substrate strand from the enzyme strand, thereby decreasing the density of crosslinkers and destabilizing the hydrogel, which then releases the encapsulated AuNPs. As low as 100 nM UO2(2+) was visually detected by the naked eye. The target-responsive hydrogel was also demonstrated to be applicable in natural water spiked with UO2(2+). Furthermore, to avoid the visual errors caused by naked-eye observation, a previously developed volumetric bar-chart chip (V-Chip) was used to quantitatively detect UO2(2+) concentrations in water by encapsulating Au-Pt nanoparticles in the hydrogel. The UO2(2+) concentrations were visually quantified from the travelling distance of the ink bar on the V-Chip. The method can be used for portable and quantitative detection of uranium in field applications without skilled operators and sophisticated instruments. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Karami, Behrouz; Janghorban, Maziar; Li, Li
2018-03-01
We found a proofing error in the affiliation of the first and second authors of our article [1]. The correct affiliation should be "Department of Mechanical Engineering, Marvdasht Branch, Islamic Azad University, Marvdasht, Iran".
The propagation of wind errors through ocean wave hindcasts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holthuijsen, L.H.; Booij, N.; Bertotti, L.
1996-08-01
To estimate uncertainties in wave forecasts and hindcasts, computations have been carried out for a location in the Mediterranean Sea using three different analyses of one historic wind field. These computations involve a systematic sensitivity analysis and estimated wind field errors. This technique enables a wave modeler to estimate such uncertainties in other forecasts and hindcasts if only one wind analysis is available.
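The essence of such a sensitivity analysis is that wind errors pass through the wave model's transfer function. As a minimal hedged sketch (not the paper's wave model), one can use the fully developed Pierson-Moskowitz relation Hs ≈ 0.21 U²/g as a stand-in and propagate an assumed 10% wind-speed error by Monte Carlo; the quadratic dependence roughly doubles the relative error in wave height:

```python
import random

g = 9.81

def significant_wave_height(u):
    """Fully developed Pierson-Moskowitz relation; an illustrative
    stand-in for a full wave hindcast model."""
    return 0.21 * u * u / g

random.seed(0)
u_mean, u_sigma = 20.0, 2.0    # assumed wind analysis and its 10% error (m/s)
hs = [significant_wave_height(random.gauss(u_mean, u_sigma))
      for _ in range(20000)]
hs_mean = sum(hs) / len(hs)
hs_sigma = (sum((h - hs_mean) ** 2 for h in hs) / len(hs)) ** 0.5
rel = hs_sigma / hs_mean       # near 0.20: the quadratic model doubles it
```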
Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation
Ruotsalainen, Laura; Kirkko-Jaakkola, Martti; Rantanen, Jesperi; Mäkelä, Maija
2018-01-01
The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy. Therefore, sophisticated error modelling and implementation of integration algorithms are key for providing a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither of the assumptions is correct for tactical applications, especially for dismounted soldiers, or rescue personnel. Therefore, error modelling and implementation of advanced fusion algorithms are essential for providing a viable result. Our approach is to use particle filtering (PF), which is a sophisticated option for integrating measurements emerging from pedestrian motion having non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision based heading and translation measurements to include the correct error probability density functions (pdf) in the particle filter implementation. Then, model fitting is used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, particle filtering method is developed to fuse all this information, where the weights of each particle are computed based on the specific models derived. 
The performance of the developed method is tested via two experiments, one at a university’s premises and another in realistic tactical conditions. The results show significant improvement on the horizontal localization when the measurement errors are carefully modelled and their inclusion into the particle filtering implementation correctly realized. PMID:29443918
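The core of the particle-filter step described above is weighting each particle by the (non-Gaussian) measurement-error pdf. The hedged sketch below uses a heavy-tailed Student-t density as a stand-in error model for a vision-derived heading measurement; the particle count, degrees of freedom, scale, and the measured value are all illustrative, not from the paper:

```python
import math
import random

def t_pdf(x, nu, sigma):
    """Student-t density (location 0), a heavy-tailed stand-in for a
    non-Gaussian measurement-error pdf."""
    c = math.gamma((nu + 1) / 2) / (
        math.gamma(nu / 2) * math.sqrt(nu * math.pi) * sigma)
    return c * (1 + (x / sigma) ** 2 / nu) ** (-(nu + 1) / 2)

random.seed(1)
# Particles: hypothesised headings (rad) spread over a plausible range.
particles = [random.uniform(0.0, 0.6) for _ in range(500)]
z = 0.32                         # vision-based heading measurement (rad)

# Weight each particle by the likelihood of the measurement residual.
weights = [t_pdf(z - p, nu=3.0, sigma=0.05) for p in particles]
total = sum(weights)
weights = [w / total for w in weights]

# Weighted-mean state estimate.
estimate = sum(w * p for w, p in zip(weights, particles))
```

In a full filter this weighting step would be followed by resampling and by propagation of the particles with the inertial motion model.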
Conditions for the optical wireless links bit error ratio determination
NASA Astrophysics Data System (ADS)
Kvíčala, Radek
2017-11-01
To determine the quality of Optical Wireless Links (OWL), it is necessary to establish the availability and the probability of interruption. This quality can be characterized by the bit error rate (BER) of the optical beam, which expresses the fraction of erroneously transmitted bits. In practice, BER measurement runs into the problem of determining the integration time (measuring time). A bit error ratio tester (BERT) has been developed for measuring and recording the BER of OWLs. The accessible literature mentions a 1 second integration time for 64 kbps radio links. However, it is impossible to use this integration time for OWLs because of the singular character of coherent beam propagation.
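The integration-time problem can be made concrete with a standard rule of thumb (not taken from the paper): assuming independent bit errors, the number of bits one must observe to see at least one error with a given confidence is log(1 − confidence)/log(1 − BER), so at 64 kbps a BER of 10⁻⁶ needs far more than 1 s of observation:

```python
import math

def bits_needed(ber, confidence=0.95):
    """Bits to observe to catch at least one error with the given
    confidence, assuming independent bit errors (common rule of thumb)."""
    return math.log(1.0 - confidence) / math.log(1.0 - ber)

rate = 64_000                        # bps, the radio-link rate cited above
t = bits_needed(1e-6) / rate         # tens of seconds, not 1 s
```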
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
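The violation described above, adding tolerance limits term by term instead of combining variations by the error propagation law, can be shown with three made-up stress contributions (the means, sigmas, and k-factor below are illustrative only):

```python
k = 3.0                                  # tolerance-limit factor
terms = [(100.0, 5.0), (60.0, 4.0), (40.0, 3.0)]   # (mean, sigma) stresses

# Deterministic practice: add the tolerance limits algebraically.
algebraic = sum(mu + k * s for mu, s in terms)

# Error propagation law: combine the sigmas in root-sum-square.
rss = sum(mu for mu, _ in terms) + k * sum(s * s for _, s in terms) ** 0.5

# Positive and serially cumulative: each added term widens the gap.
conservatism = algebraic - rss
```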
Multipath induced errors in meteorological Doppler/interferometer location systems
NASA Technical Reports Server (NTRS)
Wallace, R. G.
1984-01-01
One application of an RF interferometer aboard a low-orbiting spacecraft to determine the location of ground-based transmitters is in tracking high-altitude balloons for meteorological studies. A source of error in this application is reflection of the signal from the sea surface. Through propagation and signal analysis, the magnitude of the reflection-induced error in both Doppler frequency measurements and interferometer phase measurements was estimated. The theory of diffuse scattering from random surfaces was applied to obtain the power spectral density of the reflected signal. The processing of the combined direct and reflected signals was then analyzed to find the statistics of the measurement error. It was found that the error varies greatly during the satellite overpass and attains its maximum value at closest approach. The maximum values of interferometer phase error and Doppler frequency error found for the system configuration considered were comparable to thermal noise-induced error.
Cache-based error recovery for shared memory multiprocessor systems
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1989-01-01
A multiprocessor cache-based checkpointing and recovery scheme for recovering from transient processor errors in a shared-memory multiprocessor with private caches is presented. New implementation techniques that use checkpoint identifiers and recovery stacks to reduce performance degradation in processor utilization during normal execution are examined. This cache-based checkpointing technique prevents rollback propagation, provides for rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.
Error analysis in inverse scatterometry. I. Modeling.
Al-Assaad, Rayan M; Byrne, Dale M
2007-02-01
Scatterometry is an optical technique that has been studied and tested in recent years in semiconductor fabrication metrology for critical dimensions. Previous work presented an iterative linearized method to retrieve surface-relief profile parameters from reflectance measurements upon diffraction. Building on that iterative linear solution model, rigorous models are developed in this work to represent the random and deterministic (offset) errors in scatterometric measurements. The propagation of different types of error from the measurement data to the profile parameter estimates is then presented. The improvement in solution accuracies is then demonstrated with theoretical and experimental data by adjusting for the offset errors. In a companion paper (in process) an improved optimization method is presented to account for unknown offset errors in the measurements based on the offset error model.
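For a linearized retrieval, the propagation of both error types has a closed form. The hedged one-parameter sketch below (sensitivities, noise level, and offset are invented numbers, not the paper's model) maps random reflectance noise sigma and a common deterministic offset b through the least-squares solution via the sensitivities J_i = dR_i/dp:

```python
# Reflectance sensitivities of one profile parameter at three wavelengths
# (1/nm, assumed values for illustration).
J = [0.02, 0.05, 0.04]
sigma = 0.001      # random reflectance noise, 1-sigma
b = 0.002          # deterministic offset common to every measurement

jj = sum(j * j for j in J)

# Random error: least-squares variance is sigma^2 / sum(J_i^2).
param_sigma = sigma / jj ** 0.5

# Deterministic error: the offset maps to a parameter bias (J^T b) / (J^T J).
param_bias = sum(j * b for j in J) / jj
```

Note that the offset does not average away with more measurements, which is why the paper treats it separately from the random component.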
Adjoint-Based Mesh Adaptation for the Sonic Boom Signature Loudness
NASA Technical Reports Server (NTRS)
Rallabhandi, Sriram K.; Park, Michael A.
2017-01-01
The mesh adaptation functionality of FUN3D is utilized to obtain a mesh optimized to calculate sonic boom ground signature loudness. During this process, the coupling between the discrete-adjoints of the computational fluid dynamics tool FUN3D and the atmospheric propagation tool sBOOM is exploited to form the error estimate. This new mesh adaptation methodology will allow generation of suitable meshes adapted to reduce the estimated errors in the ground loudness, which is an optimization metric employed in supersonic aircraft design. This new output-based adaptation could allow new insights into meshing for sonic boom analysis and design, and complements existing output-based adaptation techniques such as adaptation to reduce estimated errors in off-body pressure functional. This effort could also have implications for other coupled multidisciplinary adjoint capabilities (e.g., aeroelasticity) as well as inclusion of propagation specific parameters such as prevailing winds or non-standard atmospheric conditions. Results are discussed in the context of existing methods and appropriate conclusions are drawn as to the efficacy and efficiency of the developed capability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Xiangyu; Shi, Xianbo; Wang, Yong
The mutual optical intensity (MOI) model is extended to include the propagation of partially coherent radiation through non-ideal mirrors. The propagation of the MOI from the incident to the exit plane of the mirror is realised by local ray tracing. The effects of figure errors can be expressed as phase shifts obtained by either the phase projection approach or the direct path length method. Using the MOI model, the effects of figure errors are studied for diffraction-limited cases using elliptical cylinder mirrors. Figure errors with low spatial frequencies can vary the intensity distribution, redistribute the local coherence function and distort the wavefront, but have no effect on the global degree of coherence. The MOI model is benchmarked against HYBRID and the multi-electron Synchrotron Radiation Workshop (SRW) code. The results show that the MOI model gives accurate results under different coherence conditions of the beam. Other than intensity profiles, the MOI model can also provide the wavefront and the local coherence function at any location along the beamline. The capability of tuning the trade-off between accuracy and efficiency makes the MOI model an ideal tool for beamline design and optimization.
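In the direct path length picture mentioned above, a mirror height error h at grazing incidence adds an extra optical path of 2h·sin(θ), hence a phase shift of 2π·2h·sin(θ)/λ. The numbers below (0.1 nm wavelength, 0.2° grazing angle, 1 nm figure error) are illustrative, not from the paper:

```python
import math

wavelength = 1e-10           # m, hard X-rays
theta = math.radians(0.2)    # grazing angle
h = 1e-9                     # m, local mirror height error

# Extra path picked up on reflection from the height error.
delta_path = 2.0 * h * math.sin(theta)

# Corresponding phase shift imprinted on the wavefront (radians).
delta_phi = 2.0 * math.pi * delta_path / wavelength
```

Even a nanometre-scale figure error thus produces a non-negligible fraction of a radian of wavefront distortion, which is why such errors matter for diffraction-limited beamlines.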
Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems
Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia
2016-01-01
The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to the inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1 and time-varying fluctuations in the DOV excite Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from the off-line database and computing gravity vectors directly using the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and computing time delay. It is recommended for the marine navigation that a gravity vector should be calculated within 1 s and updated every 100 s at most. To meet this demand, the time duration of calculating the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector have been removed and the Schuler oscillation has been attenuated effectively. In the rugged terrain, the horizontal position error could be reduced at best 48.85% of its regional maximum. The experimental results match with the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of the high-precision and long-term INSs. PMID:27999351
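The first compensation option above, interpolation from an off-line database, amounts to a grid lookup fast enough to meet the stated 1 s computation budget. The hedged sketch below shows plain bilinear interpolation of a deflection-of-the-vertical component from a precomputed grid; the grid values, spacing, and coordinates are invented for illustration:

```python
def bilinear(grid, lat0, lon0, step, lat, lon):
    """Bilinear interpolation on a regular lat/lon grid."""
    i = int((lat - lat0) / step)
    j = int((lon - lon0) / step)
    t = (lat - lat0) / step - i
    u = (lon - lon0) / step - j
    return ((1 - t) * (1 - u) * grid[i][j]
            + t * (1 - u) * grid[i + 1][j]
            + (1 - t) * u * grid[i][j + 1]
            + t * u * grid[i + 1][j + 1])

# North-south DOV component (arcsec) at four grid nodes, 0.1 deg spacing
# (made-up values standing in for an EGM2008-derived database).
dov_xi = [[4.0, 6.0],
          [5.0, 9.0]]
val = bilinear(dov_xi, lat0=30.0, lon0=120.0, step=0.1,
               lat=30.05, lon=120.05)
```

The interpolated DOV would then be rotated into the navigation frame and fed to the INS mechanization at each gravity update epoch.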
Optimal information transfer in enzymatic networks: A field theoretic formulation
NASA Astrophysics Data System (ADS)
Samanta, Himadri S.; Hinczewski, Michael; Thirumalai, D.
2017-07-01
Signaling in enzymatic networks is typically triggered by environmental fluctuations, resulting in a series of stochastic chemical reactions, leading to corruption of the signal by noise. For example, information flow is initiated by binding of extracellular ligands to receptors, which is transmitted through a cascade involving kinase-phosphatase stochastic chemical reactions. For a class of such networks, we develop a general field-theoretic approach to calculate the error in signal transmission as a function of an appropriate control variable. Application of the theory to a simple push-pull network, a module in the kinase-phosphatase cascade, recovers the exact results for error in signal transmission previously obtained using umbral calculus [Hinczewski and Thirumalai, Phys. Rev. X 4, 041017 (2014), 10.1103/PhysRevX.4.041017]. We illustrate the generality of the theory by studying the minimal errors in noise reduction in a reaction cascade with two connected push-pull modules. Such a cascade behaves as an effective three-species network with a pseudointermediate. In this case, optimal information transfer, resulting in the smallest square of the error between the input and output, occurs with a time delay, which is given by the inverse of the decay rate of the pseudointermediate. Surprisingly, in these examples the minimum error computed using simulations that take nonlinearities and discrete nature of molecules into account coincides with the predictions of a linear theory. In contrast, there are substantial deviations between simulations and predictions of the linear theory in error in signal propagation in an enzymatic push-pull network for a certain range of parameters. Inclusion of second-order perturbative corrections shows that differences between simulations and theoretical predictions are minimized. 
Our study establishes that a field theoretic formulation of stochastic biological signaling offers a systematic way to understand error propagation in networks of arbitrary complexity.
Analysis of the PLL phase error in presence of simulated ionospheric scintillation events
NASA Astrophysics Data System (ADS)
Forte, B.
2012-01-01
The functioning of standard phase locked loops (PLL), including those used to track radio signals from Global Navigation Satellite Systems (GNSS), is based on a linear approximation which holds in presence of small phase errors. Such an approximation represents a reasonable assumption in most of the propagation channels. However, in presence of a fading channel the phase error may become large, making the linear approximation no longer valid. The PLL is then expected to operate in a non-linear regime. As PLLs are generally designed and expected to operate in their linear regime, whenever the non-linear regime comes into play, they will experience a serious limitation in their capability to track the corresponding signals. The phase error and the performance of a typical PLL embedded into a commercial multiconstellation GNSS receiver were analyzed in presence of simulated ionospheric scintillation. Large phase errors occurred during scintillation-induced signal fluctuations although cycle slips only occurred during the signal re-acquisition after a loss of lock. Losses of lock occurred whenever the signal faded below the minimum C/N0 threshold allowed for tracking. The simulations were performed for different signals (GPS L1C/A, GPS L2C, GPS L5 and Galileo L1). L5 and L2C proved to be weaker than L1. It appeared evident that the conditions driving the PLL phase error in the specific case of GPS receivers in presence of scintillation-induced signal perturbations need to be evaluated in terms of the combination of the minimum C/N0 tracking threshold, lock detector thresholds, possible cycle slips in the tracking PLL and accuracy of the observables (i.e. the error propagation onto the observables stage).
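The link between fading depth and phase error can be illustrated with the textbook thermal-noise jitter formula for a Costas-type PLL (a standard GNSS-receiver rule of thumb, not a result of this paper; the bandwidth and integration-time values are assumed). Deep C/N0 fades push the 1-sigma jitter toward the commonly quoted ~15° tracking limit, where the linear approximation breaks down:

```python
import math

def pll_thermal_jitter(cn0_dbhz, bn_hz=15.0, t_coh=0.02):
    """1-sigma PLL thermal-noise phase jitter (rad) for loop bandwidth
    bn_hz and coherent integration time t_coh; illustrative parameters."""
    cn0 = 10 ** (cn0_dbhz / 10.0)
    return math.sqrt(bn_hz / cn0 * (1.0 + 1.0 / (2.0 * t_coh * cn0)))

strong = math.degrees(pll_thermal_jitter(45.0))  # nominal signal: ~1 deg
weak = math.degrees(pll_thermal_jitter(25.0))    # deep fade: ~13 deg,
                                                 # near the loss-of-lock region
```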
Atmospheric microwave refractivity and refraction
NASA Technical Reports Server (NTRS)
Yu, E.; Hodge, D. B.
1980-01-01
The atmospheric refractivity can be expressed as a function of temperature, pressure, water vapor content, and operating frequency. Based on twenty years of meteorological data, statistics of the atmospheric refractivity were obtained. These statistics were used to estimate the variation of dispersion, attenuation, and refraction effects on microwave and millimeter wave signals propagating along atmospheric paths. Bending angle, elevation angle error, and range error were also derived for an exponentially tapered, spherical atmosphere.
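The non-dispersive part of the dependence described above is commonly written in the Smith-Weintraub form N = 77.6·P/T + 3.73e5·e/T² (N-units), with total pressure P and water-vapour pressure e in hPa and temperature T in kelvin. A minimal sketch, using standard sea-level values purely as an example:

```python
def refractivity(p_hpa, t_kelvin, e_hpa):
    """Smith-Weintraub radio refractivity N in N-units (widely used
    non-dispersive form of the microwave refractivity)."""
    return 77.6 * p_hpa / t_kelvin + 3.73e5 * e_hpa / t_kelvin ** 2

# Standard-atmosphere sea-level example with ~10 hPa of water vapour.
n = refractivity(1013.25, 288.15, 10.0)   # on the order of 320 N-units
```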
GUM Analysis for TIMS and SIMS Isotopic Ratios in Graphite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heasler, Patrick G.; Gerlach, David C.; Cliff, John B.
2007-04-01
This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up and currently consist of various ratios of U, Pu, and boron impurities in the graphite samples. The GUM calculation is a propagation-of-error methodology that assigns uncertainties (in the form of standard errors and confidence bounds) to the final estimates.
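For a ratio of two measured quantities, first-order GUM propagation combines the relative standard uncertainties in quadrature. The sketch below applies this to a generic isotopic ratio r = a/b with uncorrelated inputs; the counts and uncertainties are invented for illustration, not taken from the report:

```python
# Measured quantities and their standard uncertainties (illustrative).
a, u_a = 1.0e5, 400.0      # minor-isotope signal
b, u_b = 4.0e6, 8000.0     # major-isotope signal

r = a / b

# First-order (GUM) propagation for a ratio of uncorrelated inputs:
# (u_r / r)^2 = (u_a / a)^2 + (u_b / b)^2
u_r = r * ((u_a / a) ** 2 + (u_b / b) ** 2) ** 0.5

# Approximate 95% coverage interval with coverage factor k = 2.
lo, hi = r - 2 * u_r, r + 2 * u_r
```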
Summation-by-Parts operators with minimal dispersion error for coarse grid flow calculations
NASA Astrophysics Data System (ADS)
Linders, Viktor; Kupiainen, Marco; Nordström, Jan
2017-07-01
We present a procedure for constructing Summation-by-Parts operators with minimal dispersion error both near and far from numerical interfaces. Examples of such operators are constructed and compared with a higher order non-optimised Summation-by-Parts operator. Experiments show that the optimised operators are superior for wave propagation and turbulent flows involving large wavenumbers, long solution times and large ranges of resolution scales.
Alastruey, Jordi; Hunt, Anthony A E; Weinberg, Peter D
2014-01-01
We present a novel analysis of arterial pulse wave propagation that combines traditional wave intensity analysis with identification of Windkessel pressures to account for the effect on the pressure waveform of peripheral wave reflections. Using haemodynamic data measured in vivo in the rabbit or generated numerically in models of human compliant vessels, we show that traditional wave intensity analysis identifies the timing, direction and magnitude of the predominant waves that shape aortic pressure and flow waveforms in systole, but fails to identify the effect of peripheral reflections. These reflections persist for several cardiac cycles and make up most of the pressure waveform, especially in diastole and early systole. Ignoring peripheral reflections leads to an erroneous indication of a reflection-free period in early systole and additional error in the estimates of (i) pulse wave velocity at the ascending aorta given by the PU–loop method (9.5% error) and (ii) transit time to a dominant reflection site calculated from the wave intensity profile (27% error). These errors decreased to 1.3% and 10%, respectively, when accounting for peripheral reflections. Using our new analysis, we investigate the effect of vessel compliance and peripheral resistance on wave intensity, peripheral reflections and reflections originating in previous cardiac cycles. PMID:24132888
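The PU-loop method referenced above rests on the water-hammer relation dP = ρ·c·dU holding in early systole before reflections return, so the local pulse wave velocity c is the slope of the pressure-velocity loop divided by blood density. A minimal sketch with synthetic, reflection-free data (the density and wave-speed values are illustrative):

```python
rho = 1060.0                              # kg/m^3, blood density
c_true = 5.0                              # m/s, assumed local wave speed
U = [0.0, 0.1, 0.2, 0.3]                  # m/s, early-systolic velocities
P = [rho * c_true * u for u in U]         # Pa, ideal water-hammer line

# PU-loop estimate: slope of the linear early-systolic limb over rho.
slope = (P[-1] - P[0]) / (U[-1] - U[0])
c = slope / rho                           # recovers c_true exactly here
```

With real data the early-systolic limb is not perfectly linear, and as the abstract notes, residual peripheral reflections bias this slope, which is the source of the reported 9.5% error.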
Architectural elements of hybrid navigation systems for future space transportation
NASA Astrophysics Data System (ADS)
Trigo, Guilherme F.; Theil, Stephan
2018-06-01
The fundamental limitations of inertial navigation, currently employed by most launchers, have raised interest in GNSS-aided solutions. Combining inertial measurements and GNSS outputs allows inertial calibration online, solving the issue of inertial drift. However, many challenges and design options unfold. In this work we analyse several architectural elements and design aspects of a hybrid GNSS/INS navigation system conceived for space transportation. The most fundamental architectural features, such as coupling depth, modularity between filter and inertial propagation, and the open-/closed-loop nature of the configuration, are discussed in the light of the envisaged application. The importance of the inertial propagation algorithm and sensor class in the overall system is investigated, and the handling of sensor errors and uncertainties that arise with lower-grade sensors is also considered. In terms of GNSS outputs we consider receiver solutions (position and velocity) and raw measurements (pseudorange, pseudorange-rate and time-difference carrier phase). Receiver clock error handling options and atmospheric error correction schemes for these measurements are analysed under flight conditions. System performance with different GNSS measurements is estimated through covariance analysis, with the differences between loose and tight coupling emphasized through partial outage simulation. Finally, we discuss options for filter algorithm robustness against non-linearities and system/measurement errors. A possible scheme for fault detection, isolation and recovery is also proposed.
The High-Resolution Wave-Propagation Method Applied to Meso- and Micro-Scale Flows
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.; Proctor, Fred H.
2012-01-01
The high-resolution wave-propagation method for computing the nonhydrostatic atmospheric flows on meso- and micro-scales is described. The design and implementation of the Riemann solver used for computing the Godunov fluxes is discussed in detail. The method uses a flux-based wave decomposition in which the flux differences are written directly as the linear combination of the right eigenvectors of the hyperbolic system. The two advantages of the technique are: 1) the need for an explicit definition of the Roe matrix is eliminated and, 2) the inclusion of source term due to gravity does not result in discretization errors. The resulting flow solver is conservative and able to resolve regions of large gradients without introducing dispersion errors. The methodology is validated against exact analytical solutions and benchmark cases for non-hydrostatic atmospheric flows.
Fish-Eye Observing with Phased Array Radio Telescopes
NASA Astrophysics Data System (ADS)
Wijnholds, S. J.
The radio astronomical community is currently developing and building several new radio telescopes based on phased array technology. These telescopes provide a large field-of-view that may in principle span a full hemisphere. This makes calibration and imaging very challenging tasks due to the complex source structures and direction-dependent radio wave propagation effects. In this thesis, calibration and imaging methods are developed based on least squares estimation of instrument and source parameters. Monte Carlo simulations and actual observations with several prototypes show that this model-based approach provides statistically and computationally efficient solutions. The error analysis provides a rigorous mathematical framework to assess the imaging performance of current and future radio telescopes in terms of the effective noise, which is the combined effect of propagated calibration errors, noise in the data and source confusion.
Error field penetration and locking to the backward propagating wave
Finn, John M.; Cole, Andrew J.; Brennan, Dylan P.
2015-12-30
In this letter we investigate error field penetration, or locking, behavior in plasmas having stable tearing modes with finite real frequencies ω_r in the plasma frame. In particular, we address the fact that locking can drive a significant equilibrium flow. We show that this occurs at a velocity slightly above v = ω_r/k, corresponding to the interaction with a backward propagating tearing mode in the plasma frame. Results are discussed for a few typical tearing mode regimes, including a new derivation showing that the existence of real frequencies occurs for viscoresistive tearing modes, in an analysis including the effects of pressure gradient, curvature and parallel dynamics. The general result of locking to a finite velocity flow is applicable to a wide range of tearing mode regimes, indeed any regime where real frequencies occur.
Shi, Xianbo; Reininger, Ruben; Sanchez del Rio, Manuel; ...
2014-05-15
A new method for beamline simulation combining ray-tracing and wavefront propagation is described. The 'Hybrid Method' computes diffraction effects when the beam is clipped by an aperture or mirror length and can also simulate the effect of figure errors in the optical elements when diffraction is present. The effect of different spatial frequencies of figure errors on the image is compared with SHADOW results, pointing to the limitations of the latter. The code has been benchmarked against the multi-electron version of SRW in one dimension to show its validity in the case of fully, partially and non-coherent beams. The results demonstrate that the code is considerably faster than the multi-electron version of SRW and is therefore a useful tool for beamline design and optimization.
NASA Astrophysics Data System (ADS)
Swastika, Windra
2017-03-01
A system for recognizing the nominal value of banknotes has been developed using an artificial neural network (ANN). An ANN with back propagation has one disadvantage: the learning process is very slow (or never reaches the target) when the number of iterations, weights and samples is large. One way to speed up the learning process is the Quickprop method. Quickprop is based on Newton's method and speeds up learning by assuming that the error E is a parabolic function of each weight; the goal is to drive the error gradient E' to zero. In our system, we use 5 denominations, i.e. 1,000 IDR, 2,000 IDR, 5,000 IDR, 10,000 IDR and 50,000 IDR. One surface of each denomination was scanned and digitally processed, yielding 40 patterns to be used as the training set in the ANN system. The effectiveness of the Quickprop method in the ANN system was validated by 2 factors: (1) the number of iterations required to reach an error below 0.1; and (2) the accuracy in predicting nominal values from the input. Our results show that the Quickprop method successfully shortens the learning process compared to the back propagation method. For 40 input patterns, Quickprop reached an error below 0.1 in only 20 iterations, while back propagation required 2000 iterations. The prediction accuracy of both methods is higher than 90%.
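The Quickprop update described in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: a single weight and the quadratic loss E(w) = (w - 3)², chosen so the parabola assumption behind Quickprop holds exactly and the method lands on the minimum in one step.

```python
# Minimal Quickprop sketch (assumed toy setup: one weight, quadratic loss).

def grad(w):
    """dE/dw for the toy loss E(w) = (w - 3)^2."""
    return 2.0 * (w - 3.0)

def quickprop_step(w, s_prev, dw_prev):
    """One Quickprop update: fit a parabola through the previous and
    current gradients and jump to its estimated minimum."""
    s = grad(w)
    dw = s / (s_prev - s) * dw_prev
    return w + dw, s, dw

# Bootstrap with one plain gradient-descent step (Quickprop needs a
# previous gradient and a previous step to start from).
w0 = 0.0
s0 = grad(w0)
dw0 = -0.1 * s0            # learning rate 0.1
w1 = w0 + dw0              # w1 = 0.6 after the bootstrap step

w2, s1, dw1 = quickprop_step(w1, s0, dw0)
print(w2)  # 3.0: the exact minimum, reached in a single Quickprop step
```

Because the loss here really is parabolic, one Quickprop step is exact; on a real network the parabola is only an approximation, which is why the abstract reports an iteration-count reduction rather than one-step convergence.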
Performance Evaluation of Spectroscopic Detectors for LEU Hold-up Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkataraman, Ramkumar; Nutter, Greg; McElroy, Robert Dennis
The hold-up measurement of low-enriched uranium materials may require the use of alternate detector types relative to the measurement of highly enriched uranium. This is in part due to the difference in process scale (i.e., the components are generally larger for low-enriched uranium systems), but also because the characteristic gamma-ray lines from 235U used for assay of highly enriched uranium will be present at a much reduced intensity (on a per-gram-of-uranium basis) at lower enrichments. Researchers at Oak Ridge National Laboratory examined the performance of several standard detector types, e.g., NaI(Tl), LaBr3(Ce), and HPGe, to select a suitable candidate for measuring and quantifying low-enriched uranium hold-up in process pipes and equipment at the Portsmouth gaseous diffusion plant. Detector characteristics, such as energy resolution (full width at half maximum) and net peak count rates at gamma-ray energies spanning a range of 60–1332 keV, were measured for the above-mentioned detector types using the same sources and in the same geometry. Uranium enrichment standards (Certified Reference Material no. 969 and Certified Reference Material no. 146) were measured using each of the detector candidates in the same geometry. The net count rates recorded by each detector at 186 keV and 1,001 keV were plotted as a function of enrichment (atom percentage). Background measurements were made in unshielded and shielded configurations under both ambient and elevated conditions of 238U activity. The highly enriched uranium hold-up measurement campaign at the Portsmouth plant was performed on process equipment that had been cleaned out. Therefore, in most cases, the thickness of the uranium deposits was less than the "infinite thickness" at which the 186 keV gamma rays are completely self-attenuated.
Because of this, in addition to the 186 keV gamma ray, the 1,001 keV gamma ray from 234mPa (a daughter of 238U in secular equilibrium with its parent) will also need to be measured. Based on the performance criteria of detection efficiency, energy resolution, peak-to-continuum ratio, minimum detectable limits, and the weight of the shielded probe, a shielded (0.5 in. thick lead shield) 2 × 2 in. NaI(Tl) detector is recommended for use. The recommended approach is to carry out the analysis using data from both the 186 keV and 1,001 keV gamma rays and select the best result based on propagated uncertainty estimates. It is also highly recommended that a two-point gain stabilization scheme based on an 241Am seed embedded in the probe be implemented. Shielding configurations to reduce the impact of background interference on the measurement of the 1,001 keV gamma ray are discussed.
NASA Astrophysics Data System (ADS)
Sinha, T.; Arumugam, S.
2012-12-01
Seasonal streamflow forecasts contingent on climate forecasts can be effectively utilized in updating water management plans and optimizing the generation of hydroelectric power. Streamflow in rainfall-runoff dominated basins depends critically on forecasted precipitation, in contrast to snow-dominated basins, where initial hydrological conditions (IHCs) are more important. Since precipitation forecasts from Atmosphere-Ocean General Circulation Models are available at coarse scale (~2.8° by 2.8°), spatial and temporal downscaling of such forecasts is required to drive land surface models, which typically run on finer spatial and temporal scales. Consequently, multiple sources of error are introduced at various stages in predicting seasonal streamflow. Therefore, in this study, we address the following science questions: (1) How do we attribute the errors in monthly streamflow forecasts to various sources: (i) model errors, (ii) spatio-temporal downscaling, (iii) imprecise initial conditions, (iv) no forecasts, and (v) imprecise forecasts? (2) How do monthly streamflow forecast errors propagate with lead time over various seasons? In this study, the Variable Infiltration Capacity (VIC) model is calibrated over the Apalachicola River at Chattahoochee, FL, in the southeastern US and implemented with observed 1/8° daily forcings to estimate reference streamflow during 1981 to 2010. The VIC model is then forced with different schemes under updated IHCs prior to the forecasting period to estimate the relative mean square errors due to: (a) temporal disaggregation, (b) spatial downscaling, (c) Reverse Ensemble Streamflow Prediction (imprecise IHCs), (d) ESP (no forecasts), and (e) ECHAM4.5 precipitation forecasts. Finally, error propagation under the different schemes is analyzed as a function of lead time over different seasons.
Tersi, Luca; Barré, Arnaud; Fantozzi, Silvia; Stagni, Rita
2013-03-01
Model-based mono-planar and bi-planar 3D fluoroscopy methods can quantify intact-joint kinematics with different performance/cost trade-offs. The aim of this study was to compare the performance of mono- and bi-planar setups to a marker-based gold standard during dynamic phantom knee acquisitions. Absolute pose errors for in-plane parameters were lower than 0.6 mm or 0.6° for both mono- and bi-planar setups. Mono-planar setups proved critical in quantifying the out-of-plane translation (error < 6.5 mm), and bi-planar setups in quantifying the rotation about the bone's longitudinal axis (error < 1.3°). These errors propagated to joint angles and translations differently depending on the alignment of the anatomical axes and the fluoroscopic reference frames. Internal-external rotation was the least accurate angle with both mono-planar (error < 4.4°) and bi-planar (error < 1.7°) setups, due to bone longitudinal symmetries. The results highlighted that the accuracy of mono-planar in-plane pose parameters is comparable to bi-planar, but with halved computational cost, halved segmentation time and halved ionizing radiation dose. Bi-planar analysis better compensated for the out-of-plane uncertainty, which propagates to relative kinematics differently depending on the setup. To take full advantage of bi-planar analysis, the motion task to be investigated should be designed to keep the joint inside the visible volume, which introduces constraints with respect to mono-planar analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaeger, Ryan T.; Wollaber, Allan B.; Urbatsch, Todd J.
2016-02-23
Here, the non-linear thermal radiative-transfer equations can be solved in various ways. One popular way is the Fleck and Cummings Implicit Monte Carlo (IMC) method. The IMC method was originally formulated with piecewise-constant material properties. For domains with a coarse spatial grid and large temperature gradients, an error known as numerical teleportation may cause artificially non-causal energy propagation and consequently an inaccurate material temperature. Source tilting is a technique to reduce teleportation error by constructing sub-spatial-cell (or sub-cell) emission profiles from which IMC particles are sampled. Several source tilting schemes exist, but some allow teleportation error to persist. We examine the effect of source tilting in problems with a temperature-dependent opacity. Within each cell, the opacity is evaluated continuously from a temperature profile implied by the source tilt. For IMC, this is a new approach to modeling the opacity. We find that applying source tilting along with a source-tilt-dependent opacity can introduce another dominant error that overly inhibits thermal wavefronts. We show that we can mitigate both teleportation and under-propagation errors if we discretize the temperature equation with a linear discontinuous (LD) trial space. Our method is for opacities ~1/T^3, but we formulate and test a slight extension for opacities ~1/T^3.5, where T is temperature. We find our method avoids errors that can be incurred by IMC with continuous source tilt constructions and piecewise-constant material temperature updates.
Estimate of higher order ionospheric errors in GNSS positioning
NASA Astrophysics Data System (ADS)
Hoque, M. Mainul; Jakowski, N.
2008-10-01
Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher-order ionospheric errors, such as the second- and third-order ionospheric terms in the refractive index formula and errors due to bending of the signal, and the total electron content (TEC) is assumed to be the same at the two GPS frequencies. These assumptions lead to erroneous estimation and correction of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas are proposed to correct errors due to the excess path length in addition to the free-space path length, the TEC difference at the two GNSS frequencies, and the third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected to millimeter-level accuracy using the proposed correction formulas.
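The standard first-order dual-frequency correction that this paper extends can be sketched as follows. The first-order ionospheric group delay is d_ion = 40.3·TEC/f² (metres, TEC in electrons/m²), and the ionosphere-free pseudorange combination removes it exactly, leaving only the higher-order terms the paper proposes formulas for. The numbers below are illustrative, not from the paper.

```python
# First-order dual-frequency ionospheric correction sketch
# (illustrative values; hypothetical true range rho).

F1, F2 = 1575.42e6, 1227.60e6   # GPS L1/L2 carrier frequencies, Hz
TEC = 1.0e17                    # slant TEC, electrons/m^2 (10 TECU)
rho = 20_000_000.0              # hypothetical true range, m

p1 = rho + 40.3 * TEC / F1**2   # L1 pseudorange with first-order delay
p2 = rho + 40.3 * TEC / F2**2   # L2 pseudorange with first-order delay

# Ionosphere-free combination: P_IF = (f1^2 P1 - f2^2 P2) / (f1^2 - f2^2)
p_if = (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)
print(abs(p_if - rho) < 1e-6)   # True: the first-order term cancels
```

The cancellation is exact only for the 1/f² term; the second- and third-order terms and the ray-bending error discussed in the abstract survive this combination, which is why they must be modelled separately.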
Production of Low Enriched Uranium Nitride Kernels for TRISO Particle Irradiation Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
McMurray, J. W.; Silva, C. M.; Helmreich, G. W.
2016-06-01
A large batch of UN microspheres to be used as kernels for TRISO particle fuel was produced using carbothermic reduction and nitriding of a sol-gel feedstock bearing tailored amounts of low-enriched uranium (LEU) oxide and carbon. The process parameters, established in a previous study, produced phase-pure NaCl-structure UN with dissolved C on the N sublattice. The composition, calculated by refinement of the lattice parameter from X-ray diffraction, was determined to be UC0.27N0.73. The final accepted product weighed 197.4 g. The microspheres had an average diameter of 797 ± 1.35 μm and a composite mean theoretical density of 89.9 ± 0.5% for a solid solution of UC and UN with the same atomic ratio; both values are reported with their corresponding calculated standard errors.
Bit-error rate for free-space adaptive optics laser communications.
Tyson, Robert K
2002-04-01
An analysis of adaptive optics compensation for atmospheric-turbulence-induced scintillation is presented with the figure of merit being the laser communications bit-error rate. The formulation covers weak, moderate, and strong turbulence; on-off keying; and amplitude-shift keying, over horizontal propagation paths or on a ground-to-space uplink or downlink. The theory shows that under some circumstances the bit-error rate can be improved by a few orders of magnitude with the addition of adaptive optics to compensate for the scintillation. Low-order compensation (less than 40 Zernike modes) appears to be feasible as well as beneficial for reducing the bit-error rate and increasing the throughput of the communication link.
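The scintillation penalty analyzed in this paper can be illustrated with a simple textbook calculation, not the paper's full formulation: on-off keying with the decision threshold midway between levels gives BER = Q(A/2σ), and averaging that instantaneous BER over lognormal irradiance fades (a common weak-turbulence assumption) shows how scintillation degrades the link. The SNR value and scintillation index below are illustrative.

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ook_ber(snr_amp):
    """OOK bit-error rate with the threshold midway between levels:
    BER = Q(A / (2*sigma)), where snr_amp = A / sigma."""
    return qfunc(snr_amp / 2.0)

def mean_ber_lognormal(snr_amp, scint_index, n=4001):
    """Average the instantaneous OOK BER over lognormal irradiance
    fades with scintillation index sigma_I^2 (unit-mean irradiance),
    via a Riemann sum over the underlying normal variable."""
    s2 = math.log(1.0 + scint_index)       # log-irradiance variance
    s = math.sqrt(s2)
    lo, hi = -8.0, 8.0
    dz = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        z = lo + i * dz
        irr = math.exp(s * z - s2 / 2.0)   # lognormal fade, E[I] = 1
        pdf = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
        total += ook_ber(snr_amp * irr) * pdf * dz
    return total

snr = 12.0                            # amplitude SNR A/sigma (illustrative)
print(ook_ber(snr))                   # BER with no scintillation
print(mean_ber_lognormal(snr, 0.5))   # fading raises the mean BER by orders of magnitude
```

Adaptive optics, in this picture, acts by shrinking the effective scintillation index, which pulls the faded average back toward the unfaded BER, consistent with the "few orders of magnitude" improvement quoted in the abstract.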
Performance and evaluation of real-time multicomputer control systems
NASA Technical Reports Server (NTRS)
Shin, K. G.
1985-01-01
Three experiments on fault tolerant multiprocessors (FTMP) were begun: (1) measurement of fault latency in FTMP; (2) validation and analysis of FTMP synchronization protocols; and (3) investigation of error propagation in FTMP.
NASA Technical Reports Server (NTRS)
Villarreal, James A.; Shelton, Robert O.
1992-01-01
The concept of a space-time neural network affords distributed temporal memory, enabling such a network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace the synaptic-connection weights of a conventional back-error-propagation neural network.
Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C
2014-03-01
Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors: discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the event-related brain potential (ERP) technique to demonstrate not only that rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity and that increased in amplitude with learning.
The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward prediction errors and the changes in amplitude of these prediction errors at the time of choice presentation and reward delivery. Our results provide further support that the computations that underlie human learning and decision-making follow reinforcement learning principles.
Fiyadh, Seef Saadi; AlSaadi, Mohammed Abdulhakim; AlOmar, Mohamed Khalid; Fayaed, Sabah Saadi; Hama, Ako R; Bee, Sharifah; El-Shafie, Ahmed
2017-11-01
The main challenge in simulating lead removal is the non-linear relationships among the process parameters. Conventional modelling techniques usually deal with this problem by linear methods. An alternative modelling technique is the artificial neural network (ANN), selected here to capture the non-linearity of the interactions among the variables. Synthesized deep eutectic solvents were used as a functionalizing agent with carbon nanotubes as adsorbents of Pb²⁺. Different parameters were used in the adsorption study, including pH (2.7 to 7), adsorbent dosage (5 to 20 mg), contact time (3 to 900 min) and Pb²⁺ initial concentration (3 to 60 mg/l). A total of 158 laboratory-scale experimental runs were used to feed and train the system. Two ANN types were designed in this work, feed-forward back-propagation and layer recurrent; the two methods are compared based on their predictive proficiency in terms of the mean square error (MSE), root mean square error, relative root mean square error, mean absolute percentage error and determination coefficient (R²) on the testing dataset. The ANN model of lead removal was subjected to accuracy determination, and the results showed an R² of 0.9956 with an MSE of 1.66 × 10⁻⁴. The maximum relative error was 14.93% for the feed-forward back-propagation neural network model.
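The error metrics used in this abstract to compare the two network types are standard and easy to state explicitly. A minimal sketch, on a tiny illustrative dataset rather than the paper's 158 runs:

```python
# MSE, RMSE, MAPE and determination coefficient R^2 on toy data.

def metrics(y_true, y_pred):
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errs) / n                        # mean square error
    rmse = mse ** 0.5                                         # root mean square error
    mape = 100.0 * sum(abs(e) / abs(t)                        # mean absolute
                       for e, t in zip(errs, y_true)) / n     # percentage error
    mean = sum(y_true) / n
    sst = sum((t - mean) ** 2 for t in y_true)
    r2 = 1.0 - sum(e * e for e in errs) / sst                 # determination coeff.
    return mse, rmse, mape, r2

y = [1.0, 2.0, 3.0, 4.0]      # illustrative targets
p = [1.1, 1.9, 3.2, 3.8]      # illustrative predictions
mse, rmse, mape, r2 = metrics(y, p)
print(mse, r2)                # ~0.025 and ~0.98
```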
Reach and speed of judgment propagation in the laboratory.
Moussaïd, Mehdi; Herzog, Stefan M; Kämmer, Juliane E; Hertwig, Ralph
2017-04-18
In recent years, a large body of research has demonstrated that judgments and behaviors can propagate from person to person. Phenomena as diverse as political mobilization, health practices, altruism, and emotional states exhibit similar dynamics of social contagion. The precise mechanisms of judgment propagation are not well understood, however, because it is difficult to control for confounding factors such as homophily or dynamic network structures. We introduce an experimental design that renders possible the stringent study of judgment propagation. In this design, experimental chains of individuals can revise their initial judgment in a visual perception task after observing a predecessor's judgment. The positioning of a very good performer at the top of a chain created a performance gap, which triggered waves of judgment propagation down the chain. We evaluated the dynamics of judgment propagation experimentally. Despite strong social influence within pairs of individuals, the reach of judgment propagation across a chain rarely exceeded a social distance of three to four degrees of separation. Furthermore, computer simulations showed that the speed of judgment propagation decayed exponentially with the social distance from the source. We show that information distortion and the overweighting of other people's errors are two individual-level mechanisms hindering judgment propagation at the scale of the chain. Our results contribute to the understanding of social-contagion processes, and our experimental method offers numerous new opportunities to study judgment propagation in the laboratory.
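The exponential decay of influence with social distance reported above can be illustrated with a deliberately simplified toy model (not the authors' model): if each person in a chain moves a fixed fraction w of the way toward their predecessor's judgment, the top performer's influence shrinks geometrically with distance d, shift(d) = gap · w^d.

```python
# Toy chain model of judgment propagation (assumed fixed influence weight w).

def propagate(gap, w, length):
    """Shift of each chain member's judgment away from the naive
    baseline, given an initial performance gap at the top of the chain."""
    shifts = []
    shift = gap
    for _ in range(length):
        shift *= w        # each handoff retains only a fraction w of the shift
        shifts.append(shift)
    return shifts

shifts = propagate(gap=10.0, w=0.5, length=6)
print(shifts)  # [5.0, 2.5, 1.25, 0.625, 0.3125, 0.15625]
```

Even with strong pairwise influence (w = 0.5), the shift falls below a tenth of the original gap within three to four handoffs, echoing the limited reach observed experimentally; the paper attributes the additional attenuation to information distortion and the overweighting of others' errors.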
Generalized fourier analyses of the advection-diffusion equation - Part II: two-dimensional domains
NASA Astrophysics Data System (ADS)
Voth, Thomas E.; Martinez, Mario J.; Christon, Mark A.
2004-07-01
Part I of this work presents a detailed multi-methods comparison of the spatial errors associated with the one-dimensional finite difference, finite element and finite volume semi-discretizations of the scalar advection-diffusion equation. In Part II we extend the analysis to two-dimensional domains and also consider the effects of wave propagation direction and grid aspect ratio on the phase speed, and the discrete and artificial diffusivities. The observed dependence of dispersive and diffusive behaviour on propagation direction makes comparison of methods more difficult relative to the one-dimensional results. For this reason, integrated (over propagation direction and wave number) error and anisotropy metrics are introduced to facilitate comparison among the various methods. With respect to these metrics, the consistent mass Galerkin and consistent mass control-volume finite element methods, and their streamline upwind derivatives, exhibit comparable accuracy, and generally out-perform their lumped mass counterparts and finite-difference based schemes. While this work can only be considered a first step in a comprehensive multi-methods analysis and comparison, it serves to identify some of the relative strengths and weaknesses of multiple numerical methods in a common mathematical framework. Published in 2004 by John Wiley & Sons, Ltd.
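The kind of Fourier (von Neumann) dispersion analysis described above has a compact one-dimensional textbook case: for the second-order centred finite-difference discretisation of pure advection, a Fourier mode exp(ikx) propagates at the modified phase speed c*/c = sin(kh)/(kh), so poorly resolved waves lag the true speed. This is the 1-D analogue, not the paper's 2-D analysis.

```python
import math

# Phase-speed ratio c*/c = sin(kh)/(kh) for the centred difference
# approximation of u_x, as a function of grid resolution.

def phase_speed_ratio(points_per_wavelength):
    kh = 2.0 * math.pi / points_per_wavelength   # non-dimensional wavenumber
    return math.sin(kh) / kh

for ppw in (4, 8, 16, 32):
    print(ppw, round(phase_speed_ratio(ppw), 4))
# 4  0.6366   <- 4 points per wavelength: waves travel at ~64% of true speed
# 8  0.9003
# 16 0.9745
# 32 0.9936   <- error shrinks quadratically with resolution
```

The directional dependence studied in Part II arises because, on a 2-D grid, kh is replaced by the projections of the wave vector onto each grid direction, so the lag varies with propagation angle and aspect ratio.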
Weare, Jonathan; Dinner, Aaron R.; Roux, Benoît
2016-01-01
A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
GUM Analysis for SIMS Isotopic Ratios in BEP0 Graphite Qualification Samples, Round 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerlach, David C.; Heasler, Patrick G.; Reid, Bruce D.
2009-01-01
This report describes GUM calculations for TIMS and SIMS isotopic ratio measurements of reactor graphite samples. These isotopic ratios are used to estimate reactor burn-up and currently consist of various ratios of U, Pu, and boron impurities in the graphite samples. The GUM calculation is a propagation-of-error methodology that assigns uncertainties (in the form of standard errors and confidence bounds) to the final estimates.
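The core of a GUM-style propagation-of-error calculation for an isotopic ratio can be sketched briefly. For r = a/b with independent uncertainties, first-order propagation gives σ_r = |r|·√((σ_a/a)² + (σ_b/b)²), and an approximate 95% confidence bound is r ± 2σ_r. The numbers below are illustrative, not from the report.

```python
import math

# First-order (GUM-style) uncertainty propagation for a ratio r = a / b,
# assuming independent inputs. Illustrative values only.

def ratio_with_uncertainty(a, sigma_a, b, sigma_b):
    r = a / b
    sigma_r = abs(r) * math.sqrt((sigma_a / a) ** 2 + (sigma_b / b) ** 2)
    return r, sigma_r

r, s = ratio_with_uncertainty(2.0, 0.02, 4.0, 0.04)   # two 1% inputs
print(r, s)                   # 0.5 with ~1.4% relative standard error
print(r - 2 * s, r + 2 * s)   # approximate 95% confidence bound
```

Note the familiar quadrature result: two independent 1% relative uncertainties combine to √2 ≈ 1.4% on the ratio, not 2%.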
Fission cross sections of 230Th and 232Th relative to 235U
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meadows, J. W.
1979-01-01
The fission cross sections of 230Th and 232Th were measured relative to 235U from near threshold to near 10 MeV. The weights of the thorium samples were determined by isotopic dilution. The weight of the uranium deposit was based on specific activity measurements of a 234U-235U mixture and low-geometry alpha counting. Corrections were made for thermal background, loss of fragments in the deposits, neutron scattering in the detector assembly, sample geometry, sample composition, and the spectrum of the neutron source. Generally the systematic errors were approximately 1%. The combined systematic and statistical errors were typically 1.5%. 17 references.
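The quoted ~1% systematic and ~1.5% combined errors are consistent with the standard quadrature combination of independent error components; a two-line check (an assumption about how the abstract's figures combine, not stated in the abstract itself):

```python
import math

# Quadrature combination of independent error components (percent):
# total = sqrt(systematic^2 + statistical^2).

def combined_error(systematic, statistical):
    return math.sqrt(systematic ** 2 + statistical ** 2)

# A 1.5% total with a 1% systematic part implies a ~1.12% statistical part.
stat = math.sqrt(1.5 ** 2 - 1.0 ** 2)
print(round(stat, 2))                       # 1.12
print(round(combined_error(1.0, stat), 2))  # 1.5
```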
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swift, Alicia L; Grogan, Brandon R; Mullens, James Allen
This work tests a systematic procedure for analyzing data acquired by the Nuclear Materials Identification System (NMIS) at Oak Ridge National Laboratory with fast-neutron imaging and high-purity germanium (HPGe) gamma spectrometry capabilities. NMIS has been under development by the US Department of Energy Office of Nuclear Verification since the mid-1990s, and prior to that by the National Nuclear Security Administration Y-12 National Security Complex, with NMIS having been used at Y-12 for template matching to confirm inventory and receipts. In the present work, a complete set of NMIS time coincidence, fast-neutron imaging, fission mapping, and HPGe gamma-ray spectrometry data was obtained from Monte Carlo simulations for a configuration of fissile and nonfissile materials. The data were then presented to an analyst with no prior knowledge of the unknown object, who applied the previously mentioned procedure to the simulated data to determine a description of the object. The best approximation indicated that the unknown object was composed of concentric cylinders: a void inside highly enriched uranium (HEU) (84.7 ± 1.9 wt % 235U), surrounded by depleted uranium, surrounded by polyethylene. The final estimate of the unknown object had the correct materials and geometry, with error in the radius estimates of material regions varying from 1.58% at best to 4.25% at worst; error in the height estimates varied from 2% to 12%. The error in the HEU enrichment estimate was 5.9 wt % (within 2.5σ of the true value). The accuracy of these determinations could be adequate for arms control applications. Future work will apply this iterative reconstructive procedure to other unknown objects to further test and refine it.
Development of Precise Lunar Orbit Propagator and Lunar Polar Orbiter's Lifetime Analysis
NASA Astrophysics Data System (ADS)
Song, Young-Joo; Park, Sang-Young; Kim, Hae-Dong; Sim, Eun-Sup
2010-06-01
To prepare for a Korean lunar orbiter mission, a precise lunar orbit propagator, the Yonsei Precise Lunar Orbit Propagator (YSPLOP), is developed. The propagator can include accelerations due to the Moon's non-spherical gravity, the point masses of the Earth, Moon, Sun, Mars and Jupiter, and solar radiation pressure. The propagator's performance is validated: propagation differences between YSPLOP and STK/Astrogator reach a maximum of about 4 m in the along-track direction over 30 days (Earth time) of propagation. It is also found that the lifetime of a lunar polar orbiter is strongly affected by the degree and order of the lunar gravity model, by a third body's gravitational attraction (especially the Earth's), and by the orbital inclination. The reliable lifetime of a circular lunar polar orbiter at about 100 km altitude is estimated to be about 160 days (Earth time). However, to estimate a reasonable lifetime for a circular lunar polar orbiter at about 100 km altitude, it is strongly recommended to consider at least degree and order 50 of the lunar gravity field. The results provided in this paper are expected to support further progress in the design of Korea's lunar orbiter missions.
Stereo Image Dense Matching by Integrating Sift and Sgm Algorithm
NASA Astrophysics Data System (ADS)
Zhou, Y.; Song, Y.; Lu, J.
2018-05-01
Semi-global matching (SGM) performs dynamic programming while treating all path directions equally. It does not consider the impact of different path directions on cost aggregation, and as the disparity search range expands, the accuracy and efficiency of the algorithm drastically decrease. This paper presents a dense matching algorithm integrating SIFT and SGM. It takes successful matching pairs found by SIFT as control points to direct the path in dynamic programming, thereby truncating error propagation. Matching accuracy is further improved by using the gradient direction of the detected feature points to modify the weights of the paths in different directions. Experimental results on the Middlebury stereo data sets and CE-3 lunar data sets demonstrate that the proposed algorithm can effectively cut off error propagation, reduce the disparity search range and improve matching accuracy.
Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More
NASA Technical Reports Server (NTRS)
Kou, Yu; Lin, Shu; Fossorier, Marc
1999-01-01
Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes had been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high-rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite-geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit-flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
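Gallager's hard-decision bit-flipping decoder mentioned above is simple enough to sketch in full. For illustration the example uses the tiny (7,4) Hamming parity-check matrix rather than a real finite-geometry LDPC code; the flipping rule is the same.

```python
# Hard-decision bit-flipping decoding on a tiny parity-check code
# (illustrative (7,4) Hamming matrix, not an actual LDPC construction).

H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def bit_flip_decode(word, h, max_iters=10):
    word = list(word)
    for _ in range(max_iters):
        # Syndrome: which parity checks are currently unsatisfied.
        syndrome = [sum(hj * wj for hj, wj in zip(row, word)) % 2 for row in h]
        if not any(syndrome):
            return word                        # all checks satisfied
        # For each bit, count the unsatisfied checks it participates in.
        counts = [
            sum(s for s, row in zip(syndrome, h) if row[i])
            for i in range(len(word))
        ]
        word[counts.index(max(counts))] ^= 1   # flip the most-suspect bit
    return word

received = [0, 0, 1, 0, 0, 0, 0]       # all-zero codeword with bit 2 flipped
print(bit_flip_decode(received, H))    # [0, 0, 0, 0, 0, 0, 0]
```

On real LDPC codes the same loop operates on a sparse H with thousands of columns; the sparseness is what keeps the per-iteration count cheap and the decoder effective.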
Data vs. information: A system paradigm
NASA Technical Reports Server (NTRS)
Billingsley, F. C.
1982-01-01
The data system designer requires data parameters and depends on the user to convert information needs into these data parameters. This conversion will be done with more or less accuracy, beginning a chain of inaccuracies that propagate through the system and, in the end, may prevent the user from converting the data received into the information required. The concept pursued here is that errors occur in various parts of the system and, having occurred, propagate to the end. Modeling of the system may allow an estimation of the effects at any point and of the final accumulated effect, and may provide a method of allocating an error budget among the system components. The selection of the various technical parameters that a data system must meet must be done in relation to the ability of the user to turn the cold, impersonal data into a live, personal decision or piece of information.
Higher-order ionospheric error at Arecibo, Millstone, and Jicamarca
NASA Astrophysics Data System (ADS)
Matteo, N. A.; Morton, Y. T.
2010-12-01
The ionosphere is a dominant source of Global Positioning System receiver range measurement error. Although dual-frequency receivers can eliminate the first-order ionospheric error, most second- and third-order errors remain in the range measurements. Higher-order ionospheric error is a function of both electron density distribution and the magnetic field vector along the GPS signal propagation path. This paper expands previous efforts by combining incoherent scatter radar (ISR) electron density measurements, the International Reference Ionosphere model, exponential decay extensions of electron densities, the International Geomagnetic Reference Field, and total electron content maps to compute higher-order error at ISRs in Arecibo, Puerto Rico; Jicamarca, Peru; and Millstone Hill, Massachusetts. Diurnal patterns, dependency on signal direction, seasonal variation, and geomagnetic activity dependency are analyzed. Higher-order error is largest at Arecibo with code phase maxima circa 7 cm for low-elevation southern signals. The maximum variation of the error over all angles of arrival is circa 8 cm.
NASA Astrophysics Data System (ADS)
Marbach, T.; Mangini, A.; Kober, B.; Schleicher, A.; Warr, L. N.
2003-04-01
Major and trace element analyses make it possible to obtain information concerning the chemical changes induced by alteration. Differences are partly petrographic because the profile crosses the granite-rhyolite contact, but they are also due to different alteration levels induced by fluid circulation along the fault system which has drained the alteration processes. The granite-rhyolite contact constitutes the primary structure. Only the most incompatible elements (Si, Al, Zr, Hf) retain their original signatures and reflect a mixing between typical granite and rhyolite lithologies across the altered zones (cataclasite). The more mobile elements show a different composition within the altered zones (cataclasite), notably a strong leaching of cations. The geochemical tracers also suggest at least one strong hydrothermal event with reducing conditions in the altered zones. The isotopic analyses delivered qualitative and temporal information. The use of several isotopic systems (Rb/Sr and U/Pb isotopes and Th/U disequilibria) reveals a complex history of polyphase fluid/rock interaction following the Permian volcanic extrusion, showing notable disturbances during the late Jurassic hydrothermal activities, the Tertiary rifting of the Rhine Graben, and more recent Quaternary alteration. The granite zone of the sampling profile has undergone an event which set up a new Rb-Sr isotopic composition and reset the Rb/Sr system, which originally corresponded to the Carboniferous intrusion ages. The Rb-Sr data of the granite samples produce a whole-rock isochron of 152 ± 5.7 Ma (2σ error), in good agreement with the well-known late Jurassic hydrothermal event (135-160 Ma). The rock evolution lines for Pb support a Tertiary hydrothermal event (54 ± 16 Ma; 1σ error), potentially connected with the development of the Rhine Graben. The profile samples have undergone uranium and thorium redistribution processes which have occurred within the last ~10^6 years.
The samples of the altered zones record a more complex history of uranium exchange with the aqueous phase. This uranium exchange is proportional to the porosity. The best approximation is reached for an exchange coefficient (λ_E) for uranium ranging from 2.5 × 10^-6 a^-1 in the middle of the altered zones to 2.5 × 10^-5 a^-1 on the sides of the altered zones.
Lee, Yoojin; Callaghan, Martina F; Nagy, Zoltan
2017-01-01
In magnetic resonance imaging, precise measurement of the longitudinal relaxation time (T1) is crucial to acquire useful information that is applicable to numerous clinical and neuroscience applications. In this work, we investigated the precision of the T1 relaxation time as measured using the variable flip angle method, with emphasis on the noise propagated from radiofrequency transmit field (B1) measurements. The analytical solution for T1 precision was derived by standard error propagation methods incorporating the noise from the three input sources: two spoiled gradient echo (SPGR) images and a B1 map. Repeated in vivo experiments were performed to estimate the total variance in T1 maps, and we compared these experimentally obtained values with the theoretical predictions to validate the established theoretical framework. Both the analytical and experimental results showed that variance in the B1 map propagated noise into the T1 maps at levels comparable to either of the two SPGR images. Improving the precision of the B1 measurements significantly reduced the variance in the estimated T1 map. The variance estimated from the repeatedly measured in vivo T1 maps agreed well with the theoretically calculated variance in T1 estimates, thus validating the analytical framework for realistic in vivo experiments. We concluded that for T1 mapping experiments, the error propagated from the B1 map must be considered. Optimizing the SPGR signals while neglecting to improve the precision of the B1 map may result in grossly overestimating the precision of the estimated T1 values.
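The standard error-propagation calculation referred to above combines independent input variances through squared partial derivatives, Var[f] ≈ Σ (∂f/∂x_i)² σ_i². A minimal sketch, using a hypothetical two-input signal model rather than the actual SPGR/B1 equations, with a Monte Carlo cross-check:

```python
import numpy as np

def propagated_var(f, x, sigmas, eps=1e-6):
    """First-order error propagation: Var[f] ≈ Σ (∂f/∂x_i)^2 σ_i^2,
    with partial derivatives estimated by central differences."""
    x = np.asarray(x, dtype=float)
    var = 0.0
    for i, s in enumerate(sigmas):
        dx = np.zeros_like(x)
        dx[i] = eps
        dfdx = (f(x + dx) - f(x - dx)) / (2 * eps)
        var += (dfdx * s) ** 2
    return var

# illustrative (hypothetical) model: a ratio of two noisy measurements
f = lambda x: x[0] / x[1]
x0 = [2.0, 4.0]          # nominal input values
sig = [0.01, 0.02]       # standard deviations of the input noise
analytic = propagated_var(f, x0, sig)

# cross-check against brute-force Monte Carlo
rng = np.random.default_rng(0)
samples = rng.normal(x0, sig, size=(200_000, 2))
mc = np.var(samples[:, 0] / samples[:, 1])
print(analytic, mc)  # the two estimates agree to within a few percent
```

The same pattern, with the VFA signal equations in place of the toy ratio, yields the paper's analytical T1 variance.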
Geodesy by radio interferometry: Water vapor radiometry for estimation of the wet delay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elgered, G.; Davis, J.L.; Herring, T.A.
1991-04-10
An important source of error in very-long-baseline interferometry (VLBI) estimates of baseline length is unmodeled variations of the refractivity of the neutral atmosphere along the propagation path of the radio signals. The authors present and discuss the method of using data from a water vapor radiometer (WVR) to correct for the propagation delay caused by atmospheric water vapor, the major cause of these variations. Data from different WVRs are compared with estimated propagation delays obtained by Kalman filtering of the VLBI data themselves. The consequences of using either WVR data or Kalman filtering to correct for atmospheric propagation delay at the Onsala VLBI site are investigated by studying the repeatability of estimated baseline lengths from Onsala to several other sites. The lengths of the baselines range from 919 to 7,941 km. The repeatability obtained for baseline length estimates shows that the methods of water vapor radiometry and Kalman filtering offer comparable accuracies when applied to VLBI observations obtained in the climate of the Swedish west coast. The use of WVR data yielded a 13% smaller weighted-root-mean-square (WRMS) scatter of the baseline length estimates compared to the use of a Kalman filter. It is also clear that the best minimum elevation angle for VLBI observations depends on the accuracy of the determinations of the total propagation delay to be used, since the error in this delay increases with increasing air mass. For use of WVR data along with accurate determinations of total surface pressure, the best minimum is about 20°; for use of a model for the wet delay based on the humidity and temperature at the ground, the best minimum is about 35°.
Nguyen, Kieu T H; Adamkiewicz, Marta A; Hebert, Lauren E; Zygiel, Emily M; Boyle, Holly R; Martone, Christina M; Meléndez-Ríos, Carola B; Noren, Karen A; Noren, Christopher J; Hall, Marilena Fitzsimons
2014-10-01
A target-unrelated peptide (TUP) can arise in phage display selection experiments as a result of a propagation advantage exhibited by the phage clone displaying the peptide. We previously characterized HAIYPRH, from the M13-based Ph.D.-7 phage display library, as a propagation-related TUP resulting from a G→A mutation in the Shine-Dalgarno sequence of gene II. This mutant was shown to propagate in Escherichia coli at a dramatically faster rate than phage bearing the wild-type Shine-Dalgarno sequence. We now report 27 additional fast-propagating clones displaying 24 different peptides and carrying 14 unique mutations. Most of these mutations are found either in or upstream of the gene II Shine-Dalgarno sequence, but still within the mRNA transcript of gene II. All 27 clones propagate at significantly higher rates than normal library phage, most within experimental error of wild-type M13 propagation, suggesting that mutations arise to compensate for the reduced virulence caused by the insertion of a lacZα cassette proximal to the replication origin of the phage used to construct the library. We also describe an efficient and convenient assay to diagnose propagation-related TUPs among peptide sequences selected by phage display. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Migdisov, A. A.; Runde, W.; Williams-Jones, A. E.
We welcome the comments provided by Dargent et al. (2018) and appreciate the effort they have made to evaluate our recently reported data on the stability of uranyl(VI) chloride complexes as a function of temperature (Migdisov et al., 2018). We also appreciate the opportunity provided by the editor to clarify issues in our paper that were not clearly articulated or in error.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; Anderson, Kevin K.; White, Amanda M.
Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences, as well as improving the ELISA microarray process, requires both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS), and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarrays. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method for reliably predicting protein concentrations and estimating their errors.
The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
Age estimates for the late quaternary high sea-stands
NASA Astrophysics Data System (ADS)
Smart, Peter L.; Richards, David A.
A database of more than 300 published alpha-counted uranium-series ages has been compiled for coral reef terraces formed by Late Pleistocene high sea-stands. The database was screened to eliminate unreliable age estimates ({230Th}/{232Th} < 20, calcite > 5%) and those without quoted errors, and a distributed error frequency curve was produced. This curve can be considered as a finite mixture model comprising k component normal distributions, each with a weighting α. By using an expectation-maximisation algorithm, the mean and standard deviation of the component distributions, each corresponding to a high sea level event, were estimated. Eight high sea-stands with means and standard deviations of 129.0 ± 33.0, 123.0 ± 13.0, 102.5 ± 2.0, 81.5 ± 5.0, 61.5 ± 6.0, 50.0 ± 1.0, 40.5 ± 5.0 and 33.0 ± 2.5 ka were resolved. The standard deviations are generally larger than the values quoted for individual age estimates. Whilst this may be due to diagenetic effects, especially for the older corals, it is argued that in many cases geological evidence clearly indicates that the high stands are multiple events, often not resolvable at sites with low rates of uplift. The uranium-series dated coral-reef terrace chronology shows good agreement with independent chronologies derived for Antarctic ice cores, although the resolution for the latter is better. Agreement with orbitally-tuned deep-sea core records is also good, but it is argued that Isotope Stage 5e is not a single event, as recorded in the cores, but a multiple event spanning some 12 ka. The much earlier age for Isotope Stage 5e given by Winograd et al. (1988) is not supported by the coral reef data, but further mass-spectrometric uranium-series dating is needed to permit better chronological resolution.
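An expectation-maximisation fit of a k-component normal mixture, as used above for the age-frequency curve, can be sketched as follows. The data here are synthetic ages for two hypothetical sea-stand events; the quantile-based initialisation is an implementation choice, not from the paper:

```python
import numpy as np

def em_gaussian_mixture(ages, k, n_iter=200):
    """Fit a k-component 1-D normal mixture by expectation-maximisation.
    Returns component weights, means, and standard deviations."""
    ages = np.asarray(ages, dtype=float)
    w = np.full(k, 1.0 / k)
    mu = np.quantile(ages, np.linspace(0.1, 0.9, k))  # spread-out initial means
    sd = np.full(k, ages.std())
    for _ in range(n_iter):
        # E-step: responsibility of each component for each age
        pdf = (np.exp(-(ages[:, None] - mu) ** 2 / (2 * sd**2))
               / (sd * np.sqrt(2 * np.pi)))
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and spreads
        n = r.sum(axis=0)
        w = n / len(ages)
        mu = (r * ages[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (ages[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sd

# synthetic "age database": two hypothetical sea-stand events
rng = np.random.default_rng(1)
ages = np.concatenate([rng.normal(123.0, 3.0, 150),
                       rng.normal(81.5, 2.0, 100)])
w, mu, sd = em_gaussian_mixture(ages, k=2)
print(np.sort(mu))  # component means recovered near 81.5 and 123.0 ka
```

With well-separated components, as in this toy, EM converges quickly; overlapping sea-stand events like those in the real database require more data and careful initialisation.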
Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2017-05-01
The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data was evaluated in a bluff body wake. To allow for the ground truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
NASA Astrophysics Data System (ADS)
Olivier, Thomas; Billard, Franck; Akhouayri, Hassan
2004-06-01
Self-focusing is one of the dramatic phenomena that may occur during the propagation of a high-power laser beam in a nonlinear material. This phenomenon leads to a degradation of the wave front and may also lead to photoinduced damage of the material. Realistic simulations of the propagation of high-power laser beams require an accurate knowledge of the nonlinear refractive index γ. In the particular case of fused silica and in the nanosecond regime, it seems that electronic mechanisms as well as electrostriction and thermal effects can lead to a significant refractive index variation. Compared to the different methods used to measure this parameter, the Z-scan method is simple, offers good sensitivity, and may give absolute measurements if the incident beam is accurately characterized. However, this method requires a very good knowledge of the incident beam and of its propagation inside a nonlinear sample. We used a split-step propagation algorithm to simulate Z-scan curves for arbitrary beam shape, sample thickness, and nonlinear phase shift. According to our simulations and a rigorous analysis of the Z-scan measured signal, it appears that some unwarranted approximations lead to very significant errors. Thus, by reducing possible errors in the interpretation of Z-scan experimental studies, we performed accurate measurements of the nonlinear refractive index of fused silica that show the significant contribution of nanosecond mechanisms.
Tanner, Allan B.; Moxham, Robert M.; Senftle, F.E.
1977-01-01
Two sealed sondes, using germanium gamma-ray detectors cooled by melting propane, have been field tested to depths of 79 m in water-filled boreholes at the Pawnee Uranium Mine in Bee Co., Texas. When used as total-count devices, the sondes are comparable in logging speed and counting rate with conventional scintillation detectors for locating zones of high radioactivity. When used with a multichannel analyzer, the sondes are detectors with such high resolution that individual lines from the complex spectra of the uranium and thorium series can be distinguished. Gamma rays from each group of the uranium series can be measured in ore zones, permitting determination of the state of equilibrium at each measurement point. Series of 10-minute spectra taken at 0.3- to 0.5-m intervals in several holes showed zones where maxima from the uranium group and from the 222Rn group were displaced relative to each other. Apparent excesses of 230Th at some locations suggest that uranium-group concentrations at those locations were severalfold greater some tens of kiloyears ago. At the current state of development, a 10-minute count yields a sensitivity of about 80 ppm U3O8. Data reduction could in practice be accomplished in about 5 minutes. The result is practically unaffected by disequilibrium or radon contamination. In comparison with core assay, high-resolution spectrometry samples a larger volume; avoids problems due to incomplete core recovery, loss of friable material to drilling fluids, and errors in depth and marking; and permits use of less expensive drilling methods. Because gamma rays from the radionuclides are accumulated simultaneously, it also avoids the problems inherent in trying to correlate logs made in separate runs with different equipment. Continuous-motion delayed-gamma activation by a 163-µg 252Cf neutron source attached to the sonde yielded poor sensitivity.
A better neutron-activation method, in which the sonde is moved in steps so as to place the detector at the previous activation point, could not be evaluated because of equipment failure.
The Robustness of Acoustic Analogies
NASA Technical Reports Server (NTRS)
Freund, J. B.; Lele, S. K.; Wei, M.
2004-01-01
Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q) = 0 into a nominal sound source S(q) and a sound propagation operator L such that L(q) = S(q), where q is the vector of flow variables. In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S cannot be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.
NASA Astrophysics Data System (ADS)
Flores Orozco, AdriáN.; Williams, Kenneth H.; Long, Philip E.; Hubbard, Susan S.; Kemna, Andreas
2011-09-01
Experiments at the Department of Energy's Integrated Field Research Challenge (IFRC) site near Rifle, Colorado, have demonstrated the ability to remove uranium from groundwater by stimulating the growth and activity of Geobacter species through acetate amendment. Prolonging the activity of these strains in order to optimize uranium bioremediation has prompted the development of minimally invasive and spatially extensive monitoring methods diagnostic of their in situ activity and the end products of their metabolism. Here we demonstrate the use of complex resistivity imaging for monitoring biogeochemical changes accompanying stimulation of indigenous aquifer microorganisms during and after a prolonged period (100+ days) of acetate injection. A thorough raw data statistical analysis of discrepancies between normal and reciprocal measurements and incorporation of a new power law phase-error model in the inversion were used to significantly improve the quality of the resistivity phase images over those obtained during previous monitoring experiments at the Rifle IFRC site. The imaging results reveal spatiotemporal changes in the phase response of aquifer sediments, which correlate with increases in Fe(II) and precipitation of metal sulfides (e.g., FeS) following the iterative stimulation of iron- and sulfate-reducing microorganisms. Only modest changes in resistivity magnitude were observed over the monitoring period. The largest phase anomalies (>40 mrad) were observed hundreds of days after halting acetate injection, in conjunction with accumulation of Fe(II) in the presence of residual FeS minerals, reflecting preservation of geochemically reduced conditions in the aquifer, a prerequisite for ensuring the long-term stability of immobilized, redox-sensitive contaminants such as uranium.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of a numerical simulation by estimating the numerical approximation error, computational-model-induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
Parallel Implicit Runge-Kutta Methods Applied to Coupled Orbit/Attitude Propagation
NASA Astrophysics Data System (ADS)
Hatten, Noble; Russell, Ryan P.
2017-12-01
A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is applied to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. However, linear-multistep methods are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be applied across multiple trajectories.
Ueda, Michihito; Nishitani, Yu; Kaneko, Yukihiro; Omote, Atsushi
2014-01-01
To realize analog artificial neural network hardware, the circuit element for the synapse function is important because the number of synapse elements is much larger than that of neuron elements. One of the candidates for this synapse element is a ferroelectric memristor. This device functions as a voltage-controllable variable resistor, which can be applied as a synapse weight. However, its conductance shows hysteresis characteristics and dispersion with respect to the input voltage. Therefore, the conductance values vary according to the history of the height and the width of the applied pulse voltage. Due to the difficulty of controlling the conductance accurately, it is not easy to apply the back-propagation learning algorithm to neural network hardware having memristor synapses. To solve this problem, we proposed and simulated a learning operation procedure as follows. Employing a weight perturbation technique, we derived the error change. When the error decreased, the next pulse voltage was updated according to the back-propagation learning algorithm. If the error increased, the amplitude of the next voltage pulse was set in such a way as to cause a similar memristor conductance but in the opposite voltage-scanning direction. By this operation, we could eliminate the hysteresis and confirmed that the simulation of the learning operation converged. We also incorporated conductance dispersion numerically in the simulation. We examined the probability that the error decreased to a designated value within a predetermined loop number. The ferroelectric has the characteristic that the magnitude of polarization does not become smaller when voltages having the same polarity are applied. This characteristic greatly improved the probability even when the learning rate was small, provided the magnitude of the dispersion was adequate. Because the dispersion of analog circuit elements is inevitable, this learning operation procedure is useful for analog neural network hardware. PMID:25393715
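The weight-perturbation idea, estimating the error gradient by nudging each weight and observing the resulting output error rather than computing analytic derivatives, can be illustrated on a plain linear neuron. This is a hedged sketch of the general technique, not the authors' memristor-specific procedure; all names and values are illustrative:

```python
import numpy as np

def weight_perturbation_step(w, x, y, lr=0.05, delta=1e-3):
    """One weight-perturbation update for a linear neuron y_hat = w @ x.
    The error gradient is estimated by perturbing each weight in turn and
    measuring the change in mean squared error -- only the scalar error is
    observed, as in hardware where analytic derivatives are unavailable."""
    err = lambda w_: float(np.mean((w_ @ x - y) ** 2))
    base = err(w)
    grad = np.zeros_like(w)
    for i in range(len(w)):
        wp = w.copy()
        wp[i] += delta
        grad[i] = (err(wp) - base) / delta  # finite-difference slope
    return w - lr * grad

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 200))          # 200 training inputs, 3 features
true_w = np.array([1.0, -2.0, 0.5])    # hypothetical target weights
y = true_w @ x
w = np.zeros(3)
for _ in range(500):
    w = weight_perturbation_step(w, x, y)
print(np.round(w, 2))  # converges toward [1.0, -2.0, 0.5]
```

In hardware, the "perturbation" would be a pulse applied to a memristor and the error read from the circuit output; the update rule is otherwise the same.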
A Q-Band Free-Space Characterization of Carbon Nanotube Composites
Hassan, Ahmed M.; Garboczi, Edward J.
2016-01-01
We present a free-space measurement technique for non-destructive non-contact electrical and dielectric characterization of nano-carbon composites in the Q-band frequency range of 30 GHz to 50 GHz. The experimental system and error correction model accurately reconstruct the conductivity of composite materials that are either thicker than the wave penetration depth, and therefore exhibit negligible microwave transmission (less than −40 dB), or thinner than the wave penetration depth and, therefore, exhibit significant microwave transmission. This error correction model implements a fixed wave propagation distance between antennas and corrects the complex scattering parameters of the specimen from two references, an air slab having geometrical propagation length equal to that of the specimen under test, and a metallic conductor, such as an aluminum plate. Experimental results were validated by reconstructing the relative dielectric permittivity of known dielectric materials and then used to determine the conductivity of nano-carbon composite laminates. This error correction model can simplify routine characterization of thin conducting laminates to just one measurement of scattering parameters, making the method attractive for research, development, and for quality control in the manufacturing environment. PMID:28057959
Boundary identification and error analysis of shocked material images
NASA Astrophysics Data System (ADS)
Hock, Margaret; Howard, Marylesa; Cooper, Leora; Meehan, Bernard; Nelson, Keith
2017-06-01
To compute quantities such as pressure and velocity from laser-induced shock waves propagating through materials, high-speed images are captured and analyzed. Shock images typically display high noise and spatially-varying intensities, causing conventional analysis techniques to have difficulty identifying boundaries in the images without making significant assumptions about the data. We present a novel machine learning algorithm that efficiently segments, or partitions, images with high noise and spatially-varying intensities, and provides error maps that describe a level of uncertainty in the partitioning. The user trains the algorithm by providing locations of known materials within the image but no assumptions are made on the geometries in the image. The error maps are used to provide lower and upper bounds on quantities of interest, such as velocity and pressure, once boundaries have been identified and propagated through equations of state. This algorithm will be demonstrated on images of shock waves with noise and aberrations to quantify properties of the wave as it progresses. DOE/NV/25946-3126 This work was done by National Security Technologies, LLC, under Contract No. DE- AC52-06NA25946 with the U.S. Department of Energy and supported by the SDRD Program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X.; Wilcox, G.L.
1993-12-31
We have implemented large-scale back-propagation neural networks on a 544-node Connection Machine, CM-5, using the C language in MIMD mode. The program running on 512 processors performs back-propagation learning at 0.53 Gflops, which provides 76 million connection updates per second. We have applied the network to the prediction of protein tertiary structure from sequence information alone. A neural network with one hidden layer and 40 million connections is trained to learn the relationship between sequence and tertiary structure. The trained network yields predicted structures of some proteins on which it has not been trained, given only their sequences. Presentation of the Fourier transform of the sequences accentuates periodicity in the sequence and yields good generalization with greatly increased training efficiency. Training simulations with a large, heterologous set of protein structures (111 proteins) converge to solutions with under 2% RMS residual error within the training set (random responses give an RMS error of about 20%). Presentation of 15 sequences of related proteins in a testing set of 24 proteins yields predicted structures with less than 8% RMS residual error, indicating good apparent generalization.
An integral formulation for wave propagation on weakly non-uniform potential flows
NASA Astrophysics Data System (ADS)
Mancini, Simone; Astley, R. Jeremy; Sinayoko, Samuel; Gabard, Gwénaël; Tournour, Michel
2016-12-01
An integral formulation for acoustic radiation in moving flows is presented. It is based on a potential formulation for acoustic radiation on weakly non-uniform subsonic mean flows. This work is motivated by the absence of suitable kernels for wave propagation on non-uniform flow. The integral solution is formulated using a Green's function obtained by combining the Taylor and Lorentz transformations. Although most conventional approaches based on either transform solve the Helmholtz problem in a transformed domain, the current Green's function and associated integral equation are derived in the physical space. A dimensional error analysis is developed to identify the limitations of the current formulation. Numerical applications are performed to assess the accuracy of the integral solution. It is tested as a means of extrapolating a numerical solution available on the outer boundary of a domain to the far field, and as a means of solving scattering problems by rigid surfaces in non-uniform flows. The results show that the error associated with the physical model deteriorates with increasing frequency and mean flow Mach number. However, the error is generated only in the domain where mean flow non-uniformities are significant and is constant in regions where the flow is uniform.
A map overlay error model based on boundary geometry
Gaeuman, D.; Symanzik, J.; Schmidt, J.C.
2005-01-01
An error model for quantifying the magnitudes and variability of errors generated in the areas of polygons during spatial overlay of vector geographic information system layers is presented. Numerical simulation of polygon boundary displacements was used to propagate coordinate errors to spatial overlays. The model departs from most previous error models in that it incorporates spatial dependence of coordinate errors at the scale of the boundary segment. It can be readily adapted to match the scale of error-boundary interactions responsible for error generation on a given overlay. The area of error generated by overlay depends on the sinuosity of polygon boundaries, as well as the magnitude of the coordinate errors on the input layers. Asymmetry in boundary shape has relatively little effect on error generation. Overlay errors are affected by real differences in boundary positions on the input layers, as well as errors in the boundary positions. Real differences between input layers tend to compensate for much of the error generated by coordinate errors. Thus, the area of change measured on an overlay layer produced by the XOR overlay operation will be more accurate if the area of real change depicted on the overlay is large. The model presented here considers these interactions, making it especially useful for estimating errors in studies of landscape change over time. © 2005 The Ohio State University.
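The Monte Carlo idea behind the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' model: it assumes independent Gaussian coordinate errors on every vertex, whereas the paper emphasizes spatial dependence at the boundary-segment scale.

```python
import math
import random

def shoelace_area(pts):
    """Signed polygon area via the shoelace formula."""
    s = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return 0.5 * s

def area_error_std(pts, coord_sigma, trials=5000, seed=1):
    """Monte Carlo estimate of the area error caused by independent
    Gaussian coordinate errors (std = coord_sigma) on every vertex."""
    rng = random.Random(seed)
    base = shoelace_area(pts)
    errs = []
    for _ in range(trials):
        noisy = [(x + rng.gauss(0, coord_sigma), y + rng.gauss(0, coord_sigma))
                 for x, y in pts]
        errs.append(shoelace_area(noisy) - base)
    mean = sum(errs) / trials
    return math.sqrt(sum((e - mean) ** 2 for e in errs) / (trials - 1))
```

For a unit square with coordinate errors of 0.01, the analytic value is 0.01·√2 ≈ 0.014, which the simulation reproduces; boundary sinuosity increases the vertex count and hence the propagated area error, as the abstract notes.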
The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates
Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin
2011-01-01
A neural network data-mining model is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). By using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors in GOES SST products in the tropical Pacific. The accuracy of the SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is reduced from 0.66 K to 0.44 K and the MAPE is 1.3%. PMID:22164030
Channel simulation to facilitate mobile-satellite communications research
NASA Technical Reports Server (NTRS)
Davarian, Faramaz
1987-01-01
The mobile-satellite-service channel simulator, a facility for end-to-end hardware simulation of mobile satellite communications links, is discussed. Propagation effects, Doppler, interference, band limiting, satellite nonlinearity, and thermal noise have been incorporated into the simulator. The propagation environment in which the simulator needs to operate and the architecture of the simulator are described. The simulator is composed of: a mobile/fixed transmitter, interference transmitters, a propagation path simulator, a spacecraft, and a fixed/mobile receiver. Data from application experiments conducted with the channel simulator are presented; the noise conversion technique to evaluate interference effects, the error floor phenomenon of digital multipath fading links, and the fade margin associated with a noncoherent receiver are examined. Diagrams of the simulator are provided.
Wideband propagation measurements at 30.3 GHz through a pecan orchard in Texas
NASA Astrophysics Data System (ADS)
Papazian, Peter B.; Jones, David L.; Espeland, Richard H.
1992-09-01
Wideband propagation measurements were made in a pecan orchard in Texas during April and August of 1990 to examine the propagation characteristics of millimeter-wave signals through vegetation. Measurements were made on tree-obstructed paths with and without leaves. The study presents narrowband attenuation data at 9.6 and 28.8 GHz as well as wideband impulse response measurements at 30.3 GHz. The wideband probe (Violette et al., 1983) provides the amplitude and delay of reflected and scattered signals, as well as the bit-error rate. This is accomplished using a 500 Mbit/s pseudo-random code to BPSK-modulate a 28.8 GHz carrier. The channel impulse response is then extracted by cross-correlating the received pseudo-random sequence with a locally generated replica.
Niazi, Ali; Khorshidi, Neda; Ghaemmaghami, Pegah
2015-01-25
In this study, an analytical procedure based on microwave-assisted dispersive liquid-liquid microextraction (MA-DLLME) and spectrophotometry coupled with chemometric methods is proposed to determine uranium. In the proposed method, 4-(2-pyridylazo) resorcinol (PAR) is used as a chelating agent, and chloroform and ethanol are selected as the extraction and dispersive solvents, respectively. The optimization strategy is carried out using two-level full factorial designs. Results of the two-level full factorial design (2(4)) based on an analysis of variance demonstrated that the pH, the concentration of PAR, and the amounts of dispersive and extraction solvents are statistically significant. Optimal conditions for these variables are obtained using a Box-Behnken design. Under the optimum conditions, the calibration graphs are linear in the range of 20.0-350.0 ng mL(-1) with a detection limit of 6.7 ng mL(-1) (3σB/slope), and the enrichment factor of this method for uranium reaches 135. The relative standard deviation (R.S.D.) is 1.64% (n=7, c=50 ng mL(-1)). Partial least squares (PLS) modeling was used for multivariate calibration of the spectrophotometric data. Orthogonal signal correction (OSC) was used for preprocessing of the data matrices, and the prediction results of the model with and without OSC were statistically compared. The MA-DLLME-OSC-PLS method is presented for the first time in this study. The root mean square errors of prediction (RMSEP) for uranium determination using the PLS and OSC-PLS models were 4.63 and 0.98, respectively. This procedure allows the determination of uranium in synthetic and real samples such as waste water with good reliability. Copyright © 2014. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Višňák, Jakub; Steudtner, Robin; Kassahun, Andrea; Hoth, Nils
2017-09-01
Natural waters' uranium level monitoring is of great importance for health and environmental protection. One possible detection method is Time-Resolved Laser-Induced Fluorescence Spectroscopy (TRLFS), which offers the possibility to distinguish different uranium species. The analytical identification of aqueous uranium species in natural water samples is of distinct importance since individual species differ significantly in sorption properties and mobility in the environment. Samples originate from former uranium mine sites and have been provided by Wismut GmbH, Germany. They have been characterized by total elemental concentrations and TRLFS spectra. Uranium in the samples is supposed to be in the form of uranyl(VI) complexes, mostly with carbonate (CO32-) and bicarbonate (HCO3-) and to a lesser extent with sulphate (SO42-), arsenate (AsO43-), hydroxo (OH-), nitrate (NO3-) and other ligands. The presence of alkaline earth metal dications (M = Ca2+, Mg2+, Sr2+) causes most of the uranyl to form ternary complex species, e.g. Mn(UO2)(CO3)3(2n-4) (n ∈ {1, 2}). Among luminescence-quenching species, Cl- and Fe2+ should be mentioned. Measurements have been done under cryogenic conditions to increase the luminescence signal. Data analysis has been based on Singular Value Decomposition with a monoexponential fit of the corresponding loadings (for separate TRLFS spectra, the "Factor Analysis of Time Series" (FATS) method) and on Parallel Factor Analysis (PARAFAC, all data analysed simultaneously). From the individual component spectra, excitation energies T00, uranyl symmetric-mode vibrational frequencies ωgs and excitation-driven U-Oyl bond elongations ΔR have been determined and compared with quasirelativistic (TD)DFT/B3LYP theoretical predictions to cross-check the experimental data interpretation. Note to the reader: Several errors were present in the initial version of this article. This new version, published on 23 October 2017, contains all the corrections.
CEMERLL: The Propagation of an Atmosphere-Compensated Laser Beam to the Apollo 15 Lunar Array
NASA Technical Reports Server (NTRS)
Fugate, R. Q.; Leatherman, P. R.; Wilson, K. E.
1997-01-01
Adaptive optics techniques can be used to realize a robust low bit-error-rate link by mitigating the atmosphere-induced signal fades in optical communications links between ground-based transmitters and deep-space probes.
Research Effort in Atmospheric Propagation.
velocity and air mean free path on wire microthermal measurements was reported. The results were that the procedure of calibrating a microthermal ...molecular mean free path is larger can increase the error another 4%. A discussion of refractive index spectra obtained from airborne microthermal
Statistical error propagation in ab initio no-core full configuration calculations of light nuclei
Navarro Pérez, R.; Amaro, J. E.; Ruiz Arriola, E.; ...
2015-12-28
We propagate the statistical uncertainty of experimental NN scattering data into the binding energies of 3H and 4He. We also study the sensitivity of the magnetic moment and proton radius of 3H to changes in the NN interaction. The calculations are made with the no-core full configuration method in a sufficiently large harmonic oscillator basis. For these light nuclei we obtain ΔE_stat(3H) = 0.015 MeV and ΔE_stat(4He) = 0.055 MeV.
NASA Technical Reports Server (NTRS)
Choe, C. Y.; Tapley, B. D.
1975-01-01
A method proposed by Potter of applying the Kalman-Bucy filter to the problem of estimating the state of a dynamic system is described, in which the square root of the state error covariance matrix is used to process the observations. A new technique which propagates the covariance square root matrix in lower triangular form is given for the discrete observation case. The technique is faster than previously proposed algorithms and is well-adapted for use with the Carlson square root measurement algorithm.
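The time-propagation step for a lower-triangular covariance square root can be illustrated generically. This is a sketch of one standard approach (QR-based retriangularization), not Potter's algorithm or the paper's specific technique; the dynamics and noise values below are assumptions for illustration.

```python
import numpy as np

def propagate_sqrt_cov(Phi, S, Q):
    """Time-propagate a lower-triangular covariance square root S
    (P = S @ S.T) through state transition Phi with process noise Q.
    Stacking [Phi @ S, chol(Q)] and retriangularizing via QR yields the
    new lower-triangular root without ever forming P explicitly, which
    preserves symmetry and positive-definiteness numerically."""
    L = np.linalg.cholesky(Q)
    M = np.hstack([Phi @ S, L])       # P_new = M @ M.T
    _, R = np.linalg.qr(M.T)          # M.T = Q_ @ R  =>  M @ M.T = R.T @ R
    S_new = R.T                       # lower triangular
    signs = np.sign(np.diag(S_new))   # the root is unique up to column signs
    signs[signs == 0] = 1.0
    return S_new * signs              # make the diagonal non-negative

# example: a simple constant-velocity model (assumed, for illustration)
Phi = np.array([[1.0, 0.1], [0.0, 1.0]])
P0 = np.diag([1.0, 0.04])
S0 = np.linalg.cholesky(P0)
Q = 1e-3 * np.eye(2)
S1 = propagate_sqrt_cov(Phi, S0, Q)   # satisfies S1 @ S1.T = Phi P0 Phi.T + Q
```

Working with S instead of P roughly doubles the effective numerical precision of the filter, which is the motivation for square-root formulations such as Potter's.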
Preconditioning the Helmholtz Equation for Rigid Ducts
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Kreider, Kevin L.
1998-01-01
An innovative hyperbolic preconditioning technique is developed for the numerical solution of the Helmholtz equation which governs acoustic propagation in ducts. Two pseudo-time parameters are used to produce an explicit iterative finite difference scheme. This scheme eliminates the large matrix storage requirements normally associated with numerical solutions to the Helmholtz equation. The solution procedure is very fast when compared to other transient and steady methods. Optimization and an error analysis of the preconditioning factors are presented. For validation, the method is applied to sound propagation in a 2D semi-infinite hard wall duct.
Applying Intelligent Algorithms to Automate the Identification of Error Factors.
Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han
2018-05-03
Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying these defects as medical error factors is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task that requires an analyst with a professional medical background, so a method is needed that extracts medical error factors while reducing the extraction difficulty. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted; these were found to be closely related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Compared to BPNN, partial least squares regression and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy, being able to promptly identify the error factors from the error-related items. The combination of "error-related items, their different levels, and the GA-BPNN model" was proposed as an error-factor identification technology that can automatically identify medical error factors.
Analysis of the “naming game” with learning errors in communications
NASA Astrophysics Data System (ADS)
Lou, Yang; Chen, Guanrong
2015-07-01
Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in naming game as well as in human language development from a network science perspective.
NASA Technical Reports Server (NTRS)
Warner, Joseph D.; Theofylaktos, Onoufrios
2012-01-01
A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
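The final step described above can be sketched under the assumption that the decision variable is Gaussian, in which case the BER is the tail probability Q(mean/std). This is a generic textbook relation, not the paper's full S-parameter procedure; the mapping from measured S-parameters to the decision variable is omitted.

```python
import math

def ber_from_snr(mean, std):
    """BER of a binary decision corrupted by zero-mean Gaussian noise:
    the Gaussian tail probability Q(mean/std), computed via erfc."""
    return 0.5 * math.erfc(mean / (std * math.sqrt(2.0)))
```

For example, a decision-variable mean three standard deviations above the threshold gives Q(3) ≈ 1.35e-3, and the BER falls off steeply as the mean-to-noise ratio grows.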
The Use of Neural Networks for Determining Tank Routes
1992-09-01
[Figure 1: Neural Network Architecture] The back-error propagation technique iteratively assigns weights to connections, computes the errors... neurons as the start. From that we decided to try 4, 6, 8, 10, 12, 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90 and 100, or until it was obvious that
Huizinga, Richard J.
2011-01-01
The size of the scour holes observed at the surveyed sites likely was affected by the low to moderate flow conditions on the Missouri and Mississippi Rivers at the time of the surveys. The scour holes likely would be larger during conditions of increased flow. Artifacts of horizontal positioning errors were present in the data, but an analysis of the surveys indicated that most of the bathymetric data have a total propagated error of less than 0.33 foot.
Control of secondary electrons from ion beam impact using a positive potential electrode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowley, T. P., E-mail: tpcrowley@xanthotechnologies.com; Demers, D. R.; Fimognari, P. J.
2016-11-15
Secondary electrons emitted when an ion beam impacts a detector can amplify the ion beam signal, but also introduce errors if electrons from one detector propagate to another. A potassium ion beam and a detector comprised of ten impact wires, four split-plates, and a pair of biased electrodes were used to demonstrate that a low-voltage, positive electrode can be used to maintain the beneficial amplification effect while greatly reducing the error introduced from the electrons traveling between detector elements.
Developing a confidence metric for the Landsat land surface temperature product
NASA Astrophysics Data System (ADS)
Laraby, Kelly G.; Schott, John R.; Raqueno, Nina
2016-05-01
Land Surface Temperature (LST) is an important Earth system data record that is useful to fields such as change detection, climate research, environmental monitoring, and smaller scale applications such as agriculture. Certain Earth-observing satellites can be used to derive this metric, and it would be extremely useful if such imagery could be used to develop a global product. Through the support of the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS), a LST product for the Landsat series of satellites has been developed. Currently, it has been validated for scenes in North America, with plans to expand to a trusted global product. For ideal atmospheric conditions (e.g. stable atmosphere with no clouds nearby), the LST product underestimates the surface temperature by an average of 0.26 K. When clouds are directly above or near the pixel of interest, however, errors can extend to several Kelvin. As the product approaches public release, our major goal is to develop a quality metric that will provide the user with a per-pixel map of estimated LST errors. There are several sources of error that are involved in the LST calculation process, but performing standard error propagation is a difficult task due to the complexity of the atmospheric propagation component. To circumvent this difficulty, we propose to utilize the relationship between cloud proximity and the error seen in the LST process to help develop a quality metric. This method involves calculating the distance to the nearest cloud from a pixel of interest in a scene, and recording the LST error at that location. Performing this calculation for hundreds of scenes allows us to observe the average LST error for different ranges of distances to the nearest cloud. This paper describes this process in full, and presents results for a large set of Landsat scenes.
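The binning procedure described in the last paragraph can be sketched as follows. This is an illustrative sketch only: the function name, bin width, and sample format are assumptions, not the product's actual implementation.

```python
import statistics

def mean_error_by_cloud_distance(samples, bin_width_km=5.0):
    """Group (distance_to_nearest_cloud_km, lst_error_K) samples into
    distance bins and return the mean absolute LST error per bin,
    keyed by the lower edge of each bin."""
    bins = {}
    for dist, err in samples:
        key = int(dist // bin_width_km)
        bins.setdefault(key, []).append(abs(err))
    return {key * bin_width_km: statistics.mean(v)
            for key, v in sorted(bins.items())}
```

Run over hundreds of scenes, the resulting per-bin averages form a lookup table from cloud proximity to expected LST error, which is the basis of the proposed per-pixel quality metric.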
Deductive Verification of Cryptographic Software
NASA Technical Reports Server (NTRS)
Almeida, Jose Barcelar; Barbosa, Manuel; Pinto, Jorge Sousa; Vieira, Barbara
2009-01-01
We report on the application of an off-the-shelf verification platform to the RC4 stream cipher cryptographic software implementation (as available in the openSSL library), and introduce a deductive verification technique based on self-composition for proving the absence of error propagation.
Performance Characterization of an Instrument.
ERIC Educational Resources Information Center
Salin, Eric D.
1984-01-01
Describes an experiment designed to teach students to apply the same statistical awareness to instrumentation they commonly apply to classical techniques. Uses propagation of error techniques to pinpoint instrumental limitations and breakdowns and to demonstrate capabilities and limitations of volumetric and gravimetric methods. Provides lists of…
Simulations of a PSD Plastic Neutron Collar for Assaying Fresh Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hausladen, Paul; Newby, Jason; McElroy, Robert Dennis
The potential performance of a notional active coincidence collar for assaying uranium fuel based on segmented detectors constructed from the new PSD plastic fast organic scintillator with pulse shape discrimination capability was investigated in simulation. Like the International Atomic Energy Agency's present Uranium Neutron Collar for LEU (UNCL), the PSD plastic collar would also function by stimulating fission in the 235U content of the fuel with a moderated 241Am/Li neutron source and detecting instances of induced fission via neutron coincidence counting. In contrast to the moderated detectors of the UNCL, the fast time scale of detection in the scintillator eliminates statistical errors due to accidental coincidences that limit the performance of the UNCL. However, the potential to detect a single neutron multiple times historically has been one of the properties of organic scintillator detectors that has prevented their adoption for international safeguards applications. Consequently, as part of the analysis of simulated data, a method that takes advantage of the position and timing resolution of segmented detectors was developed by which true neutron-neutron coincidences can be distinguished from inter-detector scatter. Then, the performance of the notional simulated coincidence collar was evaluated for assaying a variety of fresh fuels, including some containing burnable poisons and partial defects. In these simulations, particular attention was paid to the analysis of fast mode measurements. In fast mode, a Cd liner is placed inside the collar to shield the fuel from the interrogating source and detector moderators, thereby eliminating the thermal neutron flux that is most sensitive to the presence of burnable poisons that are ubiquitous in modern nuclear fuels. The simulations indicate that the predicted precision of fast mode measurements is similar to what can be achieved by the present UNCL in thermal mode.
For example, the statistical accuracy of a ten-minute measurement of fission coincidences collected in fast mode will be approximately 1% for most fuels of interest, yielding a ~1.4% error after subtraction of a five-minute measurement of the spontaneous fissions from 238U in the fuel, a ~2% error in analyzed linear density after accounting for the slope of the calibration curve, and a ~2.9% total error after addition of an assumed systematic error of 2%.
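The quoted error budget is consistent with combining independent relative errors in quadrature. The sketch below reproduces the arithmetic; the assumption that the subtracted five-minute background measurement also contributes about 1% is mine, inferred from the numbers, not stated in the source.

```python
import math

def quadrature(*components):
    """Combine independent relative (percent) errors in quadrature."""
    return math.sqrt(sum(c * c for c in components))

# reproducing the abstract's numbers (percent relative errors), assuming
# the subtracted five-minute background measurement contributes ~1%:
after_subtraction = quadrature(1.0, 1.0)   # ≈1.41%, the quoted ~1.4%
total = quadrature(2.0, 2.0)               # ≈2.83%, close to the quoted ~2.9%
                                           # (2% linear-density + 2% systematic)
```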
Characterization of electrophysiological propagation by multichannel sensors
Bradshaw, L. Alan; Kim, Juliana H.; Somarajan, Suseela; Richards, William O.; Cheng, Leo K.
2016-01-01
Objective: The propagation of electrophysiological activity measured by multichannel devices could have significant clinical implications. Gastric slow waves normally propagate along longitudinal paths that are evident in recordings of serosal potentials and transcutaneous magnetic fields. We employed a realistic model of gastric slow wave activity to simulate the transabdominal magnetogastrogram (MGG) recorded in a multichannel biomagnetometer and to determine characteristics of electrophysiological propagation from MGG measurements. Methods: Using MGG simulations of slow wave sources in a realistic abdomen (both superficial and deep sources) and in a horizontally-layered volume conductor, we compared two analytic methods (Second Order Blind Identification, SOBI, and Surface Current Density, SCD) that allow quantitative characterization of slow wave propagation. We also evaluated the performance of the methods with simulated experimental noise. The methods were also validated in an experimental animal model. Results: Mean square errors in position estimates were within 2 cm of the correct position, and average propagation velocities within 2 mm/s of the actual velocities. SOBI propagation analysis outperformed the SCD method for dipoles in the superficial and horizontal layer models with and without additive noise. The SCD method gave better estimates for deep sources, but did not handle additive noise as well as SOBI. Conclusion: SOBI-MGG and SCD-MGG were used to quantify slow wave propagation in a realistic abdomen model of gastric electrical activity. Significance: These methods could be generalized to any propagating electrophysiological activity detected by multichannel sensor arrays. PMID:26595907
Performance of cellular frequency-hopped spread-spectrum radio networks
NASA Astrophysics Data System (ADS)
Gluck, Jeffrey W.; Geraniotis, Evaggelos
1989-10-01
Multiple access interference is characterized for cellular mobile networks, in which users are assumed to be Poisson-distributed in the plane and employ frequency-hopped spread-spectrum signaling with transmitter-oriented assignment of frequency-hopping patterns. Exact expressions for the bit error probabilities are derived for binary coherently demodulated systems without coding. Approximations for the packet error probability are derived for coherent and noncoherent systems and these approximations are applied when forward-error-control coding is employed. In all cases, the effects of varying interference power are accurately taken into account according to some propagation law. Numerical results are given in terms of bit error probability for the exact case and throughput for the approximate analyses. Comparisons are made with previously derived bounds and it is shown that these tend to be very pessimistic.
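For context, the simplest relation between bit and packet error probabilities is the uncoded, independent-error identity below. This is a textbook sketch, not the paper's interference-specific approximation, which accounts for coding and varying interference power.

```python
def packet_error_prob(bit_error_prob, bits_per_packet):
    """Packet error probability for independent bit errors and no
    forward-error-control coding: the packet fails if any bit fails."""
    return 1.0 - (1.0 - bit_error_prob) ** bits_per_packet
```

For example, a 100-bit packet at a bit error probability of 1e-3 fails with probability ≈ 0.095; forward-error-control coding, as analyzed in the paper, reduces this dramatically by tolerating a bounded number of bit errors per codeword.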
Identifying medication error chains from critical incident reports: a new analytic approach.
Huckels-Baumgart, Saskia; Manser, Tanja
2014-10-01
Research into the distribution of medication errors usually focuses on isolated stages within the medication use process. Our study aimed to provide a novel process-oriented approach to medication incident analysis focusing on medication error chains. The study was conducted in a 900-bed teaching hospital in Switzerland. All 1,591 medication errors reported from 2009 to 2012 were categorized using the Medication Error Index NCC MERP and the WHO Classification for Patient Safety Methodology. In order to identify medication error chains, each reported medication incident was allocated to the relevant stage of the hospital medication use process. Only 25.8% of the reported medication errors were detected before they propagated through the medication use process. The majority of medication errors (74.2%) formed an error chain encompassing two or more stages. The most frequent error chain comprised preparation up to and including medication administration (45.2%). "Non-consideration of documentation/prescribing" during drug preparation was the most frequent contributor to "wrong dose" during medication administration. Medication error chains provide important insights for detecting and stopping medication errors before they reach the patient. Existing and new safety barriers need to be extended to interrupt error chains and to improve patient safety. © 2014, The American College of Clinical Pharmacology.
NASA Astrophysics Data System (ADS)
Robey, H. F.; Berzak Hopkins, L.; Milovich, J. L.; Meezan, N. B.
2018-01-01
Recent work in indirectly-driven inertial confinement fusion implosions on the National Ignition Facility has indicated that late-time propagation of the inner cones of laser beams (23° and 30°) is impeded by the growth of a "bubble" of hohlraum wall material (Au or depleted uranium), which originates at the location where the higher-intensity outer beams (44° and 50°) hit the hohlraum wall. The absorption of the inner cone beams by this "bubble" reduces the laser energy reaching the hohlraum equator at late time, driving an oblate or pancaked implosion, which limits implosion performance. In this article, we present the design of a new shaped hohlraum designed specifically to reduce the impact of this bubble by adding a recessed pocket at the location where the outer cones hit the hohlraum wall. This recessed pocket displaces the bubble radially outward, reducing the inward penetration of the bubble at all times throughout the implosion and increasing the time for inner beam propagation by approximately 1 ns. This increased laser propagation time allows one to drive a larger capsule, which absorbs more energy and is predicted to improve implosion performance. The new design is based on a recent National Ignition Facility shot, N170601, which produced a record neutron yield. The expansion rate and absorption of laser energy by the bubble is quantified for both cylindrical and shaped hohlraums, and the predicted performance is compared.
Orbit covariance propagation via quadratic-order state transition matrix in curvilinear coordinates
NASA Astrophysics Data System (ADS)
Hernando-Ayuso, Javier; Bombardelli, Claudio
2017-09-01
In this paper, an analytical second-order state transition matrix (STM) for relative motion in curvilinear coordinates is presented and applied to the problem of orbit uncertainty propagation in nearly circular orbits (eccentricity smaller than 0.1). The matrix is obtained by linearization around a second-order analytical approximation of the relative motion recently proposed by one of the authors and can be seen as a second-order extension of the curvilinear Clohessy-Wiltshire (C-W) solution. The accuracy of the uncertainty propagation is assessed by comparison with numerical results based on Monte Carlo propagation of a high-fidelity model including geopotential and third-body perturbations. Results show that the proposed STM can greatly improve the accuracy of the predicted relative state: the average error is found to be at least one order of magnitude smaller compared to the curvilinear C-W solution. In addition, the effect of environmental perturbations on the uncertainty propagation is shown to be negligible up to several revolutions in the geostationary region and for a few revolutions in low Earth orbit in the worst case.
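The assessment methodology (an analytical covariance mapping checked against Monte Carlo propagation) can be sketched generically. This is illustrative only: the first-order mapping and the toy dynamics below are assumptions, not the authors' second-order STM or their high-fidelity force model.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_linear(Phi, P0):
    """First-order covariance mapping through a state transition matrix."""
    return Phi @ P0 @ Phi.T

def propagate_monte_carlo(f, mean, P0, n=20000):
    """Empirical covariance of initial-state samples pushed through a
    (possibly nonlinear) propagation function f."""
    samples = rng.multivariate_normal(mean, P0, size=n)
    out = np.array([f(x) for x in samples])
    return np.cov(out, rowvar=False)

# toy 2-state dynamics with along-track drift coupling (assumed)
Phi = np.array([[1.0, 1.0], [0.0, 1.0]])
P0 = np.diag([1e-2, 1e-4])
P_lin = propagate_linear(Phi, P0)
P_mc = propagate_monte_carlo(lambda x: Phi @ x, np.zeros(2), P0)
```

For linear dynamics the two agree to sampling error; the paper's point is that for nonlinear relative motion a second-order STM keeps the analytical prediction close to the Monte Carlo result where a first-order (curvilinear C-W) mapping drifts away.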
Uncertainty Analysis in 3D Equilibrium Reconstruction
Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.
2018-02-21
Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. In this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the results of whole shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole shot reconstruction results in a time interval are used to validate the propagated uncertainty from a single time slice.
NASA Astrophysics Data System (ADS)
Butt, Ali
Crack propagation in a solid rocket motor environment is difficult to measure directly. This experimental and analytical study evaluated the viability of real-time radiography for detecting bore regression and propellant crack propagation speed. The scope included the quantitative interpretation of crack tip velocity from simulated radiographic images of a burning, center-perforated grain and actual real-time radiographs taken on a rapid-prototyped model that dynamically produced the surface movements modeled in the simulation. The simplified motor simulation portrayed a bore crack that propagated radially at a speed that was 10 times the burning rate of the bore. Comparing the experimental image interpretation with the calibrated surface inputs, measurement accuracies were quantified. The average measurements of the bore radius were within 3% of the calibrated values, with a maximum error of 7%. The crack tip speed could be characterized with image processing algorithms, but not with the dynamic calibration data. The laboratory data revealed that noise in the transmitted X-ray intensity makes sensing crack tip propagation from changes in the centerline transmitted intensity level impractical with the algorithms employed.
Derivation of an analytic expression for the error associated with the noise reduction rating
NASA Astrophysics Data System (ADS)
Murphy, William J.
2005-04-01
Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure to estimate the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects has a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
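The actual NRR is computed from octave-band attenuations, so as a hedged stand-in consider a simplified rating R = m − 2s (mean attenuation minus two standard deviations). First-order propagation of errors gives Var(R) ≈ s²/n + 4·s²/(2(n−1)) for n normally distributed subjects, and the sketch below compares that analytic expression to a Monte Carlo estimate, in the spirit of the comparison described above. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, s, n = 30.0, 6.0, 20        # true mean/std of subject attenuations (dB), subject count

# First-order propagation: Var(mean) = s^2/n, Var(std) ~ s^2 / (2(n-1)),
# and the two terms add (mean and std are independent for normal data).
analytic = np.sqrt(s**2 / n + 4.0 * s**2 / (2.0 * (n - 1)))

# Monte Carlo: re-rate many simulated subject panels and observe the spread.
ratings = []
for _ in range(20000):
    a = rng.normal(mu, s, n)
    ratings.append(a.mean() - 2.0 * a.std(ddof=1))
mc = np.std(ratings)
```

The two estimates should agree closely for normal attenuations, which is the linear relationship the abstract reports across protector types.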
Random synaptic feedback weights support error backpropagation for deep learning
NASA Astrophysics Data System (ADS)
Lillicrap, Timothy P.; Cownden, Daniel; Tweed, Douglas B.; Akerman, Colin J.
2016-11-01
The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
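The mechanism is simple enough to sketch: the only change from standard backpropagation is that the hidden-layer error is computed with a fixed random matrix B instead of the transpose of the forward weights. The tiny XOR network below is an illustration of the idea, not the authors' experimental setup; the architecture, learning rate, and iteration count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])          # XOR targets

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
B = rng.normal(0, 0.5, (1, 8))                  # fixed random feedback weights (never trained)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward():
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, y = forward()
mse_init = np.mean((y - T) ** 2)

for _ in range(20000):
    h, y = forward()
    e = y - T                                   # output error
    dh = (e @ B) * h * (1.0 - h)                # error routed back through B, not W2.T
    W2 -= 0.5 * h.T @ e;  b2 -= 0.5 * e.sum(0)
    W1 -= 0.5 * X.T @ dh; b1 -= 0.5 * dh.sum(0)

_, y = forward()
mse_final = np.mean((y - T) ** 2)
```

The surprising empirical finding is that the forward weights come to align with B during training, so even this random feedback channel transmits usable teaching signals.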
Li, Tao; Yuan, Gannan; Li, Wang
2016-01-01
The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the attitude errors are small enough that the direction cosine matrix (DCM) can be approximated or simplified using small-angle attitude errors. However, the simplification of the DCM introduces errors into the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) through the introduction of the error of the DCM; the NNEM can reduce the propagated errors under large-angle attitude error conditions. Zero velocity and zero position serve as the reference points and innovations in the state estimation of the particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of the PF is better than that of the KF, and that the PF with the NNEM can effectively restrain the errors of the system states, especially the azimuth, velocity, and height in the quasi-stationary condition. PMID:26999130
NASA Astrophysics Data System (ADS)
Rodríguez-Rincón, J. P.; Pedrozo-Acuña, A.; Breña-Naranjo, J. A.
2015-07-01
This investigation aims to study the propagation of meteorological uncertainty within a cascade modelling approach to flood prediction. The methodology comprised a numerical weather prediction (NWP) model, a distributed rainfall-runoff model, and a 2-D hydrodynamic model. The uncertainty evaluation was carried out at the meteorological and hydrological levels of the model chain, which enabled the investigation of how errors that originated in the rainfall prediction interact at a catchment level and propagate to an estimated inundation area and depth. For this, a hindcast scenario is utilised, removing non-behavioural ensemble members at each stage based on the fit with observed data. At the hydrodynamic level, an uncertainty assessment was not incorporated; instead, the model was set up following guidelines for the best possible representation of the case study. The selected extreme event corresponds to a flood that took place in southeast Mexico during November 2009, for which field data (e.g. rain gauges; discharge) and satellite imagery were available. Uncertainty in the meteorological model was estimated by means of a multi-physics ensemble technique, which is designed to represent errors from our limited knowledge of the processes generating precipitation. In the hydrological model, a multi-response validation was implemented through the definition of six sets of plausible parameters from past flood events. Precipitation fields from the meteorological model were employed as input to a distributed hydrological model, and the resulting flood hydrographs were used as forcing conditions in the 2-D hydrodynamic model. The evolution of skill within the model cascade shows a complex aggregation of errors between models, suggesting that in valley-filling events hydro-meteorological uncertainty has a larger effect on inundation depths than on estimated flood inundation extents.
NASA Astrophysics Data System (ADS)
Bhuiyan, M. A. E.; Nikolopoulos, E. I.; Anagnostou, E. N.
2017-12-01
Quantifying the uncertainty of global precipitation datasets is beneficial when using these precipitation products in hydrological applications, because precipitation uncertainty propagation through hydrologic modeling can significantly affect the accuracy of the simulated hydrologic variables. In this research the Iberian Peninsula is used as the study area, with a study period spanning eleven years (2000-2010). This study evaluates the performance of multiple hydrologic models forced with combined global rainfall estimates derived using a Quantile Regression Forests (QRF) technique. The QRF technique utilizes three satellite precipitation products (CMORPH, PERSIANN, and 3B42 (V7)); an atmospheric reanalysis precipitation and air temperature dataset; satellite-derived near-surface daily soil moisture data; and a terrain elevation dataset. A high-resolution precipitation dataset driven by ground-based observations (SAFRAN), available at 5 km/1 h resolution, is used as the reference. Through the QRF blending framework, the stochastic error model produces error-adjusted ensemble precipitation realizations, which are used to force four global hydrological models (JULES (Joint UK Land Environment Simulator), WaterGAP3 (Water-Global Assessment and Prognosis), ORCHIDEE (Organizing Carbon and Hydrology in Dynamic Ecosystems), and SURFEX (Surface Externalisée)) to simulate three hydrologic variables (surface runoff, subsurface runoff, and evapotranspiration). The models are also forced with the reference precipitation to generate reference-based hydrologic simulations. This study presents a comparative analysis of multiple hydrologic model simulations for different hydrologic variables and the impact of the blending algorithm on the simulated hydrologic variables. Results show how precipitation uncertainty propagates through the different hydrologic model structures, manifesting as a reduction of error in the hydrologic variables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilson, Erik P.; Davidson, Ronald C.; Efthimion, Philip C.
Transverse dipole and quadrupole modes have been excited in a one-component cesium ion plasma trapped in the Paul Trap Simulator Experiment (PTSX) in order to characterize their properties and understand the effect of their excitation on equivalent long-distance beam propagation. The PTSX device is a compact laboratory Paul trap that simulates the transverse dynamics of a long, intense charge bunch propagating through an alternating-gradient transport system by putting the physicist in the beam's frame of reference. A pair of arbitrary function generators was used to apply trapping voltage waveform perturbations with a range of frequencies and, by changing which electrodes were driven with the perturbation, with either a dipole or quadrupole spatial structure. The results presented in this paper explore how the effect of the perturbation voltage depends on the perturbation duration and amplitude. Perturbations were also applied to simulate the effect of random lattice errors that exist in an accelerator with quadrupole magnets that are misaligned or have variance in their field strength. The experimental results quantify the growth in the equivalent transverse beam emittance that occurs due to the applied noise and demonstrate that the random lattice errors interact with the trapped plasma through the plasma's internal collective modes. Coherent periodic perturbations were applied to simulate the effects of magnet errors in circular machines such as storage rings. The trapped one-component plasma is strongly affected when the perturbation frequency is commensurate with a plasma mode frequency. The experimental results, which help to understand the physics of quiescent intense beam propagation over large distances, are compared with analytic models.
An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks
Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen
2016-01-01
The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism while overcoming the low efficiency of existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper. PMID:27044001
Resolution of the COBE Earth sensor anomaly
NASA Technical Reports Server (NTRS)
Sedler, J.
1993-01-01
Since its launch on November 18, 1989, the Earth sensors on the Cosmic Background Explorer (COBE) have shown much greater noise than expected. The problem was traced to an error in Earth horizon acquisition-of-signal (AOS) times. Due to this error, the AOS timing correction was ignored, causing Earth sensor split-to-index (SI) angles to be incorrectly time-tagged to minor frame synchronization times. Resulting Earth sensor residuals, based on gyro-propagated fine attitude solutions, were as large as plus or minus 0.45 deg (much greater than plus or minus 0.10 deg from scanner specifications (Reference 1)). Also, discontinuities in single-frame coarse attitude pitch and roll angles (as large as 0.80 and 0.30 deg, respectively) were noted several times during each orbit. However, over the course of the mission, each Earth sensor was observed to independently and unexpectedly reset and then reactivate into a new configuration. Although the telemetered AOS timing corrections are still in error, a procedure has been developed to approximate and apply these corrections. This paper describes the approach, analysis, and results of approximating and applying AOS timing adjustments to correct Earth scanner data. Furthermore, due to the continuing degradation of COBE's gyroscopes, gyro-propagated fine attitude solutions may soon become unavailable, requiring an alternative method for attitude determination. By correcting Earth scanner AOS telemetry, as described in this paper, more accurate single-frame attitude solutions are obtained. All aforementioned pitch and roll discontinuities are removed. When proper AOS corrections are applied, the standard deviation of pitch residuals between coarse attitude and gyro-propagated fine attitude solutions decrease by a factor of 3. Also, the overall standard deviation of SI residuals from fine attitude solutions decrease by a factor of 4 (meeting sensor specifications) when AOS corrections are applied.
The algorithm study for using the back propagation neural network in CT image segmentation
NASA Astrophysics Data System (ADS)
Zhang, Peng; Liu, Jie; Chen, Chen; Li, Ying Qi
2017-01-01
The back propagation neural network (BP neural network) is a type of multi-layer feed-forward network in which signals propagate forward while errors propagate backward. Because the BP network can learn and store the mapping between a large number of input and output nodes without complex mathematical equations to describe the mapping relationship, it is among the most widely used networks. BP iteratively computes the weight coefficients and thresholds of the network through training and back propagation of samples, minimizing the error sum of squares of the network. Since the boundary of computed tomography (CT) heart images is usually discontinuous and there are large changes in the volume and boundary of heart images, conventional segmentation methods such as region growing and the watershed algorithm cannot achieve satisfactory results. Meanwhile, there are large differences between diastolic and systolic images, which conventional methods cannot accurately classify. In this paper, we introduce BP to handle the segmentation of heart images. We segmented a large number of CT images manually to obtain training samples, and the BP network was trained on these samples. To acquire an appropriate BP network for the segmentation of heart images, we normalized the heart images and extracted the gray-level information of the heart. The boundary of the images was then input into the network to compare the differences between the theoretical output and the actual output, and the errors were fed back into the BP network to modify the weight coefficients of the layers. Through extensive training, the BP network became stable and the weight coefficients of the layers could be determined, establishing the relationship between the CT images and the heart boundary.
GNSS-Reflectometry aboard ISS with GEROS: Investigation of atmospheric propagation effects
NASA Astrophysics Data System (ADS)
Zus, F.; Heise, S.; Wickert, J.; Semmling, M.
2015-12-01
GEROS-ISS (GNSS rEflectometry Radio Occultation and Scatterometry) is an ESA mission aboard the International Space Station (ISS). The main mission goals are the determination of the sea surface height and surface winds. Secondary goals are monitoring of land surface parameters and atmosphere sounding using GNSS radio occultation measurements. The international scientific study GARCA (GNSS-Reflectometry Assessment of Requirements and Consolidation of Retrieval Algorithms), funded by ESA, is part of the preparations for GEROS-ISS. Major goals of GARCA are the development of an end-to-end simulator for the GEROS-ISS measurements (GEROS-SIM) and the evaluation of the error budget of the GNSS reflectometry measurements. In this presentation we introduce some of the GARCA activities to quantify the influence of the ionized and neutral atmosphere on the altimetric measurements, which is a major error source for GEROS-ISS. First, we analyse to what extent the standard linear combination of interferometric paths at different carrier frequencies can be used to correct for ionospheric propagation effects. Second, we make use of the tangent-linear version of our ray-trace algorithm to propagate the uncertainty of the underlying refractivity profile into the uncertainty of the interferometric path. For comparison, the sensitivity of the interferometric path with respect to the sea surface height is computed. Though our calculations are based on a number of simplifying assumptions (the Earth is a sphere, the atmosphere is spherically layered, and the ISS and GNSS satellite orbits are circular), some general conclusions can be drawn. In essence, for elevation angles above -5° at the ISS, the higher-order ionospheric errors and the uncertainty of the interferometric path due to the uncertainty of the underlying refractivity profile are small enough to distinguish a sea surface height of ± 0.5 m.
Myers, Casey A.; Laz, Peter J.; Shelburne, Kevin B.; Davidson, Bradley S.
2015-01-01
Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations were performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by also using the output distributions from one stage as input distributions to subsequent stages. Confidence bounds (5–95%) and sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than any other propagated source. Sensitivity to specific body segment parameters and muscle parameters were linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions. PMID:25404535
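The staged Monte Carlo approach, in which the output distribution of one stage becomes the input distribution of the next and 5-95% bounds are read off the final samples, can be sketched with a deliberately simplified two-stage example: a joint angle with marker-placement noise feeding a moment computation with segment-parameter noise. The quantities and magnitudes below are invented for illustration and are not from the OpenSim study.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10000                                   # Monte Carlo samples

# Stage 1 (inverse-kinematics proxy): joint angle perturbed by marker-placement error.
theta = np.deg2rad(30.0) + rng.normal(0.0, np.deg2rad(2.0), N)

# Stage 2 (inverse-dynamics proxy): the stage-1 output distribution is the input here,
# combined with body-segment-parameter uncertainty in the moment arm.
F = 400.0                                   # applied force, N (nominal)
d = rng.normal(0.25, 0.01, N)               # moment arm, m
moment = F * d * np.sin(theta)              # joint moment, N m

lo, hi = np.percentile(moment, [5, 95])     # 5-95% confidence bounds
```

Sensitivity of the output to each input can then be read off the same samples, e.g. by correlating `moment` with `theta` and `d`, which is how the study attributes the dominant contribution to movement artifact.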
Mechanism reduction for multicomponent surrogates: A case study using toluene reference fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemeyer, Kyle E.; Sung, Chih-Jen
Strategies and recommendations for performing skeletal reductions of multicomponent surrogate fuels are presented, through the generation and validation of skeletal mechanisms for a three-component toluene reference fuel. Using the directed relation graph with error propagation and sensitivity analysis method followed by a further unimportant reaction elimination stage, skeletal mechanisms valid over comprehensive and high-temperature ranges of conditions were developed at varying levels of detail. These skeletal mechanisms were generated based on autoignition simulations, and validation using ignition delay predictions showed good agreement with the detailed mechanism in the target range of conditions. When validated using phenomena other than autoignition, such as perfectly stirred reactor and laminar flame propagation, tight error control or more restrictions on the reduction during the sensitivity analysis stage were needed to ensure good agreement. In addition, tight error limits were needed for close prediction of ignition delay when varying the mixture composition away from that used for the reduction. In homogeneous compression-ignition engine simulations, the skeletal mechanisms closely matched the point of ignition and accurately predicted species profiles for lean to stoichiometric conditions. Furthermore, the efficacy of generating a multicomponent skeletal mechanism was compared to combining skeletal mechanisms produced separately for neat fuel components; using the same error limits, the latter resulted in a larger skeletal mechanism size that also lacked important cross reactions between fuel components. Based on the present results, general guidelines for reducing detailed mechanisms for multicomponent fuels are discussed.
NASA Astrophysics Data System (ADS)
Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.
2011-12-01
A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.
Denton, J S; Murrell, M T; Goldstein, S J; Nunn, A J; Amato, R S; Hinrichs, K A
2013-10-15
Recent advances in high-resolution, rapid, in situ microanalytical techniques present numerous opportunities for the analytical community, provided accurately characterized reference materials are available. Here, we present multicollector thermal ionization mass spectrometry (MC-TIMS) and multicollector inductively coupled plasma mass spectrometry (MC-ICP-MS) uranium and thorium concentration and isotopic data obtained by isotope dilution for a suite of newly available Chinese Geological Standard Glasses (CGSG) designed for microanalysis. These glasses exhibit a range of compositions including basalt, syenite, andesite, and a soil. Uranium concentrations for these glasses range from ∼2 to 14 μg g⁻¹, Th/U weight ratios range from ∼4 to 6, ²³⁴U/²³⁸U activity ratios range from 0.93 to 1.02, and ²³⁰Th/²³⁸U activity ratios range from 0.98 to 1.12. Uranium and thorium concentration and isotopic data are also presented for a rhyolitic obsidian from Macusani, SE Peru (macusanite). This glass can also be used as a rhyolitic reference material, has a very low Th/U weight ratio (around 0.077), and is approximately in ²³⁸U-²³⁴U-²³⁰Th secular equilibrium. The U-Th concentration data agree with but are significantly more precise than those previously measured. U-Th concentration and isotopic data agree within estimated errors for the two measurement techniques, providing validation of the two methods. The large ²³⁸U-²³⁴U-²³⁰Th disequilibria for some of the glasses, along with the wide range in their chemical compositions and Th/U ratios, should provide useful reference points for the U-series analytical community.
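Activity ratios such as ²³⁴U/²³⁸U are related to the atom ratios measured by mass spectrometry through the decay constants, A = λN with λ = ln 2 / T½, so the activity ratio equals the atom ratio times λ_num/λ_den. The sketch below illustrates the conversion and the secular-equilibrium check; the half-life constants are standard published values quoted from memory, not taken from this work.

```python
import math

# Half-lives in years (standard published values; an assumption of this sketch).
HALF_LIFE_YR = {"U238": 4.468e9, "U234": 2.455e5, "Th230": 7.538e4}

def decay_const(iso):
    """Decay constant lambda = ln(2) / T_half, in 1/yr."""
    return math.log(2.0) / HALF_LIFE_YR[iso]

def activity_ratio(atom_ratio, num, den):
    """Convert an atom ratio N_num/N_den into an activity ratio:
    A = lambda * N, so A_num/A_den = atom_ratio * lambda_num / lambda_den."""
    return atom_ratio * decay_const(num) / decay_const(den)

# Secular equilibrium means equal activities, so the equilibrium atom ratio
# is lambda_den / lambda_num and the resulting activity ratio is ~1.
eq_atoms = decay_const("U238") / decay_const("U234")
ar = activity_ratio(eq_atoms, "U234", "U238")   # approximately 1
```

Departures of a glass's measured activity ratio from 1 are the ²³⁸U-²³⁴U-²³⁰Th disequilibria the abstract describes as useful reference points.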
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flores-Orozco, Adrian; Williams, Kenneth H.; Long, Philip E.
2011-07-07
Experiments at the Department of Energy’s Rifle Integrated Field Research Challenge (IFRC) site near Rifle, Colorado (USA) have demonstrated the ability to remove uranium from groundwater by stimulating the growth and activity of Geobacter species through acetate amendment. Prolonging the activity of these strains in order to optimize uranium bioremediation has prompted the development of minimally-invasive and spatially-extensive monitoring methods diagnostic of their in situ activity and the end products of their metabolism. Here we demonstrate the use of complex resistivity imaging for monitoring biogeochemical changes accompanying stimulation of indigenous aquifer microorganisms during and after a prolonged period (100+ days) of acetate injection. A thorough raw-data statistical analysis of discrepancies between normal and reciprocal measurements and incorporation of a new power-law phase-error model in the inversion were used to significantly improve the quality of the resistivity phase images over those obtained during previous monitoring experiments at the Rifle IFRC site. The imaging results reveal spatiotemporal changes in the phase response of aquifer sediments, which correlate with increases in Fe(II) and precipitation of metal sulfides (e.g., FeS) following the iterative stimulation of iron- and sulfate-reducing microorganisms. Only modest changes in resistivity magnitude were observed over the monitoring period. The largest phase anomalies (>40 mrad) were observed hundreds of days after halting acetate injection, in conjunction with accumulation of Fe(II) in the presence of residual FeS minerals, reflecting preservation of geochemically reduced conditions in the aquifer – a prerequisite for ensuring the long-term stability of immobilized, redox-sensitive contaminants, such as uranium.
VizieR Online Data Catalog: V and R CCD photometry of visual binaries (Abad+, 2004)
NASA Astrophysics Data System (ADS)
Abad, C.; Docobo, J. A.; Lanchares, V.; Lahulla, J. F.; Abelleira, P.; Blanco, J.; Alvarez, C.
2003-11-01
Table 1 gives relevant data for the visual binaries observed. Observations were carried out over a short period of time; we therefore assign the mean epoch (1998.58) to the totality of the data. Data for individual stars are presented as averages with errors, by parameter, when several observations have been obtained, together with the number of observations involved. Errors corresponding to astrometric relative positions between components are always present. For single observations, parameter fitting errors, especially for the dx and dy parameters, have been calculated by analysing the chi-squared test around the minimum. Following the rules for error propagation, theta and rho errors can be estimated. Table 1 therefore shows single-observation errors with an additional significant digit. When a star does not have known references, we include it in Table 2, where the J2000 position and magnitudes are from the USNO-A2.0 catalogue (Monet et al., 1998, Cat. ). (2 data files).
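The propagation of the (dx, dy) fitting errors to the polar observables theta and rho that the abstract mentions can be sketched as a first-order (Gaussian) propagation, assuming uncorrelated dx and dy. Astronomical position-angle zero points and sign conventions vary; plain atan2 is used here purely for illustration.

```python
import math

def theta_rho_with_errors(dx, dy, s_dx, s_dy):
    """Convert relative rectangular coordinates (dx, dy) of a binary
    companion to separation rho and position angle theta, propagating
    fitting errors to first order:
      sigma_rho^2   = (dx^2 s_dx^2 + dy^2 s_dy^2) / rho^2
      sigma_theta^2 = (dy^2 s_dx^2 + dx^2 s_dy^2) / rho^4
    (theta error in radians)."""
    rho = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)
    s_rho = math.sqrt((dx * s_dx) ** 2 + (dy * s_dy) ** 2) / rho
    s_theta = math.sqrt((dy * s_dx) ** 2 + (dx * s_dy) ** 2) / rho ** 2
    return rho, theta, s_rho, s_theta

# Illustrative values: dx = 3", dy = 4", 0.01" fitting error on each.
rho, theta, s_rho, s_theta = theta_rho_with_errors(3.0, 4.0, 0.01, 0.01)
```

Note that the angular error shrinks with separation (the 1/rho^2 factor), which is why wide pairs have better-determined position angles for the same pixel-level fit error.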
NASA Astrophysics Data System (ADS)
Murshid, Syed H.; Chakravarty, Abhijit
2011-06-01
Spatial domain multiplexing (SDM) utilizes co-propagation of exactly the same wavelength in optical fibers to increase the bandwidth by integer multiples. Input signals from multiple independent single-mode pigtail laser sources are launched at different input angles into a single multimode carrier fiber. The SDM channels follow helical paths and traverse the carrier fiber without interfering with each other. The optical energy from the different sources is spatially distributed and takes the form of concentric circular donut-shaped rings, where each ring corresponds to an independent laser source. At the output end of the fiber these donut-shaped independent channels can be separated either with the help of bulk optics or with integrated concentric optical detectors. This paper presents the experimental setup and results for a four-channel SDM system. The attenuation and bit error rate for individual channels of such a system are also presented.
NASA Astrophysics Data System (ADS)
Lin, Tsungpo
Performance engineers face a major challenge in modeling and simulation for after-market power systems due to system degradation and measurement errors. Currently, most of the power generation industry utilizes deterministic data matching to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and increases the risk associated with providing performance guarantees. In this research work, a maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve better efficiency in the combined scheme of least-squares based data reconciliation and GED based on hypothesis testing, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer.
To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using the response surface equation (RSE) and system/process decomposition are incorporated into the simultaneous SDRMC scheme. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees arising from uncertainties in performance simulation.
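The data reconciliation half of SDRMC can be sketched in closed form for the linear case: adjust the measurements minimally, in the weighted least-squares sense, so that the plant balances hold exactly. The SDRMC in the dissertation also estimates model parameters simultaneously and handles nonlinear models via Levenberg-Marquardt; this sketch covers only the linear reconciliation step, with illustrative numbers.

```python
import numpy as np

def reconcile(y, sigma, A):
    """Weighted least-squares data reconciliation: find x minimizing
    (x - y)^T S^-1 (x - y) subject to A @ x = 0 (linear balances),
    where S = diag(sigma^2). Closed-form Lagrange solution:
    x = y - S A^T (A S A^T)^-1 A y."""
    S = np.diag(sigma ** 2)
    correction = S @ A.T @ np.linalg.solve(A @ S @ A.T, A @ y)
    return y - correction

# Toy mass balance: flow1 + flow2 = flow3, measured inconsistently.
y = np.array([10.2, 5.1, 14.9])
sigma = np.array([0.2, 0.1, 0.3])     # measurement standard deviations
A = np.array([[1.0, 1.0, -1.0]])
x = reconcile(y, sigma, A)
# x satisfies the balance exactly; the least-reliable (largest-sigma)
# measurement absorbs the largest share of the adjustment.
```

Gross errors violate the Gaussian assumption behind this adjustment, which is why the dissertation screens them out (or switches to a robust M-estimator) before reconciling.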
Mekid, Samir; Vacharanukul, Ketsaya
2006-01-01
To achieve dynamic error compensation in CNC machine tools, a non-contact laser probe capable of dimensional measurement of a workpiece while it is being machined has been developed and presented in this paper. The measurements are automatically fed back to the machine controller for intelligent error compensations. Based on a well resolved laser Doppler technique and real time data acquisition, the probe delivers a very promising dimensional accuracy at few microns over a range of 100 mm. The developed optical measuring apparatus employs a differential laser Doppler arrangement allowing acquisition of information from the workpiece surface. In addition, the measurements are traceable to standards of frequency allowing higher precision.
Error recovery in shared memory multiprocessors using private caches
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1990-01-01
The problem of recovering from processor transient faults in shared-memory multiprocessor systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions to take error latency into account are presented.
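The checkpoint-and-rollback idea at the heart of this scheme can be illustrated in software: snapshot the state at each checkpoint, and on a detected transient error restore the snapshot instead of restarting the whole computation. The paper realizes this in hardware through private caches (dirty lines hold uncommitted state); the sketch below only mirrors the control flow.

```python
import copy

class CheckpointedProcess:
    """Minimal software illustration of checkpoint/rollback recovery.
    Not the paper's cache mechanism -- just the same recovery contract."""

    def __init__(self):
        self.state = {"counter": 0}
        self._snapshot = copy.deepcopy(self.state)

    def checkpoint(self):
        # Commit current state (hardware analog: write back / mark
        # cache lines as belonging to the new checkpoint).
        self._snapshot = copy.deepcopy(self.state)

    def rollback(self):
        # Discard un-checkpointed state, as invalidating modified
        # cache lines would in the hardware scheme.
        self.state = copy.deepcopy(self._snapshot)

p = CheckpointedProcess()
p.state["counter"] = 5
p.checkpoint()
p.state["counter"] = 99   # work performed after the checkpoint...
p.rollback()              # ...undone when a transient fault is detected
```

Because each process rolls back only to its own last checkpoint, and checkpoints are coordinated with the coherence protocol, rollback does not cascade to other processors.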
NASA Astrophysics Data System (ADS)
Cecinati, Francesca; Rico-Ramirez, Miguel Angel; Heuvelink, Gerard B. M.; Han, Dawei
2017-05-01
The application of radar quantitative precipitation estimation (QPE) to hydrology and water quality models can be preferred to interpolated rainfall point measurements because of the wide coverage that radars can provide, together with good spatio-temporal resolution. Nonetheless, it is often limited by the proneness of radar QPE to a multitude of errors. Although radar errors have been widely studied and techniques have been developed to correct most of them, residual errors are still intrinsic in radar QPE. An estimation of the uncertainty of radar QPE and an assessment of uncertainty propagation in modelling applications is important to quantify the relative importance of the uncertainty associated with the radar rainfall input in the overall modelling uncertainty. A suitable tool for this purpose is the generation of radar rainfall ensembles. An ensemble is the representation of the rainfall field and its uncertainty through a collection of possible alternative rainfall fields, produced according to the observed errors, their spatial characteristics, and their probability distribution. The errors are derived from a comparison between radar QPE and ground point measurements. The novelty of the proposed ensemble generator is that it is based on a geostatistical approach that assures a fast and robust generation of synthetic error fields, based on the time-variant characteristics of errors. The method is developed to meet the requirements of operational applications to large datasets. The method is applied to a case study in Northern England, using the UK Met Office NIMROD radar composites at 1 km resolution and 1 h accumulation over an area of 180 km by 180 km. The errors are estimated using a network of 199 tipping-bucket rain gauges from the Environment Agency; 183 of the rain gauges are used for the error modelling, while 16 are kept apart for validation.
The validation is done by comparing the radar rainfall ensemble with the values recorded by the validation rain gauges. The validated ensemble is then tested on a hydrological case study, to show the advantage of probabilistic rainfall for uncertainty propagation. The ensemble spread only partially captures the mismatch between the modelled and the observed flow. The residual uncertainty can be attributed to other sources of uncertainty, in particular to model structural uncertainty, parameter identification uncertainty, uncertainty in other inputs, and uncertainty in the observed flow.
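The ensemble-generation step described above can be sketched by adding spatially correlated Gaussian error to a radar field. An isotropic exponential covariance stands in for the paper's time-variant geostatistical error model, and every parameter value below (error standard deviation, correlation length, grid) is illustrative.

```python
import numpy as np

def rainfall_ensemble(radar_field, coords, err_std, corr_len,
                      n_members, seed=0):
    """Generate an ensemble of plausible rainfall fields by perturbing
    a radar QPE field with spatially correlated Gaussian error drawn
    from an exponential covariance model C(d) = s^2 exp(-d / L)."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = err_std ** 2 * np.exp(-d / corr_len)
    errors = rng.multivariate_normal(np.zeros(len(coords)), cov,
                                     size=n_members)
    return np.clip(radar_field + errors, 0.0, None)  # rain >= 0

# Toy 5x5 km grid with a uniform 2 mm/h radar field.
coords = np.array([[x, y] for x in range(5) for y in range(5)], float)
field = np.full(25, 2.0)
ens = rainfall_ensemble(field, coords, err_std=0.5, corr_len=3.0,
                        n_members=100)
# Each of the 100 rows is one equally plausible rainfall field; running
# the hydrological model on every member propagates the QPE uncertainty.
```

For operational-scale grids the dense covariance matrix becomes impractical, which is one motivation for the fast geostatistical simulation the paper develops.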
Aliased tidal errors in TOPEX/POSEIDON sea surface height data
NASA Technical Reports Server (NTRS)
Schlax, Michael G.; Chelton, Dudley B.
1994-01-01
Alias periods and wavelengths for the M(sub 2), S(sub 2), N(sub 2), K(sub 1), O(sub 1), and P(sub 1) tidal constituents are calculated for TOPEX/POSEIDON. Alias wavelengths calculated in previous studies are shown to be in error, and a correct method is presented. With the exception of the K(sub 1) constituent, all of these tidal aliases for TOPEX/POSEIDON have periods shorter than 90 days and are likely to be confounded with long-period sea surface height signals associated with real ocean processes. In particular, the correspondence between the periods and wavelengths of the M(sub 2) alias and annual baroclinic Rossby waves that plagued Geosat sea surface height data is avoided. The potential for aliasing residual tidal errors in smoothed estimates of sea surface height is calculated for the six tidal constituents. The potential for aliasing the lunar tidal constituents M(sub 2), N(sub 2) and O(sub 1) fluctuates with latitude and is different for estimates made at the crossovers of ascending and descending ground tracks than for estimates at points midway between crossovers. The potential for aliasing the solar tidal constituents S(sub 2), K(sub 1) and P(sub 1) varies smoothly with latitude. S(sub 2) is strongly aliased for latitudes within 50 degrees of the equator, while K(sub 1) and P(sub 1) are only weakly aliased in that range. A weighted least squares method for estimating and removing residual tidal errors from TOPEX/POSEIDON sea surface height data is presented. A clear understanding of the nature of aliased tidal error in TOPEX/POSEIDON data aids the unambiguous identification of real propagating sea surface height signals. Unequivocal evidence of annual period, westward propagating waves in the North Atlantic is presented.
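The alias-period part of this calculation can be sketched directly: a constituent sampled at the satellite's exact repeat interval advances in phase by a fixed fraction of a cycle per sample, and folding that fraction about one half gives the alias frequency. The repeat interval and M2 period below are approximate published values used for illustration.

```python
def alias_period_days(constituent_period_hours, sampling_days):
    """Alias period of a tidal constituent sampled at a fixed repeat
    interval. The per-sample phase advance, reduced to [0, 1) cycles
    and folded about 0.5, is the alias frequency in cycles/sample."""
    dt_hours = sampling_days * 24.0
    frac = (dt_hours / constituent_period_hours) % 1.0
    alias_cycles_per_sample = min(frac, 1.0 - frac)
    return sampling_days / alias_cycles_per_sample

# TOPEX/POSEIDON repeat ~9.9156 d; M2 period ~12.4206 h.
m2_alias = alias_period_days(12.4206012, 9.9156)   # ~62 days
```

The resulting M2 alias of roughly 62 days sits well away from the annual period, which is the point the abstract makes in contrasting TOPEX/POSEIDON with Geosat.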
Ionospheric Impacts on UHF Space Surveillance
NASA Astrophysics Data System (ADS)
Jones, J. C.
2017-12-01
Earth's atmosphere contains regions of ionized plasma caused by the interaction of highly energetic solar radiation with atmospheric gases. This region of ionization is called the ionosphere and varies significantly with altitude, latitude, local solar time, season, and solar cycle. Significant ionization begins at about 100 km (E layer) with a peak in the ionization at about 300 km (F2 layer). Above the F2 layer, the atmosphere is mostly ionized but the ion and electron densities are low due to the unavailability of neutral molecules for ionization, so the density decreases exponentially with height to well over 1000 km. The gradients of these variations in the ionosphere play a significant role in radio wave propagation. These gradients induce variations in the index of refraction and cause some radio waves to refract. The amount of refraction depends on the magnitude and direction of the electron density gradient and the frequency of the radio wave. The refraction is significant at HF frequencies (3-30 MHz) with decreasing effects toward the UHF (300-3000 MHz) range. UHF is commonly used for tracking of space objects in low Earth orbit (LEO). While ionospheric refraction is small for UHF frequencies, it can cause errors in range, azimuth angle, and elevation angle estimation by ground-based radars tracking space objects. These errors can cause significant errors in precise orbit determination. For radio waves transiting the ionosphere, it is important to understand and account for these effects. Using a sophisticated radio wave propagation tool suite and an empirical ionospheric model, we calculate the errors induced by the ionosphere in a simulation of a notional space surveillance radar tracking objects in LEO. These errors are analyzed to determine daily, monthly, annual, and solar cycle trends. Corrections to surveillance radar measurements can be adapted from our simulation capability.
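The size of the range error at UHF can be estimated from the standard first-order ionospheric group delay, dR = 40.3 * TEC / f^2 meters (TEC in electrons/m^2, f in Hz). The frequency and TEC values below are illustrative, not from the simulation in the abstract.

```python
def ionospheric_range_error_m(tec_el_per_m2, freq_hz):
    """First-order group-delay range error for a trans-ionospheric
    radio wave: dR = 40.3 * TEC / f^2 (meters). Higher-order terms
    and ray bending are neglected."""
    return 40.3 * tec_el_per_m2 / freq_hz ** 2

# 50 TECU (5e17 el/m^2) of total electron content:
err_uhf = ionospheric_range_error_m(5e17, 435e6)     # ~100 m at 435 MHz
err_sband = ionospheric_range_error_m(5e17, 3000e6)  # ~2 m at 3 GHz
```

The 1/f^2 scaling shows why the effect, negligible for the ranging problem at higher frequencies, still reaches tens to hundreds of meters for UHF surveillance radars and therefore matters for precise orbit determination.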
Implementations of back propagation algorithm in ecosystems applications
NASA Astrophysics Data System (ADS)
Ali, Khalda F.; Sulaiman, Riza; Elamir, Amir Mohamed
2015-05-01
Artificial Neural Networks (ANNs) have been applied to an increasing number of real-world problems of considerable complexity. Their most important advantage is in solving problems that are too complex for conventional technologies: problems that have no algorithmic solution, or whose algorithmic solution is too complex to be found. Abstracted from the biological brain, ANNs developed from concepts that evolved in late-twentieth-century neuro-physiological experiments on the cells of the human brain, and they help overcome the perceived inadequacies of conventional ecological data analysis methods. ANNs have gained increasing attention in ecosystem applications because of their capacity to detect patterns in data through non-linear relationships, a characteristic that confers a superior predictive ability. In this research, ANNs are applied to an ecological system analysis. The neural networks use the well-known Back Propagation (BP) algorithm with the delta rule for adaptation of the system. The BP training algorithm is an effective analytical method for adaptation in ecosystem applications, again because of its capacity to capture non-linear relationships in data. The BP algorithm uses supervised learning: the algorithm is provided with examples of the inputs and outputs the network should compute, and the error between the network output and the target is calculated. The idea of the back propagation algorithm is to reduce this error until the ANN learns the training data. Training begins with random weights, and the goal is to adjust them so that the error is minimal. This research evaluated the use of ANN techniques in ecological system analysis and modeling.
The experimental results from this research demonstrate that an artificial neural network system can be trained to act as an expert ecosystem analyzer for many applications in ecological fields. The pilot ecosystem analyzer shows promising ability for generalization and requires further tuning and refinement of the basis neural network system for optimal performance.
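The delta-rule back propagation loop the abstract describes fits in a few lines. The sketch below trains a toy 2-2-1 sigmoid network on XOR; the architecture, learning rate, epoch count, and random seed are all illustrative choices, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training set: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
t = np.array([[0], [1], [1], [0]], float)

W1 = rng.normal(0, 1, (2, 2)); b1 = np.zeros(2)   # hidden layer
W2 = rng.normal(0, 1, (2, 1)); b2 = np.zeros(1)   # output layer
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

losses = []
for epoch in range(5000):
    h = sig(X @ W1 + b1)            # forward pass
    y = sig(h @ W2 + b2)
    losses.append(float(((t - y) ** 2).mean()))
    d2 = (y - t) * y * (1 - y)      # output delta (delta rule)
    d1 = (d2 @ W2.T) * h * (1 - h)  # error propagated back one layer
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(0)
# The mean squared error shrinks as the weights adapt, which is exactly
# the "reduce this error until the ANN learns the training data" loop.
```

An ecological application would replace the XOR table with observed predictor/response pairs and scale the inputs, but the update equations are unchanged.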
A transition matrix approach to the Davenport gyro calibration scheme
NASA Technical Reports Server (NTRS)
Natanson, G. A.
1998-01-01
The in-flight gyro calibration scheme commonly used by NASA Goddard Space Flight Center (GSFC) attitude ground support teams closely follows an original version of the Davenport algorithm developed in the late seventies. Its basic idea is to minimize the least-squares differences between attitudes gyro-propagated over the course of a maneuver and those determined using post-maneuver sensor measurements. The paper represents the scheme in a recursive form by combining the necessary partials into a rectangular matrix, which is propagated in exactly the same way as a Kalman filter's square transition matrix. The nontrivial structure of the propagation matrix arises from the fact that attitude errors are not included in the state vector, and therefore their derivatives with respect to the estimated gyro parameters do not appear in the transition matrix defined in the conventional way. In cases where the required accuracy can be achieved by a single iteration, representation of the Davenport gyro calibration scheme in a recursive form allows one to discard each gyro measurement immediately after it is used to propagate the attitude and state transition matrix. Another advantage of the new approach is that it utilizes the same expression for the error sensitivity matrix as that used by the Kalman filter. As a result, the suggested modification of the Davenport algorithm makes it possible to reuse software modules implemented in the Kalman filter estimator, where both attitude errors and gyro calibration parameters are included in the state vector. The new approach has been implemented in the ground calibration utilities used to support the Tropical Rainfall Measuring Mission (TRMM). The paper analyzes some preliminary results of gyro calibration performed by the TRMM ground attitude support team.
It is demonstrated that the effect of a second iteration on the estimated values of the calibration parameters is negligibly small, and therefore there is no need to store processed gyro data. This opens a promising opportunity for onboard implementation of the suggested recursive procedure by combining it with the Kalman filter used to obtain the necessary attitude solutions at the beginning and end of each maneuver.
Uncertainty Analysis in Large Area Aboveground Biomass Mapping
NASA Astrophysics Data System (ADS)
Baccini, A.; Carvalho, L.; Dubayah, R.; Goetz, S. J.; Friedl, M. A.
2011-12-01
Satellite and aircraft-based remote sensing observations are being more frequently used to generate spatially explicit estimates of the aboveground carbon stock of forest ecosystems. Because deforestation and forest degradation account for circa 10% of anthropogenic carbon emissions to the atmosphere, policy mechanisms are increasingly recognized as a low-cost mitigation option to reduce carbon emissions. They are, however, contingent upon the capacity to accurately measure carbon stored in the forests. Here we examine the sources of uncertainty and error propagation in generating maps of aboveground biomass. We focus on characterizing uncertainties associated with maps at the pixel and spatially aggregated national scales. We pursue three strategies to describe the error and uncertainty properties of aboveground biomass maps: (1) model-based assessment using confidence intervals derived from linear regression methods; (2) data-mining algorithms such as regression trees and ensembles of these; (3) empirical assessments using independently collected data sets. The latter effort explores error propagation using field data acquired within satellite-based lidar (GLAS) acquisitions versus alternative in situ methods that rely upon field measurements that have not been systematically collected for this purpose (e.g. from forest inventory data sets). A key goal of our effort is to provide multi-level characterizations that provide both pixel and biome-level estimates of uncertainties at different scales.
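The pixel-versus-aggregate distinction the abstract draws can be made concrete: the standard deviation of a summed (e.g. national) biomass total depends on how correlated the per-pixel errors are. The sketch below uses purely illustrative numbers and a uniform correlation; real error correlation structures are spatial and must be estimated.

```python
import numpy as np

def aggregate_biomass_sd(pixel_sd, corr):
    """Standard deviation of the summed biomass given per-pixel error
    standard deviations and a pixel-to-pixel error correlation matrix:
    Var(sum) = sum_ij corr_ij * sd_i * sd_j."""
    cov = corr * np.outer(pixel_sd, pixel_sd)
    return float(np.sqrt(cov.sum()))

n = 100
pixel_sd = np.full(n, 10.0)   # illustrative per-pixel error, Mg C

# Independent errors average out: total sd = 10 * sqrt(100) = 100.
independent = aggregate_biomass_sd(pixel_sd, np.eye(n))

# Errors with 0.5 cross-correlation barely average out at all (~711).
half_corr = np.full((n, n), 0.5) + 0.5 * np.eye(n)
correlated = aggregate_biomass_sd(pixel_sd, half_corr)
```

This is why a map can look acceptable pixel-by-pixel yet carry a large national-scale uncertainty: systematic (correlated) model error does not shrink with the number of pixels.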
Quantifying radar-rainfall uncertainties in urban drainage flow modelling
NASA Astrophysics Data System (ADS)
Rico-Ramirez, M. A.; Liguori, S.; Schellart, A. N. A.
2015-09-01
This work presents the results of the implementation of a probabilistic system to model the uncertainty associated with radar rainfall (RR) estimates and the way this uncertainty propagates through the sewer system of an urban area located in the North of England. The spatial and temporal correlations of the RR errors as well as the error covariance matrix were computed to build a RR error model able to generate RR ensembles that reproduce the uncertainty associated with the measured rainfall. The results showed that the RR ensembles provide important information about the uncertainty in the rainfall measurement that can be propagated through the urban sewer system, and that the measured flow peaks and flow volumes are often bounded within the uncertainty area produced by the RR ensembles. In 55% of the simulated events, the uncertainties in RR measurements can explain the uncertainties observed in the simulated flow volumes. However, there are also some events where the RR uncertainty cannot explain the whole uncertainty observed in the simulated flow volumes, indicating that there are additional sources of uncertainty that must be considered, such as the uncertainty in the urban drainage model structure, the uncertainty in the urban drainage model calibrated parameters, and the uncertainty in the measured sewer flows.
Modern U-Pb chronometry of meteorites: advancing to higher time resolution reveals new problems
Amelin, Y.; Connelly, J.; Zartman, R.E.; Chen, J.-H.; Gopel, C.; Neymark, L.A.
2009-01-01
In this paper, we evaluate the factors that influence the accuracy of lead (Pb)-isotopic ages of meteorites and that may be responsible for inconsistencies between Pb-isotopic and extinct-nuclide timescales of the early Solar System: instrumental mass fractionation and other possible analytical sources of error, the presence of more than one component of non-radiogenic Pb, migration of ancient radiogenic Pb by diffusion and other mechanisms, possible heterogeneity of the isotopic composition of uranium (U), uncertainties in the decay constants of uranium isotopes, the possible presence of "freshly synthesized" actinides with short half-life (e.g. 234U) in the early Solar System, possible initial disequilibrium in the uranium decay chains, and potential fractionation of radiogenic Pb isotopes and U isotopes caused by alpha-recoil and subsequent laboratory treatment. We review the use of 232Th/238U values to assist in making accurate interpretations of the U-Pb ages of meteorite components. We discuss recently published U-Pb dates of calcium-aluminum-rich inclusions (CAIs), and their apparent disagreement with the extinct-nuclide dates, in the context of capability and common pitfalls in modern meteorite chronology. Finally, we discuss the requirements on meteorites that are intended to be used as reference points in building a consistent time scale of the early Solar System, based on the combined use of the U-Pb system and extinct-nuclide chronometers.
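Several of the factors listed (uranium isotopic heterogeneity, decay-constant uncertainties) act through the 207Pb/206Pb age equation, which can be inverted numerically as sketched below. The decay constants are the commonly used Jaffey et al. values and 137.88 is the long-standing conventional 238U/235U ratio; the paper's point is precisely that this ratio may vary, so treat it as an adjustable input.

```python
import math

L238 = 1.55125e-10  # 238U decay constant, 1/yr (Jaffey et al.)
L235 = 9.8485e-10   # 235U decay constant, 1/yr
U85 = 137.88        # conventional 238U/235U; known to vary slightly

def pb_ratio(t_yr, u85=U85):
    """Radiogenic 207Pb/206Pb predicted at age t:
    (exp(L235 t) - 1) / (u85 * (exp(L238 t) - 1))."""
    return math.expm1(L235 * t_yr) / (u85 * math.expm1(L238 * t_yr))

def pb_pb_age(ratio, lo=1e6, hi=5e9):
    """Invert the 207Pb/206Pb age equation by bisection; the ratio is
    monotone increasing in t, so this converges unconditionally."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if pb_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A radiogenic 207Pb/206Pb near 0.625 corresponds to ~4.567 Gyr, the
# CAI age scale; per-mil shifts in the 238U/235U ratio move the
# computed age at the Myr level, which is why U isotope heterogeneity
# matters for high-resolution chronometry.
age = pb_pb_age(0.6248)
```

Rerunning `pb_pb_age` with a slightly different `u85` quantifies the age shift from uranium isotopic heterogeneity directly.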
Numerical Algorithms for Precise and Efficient Orbit Propagation and Positioning
NASA Astrophysics Data System (ADS)
Bradley, Ben K.
Motivated by the growing space catalog and the demands for precise orbit determination with shorter latency for science and reconnaissance missions, this research improves the computational performance of orbit propagation through more efficient and precise numerical integration and frame transformation implementations. Propagation of satellite orbits is required for astrodynamics applications including mission design, orbit determination in support of operations and payload data analysis, and conjunction assessment. Each of these applications has somewhat different requirements in terms of accuracy, precision, latency, and computational load. This dissertation develops procedures to achieve various levels of accuracy while minimizing computational cost for diverse orbit determination applications. This is done by addressing two aspects of orbit determination: (1) numerical integration used for orbit propagation and (2) precise frame transformations necessary for force model evaluation and station coordinate rotations. This dissertation describes a recently developed method for numerical integration, dubbed Bandlimited Collocation Implicit Runge-Kutta (BLC-IRK), and compares its efficiency in propagating orbits to existing techniques commonly used in astrodynamics. The BLC-IRK scheme uses generalized Gaussian quadratures for bandlimited functions. It requires significantly fewer force function evaluations than explicit Runge-Kutta schemes and approaches the efficiency of the 8th-order Gauss-Jackson multistep method. Converting between the Geocentric Celestial Reference System (GCRS) and International Terrestrial Reference System (ITRS) is necessary for many applications in astrodynamics, such as orbit propagation, orbit determination, and analyzing geoscience data from satellite missions.
This dissertation provides simplifications to the Celestial Intermediate Origin (CIO) transformation scheme and Earth orientation parameter (EOP) storage for use in positioning and orbit propagation, yielding savings in computation time and memory. Orbit propagation and position transformation simulations are analyzed to generate a complete set of recommendations for performing the ITRS/GCRS transformation for a wide range of needs, encompassing real-time on-board satellite operations and precise post-processing applications. In addition, a complete derivation of the ITRS/GCRS frame transformation time-derivative is detailed for use in velocity transformations between the GCRS and ITRS and is applied to orbit propagation in the rotating ITRS. EOP interpolation methods and ocean tide corrections are shown to impact the ITRS/GCRS transformation accuracy at the level of 5 cm and 20 cm on the surface of the Earth and at the Global Positioning System (GPS) altitude, respectively. The precession-nutation and EOP simplifications yield maximum propagation errors of approximately 2 cm and 1 m after 15 minutes and 6 hours in low-Earth orbit (LEO), respectively, while reducing computation time and memory usage. Finally, for orbit propagation in the ITRS, a simplified scheme is demonstrated that yields propagation errors under 5 cm after 15 minutes in LEO. This approach is beneficial for orbit determination based on GPS measurements. We conclude with a summary of recommendations on EOP usage and bias-precession-nutation implementations for achieving a wide range of transformation and propagation accuracies at several altitudes. This comprehensive set of recommendations allows satellite operators, astrodynamicists, and scientists to make informed decisions when choosing the best implementation for their application, balancing accuracy and computational complexity.
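The dominant term in the ITRS/GCRS transformation discussed above is Earth's spin, captured by the Earth Rotation Angle (ERA) polynomial of the IERS Conventions. The sketch below applies only that single z-axis rotation; it deliberately omits precession-nutation and polar motion, so it is a many-meter-level approximation, not the centimeter-level CIO chain the dissertation analyzes.

```python
import math

def earth_rotation_angle(jd_ut1):
    """Earth Rotation Angle (radians) from the IERS Conventions
    polynomial: ERA = 2*pi*(0.7790572732640
                            + 1.00273781191135448 * (JD_UT1 - 2451545.0)),
    reduced to one revolution."""
    tu = jd_ut1 - 2451545.0
    frac = 0.7790572732640 + 1.00273781191135448 * tu
    return 2.0 * math.pi * (frac % 1.0)

def gcrs_to_itrs_approx(r_gcrs, jd_ut1):
    """Spin-only approximation to the GCRS->ITRS transformation:
    rotate about the z-axis by the ERA. Precession-nutation and polar
    motion (small additional rotations) are neglected here."""
    era = earth_rotation_angle(jd_ut1)
    c, s = math.cos(era), math.sin(era)
    x, y, z = r_gcrs
    return (c * x + s * y, -s * x + c * y, z)

# Rotating a 7000 km position vector; the rotation preserves its length.
r_itrs = gcrs_to_itrs_approx((7000e3, 0.0, 0.0), 2451545.0)
```

The dissertation's accuracy figures (5 cm from EOP interpolation, 2 cm to 1 m from precession-nutation simplifications) quantify exactly how much the terms dropped in this sketch matter at various accuracy tiers.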
NASA Astrophysics Data System (ADS)
Ingham, Edwina S.; Cook, Nigel J.; Cliff, John; Ciobanu, Cristiana L.; Huddleston, Adam
2014-01-01
The common sulfide mineral pyrite is abundant throughout sedimentary uranium systems at Pepegoona, Pepegoona West and Pannikan, Lake Eyre Basin, South Australia. Combined chemical, isotopic and microstructural analysis of pyrite indicates variation in fluid composition, sulfur source and precipitation conditions during a protracted mineralization event. The results show the significant role played by pyrite as a metal scavenger and monitor of fluid changes in low-temperature hydrothermal systems. In-situ micrometer-scale sulfur isotope analyses of pyrite demonstrated broad-scale isotopic heterogeneity (δ34S = -43.9 to +32.4‰VCDT), indicative of complex, multi-faceted pyrite evolution, and sulfur derived from more than a single source. Preserved textures support this assertion and indicate a genetic model involving more than one phase of pyrite formation. Authigenic pyrite underwent prolonged evolution and recrystallization, evidenced by a genetic relationship between archetypal framboidal aggregates and pyrite euhedra. Secondary hydrothermal pyrite commonly displays hyper-enrichment of several trace elements (Mn, Co, Ni, As, Se, Mo, Sb, W and Tl) in ore-bearing horizons. Hydrothermal fluids of magmatic and meteoric origins supplied metals to the system but the geochemical signature of pyrite suggests a dominantly granitic source and also the influence of mafic rock types. Irregular variation in δ34S, coupled with oscillatory trace element zonation in secondary pyrite, is interpreted in terms of continuous variations in fluid composition and cycles of diagenetic recrystallization. A late-stage oxidizing fluid may have mobilized selenium from pre-existing pyrite. Subsequent restoration of reduced conditions within the aquifer caused ongoing pyrite re-crystallization and precipitation of selenium as native selenium. These results provide the first qualitative constraints on the formation mechanisms of the uranium deposits at Beverley North. 
Insights into depositional conditions and sources of both sulfide and uranium mineralization and an improved understanding of pyrite geochemistry can also underpin an effective vector for uranium exploration at Beverley North and other sedimentary systems of the Lake Eyre Basin, as well as in comparable geological environments elsewhere.
Computational fluid dynamics simulation of sound propagation through a blade row.
Zhao, Lei; Qiao, Weiyang; Ji, Liang
2012-10-01
The propagation of sound waves through a blade row is investigated numerically. A wave-splitting method in a two-dimensional duct with arbitrary mean flow is presented, based on which the pressure amplitude of each wave mode can be extracted at an axial plane. The propagation of a sound wave through a flat-plate blade row has been simulated by solving the unsteady Reynolds-averaged Navier-Stokes (URANS) equations. The transmission and reflection coefficients obtained by computational fluid dynamics (CFD) are compared with semi-analytical results. The comparison indicates that a low-order URANS scheme will cause large errors if the sound pressure level is below -100 dB (referenced to the product of density, mean-flow velocity, and speed of sound). The CFD code has sufficient precision for solving the interaction of a sound wave with a blade row, provided the boundary reflections have no substantial influence. Finally, the effects of flow Mach number, blade thickness, and blade turning angle on sound propagation are studied.
Attitude Representations for Kalman Filtering
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Bauer, Frank H. (Technical Monitor)
2001-01-01
The four-component quaternion has the lowest dimensionality possible for a globally nonsingular attitude representation: it represents the attitude matrix as a homogeneous quadratic function, and its dynamic propagation equation is bilinear in the quaternion and the angular velocity. The quaternion must obey a unit-norm constraint, though, so Kalman filters often employ a quaternion for the global attitude estimate and a three-component representation for small errors about the estimate. We consider these mixed attitude representations for both a first-order extended Kalman filter and a second-order filter, as well as for quaternion-norm-preserving attitude propagation.
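The norm-preserving propagation mentioned above can be sketched generically. The snippet below is a minimal illustration, not code from the paper: it assumes a scalar-last quaternion convention and piecewise-constant angular velocity, for which the kinematic equation dq/dt = (1/2) Ω(ω) q has a closed-form, exactly norm-preserving update.

```python
import numpy as np

def omega_matrix(w):
    # 4x4 skew-symmetric matrix Omega(w) for scalar-last quaternions,
    # so that dq/dt = 0.5 * Omega(w) @ q
    wx, wy, wz = w
    return np.array([
        [0.0,  wz, -wy,  wx],
        [-wz, 0.0,  wx,  wy],
        [ wy, -wx, 0.0,  wz],
        [-wx, -wy, -wz, 0.0],
    ])

def propagate(q, w, dt):
    # Closed-form update for constant angular velocity w over a step dt.
    # Because Omega(w)^2 = -|w|^2 I, the matrix exponential reduces to
    # cos/sin terms, and the unit norm of q is preserved exactly.
    n = np.linalg.norm(w)
    if n < 1e-12:
        return q.copy()
    half = 0.5 * n * dt
    A = np.cos(half) * np.eye(4) + (np.sin(half) / n) * omega_matrix(w)
    return A @ q

q0 = np.array([0.0, 0.0, 0.0, 1.0])                    # identity attitude
q1 = propagate(q0, np.array([0.0, 0.0, np.pi]), 1.0)   # 180 deg about z
print(q1, np.linalg.norm(q1))  # ~[0, 0, 1, 0], norm 1 to machine precision
```

A filter built on an update like this can keep the global quaternion estimate exactly unit-norm while estimating only a three-component error state, in the spirit of the mixed representations the abstract describes.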
NASA Astrophysics Data System (ADS)
Fort, Joaquim
2011-05-01
It is shown that Lotka-Volterra interaction terms are not appropriate to describe vertical cultural transmission. Appropriate interaction terms are derived and used to compute the effect of vertical cultural transmission on demic front propagation. They are also applied to a specific example, the Neolithic transition in Europe. In this example, it is found that the effect of vertical cultural transmission can be important (about 30%). On the other hand, simple models based on differential equations can lead to large errors (above 50%). Further physical, biophysical, and cross-disciplinary applications are outlined.
Scheme for Terminal Guidance Utilizing Acousto-Optic Correlator.
longitudinally extending acousto-optic device as index-of-refraction variation pattern signals. Real-time signals corresponding to the scene actually being viewed...by the vehicle are propagated across the stored signals, and the results of an acousto-optic correlation are utilized to determine X and Y error
Precipitation is a key control on watershed hydrologic modelling output, with errors in rainfall propagating through subsequent stages of water quantity and quality analysis. Most watershed models incorporate precipitation data from rain gauges; higher-resolution data sources are...
Effect of slope errors on the performance of mirrors for x-ray free electron laser applications
Pardini, Tom; Cocco, Daniele; Hau-Riege, Stefan P.
2015-12-02
In this work we point out that slope errors play only a minor role in the performance of a certain class of x-ray optics for X-ray Free Electron Laser (XFEL) applications. Using physical optics propagation simulations and the formalism of Church and Takacs [Opt. Eng. 34, 353 (1995)], we show that diffraction-limited optics commonly found at XFEL facilities possess a critical spatial wavelength that makes them less sensitive to slope errors and more sensitive to height errors. Given the number of XFELs currently operating or under construction across the world, we hope that this simple observation will help to correctly define specifications for x-ray optics to be deployed at XFELs, possibly reducing the budget and the timeframe needed to complete the optical manufacturing and metrology.
NASA Astrophysics Data System (ADS)
Rittersdorf, I. M.; Antonsen, T. M., Jr.; Chernin, D.; Lau, Y. Y.
2011-10-01
Random fabrication errors may have detrimental effects on the performance of traveling-wave tubes (TWTs) of all types. A new scaling law for the modification of the average small-signal gain and of the output phase is derived from the third-order ordinary differential equation that governs the forward-wave interaction in a TWT in the presence of random error distributed along the axis of the tube. Analytical results compare favorably with numerical results for both gain and phase modifications resulting from random error in the phase velocity of the slow-wave circuit. Results on the effect of the reverse-propagating circuit mode will be reported. This work was supported by AFOSR, ONR, L-3 Communications Electron Devices, and Northrop Grumman Corporation.
Effect of slope errors on the performance of mirrors for x-ray free electron laser applications.
Pardini, Tom; Cocco, Daniele; Hau-Riege, Stefan P
2015-12-14
In this work we point out that slope errors play only a minor role in the performance of a certain class of x-ray optics for X-ray Free Electron Laser (XFEL) applications. Using physical optics propagation simulations and the formalism of Church and Takacs [Opt. Eng. 34, 353 (1995)], we show that diffraction-limited optics commonly found at XFEL facilities possess a critical spatial wavelength that makes them less sensitive to slope errors and more sensitive to height errors. Given the number of XFELs currently operating or under construction across the world, we hope that this simple observation will help to correctly define specifications for x-ray optics to be deployed at XFELs, possibly reducing the budget and the timeframe needed to complete the optical manufacturing and metrology.
On High-Order Radiation Boundary Conditions
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1995-01-01
In this paper we develop the theory of high-order radiation boundary conditions for wave propagation problems. In particular, we study the convergence of sequences of time-local approximate conditions to the exact boundary condition, and subsequently estimate the error in the solutions obtained using these approximations. We show that for finite times the Padé approximants proposed by Engquist and Majda lead to exponential convergence if the solution is smooth, but that good long-time error estimates cannot hold for spatially local conditions. Applications in fluid dynamics are also discussed.
On the use of the covariance matrix to fit correlated data
NASA Astrophysics Data System (ADS)
D'Agostini, G.
1994-07-01
Best fits to data which are affected by systematic uncertainties on the normalization factor have the tendency to produce curves lower than expected if the covariance matrix of the data points is used in the definition of the χ2. This paper shows that the effect is a direct consequence of the hypothesis used to estimate the empirical covariance matrix, namely the linearization on which the usual error propagation relies. The bias can become unacceptable if the normalization error is large, or a large number of data points are fitted.
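The bias described in this abstract is easy to reproduce numerically. The sketch below uses illustrative numbers not taken from the paper (two measurements of the same quantity with 2% independent errors and a 10% common normalization error): building the covariance by linearized propagation on the measured values pulls the best-fit constant below both data points.

```python
import numpy as np

# Two measurements of the same quantity with 2% independent errors
# and a fully correlated 10% normalization error (illustrative values).
x = np.array([8.0, 8.5])
sig_ind = 0.02 * x          # independent (uncorrelated) errors
f_norm = 0.10               # fractional normalization uncertainty

# Empirical covariance from linearized error propagation on the
# measured values (the hypothesis the paper identifies as the culprit):
# V_ij = delta_ij * sig_i^2 + f^2 * x_i * x_j
V = np.diag(sig_ind**2) + f_norm**2 * np.outer(x, x)

# Least-squares estimate of a constant k: minimize (x - k·1)^T V^-1 (x - k·1)
Vinv = np.linalg.inv(V)
ones = np.ones_like(x)
k = (ones @ Vinv @ x) / (ones @ Vinv @ ones)
print(k)  # ~7.87: below BOTH measurements, illustrating the bias
```

The counterintuitive result (the "best" value lying below both data points) grows worse as the normalization error or the number of points increases, consistent with the abstract's conclusion.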
NASA Technical Reports Server (NTRS)
Kerczewski, Robert J.; Fujikawa, Gene; Svoboda, James S.; Lizanich, Paul J.
1990-01-01
Satellite communications links are subject to distortions which result in an amplitude versus frequency response which deviates from the ideal flat response. Such distortions result from propagation effects such as multipath fading and scintillation and from transponder and ground terminal hardware imperfections. Bit-error rate (BER) degradation resulting from several types of amplitude response distortions were measured. Additional tests measured the amount of BER improvement obtained by flattening the amplitude response of a distorted laboratory simulated satellite channel. The results of these experiments are presented.
1989-04-13
19 5.3 The Solution, BSM2 , BSM3 . ...................................... 21 6. Description of test example...are modified for the boundary conditions. The sections on the preprocessor subroutine BSM1 and the solution subroutines BSM2 , BSM3 may be skipped by...interior row j = N-1 to the solution error C5 on the second row j = IE(2) of the last block, so that P3 = C5 R31 (5.18) 20 5.3 The Solution. BSM2
A Case Study of the Impact of AIRS Temperature Retrievals on Numerical Weather Prediction
NASA Technical Reports Server (NTRS)
Reale, O.; Atlas, R.; Jusem, J. C.
2004-01-01
Large errors in numerical weather prediction are often associated with explosive cyclogenesis. Most studies focus on the under-forecasting error, i.e., cases of rapidly developing cyclones which are poorly predicted in numerical models. However, the over-forecasting error (i.e., predicting an explosively developing cyclone which does not occur in reality) is a very common error that severely impacts the forecasting skill of all models and may also carry economic costs if associated with operational forecasting. Unnecessary precautions taken by marine activities can result in severe economic loss. Moreover, frequent occurrence of over-forecasting can undermine the reliance on operational weather forecasting. Therefore, it is important to understand and reduce predictions of extreme weather associated with explosive cyclones which do not actually develop. In this study we choose a very prominent case of over-forecasting error in the northwestern Pacific. A 960 hPa cyclone develops in less than 24 hours in the 5-day forecast, with a deepening rate of about 30 hPa in one day. The cyclone is not observed in the analyses and is thus a case of severe over-forecasting. By assimilating AIRS data, the error is largely eliminated. By following the propagation of the anomaly that generates the spurious cyclone, it is found that a small mid-tropospheric geopotential height negative anomaly over the northern part of the Indian subcontinent in the initial conditions propagates westward, is amplified by orography, and generates a very intense jet streak in the subtropical jet stream, with consequent explosive cyclogenesis over the Pacific. The AIRS assimilation eliminates this anomaly, which may have been caused by erroneous upper-air data, and represents the jet stream more correctly. The energy associated with the jet is distributed over a much broader area, and as a consequence a multiple, but much more moderate, cyclogenesis is observed.
A non-stochastic iterative computational method to model light propagation in turbid media
NASA Astrophysics Data System (ADS)
McIntyre, Thomas J.; Zemp, Roger J.
2015-03-01
Monte Carlo models are widely used to model light transport in turbid media, however their results implicitly contain stochastic variations. These fluctuations are not ideal, especially for inverse problems where Jacobian matrix errors can lead to large uncertainties upon matrix inversion. Yet Monte Carlo approaches are more computationally favorable than solving the full Radiative Transport Equation. Here, a non-stochastic computational method of estimating fluence distributions in turbid media is proposed, which is called the Non-Stochastic Propagation by Iterative Radiance Evaluation method (NSPIRE). Rather than using stochastic means to determine a random walk for each photon packet, the propagation of light from any element to all other elements in a grid is modelled simultaneously. For locally homogeneous anisotropic turbid media, the matrices used to represent scattering and projection are shown to be block Toeplitz, which leads to computational simplifications via convolution operators. To evaluate the accuracy of the algorithm, 2D simulations were done and compared against Monte Carlo models for the cases of an isotropic point source and a pencil beam incident on a semi-infinite turbid medium. The model was shown to have a mean percent error less than 2%. The algorithm represents a new paradigm in radiative transport modelling and may offer a non-stochastic alternative to modeling light transport in anisotropic scattering media for applications where the diffusion approximation is insufficient.
NASA Astrophysics Data System (ADS)
Aristoff, Jeffrey M.; Horwood, Joshua T.; Poore, Aubrey B.
2014-01-01
We present a new variable-step Gauss-Legendre implicit-Runge-Kutta-based approach for orbit and uncertainty propagation, VGL-IRK, which includes adaptive step-size error control and which collectively, rather than individually, propagates nearby sigma points or states. The performance of VGL-IRK is compared to a professional (variable-step) implementation of Dormand-Prince 8(7) (DP8) and to a fixed-step, optimally-tuned, implementation of modified Chebyshev-Picard iteration (MCPI). Both nearly-circular and highly-elliptic orbits are considered using high-fidelity gravity models and realistic integration tolerances. VGL-IRK is shown to be up to eleven times faster than DP8 and up to 45 times faster than MCPI (for the same accuracy), in a serial computing environment. Parallelization of VGL-IRK and MCPI is also discussed.
NASA Technical Reports Server (NTRS)
Buglia, James J.
1989-01-01
An analysis was made of the error in the minimum altitude of a geometric ray from an orbiting spacecraft to the Sun. The sunrise and sunset errors are highly correlated and are opposite in sign. With the ephemeris generated for the SAGE 1 instrument data reduction, these errors can be as large as 200 to 350 meters (1 sigma) after 7 days of orbit propagation. The bulk of this error results from errors in the position of the orbiting spacecraft rather than errors in computing the position of the Sun. These errors, in turn, result from the discontinuities in the ephemeris tapes resulting from the orbital determination process. Data taken from the end of the definitive ephemeris tape are used to generate the predict data for the time interval covered by the next arc of the orbit determination process. The predicted data are then updated by using the tracking data. The growth of these errors is very nearly linear, with a slight nonlinearity caused by the beta angle. An approximate analytic method is given, which predicts the magnitude of the errors and their growth in time with reasonable fidelity.
Finite element modeling of light propagation in fruit under illumination of continuous-wave beam
USDA-ARS?s Scientific Manuscript database
Spatially-resolved spectroscopy provides a means for measuring the optical properties of biological tissues, based on analytical solutions to diffusion approximation for semi-infinite media under the normal illumination of infinitely small size light beam. The method is, however, prone to error in m...
NASA Astrophysics Data System (ADS)
Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.
2017-12-01
The measurement errors and spatial prediction uncertainties of soil properties in the modeling community are usually assessed against measured values when available. However, of equal importance is the assessment of the impacts of errors and uncertainty on cost-benefit analysis and risk assessments. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the size of errors from different sources and their implications for management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transactions, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE of 0.79 pH units was from spatial aggregation (SSURGO vs kriging), while the measurement methods had the lowest RMSE of 0.06 pH units. Assuming an order of data acquisition based on the transaction distance, i.e., from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha-1 of lime, which translates into 111 ha-1 that the farmer has to factor into the cost-benefit analysis. However, this analysis needs to be based on uncertainty predictions (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into a 555-1,111 investment that needs to be assessed against the risk. The modeling community can benefit from such analysis; however, error size and spatial distribution for global and regional predictions need to be assessed against the variability of other drivers and the impact on management decisions.
Impact of Non-Gaussian Error Volumes on Conjunction Assessment Risk Analysis
NASA Technical Reports Server (NTRS)
Ghrist, Richard W.; Plakalovic, Dragan
2012-01-01
An understanding of how an initially Gaussian error volume becomes non-Gaussian over time is an important consideration for space-vehicle conjunction assessment. Traditional assumptions applied to the error volume artificially suppress the true non-Gaussian nature of the space-vehicle position uncertainties. For typical conjunction assessment objects, representation of the error volume by a state error covariance matrix in a Cartesian reference frame is a more significant limitation than is the assumption of linearized dynamics for propagating the error volume. In this study, the impact of each assumption is examined and isolated for each point in the volume. The limitation arising from representing the error volume in a Cartesian reference frame is corrected by employing a Monte Carlo approach to the probability of collision (Pc), using equinoctial samples from the Cartesian position covariance at the time of closest approach (TCA) between the pair of space objects. A set of actual, higher-risk (Pc >= 10^-4) conjunction events in various low-Earth orbits is analyzed using Monte Carlo methods. The impact of non-Gaussian error volumes on Pc for these cases is minimal, even when the deviation from a Gaussian distribution is significant.
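The Monte Carlo Pc computation mentioned here can be sketched in its simplest form. The toy below works directly in a 2D encounter plane with assumed numbers (the study itself samples equinoctial elements from the full Cartesian covariance, which this sketch omits): sample relative positions from the position uncertainty and count the fraction falling inside the combined hard-body radius.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo probability of collision in a 2D encounter plane.
# All numbers are assumed for illustration, not taken from the paper.
sigma = 100.0      # isotropic position uncertainty, metres
miss = 200.0       # nominal miss distance, metres
radius = 20.0      # combined hard-body radius, metres
n = 2_000_000      # Monte Carlo samples

xy = rng.standard_normal((n, 2)) * sigma   # sampled relative positions
xy[:, 0] += miss                           # offset by the nominal miss
pc = np.mean(np.hypot(xy[:, 0], xy[:, 1]) < radius)
print(pc)  # Monte Carlo estimate of Pc
```

Replacing the Gaussian sampler with states drawn from the covariance and propagated through the nonlinear dynamics to TCA is what lets this approach capture non-Gaussian error volumes.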
Subresolution Displacements in Finite Difference Simulations of Ultrasound Propagation and Imaging.
Pinton, Gianmarco F
2017-03-01
Time domain finite difference simulations are used extensively to simulate wave propagation. They approximate the wave field on a discrete domain with a grid spacing that is typically on the order of a tenth of a wavelength. The smallest displacements that can be modeled by this type of simulation are thus limited to discrete values that are integer multiples of the grid spacing. This paper presents a method to represent continuous and subresolution displacements by varying the impedance of individual elements in a multielement scatterer. It is demonstrated that this method removes the limitations imposed by the discrete grid spacing by generating a continuum of displacements as measured by the backscattered signal. The method is first validated on an ideal perfect correlation case with a single scatterer. It is subsequently applied to a more complex case with a field of scatterers that model an acoustic radiation force-induced displacement used in ultrasound elasticity imaging. A custom finite difference simulation tool is used to simulate propagation from ultrasound imaging pulses in the scatterer field. These simulated transmit-receive events are then beamformed into images, which are tracked with a correlation-based algorithm to determine the displacement. A linear predictive model is developed to analytically describe the relationship between element impedance and backscattered phase shift. The error between model and simulation is λ/1364, where λ is the acoustical wavelength. An iterative method is also presented that reduces the simulation error to λ/5556 over one iteration. The proposed technique therefore offers a computationally efficient method to model continuous subresolution displacements of a scattering medium in ultrasound imaging. This method has applications that include ultrasound elastography, blood flow, and motion tracking.
This method also extends generally to finite difference simulations of wave propagation, such as electromagnetic or seismic waves.
Hoffmann, Sabine; Laurier, Dominique; Rage, Estelle; Guihenneuc, Chantal; Ancelet, Sophie
2018-01-01
Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies.
Laurier, Dominique; Rage, Estelle
2018-01-01
Exposure measurement error represents one of the most important sources of uncertainty in epidemiology. When exposure uncertainty is not or only poorly accounted for, it can lead to biased risk estimates and a distortion of the shape of the exposure-response relationship. In occupational cohort studies, the time-dependent nature of exposure and changes in the method of exposure assessment may create complex error structures. When a method of group-level exposure assessment is used, individual worker practices and the imprecision of the instrument used to measure the average exposure for a group of workers may give rise to errors that are shared between workers, within workers or both. In contrast to unshared measurement error, the effects of shared errors remain largely unknown. Moreover, exposure uncertainty and magnitude of exposure are typically highest for the earliest years of exposure. We conduct a simulation study based on exposure data of the French cohort of uranium miners to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the shape of the exposure-response curve in proportional hazards models. Our results indicate that uncertainty components shared within workers cause more bias in risk estimation and a more severe attenuation of the exposure-response relationship than unshared exposure uncertainty or exposure uncertainty shared between individuals. These findings underline the importance of careful characterisation and modeling of exposure uncertainty in observational studies. PMID:29408862
Evaluation of a scale-model experiment to investigate long-range acoustic propagation
NASA Technical Reports Server (NTRS)
Parrott, Tony L.; Mcaninch, Gerry L.; Carlberg, Ingrid A.
1987-01-01
Tests were conducted to evaluate the feasibility of using a scale-model experiment situated in an anechoic facility to investigate long-range sound propagation over ground terrain. For a nominal scale factor of 100:1, attenuations along a linear array of six microphones colinear with a continuous-wave sound source were measured over a range from 10 to 160 wavelengths for a nominal test frequency of 10 kHz. Most tests were made for a hard model surface (plywood), but limited tests were also made for a soft model surface (plywood with felt). For grazing-incidence propagation over the hard surface, measured and predicted attenuation trends were consistent for microphone locations out to between 40 and 80 wavelengths. Beyond 80 wavelengths, significant variability was observed that was caused by disturbances in the propagation medium. Also, there was evidence of extraneous propagation-path contributions to data irregularities at the more remote microphones. Sensitivity studies for the hard surface indicated a 2.5 dB change in the relative excess attenuation for a systematic error in source and microphone elevations on the order of 1 mm. For the soft-surface model, no comparable sensitivity was found.
Finite-difference time-domain synthesis of infrasound propagation through an absorbing atmosphere.
de Groot-Hedlin, C
2008-09-01
Equations applicable to finite-difference time-domain (FDTD) computation of infrasound propagation through an absorbing atmosphere are derived and examined in this paper. It is shown that over altitudes up to 160 km, and at frequencies relevant to global infrasound propagation, i.e., 0.02-5 Hz, the acoustic absorption in dB/m varies approximately as the square of the propagation frequency plus a small constant term. A second-order differential equation is presented for an atmosphere modeled as a compressible Newtonian fluid with low shear viscosity, acted on by a small external damping force. It is shown that the solution to this equation represents pressure fluctuations with the attenuation indicated above. Increased dispersion is predicted at altitudes over 100 km at infrasound frequencies. The governing propagation equation is separated into two partial differential equations that are first order in time for FDTD implementation. A numerical analysis of errors inherent to this FDTD method shows that the attenuation term imposes additional stability constraints on the FDTD algorithm. Comparison of FDTD results for models with and without attenuation shows that the predicted transmission losses for the attenuating media agree with those computed from synthesized waveforms.
Lankford, Christopher L; Does, Mark D
2018-02-01
Quantitative MRI may require correcting for nuisance parameters which can or must be constrained to independently measured or assumed values. The noise and/or bias in these constraints propagate to fitted parameters. As an example, the case of refocusing-pulse flip angle constraint in multiple spin echo T2 mapping is explored. An analytical expression for the mean-squared error of a parameter of interest was derived as a function of the accuracy and precision of an independent estimate of a nuisance parameter. The expression was validated by simulations and then used to evaluate the effects of flip angle (θ) constraint on the accuracy and precision of T̂2 for a variety of multi-echo T2 mapping protocols. Constraining θ improved T̂2 precision when the θ-map signal-to-noise ratio was greater than approximately one-half that of the first spin echo image. For many practical scenarios, constrained fitting was calculated to reduce not just the variance but the full mean-squared error of T̂2, for bias in θ̂ ≲ 6%. The analytical expression derived in this work can be applied to inform experimental design in quantitative MRI. The example application to T2 mapping provided specific cases, depending on θ̂ accuracy and precision, in which θ̂ measurement and constraint would be beneficial to T̂2 variance or mean-squared error. Magn Reson Med 79:673-682, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
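The style of analysis described, propagating the accuracy (bias) and precision (variance) of a constrained nuisance parameter into the mean-squared error of a fitted parameter, can be sketched with a generic toy model. The functional form p = m/sin(n) and all numbers below are invented for illustration; they are not the paper's T2 model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: parameter of interest p is recovered from a measurement m
# via a nuisance parameter n as p = m / sin(n). Constraining n to an
# independent estimate with bias b and std s propagates both into p.
# First-order: MSE(p) ≈ (dp/dm)^2 var(m) + (dp/dn)^2 (s^2 + b^2)
m_true, n_true = 1.0, 1.0
sigma_m, s, b = 0.01, 0.02, 0.01

p_true = m_true / np.sin(n_true)
dpdm = 1.0 / np.sin(n_true)
dpdn = -m_true * np.cos(n_true) / np.sin(n_true) ** 2

mse_analytic = dpdm**2 * sigma_m**2 + dpdn**2 * (s**2 + b**2)

# Monte Carlo check of the analytic propagation
n_mc = 200_000
m = m_true + sigma_m * rng.standard_normal(n_mc)
n = n_true + b + s * rng.standard_normal(n_mc)
p = m / np.sin(n)
mse_mc = np.mean((p - p_true) ** 2)
print(mse_analytic, mse_mc)  # agree to first order
```

The (s² + b²) factor is the variance-plus-bias-squared decomposition the abstract relies on: improving either the precision or the accuracy of the nuisance estimate reduces the full MSE of the fitted parameter.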
Study of a co-designed decision feedback equalizer, deinterleaver, and decoder
NASA Technical Reports Server (NTRS)
Peile, Robert E.; Welch, Loyd
1990-01-01
A technique that promises better-quality data from band-limited channels at lower received power in digital transmission systems is presented. Data transmission in such systems often suffers from intersymbol interference (ISI) and noise. Two separate techniques, channel coding and equalization, have driven considerable advances in the state of communication systems, and both concern themselves with removing the undesired effects of a communication channel. Equalizers mitigate the ISI, whereas coding schemes are used to incorporate error correction. In the past, most of the research in these two areas has been carried out separately. However, the individual techniques have strengths and weaknesses that are complementary in many applications: an integrated approach realizes gains in excess of those of a simple juxtaposition. Coding schemes have been successfully used in cascade with linear equalizers, which in the absence of ISI provide excellent performance. However, when both the ISI and the noise level are relatively high, nonlinear receivers like the decision feedback equalizer (DFE) perform better. The DFE has its drawbacks: it suffers from error propagation. The technique presented here takes advantage of interleaving to integrate the two approaches so that the error propagation in the DFE can be reduced with the help of the error correction provided by the decoder. The results of simulations carried out for both binary and non-binary channels confirm that significant gain can be obtained by codesigning the equalizer and decoder. Although only systems with time-invariant channels and a simple DFE with linear filters were examined, the technique is fairly general and can easily be modified for more sophisticated equalizers to obtain even larger gains.
Solubility testing of actinides on breathing-zone and area air samples
NASA Astrophysics Data System (ADS)
Metzger, Robert Lawrence
The solubility of inhaled radionuclides in the human lung is an important characteristic of the compounds needed to perform internal dosimetry assessments for exposed workers. A solubility testing method for uranium and several common actinides has been developed with sufficient sensitivity to allow profiles to be determined from routine breathing zone and area air samples in the workplace. Air samples are covered with a clean filter to form a filter-sample-filter sandwich, which is immersed in an extracellular lung serum simulant solution. The sample is moved to a fresh beaker of the lung fluid simulant each day for one week, and then weekly until the end of the 28-day test period. The soak solutions are wet-ashed with nitric acid and hydrogen peroxide to destroy the organic components of the lung simulant solution prior to extraction of the nuclides of interest directly into an extractive scintillator for subsequent counting on a Photon-Electron Rejecting Alpha Liquid Scintillation (PERALS) spectrometer. Solvent extraction methods utilizing the extractive scintillators have been developed for the isotopes of uranium, plutonium, and curium. The procedures normally produce an isotopic recovery greater than 95% and have been used to develop solubility profiles from air samples with 40 pCi or less of U3O8. This makes it possible to characterize solubility profiles in every section of operating facilities where airborne nuclides are found using common breathing zone air samples. The new method was evaluated by analyzing uranium compounds from two uranium mills whose product had been previously analyzed by in vitro solubility testing in the laboratory and in vivo solubility testing in rodents. The new technique compared well with the in vivo rodent solubility profiles.
The method was then used to evaluate the solubility profiles in all process sections of an operating in situ uranium plant using breathing zone and area air samples collected during routine plant operations. The solubility profiles developed from this work showed excellent agreement with the results of the worker urine bioassay program at the plant and identified a significant error in existing internal dose assessments at this facility.
NASA Technical Reports Server (NTRS)
Snow, Frank; Harman, Richard; Garrick, Joseph
1988-01-01
The Gamma Ray Observatory (GRO) spacecraft needs a highly accurate attitude knowledge to achieve its mission objectives. Utilizing the fixed-head star trackers (FHSTs) for observations and gyroscopes for attitude propagation, the discrete Kalman filter processes the attitude data to obtain an onboard accuracy of 86 arc seconds (3 sigma). A combination of linear analysis and simulations using the GRO Software Simulator (GROSS) is employed to investigate the Kalman filter for stability and the effects of corrupted observations (misalignment, noise), incomplete dynamic modeling, and nonlinear errors on the Kalman filter. In the simulations, on-board attitude is compared with true attitude, the sensitivity of attitude error to model errors is graphed, and a statistical analysis is performed on the residuals of the Kalman filter. In this paper, the modeling and sensor errors that degrade the Kalman filter solution beyond mission requirements are studied, and methods are offered to identify the source of these errors.
Hanson, Sonya M.; Ekins, Sean; Chodera, John D.
2015-01-01
All experimental assay data contains error, but the magnitude, type, and primary origin of this error is often not obvious. Here, we describe a simple set of assay modeling techniques based on the bootstrap principle that allow sources of error and bias to be simulated and propagated into assay results. We demonstrate how deceptively simple operations—such as the creation of a dilution series with a robotic liquid handler—can significantly amplify imprecision and even contribute substantially to bias. To illustrate these techniques, we review an example of how the choice of dispensing technology can impact assay measurements, and show how large contributions to discrepancies between assays can be easily understood and potentially corrected for. These simple modeling techniques—illustrated with an accompanying IPython notebook—can allow modelers to understand the expected error and bias in experimental datasets, and even help experimentalists design assays to more effectively reach accuracy and imprecision goals. PMID:26678597
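The bootstrap-style assay modeling the authors describe can be sketched in a few lines (an illustrative toy, not the authors' accompanying IPython notebook; the dilution factor and pipetting coefficient of variation below are assumed values):

```python
import random

def simulate_dilution_series(c0, n_steps, dilution=0.5, cv_pipette=0.02,
                             rng=None):
    """Simulate one dilution series in which each transfer carries a
    relative pipetting error with coefficient of variation cv_pipette."""
    rng = rng or random.Random()
    concs = [c0]
    for _ in range(n_steps):
        # each realized dilution factor is perturbed multiplicatively
        realized = dilution * (1.0 + rng.gauss(0.0, cv_pipette))
        concs.append(concs[-1] * realized)
    return concs

def bootstrap_final_conc(c0, n_steps, n_boot=5000, seed=42):
    """Bootstrap the distribution of the final concentration to see how
    per-step imprecision compounds along the series."""
    rng = random.Random(seed)
    finals = [simulate_dilution_series(c0, n_steps, rng=rng)[-1]
              for _ in range(n_boot)]
    mean = sum(finals) / n_boot
    var = sum((f - mean) ** 2 for f in finals) / (n_boot - 1)
    return mean, var ** 0.5

mean, sd = bootstrap_final_conc(100.0, n_steps=8)
print(mean, sd)
```

Because the per-step errors multiply, the relative spread of the final concentration grows roughly as the square root of the number of transfers, which is exactly the kind of amplification the abstract warns about.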
Asteroid orbital error analysis: Theory and application
NASA Technical Reports Server (NTRS)
Muinonen, K.; Bowell, Edward
1992-01-01
We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation gives the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
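The linearized law of error propagation invoked here (mapping the covariance matrix of the orbital elements into positional uncertainties) takes the general form cov_y = J cov_x Jᵀ. A minimal sketch, with a made-up toy Jacobian standing in for the real element-to-position partials:

```python
import numpy as np

def propagate_covariance(jacobian, cov_x):
    """Linearized law of error propagation: cov_y = J cov_x J^T."""
    J = np.asarray(jacobian, dtype=float)
    return J @ np.asarray(cov_x, dtype=float) @ J.T

# Toy example: map uncorrelated errors in two orbital elements to the
# variance of one positional coordinate via a hypothetical Jacobian row.
J = np.array([[2.0, 0.5]])
cov_elements = np.diag([0.01, 0.04])
cov_pos = propagate_covariance(J, cov_elements)
# cov_pos[0, 0] = 2^2 * 0.01 + 0.5^2 * 0.04 = 0.05
print(cov_pos)
```

Evaluating this with Jacobians at past or future epochs yields the positional uncertainty ellipsoids described in the abstract.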
Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian
2016-08-01
In this paper, we study the problem of cardiac conduction velocity (CCV) estimation for sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used here for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model with zero-mean white Gaussian noise values with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator for the case in which the synchronization times between various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
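For white Gaussian AT noise with equal variances, the maximum-likelihood planar-wavefront fit reduces to least squares on AT_i = t0 + s·p_i, where s is the slowness vector and the conduction speed is 1/||s||. A minimal sketch (the electrode layout and the true speed of 0.8 are made-up test values, not the paper's data):

```python
import numpy as np

def estimate_planar_ccv(positions, activation_times):
    """Fit AT_i = t0 + s . p_i (planar wavefront, slowness vector s)
    by least squares; conduction speed is 1/||s||."""
    P = np.asarray(positions, dtype=float)
    t = np.asarray(activation_times, dtype=float)
    A = np.hstack([np.ones((len(t), 1)), P])   # columns: 1, x, y
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)
    s = coef[1:]                               # slowness vector
    speed = 1.0 / np.linalg.norm(s)
    direction = s * speed                      # unit propagation direction
    return speed, direction

# Synthetic wavefront moving along +x at 0.8 (distance/time units)
pos = np.array([[0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1]], float)
ats = pos[:, 0] / 0.8
speed, direction = estimate_planar_ccv(pos, ats)
print(speed, direction)
```

On noisy ATs the same fit returns the ML estimate, and the residual scatter feeds directly into the mean-square-error analysis the abstract describes.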
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Sun, Ting; Xing, Fei; You, Zheng
2013-01-01
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527
Error Model and Compensation of Bell-Shaped Vibratory Gyro
Su, Zhong; Liu, Ning; Li, Qing
2015-01-01
A bell-shaped vibratory angular velocity gyro (BVG), inspired by the traditional Chinese bell, is a type of axisymmetric shell resonator gyroscope. This paper focuses on the development of an error model and compensation method for the BVG. A dynamic equation is first established, based on a study of the BVG working mechanism. This equation is then used to evaluate the relationship between the angular rate output signal and the bell-shaped resonator characteristics, analyze the influence of the main error sources and set up an error model for the BVG. The error sources are classified by their error propagation characteristics, and the compensation method is presented based on the error model. Finally, using the error model and compensation method, the BVG is calibrated experimentally, including rough compensation, temperature and bias compensation, scale factor compensation and noise filtering. The experimentally obtained bias instability improves from 20.5°/h to 4.7°/h, the random walk from 2.8°/h^1/2 to 0.7°/h^1/2 and the nonlinearity from 0.2% to 0.03%. Based on the error compensation, it is shown that there is a good linear relationship between the sensing signal and the angular velocity, suggesting that the BVG is a good candidate for the field of low and medium rotational speed measurement. PMID:26393593
Radial orbit error reduction and sea surface topography determination using satellite altimetry
NASA Technical Reports Server (NTRS)
Engelis, Theodossios
1987-01-01
A method is presented in satellite altimetry that attempts to simultaneously determine the geoid and sea surface topography with minimum wavelengths of about 500 km and to reduce the radial orbit error caused by geopotential errors. The modeling of the radial orbit error is made using the linearized Lagrangian perturbation theory. Secular and second order effects are also included. After a rather extensive validation of the linearized equations, alternative expressions of the radial orbit error are derived. Numerical estimates for the radial orbit error and geoid undulation error are computed using the differences of two geopotential models as potential coefficient errors, for a SEASAT orbit. To provide statistical estimates of the radial distances and the geoid, a covariance propagation is made based on the full geopotential covariance. Accuracy estimates for the SEASAT orbits are given which agree quite well with already published results. Observation equations are developed using sea surface heights and crossover discrepancies as observables. A minimum variance solution with prior information provides estimates of parameters representing the sea surface topography and corrections to the gravity field that is used for the orbit generation. The simulation results show that the method can be used to effectively reduce the radial orbit error and recover the sea surface topography.
Comprehensive analysis of a medication dosing error related to CPOE.
Horsky, Jan; Kuperman, Gilad J; Patel, Vimla L
2005-01-01
This case study of a serious medication error demonstrates the necessity of a comprehensive methodology for the analysis of failures in interaction between humans and information systems. The authors used a novel approach to analyze a dosing error related to computer-based ordering of potassium chloride (KCl). The method included a chronological reconstruction of events and their interdependencies from provider order entry usage logs, semistructured interviews with involved clinicians, and interface usability inspection of the ordering system. Information collected from all sources was compared and evaluated to understand how the error evolved and propagated through the system. In this case, the error was the product of faults in interaction among human and system agents that methods limited in scope to their distinct analytical domains would not identify. The authors characterized errors in several converging aspects of the drug ordering process: confusing on-screen laboratory results review, system usability difficulties, user training problems, and suboptimal clinical system safeguards that all contributed to a serious dosing error. The results of the authors' analysis were used to formulate specific recommendations for interface layout and functionality modifications, suggest new user alerts, propose changes to user training, and address error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.
Fort, Joaquim
2011-05-01
It is shown that Lotka-Volterra interaction terms are not appropriate to describe vertical cultural transmission. Appropriate interaction terms are derived and used to compute the effect of vertical cultural transmission on demic front propagation. They are also applied to a specific example, the Neolithic transition in Europe. In this example, it is found that the effect of vertical cultural transmission can be important (about 30%). On the other hand, simple models based on differential equations can lead to large errors (above 50%). Further physical, biophysical, and cross-disciplinary applications are outlined. © 2011 American Physical Society
Wan, Jiamin; Tokunaga, Tetsu K; Kim, Yongman; Wang, Zheming; Lanzirotti, Antonio; Saiz, Eduardo; Serne, R Jeffrey
2008-03-15
The accidental overfilling of waste liquid from tank BX-102 at the Hanford Site in 1951 put about 10 t of U(VI) into the vadose zone. In order to understand the dominant geochemical reactions and transport processes that occurred during the initial infiltration and to help understand the current spatial distribution, we simulated the waste liquid spilling event in laboratory sediment columns using synthesized metal waste solution. We found that, as the plume propagated through sediments, pH decreased greatly (by as much as 4 units) at the moving plume front. Infiltration flow rates strongly affect U behavior. Slower flow rates resulted in higher sediment-associated U concentrations, and higher flow rates (> or =5 cm/day) permitted practically unretarded U transport. Therefore, given the very high Ksat of most of the Hanford formation, the low-permeability zones within the sediment could have been most important in retaining high concentrations of U during the initial release into the vadose zone. Massive amounts of colloids, including U-colloids, formed at the plume fronts. Total U concentrations (aqueous and colloidal) within plume fronts exceeded the source concentration by up to 5-fold. Uranium colloid formation and accumulation at the neutralized plume front could be one mechanism responsible for the highly heterogeneous U distribution observed in the contaminated Hanford vadose zone.
NASA Astrophysics Data System (ADS)
Zvietcovich, Fernando; Rolland, Jannick P.; Grygotis, Emma; Wayson, Sarah; Helguera, Maria; Dalecki, Diane; Parker, Kevin J.
2018-02-01
Determining the mechanical properties of tissue such as elasticity and viscosity is fundamental for better understanding and assessment of pathological and physiological processes. Dynamic optical coherence elastography uses shear/surface wave propagation to estimate frequency-dependent wave speed and Young's modulus. However, for dispersive tissues, the displacement pulse is highly damped and distorted during propagation, diminishing the effectiveness of peak tracking approaches. The majority of methods used to determine mechanical properties assume a rheological model of tissue for the calculation of viscoelastic parameters. Further, plane wave propagation is sometimes assumed which contributes to estimation errors. To overcome these limitations, we invert a general wave propagation model which incorporates (1) the initial force shape of the excitation pulse in the space-time field, (2) wave speed dispersion, (3) wave attenuation caused by the material properties of the sample, (4) wave spreading caused by the outward cylindrical propagation of the wavefronts, and (5) the rheological-independent estimation of the dispersive medium. Experiments were conducted in elastic and viscous tissue-mimicking phantoms by producing a Gaussian push using acoustic radiation force excitation, and measuring the wave propagation using a swept-source frequency domain optical coherence tomography system. Results confirm the effectiveness of the inversion method in estimating viscoelasticity in both the viscous and elastic phantoms when compared to mechanical measurements. Finally, the viscoelastic characterization of collagen hydrogels was conducted. Preliminary results indicate a relationship between collagen concentration and viscoelastic parameters which is important for tissue engineering applications.
Muthu, Satish; Childress, Amy; Brant, Jonathan
2014-08-15
Membrane fouling was assessed from a fundamental standpoint within the context of the Derjaguin-Landau-Verwey-Overbeek (DLVO) model. The DLVO model requires that the properties of the membrane and foulant(s) be quantified. Membrane surface charge (zeta potential) and free energy values are characterized using streaming potential and contact angle measurements, respectively. Comparing theoretical assessments of membrane-colloid interactions between research groups requires that the variability of the measured inputs be established. The impact of such variability in input values on the outcome of interfacial models must be quantified to determine an acceptable variance in inputs. An interlaboratory study was conducted to quantify the variability in streaming potential and contact angle measurements when using standard protocols. The propagation of uncertainty from these errors was evaluated in terms of their impact on the quantitative and qualitative conclusions on extended DLVO (XDLVO) calculated interaction terms. The error introduced into XDLVO calculated values was of the same magnitude as the calculated free energy values at contact and at any given separation distance. For two independent laboratories to draw similar quantitative conclusions regarding membrane-foulant interfacial interactions, the standard error in contact angle values must be ⩽2.5°, while that for the zeta potential values must be ⩽7 mV. Copyright © 2014 Elsevier Inc. All rights reserved.
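Monte Carlo propagation of measurement uncertainty of the kind used in this study can be sketched generically (the surrogate model below is purely illustrative, not an actual XDLVO expression; the input means and standard errors echo the tolerances quoted in the abstract):

```python
import math
import random

def monte_carlo_propagate(f, means, sds, n=20000, seed=1):
    """Propagate independent Gaussian input errors through an arbitrary
    model f by Monte Carlo sampling; returns the mean and std of f."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n):
        x = [rng.gauss(m, s) for m, s in zip(means, sds)]
        vals.append(f(*x))
    mu = sum(vals) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in vals) / (n - 1))
    return mu, sd

# Toy surrogate for an interfacial-energy term depending on a contact
# angle (deg) and a zeta potential (mV); purely illustrative.
def surrogate(theta_deg, zeta_mV):
    return math.cos(math.radians(theta_deg)) * (1.0 + 0.01 * zeta_mV)

# inputs: theta = 60 +/- 2.5 deg, zeta = -20 +/- 7 mV
mu, sd = monte_carlo_propagate(surrogate, [60.0, -20.0], [2.5, 7.0])
print(mu, sd)
```

Comparing the output spread against the magnitude of the quantity itself is exactly the check the authors use to decide whether two laboratories can draw the same conclusions.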
NASA Astrophysics Data System (ADS)
Bogovalov, S. V.; Kislov, V. A.; Tronin, I. V.
2016-09-01
The impact of a pulsed braking force on the axial gas circulation and gas content in centrifuges for uranium isotope separation was investigated by numerical simulation. Pulsed braking of the rotating gas by the momentum source results in the generation of waves which propagate along the rotor of the centrifuge. In the model under consideration, the waves almost double the axial circulation flux in the working chamber compared with the case of a steady-state braking force of the same average power. The flux through the hole in the bottom baffle exceeds the flux in the stationary case by 15% for the same pressure and temperature in the model. We argue that the waves reduce the pressure in the gas centrifuge by the same 15%.
Application of neural nets in structural optimization
NASA Technical Reports Server (NTRS)
Berke, Laszlo; Hajela, Prabhat
1993-01-01
The biological motivation for Artificial Neural Net developments is briefly discussed, and the most popular paradigm, the feedforward supervised learning net with the error back-propagation training algorithm, is introduced. Possible approaches for utilization in structural optimization are illustrated through simple examples. Other currently ongoing developments for application in structural mechanics are also mentioned.
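The feedforward supervised-learning net with error back-propagation mentioned above can be demonstrated on the classic XOR problem (a minimal NumPy sketch; the layer sizes, learning rate, and iteration count are arbitrary choices, not values from the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(epochs=10000, lr=1.0, seed=0):
    """Minimal 2-8-1 sigmoid net trained by error back-propagation on
    XOR; returns the final mean-squared error and the predictions."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)            # forward pass, hidden layer
        out = sigmoid(h @ W2 + b2)          # forward pass, output layer
        err = out - y                       # output error
        d2 = err * out * (1 - out)          # backprop through output sigmoid
        d1 = (d2 @ W2.T) * h * (1 - h)      # backprop through hidden layer
        W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(axis=0)
        W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(axis=0)
    return float(np.mean((out - y) ** 2)), out

mse, preds = train_xor()
print(mse)
```

The same training loop, with the structural responses as targets, is the device the paper applies to approximate expensive structural analyses inside an optimization loop.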
Dual Accelerometer Usage Strategy for Onboard Space Navigation
NASA Technical Reports Server (NTRS)
Zanetti, Renato; D'Souza, Chris
2012-01-01
This work introduces a dual accelerometer usage strategy for onboard space navigation. In the proposed algorithm the accelerometer is used to propagate the state when its value exceeds a threshold and it is used to estimate its errors otherwise. Numerical examples and comparison to other accelerometer usage schemes are presented to validate the proposed approach.
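The threshold-based dual usage strategy can be sketched in scalar form (a hedged toy: the threshold, the feedback gain, and the simple first-order bias estimator are illustrative assumptions, not the paper's filter design):

```python
def process_accel_sample(state_vel, accel_meas, bias_est, dt,
                         threshold=0.01, gain=0.05):
    """Dual-use accelerometer sketch: above the threshold, the
    bias-corrected measurement propagates the velocity state; below it,
    the vehicle is assumed unaccelerated, so the reading is fed back to
    update the accelerometer bias estimate instead."""
    corrected = accel_meas - bias_est
    if abs(corrected) > threshold:
        state_vel += corrected * dt        # propagate with accelerometer
    else:
        bias_est += gain * corrected       # estimate accelerometer error
    return state_vel, bias_est

v, b = 0.0, 0.0
for meas in [0.002] * 50:                  # quiescent phase: learn the bias
    v, b = process_accel_sample(v, meas, b, 0.1)
print(v, b)
```

During the quiescent phase the velocity state is untouched while the bias estimate converges toward the small constant reading, mirroring the "estimate its errors otherwise" branch of the proposed algorithm.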
Fitting program for linear regressions according to Mahon (1996)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trappitsch, Reto G.
2018-01-09
This program takes the user's input data and fits a linear regression to it using the prescription presented by Mahon (1996). Compared to the commonly used York fit, this method has the correct prescription for measurement error propagation. This software should facilitate the proper fitting of measurements with a simple interface.
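The simplest case of such a fit, weighted least squares with uncorrelated y-errors and analytic propagation of the measurement errors into the slope and intercept uncertainties, can be sketched as follows (Mahon 1996 generalizes this to correlated errors in both coordinates; the data below are made up):

```python
import numpy as np

def weighted_linfit(x, y, sigma_y):
    """Weighted least-squares line fit with y-errors only, including
    analytic propagation of the measurement errors into the slope and
    intercept uncertainties. (A sketch of the simplest case only.)"""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = 1.0 / np.asarray(sigma_y, float) ** 2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    D = S * Sxx - Sx ** 2
    slope = (S * Sxy - Sx * Sy) / D
    intercept = (Sxx * Sy - Sx * Sxy) / D
    slope_err = np.sqrt(S / D)             # propagated parameter errors
    intercept_err = np.sqrt(Sxx / D)
    return slope, intercept, slope_err, intercept_err

x = [0, 1, 2, 3]
y = [0.1, 2.0, 3.9, 6.1]
m, b, dm, db = weighted_linfit(x, y, [0.1] * 4)
print(m, b, dm, db)
```

Errors in both coordinates, and their correlations, modify both the weights and the propagated uncertainties, which is where the Mahon prescription departs from this elementary case.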
The ability to effectively use remotely sensed data for environmental spatial analysis is dependent on understanding the underlying procedures and associated variances attributed to the data processing and image analysis technique. Equally important, also, is understanding the er...
USDA-ARS?s Scientific Manuscript database
Spatially-resolved spectroscopy provides a means for measuring the optical properties of biological tissues, based on analytical solutions to diffusion approximation for semi-infinite media under the normal illumination of infinitely small size light beam. The method is, however, prone to error in m...
NASA Astrophysics Data System (ADS)
Oyler, Benjamin L.; Khan, Mohd M.; Smith, Donald F.; Harberts, Erin M.; Kilgour, David P. A.; Ernst, Robert K.; Cross, Alan S.; Goodlett, David R.
2018-04-01
In the preceding article "Top Down Tandem Mass Spectrometric Analysis of a Chemically Modified Rough-Type Lipopolysaccharide Vaccine Candidate" by Oyler et al., an error in the J5 E. coli LPS chemical structure (Figs. 2 and 4) was introduced and propagated into the final revision.
Yousefi, Masoud; Golmohammady, Shole; Mashal, Ahmad; Kashani, Fatemeh Dabbagh
2015-11-01
In this paper, on the basis of the extended Huygens-Fresnel principle, a semianalytical expression is derived for the on-axis scintillation index of a partially coherent flat-topped (PCFT) laser beam propagating through weak to moderate oceanic turbulence; consequently, by using the log-normal intensity probability density function, the bit error rate (BER) is evaluated. The effects of source factors (such as wavelength, order of flatness, and beam width) and turbulent ocean parameters (such as Kolmogorov microscale, relative strengths of temperature and salinity fluctuations, rate of dissipation of the mean squared temperature, and rate of dissipation of the turbulent kinetic energy per unit mass of fluid) on the propagation behavior of the scintillation index, and, hence, on the BER, are studied in detail. Results indicate that, in comparison with a Gaussian beam, a PCFT laser beam with a higher order of flatness is found to have lower scintillations. In addition, the scintillation index and BER are most affected when salinity fluctuations in the ocean dominate temperature fluctuations.
Kewei, E; Zhang, Chen; Li, Mengyang; Xiong, Zhao; Li, Dahai
2015-08-10
Based on Legendre polynomial expansions and their properties, this article proposes a new approach to reconstruct the distorted wavefront under test of a laser beam over a square area from the phase difference data obtained by a radial shearing interferometry (RSI) system. Simulation and experimental results verify the reliability of the proposed method. The formula for the error propagation coefficients is deduced for the case in which the phase difference data of the overlapping area contain random noise. The matrix T, which can be used to evaluate the impact of high-order Legendre polynomial terms on the outcomes of the low-order terms due to mode aliasing, is proposed, and the magnitude of the impact can be estimated by calculating the Frobenius norm of T. In addition, the relationships between shear ratio, sampling points, number of polynomial terms and noise propagation coefficients, and between shear ratio, sampling points and the norm of the T matrix, are both analyzed. These results can provide an optimization design approach for a radial shearing interferometry system, with theoretical reference and instruction.
Vemić, Ana; Rakić, Tijana; Malenović, Anđelija; Medenica, Mirjana
2015-01-01
The aim of this paper is to present the development of a liquid chromatographic method in which chaotropic salts are used as mobile phase additives, following quality-by-design (QbD) principles. The effect of critical process parameters (column chemistry, salt nature and concentration, acetonitrile content and column temperature) on the critical quality attributes (retention of the first and last eluting peak and separation of the critical peak pairs) was studied applying the design of experiments-design space methodology (DoE-DS). A D-optimal design is chosen in order to simultaneously examine both categorical and numerical factors in a minimal number of experiments. Two ways of achieving quality assurance were performed and compared. Namely, the uncertainty originating from the models was assessed by Monte Carlo simulations, propagating the error equal to the variance of the model residuals and propagating the error originating from the calculation of the model coefficients. The baseline separation of pramipexole and its five impurities is achieved, fulfilling all the required criteria, while the method validation proved its reliability. Copyright © 2014 Elsevier B.V. All rights reserved.
Stable lattice Boltzmann model for Maxwell equations in media
NASA Astrophysics Data System (ADS)
Hauser, A.; Verhey, J. L.
2017-12-01
The present work shows a method for stable simulations via the lattice Boltzmann (LB) model for electromagnetic (EM) waves transiting homogeneous media. LB models for such media were already presented in the literature, but they suffer from numerical instability when the media transitions are sharp. We use one of these models in the limit of pure vacuum derived from Liu and Yan [Appl. Math. Model. 38, 1710 (2014), 10.1016/j.apm.2013.09.009] and apply an extension that treats the effects of polarization and magnetization separately. We show simulations of simple examples in which EM waves travel into media to quantify error scaling, stability, accuracy, and time scaling. For conductive media, we use Strang splitting and check the simulation accuracy using the example of the skin effect. As for pure EM propagation, the error for the static limits, which are constructed with a current density added in a first-order scheme, can be less than 1%. The presented method is an easily implemented alternative for stabilizing simulations of EM waves propagating through media with spatially complex structure and arbitrarily sharp transitions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flechsig, U.; Follath, R.; Reiche, S.
PHASE is a software tool for physical optics simulation based on the stationary phase approximation method. The code has been under continuous development for about 20 years and has been used, for instance, for fundamental studies and ray tracing of various beamlines at the Swiss Light Source. Along with the planning for SwissFEL, a new hard X-ray free-electron laser under construction, new features have been added to permit practical performance predictions including diffraction effects, which emerge with the fully coherent source. We present the application of the package on the example of the ARAMIS 1 beamline at SwissFEL. The X-ray pulse calculated with GENESIS and given as an electrical field distribution has been propagated through the beamline to the sample position. We demonstrate the new features of PHASE, such as the treatment of measured figure errors, apertures and coatings of the mirrors, and the application of Fourier optics propagators for free-space propagation.
A study of photon propagation in free-space based on hybrid radiosity-radiance theorem.
Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Liang, Jimin; Wang, Lin; Yang, Da'an; Garofalakis, Anikitos; Ripoll, Jorge; Tian, Jie
2009-08-31
Noncontact optical imaging has attracted increasing attention in recent years due to its significant advantages on detection sensitivity, spatial resolution, image quality and system simplicity compared with contact measurement. However, photon transport simulation in free-space is still an extremely challenging topic for the complexity of the optical system. For this purpose, this paper proposes an analytical model for photon propagation in free-space based on hybrid radiosity-radiance theorem (HRRT). It combines Lambert's cosine law and the radiance theorem to handle the influence of the complicated lens and to simplify the photon transport process in the optical system. The performance of the proposed model is evaluated and validated with numerical simulations and physical experiments. Qualitative comparison results of flux distribution at the detector are presented. In particular, error analysis demonstrates the feasibility and potential of the proposed model for simulating photon propagation in free-space.
A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems
Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua
2013-01-01
A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach could be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor, and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to the conventional methods, this proposed method eliminates the need for a robot-based-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration on the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal that there is a significant improvement of the measuring accuracy of the robotic visual inspection system. PMID:24300597
Altimeter error sources at the 10-cm performance level
NASA Technical Reports Server (NTRS)
Martin, C. F.
1977-01-01
Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing, and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibrations are found to be feasible only through the use of altimeter passes that are very high elevation for a tracking station which tracks very close to the time of altimeter track, such as a high elevation pass across the island of Bermuda. By far the largest error source, based on the current state-of-the-art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.
Managing Errors to Reduce Accidents in High Consequence Networked Information Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganter, J.H.
1999-02-01
Computers have always helped to amplify and propagate errors made by people. The emergence of Networked Information Systems (NISs), which allow people and systems to quickly interact worldwide, has made understanding and minimizing human error more critical. This paper applies concepts from system safety to analyze how hazards (from hackers to power disruptions) penetrate NIS defenses (e.g., firewalls and operating systems) to cause accidents. Such events usually result from both active, easily identified failures and more subtle latent conditions that have resided in the system for long periods. Both active failures and latent conditions result from human errors. We classify these into several types (slips, lapses, mistakes, etc.) and provide NIS examples of how they occur. Next we examine error minimization throughout the NIS lifecycle, from design through operation to reengineering. At each stage, steps can be taken to minimize the occurrence and effects of human errors. These include defensive design philosophies, architectural patterns to guide developers, and collaborative design that incorporates operational experiences and surprises into design efforts. We conclude by looking at three aspects of NISs that will cause continuing challenges in error and accident management: immaturity of the industry, limited risk perception, and resource tradeoffs.
NASA Astrophysics Data System (ADS)
Gautam, Ghaneshwar; Surmick, David M.; Parigger, Christian G.
2015-07-01
In this letter, we present a brief comment regarding the recently published paper by Ivković et al., J Quant Spectrosc Radiat Transf 2015;154:1-8. Reference is made to previous experimental results to indicate that self-absorption must have occurred; however, when error propagation is carefully considered, both the widths and the peak separation predict electron densities within the error margins. The diagnosis method and the presented details on the use of the hydrogen beta peak separation are nevertheless viewed as a welcome contribution to studies of laser-induced plasma.
Differential phase measurements of D-region partial reflections
NASA Technical Reports Server (NTRS)
Wiersma, D. J.; Sechrist, C. F., Jr.
1972-01-01
Differential phase partial reflection measurements were used to deduce D region electron density profiles. The phase difference was measured by taking sums and differences of amplitudes received on an array of crossed dipoles. The reflection model used was derived from Fresnel reflection theory. Seven profiles obtained over the period from 13 October 1971 to 5 November 1971 are presented, along with the results from simultaneous measurements of differential absorption. Some possible sources of error and error propagation are discussed. A collision frequency profile was deduced from the electron concentration calculated from differential phase and differential absorption.
Vector space methods of photometric analysis - Applications to O stars and interstellar reddening
NASA Technical Reports Server (NTRS)
Massa, D.; Lillie, C. F.
1978-01-01
A multivariate vector-space formulation of photometry is developed which accounts for error propagation. An analysis of uvby and H-beta photometry of O stars is presented, with attention given to observational errors, reddening, general uvby photometry, early stars, and models of O stars. The number of observable parameters in O-star continua is investigated, the way these quantities compare with model-atmosphere predictions is considered, and an interstellar reddening law is derived. It is suggested that photospheric expansion affects the formation of the continuum in at least some O stars.
Quantum Graphical Models and Belief Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leifer, M.S.; Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo Ont., N2L 2Y5; Poulin, D.
Belief Propagation algorithms acting on Graphical Models of classical probability distributions, such as Markov Networks, Factor Graphs and Bayesian Networks, are amongst the most powerful known methods for deriving probabilistic inferences amongst large numbers of random variables. This paper presents a generalization of these concepts and methods to the quantum case, based on the idea that quantum theory can be thought of as a noncommutative, operator-valued, generalization of classical probability theory. Some novel characterizations of quantum conditional independence are derived, and definitions of Quantum n-Bifactor Networks, Markov Networks, Factor Graphs and Bayesian Networks are proposed. The structure of Quantum Markov Networks is investigated and some partial characterization results are obtained, along the lines of the Hammersley-Clifford theorem. A Quantum Belief Propagation algorithm is presented and is shown to converge on 1-Bifactor Networks and Markov Networks when the underlying graph is a tree. The use of Quantum Belief Propagation as a heuristic algorithm in cases where it is not known to converge is discussed. Applications to decoding quantum error correcting codes and to the simulation of many-body quantum systems are described.
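As background for the quantum generalization described above, classical belief propagation computes exact marginals on tree-structured graphical models by message passing. A minimal illustrative sketch (random pairwise factors on a three-node binary chain, checked against brute-force enumeration; the factors and chain length are assumptions, not from the paper):

```python
import numpy as np

# Pairwise factors psi[i] couple variable i and i+1 on a 3-node chain.
# Each variable is binary; messages are 2-vectors.
rng = np.random.default_rng(0)
psi = [rng.random((2, 2)) + 0.1 for _ in range(2)]

# Forward messages m_f[i]: message arriving at node i from the left.
m_f = [np.ones(2)]
for p in psi:
    m = m_f[-1] @ p          # sum over the left variable
    m_f.append(m / m.sum())  # normalize for numerical stability

# Backward messages m_b[i]: message arriving at node i from the right.
m_b = [np.ones(2)]
for p in reversed(psi):
    m = p @ m_b[-1]          # sum over the right variable
    m_b.append(m / m.sum())
m_b = m_b[::-1]

# Belief at each node = normalized product of incoming messages.
beliefs = [f * b / (f * b).sum() for f, b in zip(m_f, m_b)]

# Brute-force check: enumerate all 2^3 joint configurations.
joint = np.einsum('ab,bc->abc', psi[0], psi[1])
marg0 = joint.sum(axis=(1, 2))
marg0 = marg0 / marg0.sum()
print(np.allclose(beliefs[0], marg0))  # True on a tree/chain
```

On a tree (of which a chain is the simplest case) the beliefs equal the exact marginals; on loopy graphs the same updates become the heuristic whose quantum analogue the paper discusses.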
Multi-exemplar affinity propagation.
Wang, Chang-Dong; Lai, Jian-Huang; Suen, Ching Y; Zhu, Jun-Yong
2013-09-01
The affinity propagation (AP) clustering algorithm has received much attention in the past few years. AP is appealing because it is efficient, insensitive to initialization, and produces clusters at a lower error rate than other exemplar-based methods. However, its single-exemplar model becomes inadequate for modeling multiple subclasses in situations such as scene analysis and character recognition. To remedy this deficiency, we have extended the single-exemplar model to a multi-exemplar one, creating the new multi-exemplar affinity propagation (MEAP) algorithm. This model automatically determines the number of exemplars in each cluster associated with a superexemplar to approximate the subclasses in the category. Solving the model is NP-hard, and we tackle it with max-sum belief propagation to produce neighborhood-maximum clusters, with no need to specify beforehand the number of clusters, multi-exemplars, or superexemplars. Also, by exploiting sparsity in the data, we substantially reduce computational time and storage. Experimental studies have shown MEAP's significant improvements over other algorithms on unsupervised image categorization and the clustering of handwritten digits.
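The single-exemplar AP algorithm that MEAP extends can be sketched with the standard responsibility/availability message updates. The following is an illustrative NumPy implementation, not the paper's code; the damping value, iteration count, median-similarity preference, and toy data are all assumptions:

```python
import numpy as np

def affinity_propagation(S, damping=0.7, iters=200):
    """Classic single-exemplar AP via responsibility/availability messages."""
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(iters):
        # Responsibilities: r(i,k) = s(i,k) - max_{k'!=k} (a(i,k') + s(i,k')).
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # Availabilities: evidence accumulated in column k for k as exemplar.
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = Rp.sum(axis=0)[None, :] - Rp
        dA = Anew.diagonal().copy()
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, dA)
        A = damping * A + (1 - damping) * Anew
    return np.where(np.diag(A + R) > 0)[0]  # exemplar criterion

# Two well-separated 1-D clusters; similarity = negative squared distance.
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
S = -(x[:, None] - x[None, :]) ** 2
S[np.diag_indices(6)] = np.median(S)  # preference: median similarity
ex = affinity_propagation(S)
print(len(ex))  # one exemplar per cluster is expected here
```

The number of exemplars is not specified in advance; it emerges from the preference values on the diagonal of the similarity matrix, which is the property MEAP generalizes with its superexemplar layer.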
Morosi, J; Berti, N; Akrout, A; Picozzi, A; Guasoni, M; Fatome, J
2018-01-22
In this manuscript, we experimentally and numerically investigate the chaotic dynamics of the state of polarization in a nonlinear optical fiber due to the cross-interaction between an incident signal and its intense backward replica, generated at the fiber end through an amplified reflective delayed loop. Thanks to the cross-polarization interaction between the two delayed counter-propagating waves, the output polarization exhibits fast temporal chaotic dynamics, which enables a powerful scrambling process with moving speeds up to 600 krad/s. The performance of this all-optical scrambler was then evaluated on a 10-Gbit/s on/off-keying telecom signal, achieving error-free transmission. We also describe how these temporal chaotic polarization fluctuations can be exploited as an all-optical random number generator. To this aim, a billion-bit sequence was experimentally generated and successfully validated against the Dieharder statistical test suite. Our experimental analyses are supported by numerical simulations based on the resolution of the counter-propagating coupled nonlinear propagation equations, which confirm the observed behaviors.
Information recovery in propagation-based imaging with decoherence effects
NASA Astrophysics Data System (ADS)
Froese, Heinrich; Lötgering, Lars; Wilhein, Thomas
2017-05-01
During the past decades the optical imaging community witnessed a rapid emergence of novel imaging modalities such as coherent diffraction imaging (CDI), propagation-based imaging and ptychography. These methods have been demonstrated to recover complex-valued scalar wave fields from redundant data without the need for refractive or diffractive optical elements. This renders these techniques suitable for imaging experiments with EUV and x-ray radiation, where the use of lenses is complicated by fabrication, photon efficiency and cost. However, decoherence effects can be detrimental to the reconstruction quality of the numerical algorithms involved. Here we demonstrate propagation-based optical phase retrieval from multiple near-field intensities in the presence of decoherence effects such as partially coherent illumination, detector point spread, binning and position uncertainties of the detector. Methods for overcoming these systematic experimental errors, based on decomposing the data into mutually incoherent modes, are proposed and numerically tested. We believe that the results presented here open up novel algorithmic methods to accelerate detector readout rates and enable subpixel resolution in propagation-based phase retrieval. Furthermore, the techniques extend straightforwardly to methods such as CDI, ptychography and holography.
NASA Astrophysics Data System (ADS)
Lugaz, N.; Kintner, P.
2013-07-01
The Fixed-Φ (FΦ) and Harmonic Mean (HM) fitting methods are two methods to determine the "average" direction and velocity of coronal mass ejections (CMEs) from time-elongation tracks produced by Heliospheric Imagers (HIs), such as the HIs onboard the STEREO spacecraft. Both methods assume a constant velocity in their descriptions of the time-elongation profiles of CMEs, which are used to fit the observed time-elongation data. Here, we analyze the effect of aerodynamic drag on CMEs propagating through interplanetary space, and how this drag affects the results of the FΦ and HM fitting methods. A simple drag model is used to analytically construct time-elongation profiles which are then fitted with the two methods. It is found that higher angles and velocities give rise to greater errors in both methods, reaching errors in the direction of propagation of up to 15∘ and 30∘ for the FΦ and HM fitting methods, respectively. This is due to the physical accelerations of the CMEs being interpreted as geometrical accelerations by the fitting methods. Because of the geometrical definition of the HM fitting method, it is more affected by the acceleration than the FΦ fitting method. Overall, we find that both techniques overestimate the initial (and final) velocity and direction for fast CMEs propagating beyond 90∘ from the Sun-spacecraft line, meaning that arrival times at 1 AU would be predicted early (by up to 12 hours). We also find that the direction and arrival time of a wide and decelerating CME can be better reproduced by the FΦ method due to the cancellation of two errors: neglecting the CME width and neglecting the CME deceleration. Overall, the inaccuracies of the two fitting methods are expected to play an important role in the prediction of CME hit and arrival times as we head towards solar maximum and the STEREO spacecraft move farther behind the Sun.
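The Fixed-Φ geometry can be reproduced with a short forward model: a point-like CME apex at heliocentric distance r(t) moving at angle φ from the observer-Sun line is seen at elongation ε(t) = arctan[r(t) sin φ / (d_obs - r(t) cos φ)]. A sketch that generates a constant-speed profile and recovers (φ, v) by brute-force grid search (the cadence, initial radius, and grid ranges are illustrative assumptions, not the paper's setup):

```python
import numpy as np

AU = 1.496e8  # km

def elongation(t, phi, v, r0=20 * 6.96e5, d_obs=AU):
    """Fixed-Phi time-elongation profile for a CME apex at r(t) = r0 + v*t
    propagating at angle phi (rad) from the observer-Sun line."""
    r = r0 + v * t
    return np.arctan2(r * np.sin(phi), d_obs - r * np.cos(phi))

# Synthetic constant-speed CME: phi = 60 deg, v = 600 km/s, hourly data.
t = np.arange(0, 40) * 3600.0
eps_obs = elongation(t, np.deg2rad(60), 600.0)

# Brute-force Fixed-Phi fit over a (phi, v) grid.
phis = np.deg2rad(np.arange(20, 120))
vs = np.arange(300.0, 1000.0, 5.0)
best = min(((np.sum((elongation(t, p, v) - eps_obs) ** 2), p, v)
            for p in phis for v in vs), key=lambda x: x[0])
print(np.rad2deg(best[1]), best[2])  # recovers phi ~ 60 deg, v ~ 600 km/s
```

Replacing the constant v above with a drag-decelerated speed profile and refitting the constant-velocity model reproduces the kind of systematic direction and velocity bias the abstract quantifies.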
NASA Technical Reports Server (NTRS)
Carson, William; Lindemuth, Kathleen; Mich, John; White, K. Preston; Parker, Peter A.
2009-01-01
Probabilistic engineering design enhances safety and reduces costs by incorporating risk assessment directly into the design process. In this paper, we assess the format of the quantitative metrics for the Ares I rocket, the vehicle that will replace the Space Shuttle. Specifically, we address the metrics for in-flight measurement error in the vector position of the motor nozzle, dictated by limits on guidance, navigation, and control systems. Analyses include the propagation of error from measured to derived parameters, the time series of dwell points for the duty cycle during static tests, and commanded versus achieved yaw angle during tests. Based on these analyses, we recommend a probabilistic template for specifying the maximum error in angular displacement and radial offset for the nozzle-position vector. Criteria for evaluating individual tests and risky decisions are also developed.
Eliminating time dispersion from seismic wave modeling
NASA Astrophysics Data System (ADS)
Koene, Erik F. M.; Robertsson, Johan O. A.; Broggini, Filippo; Andersson, Fredrik
2018-04-01
We derive an expression for the error introduced by the second-order accurate temporal finite-difference (FD) operator, as present in the FD, pseudospectral and spectral element methods for seismic wave modeling applied to time-invariant media. The `time-dispersion' error speeds up the signal as a function of frequency and time step only. Time dispersion is thus independent of the propagation path, medium or spatial modeling error. We derive two transforms to either add or remove time dispersion from synthetic seismograms after a simulation. The transforms are compared to previous related work and demonstrated on wave modeling in acoustic as well as elastic media. In addition, an application to imaging is shown. The transforms enable accurate computation of synthetic seismograms at reduced cost, benefiting modeling applications in both exploration and global seismology.
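The frequency dependence of this time-dispersion error has a closed form: substituting a plane wave into the second-order (leapfrog) time stencil gives a numerical angular frequency ω̃ = (2/Δt) arcsin(ωΔt/2) ≥ ω, i.e., the signal speeds up as a function of ωΔt only, independently of path or medium. A sketch of this relation (the time step and frequency range are illustrative, not from the paper):

```python
import numpy as np

def numerical_omega(omega, dt):
    """Angular frequency actually propagated by the 2nd-order time FD
    operator: omega_num = (2/dt) * arcsin(omega*dt/2)."""
    return 2.0 / dt * np.arcsin(omega * dt / 2.0)

dt = 1e-3                        # 1 ms time step
f = np.linspace(1.0, 100.0, 5)   # Hz
w = 2.0 * np.pi * f
ratio = numerical_omega(w, dt) / w  # speed-up factor, grows with omega*dt
for fi, ri in zip(f, ratio):
    print(f"{fi:6.1f} Hz  speed-up factor {ri:.6f}")
```

Because ω̃ depends only on ωΔt, the error can be removed after the simulation by a frequency-domain resampling of the seismogram, which is the idea behind the transforms the abstract describes.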
[Application of wavelet neural networks model to forecast incidence of syphilis].
Zhou, Xian-Feng; Feng, Zi-Jian; Yang, Wei-Zhong; Li, Xiao-Song
2011-07-01
To apply a wavelet neural network (WNN) model to forecast the incidence of syphilis. A back-propagation neural network (BPNN) and a WNN were developed based on the monthly incidence of syphilis in Sichuan province from 2004 to 2008. Forecast accuracy was compared between the two models. In the training approximation, the mean absolute error (MAE), root mean square error (RMSE) and mean absolute percentage error (MAPE) were 0.0719, 0.0862 and 11.52% respectively for the WNN, and 0.0892, 0.1183 and 14.87% respectively for the BPNN. The three indexes for model generalization were 0.0497, 0.0513 and 4.60% for the WNN, and 0.0816, 0.1119 and 7.25% for the BPNN. The WNN is the better model for short-term forecasting of syphilis incidence.
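The three accuracy indexes used in the comparison are standard forecast-error metrics. A sketch of their computation (the incidence values shown are hypothetical, not the paper's data):

```python
import numpy as np

def forecast_errors(y_true, y_pred):
    """MAE, RMSE and MAPE, the indexes used to compare the WNN and BPNN."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    e = y_pred - y_true
    mae = np.mean(np.abs(e))
    rmse = np.sqrt(np.mean(e ** 2))
    mape = np.mean(np.abs(e / y_true)) * 100.0  # requires y_true != 0
    return mae, rmse, mape

# Illustrative monthly incidence values (made up for demonstration):
obs = [0.62, 0.58, 0.71, 0.66]
fit = [0.60, 0.61, 0.69, 0.64]
mae, rmse, mape = forecast_errors(obs, fit)
print(f"MAE={mae:.4f}  RMSE={rmse:.4f}  MAPE={mape:.2f}%")
```

MAE and RMSE are in the units of the incidence series, while MAPE is scale-free, which is why the paper reports all three for both training approximation and generalization.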
Heterogenic Solid Biofuel Sampling Methodology and Uncertainty Associated with Prompt Analysis
Pazó, Jose A.; Granada, Enrique; Saavedra, Ángeles; Patiño, David; Collazo, Joaquín
2010-01-01
Accurate determination of the properties of biomass is of particular interest in studies on biomass combustion or cofiring. The aim of this paper is to develop a methodology for prompt analysis of heterogeneous solid fuels with an acceptable degree of accuracy. Special care must be taken with the sampling procedure to achieve an acceptable degree of error and low statistical uncertainty. A sampling and error-determination methodology for prompt analysis is presented and validated. Two approaches for the propagation of errors are also given, and some comparisons are made in order to determine which may be better in this context. Results show generally low, acceptable levels of uncertainty, demonstrating that the samples obtained in the process are representative of the overall fuel composition. PMID:20559506
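One common approach to the propagation of errors of the kind compared above is first-order (Gaussian) propagation for independent inputs, where the variance of a derived quantity is the quadrature sum of the input variances weighted by partial derivatives. A minimal sketch with made-up values (not the paper's fuel data):

```python
import math

def propagate(partials, sigmas):
    """First-order (Gaussian) error propagation for independent inputs:
    sigma_f = sqrt( sum_i (df/dx_i)^2 * sigma_i^2 )."""
    return math.sqrt(sum((p * s) ** 2 for p, s in zip(partials, sigmas)))

# Hypothetical example: f = x * y with x = 2.0 +/- 0.1, y = 3.0 +/- 0.2.
x, sx = 2.0, 0.1
y, sy = 3.0, 0.2
sf = propagate([y, x], [sx, sy])       # df/dx = y, df/dy = x
print(f"f = {x * y:.2f} +/- {sf:.3f}")  # f = 6.00 +/- 0.500
```

For correlated inputs (common when subsamples come from the same lot), covariance terms must be added, which is one reason a second propagation approach may perform differently in this context.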
A modified error correction protocol for CCITT signalling system no. 7 on satellite links
NASA Astrophysics Data System (ADS)
Kreuer, Dieter; Quernheim, Ulrich
1991-10-01
Comité Consultatif International Télégraphique et Téléphonique (CCITT) Signalling System No. 7 (SS7) provides a level 2 error correction protocol particularly suited for links with propagation delays higher than 15 ms. Not having been originally designed for satellite links, however, the so-called Preventive Cyclic Retransmission (PCR) method only performs well on satellite channels when traffic is low. A modified level 2 error control protocol, termed the Fix Delay Retransmission (FDR) method, is suggested, which performs better at high loads and thus provides a more efficient use of the limited carrier capacity. Both the PCR and FDR methods are investigated by means of simulation, and results concerning throughput, queueing delay, and system delay are presented. The FDR method exhibits higher capacity and shorter delay than the PCR method.
Luther, Stefan; Singh, Rupinder; Gilmour, Robert F.
2010-01-01
The pattern of action potential propagation during various tachyarrhythmias is strongly suspected to be composed of multiple re-entrant waves, but has never been imaged in detail deep within myocardial tissue. An understanding of the nature and dynamics of these waves is important in the development of appropriate electrical or pharmacological treatments for these pathological conditions. We propose a new imaging modality that uses ultrasound to visualize the patterns of propagation of these waves through the mechanical deformations they induce. The new method would have the distinct advantage of being able to visualize these waves deep within cardiac tissue. In this article, we describe one step that would be necessary in this imaging process—the conversion of these deformations into the action potential induced active stresses that produced them. We demonstrate that, because the active stress induced by an action potential is, to a good approximation, only nonzero along the local fiber direction, the problem in our case is actually overdetermined, allowing us to obtain a complete solution. Use of two- rather than three-dimensional displacement data, noise in these displacements, and/or errors in the measurements of the fiber orientations all produce substantial but acceptable errors in the solution. We conclude that the reconstruction of action potential-induced active stress from the deformation it causes appears possible, and that, therefore, the path is open to the development of the new imaging modality. PMID:20499183
Vafaeian, B; Le, L H; Tran, T N H T; El-Rich, M; El-Bialy, T; Adeeb, S
2016-05-01
The present study investigated the accuracy of micro-scale finite element modeling for simulating broadband ultrasound propagation in water-saturated trabecular bone-mimicking phantoms. To this end, five commercially manufactured aluminum foam samples as trabecular bone-mimicking phantoms were utilized for ultrasonic immersion through-transmission experiments. Based on micro-computed tomography images of the same physical samples, three-dimensional high-resolution computational samples were generated to be implemented in the micro-scale finite element models. The finite element models employed the standard Galerkin finite element method (FEM) in the time domain to simulate the ultrasonic experiments. The numerical simulations did not include energy-dissipative mechanisms of ultrasonic attenuation; however, they expectedly simulated reflection, refraction, scattering, and wave mode conversion. The accuracy of the finite element simulations was evaluated by comparing the simulated ultrasonic attenuation and velocity with the experimental data. The maximum and average relative errors between the experimental and simulated attenuation coefficients in the frequency range of 0.6-1.4 MHz were 17% and 6%, respectively. Moreover, the simulations closely predicted the time-of-flight based velocities and the phase velocities of ultrasound, with maximum deviations of 20 m/s and 11 m/s, respectively. The results of this study strongly suggest that micro-scale finite element modeling can effectively simulate broadband ultrasound propagation in water-saturated trabecular bone-mimicking structures.
NASA Astrophysics Data System (ADS)
Zvietcovich, Fernando; Yao, Jianing; Chu, Ying-Ju; Meemon, Panomsak; Rolland, Jannick P.; Parker, Kevin J.
2016-03-01
Optical Coherence Elastography (OCE) is a widely investigated noninvasive technique for estimating the mechanical properties of tissue. In particular, vibrational OCE methods aim to estimate the shear wave velocity generated by an external stimulus in order to calculate the elastic modulus of tissue. In this study, we compare the performance of five acquisition and processing techniques for estimating the shear wave speed in simulations and experiments using tissue-mimicking phantoms. Accuracy, contrast-to-noise ratio, and resolution are measured for all cases. The first two techniques make use of a single piezoelectric actuator to generate a continuous shear wave propagation (SWP) and a tone-burst propagation (TBP) of 400 Hz over the gelatin phantom. The other techniques make use of an additional actuator located on the opposite side of the region of interest in order to create an interference pattern. When both actuators have the same frequency, a standing wave (SW) pattern is generated. Otherwise, when there is a frequency difference df between the actuators, a crawling wave (CrW) pattern is generated that propagates more slowly than a shear wave, which makes it suitable for detection by 2D cross-sectional OCE imaging. If df is not small compared to the operational frequency, the CrW travels faster and a sampled version of it (SCrW) is acquired by the system. Preliminary results suggest that the TBP (error < 4.1%) and SWP (error < 6%) techniques are more accurate when compared to mechanical measurement test results.
ERM model analysis for adaptation to hydrological model errors
NASA Astrophysics Data System (ADS)
Baymani-Nezhad, M.; Han, D.
2018-05-01
Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models that can lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in the hydrological sciences and has not been entirely solved, owing to a lack of knowledge about the future state of the catchment under study. In flood forecasting, errors propagated from the rainfall-runoff model are the main source of uncertainty in the forecasting model. Hence, to control these errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of error, timing, shape and volume, which are the common errors in hydrological modelling. The new lumped ERM model was selected for this study to evaluate the use of its parameters in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.
Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields
Sapozhnikov, Oleg A.; Tsysar, Sergey A.; Khokhlova, Vera A.; Kreider, Wayne
2015-01-01
Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, the associated uncertainties have not been investigated systematically. Here, errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors. PMID:26428789
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, J.E.; Bourret, S.C.; Krick, M.S.
1996-09-01
Neutron coincidence counting (NCC) is used routinely around the world for nondestructive mass assay of uranium and plutonium in many forms, including waste. Compared with other methods, NCC is generally the most flexible, economic, and rapid. Many applications of NCC would benefit from a reduction in counting time required for a fixed random error. We have developed and tested the first prototype of a dual-gated, shift-register-based electronics unit that offers the potential of decreased measurement time for all passive and active NCC applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Carl; Rahman, Mahmudur; Johnson, Ann
2013-07-01
The U.S. Army Corps of Engineers (USACE) - Philadelphia District is conducting an environmental restoration at the DuPont Chambers Works in Deepwater, New Jersey under the Formerly Utilized Sites Remedial Action Program (FUSRAP). Discrete locations are contaminated with natural uranium, thorium-230 and radium-226. The USACE is proposing a preferred remedial alternative consisting of excavation and offsite disposal to address soil contamination, followed by monitored natural attenuation to address residual groundwater contamination. Methods were developed to quantify the error associated with contaminant volume estimates and use mass balance calculations of the uranium plume to estimate the removal efficiency of the proposed alternative. During the remedial investigation, the USACE collected approximately 500 soil samples at various depths. As the first step of contaminant mass estimation, soil analytical data was segmented into several depth intervals. Second, using contouring software, analytical data for each depth interval was contoured to determine the lateral extent of contamination. Six different contouring algorithms were used to generate alternative interpretations of the lateral extent of the soil contamination. Finally, geographical information system software was used to produce a three-dimensional model in order to present both the lateral and vertical extent of the soil contamination and to estimate the volume of impacted soil for each depth interval. The average soil volume from all six contouring methods was used to determine the estimated volume of impacted soil. This method also allowed an estimate of a standard deviation of the waste volume estimate. It was determined that the margin of error for the method was plus or minus 17% of the waste volume, which is within the acceptable construction contingency for cost estimation. USACE collected approximately 190 groundwater samples from 40 monitor wells.
It is expected that excavation and disposal of contaminated soil will remove the contaminant source zone and significantly reduce contaminant concentrations in groundwater. To test this assumption, a mass balance evaluation was performed to estimate the amount of dissolved uranium that would remain in the groundwater after completion of soil excavation. As part of this evaluation, average groundwater concentrations for the pre-excavation and post-excavation aquifer plume area were calculated to determine the percentage of plume removed during excavation activities. In addition, the volume of the plume removed during excavation dewatering was estimated. The results of the evaluation show that approximately 98% of the aqueous uranium would be removed during the excavation phase. The USACE expects that residual levels of contamination will remain in groundwater after excavation of soil, but at levels well suited for the selection of excavation combined with monitored natural attenuation as a preferred alternative. (authors)
NASA Astrophysics Data System (ADS)
Huang, J.; Zhou, Z.; Gong, Y.; Lundstrom, C.; Huang, F.
2015-12-01
Rock weathering and soil formation in the critical zone are important for the cycling of material from the solid Earth to the surface system. Laterite is a major soil type in South China, forming under a hot, humid climate, and it has a strong effect on the global uranium cycle. Uranium is closely tied to environmental redox conditions because U is stable as U(IV) under anoxic conditions and as U(VI), the soluble uranyl ion (UO₂²⁺), under oxic conditions. In order to understand the behavior of U isotopes during crustal weathering, here we report uranium isotopic compositions of soil and base-rock samples from a laterite profile originated from extreme weathering of basalt in Guangdong, South China. The uranium isotopic data were measured on a Nu Plasma MC-ICP-MS at the University of Illinois at Urbana-Champaign using the double-spike method. The δ238U of BCR-1 is -0.29±0.03‰ (relative to the international standard CRM-112A), corresponding to a 238U/235U ratio of 137.911±0.004. Our result for BCR-1 agrees with previous analyses (e.g., -0.28‰ in Weyer et al. 2008) [1]. U contents of the laterite profile decrease from 1.9 ppm to 0.9 ppm with depth, and peak at 160-170 cm (2.3 ppm), much higher than the U content of the base rocks (~0.5 ppm). In contrast, U/Th of the laterites is lower than that of the base rock (0.27), except at the 160-170 cm peak (0.38), indicating significant U loss during weathering. Notably, the U isotope compositions of the soils show a small variation from -0.38 to -0.28‰, consistent with the base rock within analytical error (0.05‰ to 0.08‰, 2sd). Such small variation can be explained by a "rind effect" (Wang et al., 2015) [2], by which U(IV) is completely oxidized to U(VI) layer by layer during basalt weathering by dissolved oxygen. Therefore, our study indicates that U loss during basalt weathering in a hot, humid climate does not change the U isotope composition of the surface water system. [1] Weyer S. et al. (2008) Natural fractionation of 238U/235U. GCA 72, 345-359. [2] Wang X. et al. (2015) Isotope fractionation during oxidation of tetravalent uranium by dissolved oxygen. GCA 150, 160-170.
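The δ238U notation used above is the per-mil deviation of a sample's 238U/235U ratio from that of a standard. A minimal sketch of the conversion (the ratio values below are illustrative, not the paper's calibration):

```python
def delta_permil(r_sample, r_standard):
    """delta-238U in permil: relative deviation of the sample's 238U/235U
    ratio from the standard's, scaled by 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Illustrative ratios (made up for demonstration):
r_std = 137.88
r_sample = r_std * (1.0 - 0.29e-3)   # a sample 0.29 permil lighter
print(round(delta_permil(r_sample, r_std), 2))  # -0.29
```

Because the per-mil scale is a ratio of ratios, small systematic errors in the standard cancel, which is why double-spike MC-ICP-MS measurements can resolve the ~0.1‰ variations discussed in the abstract.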
TRMM On-Orbit Performance Re-Accessed After Control Change
NASA Technical Reports Server (NTRS)
Bilanow, Steve
2006-01-01
The Tropical Rainfall Measuring Mission (TRMM) spacecraft, a joint mission between the U.S. and Japan, launched onboard an H-II rocket on November 27, 1997 and transitioned in August 2001 from an average operating altitude of 350 kilometers to 402.5 kilometers. Due to problems using the Earth Sensor Assembly (ESA) at the higher altitude, TRMM switched to a backup attitude control mode. Prior to the orbit boost, TRMM controlled pitch and roll to the local vertical using ESA measurements while using gyro data to propagate yaw attitude between yaw updates from the Sun sensors. After the orbit boost, a Kalman filter used 3-axis gyro data with Sun sensor and magnetometer measurements to estimate onboard attitude. While originally intended to meet a degraded attitude accuracy of 0.7 degrees, the new control mode met the original 0.2-degree attitude accuracy requirement after improving onboard ephemeris prediction and adjusting the magnetometer calibration onboard. Independent roll attitude checks using a science instrument, the Precipitation Radar (PR), which was built in Japan, provided a novel insight into the pointing performance. The PR data helped identify the pointing errors after the orbit boost, track the performance improvements, and show subtle effects from ephemeris errors and gyro bias errors. It also helped identify average bias trends throughout the mission. Roll errors tracked by the PR from sample orbits pre-boost and post-boost are shown in Figure 1. Prior to the orbit boost, the largest attitude errors were due to occasional interference in the ESA. These errors were sometimes larger than 0.2 degrees in pitch and roll, but usually less, as estimated from a comprehensive review of the attitude excursions using gyro data. Sudden jumps in the onboard roll show up as spikes in the reported attitude, since the control responds within tens of seconds to null the pointing error. The PR-estimated roll tracks well with an estimate of the roll history propagated using gyro data.
After the orbit boost, the attitude errors shown by the PR roll have a smooth sine-wave type signal because of the way that attitude errors propagate with the use of gyro data. Yaw errors couple at orbit period into roll with a quarter-orbit lag. By tracking the amplitude, phase, and bias of the sinusoidal PR roll error signal, it was shown that the average pitch rotation axis tends to be offset from orbit normal in a direction perpendicular to the Sun direction, as shown in Figure 2 for a 200-day period following the orbit boost. This is a result of the higher accuracy and stability of the Sun sensor measurements relative to the magnetometer measurements used in the Kalman filter. In November 2001 a magnetometer calibration adjustment was uploaded which improved the pointing performance, keeping the roll and yaw amplitudes within about 0.1 degrees. After the boost, onboard ephemeris errors had a direct effect on the pitch pointing, being used to compute the Earth-pointing reference frame. Improvements after the orbit boost have kept the onboard ephemeris errors generally below 20 kilometers. Ephemeris errors have secondary effects on roll and yaw, especially at high beta angle, when pitch effects can couple into roll and yaw. This is illustrated in Figure 3. The onboard roll bias trends as measured by PR data show correlations with the Kalman filter's gyro bias error. This particularly shows up after yaw turns (every 2 to 4 weeks), as shown in Figure 3, when a slight roll bias is observed while the onboard computed gyro biases settle to new values. As for longer-term trends, the PR data show that the roll bias was influenced by Earth horizon radiance effects prior to the boost, changing values at yaw turns, and indicated a long-term drift as shown in Figure 4. After the boost, the bias variations were smaller and showed some possible correlation with solar beta angle, probably due to Sun sensor misalignment effects.
Impact of Orbit Position Errors on Future Satellite Gravity Models
NASA Astrophysics Data System (ADS)
Encarnacao, J.; Ditmar, P.; Klees, R.
2015-12-01
We present the results of a study of the impact of orbit positioning noise (OPN), caused by incomplete knowledge of the Earth's gravity field, on gravity models estimated from satellite gravity data. The OPN is simulated as the difference between two sets of orbits integrated on the basis of different static gravity field models. The OPN is propagated into low-low satellite-to-satellite tracking (ll-SST) data, here computed as averaged inter-satellite accelerations projected onto the Line of Sight (LoS) vector between the two satellites. We consider the cartwheel formation (CF), pendulum formation (PF), and trailing formation (TF), as they produce different dominant orientations of the LoS vector. Given the polar orbits of the formations, the LoS vector is mainly aligned with the North-South direction in the TF, with the East-West direction in the PF (i.e., no along-track offset), and contains a radial component in the CF. An analytical analysis predicts that the CF suffers from a very high sensitivity to the OPN. This is a fundamental characteristic of this formation, which results from the amplification of this noise by diagonal components of the gravity gradient tensor (defined in the local frame) during the propagation into satellite gravity data. In contrast, the OPN in the data from the PF and TF is only scaled by off-diagonal gravity gradient components, which are much smaller than the diagonal tensor components. A numerical analysis shows that the effect of the OPN is similar in the data collected by the TF and the PF. The amplification of the OPN errors for the CF leads to errors in the gravity model that are three orders of magnitude larger than those in the case of the PF. This means that any implementation of the CF will most likely produce data with relatively low quality, since this error dominates the error budget, especially at low frequencies. This is particularly critical for future gravimetric missions that will be equipped with highly accurate ranging sensors.
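The ll-SST observable described above, the inter-satellite acceleration difference projected onto the LoS vector, can be sketched in a few lines (the positions and accelerations below are arbitrary illustrative numbers, not simulated orbits):

```python
import numpy as np

def los_signal(r1, r2, a1, a2):
    """Project the difference of the two satellites' accelerations onto
    the line-of-sight (LoS) unit vector from satellite 1 to satellite 2;
    this is the basic ll-SST observable discussed in the study."""
    e = (r2 - r1) / np.linalg.norm(r2 - r1)  # LoS unit vector
    return float(np.dot(a2 - a1, e))

# Trailing-formation geometry: LoS nearly along-track (here the x axis),
# so only the along-track acceleration difference survives the projection.
r1 = np.array([7.0e6, 0.0, 0.0])
r2 = np.array([7.1e6, 0.0, 0.0])
a1 = np.array([1.0e-6, 2.0e-6, 0.0])
a2 = np.array([3.0e-6, 5.0e-6, 0.0])
s = los_signal(r1, r2, a1, a2)
```

Changing the formation geometry changes the LoS direction, and hence which components of the (noisy) accelerations, and of the gravity gradient tensor acting on position errors, enter the observable.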
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berkel, M. van; Fellow of the Japan Society for the Promotion of Science; FOM Institute DIFFER - Dutch Institute for Fundamental Energy Research, Association EURATOM-FOM, Trilateral Euregio Cluster, PO Box 1207, 3430 BE Nieuwegein
2014-11-15
In this paper, a number of new approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based on semi-infinite slab approximations of the heat equation. The main result is the approximation of χ under the influence of V and τ based on the phase of two harmonics, making the estimate less sensitive to calibration errors. To understand why the slab approximations can estimate χ well in cylindrical geometry, the relationships between heat transport models in slab and cylindrical geometry are studied. In addition, the relationship between amplitude and phase with respect to their derivatives, used to estimate χ, is discussed. The results are presented in terms of the relative error for the different derived approximations for different values of frequency, transport coefficients, and dimensionless radius. The approximations show a significant region in which χ, V, and τ can be estimated well, but also regions in which the error is large. Also, it is shown that some compensation is necessary to estimate V and τ in a cylindrical geometry. On the other hand, errors resulting from the simplified assumptions are also discussed, showing that estimating realistic values for V and τ based on infinite domains will be difficult in practice. This paper is the first part (Part I) of a series of three papers. In Part II and Part III, cylindrical approximations based directly on a semi-infinite cylindrical domain (outward propagating heat pulses) and inward propagating heat pulses in a cylindrical domain, respectively, will be treated.
Optimized Finite-Difference Coefficients for Hydroacoustic Modeling
NASA Astrophysics Data System (ADS)
Preston, L. A.
2014-12-01
Responsible utilization of marine renewable energy sources through the use of current energy converter (CEC) and wave energy converter (WEC) devices requires an understanding of the noise generation and propagation from these systems in the marine environment. Acoustic noise produced by rotating turbines, for example, could adversely affect marine animals and human-related marine activities if not properly understood and mitigated. We are utilizing a 3-D finite-difference acoustic simulation code developed at Sandia that can accurately propagate noise in the complex bathymetry in the near-shore to open ocean environment. As part of our efforts to improve computation efficiency in the large, high-resolution domains required in this project, we investigate the effects of using optimized finite-difference coefficients on the accuracy of the simulations. We compare accuracy and runtime of various finite-difference coefficients optimized via criteria such as maximum numerical phase speed error, maximum numerical group speed error, and L-1 and L-2 norms of weighted numerical group and phase speed errors over a given spectral bandwidth. We find that those coefficients optimized for L-1 and L-2 norms are superior in accuracy to those based on maximal error and can produce runtimes of 10% of the baseline case, which uses Taylor Series finite-difference coefficients at the Courant time step limit. We will present comparisons of the results for the various cases evaluated as well as recommendations for utilization of the cases studied. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
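The phase-speed-error criterion used in the optimization above can be illustrated for a standard antisymmetric central-difference stencil. This sketch evaluates the relative numerical phase-speed error of the Taylor-series coefficients (the baseline case in the abstract), not Sandia's optimized coefficient sets:

```python
import numpy as np

def phase_speed_error(coeffs, kh):
    """Relative numerical phase-speed error of an antisymmetric
    central-difference first-derivative stencil.

    coeffs[j-1] multiplies (u[i+j] - u[i-j]) / h; applying the stencil
    to exp(i*k*x) gives a numerical wavenumber
    k_num = (2/h) * sum_j c_j * sin(j*k*h), so the relative error at
    dimensionless wavenumber kh = k*h is k_num/k - 1."""
    k_num_h = 2.0 * sum(c * np.sin((j + 1) * kh) for j, c in enumerate(coeffs))
    return k_num_h / kh - 1.0

# Standard 4th-order Taylor coefficients for offsets 1 and 2.
taylor4 = [2.0 / 3.0, -1.0 / 12.0]
err = phase_speed_error(taylor4, 0.5)  # error at half a radian per grid step
```

Optimized coefficients trade the formal Taylor order for a smaller weighted error over a target band of kh, which is what the L-1 and L-2 norm criteria in the abstract quantify.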
Novel theory for propagation of tilted Gaussian beam through aligned optical system
NASA Astrophysics Data System (ADS)
Xia, Lei; Gao, Yunguo; Han, Xudong
2017-03-01
A novel theory for tilted beam propagation is established in this paper. By setting the propagation direction of the tilted beam as the new optical axis, we establish a virtual optical system that is aligned with the new optical axis. Within the first order approximation of the tilt and off-axis, the propagation of the tilted beam is studied in the virtual system instead of the actual system. To achieve more accurate optical field distributions of tilted Gaussian beams, a complete diffraction integral for a misaligned optical system is derived by using the matrix theory with angular momentums. The theory demonstrates that a tilted TEM00 Gaussian beam passing through an aligned optical element transforms into a decentered Gaussian beam along the propagation direction. The deviations between the peak intensity axis of the decentered Gaussian beam and the new optical axis have linear relationships with the misalignments in the virtual system. ZEMAX simulation of a tilted beam through a thick lens exposed to air shows that the errors between the simulation results and theoretical calculations of the position deviations are less than 2‰ when the misalignments εx, εy, εx', εy' are in the range of [-0.5, 0.5] mm and [-0.5, 0.5]°.
Sea-air boundary meteorological sensor
NASA Astrophysics Data System (ADS)
Barbosa, Jose G.
2015-05-01
The atmospheric environment can significantly affect radio frequency and optical propagation. In the RF spectrum, refraction and ducting can degrade or enhance communications and radar coverage. Platforms in or beneath refractive boundaries can exploit the benefits or suffer the effects of the atmospheric boundary layers. Evaporative ducts and surface-based ducts are of most concern for ocean surface platforms, and evaporative ducts are almost always present along the sea-air interface. The atmospheric environment also degrades the resolution and visibility of electro-optical systems. The atmosphere is not uniform, and under heterogeneous conditions homogeneous models can accumulate substantial propagation errors over large distances. An accurate and portable atmospheric sensor to profile the vertical index of refraction is needed for mission planning, post analysis, and in-situ performance assessment. The meteorological instrument, used in conjunction with a radio frequency and electro-optical propagation prediction tactical decision aid tool, would give military platforms the ability to make real-time assessments of communication system propagation ranges, radar detection and vulnerability ranges, satellite communications vulnerability, laser range finder performance, and imaging system performance predictions. Raman lidar has been shown to be capable of measuring the atmospheric parameters needed to profile the atmospheric environment. The atmospheric profile could then be used as input to a tactical decision aid tool to make propagation predictions.
31 CFR 540.317 - Uranium feed; natural uranium feed.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 31 Money and Finance: Treasury 3 2011-07-01 2011-07-01 false Uranium feed; natural uranium feed...) AGREEMENT ASSETS CONTROL REGULATIONS General Definitions § 540.317 Uranium feed; natural uranium feed. The term uranium feed or natural uranium feed means natural uranium in the form of UF6 suitable for uranium...
31 CFR 540.317 - Uranium feed; natural uranium feed.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Uranium feed; natural uranium feed...) AGREEMENT ASSETS CONTROL REGULATIONS General Definitions § 540.317 Uranium feed; natural uranium feed. The term uranium feed or natural uranium feed means natural uranium in the form of UF6 suitable for uranium...
Process for continuous production of metallic uranium and uranium alloys
Hayden, H.W. Jr.; Horton, J.A.; Elliott, G.R.B.
1995-06-06
A method is described for forming metallic uranium, or a uranium alloy, from uranium oxide in a manner which substantially eliminates the formation of uranium-containing wastes. A source of uranium dioxide is first provided, for example, by reducing uranium trioxide (UO₃), or any other substantially stable uranium oxide, to form the uranium dioxide (UO₂). This uranium dioxide is then chlorinated to form uranium tetrachloride (UCl₄), and the uranium tetrachloride is then reduced to metallic uranium by reacting the uranium chloride with a metal which will form the chloride of the metal. This last step may be carried out in the presence of another metal capable of forming one or more alloys with metallic uranium to thereby lower the melting point of the reduced uranium product. The metal chloride formed during the uranium tetrachloride reduction step may then be reduced in an electrolysis cell to recover and recycle the metal back to the uranium tetrachloride reduction operation and the chlorine gas back to the uranium dioxide chlorination operation. 4 figs.
Process for continuous production of metallic uranium and uranium alloys
Hayden, Jr., Howard W.; Horton, James A.; Elliott, Guy R. B.
1995-01-01
A method is described for forming metallic uranium, or a uranium alloy, from uranium oxide in a manner which substantially eliminates the formation of uranium-containing wastes. A source of uranium dioxide is first provided, for example, by reducing uranium trioxide (UO₃), or any other substantially stable uranium oxide, to form the uranium dioxide (UO₂). This uranium dioxide is then chlorinated to form uranium tetrachloride (UCl₄), and the uranium tetrachloride is then reduced to metallic uranium by reacting the uranium chloride with a metal which will form the chloride of the metal. This last step may be carried out in the presence of another metal capable of forming one or more alloys with metallic uranium to thereby lower the melting point of the reduced uranium product. The metal chloride formed during the uranium tetrachloride reduction step may then be reduced in an electrolysis cell to recover and recycle the metal back to the uranium tetrachloride reduction operation and the chlorine gas back to the uranium dioxide chlorination operation.
NASA Astrophysics Data System (ADS)
Condon, D.; Noble, S.; McLean, N.; Bowring, S. A.
2009-12-01
We have determined 238U/235U ratios for a suite of commonly used natural (CRM 112a, SRM 950a, HU-1) and synthetic (IRMM 184 and CRM U500) uranium reference materials, in addition to several U-bearing accessory phases (zircon and monazite), by thermal ionisation mass spectrometry (TIMS) using the IRMM 3636 233U-236U double spike to accurately correct for mass fractionation. The 238U/235U values for the natural uranium reference materials differ, by up to 0.1%, from the widely used 'consensus' value (137.88), with all having 238U/235U values less than 137.88. Similarly, initial 238U/235U data from zircon and monazite yield 238U/235U values that are lower than the 'consensus' value. The data obtained from U-bearing minerals are used to assess how the uncertainty in the 238U/235U ratio contributes to the systematic discordance observed between 238U/206Pb and 235U/207Pb dates (Mattinson, 2000; Schoene et al., 2006), which has traditionally been wholly attributed to error in the U decay constants. The 238U/235U determinations made on the synthetic reference materials yield results that are considerably more precise and accurate than the certified values (0.02% vs. 0.1% for CRM U500). The calibration of isotopic tracers used for U-daughter geochronology that are partially based upon these reference materials, and the resultant age determinations, will benefit from the increased accuracy and precision. Mattinson, J.M., 2000. Revising the "gold standard": the uranium decay constants of Jaffey et al., 1971. Eos Trans. AGU, Spring Meet. Suppl., Abstract V61A-02. Schoene, B., Crowley, J.L., Condon, D.C., Schmitz, M.D., Bowring, S.A., 2006. Reassessing the uranium decay constants for geochronology using ID-TIMS U-Pb data. Geochimica et Cosmochimica Acta 70: 426-445.
Validation of the SEPHIS Program for the Modeling of the HM Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kyser, E.A.
The SEPHIS computer program is currently being used to evaluate the effect of all process variables on the criticality safety of the HM 1st Uranium Cycle process in H Canyon. Its use has three main purposes: (1) to provide a better technical basis for those process variables that do not have any realistic effect on the criticality safety of the process; (2) to qualitatively study those conditions previously recognized to affect the nuclear safety of the process, or additional conditions that modeling has indicated may pose a criticality safety issue; and (3) to judge the adequacy of existing or future neutron monitor locations in detecting the initial stages of reflux for specific scenarios. Although SEPHIS generally over-predicts the distribution of uranium to the organic phase, it is a capable simulation tool as long as the user recognizes its biases and takes special care when using the program for scenarios where the prediction bias is non-conservative. The temperature coefficient used by SEPHIS is poor at predicting the effect of temperature on uranium extraction for the 7.5 percent TBP used in the HM process; therefore, SEPHIS should not be used to study temperature-related scenarios. However, it may be used within normal operating temperatures when other process variables are being studied. Care must be given to understanding the prediction bias and its effect on any conclusion for the particular scenario under consideration. Uranium extraction with aluminum nitrate is over-predicted more severely than for nitric acid systems. However, the extraction section of the 1A bank has sufficient excess capability that these errors, while relatively large, still allow SEPHIS to be used to develop reasonable qualitative assessments for reflux scenarios. High losses to the 1AW stream, however, cannot be modeled by SEPHIS.
ERIC Educational Resources Information Center
Biomedical Interdisciplinary Curriculum Project, Berkeley, CA.
This student text presents instructional materials for a unit of mathematics within the Biomedical Interdisciplinary Curriculum Project (BICP), a two-year interdisciplinary precollege curriculum aimed at preparing high school students for entry into college and vocational programs leading to a career in the health field. Lessons concentrate on…
ERIC Educational Resources Information Center
Biomedical Interdisciplinary Curriculum Project, Berkeley, CA.
This instructor's manual presents lesson plans for a unit of mathematics within the Biomedical Interdisciplinary Curriculum Project (BICP), a two-year interdisciplinary precollege curriculum aimed at preparing high school students for entry into college and vocational programs leading to a career in the health field. Lessons concentrate on…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
These are slides from a presentation made by a researcher from Los Alamos National Laboratory. The following topics are covered: sources of error for NDA gamma measurements, precision and accuracy are two important characteristics of measurements, four items processed in a material balance area during the inventory time period, inventory difference and propagation of variance, sum in quadrature, and overview of the ID/POV process.
Upward Flame Propagation and Wire Insulation Flammability: 2006 Round Robin Data Analysis
NASA Technical Reports Server (NTRS)
Hirsch, David B.
2007-01-01
This viewgraph document reviews results from tests of different materials used for wire insulation, covering flame propagation and flammability. The presentation focused on investigating data variability both within and between laboratories. It evaluated between-laboratory consistency through the consistency statistic h, which indicates how one laboratory's cell average compares with the averages from other laboratories; evaluated within-laboratory consistency through the consistency statistic k, an indicator of how one laboratory's within-laboratory variability compares with the combined variability of the other laboratories; and tested extreme results to determine whether they arose by chance or from nonrandom causes (human error, instrument calibration shift, non-adherence to procedures, etc.).
Practical pulse engineering: Gradient ascent without matrix exponentiation
NASA Astrophysics Data System (ADS)
Bhole, Gaurav; Jones, Jonathan A.
2018-06-01
Since 2005, there has been a huge growth in the use of engineered control pulses to perform desired quantum operations in systems such as nuclear magnetic resonance quantum information processors. These approaches, which build on the original gradient ascent pulse engineering algorithm, remain computationally intensive because of the need to calculate matrix exponentials for each time step in the control pulse. In this study, we discuss how the propagators for each time step can be approximated using the Trotter-Suzuki formula, and a further speedup achieved by avoiding unnecessary operations. The resulting procedure can provide substantial speed gain with negligible costs in the propagator error, providing a more practical approach to pulse engineering.
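The core idea above, replacing one matrix exponential of the full Hamiltonian per time step with a product of cheaper factor exponentials, can be sketched for a single-qubit control Hamiltonian (the operators and coefficients below are illustrative, not the paper's test system):

```python
import numpy as np
from scipy.linalg import expm

# Pauli operators; H = a*X + b*Z is an illustrative control Hamiltonian.
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

def trotter_step(a, b, dt):
    """First-order Trotter-Suzuki approximation of exp(-i*(a*X + b*Z)*dt).
    Each factor involves only a single, simple term, so in a pulse
    optimizer it can be built from closed-form rotations instead of a
    dense matrix exponential at every time step."""
    return expm(-1j * a * dt * X) @ expm(-1j * b * dt * Z)

dt = 0.01
exact = expm(-1j * dt * (0.7 * X + 0.3 * Z))
approx = trotter_step(0.7, 0.3, dt)
err = np.linalg.norm(exact - approx, 2)  # splitting error, O(dt^2)
```

Halving the time step should roughly quarter the splitting error, which is why the per-step error stays negligible for the short time steps typical of engineered pulses.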
Kramers-Kronig based quality factor for shear wave propagation in soft tissue
Urban, M W; Greenleaf, J F
2009-01-01
Shear wave propagation techniques have been introduced for measuring the viscoelastic material properties of tissue, but assessing the accuracy of these measurements is difficult for in vivo measurements in tissue. We propose using the Kramers-Kronig relationships to assess the consistency and quality of the measurements of shear wave attenuation and phase velocity. In ex vivo skeletal muscle we measured the wave attenuation at different frequencies, and then applied finite bandwidth Kramers-Kronig equations to predict the phase velocities. We compared these predictions with the measured phase velocities and assessed the mean square error (MSE) as a quality factor. An algorithm was derived for computing a quality factor using the Kramers-Kronig relationships. PMID:19759409
Method for converting uranium oxides to uranium metal
Duerksen, Walter K.
1988-01-01
A process is described for converting scrap and waste uranium oxide to uranium metal. The uranium oxide is sequentially reduced with a suitable reducing agent to a mixture of uranium metal and oxide products. The uranium metal is then converted to uranium hydride, and the uranium hydride-containing mixture is then cooled to a temperature less than -100 °C in an inert liquid which renders the uranium hydride ferromagnetic. The uranium hydride is then magnetically separated from the cooled mixture. The separated uranium hydride is readily converted to uranium metal by heating in an inert atmosphere. This process is environmentally acceptable and eliminates the use of hydrogen fluoride as well as the explosive conditions encountered in the previously employed bomb-reduction processes utilized for converting uranium oxides to uranium metal.
Mathematical models and photogrammetric exploitation of image sensing
NASA Astrophysics Data System (ADS)
Puatanachokchai, Chokchai
Mathematical models of image sensing are generally categorized into physical/geometrical sensor models and replacement sensor models. While the former is determined from the image sensing geometry, the latter is built on knowledge of the physical/geometrical sensor models and uses such models for its implementation. The main thrust of this research is replacement sensor models with three important characteristics: (1) highly accurate ground-to-image functions; (2) rigorous error propagation that is essentially of the same accuracy as the physical model; and (3) adjustability, or the ability to upgrade the replacement sensor model parameters when additional control information becomes available after the replacement sensor model has replaced the physical model. In this research, such replacement sensor models are considered True Replacement Models, or TRMs. TRMs provide a significant advantage of universality, particularly for image exploitation functions. There have been several writings about replacement sensor models, and except for the so-called RSM (the Replacement Sensor Model product described in the Manual of Photogrammetry), almost all of them pay little or no attention to errors and their propagation. This is likely because the few physical sensor parameters are usually replaced by many more parameters, presenting a potential error-estimation difficulty. The third characteristic, adjustability, is perhaps the most demanding; it provides flexibility equivalent to that of triangulation using the physical model. Primary contributions of this thesis include not only "the eigen-approach", a novel means of replacing the original sensor parameter covariance matrices at the time of estimating the TRM, but also the implementation of a hybrid approach that combines the eigen-approach with the added-parameters approach used in the RSM.
Using either the eigen-approach or the hybrid approach, rigorous error propagation can be performed during image exploitation. Further, adjustability can be performed when additional control information becomes available after the TRM has been implemented. The TRM is shown to apply to imagery from sensors having different geometries, including an aerial frame camera, a spaceborne linear array sensor, an airborne pushbroom sensor, and an airborne whiskbroom sensor. TRM results show essentially negligible differences as compared to those from rigorous physical sensor models, both for geopositioning from single and overlapping images. Simulated as well as real image data are used to address all three characteristics of the TRM.
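The rigorous error propagation a TRM must reproduce is, to first order, ordinary linearized covariance propagation through the ground-to-image Jacobian. A generic sketch (the Jacobian below is a numerical placeholder, not a real sensor model):

```python
import numpy as np

def propagate_covariance(jacobian, cov_params):
    """First-order (linearized) error propagation: the covariance of
    the model outputs, given the sensor-parameter covariance, is
    J @ Sigma @ J.T. Matching this against the physical model's
    covariance is the benchmark a replacement model must meet."""
    J = np.asarray(jacobian, float)
    return J @ np.asarray(cov_params, float) @ J.T

# Placeholder 2x2 ground-to-image Jacobian and unit parameter covariance.
J = np.array([[1.0, 0.5],
              [0.0, 2.0]])
cov_img = propagate_covariance(J, np.eye(2))
```

The eigen-approach described above changes how the (large) replacement-parameter covariance is represented, but the propagated output covariance computed this way should remain essentially identical to the physical model's.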
Orbit determination of highly elliptical Earth orbiters using improved Doppler data-processing modes
NASA Technical Reports Server (NTRS)
Estefan, J. A.
1995-01-01
A navigation error covariance analysis of four highly elliptical Earth orbits is described, with apogee heights ranging from 20,000 to 76,800 km and perigee heights ranging from 1,000 to 5,000 km. This analysis differs from earlier studies in that improved navigation data-processing modes were used to reduce the radio metric data. For this study, X-band (8.4-GHz) Doppler data were assumed to be acquired from two Deep Space Network radio antennas and reconstructed orbit errors propagated over a single day. Doppler measurements were formulated as total-count phase measurements and compared to the traditional formulation of differenced-count frequency measurements. In addition, an enhanced data-filtering strategy was used, which treated the principal ground system calibration errors affecting the data as filter parameters. Results suggest that a 40- to 60-percent accuracy improvement may be achievable over traditional data-processing modes in reconstructed orbit errors, with a substantial reduction in reconstructed velocity errors at perigee. Historically, this has been a regime in which stringent navigation requirements have been difficult to meet by conventional methods.
Linking models and data on vegetation structure
NASA Astrophysics Data System (ADS)
Hurtt, G. C.; Fisk, J.; Thomas, R. Q.; Dubayah, R.; Moorcroft, P. R.; Shugart, H. H.
2010-06-01
For more than a century, scientists have recognized the importance of vegetation structure in understanding forest dynamics. Now future satellite missions such as Deformation, Ecosystem Structure, and Dynamics of Ice (DESDynI) hold the potential to provide unprecedented global data on vegetation structure needed to reduce uncertainties in terrestrial carbon dynamics. Here, we briefly review the uses of data on vegetation structure in ecosystem models, develop and analyze theoretical models to quantify model-data requirements, and describe recent progress using a mechanistic modeling approach utilizing a formal scaling method and data on vegetation structure to improve model predictions. Generally, both limited sampling and coarse resolution averaging lead to model initialization error, which in turn is propagated in subsequent model prediction uncertainty and error. In cases with representative sampling, sufficient resolution, and linear dynamics, errors in initialization tend to compensate at larger spatial scales. However, with inadequate sampling, overly coarse resolution data or models, and nonlinear dynamics, errors in initialization lead to prediction error. A robust model-data framework will require both models and data on vegetation structure sufficient to resolve important environmental gradients and tree-level heterogeneity in forest structure globally.
NASA Astrophysics Data System (ADS)
Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong
2013-04-01
Overpressure is an important cause of the domino effect in accidents involving chemical process equipment. Models for the propagation probability and threshold values of the overpressure-driven domino effect have been proposed in a previous study. To test the rationality and validity of those models, the two boundary values separating the three reported damage degrees were treated as random variables in the interval [0, 100%]. Based on the reported overpressure data for equipment damage and damage state, and the calculation method given in the references, the mean square errors of the four categories of overpressure damage-probability models were calculated with random boundary values, yielding a relationship between mean square error and the two boundary values, from which the minimum mean square error was obtained; compared with the result of the present work, the mean square error decreases by about 3%. The error is therefore in the acceptable range for engineering applications, and the reported models can be considered reasonable and valid.
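The search for the boundary-value pair minimizing the mean square error can be sketched as a brute-force grid search over ordered pairs; the MSE surface below is a toy stand-in, not the damage-probability models from the references:

```python
import numpy as np

def min_mse_boundaries(mse_fn, grid):
    """Brute-force search over the two damage-degree boundary values
    b1 < b2 in (0, 1), returning the pair that minimizes the mean
    square error. mse_fn is assumed to score one (b1, b2) candidate."""
    best_pair, best_err = None, np.inf
    for b1 in grid:
        for b2 in grid:
            if b1 < b2:                      # boundaries must be ordered
                e = mse_fn(b1, b2)
                if e < best_err:
                    best_pair, best_err = (b1, b2), e
    return best_pair, best_err

# Toy stand-in for the MSE surface, with its minimum near (0.3, 0.7).
toy = lambda b1, b2: (b1 - 0.3) ** 2 + (b2 - 0.7) ** 2
b_opt, e_opt = min_mse_boundaries(toy, np.linspace(0.05, 0.95, 19))
```

In the paper's setting, mse_fn would score the four categories of damage-probability models against the observed damage states for each candidate boundary pair.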
Performance of a laser microsatellite network with an optical preamplifier.
Arnon, Shlomi
2005-04-01
Laser satellite communication (LSC) uses free space as a propagation medium for various applications, such as intersatellite communication or satellite networking. An LSC system includes a laser transmitter and an optical receiver. For communication to occur, the line of sight of the transmitter and the receiver must be aligned. However, mechanical vibration and electronic noise in the control system reduce alignment between the transmitter laser beam and the receiver field of view (FOV), which results in pointing errors. The outcome of pointing errors is fading of the received signal, which leads to impaired link performance. An LSC system is considered in which the optical preamplifier is incorporated into the receiver, and a bit error probability (BEP) model is derived that takes into account the statistics of the pointing error as well as the optical amplifier and communication system parameters. The model and the numerical calculation results indicate that random pointing errors of sigma(chi)2G > 0.05 penalize communication performance dramatically for all combinations of optical amplifier gains and noise figures that were calculated.
Neural Network Compensation for Frequency Cross-Talk in Laser Interferometry
NASA Astrophysics Data System (ADS)
Lee, Wooram; Heo, Gunhaeng; You, Kwanho
The heterodyne laser interferometer serves as an ultra-precise measurement apparatus in semiconductor manufacturing. However, the periodic nonlinearity caused by frequency cross-talk is an obstacle to achieving high measurement accuracy at the nanometer scale. To minimize the nonlinearity error of the heterodyne interferometer, we propose a frequency cross-talk compensation algorithm using an artificial intelligence method. A feedforward neural network trained by back-propagation compensates for the nonlinearity error and is tuned to minimize the difference from the reference signal. Experimental results demonstrate the improved accuracy through comparison with the position value from a capacitive displacement sensor.
Robust vector quantization for noisy channels
NASA Technical Reports Server (NTRS)
Demarca, J. R. B.; Farvardin, N.; Jayant, N. S.; Shoham, Y.
1988-01-01
The paper briefly discusses techniques for making vector quantizers more tolerant to transmission errors. Two algorithms are presented for obtaining an efficient binary word assignment to the vector quantizer codewords without increasing the transmission rate. It is shown that about 4.5 dB gain over random assignment can be achieved with these algorithms. It is also proposed to reduce the effects of error propagation in vector-predictive quantizers by appropriately constraining the response of the predictive loop. The constrained system is shown to have about 4 dB of SNR gain over an unconstrained system in a noisy channel, with a small loss of clean-channel performance.
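The sensitivity of a quantizer to channel bit errors under a given index assignment can be estimated to first order by summing the distortion over single-bit flips. This sketch uses a scalar codebook with the natural binary assignment; the paper's assignment-search algorithms are not reproduced here:

```python
import numpy as np

def single_bit_distortion(codebook, assignment, p):
    """First-order expected distortion added by single-bit channel
    errors on a binary symmetric channel with crossover probability p,
    for a given binary index assignment: each transmitted index is
    received with one bit flipped with probability ~p per bit."""
    n_bits = int(np.log2(len(codebook)))
    inverse = {code: i for i, code in enumerate(assignment)}
    d = 0.0
    for i, code in enumerate(assignment):
        for b in range(n_bits):
            j = inverse[code ^ (1 << b)]      # codeword received instead
            d += p * (codebook[i] - codebook[j]) ** 2
    return d / len(codebook)

# Equally spaced scalar codebook, natural binary assignment.
levels = np.arange(8.0)
d_natural = single_bit_distortion(levels, list(range(8)), p=0.01)
```

Searching over assignments to minimize this quantity (at fixed rate) is exactly the kind of optimization the two algorithms in the abstract perform, which is where the reported ~4.5 dB gain over random assignment comes from.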
A Real-Time High Performance Data Compression Technique For Space Applications
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.
2000-01-01
A high performance lossy data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on block-transform combined with bit-plane encoding; this combination results in an embedded bit string with exactly the desirable compression rate. The lossy coder is described. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Hardware implementations are in development; a functional chip set is expected by the end of 2001.
Decision feedback equalizer for holographic data storage.
Kim, Kyuhwan; Kim, Seung Hun; Koo, Gyogwon; Seo, Min Seok; Kim, Sang Woo
2018-05-20
Holographic data storage (HDS) has attracted much attention as a next-generation storage medium. Because HDS suffers from two-dimensional (2D) inter-symbol interference (ISI), the partial-response maximum-likelihood (PRML) method has been studied to reduce 2D ISI. However, the PRML method has various drawbacks. To solve the problems, we propose a modified decision feedback equalizer (DFE) for HDS. To prevent the error propagation problem, which is a typical problem in DFEs, we also propose a reliability factor for HDS. Various simulations were executed to analyze the performance of the proposed methods. The proposed methods showed fast processing speed after training, superior bit error rate performance, and consistency.
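A minimal hard-decision DFE (one-dimensional rather than the 2D case treated in the paper, and without the proposed reliability factor) shows the feedback structure in which past decisions re-enter the filter; a wrong decision feeds back into later symbols, which is exactly the error-propagation problem the reliability factor targets:

```python
def dfe_equalize(received, feedback_taps):
    """Minimal hard-decision DFE for a causal channel h = [1, *taps]:
    subtract the ISI contributed by past *decisions*, then slice.
    A wrong decision corrupts later symbols (error propagation)."""
    decisions = []
    for y in received:
        past = decisions[-len(feedback_taps):]
        isi = sum(t * d for t, d in zip(feedback_taps, reversed(past)))
        z = y - isi
        decisions.append(1.0 if z >= 0 else -1.0)
    return decisions

# BPSK symbols through h = [1.0, 0.5] (one postcursor tap), no noise
symbols = [1, -1, -1, 1, 1, -1, 1, 1]
h = [1.0, 0.5]
received = [symbols[n] + (h[1] * symbols[n - 1] if n > 0 else 0.0)
            for n in range(len(symbols))]
detected = dfe_equalize(received, feedback_taps=[0.5])
```

With a clean channel and correct decisions the postcursor ISI cancels exactly; with noise, one slicing error propagates until the feedback memory flushes.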
Experimental evaluation of multiprocessor cache-based error recovery
NASA Technical Reports Server (NTRS)
Janssens, Bob; Fuchs, W. K.
1991-01-01
Several variations of cache-based checkpointing for rollback error recovery in shared-memory multiprocessors have recently been developed. By modifying the cache replacement policy, these techniques use the inherent redundancy in the memory hierarchy to periodically checkpoint the computation state. Three schemes, differing in the manner in which they avoid rollback propagation, are evaluated. By simulation with address traces from parallel applications running on an Encore Multimax shared-memory multiprocessor, the performance effect of integrating the recovery schemes into the cache coherence protocol is evaluated. The results indicate that the cache-based schemes can provide checkpointing capability with low performance overhead but with high, uncontrollable variability in the checkpoint interval.
A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taverniers, Søren; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu
2017-02-01
Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton–Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
Zhao, Guo; Wang, Hui; Liu, Gang; Wang, Zhiqiang
2016-09-21
An easy but effective method has been proposed to detect and quantify Pb(II) in the presence of Cd(II), based on a Bi/glassy carbon electrode (Bi/GCE) combined with a back-propagation artificial neural network (BP-ANN) and square wave anodic stripping voltammetry (SWASV), without further electrode modification. The effects of Cd(II) at different concentrations on the stripping responses of Pb(II) were studied. The results indicate that the presence of Cd(II) reduces the prediction precision of a direct calibration model. Therefore, a two-input, one-output BP-ANN was built for the optimization of the stripping voltammetric sensor; it considers the combined effects of Cd(II) and Pb(II) on the SWASV detection of Pb(II) and establishes the nonlinear relationship between the stripping peak currents of Pb(II) and Cd(II) and the concentration of Pb(II). The key parameters of the BP-ANN and the factors affecting the SWASV detection of Pb(II) were optimized. The prediction performance of the direct calibration model and the BP-ANN model was tested with regard to mean absolute error (MAE), root mean square error (RMSE), average relative error (ARE), and correlation coefficient. The results proved that the BP-ANN model exhibited higher prediction accuracy than the direct calibration model. Finally, a real-sample analysis was performed to determine trace Pb(II) in soil specimens, with satisfactory results.
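A sketch of a two-input, one-output network trained by plain gradient-descent back-propagation, on synthetic data standing in for the Pb(II)/Cd(II) peak currents. The interference term and all hyperparameters are assumptions for illustration, not the paper's calibration data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration set: peak currents (i_Pb, i_Cd) -> [Pb(II)],
# with a mild product term standing in for the Cd interference effect.
i_pb = rng.uniform(0.1, 1.0, size=(200, 1))
i_cd = rng.uniform(0.1, 1.0, size=(200, 1))
conc = i_pb + 0.3 * i_pb * i_cd           # synthetic nonlinear relationship
X = np.hstack([i_pb, i_cd])

# Two-input, one-output network with one hidden tanh layer.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
losses = []
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)              # forward pass
    pred = H @ W2 + b2
    err = pred - conc
    losses.append(float(np.mean(err ** 2)))
    dW2 = H.T @ err / len(X); db2 = err.mean(0)     # backward pass
    dH = err @ W2.T * (1.0 - H ** 2)
    dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

The point of the two-input design is that the network sees both peak currents, so it can undo the Cd-dependent bias that a single-input direct calibration cannot.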
VizieR Online Data Catalog: Spitzer Atlas of Stellar Spectra (SASS) (Ardila+, 2010)
NASA Astrophysics Data System (ADS)
Ardila, D. R.; van Dyk, S. D.; Makowiecki, W.; Stauffer, J.; Song, I.; Rho, J.; Fajardo-Acosta, S.; Hoard, D. W.; Wachter, S.
2010-11-01
From IRS Staring observations in the Spitzer archive we selected those stellar targets that had been observed with all the low-resolution IRS modules. We did not include known young stars with circumstellar material, stars known to harbor debris disks, or objects classified in SIMBAD as RS CVn, Be stars, or eclipsing binaries. We have also avoided classes already fully described with IRAS, ISO, or Spitzer, such as Asymptotic Giant Branch stars, and rejected targets presenting IR excesses. However, note that in the case of very massive and/or evolved stars there are few objects presenting a pure photospheric spectrum. A few stars were specifically selected for their intrinsic interest regardless of their IR excess, even if the Atlas already contained another star with the same spectral type. The spectral coverage reaches only to 14um in the case of very late spectral classes (late M, L and T dwarfs) and some WR stars for which the long wavelength modules are unusable or not present in the archive. The spectral types have been taken from (in order of priority): * NStED (http://nsted.ipac.caltech.edu/), * NStars (http://nstars.nau.edu/nau_nstars/about.htm), * the Tycho-2 Spectral Type Catalog (Cat. III/231), * SIMBAD. For certain types of objects, we have used specialized catalogs as the source of the spectral types. The data were processed with the Spitzer Science Center S18.7.0 pipeline and corrected for teardrop effects, slit position uncertainties, residual flat-field errors, residual model errors, 24um flux deficit (1), fringing, and order mismatches. The Atlas files contain an error value for each wavelength, intended to represent the random 1sig error at that wavelength. This is the error provided by the SSC's S18.7.0 pipeline and propagated through the reduction procedure. The treatment of errors remains incomplete in this pipeline (2). The errors provided here should be considered carefully before propagating them into further calculations.
However, the processing ensures that the spectra do not have strong spurious emission or absorption lines in large signal-to-noise regions. (1) http://ssc.spitzer.caltech.edu/irs/irsinstrumenthandbook/102/#Toc253561116 (2) http://ssc.spitzer.caltech.edu/irs/irsinstrumenthandbook/ (4 data files).
Effect of correlated observation error on parameters, predictions, and uncertainty
Tiedeman, Claire; Green, Christopher T.
2013-01-01
Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of error correlation (ρ) and the ratio of dimensionless scaled sensitivities (rdss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of rdss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
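The weighting issue can be made concrete with generalized least squares: when the full observation-error covariance Σ is used as the weight matrix, the parameter covariance is (AᵀΣ⁻¹A)⁻¹, and omitting the off-diagonal correlation changes the reported variance. A one-parameter, two-observation sketch with assumed numbers (echoing, not reproducing, the paper's analytical example):

```python
import numpy as np

def parameter_covariance(A, sigma_obs):
    """Generalized least squares: weighting with the full observation-error
    covariance gives Cov(theta) = (A^T Sigma^-1 A)^-1."""
    W = np.linalg.inv(sigma_obs)
    return np.linalg.inv(A.T @ W @ A)

# One parameter observed twice, with error correlation rho = 0.5
A = np.array([[1.0], [1.0]])
rho = 0.5
sigma = np.array([[1.0, rho], [rho, 1.0]])

var_with = parameter_covariance(A, sigma)[0, 0]
# Omitting the correlation: weight by the diagonal only. The formula then
# reports 0.5, understating the actual estimator variance (0.75) that the
# positive error correlation produces in this example.
var_naive = parameter_covariance(A, np.diag(np.diag(sigma)))[0, 0]
```

Whether the naive variance is too small or too large depends on the sign and pattern of the correlations and sensitivities, which is the paper's point.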
NASA Astrophysics Data System (ADS)
Waeldele, F.
1983-01-01
The influence of sample shape deviations on measurement uncertainties and the optimization of computer-aided coordinate measurement were investigated for a circle and a cylinder. Using the complete error propagation law in matrix form, the parameter uncertainties are calculated, taking the correlation between the measurement points into account. Theoretical investigations show that the measuring points have to be distributed equidistantly and that, for a cylindrical body, a measuring-point distribution along a cross section is better than along a helical line. The theoretically obtained expressions for calculating the uncertainties prove to be a good basis for estimation; the simple error theory is not satisfactory for this purpose. The complete statistical data analysis theory helps to avoid gross measurement errors and to match the number of measuring points to the required measurement uncertainty.
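The matrix form of the error-propagation law can be sketched for a geometric circle fit: with uncorrelated, equal-variance radial errors, Cov(θ) = σ²(JᵀJ)⁻¹, and equidistant probing angles diagonalize JᵀJ. This simplified sketch ignores the point-to-point correlations the paper treats, and the numbers are assumptions:

```python
import numpy as np

def circle_fit_covariance(thetas, sigma):
    """Linearized parameter covariance sigma^2 (J^T J)^-1 for a geometric
    circle fit (center a, b and radius r). The Jacobian row at angle t is
    [-cos t, -sin t, -1] for the radial residual, assuming uncorrelated,
    equal-variance measurement errors."""
    J = np.column_stack([-np.cos(thetas), -np.sin(thetas),
                         -np.ones_like(thetas)])
    return sigma ** 2 * np.linalg.inv(J.T @ J)

N, sigma = 12, 0.01                       # 12 probing points, assumed noise
equi = np.arange(N) * 2.0 * np.pi / N     # equidistant around the circle
arc = np.linspace(0.0, np.pi / 3, N)      # points crowded on a 60 deg arc
cov_equi = circle_fit_covariance(equi, sigma)
cov_arc = circle_fit_covariance(arc, sigma)
```

For equidistant points JᵀJ = diag(N/2, N/2, N), so the center and radius variances are σ²·2/N and σ²/N; crowding the points on an arc inflates the uncertainties dramatically, which is the optimization result the abstract states.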
The theoretical precision analysis of RFM localization for satellite remote sensing imagery
NASA Astrophysics Data System (ADS)
Zhang, Jianqing; Xv, Biao
2009-11-01
The traditional method of assessing the precision of the Rational Function Model (RFM) is to use a large number of check points, calculating the mean square error by comparing computed coordinates with known coordinates. This method comes from probability theory: the mean square error is statistically estimated from a large number of samples, and the estimate can be considered to approach its true value when the sample is large enough. This paper instead approaches the problem from the perspective of survey adjustment, taking the law of propagation of error as the theoretical basis, and calculates the theoretical precision of RFM localization. Taking SPOT5 three-line-array imagery as experimental data, the results of the traditional method and the method described in this paper are compared; the comparison confirms that the traditional method is feasible and answers the question of its theoretical precision from the survey-adjustment perspective.
Short-Block Protograph-Based LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher
2010-01-01
Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.
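Belief propagation itself can be illustrated on a Tanner graph far smaller than any practical protograph LDPC code. The (7,4) Hamming parity-check matrix below is only a stand-in, and the decoder is textbook sum-product message passing, not the high-speed designs the article alludes to:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code, used as a tiny stand-in
# for an LDPC code; the decoder is standard sum-product BP on its Tanner graph.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def decode_bp(llr, H, max_iters=20):
    m, n = H.shape
    msg_cv = np.zeros((m, n))                 # check-to-variable messages
    for _ in range(max_iters):
        # Variable-to-check: total belief minus the incoming edge's message.
        total = llr + msg_cv.sum(axis=0)
        msg_vc = (total - msg_cv) * H
        # Check-to-variable: tanh rule over the other edges of each check.
        t = np.tanh(np.clip(msg_vc / 2.0, -15.0, 15.0))
        for c in range(m):
            idx = np.flatnonzero(H[c])
            for v in idx:
                prod = np.prod([t[c, u] for u in idx if u != v])
                msg_cv[c, v] = 2.0 * np.arctanh(
                    np.clip(prod, -0.999999, 0.999999))
        hard = (llr + msg_cv.sum(axis=0) < 0).astype(int)
        if not np.any(H @ hard % 2):          # syndrome satisfied: stop early
            return hard
    return hard

# All-zero codeword, positive LLR = "probably 0"; one bit received in error.
llr = np.full(7, 4.0)
llr[0] = -4.0
decoded = decode_bp(llr, H)
```

The same message-passing schedule, run on a sparse protograph-derived parity-check matrix, is what the high-speed decoders implement in hardware.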
NASA Technical Reports Server (NTRS)
Auger, Ludovic; Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
A suboptimal Kalman filter system which evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. The truncation is carried out in such a way that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days, with different degrees of truncation, were carried out. These reduced the covariance by 90, 97 and 99% and the computational cost of covariance propagation by 80, 93 and 96%, respectively. The difference in both the error covariance and the tracer field between the truncated and full systems over this period was found not to grow in the first case, and to grow relatively slowly in the latter two cases. The largest errors in the tracer fields were found to occur in regions of largest zonal gradients in the tracer field.
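The idea of representing covariance information in a truncated wavelet basis can be sketched with a hand-rolled orthonormal Haar transform applied to a single smooth covariance row (a drastic simplification of the suboptimal Kalman filter described above; the row shape and truncation level are assumptions):

```python
import numpy as np

def haar(v):
    """Orthonormal Haar transform of a length-2^k vector (Mallat ordering:
    coarsest coefficients first)."""
    out = np.asarray(v, dtype=float).copy()
    n = len(out)
    while n > 1:
        a = (out[0:n:2] + out[1:n:2]) / np.sqrt(2.0)
        d = (out[0:n:2] - out[1:n:2]) / np.sqrt(2.0)
        out[: n // 2] = a
        out[n // 2 : n] = d
        n //= 2
    return out

def ihaar(w):
    """Exact inverse of haar()."""
    out = np.asarray(w, dtype=float).copy()
    n = 2
    while n <= len(out):
        a = out[: n // 2].copy()
        d = out[n // 2 : n].copy()
        out[0:n:2] = (a + d) / np.sqrt(2.0)
        out[1:n:2] = (a - d) / np.sqrt(2.0)
        n *= 2
    return out

# One smooth covariance row: keep only the 8 largest of 32 coefficients.
row = np.exp(-np.abs(np.arange(32) - 16) / 8.0)
w = haar(row)
idx = np.argsort(np.abs(w))[-8:]
w_trunc = np.zeros_like(w)
w_trunc[idx] = w[idx]
row_approx = ihaar(w_trunc)
rel_err = np.linalg.norm(row - row_approx) / np.linalg.norm(row)
```

Because smooth (low-gradient) directions concentrate their energy in few wavelet coefficients, a 75% truncation of this row still reconstructs it closely, which is why truncating only the zonal direction is cheap.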
Lievens, Hans; Vernieuwe, Hilde; Álvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E.C.
2009-01-01
In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top-layer soil moisture content and observed backscatter coefficients, which has mainly been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetic surface profiles, which investigates how errors in roughness parameters are introduced by standard measurement techniques, and how they propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the most widely used SAR configurations. Key aspects influencing the error in the roughness parameterization, and consequently in soil moisture retrieval, are the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements, and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with a C-band configuration is generally less sensitive to inaccuracies in roughness parameterization than retrieval with an L-band configuration. PMID:22399956
Detection of layup errors in prepreg laminates using shear ultrasonic waves
NASA Astrophysics Data System (ADS)
Hsu, David K.; Fischer, Brent A.
1996-11-01
The highly anisotropic elastic properties of the plies in a composite laminate manufactured from unidirectional prepregs interact strongly with the polarization direction of shear ultrasonic waves propagating through its thickness. The received signals in a 'crossed polarizer' transmission configuration are particularly sensitive to ply orientation and layup sequence in a laminate. Such measurements can therefore serve as an NDE tool for detecting layup errors. For example, it was recently shown experimentally that the sensitivity for detecting the presence of misoriented plies is better than one ply out of a 48-ply laminate of graphite epoxy. A physical model based on the decomposition and recombination of the shear polarization vector has been constructed and used in the interpretation and prediction of test results. Since errors should be detected early in the manufacturing process, this work also addresses the inspection of 'green' composite laminates using electromagnetic acoustic transducers (EMAT). Preliminary results for ply error detection obtained with EMAT probes are described.
NASA Astrophysics Data System (ADS)
Streicher, Michael; Brown, Steven; Zhu, Yuefeng; Goodman, David; He, Zhong
2016-10-01
To accurately characterize shielded special nuclear materials (SNM) using passive gamma-ray spectroscopy measurement techniques, the effective atomic number and the thickness of shielding materials must be measured. Intervening materials between the source and detector may affect the estimated source isotopics (uranium enrichment and plutonium grade) for techniques which rely on raw count rates or photopeak ratios of gamma-ray lines separated in energy. Furthermore, knowledge of the surrounding materials can provide insight regarding the configuration of a device containing SNM. The described method was developed using spectra recorded with high-energy-resolution CdZnTe detectors, but can be expanded to any gamma-ray spectrometers with energy resolution of better than 1% FWHM at 662 keV. The effective atomic number, Z, and mass thickness of the intervening shielding material are identified by comparing the relative attenuation of different gamma-ray lines and estimating the proportion of Compton scattering interactions to photoelectric absorptions within the shield. While characteristic Kα x-rays can be used to identify shielding materials made of high-Z elements, this method can be applied to all shielding materials. This algorithm has adequately estimated the effective atomic number for shields made of iron, aluminum, and polyethylene surrounding uranium samples using experimental data. The mass thicknesses of shielding materials have been estimated with a standard error of less than 1.3 g/cm2 for iron shields up to 2.5 cm thick. The effective atomic number was accurately estimated as 26 ± 5 for all iron thicknesses.
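The relative-attenuation idea reduces to a one-line inversion: for two gamma-ray lines with a known unattenuated intensity ratio, the shield's mass thickness follows from the measured ratio and the difference in mass attenuation coefficients. The coefficients and ratios below are illustrative assumptions, not tabulated values:

```python
import numpy as np

def mass_thickness(ratio_measured, ratio_unattenuated, mu1, mu2):
    """Solve I1/I2 = R0 * exp(-(mu1 - mu2) * x) for the mass thickness
    x in g/cm^2, where mu1 and mu2 are the shield's mass attenuation
    coefficients (cm^2/g) at the two gamma-ray energies."""
    return np.log(ratio_unattenuated / ratio_measured) / (mu1 - mu2)

# Hypothetical numbers for illustration only (not real attenuation data):
mu_lo, mu_hi = 0.20, 0.06    # cm^2/g at a low- and a high-energy line
x_true = 1.3                 # g/cm^2 of iron-like shielding
r0 = 2.5                     # unattenuated emission ratio of the two lines
r_meas = r0 * np.exp(-(mu_lo - mu_hi) * x_true)
x_est = mass_thickness(r_meas, r0, mu_lo, mu_hi)
```

Because μ(E) depends on the effective Z, repeating this over several line pairs constrains both Z and the mass thickness, which is the essence of the described algorithm.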
VISION User Guide - VISION (Verifiable Fuel Cycle Simulation) Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacob J. Jacobson; Robert F. Jeffers; Gretchen E. Matthern
2009-08-01
The purpose of this document is to provide a guide for using the current version of the Verifiable Fuel Cycle Simulation (VISION) model. This is a complex model with many parameters; the user is strongly encouraged to read this user guide before attempting to run the model. This model is an R&D work in progress and may contain errors and omissions. It is based upon numerous assumptions. This model is intended to assist in evaluating “what if” scenarios and in comparing fuel, reactor, and fuel processing alternatives at a systems level for U.S. nuclear power. The model is not intended as a tool for process flow and design modeling of specific facilities nor for tracking individual units of fuel or other material through the system. The model is intended to examine the interactions among the components of a fuel system as a function of time varying system parameters; this model represents a dynamic rather than steady-state approximation of the nuclear fuel system. VISION models the nuclear cycle at the system level, not individual facilities, e.g., “reactor types” not individual reactors and “separation types” not individual separation plants. Natural uranium can be enriched, which produces enriched uranium, which goes into fuel fabrication, and depleted uranium (DU), which goes into storage. Fuel is transformed (transmuted) in reactors and then goes into a storage buffer. Used fuel can be pulled from storage into either separations or disposal. If sent to separations, fuel is transformed (partitioned) into fuel products, recovered uranium, and various categories of waste. Recycled material is stored until used by its assigned reactor type. Note that recovered uranium is itself often partitioned: some RU flows with recycled transuranic elements, some flows with wastes, and the rest is designated RU. RU comes out of storage if needed to correct the U/TRU ratio in new recycled fuel. Neither RU nor DU is designated as waste.
VISION comprises several Microsoft Excel input files, a Powersim Studio core, and several Microsoft Excel output files. All must be co-located in the same folder on a PC to function. We use Microsoft Excel 2003 and have not tested VISION with Microsoft Excel 2007. The VISION team uses both Powersim Studio 2005 and 2009, and it should work with either.
40 CFR 421.320 - Applicability: Description of the secondary uranium subcategory.
Code of Federal Regulations, 2012 CFR
2012-07-01
... secondary uranium subcategory. 421.320 Section 421.320 Protection of Environment ENVIRONMENTAL PROTECTION... CATEGORY Secondary Uranium Subcategory § 421.320 Applicability: Description of the secondary uranium... uranium (including depleted uranium) by secondary uranium facilities. ...
40 CFR 421.320 - Applicability: Description of the secondary uranium subcategory.
Code of Federal Regulations, 2011 CFR
2011-07-01
... secondary uranium subcategory. 421.320 Section 421.320 Protection of Environment ENVIRONMENTAL PROTECTION... CATEGORY Secondary Uranium Subcategory § 421.320 Applicability: Description of the secondary uranium... uranium (including depleted uranium) by secondary uranium facilities. ...
40 CFR 421.320 - Applicability: Description of the secondary uranium subcategory.
Code of Federal Regulations, 2013 CFR
2013-07-01
... secondary uranium subcategory. 421.320 Section 421.320 Protection of Environment ENVIRONMENTAL PROTECTION... CATEGORY Secondary Uranium Subcategory § 421.320 Applicability: Description of the secondary uranium... uranium (including depleted uranium) by secondary uranium facilities. ...
40 CFR 421.320 - Applicability: Description of the secondary uranium subcategory.
Code of Federal Regulations, 2014 CFR
2014-07-01
... secondary uranium subcategory. 421.320 Section 421.320 Protection of Environment ENVIRONMENTAL PROTECTION... CATEGORY Secondary Uranium Subcategory § 421.320 Applicability: Description of the secondary uranium... uranium (including depleted uranium) by secondary uranium facilities. ...
40 CFR 421.320 - Applicability: Description of the secondary uranium subcategory.
Code of Federal Regulations, 2010 CFR
2010-07-01
... secondary uranium subcategory. 421.320 Section 421.320 Protection of Environment ENVIRONMENTAL PROTECTION... CATEGORY Secondary Uranium Subcategory § 421.320 Applicability: Description of the secondary uranium... uranium (including depleted uranium) by secondary uranium facilities. ...
Bioremediation of uranium contamination with enzymatic uranium reduction
Lovley, D.R.; Phillips, E.J.P.
1992-01-01
Enzymatic uranium reduction by Desulfovibrio desulfuricans readily removed uranium from solution in a batch system or when D. desulfuricans was separated from the bulk of the uranium-containing water by a semipermeable membrane. Uranium reduction continued at concentrations as high as 24 mM. Of a variety of potentially inhibiting anions and metals evaluated, only high concentrations of copper inhibited uranium reduction. Freeze-dried cells, stored aerobically, reduced uranium as fast as fresh cells. D. desulfuricans reduced uranium in pH 4 and pH 7.4 mine drainage waters and in uranium-containing groundwaters from a contaminated Department of Energy site. Enzymatic uranium reduction has several potential advantages over other bioprocessing techniques for uranium removal, the most important of which are as follows: the ability to precipitate uranium that is in the form of a uranyl carbonate complex; high capacity for uranium removal per cell; and the formation of a compact, relatively pure uranium precipitate.
A Monte-Carlo Bayesian framework for urban rainfall error modelling
NASA Astrophysics Data System (ADS)
Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian
2016-04-01
Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting rainfall input requirements for urban hydrology (including increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records), rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have on subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e. errors associated with rain gauge measurements or radar QPEs alone) and with a single representative error source (e.g. radar-rain gauge differences, limited spatial and temporal resolution). Moreover, rainfall error models have mostly been developed for and tested at large scales. Studies at urban scales are mostly limited to analyses of the propagation of errors in rain gauge records alone through urban drainage models, and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models, originally developed for large scales, have been tested at urban scales [2], and they have been shown to fail to capture well the small-scale storm dynamics, including storm peaks, which are of utmost importance for urban runoff simulations.
In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data sources (in this case, the radar and rain gauge estimates typically available at present), while at the same time enabling dynamic combination of these data sources (thus not only quantifying uncertainty, but also reducing it). This model generates an ensemble of merged rainfall estimates, which can then be used as input to urban drainage models in order to examine how uncertainties in rainfall estimates propagate to urban runoff estimates. The proposed model is tested on a case study comprising a detailed rainfall and flow dataset and a carefully verified urban drainage model of a small (~9 km2) pilot catchment in North-East London. The model has been shown to characterise well the residual errors in rainfall data at urban scales (those which remain after the merging), leading to improved runoff estimates. In fact, the majority of measured flow peaks are bounded within the uncertainty area produced by the runoff ensembles generated with the ensemble rainfall inputs. REFERENCES: [1] Ciach, G. J. & Krajewski, W. F. (1999). On the estimation of radar rainfall error variance. Advances in Water Resources, 22 (6), 585-595. [2] Rico-Ramirez, M. A., Liguori, S. & Schellart, A. N. A. (2015). Quantifying radar-rainfall uncertainties in urban drainage flow modelling. Journal of Hydrology, 528, 17-28.
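A heavily simplified sketch of propagating a rainfall ensemble through a runoff model. The i.i.d. lognormal perturbations and the toy linear-reservoir model are assumptions standing in for the paper's Bayesian merging and verified drainage model:

```python
import numpy as np

rng = np.random.default_rng(42)

def rainfall_ensemble(best_estimate, n_members, sigma_log=0.3):
    """Generate an ensemble of plausible rainfall series by applying
    multiplicative lognormal perturbations to a merged best estimate.
    A real implementation would sample spatially and temporally
    correlated residual-error fields; this sketch perturbs each time
    step independently."""
    eps = rng.lognormal(mean=0.0, sigma=sigma_log,
                        size=(n_members, len(best_estimate)))
    return best_estimate * eps

def runoff(rain, k=0.8):
    """Toy linear-reservoir model standing in for the urban drainage
    model the ensemble is propagated through."""
    q = np.zeros_like(rain, dtype=float)
    for t in range(1, len(rain)):
        q[t] = k * q[t - 1] + (1.0 - k) * rain[t]
    return q

rain = np.array([0.0, 2.0, 8.0, 5.0, 1.0, 0.0, 0.0])   # mm per step
members = rainfall_ensemble(rain, n_members=500)
flows = np.array([runoff(m) for m in members])
band = np.percentile(flows, [5, 95], axis=0)   # runoff uncertainty band
```

Checking whether measured flow peaks fall inside such a band is the verification step the abstract reports for the pilot catchment.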
Koyama, Tatsuya; Iwasaki, Atsushi; Ogoshi, Yosuke; Okada, Eiji
2005-04-10
A practical and adequate approach to modeling light propagation in an adult head with a low-scattering cerebrospinal fluid (CSF) region by use of diffusion theory was investigated. The diffusion approximation does not hold in nonscattering or low-scattering regions. The hybrid radiosity-diffusion method was adopted to model the light propagation in the head with a nonscattering region. In the hybrid method the geometry of the nonscattering region is acquired as a priori information. In reality, low-level scattering occurs in the CSF region and may reduce the error caused by the diffusion approximation. The partial optical path length and the spatial sensitivity profile calculated by the finite-element method agree well with those calculated by the Monte Carlo method in the case in which the transport scattering coefficient of the CSF layer is greater than 0.3 mm(-1). Because it is feasible to assume that the transport scattering coefficient of a CSF layer is 0.3 mm(-1), it is practical to apply diffusion theory to the modeling of light propagation in an adult head as an alternative to the hybrid method.
NASA Astrophysics Data System (ADS)
Koyama, Tatsuya; Iwasaki, Atsushi; Ogoshi, Yosuke; Okada, Eiji
2005-04-01
A practical and adequate approach to modeling light propagation in an adult head with a low-scattering cerebrospinal fluid (CSF) region by use of diffusion theory was investigated. The diffusion approximation does not hold in nonscattering or low-scattering regions. The hybrid radiosity-diffusion method was adopted to model the light propagation in the head with a nonscattering region. In the hybrid method the geometry of the nonscattering region is acquired as a priori information. In reality, low-level scattering occurs in the CSF region and may reduce the error caused by the diffusion approximation. The partial optical path length and the spatial sensitivity profile calculated by the finite-element method agree well with those calculated by the Monte Carlo method in the case in which the transport scattering coefficient of the CSF layer is greater than 0.3 mm^-1. Because it is feasible to assume that the transport scattering coefficient of a CSF layer is 0.3 mm^-1, it is practical to apply diffusion theory to the modeling of light propagation in an adult head as an alternative to the hybrid method.
NASA Astrophysics Data System (ADS)
Kenok, R.; Jomdecha, C.; Jirarungsatian, C.
The aim of this paper is to study the acoustic emission (AE) parameters obtained from CNG cylinders during pressurization. AE from flaw propagation, material integrity, and pressurization of the cylinder was the main target of characterization. CNG cylinders conforming to ISO 11439, of the fully resin-wrapped and metal-liner types, were tested by hydrostatic pressurization. The pressure was increased in steps up to 1.1 times the operating pressure. Two AE sensors with a resonance frequency of 150 kHz were mounted on the cylinder wall to detect AE throughout the testing. The experimental results show that AE can be detected from the pressurization rate, material integrity, and flaw propagation in the cylinder wall. AE parameters including amplitude, count, energy (MARSE), duration, and rise time were analyzed to distinguish the AE data. The results show that the AE of flaw propagation differed in character from that of pressurization; in particular, AE detected from flaws in the resin-wrapped and metal-liner cylinders was significantly different. The two AE sensors can be used to accurately locate flaw propagation in a linear pattern, with a location error of less than ±5 cm.
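Linear (one-dimensional) source location between two sensors reduces to a single arrival-time difference. A sketch with an assumed wave speed (the real value depends on the cylinder material and wave mode):

```python
def locate_linear(delta_t, sensor_spacing, wave_speed):
    """One-dimensional AE source location between two sensors a distance
    D apart: with delta_t = t2 - t1 (arrival-time difference), the source
    sits at x = (D - v * delta_t) / 2 from sensor 1."""
    return (sensor_spacing - wave_speed * delta_t) / 2.0

# Example: sensors 1.0 m apart along the cylinder wall, assumed wave
# speed 3000 m/s; a source 0.3 m from sensor 1.
D, v, x_true = 1.0, 3000.0, 0.3
t1 = x_true / v
t2 = (D - x_true) / v
x_est = locate_linear(t2 - t1, D, v)
```

Timing jitter of a few tens of microseconds at this wave speed maps to centimeter-scale position error, consistent with the ±5 cm figure reported above.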
Large Eddy Simulation of Turbulent Combustion
2005-10-01
A new method to automatically generate skeletal kinetic mechanisms for surrogate fuels, using the directed relation graph method with error propagation, was developed. These mechanisms are guaranteed to match results obtained using detailed chemistry within a user-defined accuracy for any specified target. They can be combined to produce adequate chemical models for surrogate fuels. A library containing skeletal mechanisms of various
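The error-propagation step of the directed relation graph method can be sketched as a best-path search: a species' overall importance to a target is the maximum over graph paths of the product of direct interaction coefficients along the path. The toy graph and threshold below are illustrative assumptions:

```python
import heapq

def drgep_coefficients(edges, target):
    """Directed relation graph with error propagation: overall importance
    of each species to `target` is the max over paths of the product of
    direct interaction coefficients, found by a Dijkstra-like search
    (products of values in (0, 1] decrease monotonically along a path)."""
    best = {target: 1.0}
    heap = [(-1.0, target)]
    while heap:
        neg_r, node = heapq.heappop(heap)
        r = -neg_r
        if r < best.get(node, 0.0):
            continue                      # stale heap entry
        for nbr, w in edges.get(node, {}).items():
            cand = r * w
            if cand > best.get(nbr, 0.0):
                best[nbr] = cand
                heapq.heappush(heap, (-cand, nbr))
    return best

# Toy interaction graph: direct interaction coefficients between species.
edges = {"fuel": {"A": 0.9, "B": 0.3},
         "A": {"B": 0.5},
         "B": {}}
R = drgep_coefficients(edges, "fuel")
kept = {s for s, r in R.items() if r >= 0.4}   # prune below a threshold
```

Species B's direct coefficient to the target is only 0.3, but the propagated path through A gives 0.9 × 0.5 = 0.45, so B survives a 0.4 threshold; pruning on direct coefficients alone would have dropped it.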
Fast discrete cosine transform structure suitable for implementation with integer computation
NASA Astrophysics Data System (ADS)
Jeong, Yeonsik; Lee, Imgeun
2000-10-01
The discrete cosine transform (DCT) has wide applications in speech and image coding. We propose a fast DCT scheme with fewer multiplication stages and fewer additions and multiplications. The proposed algorithm is structured so that most multiplications are performed at the final stage, which reduces the propagation error that can occur in integer computation.
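The snippet states the design goal (deferring multiplications to the final stage to limit integer rounding-error propagation) but does not include the actual butterfly structure. As a reference point only, the plain O(N²) DCT-II that such fast algorithms factorize can be sketched as follows (the function name and layout are illustrative, not the paper's):

```python
import math

def dct_ii(x):
    """Direct O(N^2) DCT-II (unnormalized). Fast schemes like the one in
    the paper factorize this sum into stages with fewer multiplications;
    this direct form serves only as a correctness reference."""
    n = len(x)
    return [sum(x[k] * math.cos(math.pi * (2 * k + 1) * m / (2 * n))
                for k in range(n))
            for m in range(n)]
```

A constant input concentrates all energy in the DC coefficient, which is a quick sanity check for any faster, restructured implementation.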
Martin P. Schilling; Paul G. Wolf; Aaron M. Duffy; Hardeep S. Rai; Carol A. Rowe; Bryce A. Richardson; Karen E. Mock
2014-01-01
Continuing advances in nucleotide sequencing technology are inspiring a suite of genomic approaches in studies of natural populations. Researchers are faced with data management and analytical scales that are increasing by orders of magnitude. With such dramatic advances comes a need to understand biases and error rates, which can be propagated and magnified in large-...
Ronald E. McRoberts
2005-01-01
Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...
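The abstract names three uncertainty sources but no model details survive in this snippet. A minimal sketch of the Monte Carlo propagation idea, using a purely hypothetical linear diameter-growth model and made-up standard deviations (all numbers below are illustrative assumptions, not the paper's):

```python
import random

def predict_growth(dbh, b0=0.2, b1=0.05):
    # Hypothetical linear diameter-growth model; coefficients are illustrative.
    return b0 + b1 * dbh

def monte_carlo_growth(dbh_obs, n=10000, meas_sd=0.5, resid_sd=0.1,
                       b1_sd=0.005, seed=42):
    """Propagate the three uncertainty sources the abstract lists:
    measurement error in the predictor, parameter uncertainty, and
    residual variability around the model prediction."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        dbh = dbh_obs + rng.gauss(0.0, meas_sd)        # measurement error
        b1 = 0.05 + rng.gauss(0.0, b1_sd)              # parameter uncertainty
        g = predict_growth(dbh, b1=b1) + rng.gauss(0.0, resid_sd)  # residual
        draws.append(g)
    mean = sum(draws) / n
    var = sum((d - mean) ** 2 for d in draws) / (n - 1)
    return mean, var ** 0.5
```

The returned standard deviation combines all three sources, which is the quantity a single deterministic prediction hides.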
Integrating LIDAR and forest inventories to fill the trees outside forests data gap
Kristofer D. Johnson; Richard Birdsey; Jason Cole; Anu Swatantran; Jarlath O'Neil-Dunne; Ralph Dubayah; Andrew Lister
2015-01-01
Forest inventories are commonly used to estimate total tree biomass of forest land even though they are not traditionally designed to measure biomass of trees outside forests (TOF). The consequence may be an inaccurate representation of all of the aboveground biomass, which propagates error to the outputs of spatial and process models that rely on the inventory data....
Release behavior of uranium in uranium mill tailings under environmental conditions.
Liu, Bo; Peng, Tongjiang; Sun, Hongjuan; Yue, Huanjuan
2017-05-01
Uranium contamination is observed in sedimentary geochemical environments, but the geochemical and mineralogical processes that control uranium release from sediment are not fully understood. Identifying how sediments and water influence the release and migration of uranium is critical to improving the prevention of uranium contamination in soil and groundwater. To understand the process of uranium release and migration from uranium mill tailings under varying water chemistry conditions, tailing samples from northwest China were investigated with batch leaching experiments. Results showed that water played an important role in uranium release from the tailing minerals. The release was clearly influenced by contact time, liquid-solid ratio, particle size, and pH. Longer contact times, higher liquid contents, and extreme pH values all worked against the stabilization of uranium and accelerated its release from the tailing minerals into solution. pH was found to significantly influence the extent and mechanisms of uranium release from minerals to water. Uranium release was governed by a number of interacting processes, including dissolution of uranium-bearing minerals, uranium desorption from mineral surfaces, and formation of aqueous uranium complexes. Considering the impact of contact time, liquid-solid ratio, particle size, and pH on uranium release from uranium mill tailings, reducing the water content, decreasing the porosity of tailing dumps, and controlling the pH of tailings are the key measures for prevention and management of environmental pollution in areas near uranium mines.
Reliability analysis of magnetic logic interconnect wire subjected to magnet edge imperfections
NASA Astrophysics Data System (ADS)
Zhang, Bin; Yang, Xiaokuo; Liu, Jiahao; Li, Weiwei; Xu, Jie
2018-02-01
Nanomagnet logic (NML) devices have been proposed as one of the best candidates for the next generation of integrated circuits thanks to their substantial advantages of nonvolatility, radiation hardness, and potentially low power. In this article, errors in nanomagnetic interconnect wires subjected to magnet edge imperfections are evaluated for the purpose of reliable logic propagation. Missing-corner defects of nanomagnets in the wire are modeled as triangles, and interconnects fabricated with various magnetic materials are thoroughly investigated by micromagnetic simulations under different corner defect amplitudes and device spacings. The results show that as the defect amplitude increases, the success rate of logic propagation in the interconnect decreases. Among interconnect wires fabricated with three representative and frequently used NML materials, iron demonstrates the best defect tolerance; logic transmission errors can also be mitigated by adjusting the spacing between nanomagnets. These findings provide key technical guidance for designing reliable interconnects. Project supported by the National Natural Science Foundation of China (No. 61302022) and the Scientific Research Foundation for Postdoctor of Air Force Engineering University (Nos. 2015BSKYQD03, 2016KYMZ06).
NASA Astrophysics Data System (ADS)
Cisneros, Felipe; Veintimilla, Jaime
2013-04-01
The main aim of this research is to create an Artificial Neural Network (ANN) model that allows predicting the flow of the Tomebamba River both in real time and for a given day of the year. As inputs we use rainfall and flow data from the stations along the river, organized into scenarios, each prepared for a specific area. The data are acquired in real time from the hydrological stations in the watershed using a purpose-built electronic system that supports sensors of any kind or brand. The prediction performs well up to three days in advance. This research includes two ANN models: back propagation and a hybrid of back propagation and OWO-HWO, both of which were tested in a preliminary study. To validate the results we use several error indicators: MSE, RMSE, EF, CD, and BIAS. The results reached high levels of reliability with minimal error. These predictions are useful for flood and water quality control and management in the city of Cuenca, Ecuador.
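The error indicators listed (MSE, RMSE, EF, CD, BIAS) have standard definitions in hydrological model validation, though the paper may use slight variants. A sketch under that assumption:

```python
def error_indicators(obs, pred):
    """MSE, RMSE, EF (Nash-Sutcliffe efficiency), CD (coefficient of
    determination), and BIAS as commonly defined in hydrological model
    evaluation; the paper's exact variants are not given in this snippet."""
    n = len(obs)
    mean_obs = sum(obs) / n
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))  # sum of squared errors
    sst = sum((o - mean_obs) ** 2 for o in obs)         # total variability
    mse = sse / n
    return {
        "MSE": mse,
        "RMSE": mse ** 0.5,
        "EF": 1.0 - sse / sst,   # 1.0 means a perfect fit
        # CD compares observed variability to predicted variability
        # (undefined if all predictions equal the observed mean).
        "CD": sst / sum((p - mean_obs) ** 2 for p in pred),
        "BIAS": sum(p - o for o, p in zip(obs, pred)) / n,
    }
```

For a perfect prediction these reduce to MSE = RMSE = BIAS = 0 and EF = CD = 1, which makes the indicators easy to sanity-check.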
NASA Astrophysics Data System (ADS)
Lyu, Jiang-Tao; Zhou, Chen
2017-12-01
Ionospheric refraction is one of the principal error sources limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of the radar wave is the most important step in ionospheric refraction correction. Traditionally, ionospheric models or ionospheric detection instruments, such as ionosondes or GPS receivers, are employed to obtain the electron density. However, neither method satisfies the correction accuracy requirements of advanced space target radar systems. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are used to calculate the electron density integral exactly along the propagation path of the radar wave, which yields an accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated with a P-band radar located in midlatitude China. The experimental results show that this technique is more accurate than traditional ionospheric model correction. The technique proposed in this study is very promising for high-accuracy radar detection and tracking of objects in geospace.
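The abstract does not give the paper's exact formulation. Assuming the standard first-order ionospheric group delay, Δr = 40.3·TEC/f² (in meters, with TEC the electron density integral along the path), two simultaneous range measurements can be inverted for TEC and an ionosphere-free range as follows; this is a sketch of the general dual-frequency principle, not the authors' algorithm:

```python
K = 40.3  # m^3/s^2, first-order ionospheric delay constant

def dual_freq_correction(r1, r2, f1, f2):
    """Given measured ranges r1, r2 (m) at frequencies f1, f2 (Hz),
    return (TEC along the path in electrons/m^2, ionosphere-free range).
    Assumes the first-order model r_i = r + K * TEC / f_i**2."""
    tec = (r1 - r2) / (K * (1.0 / f1**2 - 1.0 / f2**2))
    r_free = (f1**2 * r1 - f2**2 * r2) / (f1**2 - f2**2)
    return tec, r_free
```

Note that closely spaced frequencies make the denominator small, so measurement noise on r1 - r2 is strongly amplified; that trade-off is part of why a careful formulation matters.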
Phase Retrieval for Radio Telescope and Antenna Control
NASA Technical Reports Server (NTRS)
Dean, Bruce
2011-01-01
Phase retrieval is a general term used in optics to describe the estimation of optical imperfections, or "aberrations." The purpose of this innovation is to apply phase retrieval to radio telescope and antenna control in the millimeter-wave band. Unlike earlier techniques, this application of iterative-transform phase retrieval approximates the incoherent subtraction process as a coherent propagation; this approximation reduces the noise in the data and allows a straightforward application of conventional phase-retrieval techniques to radio telescope and antenna control. Thus, for systems utilizing both positive and negative polarity feeds, the approximation allows both surface and alignment errors to be assessed without the use of additional hardware or laser metrology. Knowledge of the antenna surface profile allows errors to be corrected at a given surface temperature and observing angle. In addition to imperfections of the antenna surface figure, the misalignment of multiple antennas operating in unison can reduce or degrade the signal-to-noise ratio of the received or broadcast signals. This technique therefore also has application to the alignment of antenna array configurations.
A Framework for Estimating Stratospheric Wind Speeds from Infrasound Noise
NASA Astrophysics Data System (ADS)
Arrowsmith, S.; Marcillo, O. E.
2012-12-01
We present a methodology for infrasonic remote sensing of winds in the stratosphere that does not require discrete ground-truth events. Our method uses measured time delays between arrays of sensors to provide group velocities and then minimizes the difference between observed and predicted group velocities. Because we focus on inter-array propagation effects, it is not necessary to simulate the full propagation path from source to receiver. This feature allows us to use a relatively simple forward model that is applicable over short to regional distances. By focusing on stratospheric returns, we show that our nonlinear inversion scheme converges much better if the starting model contains a strong stratospheric duct. Using the HWM/MSISE model, we demonstrate that the inversion scheme is robust to large uncertainties in backazimuth, but that uncertainties in the measured trace velocity and group velocity should be controlled through the addition of adjoint constraints. Using realistic estimates of measurement error, our results show that the inversion scheme will nevertheless improve upon a starting model under most scenarios for the 9-array Utah infrasound network. Future research should investigate the effects of model error associated with these measurements.
Estimating Effects of Multipath Propagation on GPS Signals
NASA Technical Reports Server (NTRS)
Byun, Sung; Hajj, George; Young, Lawrence
2005-01-01
Multipath Simulator Taking into Account Reflection and Diffraction (MUSTARD) is a computer program that simulates effects of multipath propagation on received Global Positioning System (GPS) signals. MUSTARD is a very efficient means of estimating multipath-induced position and phase errors as functions of time, given the positions and orientations of GPS satellites, the GPS receiver, and any structures near the receiver as functions of time. MUSTARD traces each signal from a GPS satellite to the receiver, accounting for all possible paths the signal can take, including all paths that include reflection and/or diffraction from surfaces of structures near the receiver and on the satellite. Reflection and diffraction are modeled by use of the geometrical theory of diffraction. The multipath signals are added to the direct signal after accounting for the gain of the receiving antenna. Then, in a simulation of a delay-lock tracking loop in the receiver, the multipath-induced range and phase errors as measured by the receiver are estimated. All of these computations are performed for both right circular polarization and left circular polarization of both the L1 (1.57542-GHz) and L2 (1.2276-GHz) GPS signals.
Propagation of the velocity model uncertainties to the seismic event location
NASA Astrophysics Data System (ADS)
Gesret, A.; Desassis, N.; Noble, M.; Romary, T.; Maisons, C.
2015-01-01
Earthquake hypocentre locations are crucial in many domains of application (academic and industrial), as seismic event location maps are commonly used to delineate faults or fractures. The interpretation of these maps depends on location accuracy and on the reliability of the associated uncertainties. The largest contribution to location and uncertainty errors comes from velocity model errors, which are usually not correctly taken into account. We propose a new Bayesian formulation that properly integrates knowledge of the velocity model into the formulation of the probabilistic earthquake location. In this work, the velocity model uncertainties are first estimated with a Bayesian tomography of active shot data. We implement a Monte Carlo sampling algorithm to generate velocity models distributed according to the posterior distribution. In a second step, we propagate the velocity model uncertainties to the seismic event location in a probabilistic framework. This enables more reliable hypocentre locations to be obtained, together with associated uncertainties that account for both picking and velocity model uncertainties. We illustrate the tomography results and the gain in accuracy of earthquake location for two synthetic examples and one real data case study in the context of induced microseismicity.
Damage identification in beams using speckle shearography and an optimal spatial sampling
NASA Astrophysics Data System (ADS)
Mininni, M.; Gabriele, S.; Lopes, H.; Araújo dos Santos, J. V.
2016-10-01
Over the years, the derivatives of modal displacement and rotation fields have been used to localize damage in beams. Usually, the derivatives are computed by applying finite differences. The finite differences propagate and amplify the errors that exist in real measurements, and thus, it is necessary to minimize this problem in order to get reliable damage localizations. A way to decrease the propagation and amplification of the errors is to select an optimal spatial sampling. This paper presents a technique where an optimal spatial sampling of modal rotation fields is computed and used to obtain the modal curvatures. Experimental measurements of modal rotation fields of a beam with single and multiple damages are obtained with shearography, which is an optical technique allowing the measurement of full-fields. These measurements are used to test the validity of the optimal sampling technique for the improvement of damage localization in real structures. An investigation on the ability of a model updating technique to quantify the damage is also reported. The model updating technique is defined by the variations of measured natural frequencies and measured modal rotations and aims at calibrating the values of the second moment of area in the damaged areas, which were previously localized.
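The amplification the authors describe comes from dividing measurement noise by h². A minimal illustration of the central second difference used to obtain curvatures from rotation or displacement fields (the spacing choice is exactly what the optimal-sampling technique tunes; this sketch is generic, not the paper's method):

```python
def second_diff(y, h):
    """Central second difference: the standard curvature estimate from a
    sampled modal field y with uniform spacing h. Independent measurement
    noise of standard deviation eps on y produces curvature noise of
    roughly sqrt(6)*eps/h**2, so a larger spacing damps amplification at
    the cost of spatial resolution."""
    return [(y[i - 1] - 2 * y[i] + y[i + 1]) / h**2
            for i in range(1, len(y) - 1)]
```

On an exact quadratic field the estimate is exact, so any deviation seen with real shearography data is attributable to measurement noise and the chosen sampling.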
Fast Video Encryption Using the H.264 Error Propagation Property for Smart Mobile Devices
Chung, Yongwha; Lee, Sungju; Jeon, Taewoong; Park, Daihee
2015-01-01
In transmitting video data securely over Video Sensor Networks (VSNs), since mobile handheld devices have limited resources in terms of processor clock speed and battery size, it is necessary to develop an efficient method to encrypt video data to meet the increasing demand for secure connections. Selective encryption methods can reduce the amount of computation needed while satisfying high-level security requirements. This is achieved by selecting an important part of the video data and encrypting it. In this paper, to ensure format compliance and security, we propose a special encryption method for H.264, which encrypts only the DC/ACs of I-macroblocks and the motion vectors of P-macroblocks. In particular, the proposed new selective encryption method exploits the error propagation property in an H.264 decoder and improves the collective performance by analyzing the tradeoff between the visual security level and the processing speed compared to typical selective encryption methods (i.e., I-frame, P-frame encryption, and combined I-/P-frame encryption). Experimental results show that the proposed method can significantly reduce the encryption workload without any significant degradation of visual security. PMID:25850068
Automatic 3D segmentation of spinal cord MRI using propagated deformable models
NASA Astrophysics Data System (ADS)
De Leener, B.; Cohen-Adad, J.; Kadoury, S.
2014-03-01
Spinal cord diseases or injuries can cause dysfunction of the sensory and locomotor systems. Segmentation of the spinal cord provides measures of atrophy and allows group analysis of multi-parametric MRI via inter-subject registration to a template. All these measures were shown to improve diagnosis and surgical intervention. We developed a framework to automatically segment the spinal cord on T2-weighted MR images, based on the propagation of a deformable model. The algorithm is divided into three parts: first, an initialization step detects the spinal cord position and orientation by applying the elliptical Hough transform to multiple adjacent axial slices to produce an initial tubular mesh. Second, a low-resolution deformable model is iteratively propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a contrast adaptation at each iteration. Third, a refinement process and a global deformation are applied on the low-resolution mesh to provide an accurate segmentation of the spinal cord. Our method was evaluated against a semi-automatic edge-based snake method implemented in ITK-SNAP (with heavy manual adjustment) by computing the 3D Dice coefficient and the mean and maximum distance errors. Accuracy and robustness were assessed on 8 healthy subjects, each with two volumes: one at the cervical and one at the thoracolumbar region. Results show a precision of 0.30 ± 0.05 mm (mean absolute distance error) in the cervical region and 0.27 ± 0.06 mm in the thoracolumbar region. The 3D Dice coefficient was 0.93 for both regions.
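The 3D Dice similarity coefficient used in the evaluation has a standard definition, DSC = 2|A∩B| / (|A| + |B|). A minimal sketch over voxel-coordinate sets (the set representation is an illustrative choice, not the paper's implementation):

```python
def dice_coefficient(a, b):
    """3D Dice similarity between two binary segmentations given as
    iterables of voxel coordinates: DSC = 2|A & B| / (|A| + |B|).
    Returns 1.0 for identical masks, 0.0 for disjoint ones."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty segmentations agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))
```

A DSC of 0.93, as reported, means the automatic and reference masks overlap on the large majority of voxels relative to their combined size.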
Zimbelman, Eloise G; Keefe, Robert F
2018-01-01
Real-time positioning on mobile devices using global navigation satellite system (GNSS) technology paired with radio frequency (RF) transmission (GNSS-RF) may help to improve safety on logging operations by increasing situational awareness. However, GNSS positional accuracy for ground workers in motion may be reduced by multipath error, satellite signal obstruction, or other factors. Radio propagation of GNSS locations may also be impacted due to line-of-sight (LOS) obstruction in remote, forested areas. The objective of this study was to characterize the effects of forest stand characteristics, topography, and other LOS obstructions on the GNSS accuracy and radio signal propagation quality of multiple Raveon Atlas PT GNSS-RF transponders functioning as a network in a range of forest conditions. Because most previous research with GNSS in forestry has focused on stationary units, we chose to analyze units in motion by evaluating the time-to-signal accuracy of geofence crossings in 21 randomly-selected stands on the University of Idaho Experimental Forest. Specifically, we studied the effects of forest stand characteristics, topography, and LOS obstructions on (1) the odds of missed GNSS-RF signals, (2) the root mean squared error (RMSE) of Atlas PTs, and (3) the time-to-signal accuracy of safety geofence crossings in forested environments. Mixed-effects models used to analyze the data showed that stand characteristics, topography, and obstructions in the LOS affected the odds of missed radio signals while stand variables alone affected RMSE. Both stand characteristics and topography affected the accuracy of geofence alerts.
PMID:29324794
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopal, A; Xu, H; Chen, S
Purpose: To compare the contour propagation accuracy of two deformable image registration (DIR) algorithms in the Raystation treatment planning system: the "Hybrid" algorithm, based on image intensities and anatomical information, and the "Biomechanical" algorithm, based on linear anatomical elasticity and finite element modeling. Methods: Both DIR algorithms were used for CT-to-CT deformation for 20 lung radiation therapy patients that underwent treatment plan revisions. Deformation accuracy was evaluated using landmark tracking to measure the target registration error (TRE) and inverse consistency error (ICE). The deformed contours were also evaluated against physician-drawn contours using Dice similarity coefficients (DSC). Contour propagation was qualitatively assessed using a visual quality score assigned by physicians, and a refinement quality score (0 [...] > 0.9 for lungs, > 0.85 for heart, > 0.8 for liver) and similar qualitative assessments (VQS < 0.35, RQS > 0.75 for lungs). When anatomical structures were used to control the deformation, the DSC improved more significantly for the biomechanical DIR compared to the hybrid DIR, while the VQS and RQS improved only for the controlling structures. However, while the inclusion of controlling structures improved the TRE for the hybrid DIR, it increased the TRE for the biomechanical DIR. Conclusion: The hybrid DIR was found to perform slightly better than the biomechanical DIR based on lower TRE, while the DSC, VQS, and RQS studies yielded comparable results for both. The use of controlling structures showed considerable improvement in the hybrid DIR results and is recommended for clinical use in contour propagation.
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, C_E, of measurement errors to estimate the negative log-likelihood function common to all the model selection criteria. The problem can be resolved by using the covariance matrix, C_Ek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown C_Ek from the residuals during model calibration. The inferred C_Ek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using C_Ek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using C_Ek obtained from the iterative two-stage method also improved the predictive performance of the individual models and of model averaging in both the synthetic and experimental studies.
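The "overwhelming weight" behavior follows directly from the usual information-criterion weighting formula, w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2) with Δ_i = IC_i - min(IC); the paper's contribution changes how the likelihood inside each IC is evaluated, not this formula. A sketch of the standard weighting:

```python
import math

def ic_weights(ic_values):
    """Model-averaging weights from information criterion values
    (AIC/AICc/BIC/KIC-style): w_i = exp(-delta_i/2) / sum_j exp(-delta_j/2),
    where delta_i = IC_i - min(IC). Illustrates the abstract's point that
    even modest IC differences drive one weight toward 100%."""
    ic_min = min(ic_values)
    unnorm = [math.exp(-(ic - ic_min) / 2.0) for ic in ic_values]
    total = sum(unnorm)
    return [u / total for u in unnorm]
```

With IC values of 100, 110, and 120, the first model already receives over 99% of the weight, which is the unrealistic concentration the iterative two-stage method is designed to correct.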
Zhang, Tao; Shi, Hongfei; Chen, Liping; Li, Yao; Tong, Jinwu
2016-03-11
This paper investigates an AUV (Autonomous Underwater Vehicle) positioning method based on a SINS (Strapdown Inertial Navigation System)/LBL (Long Base Line) tightly coupled algorithm. The algorithm mainly comprises a SINS-assisted method for searching the optimum slant range among underwater acoustic propagation multipaths, a SINS/LBL tightly coupled model, and a multi-sensor information fusion algorithm. The fuzzy correlation peak problem of underwater LBL acoustic propagation multipath can be solved using SINS position information, thus improving LBL positional accuracy. Moreover, the introduction of SINS-centered LBL locating information can compensate the accumulated AUV position error effectively and regularly. Compared to a loosely coupled algorithm, this tightly coupled algorithm can still provide accurate location information when there are fewer than four available hydrophones (or within the signal receiving range). Therefore, the effective positional calibration area of the tightly coupled system based on the LBL array is wider, and the system has higher reliability and fault tolerance than the loosely coupled one. It is thus more applicable to AUV positioning based on SINS/LBL.
PMID:26978361
A DERATING METHOD FOR THERAPEUTIC APPLICATIONS OF HIGH INTENSITY FOCUSED ULTRASOUND
Bessonova, O.V.; Khokhlova, V.A.; Canney, M.S.; Bailey, M.R.; Crum, L.A.
2010-01-01
Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. In this work, a new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates that, for the high-gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue. PMID:20582159
A derating method for therapeutic applications of high intensity focused ultrasound
NASA Astrophysics Data System (ADS)
Bessonova, O. V.; Khokhlova, V. A.; Canney, M. S.; Bailey, M. R.; Crum, L. A.
2010-05-01
Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. A new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates that, for the high-gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue.