Quantum error correction of continuous-variable states against Gaussian noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ralph, T. C.
2011-08-15
We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important for quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor that increases with the code distance.
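The abstract above names a Gaussian process algorithm for estimating and predicting error rates from past error-correction data; the following is a minimal sketch of that idea, not the authors' implementation. The kernel choice, hyperparameters, and the synthetic per-round error-rate series are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(t1, t2, length=50.0):
    # Squared-exponential covariance between two sets of QEC-round indices.
    d = t1[:, None] - t2[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(t_obs, y_obs, t_new, noise_var):
    # Standard GP regression: posterior mean/variance of the latent error rate.
    K = rbf_kernel(t_obs, t_obs) + noise_var * np.eye(len(t_obs))
    Ks = rbf_kernel(t_new, t_obs)
    alpha = np.linalg.solve(K, y_obs - y_obs.mean())
    mean = y_obs.mean() + Ks @ alpha
    var = rbf_kernel(t_new, t_new).diagonal() - np.einsum(
        "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, np.maximum(var, 0.0)

# Illustrative use: noisy per-round error-rate estimates derived from syndrome data.
rng = np.random.default_rng(0)
t_obs = np.arange(0.0, 200.0, 5.0)
true_rate = 0.005 + 0.002 * np.sin(t_obs / 40.0)          # slowly drifting rate
y_obs = true_rate + 0.0005 * rng.standard_normal(t_obs.size)
t_new = np.arange(200.0, 260.0, 5.0)                      # predict upcoming rounds
rate_pred, rate_var = gp_predict(t_obs, y_obs, t_new, noise_var=0.0005**2)
```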
Impact of Non-Gaussian Error Volumes on Conjunction Assessment Risk Analysis
NASA Technical Reports Server (NTRS)
Ghrist, Richard W.; Plakalovic, Dragan
2012-01-01
An understanding of how an initially Gaussian error volume becomes non-Gaussian over time is an important consideration for space-vehicle conjunction assessment. Traditional assumptions applied to the error volume artificially suppress the true non-Gaussian nature of the space-vehicle position uncertainties. For typical conjunction assessment objects, representation of the error volume by a state error covariance matrix in a Cartesian reference frame is a more significant limitation than is the assumption of linearized dynamics for propagating the error volume. In this study, the impact of each assumption is examined and isolated for each point in the volume. Limitations arising from representing the error volume in a Cartesian reference frame are corrected by employing a Monte Carlo approach to the probability of collision (Pc), using equinoctial samples from the Cartesian position covariance at the time of closest approach (TCA) between the pair of space objects. A set of actual, higher-risk (Pc >= 10^-4) conjunction events in various low-Earth orbits is analyzed using Monte Carlo methods. The impact of non-Gaussian error volumes on Pc for these cases is minimal, even when the deviation from a Gaussian distribution is significant.
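The Monte Carlo probability-of-collision (Pc) calculation described above can be sketched as follows. This is a simplified illustration that samples the relative position directly from a Gaussian covariance in Cartesian coordinates, whereas the study draws equinoctial samples from the covariance at TCA; the mean miss distance, covariance, and hard-body radius below are placeholders.

```python
import numpy as np

def monte_carlo_pc(rel_mean, rel_cov, hard_body_radius, n_samples=1_000_000, seed=1):
    """Estimate probability of collision at TCA by direct Monte Carlo sampling.

    rel_mean : mean relative position of the two objects at TCA (3,) [m]
    rel_cov  : combined 3x3 relative position covariance at TCA [m^2]
    """
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(rel_mean, rel_cov, size=n_samples)
    miss = np.linalg.norm(samples, axis=1)
    hits = np.count_nonzero(miss < hard_body_radius)
    pc = hits / n_samples
    stderr = np.sqrt(pc * (1.0 - pc) / n_samples)   # binomial standard error
    return pc, stderr

# Placeholder numbers for illustration only (not from the study).
rel_mean = np.array([120.0, -40.0, 60.0])             # m
rel_cov = np.diag([200.0**2, 80.0**2, 50.0**2])       # m^2
pc, err = monte_carlo_pc(rel_mean, rel_cov, hard_body_radius=20.0)
```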
Quantum Error Correction with a Globally-Coupled Array of Neutral Atom Qubits
2013-02-01
An array of neutral atom qubits held in optical traps (a Gaussian beam array and bottle-beam traps, loaded from a magneto-optical trap located at the center of the science cell, with EMCCD fluorescence readout) was developed and implemented for studies of quantum error correction over the three-year effort.
Position Error Covariance Matrix Validation and Correction
NASA Technical Reports Server (NTRS)
Frisbee, Joe, Jr.
2016-01-01
In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, the Gaussian variable being zero mean (which is de facto the case), tend to have small magnitudes and are easily disturbed by noise. Transmission over longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows fractional bit-per-sample accuracy, which may be needed at very low SNR, where the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples, and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over the public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can then be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction; doing this for different subsets leads to an Unequal Error Protection (UEP) coding paradigm.
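A minimal sketch of the Permutation Modulation step described above: the nearest PM codeword to a vector of magnitudes is obtained by rank-matching against a fixed initial vector, and the permutation itself is the label exchanged over the public channel. The initial vector used here is an arbitrary placeholder; the paper's construction of it and the subsequent FEC/UEP stages are not reproduced.

```python
import numpy as np

def pm_encode(x, init_vector):
    """Quantize a real vector with Permutation Modulation (Slepian).

    The nearest codeword among all permutations of `init_vector` assigns the
    largest entries of the initial vector to the largest entries of x; the
    label to be communicated is the permutation (here, the rank vector).
    """
    ranks = np.argsort(np.argsort(x))          # rank of each sample (0 = smallest)
    codeword = np.sort(init_vector)[ranks]     # permute initial vector to match ranks
    return codeword, ranks

# Illustrative use on the magnitudes of Gaussian samples.
rng = np.random.default_rng(2)
samples = rng.standard_normal(16)
magnitudes = np.abs(samples)
init_vector = np.linspace(0.1, 2.0, magnitudes.size)   # hypothetical initial vector
codeword, label = pm_encode(magnitudes, init_vector)
signs = np.sign(samples)                       # signs are reconciled separately with FEC
```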
An investigation of error characteristics and coding performance
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.
1992-01-01
The performance of forward error correcting coding schemes on errors anticipated for the Earth Observation System (EOS) Ku-band downlink is studied. The EOS transmits picture frame data to the ground via the Tracking and Data Relay Satellite System (TDRSS) to a ground-based receiver at White Sands. Due to unintentional RF interference from other systems operating in the Ku band, the noise at the receiver is non-Gaussian, which may result in non-random errors at the demodulator output; that is, the downlink channel cannot be modeled as a simple memoryless Gaussian-noise channel. From previous experience, it is believed that these errors are bursty. The research proceeded by developing a computer-based simulation, called Communication Link Error ANalysis (CLEAN), to model the downlink errors, the forward error correcting schemes, and the interleavers used with TDRSS. To date, the bulk of CLEAN has been written, documented, debugged, and verified. The procedures for utilizing CLEAN to investigate code performance were established and are discussed.
Power Measurement Errors on a Utility Aircraft
NASA Technical Reports Server (NTRS)
Bousman, William G.
2002-01-01
Extensive flight test data obtained from two recent performance tests of a UH-60A aircraft are reviewed. A power difference is calculated from the power balance equation and is used to examine power measurement errors. It is shown that the baseline measurement errors are highly non-Gaussian in their frequency distribution and are therefore influenced by additional, unquantified variables. Linear regression is used to examine the influence of other variables, and it is shown that a substantial portion of the variance depends upon measurements of atmospheric parameters. Correcting for temperature dependence, although reducing the variance in the measurement errors, still leaves unquantified effects. Examination of the power difference over individual test runs indicates significant errors from drift, although it is unclear how these may be corrected. In an idealized case, where the drift is correctable, it is shown that the power measurement errors are significantly reduced and the error distribution is Gaussian. A new flight test program is recommended that will quantify the thermal environment for all torque measurements on the UH-60. Subsequently, the torque measurement systems will be recalibrated based on the measured thermal environment and a new power measurement assessment performed.
Song, Jong-Won; Hirao, Kimihiko
2015-10-14
Since the advent of the hybrid functional in 1993, it has become a principal quantum chemical tool for the calculation of energies and properties of molecular systems. Following the introduction of the long-range corrected hybrid scheme for density functional theory a decade later, the applicability of hybrid functionals has been further amplified due to the resulting improved performance on orbital energies, excitation energies, non-linear optical properties, barrier heights, and so on. Nevertheless, the high cost associated with the evaluation of Hartree-Fock (HF) exchange integrals remains a bottleneck for the broader and more active application of hybrid functionals to large molecular and periodic systems. Here, we propose a very simple yet efficient method for the computation of the long-range corrected hybrid scheme. It uses a modified two-Gaussian attenuating operator instead of the error function for the long-range HF exchange integral. As a result, the two-Gaussian HF operator, which mimics the shape of the error-function operator, reduces computational time dramatically (e.g., about 14-fold acceleration in a carbon diamond calculation using periodic boundary conditions) and enables lower scaling with system size, while maintaining the improved features of long-range corrected density functional theory.
NASA Technical Reports Server (NTRS)
Crozier, Stewart N.
1990-01-01
Random access signaling, which allows slotted packets to spill over into adjacent slots, is investigated. It is shown that sloppy-slotted ALOHA can always provide higher throughput than conventional slotted ALOHA. The degree of improvement depends on the timing error distribution. Throughput performance is presented for Gaussian timing error distributions, modified to include timing error corrections. A general channel capacity lower bound, independent of the specific timing error distribution, is also presented.
A 1400-MHz survey of 1478 Abell clusters of galaxies
NASA Technical Reports Server (NTRS)
Owen, F. N.; White, R. A.; Hilldrup, K. C.; Hanisch, R. J.
1982-01-01
Observations of 1478 Abell clusters of galaxies with the NRAO 91-m telescope at 1400 MHz are reported. The measured beam shape was deconvolved from the measured source Gaussian fits in order to estimate the source size and position angle. All detected sources within 0.5 corrected Abell cluster radii are listed, including the cluster number, richness class, distance class, magnitude of the tenth brightest galaxy, redshift estimate, corrected cluster radius in arcmin, right ascension and error, declination and error, total flux density and error, and angular structure for each source.
On the performance of large Gaussian basis sets for the computation of total atomization energies
NASA Technical Reports Server (NTRS)
Martin, J. M. L.
1992-01-01
The total atomization energies of a number of molecules have been computed using an augmented coupled-cluster method and (5s4p3d2f1g) and (4s3p2d1f) atomic natural orbital (ANO) basis sets, as well as the correlation consistent valence triple zeta plus polarization (cc-pVTZ) and correlation consistent valence quadruple zeta plus polarization (cc-pVQZ) basis sets. The performance of the ANO and correlation consistent basis sets is comparable throughout, although the latter can result in significant CPU time savings. Whereas the inclusion of g functions has significant effects on the computed total atomization energies (Sigma D(e)), chemical accuracy is still not reached for molecules involving multiple bonds. A Gaussian-1 (G1) type correction lowers the error, but not much beyond the accuracy of the G1 model itself. Using separate corrections for sigma bonds, pi bonds, and valence pairs brings the mean absolute error down to less than 1 kcal/mol for the spdf basis sets, and to about 0.5 kcal/mol for the spdfg basis sets. Some conclusions on the success of the Gaussian-1 and Gaussian-2 models are drawn.
Fault-tolerant measurement-based quantum computing with continuous-variable cluster states.
Menicucci, Nicolas C
2014-03-28
A long-standing open question about Gaussian continuous-variable cluster states is whether they enable fault-tolerant measurement-based quantum computation. The answer is yes. Initial squeezing in the cluster above a threshold value of 20.5 dB ensures that errors from finite squeezing acting on encoded qubits are below the fault-tolerance threshold of known qubit-based error-correcting codes. By concatenating with one of these codes and using ancilla-based error correction, fault-tolerant measurement-based quantum computation of theoretically indefinite length is possible with finitely squeezed cluster states.
An investigation of error correcting techniques for OMV and AXAF
NASA Technical Reports Server (NTRS)
Ingels, Frank; Fryer, John
1991-01-01
The original objectives of this project were to build a test system for the NASA 255/223 Reed-Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with a Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and numbers of uncorrectable errors were calculated for each data set before testing.
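A small sketch of the kind of error-pattern generation described above, with gaps drawn as exponential inter-arrival times (i.e., a Poisson error-arrival process, one plausible reading of the "Poisson time distribution between errors") and Gaussian-distributed burst lengths. Parameter values are arbitrary placeholders, and the original test apparatus and chip set are not modeled.

```python
import numpy as np

def burst_error_pattern(n_bits, mean_gap_bits=2000.0, burst_mean=8.0,
                        burst_std=3.0, seed=3):
    """Generate a bit-error mask with exponentially distributed gaps between
    error bursts and Gaussian-distributed burst lengths."""
    rng = np.random.default_rng(seed)
    errors = np.zeros(n_bits, dtype=bool)
    pos = 0
    while True:
        pos += int(rng.exponential(mean_gap_bits)) + 1     # gap to next burst
        if pos >= n_bits:
            break
        length = max(1, int(round(rng.normal(burst_mean, burst_std))))
        errors[pos:pos + length] = True                    # flip a run of bits
        pos += length
    return errors

# Inject the pattern into a data block before feeding it to a decoder model.
data = np.random.default_rng(4).integers(0, 2, size=255 * 8, dtype=np.uint8)
mask = burst_error_pattern(data.size)
corrupted = data ^ mask.astype(np.uint8)
n_errors = int(mask.sum())
```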
Robust radio interferometric calibration using the t-distribution
NASA Astrophysics Data System (ADS)
Kazemi, S.; Yatawatta, S.
2013-10-01
A major stage of radio interferometric data processing is calibration, or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration, such as the loss in flux or coherence and the appearance of spurious sources, can be attributed to deviations from the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t noise can be regarded as Gaussian noise whose variance is itself an unknown random quantity. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization does not directly extend to the case of a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either (ECME) algorithm, for the Student's t-noise model and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
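To illustrate the robustness mechanism, here is a minimal sketch of an EM-style, iteratively reweighted least-squares fit under a Student's t noise model for a plain linear problem. The actual calibration uses the ECME algorithm with Levenberg-Marquardt on the nonlinear radio interferometric measurement equation; the linear model, degrees of freedom, and data below are illustrative assumptions.

```python
import numpy as np

def robust_t_fit(A, y, nu=3.0, n_iter=30):
    """Fit y ~ A @ theta under a Student's t noise model via EM-style
    iteratively reweighted least squares (a linear stand-in for the
    nonlinear calibration problem)."""
    theta = np.linalg.lstsq(A, y, rcond=None)[0]      # Gaussian (LS) start
    sigma2 = np.var(y - A @ theta)
    for _ in range(n_iter):
        r = y - A @ theta
        w = (nu + 1.0) / (nu + r**2 / sigma2)         # E-step: latent scale weights
        Aw = A * w[:, None]
        theta = np.linalg.solve(A.T @ Aw, Aw.T @ y)   # M-step: weighted LS
        sigma2 = np.sum(w * r**2) / len(y)
    return theta

# Outlier-contaminated data: the robust fit is far less affected than plain LS.
rng = np.random.default_rng(5)
x = np.linspace(0, 1, 200)
A = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x + 0.05 * rng.standard_normal(x.size)
y[::25] += 3.0                                        # interference-like outliers
theta_robust = robust_t_fit(A, y)
```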
Continuous-variable quantum-key-distribution protocols with a non-Gaussian modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leverrier, Anthony; Grangier, Philippe; Laboratoire Charles Fabry, Institut d'Optique, CNRS, Univ. Paris-Sud, Campus Polytechnique, RD 128, F-91127 Palaiseau Cedex
2011-04-15
In this paper, we consider continuous-variable quantum-key-distribution (QKD) protocols which use non-Gaussian modulations. These specific modulation schemes are compatible with very efficient error-correction procedures, hence allowing the protocols to outperform previous protocols in terms of achievable range. In their simplest implementation, these protocols are secure for any linear quantum channels (hence against Gaussian attacks). We also show how the use of decoy states makes the protocols secure against arbitrary collective attacks, which implies their unconditional security in the asymptotic limit.
Increasing accuracy of dispersal kernels in grid-based population models
Slone, D.H.
2011-01-01
Dispersal kernels in grid-based population models specify the proportion, distance, and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10^-11 and invasion time error to <5%.
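A minimal sketch of the two discretization methods compared above, for a 1-D Gaussian kernel (the circular 2-D Gaussian is separable, so the 1-D case carries the idea). Grid spacing is taken as one cell; the invasion model and the goal-seeking correction routine are not reproduced.

```python
import numpy as np
from scipy.special import erf

def gaussian_kernel_1d(sigma, radius_cells, method="integrate"):
    """Discretize a 1-D Gaussian onto unit grid cells.

    'center'    : sample the density at each cell center.
    'integrate' : integrate the density over each cell with the error function,
                  which is far more accurate when sigma is small relative to a cell.
    """
    edges = np.arange(-radius_cells - 0.5, radius_cells + 1.5)
    centers = 0.5 * (edges[:-1] + edges[1:])
    if method == "center":
        k = np.exp(-0.5 * (centers / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    else:
        cdf = 0.5 * (1 + erf(edges / (sigma * np.sqrt(2))))
        k = np.diff(cdf)                       # exact cell-integrated mass
    return k / k.sum()                         # renormalize the truncated kernel

# For sigma well below one cell the two discretizations differ strongly.
for sigma in (0.15, 1.0, 10.0):
    k_int = gaussian_kernel_1d(sigma, radius_cells=25, method="integrate")
    k_cen = gaussian_kernel_1d(sigma, radius_cells=25, method="center")
    print(sigma, np.abs(k_int - k_cen).max())
```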
Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Sathian, K
2018-02-01
In a recent study, Eklund et al. employed resting-state functional magnetic resonance imaging (fMRI) data as a surrogate for null fMRI datasets and posited that cluster-wise family-wise error (FWE) rate-corrected inferences made by using parametric statistical methods in fMRI studies over the past two decades may have been invalid, particularly for cluster-defining thresholds less stringent than p < 0.001; this was principally because the spatial autocorrelation functions (sACF) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggested otherwise. Here, we show that accounting for non-Gaussian signal components, such as those arising from resting-state neural activity as well as physiological responses and motion artifacts in the null fMRI datasets, yields first- and second-level general linear model analysis residuals with nearly uniform and Gaussian sACF. Further comparison with nonparametric permutation tests indicates that cluster-based FWE-corrected inferences made with Gaussian spatial noise approximations are valid.
Comparison of wheat classification accuracy using different classifiers of the image-100 system
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.
1981-01-01
Classification results using the single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. The conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains the various cover types of the study area; classification accuracy should be evaluated considering both the percentage of correct classification and the error of commission; supervised classification approaches are better than K-means clustering; the Gaussian maximum-likelihood classifier is better than the single-cell and multi-cell signature acquisition options of the Image-100 system; and, in order to obtain a high classification accuracy in a large and heterogeneous crop area using the Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive the training statistics.
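A minimal sketch of a point-by-point Gaussian maximum-likelihood classifier of the kind compared above: per-class means and covariances are estimated from training pixels, and each pixel is assigned to the class with the highest Gaussian log-likelihood. The band values and class statistics below are hypothetical, not Image-100 data.

```python
import numpy as np

def train_gaussian_ml(X, y):
    """Per-class mean and covariance for a Gaussian maximum-likelihood classifier."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return stats

def classify_gaussian_ml(X, stats):
    """Assign each pixel vector to the class with the largest Gaussian log-likelihood."""
    classes = sorted(stats)
    scores = []
    for c in classes:
        mu, cov = stats[c]
        inv = np.linalg.inv(cov)
        d = X - mu
        maha = np.einsum("ij,jk,ik->i", d, inv, d)     # Mahalanobis distances
        scores.append(-0.5 * (maha + np.log(np.linalg.det(cov))))
    return np.array(classes)[np.argmax(scores, axis=0)]

# Illustrative 4-band pixel samples for two hypothetical cover classes.
rng = np.random.default_rng(6)
wheat = rng.multivariate_normal([40, 55, 70, 90], np.diag([9, 9, 16, 25]), 300)
other = rng.multivariate_normal([30, 45, 60, 60], np.diag([9, 16, 16, 36]), 300)
X = np.vstack([wheat, other]); y = np.array([0] * 300 + [1] * 300)
stats = train_gaussian_ml(X, y)
pred = classify_gaussian_ml(X, stats)
accuracy = (pred == y).mean()
```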
Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes
NASA Astrophysics Data System (ADS)
Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin
2014-05-01
We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change and reliably compare tide-gauge compilations with proxy reconstructions it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression) resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, where a Gaussian process prior is placed on the rate of sea-level change and the data itself is modeled as the integral of this rate process. The non-parametric Gaussian process model is known to be well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows for the direct estimation of the derivative of a one dimensional curve. The derivative at a particular time point will be representative of the rate of sea level change at that time point. The tide gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. As a result of this, the integrated Gaussian process model is set in an errors-in-variables (EIV) framework so as to take account of this temporal uncertainty. The data must be corrected for land-level change known as glacio-isostatic adjustment (GIA) as it is important to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea level observations into the model. The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all available sources of uncertainty in tide-gauge and proxy-reconstruction data. Our response variable is sea level after correction for GIA. By embedding the integrated process in an errors-in-variables (EIV) framework, and removing the estimate of GIA, we can quantify rates with better estimates of uncertainty than previously possible. The model provides a flexible fit and enables us to estimate rates of change at any given time point, thus observing how rates have been evolving from the past to present day.
New GRACE-Derived Storage Change Estimates Using Empirical Mode Extraction
NASA Astrophysics Data System (ADS)
Aierken, A.; Lee, H.; Yu, H.; Ate, P.; Hossain, F.; Basnayake, S. B.; Jayasinghe, S.; Saah, D. S.; Shum, C. K.
2017-12-01
Estimated mass changes from GRACE spherical harmonic solutions have north/south stripes and east/west banded errors due to random noise and modeling errors. Low-pass filters such as decorrelation and Gaussian smoothing are typically applied to reduce noise and errors. However, these filters introduce leakage errors that need to be addressed. GRACE mascon estimates (the JPL and CSR mascon solutions) do not need decorrelation or Gaussian smoothing and offer larger signal magnitudes compared to the GRACE spherical harmonics (SH) filtered results. However, a recent study [Chen et al., JGR, 2017] demonstrated that both the JPL and CSR mascon solutions also have leakage errors. We developed a new postprocessing method based on empirical mode decomposition to estimate mass change from GRACE SH solutions without decorrelation and Gaussian smoothing, the two main sources of leakage errors. We found that, without any post-processing, the noise and errors in spherical harmonic solutions introduce very clear high-frequency components in the spatial domain. By removing these high-frequency components while preserving the overall pattern of the signal, we obtained better mass estimates with minimal leakage errors. The new global mass change estimates captured all the signals observed by GRACE without the stripe errors. Results were compared with traditional methods over the Tonle Sap Basin in Cambodia, northwestern India, the Central Valley in California, and the Caspian Sea. Our results provide larger signal magnitudes, which are in good agreement with the leakage-corrected (forward modeled) SH results.
Peelle's pertinent puzzle using the Monte Carlo technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawano, Toshihiko; Talou, Patrick; Burr, Thomas
2009-01-01
We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form in order to assess the impact of the distribution, and we obtain the least-squares solution directly from numerical simulations. We found that the standard least-squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and if the error is proportional to the measured values. The least-squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
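A minimal numerical sketch in the spirit of the study, using the textbook form of the puzzle (two measurements, 1.5 and 1.0, each with a 10% independent statistical error and a fully correlated 20% normalization error); these numbers are the commonly quoted version of PPP and are assumed here, not taken from the paper. The conventional generalized least-squares covariance, with the common error referred to the measured values, reproduces the puzzling 0.88; referring the common error to a single underlying scale instead gives an answer between the two measurements, close to (though not identical with) the Monte Carlo value of about 1.1 quoted above.

```python
import numpy as np

y = np.array([1.5, 1.0])            # the two discrepant measurements
stat = 0.10 * y                     # 10% independent statistical errors
common = 0.20                       # 20% fully correlated normalization error

def gls_mean(y, cov):
    """Generalized least-squares estimate of a single common value and its error."""
    w = np.linalg.solve(cov, np.ones_like(y))
    return (w @ y) / w.sum(), 1.0 / np.sqrt(w.sum())

# Conventional PPP covariance: common error referred to the *measured* values.
cov_ppp = np.diag(stat**2) + common**2 * np.outer(y, y)
m_ppp, s_ppp = gls_mean(y, cov_ppp)          # ~0.88, below both measurements

# Covariance with the common error referred to one underlying scale,
# iterated so that the correlated term uses the fitted value, not the data.
m = y.mean()
for _ in range(20):
    cov = np.diag(stat**2) + (common * m)**2 * np.ones((2, 2))
    m, s = gls_mean(y, cov)

print(m_ppp, m)    # the anomalous downward pull disappears in the second case
```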
Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation
Ruotsalainen, Laura; Kirkko-Jaakkola, Martti; Rantanen, Jesperi; Mäkelä, Maija
2018-01-01
The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy. Therefore, sophisticated error modelling and implementation of integration algorithms are key to providing a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither of the assumptions is correct for tactical applications, especially for dismounted soldiers or rescue personnel. Therefore, error modelling and implementation of advanced fusion algorithms are essential for providing a viable result. Our approach is to use particle filtering (PF), which is a sophisticated option for integrating measurements emerging from pedestrian motion having non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision-based heading and translation measurements to include the correct error probability density functions (pdfs) in the particle filter implementation. Then, model fitting is used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, a particle filtering method is developed to fuse all this information, where the weight of each particle is computed based on the specific models derived. The performance of the developed method is tested via two experiments, one at a university's premises and another in realistic tactical conditions. The results show significant improvement in horizontal localization when the measurement errors are carefully modelled and their inclusion into the particle filtering implementation is correctly realized.
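A minimal sketch of the central idea: a particle filter whose weights are computed from a non-Gaussian (here heavy-tailed Student's t) measurement-error pdf rather than a Gaussian one. The 1-D random-walk motion model, noise parameters, and data are illustrative assumptions, not the multi-sensor models fitted in the paper.

```python
import numpy as np

def particle_filter_1d(observations, n_particles=2000, step_std=0.3,
                       meas_df=3.0, meas_scale=0.5, seed=7):
    """Minimal 1-D particle filter in which the measurement error pdf is a
    heavy-tailed Student's t instead of a Gaussian.  Each particle's weight is
    the measurement-error density evaluated at its residual."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, step_std, n_particles)   # motion model
        r = (z - particles) / meas_scale
        w = (1.0 + r**2 / meas_df) ** (-(meas_df + 1.0) / 2.0)  # t-density (unnormalized)
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # Systematic resampling to avoid weight degeneracy.
        idx = np.searchsorted(np.cumsum(w),
                              (rng.random() + np.arange(n_particles)) / n_particles)
        particles = particles[np.minimum(idx, n_particles - 1)]
    return np.array(estimates)

# Track a slowly moving position observed with occasional heavy-tailed errors.
rng = np.random.default_rng(8)
truth = np.cumsum(rng.normal(0.0, 0.2, 100))
obs = truth + 0.5 * rng.standard_t(3.0, 100)
est = particle_filter_1d(obs)
```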
Long-distance continuous-variable quantum key distribution with a Gaussian modulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jouguet, Paul; SeQureNet, 23 avenue d'Italie, F-75013 Paris; Kunz-Jacques, Sebastien
2011-12-15
We designed high-efficiency error correcting codes allowing us to extract an errorless secret key in a continuous-variable quantum key distribution (CVQKD) protocol using a Gaussian modulation of coherent states and a homodyne detection. These codes are available for a wide range of signal-to-noise ratios on an additive white Gaussian noise channel with a binary modulation and can be combined with a multidimensional reconciliation method proven secure against arbitrary collective attacks. This improved reconciliation procedure considerably extends the secure range of a CVQKD with a Gaussian modulation, giving a secret key rate of about 10^-3 bit per pulse at a distance of 120 km for reasonable physical parameters.
Number-counts slope estimation in the presence of Poisson noise
NASA Technical Reports Server (NTRS)
Schmitt, Juergen H. M. M.; Maccacaro, Tommaso
1986-01-01
The slope determination of a power-law number-flux relationship is considered in the case of photon-limited sampling. This case is important for high-sensitivity X-ray surveys with imaging telescopes, where the error in an individual source measurement depends on the integrated flux and is Poisson, rather than Gaussian, distributed. A bias-free method of slope estimation is developed that takes into account the exact error distribution, the influence of background noise, and the effects of varying limiting sensitivities. It is shown that the resulting bias corrections are quite insensitive to the bias correction procedures applied, as long as only sources with signal-to-noise ratio five or greater are considered. However, if sources with signal-to-noise ratio less than five are included, the derived bias corrections depend sensitively on the shape of the error distribution.
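A minimal sketch of slope estimation that respects the Poisson error distribution: the power-law slope is obtained by maximizing the Poisson likelihood of binned source counts rather than by least squares on the counts. Background noise and varying limiting sensitivities, which the paper's bias-free method accounts for, are omitted here; the bin edges and simulated slope are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def fit_counts_slope(flux_bins, counts, exposure=1.0):
    """Maximum-likelihood fit of a power-law number-flux relation
    N(>S) proportional to S^(-gamma), from binned counts with Poisson noise."""
    lo, hi = flux_bins[:-1], flux_bins[1:]

    def neg_loglike(params):
        log_norm, log_gamma = params
        gamma = np.exp(log_gamma)
        # Expected counts per bin from integrating the differential counts.
        lam = np.exp(log_norm) * exposure * (lo**(-gamma) - hi**(-gamma)) / gamma
        return np.sum(lam - counts * np.log(lam))   # Poisson NLL up to a constant

    res = minimize(neg_loglike, x0=[np.log(max(counts.sum(), 1.0)), np.log(1.5)],
                   method="Nelder-Mead")
    return np.exp(res.x[1]), res

# Simulated survey: Poisson-distributed counts drawn from a gamma = 1.8 power law.
rng = np.random.default_rng(9)
edges = np.logspace(0, 2, 15)
gamma_true, norm = 1.8, 300.0
lam_true = norm * (edges[:-1]**(-gamma_true) - edges[1:]**(-gamma_true)) / gamma_true
counts = rng.poisson(lam_true)
gamma_hat, _ = fit_counts_slope(edges, counts)
```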
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed (1) without scatter correction, (2) with triple energy window (TEW) scatter correction, and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios, and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction: the quantification error relative to a dose calibrator derived measurement was found to be <1%, -26%, and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since the factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
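For reference, a short sketch of the triple-energy-window (TEW) scatter estimate used in reconstruction (2): the scatter in the photopeak window is approximated by a trapezoid whose sides are the count densities in two narrow windows adjacent to the photopeak. The window widths below are hypothetical, and the formula is the standard TEW expression, not necessarily the exact variant used in this study.

```python
import numpy as np

def tew_scatter_correction(peak, lower, upper, w_peak, w_lower, w_upper):
    """Triple-energy-window (TEW) scatter correction for a projection image.

    peak/lower/upper : counts in the photopeak and the two adjacent windows
    w_*              : the corresponding energy-window widths (keV)
    """
    scatter = (lower / w_lower + upper / w_upper) * w_peak / 2.0
    corrected = np.clip(peak - scatter, 0.0, None)   # avoid negative counts
    return corrected, scatter

# Illustrative 128x128 projections with hypothetical window widths (keV).
rng = np.random.default_rng(10)
peak = rng.poisson(50.0, (128, 128)).astype(float)
lower = rng.poisson(8.0, (128, 128)).astype(float)
upper = rng.poisson(6.0, (128, 128)).astype(float)
corrected, scatter = tew_scatter_correction(peak, lower, upper,
                                            w_peak=73.0, w_lower=6.0, w_upper=6.0)
```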
Tropospheric Correction for InSAR Using Interpolated ECMWF Data and GPS Zenith Total Delay
NASA Technical Reports Server (NTRS)
Webb, Frank H.; Fishbein, Evan F.; Moore, Angelyn W.; Owen, Susan E.; Fielding, Eric J.; Granger, Stephanie L.; Bjorndahl, Fredrik; Lofgren Johan
2011-01-01
To mitigate atmospheric errors caused by the troposphere, which is a limiting error source for spaceborne interferometric synthetic aperture radar (InSAR) imaging, a tropospheric correction method has been developed using data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and the Global Positioning System (GPS). The ECMWF data were interpolated using a Stretched Boundary Layer Model (SBLM), and ground-based GPS estimates of the tropospheric delay from the Southern California Integrated GPS Network were interpolated using modified Gaussian and inverse distance weighted interpolations. The resulting Zenith Total Delay (ZTD) correction maps have been evaluated, both separately and using a combination of the two data sets, for three short-interval InSAR pairs from Envisat during 2006 over an area stretching northeast from the Los Angeles basin towards Death Valley. Results show that the root mean square (rms) error in the InSAR images was greatly reduced, meaning a significant reduction in the atmospheric noise of up to 32 percent. However, for some of the images, the rms increased and large errors remained after applying the tropospheric correction. The residuals showed a constant gradient over the area, suggesting that a remaining orbit error from Envisat was present. The orbit reprocessing in ROI_pac and the plane fitting both require that the only remaining error in the InSAR image be the orbit error. If this is not fulfilled, the correction can still be made, but it will be done using all remaining errors under the assumption that they are orbit errors. By correcting for tropospheric noise, the biggest error source is removed, and the orbit error becomes apparent and can be corrected for.
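A minimal sketch of the inverse-distance-weighted (and, alternatively, Gaussian-weighted) interpolation of GPS zenith total delays onto an InSAR grid mentioned above. Station coordinates, delays, the grid, and the weighting exponent/length scale are placeholders; the "modified Gaussian" interpolation actually used is not reproduced exactly.

```python
import numpy as np

def idw_interpolate(station_xy, station_ztd, grid_xy, power=2.0, eps=1e-6):
    """Inverse-distance-weighted interpolation of GPS zenith total delays
    onto the points of an InSAR grid."""
    d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w @ station_ztd) / w.sum(axis=1)

def gaussian_interpolate(station_xy, station_ztd, grid_xy, length_km=25.0):
    """Gaussian-weighted average, a simplified stand-in for the 'modified
    Gaussian' interpolation mentioned above."""
    d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=2)
    w = np.exp(-0.5 * (d / length_km) ** 2)
    return (w @ station_ztd) / w.sum(axis=1)

# Hypothetical station layout (km) and delays (m); grid points of an interferogram.
stations = np.array([[0.0, 0.0], [40.0, 10.0], [15.0, 55.0], [70.0, 60.0]])
ztd = np.array([2.35, 2.41, 2.38, 2.44])
grid = np.stack(np.meshgrid(np.linspace(0, 80, 81),
                            np.linspace(0, 80, 81)), axis=-1).reshape(-1, 2)
ztd_map = idw_interpolate(stations, ztd, grid).reshape(81, 81)
```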
Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy
Cohen, E. A. K.; Ober, R. J.
2014-01-01
We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise, a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs, this is an errors-in-variables problem and linear least squares is inappropriate; the correct method is generalized least squares. To allow for point-dependent errors, the equivalence of a generalized maximum likelihood model and a heteroscedastic generalized least squares model is established, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise, where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity), we provide closed-form solutions for the estimators and derive their distributions. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE), believed to be useful especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions of the TRE and LRE are themselves Gaussian, and the parameterized distributions are derived. The results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show that the asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data.
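To make the estimation step concrete, here is a minimal sketch for the special case highlighted above, where each control point's localization covariance is a scalar multiple of the identity: generalized least squares then reduces to weighted least squares with weights inversely proportional to the per-point variance (itself roughly inversely proportional to the photon count). For brevity the sketch ignores errors in the source CPs, whereas the paper's errors-in-variables treatment accounts for both sets; all data are simulated placeholders.

```python
import numpy as np

def fit_affine_wls(src, dst, dst_var):
    """Estimate a 2-D affine transform dst ~ A @ src + t by weighted least
    squares, weighting each control point by the inverse of its (scalar)
    localization variance."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])            # design matrix [x y 1]
    w = 1.0 / dst_var
    XtW = X.T * w
    beta = np.linalg.solve(XtW @ X, XtW @ dst)       # (3, 2) coefficient block
    A, t = beta[:2].T, beta[2]
    return A, t

# Hypothetical fluorescent-bead control points; variance inversely proportional
# to the photon count collected for each localization.
rng = np.random.default_rng(11)
src = rng.uniform(0, 100, (30, 2))
A_true = np.array([[1.01, 0.02], [-0.015, 0.99]]); t_true = np.array([2.0, -1.5])
photons = rng.integers(200, 5000, 30)
dst_var = 100.0 / photons
dst = src @ A_true.T + t_true + rng.standard_normal((30, 2)) * np.sqrt(dst_var)[:, None]
A_hat, t_hat = fit_affine_wls(src, dst, dst_var)
```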
Security of coherent-state quantum cryptography in the presence of Gaussian noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heid, Matthias; Luetkenhaus, Norbert
2007-08-15
We investigate the security against collective attacks of a continuous variable quantum key distribution scheme in the asymptotic key limit for a realistic setting. The quantum channel connecting the two honest parties is assumed to be lossy and imposes Gaussian noise on the observed quadrature distributions. Secret key rates are given for direct and reverse reconciliation schemes including post-selection in the collective attack scenario. The effect of a nonideal error correction and two-way communication in the classical post-processing step is also taken into account.
NASA Astrophysics Data System (ADS)
Huang, Jian; Wei, Kai; Jin, Kai; Li, Min; Zhang, YuDong
2018-06-01
The sodium laser guide star (LGS) plays a key role in modern astronomical Adaptive Optics Systems (AOSs). The spot size and photon return of the sodium LGS depend strongly on the laser power density distribution at the sodium layer and thus affect the performance of the AOS. The power density distribution is degraded by turbulence in the uplink path, launch system aberrations, the beam quality of the laser, and so forth. Even without any aberrations, the TEM00 Gaussian type is still not the optimal power density distribution for obtaining the best balance between the measurement error and the temporal error. To optimize and control the LGS power density distribution at the sodium layer towards an expected distribution type, a method that combines pre-correction and beam shaping is proposed. A typical result shows that under strong turbulence (Fried parameter (r0) of 5 cm) and for a quasi-continuous wave sodium laser (power (P) of 15 W), in the best case our method can effectively optimize the distribution from the Gaussian type to the "top-hat" type and enhance the photon return flux of the sodium LGS; at the same time, the total error of the AOS is decreased by 36% with our technique for a high power laser and poor seeing.
Analyzing the errors of DFT approximations for compressed water systems
NASA Astrophysics Data System (ADS)
Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.
2014-07-01
We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the clusters.
NASA Technical Reports Server (NTRS)
Reddy, C. P.; Gupta, S. C.
1973-01-01
An all-digital phase-locked loop which tracks the phase of the incoming sinusoidal signal once per carrier cycle is proposed. The different elements, their functions, and the phase lock operation are explained in detail. The nonlinear difference equations which govern the operation of the digital loop when the incoming signal is embedded in white Gaussian noise are derived, and a suitable model is specified. The performance of the digital loop is considered for the synchronization of a sinusoidal signal. For this, the noise term is suitably modelled, which allows specification of the output probabilities of the two-level quantizer in the loop at any given phase error. The loop filter considered increases the probability of proper phase correction. The phase error states, taken modulo two-pi, form a finite-state Markov chain, which enables the calculation of steady-state probabilities, RMS phase error, transient response, and mean time for cycle skipping.
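A minimal sketch of the Markov-chain analysis described above: quantized phase-error states form a finite-state chain whose transition probabilities depend on the quantizer output probabilities at each phase error, and the stationary distribution yields the RMS phase error. The number of states, the transition model, and the probabilities below are hypothetical, not those derived in the paper.

```python
import numpy as np

def stationary_distribution(P):
    """Steady-state probabilities of a finite-state Markov chain with row-stochastic
    transition matrix P, taken from the left eigenvector for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.abs(np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]))
    return pi / pi.sum()

# Hypothetical 16-state chain: states are quantized phase errors modulo two-pi,
# and each carrier cycle the loop steps one quantization level toward or away
# from lock with a probability that depends on the current phase error.
n = 16
phases = (2 * np.pi * np.arange(n) / n + np.pi) % (2 * np.pi) - np.pi
P = np.zeros((n, n))
for i, phi in enumerate(phases):
    p_correct = 0.5 + 0.45 * abs(np.sin(phi / 2.0))   # quantizer more reliable far from lock
    toward = (i - 1) % n if phi > 0 else (i + 1) % n
    away = (i + 1) % n if phi > 0 else (i - 1) % n
    P[i, toward] += p_correct
    P[i, away] += 1.0 - p_correct
pi = stationary_distribution(P)
rms_phase_error = np.sqrt(np.sum(pi * phases**2))
```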
NASA Astrophysics Data System (ADS)
Lohrmann, Carol A.
1990-03-01
Interoperability of commercial Land Mobile Radios (LMR) and the military's tactical LMR is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote Federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs, namely the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes are examined (Hamming, Quadratic Residue, hard-decision Golay, and soft-decision Golay), tested on three FORTRAN-programmed channel simulations (INMARSAT, Gaussian, and constant burst width), and compared and analyzed (based on bit error rates and the percentage of error-free super-frame runs) so that a best code can be recommended. Of the four codes under study, the soft-decision (24,12) Golay code is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementation of the algorithm.
Impact of spurious shear on cosmological parameter estimates from weak lensing observables
Petri, Andrea; May, Morgan; Haiman, Zoltán; ...
2014-12-30
Residual errors in shear measurements, after corrections for instrument systematics and atmospheric effects, can impact cosmological parameters derived from weak lensing observations. Here we combine convergence maps from our suite of ray-tracing simulations with random realizations of spurious shear. This allows us to quantify the errors and biases of the triplet (Ω_m, w, σ_8) derived from the power spectrum (PS), as well as from three different sets of non-Gaussian statistics of the lensing convergence field: Minkowski functionals (MFs), low-order moments (LMs), and peak counts (PKs). Our main results are as follows: (i) We find an order of magnitude smaller biases from the PS than in previous work. (ii) The PS and LM yield biases much smaller than the morphological statistics (MF, PK). (iii) For strictly Gaussian spurious shear with integrated amplitude as low as its current estimate of σ_sys^2 ≈ 10^-7, biases from the PS and LM would be unimportant even for a survey with the statistical power of the Large Synoptic Survey Telescope. However, we find that for surveys larger than ≈100 deg^2, non-Gaussianity in the noise (not included in our analysis) will likely be important and must be quantified to assess the biases. (iv) The morphological statistics (MF, PK) introduce important biases even for Gaussian noise, which must be corrected in large surveys. The biases are in different directions in (Ω_m, w, σ_8) parameter space, allowing self-calibration by combining multiple statistics. Our results warrant follow-up studies with more extensive lensing simulations and more accurate spurious shear estimates.
Camps, Vicente J; Piñero, David P; Mateo, Veronica; Ribera, David; de Fez, Dolores; Blanes-Mompó, Francisco J; Alzamora-Rodríguez, Antonio
2013-11-01
The aim was to calculate theoretically the errors in the estimation of corneal power when using the keratometric index (nk) in eyes that underwent laser refractive surgery for the correction of myopia, and to define and validate clinically an algorithm for minimizing such errors. Differences between corneal power estimation using the classical nk and using the Gaussian equation in eyes that underwent laser myopic refractive surgery were simulated and evaluated theoretically. Additionally, an adjusted keratometric index (nkadj) model dependent on r1c was developed for minimizing these differences. The model was validated clinically by retrospectively using the data from 32 myopic eyes [range, -1.00 to -6.00 diopters (D)] that had undergone laser in situ keratomileusis using a solid-state laser platform. The agreement between the Gaussian (Pc) and adjusted keratometric (Pkadj) corneal powers in such eyes was evaluated. According to our simulations, overestimations of corneal power of up to 3.5 D were possible for nk = 1.3375. The nk value needed to avoid the keratometric error ranged between 1.2984 and 1.3297. The following nkadj models were obtained: nkadj = -0.0064286r1c + 1.37688 (Gullstrand eye model) and nkadj = -0.0063804r1c + 1.37806 (Le Grand). The mean difference between Pkadj and Pc was 0.00 D, with limits of agreement of -0.45 and +0.46 D. This difference correlated significantly with the posterior corneal radius (r = -0.94, P < 0.01). The use of a single nk for estimating corneal power in eyes that underwent laser myopic refractive surgery can lead to significant errors. These errors can be minimized by using a variable nk dependent on r1c.
NASA Astrophysics Data System (ADS)
Li, Ming; Wang, Q. J.; Bennett, James C.; Robertson, David E.
2016-09-01
This study develops a new error modelling method for ensemble short-term and real-time streamflow forecasting, called error reduction and representation in stages (ERRIS). The novelty of ERRIS is that it does not rely on a single complex error model but runs a sequence of simple error models through four stages. At each stage, an error model attempts to incrementally improve over the previous stage. Stage 1 establishes parameters of a hydrological model and parameters of a transformation function for data normalization, Stage 2 applies a bias correction, Stage 3 applies autoregressive (AR) updating, and Stage 4 applies a Gaussian mixture distribution to represent model residuals. In a case study, we apply ERRIS for one-step-ahead forecasting at a range of catchments. The forecasts at the end of Stage 4 are shown to be much more accurate than at Stage 1 and to be highly reliable in representing forecast uncertainty. Specifically, the forecasts become more accurate by applying the AR updating at Stage 3, and more reliable in uncertainty spread by using a mixture of two Gaussian distributions to represent the residuals at Stage 4. ERRIS can be applied to any existing calibrated hydrological models, including those calibrated to deterministic (e.g. least-squares) objectives.
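A minimal sketch of Stages 3 and 4 only, as described above: an AR(1)-style update that carries part of the previous forecast error forward, followed by a two-component Gaussian mixture fitted to the remaining residuals to represent forecast uncertainty. The data-normalization and bias-correction stages (1 and 2) are omitted, scikit-learn's GaussianMixture is used for the mixture fit, and the streamflow series and parameters are illustrative assumptions, not the ERRIS formulation itself.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def ar1_update(raw_forecast, last_obs, last_forecast, rho):
    """Stage-3-style autoregressive update: carry a fraction of yesterday's
    forecast error forward into today's forecast."""
    return raw_forecast + rho * (last_obs - last_forecast)

# Illustrative streamflow series: raw model forecasts and matching observations.
rng = np.random.default_rng(12)
obs = 10.0 + np.cumsum(rng.normal(0.0, 0.5, 300))
raw = obs + 1.5 + np.convolve(rng.normal(0.0, 1.0, 300), np.ones(5) / 5, mode="same")

# Estimate the AR(1) coefficient from the lag-1 correlation of past errors.
err = obs - raw
rho = np.corrcoef(err[:-1], err[1:])[0, 1]
updated = np.array([raw[0]] + [ar1_update(raw[t], obs[t - 1], raw[t - 1], rho)
                               for t in range(1, len(obs))])

# Stage-4-style residual model: a two-component Gaussian mixture.
resid = (obs - updated)[1:].reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(resid)
# gmm.weights_, gmm.means_, gmm.covariances_ describe the forecast uncertainty.
```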
NASA Astrophysics Data System (ADS)
Vicent, Jorge; Alonso, Luis; Sabater, Neus; Miesch, Christophe; Kraft, Stefan; Moreno, Jose
2015-09-01
The uncertainties in the knowledge of the Instrument Spectral Response Function (ISRF), the barycenter of the spectral channels, and the bandwidth / spectral sampling (spectral resolution) are important error sources in the processing of satellite imaging spectrometers within narrow atmospheric absorption bands. Exhaustive laboratory spectral characterization is a costly engineering process, and the characterized configuration can differ from the instrument configuration in flight given the harsh space environment and the stresses of the launch phase. The retrieval schemes at Level-2 commonly assume a Gaussian ISRF, leading to uncorrected spectral stray-light effects and incorrect characterization and correction of the spectral shift and smile. These effects produce inaccurate atmospherically corrected data and are propagated to the final Level-2 mission products. Within ESA's FLEX satellite mission activities, the impact of the ISRF knowledge error and spectral calibration on Level-1 products and its propagation to Level-2 retrieved chlorophyll fluorescence has been analyzed. A spectral recalibration scheme has been implemented at Level-2, reducing the impact of Level-1 errors to below 10% error in retrieved fluorescence within the oxygen absorption bands and enhancing the quality of the retrieved products. The work presented here shows how minimizing the spectral calibration errors requires effort both in the laboratory characterization and in the implementation of specific algorithms at Level-2.
NASA Technical Reports Server (NTRS)
Lien, Guo-Yuan; Kalnay, Eugenia; Miyoshi, Takemasa; Huffman, George J.
2016-01-01
Assimilation of satellite precipitation data into numerical models presents several difficulties, two of the most important being the non-Gaussian error distributions associated with precipitation and the large model and observation errors. As a result, improving the model forecast beyond a few hours by assimilating precipitation has been found to be difficult. To identify the challenges and propose practical solutions for assimilation of precipitation, statistics are calculated for global precipitation in a low-resolution NCEP Global Forecast System (GFS) model and the TRMM Multisatellite Precipitation Analysis (TMPA). The samples are constructed using the same model with the same forecast period, observation variables, and resolution as in the follow-on GFS-TMPA precipitation assimilation experiments presented in the companion paper. The statistical results indicate that the T62 and T126 GFS models generally have a positive bias in precipitation compared to the TMPA observations, and that the simulation of marine stratocumulus precipitation is not realistic in the T62 GFS model. It is necessary to apply to precipitation either the commonly used logarithm transformation or the newly proposed Gaussian transformation to obtain a better relationship between the model and observed precipitation. When the Gaussian transformations are separately applied to the model and observed precipitation, they serve as a bias correction that corrects amplitude-dependent biases. In addition, using a spatially and/or temporally averaged precipitation variable, such as the 6-h accumulated precipitation, should be advantageous for precipitation assimilation.
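A minimal sketch of the Gaussian transformation idea mentioned above, implemented here as empirical quantile mapping to a standard normal; applying it separately to model and observed precipitation also acts as an amplitude-dependent bias correction. The gamma-distributed precipitation samples are placeholders, and the paper's exact transformation (and its handling of zero precipitation) may differ.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_transform(values):
    """Empirical Gaussian transformation (quantile mapping): map each value to
    the standard-normal quantile of its empirical cumulative probability."""
    n = len(values)
    cdf = rankdata(values, method="average") / (n + 1.0)   # avoid 0 and 1
    return norm.ppf(cdf)

# Illustrative 6-h accumulated precipitation samples (mm), heavily skewed.
rng = np.random.default_rng(13)
model_precip = rng.gamma(shape=0.4, scale=6.0, size=5000)
obs_precip = rng.gamma(shape=0.5, scale=4.0, size=5000)
model_g = gaussian_transform(model_precip)
obs_g = gaussian_transform(obs_precip)
# After transformation both variables are approximately standard normal.
```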
High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin
2016-01-01
Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding with an efficiency of 93.7%.
Limits of quantitation - Yet another suggestion
NASA Astrophysics Data System (ADS)
Carlson, Jill; Wysoczanski, Artur; Voigtman, Edward
2014-06-01
The work presented herein suggests that the limit of quantitation concept may be rendered substantially less ambiguous and ultimately more useful as a figure of merit by basing it upon the significant figure and relative measurement error ideas due to Coleman, Auses and Gram, coupled with the correct instantiation of Currie's detection limit methodology. Simple theoretical results are presented for a linear, univariate chemical measurement system with homoscedastic Gaussian noise, and these are tested against both Monte Carlo computer simulations and laser-excited molecular fluorescence experimental results. Good agreement among experiment, theory and simulation is obtained and an easy extension to linearly heteroscedastic Gaussian noise is also outlined.
Bachman, Daniel; Chen, Zhijiang; Wang, Christopher; ...
2016-11-29
Phase errors caused by fabrication variations in silicon photonic integrated circuits are an important problem, which negatively impacts device yield and performance. This study reports our recent progress in the development of a method for permanent, post-fabrication phase error correction of silicon photonic circuits based on femtosecond laser irradiation. Using a beam-shaping technique, we achieve a 14-fold enhancement in the phase tuning resolution of the method with a Gaussian-shaped beam compared to a top-hat beam. The large improvement in the tuning resolution makes the femtosecond laser method potentially useful for very fine phase trimming of silicon photonic circuits. Finally, we also show that femtosecond laser pulses can directly modify silicon photonic devices through a SiO2 cladding layer, making it the only permanent post-fabrication method that can tune silicon photonic circuits protected by an oxide cladding.
Performance Metrics, Error Modeling, and Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling
2016-01-01
A common set of statistical metrics has been used to summarize the performance of models or measurements- the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
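As a small illustration of the point made above that the common metrics follow from a linear, additive, Gaussian error model, the helper below derives bias, mean square error and correlation from the parameters of y = a + b·x + ε, ε ~ N(0, σ²); the notation is generic rather than the authors'.

```python
import numpy as np

def metrics_from_error_model(a, b, sigma_eps, mu_x, sigma_x):
    """Derive bias, MSE and linear correlation from a simple linear error
    model y = a + b*x + eps, eps ~ N(0, sigma_eps^2), where the reference x
    has mean mu_x and standard deviation sigma_x."""
    bias = a + (b - 1.0) * mu_x
    mse = bias**2 + (b - 1.0) ** 2 * sigma_x**2 + sigma_eps**2
    corr = b * sigma_x / np.sqrt(b**2 * sigma_x**2 + sigma_eps**2)
    return bias, mse, corr

# example: a mildly biased, noisy measurement of a reference with mean 10, sd 3
print(metrics_from_error_model(a=0.5, b=0.9, sigma_eps=1.0, mu_x=10.0, sigma_x=3.0))
```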
Lifting primordial non-Gaussianity above the noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Welling, Yvette; Woude, Drian van der; Pajer, Enrico, E-mail: welling@strw.leidenuniv.nl, E-mail: D.C.vanderWoude@uu.nl, E-mail: enrico.pajer@gmail.com
2016-08-01
Primordial non-Gaussianity (PNG) in Large Scale Structures is obfuscated by the many additional sources of non-linearity. Within the Effective Field Theory approach to Standard Perturbation Theory, we show that matter non-linearities in the bispectrum can be modeled sufficiently well to strengthen current bounds with near future surveys, such as Euclid. We find that the EFT corrections are crucial to this improvement in sensitivity. Yet, our understanding of non-linearities is still insufficient to reach important theoretical benchmarks for equilateral PNG, while, for local PNG, our forecast is more optimistic. We consistently account for the theoretical error intrinsic to the perturbative approach and discuss the details of its implementation in Fisher forecasts.
Analysis of soft-decision FEC on non-AWGN channels.
Cho, Junho; Xie, Chongjin; Winzer, Peter J
2012-03-26
Soft-decision forward error correction (SD-FEC) schemes are typically designed for additive white Gaussian noise (AWGN) channels. In a fiber-optic communication system, noise may be neither circularly symmetric nor Gaussian, thus violating an important assumption underlying SD-FEC design. This paper quantifies the impact of non-AWGN noise on SD-FEC performance for such optical channels. We use a conditionally bivariate Gaussian noise model (CBGN) to analyze the impact of correlations among the signal's two quadrature components, and assess the effect of CBGN on SD-FEC performance using the density evolution of low-density parity-check (LDPC) codes. On a CBGN channel generating severely elliptic noise clouds, it is shown that more than 3 dB of coding gain is attainable by utilizing correlation information. Our analyses also give insights into potential improvements of the detection performance for fiber-optic transmission systems assisted by SD-FEC.
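As an illustration of why correlation information matters for soft decisions, the sketch below computes Gray-mapped QPSK bit LLRs under a bivariate Gaussian noise model with a full covariance matrix; the symbol mapping and covariance values are assumptions, and this is a generic textbook computation rather than the paper's exact CBGN formulation or decoder.

```python
import numpy as np
from scipy.stats import multivariate_normal

def qpsk_llrs(r, cov):
    """Bit LLRs for Gray-mapped QPSK under (possibly correlated) bivariate
    Gaussian noise with covariance `cov`; r is the received (I, Q) sample."""
    symbols = {(0, 0): (+1, +1), (0, 1): (+1, -1),
               (1, 0): (-1, +1), (1, 1): (-1, -1)}   # assumed Gray mapping
    like = {bits: multivariate_normal.pdf(r, mean=s, cov=cov)
            for bits, s in symbols.items()}
    llr_i = np.log((like[(0, 0)] + like[(0, 1)]) / (like[(1, 0)] + like[(1, 1)]))
    llr_q = np.log((like[(0, 0)] + like[(1, 0)]) / (like[(0, 1)] + like[(1, 1)]))
    return llr_i, llr_q

# an "elliptic noise cloud": correlated quadratures with unequal variances
cov = np.array([[0.20, 0.08], [0.08, 0.05]])
print(qpsk_llrs(np.array([0.4, 0.9]), cov))
```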
Achieving algorithmic resilience for temporal integration through spectral deferred corrections
Grout, Ray; Kolla, Hemanth; Minion, Michael; ...
2017-05-08
Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
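A minimal sketch of the stopping rule described above, with `sweep` and `residual` as placeholders for an actual SDC implementation; the tolerance names and default values are assumptions, not the paper's settings.

```python
def resilient_sdc_sweeps(sweep, residual, max_sweeps=20, rel_tol=1e-6, stall_tol=1e-2):
    """Continue deferred-correction sweeps until the residual is small relative
    to the residual of the first sweep AND changes slowly between sweeps, so
    that extra sweeps can absorb a transient (soft) fault.

    `sweep()` performs one correction sweep in place; `residual()` returns a
    scalar residual norm."""
    sweep()
    r_first = r_prev = residual()
    for _ in range(max_sweeps - 1):
        sweep()
        r = residual()
        small = r <= rel_tol * r_first                  # small relative to first sweep
        stalled = abs(r - r_prev) <= stall_tol * r_prev  # changing slowly
        if small and stalled:
            break
        r_prev = r
    return r
```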
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Kunwar Pal, E-mail: k-psingh@yahoo.com; Department of Physics, Shri Venkateshwara University, Gajraula, Amroha, Uttar Pradesh 244236; Arya, Rashmi
2015-09-14
We have investigated the effect of the initial phase on the error in electron energy obtained using the paraxial approximation, in a study of electron acceleration by a focused laser pulse in vacuum with a three-dimensional test-particle simulation code. The error is obtained by comparing the energy of the electron for the paraxial approximation and for the seventh-order-corrected description of the fields of a Gaussian laser. The paraxial approximation predicts the wrong laser divergence and the wrong electron escape time from the pulse, which leads to a prediction of higher energy. The error shows strong phase dependence for the electrons lying along the axis of the laser for a linearly polarized laser pulse. The relative error may be significant for some specific values of the initial phase even at moderate values of laser spot sizes. The error does not show initial phase dependence for a circularly polarized laser pulse.
On the effective field theory for quasi-single field inflation
NASA Astrophysics Data System (ADS)
Tong, Xi; Wang, Yi; Zhou, Siyi
2017-11-01
We study the effective field theory (EFT) description of the virtual particle effects in quasi-single field inflation, which unifies the previous results on the large mass and large mixing cases. By using a horizon crossing approximation and matching with known limits, approximate expressions for the power spectrum and the spectral index are obtained. The error of the approximate solution is within 10% in dominant parts of the parameter space, which corresponds to less than 0.1% error in the n_s-r diagram. The quasi-single field corrections on the n_s-r diagram are plotted for a few inflation models. In particular, the quasi-single field correction drives m²φ² inflation to the best fit region on the n_s-r diagram, with an amount of equilateral non-Gaussianity which can be tested in future experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yunlong; Wang, Aiping; Guo, Lei
This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
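For concreteness, one common way to realize such a criterion is to estimate Renyi's quadratic error entropy with a Gaussian Parzen window and minimize it (equivalently, maximize the information potential). The sketch below shows that estimator under an assumed kernel width; it is not the paper's specific recursive controller design.

```python
import numpy as np

def error_information_potential(errors, sigma=0.5):
    """Parzen-window (Gaussian kernel) estimate of the information potential
    V(e) = (1/N^2) * sum_ij G_{sigma*sqrt(2)}(e_i - e_j)."""
    e = np.asarray(errors, dtype=float)
    d = e[:, None] - e[None, :]
    s2 = 2.0 * sigma**2                      # variance of the pairwise kernel
    kernel = np.exp(-d**2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    return kernel.mean()

def error_entropy(errors, sigma=0.5):
    """Renyi's quadratic entropy H2 = -log V; minimizing H2 maximizes V."""
    return -np.log(error_information_potential(errors, sigma))

print(error_entropy(np.random.default_rng(0).normal(0.0, 1.0, 200)))
```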
NASA Astrophysics Data System (ADS)
Rebolledo Coy, M. A.; Villanueva, O. M. B.; Bartz-Beielstein, T.; Ribbe, L.
2017-12-01
Rainfall measurement plays an important role in the understanding and modeling of the water cycle. However, rainfall assessment in data-scarce regions using common rain gauge information cannot be done with a straightforward approach. Some of the main problems concerning rainfall assessment are the lack of a sufficiently dense grid of ground stations in extensive areas and the unstable spatial accuracy of the Satellite Rainfall Estimates (SREs). Following previous works on SREs analysis and bias correction, we generate an ensemble model that corrects the bias error on a seasonal and yearly basis using six different state-of-the-art SREs (TRMM 3B42RT, TRMM 3B42v7, PERSIANN-CDR, CHIRPSv2, CMORPH and MSWEPv1.2) in a point-to-pixel approach for the studied period (2003-2015). Three different basins, Magdalena in Colombia, Imperial in Chile and Paraiba do Sul in Brazil, are evaluated. Using Gaussian process regression and Bayesian robust regression we model the behavior of the ground stations and evaluate the goodness-of-fit by using the modified Kling-Gupta efficiency (KGE'). Following this evaluation, the models are re-fitted by taking into account the error distribution at each point, and the corresponding KGE' is evaluated again. Both models were specified using the probabilistic language STAN. To improve the efficiency of the Gaussian model, a clustering of the data was implemented. We also compared the performance of both models in terms of uncertainty and stability against the raw input, concluding that both models represent the study areas better. The results show that the error displays an exponential behavior for days when precipitation was present, which allows the models to be corrected according to the observed rainfall values. The seasonal evaluations also show improved performance relative to the yearly evaluations. The use of bias-corrected SREs for hydrologic purposes in data-scarce regions is highly recommended in order to merge the point values from the ground measurements and the spatial distribution of rainfall from the satellite estimates.
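The modified Kling-Gupta efficiency used above can be computed as below (the standard formulation with the bias ratio and the coefficient-of-variation ratio); the variable names are illustrative.

```python
import numpy as np

def kge_prime(sim, obs):
    """Modified Kling-Gupta efficiency:
    KGE' = 1 - sqrt((r-1)^2 + (beta-1)^2 + (gamma-1)^2),
    where r is the linear correlation, beta the bias ratio (mean_sim/mean_obs)
    and gamma the variability ratio of the coefficients of variation."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    beta = sim.mean() / obs.mean()
    gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (beta - 1.0) ** 2 + (gamma - 1.0) ** 2)
```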
Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets
NASA Astrophysics Data System (ADS)
Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua
2017-09-01
In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and that the minimum distance is not large enough, which leads to a degradation of the error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity check matrices of these type-II QC-LDPC codes consist of zero matrices with weight 0, circulant permutation matrices (CPMs) with weight 1, and circulant matrices with weight 2 (W2CMs). The introduction of W2CMs in the parity check matrices makes it possible to achieve a larger minimum distance, which can improve the error-correction performance of the codes. The Tanner graphs of these codes contain no cycles of length 4, thus they have excellent decoding convergence characteristics. In addition, because the parity check matrices have a quasi-dual-diagonal structure, the fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve a more excellent error-correction performance and have no error floor phenomenon over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
Quantum Error Correction Protects Quantum Search Algorithms Against Decoherence
Botsinis, Panagiotis; Babar, Zunaira; Alanis, Dimitrios; Chandra, Daryus; Nguyen, Hung; Ng, Soon Xin; Hanzo, Lajos
2016-01-01
When quantum computing becomes a widespread commercial reality, Quantum Search Algorithms (QSA) and especially Grover's QSA will inevitably be one of their main applications, constituting their cornerstone. Most of the literature assumes that the quantum circuits are free from decoherence. In practice, decoherence will remain as unavoidable as the Gaussian noise of classical circuits imposed by the Brownian motion of electrons, hence it may have to be mitigated. In this contribution, we investigate the effect of quantum noise on the performance of QSAs, in terms of their success probability as a function of the database size to be searched, when decoherence is modelled by depolarizing channels' deleterious effects imposed on the quantum gates. Moreover, we employ quantum error correction codes for limiting the effects of quantum noise and for correcting quantum flips. More specifically, we demonstrate that, when we search for a single solution in a database having 4096 entries using Grover's QSA at an aggressive depolarizing probability of 10^-3, the success probability of the search is 0.22 when no quantum coding is used, which is improved to 0.96 when Steane's quantum error correction code is employed. Finally, apart from Steane's code, the employment of Quantum Bose-Chaudhuri-Hocquenghem (QBCH) codes is also considered. PMID:27924865
Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Lacey, Simon; Sathian, K
2018-02-01
In a recent study, Eklund et al. have shown that cluster-wise family-wise error (FWE) rate-corrected inferences made in parametric statistical method-based functional magnetic resonance imaging (fMRI) studies over the past couple of decades may have been invalid, particularly for cluster-defining thresholds less stringent than p < 0.001; principally because the spatial autocorrelation functions (sACFs) of fMRI data had been modeled incorrectly to follow a Gaussian form, whereas empirical data suggest otherwise. Hence, the residuals from general linear model (GLM)-based fMRI activation estimates in these studies may not have possessed a homogeneously Gaussian sACF. Here we propose a method based on the assumption that heterogeneity and non-Gaussianity of the sACF of the first-level GLM analysis residuals, as well as temporal autocorrelations in the first-level voxel residual time-series, are caused by unmodeled MRI signal from neuronal and physiological processes as well as motion and other artifacts, which can be approximated by appropriate decompositions of the first-level residuals with principal component analysis (PCA), and removed. We show that application of this method yields GLM residuals with significantly reduced spatial correlation, a nearly Gaussian sACF and uniform spatial smoothness across the brain, thereby allowing valid cluster-based FWE-corrected inferences based on the assumption of Gaussian spatial noise. We further show that application of this method renders the voxel time-series of first-level GLM residuals independent and identically distributed across time (which is a necessary condition for appropriate voxel-level GLM inference), without having to fit ad hoc stochastic colored noise models. Furthermore, the detection power of individual subject brain activation analysis is enhanced. This method will be especially useful for case studies, which rely on first-level GLM analysis inferences.
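A highly simplified sketch of the kind of PCA-based residual cleanup described above: the leading principal components of the first-level residual time-by-voxel matrix are treated as structured (unmodeled) noise and subtracted. The fixed number of removed components and the absence of any component-selection criterion are assumptions; the published method is more elaborate.

```python
import numpy as np
from sklearn.decomposition import PCA

def denoise_glm_residuals(residuals, n_remove=5):
    """Remove the leading principal components from first-level GLM residuals.

    `residuals` is a (time, voxels) array; the low-rank part reconstructed
    from the first `n_remove` components is subtracted as structured noise."""
    pca = PCA(n_components=n_remove)
    scores = pca.fit_transform(residuals)      # (time, n_remove), mean-centred
    structured = scores @ pca.components_      # shared structured noise
    return residuals - structured

cleaned = denoise_glm_residuals(np.random.default_rng(0).normal(size=(120, 500)))
```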
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Work on partial unit memory codes continued; it was shown that for a given virtual state complexity, the maximum free distance over the class of all convolutional codes is achieved within the class of unit memory codes. The effect of phase-locked loop (PLL) tracking error on coding system performance was studied by using the channel cut-off rate as the measure of quality of a modulation system. Work on optimum modulation signal sets for a non-white Gaussian channel considered a heuristic selection rule based on a water-filling argument. The use of error correcting codes to perform data compression by the technique of syndrome source coding was researched, and a weight-and-error-locations scheme was developed that is closely related to LDSC coding.
[Gaussian process regression and its application in near-infrared spectroscopy analysis].
Feng, Ai-Ming; Fang, Li-Min; Lin, Min
2011-06-01
A Gaussian process (GP) is applied in the present paper as a chemometric method to explore the complicated relationship between near infrared (NIR) spectra and ingredients. After the outliers were detected by the Monte Carlo cross validation (MCCV) method and removed from the dataset, different preprocessing methods, such as multiplicative scatter correction (MSC), smoothing and derivatives, were tried for the best performance of the models. Furthermore, uninformative variable elimination (UVE) was introduced as a variable selection technique and the characteristic wavelengths obtained were further employed as input for modeling. A public dataset with 80 NIR spectra of corn was introduced as an example for evaluating the new algorithm. The optimal models for oil, starch and protein were obtained by the GP regression method. The performance of the final models was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV), root mean square error of prediction (RMSEP) and correlation coefficient (r). The models give good calibration ability with r values above 0.99, and the prediction ability is also satisfactory with r values higher than 0.96. The overall results demonstrate that the GP algorithm is an effective chemometric method and is promising for NIR analysis.
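As a rough illustration of this kind of calibration workflow, the sketch below fits a Gaussian process regressor to synthetic stand-in spectra and reports RMSEP and r; the kernel, train/test split and data are assumptions and do not reproduce the corn data set or the paper's preprocessing.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

# X: (n_samples, n_wavelengths) preprocessed NIR spectra (e.g. after MSC and
# UVE variable selection); y: reference property values. Synthetic stand-ins:
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 50))
y = X[:, :3].sum(axis=1) + 0.05 * rng.normal(size=80)

X_cal, X_test, y_cal, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(1e-2),
                              normalize_y=True).fit(X_cal, y_cal)
pred = gp.predict(X_test)
rmsep = np.sqrt(np.mean((pred - y_test) ** 2))
r = np.corrcoef(pred, y_test)[0, 1]
print(f"RMSEP = {rmsep:.3f}, r = {r:.3f}")
```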
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao
Surrogate models are commonly used in Bayesian approaches such as Markov Chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimations of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or implementing MCMC in a two-stage manner. Since the two-stage MCMC requires extra original model evaluations, the computational cost is still high. If the information of the measurements is incorporated, a locally accurate approximation of the original model can be adaptively constructed with low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimation of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing the estimation accuracy, the new approach achieves a speed-up of about 200 times compared to our previous work using two-stage MCMC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Chengming; Yan, Yihua; Tan, Baolin
This work presents a systematic investigation of the influence of weather conditions on the calibration errors by using Gaussian fitting, least chi-square linear fitting, and wavelet transforms to analyze the calibration coefficients from observations of the Chinese Solar Broadband Radio Spectrometers (at frequency bands of 1.0–2.0 GHz, 2.6–3.8 GHz, and 5.2–7.6 GHz) during 1997–2007. We found that the calibration coefficients are influenced by the local air temperature. Considering the temperature correction, the calibration error will be reduced by about 10%–20% at 2800 MHz. Based on the above investigation and the calibration corrections, we further study the radio emission of the quiet Sun by using an appropriate hybrid model of the quiet-Sun atmosphere. The results indicate that the numerical flux of the hybrid model is much closer to the observed flux than those of the other models.
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
Hayashi, Norio; Miyati, Tosiaki; Takanaga, Masako; Ohno, Naoki; Hamaguchi, Takashi; Kozaka, Kazuto; Sanada, Shigeru; Yamamoto, Tomoyuki; Matsui, Osamu
2011-01-01
Sensitivity falls significantly in the direction perpendicular to the arrangement of the phased-array coil used in parallel magnetic resonance imaging (MRI). Moreover, in 3.0 tesla (3T) abdominal MRI, the quality of the image is reduced by changes in the relaxation times, reinforcement of the magnetic susceptibility effect, etc. In 3T MRI, which has a high resonant frequency, the signal at depth (the central part) of the trunk is reduced. SCIC, a sensitivity correction method, provides inadequate correction, e.g. edges are emphasized when the central part is corrected. Therefore, we considered compensation processing for the uneven sensitivity of abdominal 3T MR images using a Gaussian distribution. The correction processing consisted of the following steps. 1) The center of gravity of the domain of the human body in an abdominal MR image was calculated. 2) A correction coefficient map was created from the center of gravity using a Gaussian distribution. 3) The sensitivity-corrected image was created from the correction coefficient map and the original image. When the image was processed using the Gaussian correction, the uniformity calculated using the NEMA method was improved significantly compared to the original image of a phantom. In a visual evaluation by radiologists, the uniformity was improved significantly by the Gaussian correction processing. Because it improves the homogeneity of abdominal images taken with 3T MRI, the Gaussian correction processing is considered to be a very useful technique.
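A minimal sketch of the three steps described above (center of gravity of the body region, a Gaussian correction-coefficient map centred there, combination with the original image). The intensity threshold used for the body mask, the width parameter, and the multiplicative gain form are illustrative assumptions, not the authors' exact recipe.

```python
import numpy as np

def gaussian_sensitivity_correction(image, gain=1.0, sigma_frac=0.5):
    """Boost the low-signal central part of an abdominal MR image using a
    Gaussian correction-coefficient map centred at the body's center of gravity."""
    mask = image > 0.1 * image.max()                    # crude body segmentation
    ys, xs = np.nonzero(mask)
    w = image[ys, xs]
    cy, cx = np.average(ys, weights=w), np.average(xs, weights=w)
    yy, xx = np.indices(image.shape)
    sigma = sigma_frac * max(image.shape)
    coeff = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))
    return image * (1.0 + gain * coeff)                 # larger gain near the centre

corrected = gaussian_sensitivity_correction(np.random.default_rng(0).random((128, 128)))
```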
High dimensional linear regression models under long memory dependence and measurement error
NASA Astrophysics Data System (ADS)
Kaul, Abhishek
This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of the Lasso under long range dependent model errors. The Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the consistency, in particular the n^{1/2-d}-consistency, of the Lasso, along with the oracle property of the adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of the Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups; in the latter setup, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation that hold with asymptotic probability 1, thereby providing the ℓ1-consistency of the proposed estimator. We also establish the model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.
NASA Astrophysics Data System (ADS)
Dementev, A. O.; Dmitriev, E. V.; Kozoderov, V. V.; Egorov, V. D.
2017-10-01
Hyperspectral imaging is a promising, up-to-date technology widely applied for accurate thematic mapping. The presence of a large number of narrow survey channels allows us to use subtle differences in the spectral characteristics of objects and to make a more detailed classification than in the case of using standard multispectral data. The difficulties encountered in the processing of hyperspectral images are usually associated with the redundancy of the spectral information, which leads to the curse of dimensionality. Methods currently used for recognizing objects in multispectral and hyperspectral images are usually based on standard supervised classification algorithms of various complexity. The accuracy of these algorithms can differ significantly depending on the classification task considered. In this paper we study the performance of ensemble classification methods for the problem of classification of forest vegetation. Error-correcting output codes and boosting are tested on artificial data and real hyperspectral images. It is demonstrated that boosting gives a more significant improvement when used with simple base classifiers; the accuracy in this case is comparable to that of the error-correcting output code (ECOC) classifier with a Gaussian kernel SVM base algorithm. However, the necessity of boosting ECOC with a Gaussian kernel SVM is questionable. It is demonstrated that the selected ensemble classifiers allow us to recognize forest species with accuracy high enough to be compared with ground-based forest inventory data.
NASA Astrophysics Data System (ADS)
Jeong, Hyunjo; Zhang, Shuzeng; Barnard, Dan; Li, Xiongbing
2015-09-01
The accurate measurement of the acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to the finite-size geometry of the transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore were not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations, which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% error when the exact second harmonic diffraction corrections are used together with the negligible attenuation correction effects, on the basis of the linear frequency dependence of the attenuation coefficients, α2 ≃ 2α1.
An Improved Algorithm to Generate a Wi-Fi Fingerprint Database for Indoor Positioning
Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi
2013-01-01
The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase. PMID:23966197
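A small sketch of the decision step described above: a kurtosis test chooses between a single Gaussian and a two-component (double-peak) Gaussian fit of the RSS samples at one reference point. The significance level and the use of scikit-learn's GaussianMixture are assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.stats import kurtosistest, norm
from sklearn.mixture import GaussianMixture

def fit_rss_distribution(rss, alpha=0.05):
    """Return ('single', scipy frozen normal) or ('double', fitted 2-component
    GaussianMixture) for the RSS samples of one reference point."""
    rss = np.asarray(rss, float).reshape(-1, 1)
    _, p = kurtosistest(rss.ravel())
    if p >= alpha:                 # kurtosis consistent with a single Gaussian
        return "single", norm(loc=rss.mean(), scale=rss.std(ddof=1))
    gmm = GaussianMixture(n_components=2, random_state=0).fit(rss)
    return "double", gmm

# toy usage with a clearly bimodal RSS sample
rng = np.random.default_rng(0)
rss = np.concatenate([rng.normal(-60, 2, 100), rng.normal(-72, 2, 100)])
print(fit_rss_distribution(rss)[0])
```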
A forward error correction technique using a high-speed, high-rate single chip codec
NASA Astrophysics Data System (ADS)
Boyd, R. W.; Hartman, W. F.; Jones, Robert E.
The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10^-5 bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olama, Mohammed M; Matalgah, Mustafa M; Bobrek, Miljko
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to those of the conventional advanced encryption standard (AES).
Virtual sensors for robust on-line monitoring (OLM) and Diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, Ramakrishna; Lerchen, Megan E.; Ramuhalli, Pradeep
Unscheduled shutdown of nuclear power facilities for recalibration and replacement of faulty sensors can be expensive and disruptive to grid management. In this work, we present virtual (software) sensors that can replace a faulty physical sensor for a short duration, thus allowing recalibration to be safely deferred to a later time. The virtual sensor model uses a Gaussian process model to process input data from redundant and other nearby sensors. Predicted data include uncertainty bounds that account for spatial association uncertainty as well as measurement noise and error. Using data from an instrumented cooling water flow loop testbed, the virtual sensor model has predicted the correct sensor measurements and the associated error corresponding to a faulty sensor.
A TDM link with channel coding and digital voice.
NASA Technical Reports Server (NTRS)
Jones, M. W.; Tu, K.; Harton, P. L.
1972-01-01
The features of a TDM (time-division multiplexed) link model are described. A PCM telemetry sequence was coded for error correction and multiplexed with a digitized voice channel. An all-digital implementation of a variable-slope delta modulation algorithm was used to digitize the voice channel. The results of extensive testing are reported. The measured coding gain and the system performance over a Gaussian channel are compared with theoretical predictions and computer simulations. Word intelligibility scores are reported as a measure of voice channel performance.
NASA Astrophysics Data System (ADS)
Pintado, O. I.; Santillán, L.; Marquetti, M. E.
All images obtained with a telescope are distorted by the instrument. This distortion is known as the instrumental profile or instrumental broadening. The deformations in the spectra could introduce large errors in the determination of different parameters, especially those dependent on the spectral line shapes, such as chemical abundances, winds, microturbulence, etc. To correct this distortion, in some cases the spectral lines are convolved with a Gaussian function and in others the lines are widened by a fixed value. Some codes used to calculate synthetic spectra, such as SYNTHE, include these corrections. We present results obtained for the REOSC and EBASIM spectrographs of CASLEO.
Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes
NASA Astrophysics Data System (ADS)
Harrington, James William
Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g. , factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present a local classical processing scheme for correcting errors on toric codes, which demonstrates that quantum information can be maintained in two dimensions by purely local (quantum and classical) resources.
Lin, Chuan-Kai; Wang, Sheng-De
2004-11-01
A new autopilot design for bank-to-turn (BTT) missiles is presented. In the design of the autopilot, a ridge Gaussian neural network with local learning capability and fewer tuning parameters than Gaussian neural networks is proposed to model the controlled nonlinear systems. We prove that the proposed ridge Gaussian neural network, which can be a universal approximator, equals an expansion of rotated and scaled Gaussian functions. Although ridge Gaussian neural networks can approximate nonlinear and complex systems accurately, the small approximation errors may affect the tracking performance significantly. Therefore, by employing H∞ control theory, it is easy to attenuate the effects of the approximation errors of the ridge Gaussian neural networks to a prescribed level. Computer simulation results confirm the effectiveness of the proposed ridge Gaussian neural network-based autopilot with H∞ stabilization.
NASA Astrophysics Data System (ADS)
Bukhari, W.; Hong, S.-M.
2016-03-01
The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+ , first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+ . The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function, albeit at the cost of reduced duty cycle. The error reduction allows the clinical target volume to planning target volume (CTV-PTV) margin to be reduced, leading to decreased normal-tissue toxicity and possible dose escalation. The CTV-PTV margin is also evaluated to quantify clinical benefits of EKF-GPRN+ prediction.
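A minimal sketch of the gating idea above: the beam is held on only while the trace of the 3×3 predictive covariance stays below a threshold, and the threshold choice is what trades prediction-error reduction against duty cycle. Function and variable names are placeholders, not taken from the paper.

```python
import numpy as np

def gate_beam(pred_covs, threshold):
    """Beam-on decisions from a sequence of 3x3 predictive covariance matrices:
    on while trace(cov) <= threshold, paused otherwise."""
    traces = np.array([np.trace(c) for c in pred_covs])
    beam_on = traces <= threshold
    return beam_on, beam_on.mean()          # decisions and resulting duty cycle

def threshold_for_duty_cycle(pred_covs, target=0.8):
    """Pick the trace threshold that yields approximately the target duty cycle."""
    traces = np.sort([np.trace(c) for c in pred_covs])
    return traces[int(np.ceil(target * len(traces))) - 1]

covs = [np.diag(np.random.default_rng(i).random(3)) for i in range(100)]
thr = threshold_for_duty_cycle(covs, target=0.8)
print(gate_beam(covs, thr)[1])
```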
NASA Astrophysics Data System (ADS)
Bukhari, W.; Hong, S.-M.
2015-01-01
Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR+, implements a gating function without pre-specifying a particular region of the patient’s breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR+ algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR+ implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR+ in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR+. The experimental results show that the EKF-GPR+ algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. As an example, EKF-GPR+ reduces the patient-wise RMS error to 37%, 39% and 42% in percent ratios relative to no prediction for a duty cycle of 80% at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The experiments also confirm that EKF-GPR+ controls the duty cycle with reasonable accuracy.
Loop corrections to primordial non-Gaussianity
NASA Astrophysics Data System (ADS)
Boran, Sibel; Kahya, E. O.
2018-02-01
We discuss quantum gravitational loop effects to observable quantities such as curvature power spectrum and primordial non-Gaussianity of cosmic microwave background (CMB) radiation. We first review the previously shown case where one gets a time dependence for zeta-zeta correlator due to loop corrections. Then we investigate the effect of loop corrections to primordial non-Gaussianity of CMB. We conclude that, even with a single scalar inflaton, one might get a huge value for non-Gaussianity which would exceed the observed value by at least 30 orders of magnitude. Finally we discuss the consequences of this result for scalar driven inflationary models.
Topology in two dimensions. II - The Abell and ACO cluster catalogues
NASA Astrophysics Data System (ADS)
Plionis, Manolis; Valdarnini, Riccardo; Coles, Peter
1992-09-01
We apply a method for quantifying the topology of projected galaxy clustering to the Abell and ACO catalogues of rich clusters. We use numerical simulations to quantify the statistical bias involved in using high peaks to define the large-scale structure, and we use the results obtained to correct our observational determinations for this known selection effect and also for possible errors introduced by boundary effects. We find that the Abell cluster sample is consistent with clusters being identified with high peaks of a Gaussian random field, but that the ACO shows a slight meatball shift away from the Gaussian behavior over and above that expected purely from the high-peak selection. The most conservative explanation of this effect is that it is caused by some artefact of the procedure used to select the clusters in the two samples.
First-principles energetics of water clusters and ice: A many-body analysis
NASA Astrophysics Data System (ADS)
Gillan, M. J.; Alfè, D.; Bartók, A. P.; Csányi, G.
2013-12-01
Standard forms of density-functional theory (DFT) have good predictive power for many materials, but are not yet fully satisfactory for cluster, solid, and liquid forms of water. Recent work has stressed the importance of DFT errors in describing dispersion, but we note that errors in other parts of the energy may also contribute. We obtain information about the nature of DFT errors by using a many-body separation of the total energy into its 1-body, 2-body, and beyond-2-body components to analyze the deficiencies of the popular PBE and BLYP approximations for the energetics of water clusters and ice structures. The errors of these approximations are computed by using accurate benchmark energies from the coupled-cluster technique of molecular quantum chemistry and from quantum Monte Carlo calculations. The systems studied are isomers of the water hexamer cluster, the crystal structures Ih, II, XV, and VIII of ice, and two clusters extracted from ice VIII. For the binding energies of these systems, we use the machine-learning technique of Gaussian Approximation Potentials to correct successively for 1-body and 2-body errors of the DFT approximations. We find that even after correction for these errors, substantial beyond-2-body errors remain. The characteristics of the 2-body and beyond-2-body errors of PBE are completely different from those of BLYP, but the errors of both approximations disfavor the close approach of non-hydrogen-bonded monomers. We note the possible relevance of our findings to the understanding of liquid water.
Non-Gaussianity in a quasiclassical electronic circuit
NASA Astrophysics Data System (ADS)
Suzuki, Takafumi J.; Hayakawa, Hisao
2017-05-01
We study the non-Gaussian dynamics of a quasiclassical electronic circuit coupled to a mesoscopic conductor. Non-Gaussian noise accompanying the nonequilibrium transport through the conductor significantly modifies the stationary probability density function (PDF) of the flux in the dissipative circuit. We incorporate weak quantum fluctuation of the dissipative LC circuit with a stochastic method and evaluate the quantum correction of the stationary PDF. Furthermore, an inverse formula to infer the statistical properties of the non-Gaussian noise from the stationary PDF is derived in the classical-quantum crossover regime. The quantum correction is indispensable to correctly estimate the microscopic transfer events in the QPC with the quasiclassical inverse formula.
Assessing Gaussian Assumption of PMU Measurement Error Using Field Data
Wang, Shaobu; Zhao, Junbo; Huang, Zhenyu; ...
2017-10-13
Gaussian PMU measurement error has been assumed for many power system applications, such as state estimation, oscillatory mode monitoring, and voltage stability analysis, to name a few. This letter proposes a simple yet effective approach to assess this assumption by using the stability property of a probability distribution and the concept of redundant measurement. Extensive results using field PMU data from the WECC system reveal that the Gaussian assumption is questionable.
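In the spirit of the approach above: two redundant PMU streams of the same quantity differ only by their measurement errors, and the Gaussian family is stable under addition, so the difference should itself be Gaussian if the errors are. The sketch below checks that with a generic normality test; the specific test and significance level are stand-ins, not the letter's statistic.

```python
import numpy as np
from scipy.stats import normaltest

def assess_gaussian_error(pmu_a, pmu_b, alpha=0.01):
    """Return (not_rejected, p_value) for the Gaussian hypothesis applied to
    the difference of two redundant PMU measurement streams."""
    diff = np.asarray(pmu_a, float) - np.asarray(pmu_b, float)
    _, p = normaltest(diff)
    return p >= alpha, p       # True -> Gaussian assumption not rejected

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 10, 2000))
print(assess_gaussian_error(truth + rng.laplace(0, 0.01, 2000),
                            truth + rng.normal(0, 0.01, 2000)))
```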
Gaussian error correction of quantum states in a correlated noisy channel.
Lassen, Mikael; Berni, Adriano; Madsen, Lars S; Filip, Radim; Andersen, Ulrik L
2013-11-01
Noise is the main obstacle for the realization of fault-tolerant quantum information processing and secure communication over long distances. In this work, we propose a communication protocol relying on simple linear optics that optimally protects quantum states from non-Markovian or correlated noise. We implement the protocol experimentally and demonstrate the near-ideal protection of coherent and entangled states in an extremely noisy channel. Since all real-life channels are exhibiting pronounced non-Markovian behavior, the proposed protocol will have immediate implications in improving the performance of various quantum information protocols.
Errors associated with fitting Gaussian profiles to noisy emission-line spectra
NASA Technical Reports Server (NTRS)
Lenz, Dawn D.; Ayres, Thomas R.
1992-01-01
Landman et al. (1982) developed prescriptions to predict profile fitting errors for Gaussian emission lines perturbed by white noise. We show that their scaling laws can be generalized to more complicated signal-dependent 'noise models' of common astronomical detector systems.
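For a concrete sense of such fitting errors, the snippet below fits a Gaussian profile to a synthetic emission line perturbed by white noise and reads the 1σ parameter uncertainties from the covariance of the fit; the line parameters and noise level are arbitrary choices, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width):
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2)

# synthetic emission line perturbed by white (signal-independent) noise
rng = np.random.default_rng(1)
x = np.linspace(-5, 5, 200)
y = gaussian(x, 10.0, 0.0, 1.0) + rng.normal(0.0, 0.5, x.size)

popt, pcov = curve_fit(gaussian, x, y, p0=[8.0, 0.1, 1.2])
perr = np.sqrt(np.diag(pcov))      # 1-sigma fitting errors of amp, center, width
print(dict(zip(["amp", "center", "width"], zip(popt, perr))))
```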
A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose
Rahman, Mohammad Mizanur; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse
2017-01-01
Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms and having open ended classification boundaries such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), are found to suffer from false classification errors of irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance; algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer should be used. The simulation results presented in this paper show that GRNN has more correct classification efficiency and false alarm reduction capability compared to RBFNN. As the design of a GRNN and RBFNN is complex and expensive due to large numbers of neuron requirements, a simple hyperspheric classification method based on minimum, maximum, and mean (MMM) values of each class of the training dataset was presented. The MMM algorithm was simple and found to be fast and efficient in correctly classifying data of training classes, and correctly rejecting data of extraneous odors, and thereby reduced false alarms. PMID:28895910
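A compact sketch in the spirit of the MMM method described above; the exact decision rule used here (nearest class mean among the classes whose min/max box contains the sample, reject otherwise) is an illustrative assumption and may differ from the paper's.

```python
import numpy as np

class MMMClassifier:
    """Each class is summarized by per-feature minimum, maximum and mean of its
    training data; samples falling outside every class box are rejected."""

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.stats_ = {c: (X[y == c].min(0), X[y == c].max(0), X[y == c].mean(0))
                       for c in np.unique(y)}
        return self

    def predict_one(self, x):
        x = np.asarray(x, float)
        candidates = {c: np.linalg.norm(x - mean)
                      for c, (lo, hi, mean) in self.stats_.items()
                      if np.all(x >= lo) and np.all(x <= hi)}
        return min(candidates, key=candidates.get) if candidates else "reject"

clf = MMMClassifier().fit(np.random.default_rng(0).random((60, 4)),
                          np.repeat(["A", "B", "C"], 20))
print(clf.predict_one([0.5, 0.5, 0.5, 0.5]), clf.predict_one([5.0, 5.0, 5.0, 5.0]))
```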
Semi-supervised anomaly detection - towards model-independent searches of new physics
NASA Astrophysics Data System (ADS)
Kuusela, Mikael; Vatanen, Tommi; Malmi, Eric; Raiko, Tapani; Aaltonen, Timo; Nagai, Yoshikazu
2012-06-01
Most classification algorithms used in high energy physics fall under the category of supervised machine learning. Such methods require a training set containing both signal and background events and are prone to classification errors should this training data be systematically inaccurate, for example due to the assumed MC model. To complement such model-dependent searches, we propose an algorithm based on semi-supervised anomaly detection techniques, which does not require a MC training sample for the signal data. We first model the background using a multivariate Gaussian mixture model. We then search for deviations from this model by fitting to the observations a mixture of the background model and a number of additional Gaussians. This allows us to perform pattern recognition of any anomalous excess over the background. We show by comparison to neural network classifiers that such an approach is considerably more robust against misspecification of the signal MC than supervised classification. In cases where there is an unexpected signal, a neural network might fail to identify it correctly, while anomaly detection does not suffer from such a limitation. On the other hand, when there are no systematic errors in the training data, both methods perform comparably.
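As a rough illustration of the background-modelling step (not the authors' exact fit, which keeps the background components fixed while adding extra Gaussians), the sketch below fits a Gaussian mixture to a background sample and flags observed events to which the background model assigns low density; all numbers, shapes and names are made up.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, size=(5000, 3))            # stand-in for background MC
observed = np.vstack([rng.normal(0.0, 1.0, size=(950, 3)),
                      rng.normal(3.0, 0.3, size=(50, 3))])   # small unexpected "signal"

bkg_model = GaussianMixture(n_components=4, random_state=0).fit(background)
log_density = bkg_model.score_samples(observed)

# flag events that fall below the 1st percentile of the background log-density
threshold = np.quantile(bkg_model.score_samples(background), 0.01)
anomalous = observed[log_density < threshold]
print(f"flagged {len(anomalous)} of {len(observed)} events as anomalous")
```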
An improved pi/4-QPSK with nonredundant error correction for satellite mobile broadcasting
NASA Technical Reports Server (NTRS)
Feher, Kamilo; Yang, Jiashi
1991-01-01
An improved pi/4-quadrature phase-shift keying (QPSK) receiver that incorporates a simple nonredundant error correction (NEC) structure is proposed for satellite and land-mobile digital broadcasting. The bit-error-rate (BER) performance of the pi/4-QPSK with NEC is analyzed and evaluated in a fast Rician fading and additive white Gaussian noise (AWGN) environment using computer simulation. It is demonstrated that with simple electronics the performance of a noncoherently detected pi/4-QPSK signal in both AWGN and fast Rician fading can be improved. When the K-factor (a ratio of average power of multipath signal to direct path power) of the Rician channel decreases, the improvement increases. An improvement of 1.2 dB could be obtained at a BER of 0.0001 in the AWGN channel. This performance gain is achieved without requiring any signal redundancy and additional bandwidth. Three types of noncoherent detection schemes of pi/4-QPSK with NEC structure, such as IF band differential detection, baseband differential detection, and FM discriminator, are discussed. It is concluded that the pi/4-QPSK with NEC is an attractive scheme for power-limited satellite land-mobile broadcasting systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albert, J; Labarbe, R; Sterpin, E
2016-06-15
Purpose: To understand the extent to which prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added a 5% distal Gaussian noise to each beamlet in order to introduce discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s and θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0 mm and an underrange of 0.6 mm on average. When s, θ, and u were ignored, these values were larger: 2.1 mm and 4.3 mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded 3.2 mm and 3.2 mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity of correcting for setup errors and errors in the calibration curve. The simplicity and speed of our method make it a good candidate for implementation as a tool for in-room adaptive therapy. This work also demonstrates that prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.
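A toy sketch of this kind of fit is given below: a made-up smooth range map stands in for the RM, and the shift s, rotation θ, and calibration offset u are recovered by minimizing the Euclidean distance between predicted and noisy "measured" beamlet ranges. The range-map function, noise level, and all parameter values are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def range_map(p):
    # stand-in smooth range map R(x, y) in mm (not the paper's Monte Carlo ranges)
    return 150.0 + 0.8 * p[:, 0] + 0.01 * (p[:, 0] ** 2 + p[:, 1] ** 2)

g = np.linspace(-40, 40, 13)
beamlets = np.array([(x, y) for x in g for y in g])         # square array of beamlets

def transform(p, sx, sy, th):
    c, s = np.cos(th), np.sin(th)
    return p @ np.array([[c, -s], [s, c]]).T + np.array([sx, sy])

true = (2.0, -1.5, np.deg2rad(1.0), 3.0)                    # hidden sx, sy, theta, u
measured = range_map(transform(beamlets, *true[:3])) + true[3]
measured += np.random.default_rng(1).normal(0, 0.5, measured.size)   # stand-in distal noise

def cost(v):
    sx, sy, th, u = v
    return np.sum((range_map(transform(beamlets, sx, sy, th)) + u - measured) ** 2)

fit = minimize(cost, x0=np.zeros(4), method="Nelder-Mead")
print("recovered (sx, sy, theta, u):", np.round(fit.x, 3))
```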
Hoy, Robert S; Foteinopoulou, Katerina; Kröger, Martin
2009-09-01
Primitive path analyses of entanglements are performed over a wide range of chain lengths for both bead spring and atomistic polyethylene polymer melts. Estimators for the entanglement length N_{e} which operate on results for a single chain length N are shown to produce systematic O(1/N) errors. The mathematical roots of these errors are identified as (a) treating chain ends as entanglements and (b) neglecting non-Gaussian corrections to chain and primitive path dimensions. The prefactors for the O(1/N) errors may be large; in general their magnitude depends both on the polymer model and the method used to obtain primitive paths. We propose, derive, and test new estimators which eliminate these systematic errors using information obtainable from the variation in entanglement characteristics with chain length. The new estimators produce accurate results for N_{e} from marginally entangled systems. Formulas based on direct enumeration of entanglements appear to converge faster and are simpler to apply.
NASA Astrophysics Data System (ADS)
Chen, Y.; Xu, X.
2017-12-01
The broadband Lg 1/Q tomographic models in eastern Eurasia are inverted from source- and site-corrected path 1/Q data. The path 1/Q values are measured between stations (or events) by the two-station (TS), reverse two-station (RTS) and reverse two-event (RTE) methods, respectively. Because path 1/Q is computed using the logarithm of the product of observed spectral ratios and a simplified 1D geometrical-spreading correction, it is subject to "modeling errors" dominated by uncompensated 3D structural effects. We found in Chen and Xie [2017] that these errors closely follow a normal distribution after the long-tailed outliers are screened out (similar to teleseismic travel-time residuals). We thus rigorously analyze the statistics of these errors collected from repeated samplings of station (and event) pairs from 1.0 to 10.0 Hz and reject about 15% of outliers in each frequency band. The resultant variance of Δ/Q decreases with frequency as 1/f². The 1/Q tomography using screened data is now a stochastic inverse problem whose solutions approximate the means of Gaussian random variables and whose model covariance matrix is that of Gaussian variables with well-known statistical behavior. We adopt a new SVD-based tomographic method to solve for the 2D Q image together with its resolution and covariance matrices. The RTS and RTE methods yield the most reliable 1/Q data, free of source and site effects, but the path coverage is rather sparse due to the very strict recording geometry. The TS method absorbs the effects of non-unit site-response ratios into the 1/Q data. The RTS method also yields site responses, which can then be corrected from the path 1/Q of TS to make them also free of site effects. The site-corrected TS data substantially improve path coverage, allowing us to solve for 1/Q tomography up to 6.0 Hz. The model resolution and uncertainty are first quantitatively assessed by spread functions (computed from the resolution matrix) and the covariance matrix. The reliably retrieved Q models correlate well with the distinct tectonic blocks characterized by the most recent major deformations and vary with frequency. With the 1/Q tomographic model and its covariance matrix, we can formally estimate the uncertainty of any path-specific Lg 1/Q prediction. This new capability significantly benefits source estimation, for which a reliable uncertainty estimate is especially important.
Development of an algorithm for corneal reshaping with a scanning laser beam
NASA Astrophysics Data System (ADS)
Shen, Jin-Hui; Söderberg, Per; Matsui, Takaaki; Manns, Fabrice; Parel, Jean-Marie
1995-07-01
The corneal-ablation rate, the beam-intensity distribution, and the initial and the desired corneal topographies are used to calculate a spatial distribution map of laser pulses. The optimal values of the parameters are determined with a computer model, for a system that produces 213-nm radiation with a Gaussian beam-intensity distribution and a peak radiant exposure of 400 mJ/cm2. The model shows that with a beam diameter of 0.5 mm, an overlap of 80%, and a 5-mm treatment zone, the roughness is less than 6% of the central ablation depth, the refractive error after correction is less than 0.1 D for corrections of myopia of 1, 3, and 6 D and less than 0.4 D for a correction of myopia of 10 D, and the number of pulses per diopter of
NASA Astrophysics Data System (ADS)
Jimenez, Jose Ramón; González Anera, Rosario; Jiménez del Barco, Luis; Hita, Enrique; Pérez-Ocón, Francisco
2005-01-01
We provide a correction factor to be added in ablation algorithms when a Gaussian beam is used in photorefractive laser surgery. This factor, which quantifies the effect of pulse overlapping, depends on beam radius and spot size. We also deduce the expected post-surgical corneal radius and asphericity when this factor is considered. Data from 141 eyes operated on with LASIK (laser in situ keratomileusis) using a Gaussian profile show that the discrepancy between experimental and expected corneal power is significantly lower when the correction factor is used. For an effective improvement of post-surgical visual quality, this factor should be applied in ablation algorithms that do not consider the effects of pulse overlapping with a Gaussian beam.
Statistics of the epoch of reionization 21-cm signal - I. Power spectrum error-covariance
NASA Astrophysics Data System (ADS)
Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman
2016-02-01
The non-Gaussian nature of the epoch of reionization (EoR) 21-cm signal has a significant impact on the error variance of its power spectrum P(k). We have used a large ensemble of seminumerical simulations and an analytical model to estimate the effect of this non-Gaussianity on the entire error-covariance matrix C_ij. Our analytical model shows that C_ij has contributions from two sources. One is the usual variance for a Gaussian random field, which scales inversely with the number of modes that go into the estimation of P(k). The other is the trispectrum of the signal. Using the simulated 21-cm Signal Ensemble, an ensemble of the Randomized Signal, and Ensembles of Gaussian Random Ensembles, we have quantified the effect of the trispectrum on the error variance C_II. We find that its relative contribution is comparable to or larger than that of the Gaussian term for the k range 0.3 ≤ k ≤ 1.0 Mpc⁻¹, and can even be ˜200 times larger at k ˜ 5 Mpc⁻¹. We also establish that the off-diagonal terms of C_ij have statistically significant non-zero values which arise purely from the trispectrum. This further signifies that the errors in different k modes are not independent. We find a strong correlation between the errors at large k values (≥0.5 Mpc⁻¹), and a weak correlation between the smallest and largest k values. There is also a small anticorrelation between the errors in the smallest and intermediate k values. These results are relevant for the k range that will be probed by the current and upcoming EoR 21-cm experiments.
Robust Linear Models for Cis-eQTL Analysis.
Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C
2015-01-01
Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce the adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
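A minimal sketch of the contrast between an ordinary and a robust linear model for a single gene/SNP pair is given below, assuming heavy-tailed expression noise; the Huber M-estimator stands in for whatever robust estimator the paper actually uses, and the dosage coding, covariate, and effect sizes are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
dosage = rng.integers(0, 3, size=n).astype(float)     # allelic dosage 0/1/2
covariate = rng.normal(size=n)                        # e.g. a known confounder
noise = rng.standard_t(df=2, size=n)                  # heavy-tailed (non-Gaussian) errors
expression = 0.3 * dosage + 0.5 * covariate + noise

X = sm.add_constant(np.column_stack([dosage, covariate]))
ols = sm.OLS(expression, X).fit()                               # conventional linear model
rlm = sm.RLM(expression, X, M=sm.robust.norms.HuberT()).fit()   # robust linear model
print("OLS dosage beta +/- se: %.3f +/- %.3f" % (ols.params[1], ols.bse[1]))
print("RLM dosage beta +/- se: %.3f +/- %.3f" % (rlm.params[1], rlm.bse[1]))
```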
Some error bounds for K-iterated Gaussian recursive filters
NASA Astrophysics Data System (ADS)
Cuomo, Salvatore; Galletti, Ardelio; Giunta, Giulio; Marcellino, Livia
2016-10-01
Recursive filters (RFs) have achieved a central role in several research fields over the last few years. For example, they are used in image processing, in data assimilation and in electrocardiogram denoising. In particular, among RFs, the Gaussian RFs are an efficient computational tool for approximating Gaussian-based convolutions and are suitable for digital image processing and applications of scale-space theory. As is common knowledge, Gaussian RFs applied to signals with support in a finite domain generate distortions and artifacts, mostly localized at the boundaries. Heuristic and theoretical improvements have been proposed in the literature to deal with this issue (namely, boundary conditions). They include the case in which a Gaussian RF is applied more than once, i.e. the so-called K-iterated Gaussian RFs. In this paper, starting from a summary of the comprehensive mathematical background, we consider the case of the K-iterated first-order Gaussian RF and provide a study of its numerical stability and some component-wise theoretical error bounds.
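For concreteness, here is a minimal sketch of a K-iterated first-order Gaussian recursive filter as forward-backward exponential smoothing applied K times; the mapping from the target Gaussian sigma to the coefficient alpha and the boundary treatment analysed in the paper are not reproduced, and alpha is simply taken as an input.

```python
import numpy as np

def k_iterated_rf(signal, alpha, K):
    """Apply a first-order recursive smoother K times (forward and backward sweeps)."""
    s = np.asarray(signal, dtype=float).copy()
    for _ in range(K):
        # forward sweep: s[i] = alpha*s[i-1] + (1-alpha)*s[i]
        for i in range(1, s.size):
            s[i] = alpha * s[i - 1] + (1 - alpha) * s[i]
        # backward sweep (makes the overall response symmetric, i.e. zero phase)
        for i in range(s.size - 2, -1, -1):
            s[i] = alpha * s[i + 1] + (1 - alpha) * s[i]
    return s

noisy = np.random.default_rng(4).normal(size=200)
smoothed = k_iterated_rf(noisy, alpha=0.7, K=3)
print(noisy.std(), smoothed.std())   # the iterated filter strongly damps the noise
```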
On the equivalence of Gaussian elimination and Gauss-Jordan reduction in solving linear equations
NASA Technical Reports Server (NTRS)
Tsao, Nai-Kuan
1989-01-01
A novel general approach to round-off error analysis using the error complexity concepts is described. This is applied to the analysis of the Gaussian Elimination and Gauss-Jordan scheme for solving linear equations. The results show that the two algorithms are equivalent in terms of our error complexity measures. Thus the inherently parallel Gauss-Jordan scheme can be implemented with confidence if parallel computers are available.
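The paper's error-complexity analysis is not reproduced here, but a minimal sketch of the two schemes being compared may be useful: naive Gaussian elimination with back-substitution versus Gauss-Jordan reduction (no pivoting, illustration only).

```python
import numpy as np

def gaussian_elimination(A, b):
    """Forward elimination to upper-triangular form, then back-substitution."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

def gauss_jordan(A, b):
    """Reduce the augmented matrix all the way to the identity; no back-substitution."""
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = len(b)
    for k in range(n):
        M[k] /= M[k, k]
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]
    return M[:, -1]

A = np.array([[4.0, 1, 2], [1, 5, 1], [2, 1, 6]])
b = np.array([1.0, 2, 3])
print(np.allclose(gaussian_elimination(A, b), np.linalg.solve(A, b)))
print(np.allclose(gauss_jordan(A, b), np.linalg.solve(A, b)))
```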
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e. the Kalman filter and its derivatives) provide state estimates based only on the second moments of the state and measurement errors that are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A particle filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of the full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied to the estimation and propagation of a highly eccentric orbit, and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
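A minimal bootstrap particle filter for a scalar toy model is sketched below to make the propagate-weight-resample loop concrete; the paper applies the idea to a highly eccentric orbit with far richer dynamics, so every model and number here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n_steps, n_particles = 50, 2000

# simulate a scalar truth and noisy measurements
x_true, xs, ys = 0.0, [], []
for _ in range(n_steps):
    x_true = 0.95 * x_true + rng.normal(0, 0.5)      # process noise (could be non-Gaussian)
    xs.append(x_true)
    ys.append(x_true + rng.normal(0, 1.0))           # Gaussian measurement noise, std = 1

particles = rng.normal(0, 1, n_particles)
estimates = []
for y in ys:
    particles = 0.95 * particles + rng.normal(0, 0.5, n_particles)   # propagate particles
    w = np.exp(-0.5 * (y - particles) ** 2)                          # weight by likelihood
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)] # resample
    estimates.append(particles.mean())                               # point estimate from PDF

rms = np.sqrt(np.mean((np.array(estimates) - np.array(xs)) ** 2))
print("RMS error of the particle-filter mean:", round(rms, 3))
```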
Multi-pose facial correction based on Gaussian process with combined kernel function
NASA Astrophysics Data System (ADS)
Shi, Shuyan; Ji, Ruirui; Zhang, Fan
2018-04-01
In order to improve the recognition rate across various poses, this paper proposes a facial correction method based on a Gaussian process, which builds a nonlinear regression model between the frontal and side faces using a combined kernel function. Face images with horizontal angles from -45° to +45° can be properly corrected to frontal faces. Finally, a support vector machine is employed for face recognition. Experiments on the CAS-PEAL-R1 face database show that the Gaussian process can weaken the influence of pose changes and improve the accuracy of face recognition to a certain extent.
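The abstract does not give the kernel combination, so the sketch below only illustrates the general pattern with synthetic data: a Gaussian process regression whose kernel is a sum of a linear (DotProduct) and a nonlinear (RBF) term maps "side-pose" features to a "frontal" target. All names, shapes, and kernel choices are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, DotProduct, WhiteKernel

rng = np.random.default_rng(6)
X_side = rng.normal(size=(200, 5))                       # stand-in side-pose features
Y_front = (X_side @ rng.normal(size=5)                   # linear part
           + np.sin(X_side[:, 0])                        # nonlinear part
           + rng.normal(0, 0.05, 200))                   # observation noise

kernel = DotProduct() + RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_side, Y_front)
print("corrected (predicted frontal) value:", gpr.predict(X_side[:1]))
```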
Effect of lensing non-Gaussianity on the CMB power spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Antony; Pratten, Geraint, E-mail: antony@cosmologist.info, E-mail: geraint.pratten@gmail.com
2016-12-01
Observed CMB anisotropies are lensed, and the lensed power spectra can be calculated accurately assuming the lensing deflections are Gaussian. However, the lensing deflections are actually slightly non-Gaussian due to both non-linear large-scale structure growth and post-Born corrections. We calculate the leading correction to the lensed CMB power spectra from the non-Gaussianity, which is determined by the lensing bispectrum. Assuming no primordial non-Gaussianity, the lowest-order result gives ∼ 0.3% corrections to the BB and EE polarization spectra on small scales. However we show that the effect on EE is reduced by about a factor of two by higher-order Gaussian lensing smoothing, rendering the total effect safely negligible for the foreseeable future. We give a simple analytic model for the signal expected from skewness of the large-scale lensing field; the effect is similar to a net demagnification and hence a small change in acoustic scale (and therefore out of phase with the dominant lensing smoothing that predominantly affects the peaks and troughs of the power spectrum).
Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang
2014-06-01
We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distributions to deal with co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.
BAO from Angular Clustering: Optimization and Mitigation of Theoretical Systematics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crocce, M.; et al.
We study the theoretical systematics and optimize the methodology in Baryon Acoustic Oscillations (BAO) detections using the angular correlation function with tomographic bins. We calibrate and optimize the pipeline for the Dark Energy Survey Year 1 dataset using 1800 mocks. We compare the BAO fitting results obtained with three estimators: the Maximum Likelihood Estimator (MLE), Profile Likelihood, and Markov Chain Monte Carlo. The MLE method yields the least bias in the fit results (bias/spread $\sim 0.02$) and the error bar derived is the closest to the Gaussian results (1% from the 68% Gaussian expectation). When there is a mismatch between the template and the data, either due to an incorrect fiducial cosmology or photo-$z$ error, the MLE again gives the least-biased results. The BAO angular shift estimated from the sound horizon and the angular diameter distance agrees with the numerical fit. Various analysis choices are further tested: the number of redshift bins, cross-correlations, and angular binning. We propose two methods to correct the mock covariance when the final sample properties are slightly different from those used to create the mock. We show that the sample changes can be accommodated with the help of the Gaussian covariance matrix or, more effectively, using the eigenmode expansion of the mock covariance. The eigenmode expansion is significantly less susceptible to statistical fluctuations relative to direct measurements of the covariance matrix because the number of free parameters is substantially reduced [$p$ parameters versus $p(p+1)/2$ from direct measurement].
A brain MRI bias field correction method created in the Gaussian multi-scale space
NASA Astrophysics Data System (ADS)
Chen, Mingsheng; Qin, Mingxin
2017-07-01
A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolution of the inhomogeneous MR image with a two-dimensional Gaussian function. In this multi-scale Gaussian space, the method retrieves the image details from the difference between the original image and the convolved image. It then obtains an image whose inhomogeneity is eliminated by a weighted sum of the image details in each layer of the space. Next, the bias-field-corrected MR image is retrieved after a gamma correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
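The sketch below is only a rough rendering of the multi-scale idea under stated assumptions: blur the image at several Gaussian scales, keep the detail images (original minus blurred), recombine them with weights, and finish with a gamma correction. The scales, weights, and gamma value are invented, and no claim is made that this matches the paper's exact recombination rule.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bias_correct(image, sigmas=(2, 4, 8, 16), weights=None, gamma=0.8):
    """Recombine multi-scale detail images and apply a gamma correction (illustrative)."""
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    details = [image - gaussian_filter(image, s) for s in sigmas]   # per-scale details
    recombined = sum(w * d for w, d in zip(weights, details))
    recombined -= recombined.min()                                  # rescale to [0, 1]
    recombined /= recombined.max() + 1e-12
    return recombined ** gamma                                      # contrast/brightness boost

corrected = bias_correct(np.random.default_rng(7).random((64, 64)))
print(corrected.shape, corrected.min(), corrected.max())
```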
Leading non-Gaussian corrections for diffusion orientation distribution function.
Jensen, Jens H; Helpern, Joseph A; Tabesh, Ali
2014-02-01
An analytical representation of the leading non-Gaussian corrections for a class of diffusion orientation distribution functions (dODFs) is presented. This formula is constructed from the diffusion and diffusional kurtosis tensors, both of which may be estimated with diffusional kurtosis imaging (DKI). By incorporating model-independent non-Gaussian diffusion effects, it improves on the Gaussian approximation used in diffusion tensor imaging (DTI). This analytical representation therefore provides a natural foundation for DKI-based white matter fiber tractography, which has potential advantages over conventional DTI-based fiber tractography in generating more accurate predictions for the orientations of fiber bundles and in being able to directly resolve intra-voxel fiber crossings. The formula is illustrated with numerical simulations for a two-compartment model of fiber crossings and for human brain data. These results indicate that the inclusion of the leading non-Gaussian corrections can significantly affect fiber tractography in white matter regions, such as the centrum semiovale, where fiber crossings are common. 2013 John Wiley & Sons, Ltd.
Generalized expectation-maximization segmentation of brain MR images
NASA Astrophysics Data System (ADS)
Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.
2006-03-01
Manual segmentation of medical images is impractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, a multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three main iterative stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
Gaussian Hypothesis Testing and Quantum Illumination.
Wilde, Mark M; Tomamichel, Marco; Lloyd, Seth; Berta, Mario
2017-09-22
Quantum hypothesis testing is one of the most basic tasks in quantum information theory and has fundamental links with quantum communication and estimation theory. In this paper, we establish a formula that characterizes the decay rate of the minimal type-II error probability in a quantum hypothesis test of two Gaussian states given a fixed constraint on the type-I error probability. This formula is a direct function of the mean vectors and covariance matrices of the quantum Gaussian states in question. We give an application to quantum illumination, which is the task of determining whether there is a low-reflectivity object embedded in a target region with a bright thermal-noise bath. For the asymmetric-error setting, we find that a quantum illumination transmitter can achieve an error probability exponent stronger than a coherent-state transmitter of the same mean photon number, and furthermore, that it requires far fewer trials to do so. This occurs when the background thermal noise is either low or bright, which means that a quantum advantage is even easier to witness than in the symmetric-error setting because it occurs for a larger range of parameters. Going forward from here, we expect our formula to have applications in settings well beyond those considered in this paper, especially to quantum communication tasks involving quantum Gaussian channels.
NASA Astrophysics Data System (ADS)
Schellenberg, Graham; Stortz, Greg; Goertzen, Andrew L.
2016-02-01
A typical positron emission tomography detector is comprised of a scintillator crystal array coupled to a photodetector array or other position-sensitive detector. Such detectors, which use light sharing to read out crystal elements, require the creation of a crystal lookup table (CLUT) that maps the detector response to the crystal of interaction based on the x-y position of the event calculated through Anger-type logic. It is vital for system performance that these CLUTs be accurate so that the location of events can be accurately identified and so that crystal-specific corrections, such as energy windowing or time alignment, can be applied. While using manual segmentation of the flood image to create the CLUT is a simple and reliable approach, it is both tedious and time consuming for systems with large numbers of crystal elements. In this work we describe the development of an automated algorithm for CLUT generation that uses a Gaussian mixture model paired with thin plate splines (TPS) to iteratively fit a crystal layout template that includes the crystal numbering pattern. Starting from a region of stability, Gaussians are individually fit to data corresponding to crystal locations while simultaneously updating a TPS for predicting future Gaussian locations at the edge of a region of interest that grows as individual Gaussians converge to crystal locations. The algorithm was tested with flood image data collected from 16 detector modules, each consisting of a 409-crystal dual-layer offset LYSO crystal array read out by a 32-pixel SiPM array. For these detector flood images, depending on user-defined input parameters, the algorithm runtime ranged from 17.5 to 82.5 s per detector on a single core of an Intel i7 processor. The method maintained an accuracy above 99.8% across all tests, with the majority of errors being localized to error-prone corner regions. This method can be easily extended for use with other detector types through adjustment of the initial template model used.
Estimation of chaotic coupled map lattices using symbolic vector dynamics
NASA Astrophysics Data System (ADS)
Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya
2010-01-01
In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original symbolic-vector-dynamics-based method was proposed for initial condition estimation in an additive white Gaussian noise environment. The estimation precision of this method is determined by the symbolic errors of the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method. We correct symbolic errors using the backward vector and the values estimated with different symbols, and thus the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of a coupled map lattice exactly in both noisy and noise-free cases. Therefore, we provide novel analytical techniques for understanding turbulence in coupled map lattices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bachman, Daniel; Chen, Zhijiang; Wang, Christopher
Phase errors caused by fabrication variations in silicon photonic integrated circuits are an important problem, which negatively impacts device yield and performance. This study reports our recent progress in the development of a method for permanent, post-fabrication phase error correction of silicon photonic circuits based on femtosecond laser irradiation. Using a beam-shaping technique, we achieve a 14-fold enhancement in the phase tuning resolution of the method with a Gaussian-shaped beam compared to a top-hat beam. The large improvement in the tuning resolution makes the femtosecond laser method potentially useful for very fine phase trimming of silicon photonic circuits. Finally, we also show that femtosecond laser pulses can directly modify silicon photonic devices through a SiO2 cladding layer, making it the only permanent post-fabrication method that can tune silicon photonic circuits protected by an oxide cladding.
Uncertainties in extracted parameters of a Gaussian emission line profile with continuum background.
Minin, Serge; Kamalabadi, Farzad
2009-12-20
We derive analytical equations for uncertainties in parameters extracted by nonlinear least-squares fitting of a Gaussian emission function with an unknown continuum background component in the presence of additive white Gaussian noise. The derivation is based on the inversion of the full curvature matrix (equivalent to the Fisher information matrix) of the least-squares error, χ², in a four-variable fitting parameter space. The derived uncertainty formulas (equivalent to Cramér-Rao error bounds) are found to be in good agreement with the numerically computed uncertainties from a large ensemble of simulated measurements. The derived formulas can be used for estimating minimum achievable errors for a given signal-to-noise ratio and for investigating some aspects of measurement setup trade-offs and optimization. While the intended application is Fabry-Perot spectroscopy for wind and temperature measurements in the upper atmosphere, the derivation is generic and applicable to other spectroscopy problems with a Gaussian line shape.
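The paper's closed-form expressions are not reproduced here, but the following numerical sketch shows the same construction under the stated noise model: build the curvature (Fisher information) matrix from the Jacobian of a Gaussian-plus-constant-background model and invert it to get Cramér-Rao lower bounds on the four parameters. The grid, parameter values, and noise level are arbitrary.

```python
import numpy as np

def crlb(A, x0, w, B, x, sigma_n):
    """Cramér-Rao bounds for f(x) = A*exp(-(x-x0)^2/(2 w^2)) + B with white noise sigma_n."""
    g = np.exp(-(x - x0) ** 2 / (2 * w ** 2))
    J = np.column_stack([
        g,                                   # d f / d A
        A * g * (x - x0) / w ** 2,           # d f / d x0
        A * g * (x - x0) ** 2 / w ** 3,      # d f / d w
        np.ones_like(x),                     # d f / d B (continuum background)
    ])
    fisher = J.T @ J / sigma_n ** 2          # curvature (Fisher information) matrix
    return np.sqrt(np.diag(np.linalg.inv(fisher)))   # lower bounds on parameter std. devs.

x = np.linspace(-5, 5, 101)
print("sigma(A), sigma(x0), sigma(w), sigma(B) >=", np.round(crlb(10, 0, 1, 2, x, 0.5), 3))
```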
Robust estimation of adaptive tensors of curvature by tensor voting.
Tong, Wai-Shun; Tang, Chi-Keung
2005-03-01
Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (sign of Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision, which also rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm has been proven with detailed quantitative experiments, performing better in a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.
Influence of Ametropia and Its Correction on Measurement of Accommodation.
Bernal-Molina, Paula; Vargas-Martín, Fernando; Thibos, Larry N; López-Gil, Norberto
2016-06-01
Amplitude of accommodation (AA) is reportedly greater for myopic eyes than for hyperopic eyes. We investigated potential explanations for this difference. Analytical calculations and computer ray tracing were performed on two schematic eye models of axial ametropia. Using paraxial and nonparaxial approaches, AA was specified for the naked and the corrected eye using the anterior corneal surface as the reference plane. Assuming that axial myopia is due entirely to an increase in vitreous chamber depth, AA increases with the amount of myopia for two reasons that have not always been taken into account. The first is the choice of reference location for specifying refractive error and AA in diopters. When specified relative to the cornea, AA increases with the degree of myopia more than when specified relative to the eye's first Gaussian principal plane. The second factor is movement of the eye's second Gaussian principal plane toward the retina during accommodation, which has a larger dioptric effect in shorter eyes. Using the corneal plane (placed at the corneal vertex) as the reference plane for specifying accommodation, AA depends slightly on the axial length of the eye's vitreous chamber. This dependency can be reduced significantly by using a reference plane located 4 mm posterior to the corneal plane. A simple formula is provided to help clinicians and researchers obtain a value of AA that closely reflects power changes of the crystalline lens, independent of axial ametropia and its correction with lenses.
Device and method for creating Gaussian aberration-corrected electron beams
McMorran, Benjamin; Linck, Martin
2016-01-19
Electron beam phase gratings have phase profiles that produce a diffracted beam having a Gaussian or other selected intensity profile. Phase profiles can also be selected to correct or compensate electron lens aberrations. Typically, a low diffraction order produces a suitable phase profile, and other orders are discarded.
Acquisition, representation, and transfer of models of visuo-motor error
Zhang, Hang; Kulsa, Mila Kirstie C.; Maloney, Laurence T.
2015-01-01
We examined how human subjects acquire and represent models of visuo-motor error and how they transfer information about visuo-motor error from one task to a closely related one. The experiment consisted of three phases. In the training phase, subjects threw beanbags underhand towards targets displayed on a wall-mounted touch screen. The distribution of their endpoints was a vertically elongated bivariate Gaussian. In the subsequent choice phase, subjects repeatedly chose which of two targets varying in shape and size they would prefer to attempt to hit. Their choices allowed us to investigate their internal models of visuo-motor error distribution, including the coordinate system in which they represented visuo-motor error. In the transfer phase, subjects repeated the choice phase from a different vantage point, the same distance from the screen but with the throwing direction shifted 45°. From the new vantage point, visuo-motor error was effectively expanded horizontally by . We found that subjects incorrectly assumed an isotropic distribution in the choice phase but that the anisotropy they assumed in the transfer phase agreed with an objectively correct transfer. We also found that the coordinate system used in coding two-dimensional visuo-motor error in the choice phase was effectively one-dimensional. PMID:26057549
Gaussian process regression for sensor networks under localization uncertainty
Jadaliha, M.; Xu, Yunfei; Choi, Jongeun; Johnson, N.S.; Li, Weiming
2013-01-01
In this paper, we formulate Gaussian process regression with observations under localization uncertainty due to resource-constrained sensor networks. In our formulation, the effects of observations, measurement noise, localization uncertainty, and prior distributions are all correctly incorporated in the posterior predictive statistics. The analytically intractable posterior predictive statistics are approximated by two techniques, viz., Monte Carlo sampling and Laplace's method. Such approximation techniques have been carefully tailored to our problems, and their approximation error and complexity are analyzed. A simulation study demonstrates that the proposed approaches perform much better than approaches that do not properly consider the localization uncertainty. Finally, we have applied the proposed approaches to experimentally collected real data from a dye concentration field over a section of a river and a temperature field of an outdoor swimming pool to provide proof-of-concept tests and to evaluate the proposed schemes in real situations. In both simulation and experimental results, the proposed methods outperform the quick-and-dirty solutions often used in practice.
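The sketch below illustrates only the Monte Carlo flavour of the approximation, with invented data and kernels: sensor locations are repeatedly perturbed according to their assumed localization uncertainty, a GP prediction is made for each draw, and the predictions are averaged. It is not the authors' formulation, in which the uncertain locations enter the posterior predictive statistics directly.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(9)
true_loc = np.sort(rng.uniform(0, 10, 15))[:, None]       # true (unknown) sensor positions
y = np.sin(true_loc).ravel() + rng.normal(0, 0.05, 15)    # field measurements
loc_sigma = 0.3                                           # assumed localization uncertainty
x_star = np.linspace(0, 10, 50)[:, None]                  # prediction grid

preds = []
for _ in range(50):                                       # Monte Carlo over location draws
    loc_sample = true_loc + rng.normal(0, loc_sigma, true_loc.shape)
    gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01)).fit(loc_sample, y)
    preds.append(gp.predict(x_star))

mc_mean = np.mean(preds, axis=0)                          # approximate predictive mean
print(np.round(mc_mean[:5], 3))
```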
NASA Astrophysics Data System (ADS)
Schmitz, Gunnar; Christiansen, Ove
2018-06-01
We study how geometry optimizations that rely on numerical gradients can be accelerated by means of Gaussian Process Regression (GPR). The GPR interpolates a local potential energy surface on which the structure is optimized. It is found to be efficient to combine results at a low computational level (HF or MP2) with the GPR-calculated gradient of the difference between the low-level method and the target method, which in this study is a variant of explicitly correlated Coupled Cluster Singles and Doubles with perturbative Triples correction, CCSD(F12*)(T). Overall convergence is achieved if both the potential and the geometry are converged. Compared to numerical gradient-based algorithms, the number of required single-point calculations is reduced. Although the interpolation introduces an error, the optimized structures are sufficiently close to the minimum of the target level of theory, meaning that the reference and predicted minima differ energetically only in the μEh regime.
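A one-dimensional toy sketch of the idea follows: a GP is trained on the difference between a cheap and an expensive potential at a few points, and the corrected surface cheap(x) + GP(x) is then minimized. Both potentials are made-up analytic functions standing in for HF/MP2 and CCSD(F12*)(T) energies, and the kernel and point counts are arbitrary.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from scipy.optimize import minimize_scalar

cheap = lambda x: (x - 1.0) ** 2                              # stand-in low-level potential
expensive = lambda x: (x - 1.2) ** 2 + 0.1 * np.sin(3 * x)    # stand-in target potential

X = np.linspace(0, 2.5, 8)[:, None]                           # a few expensive single points
dE = expensive(X.ravel()) - cheap(X.ravel())                  # difference to be learned
gp = GaussianProcessRegressor(RBF(0.5), normalize_y=True).fit(X, dE)

# optimize the corrected surface cheap(x) + GP-predicted correction
corrected = lambda x: cheap(x) + gp.predict(np.atleast_2d(x))[0]
x_min = minimize_scalar(corrected, bounds=(0, 2.5), method="bounded").x
print("minimum of corrected surface:", round(x_min, 3))
```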
Non-linear matter power spectrum covariance matrix errors and cosmological parameter uncertainties
NASA Astrophysics Data System (ADS)
Blot, L.; Corasaniti, P. S.; Amendola, L.; Kitching, T. D.
2016-06-01
The covariance of the matter power spectrum is a key element of the analysis of galaxy clustering data. Independent realizations of observational measurements can be used to sample the covariance; nevertheless, statistical sampling errors will propagate into the cosmological parameter inference, potentially limiting the capabilities of the upcoming generation of galaxy surveys. The impact of these errors as a function of the number of realizations has previously been evaluated for Gaussian distributed data. However, non-linearities in the late-time clustering of matter cause departures from Gaussian statistics. Here, we address the impact of non-Gaussian errors on the sample covariance and precision matrix errors using a large ensemble of N-body simulations. In the range of modes where finite volume effects are negligible (0.1 ≲ k [h Mpc⁻¹] ≲ 1.2), we find deviations of the variance of the sample covariance with respect to Gaussian predictions above ˜10 per cent at k > 0.3 h Mpc⁻¹. Over the entire range these reduce to about ˜5 per cent for the precision matrix. Finally, we perform a Fisher analysis to estimate the effect of covariance errors on the cosmological parameter constraints. In particular, assuming Euclid-like survey characteristics, we find that a number of independent realizations larger than 5000 is necessary to reduce the contribution of sampling errors to the cosmological parameter uncertainties to the subpercent level. We also show that restricting the analysis to large scales k ≲ 0.2 h Mpc⁻¹ results in a considerable loss of constraining power, while using the linear covariance to include smaller scales leads to an underestimation of the errors on the cosmological parameters.
Erratum: ``A Low-Latitude Halo Stream around the Milky Way'' (ApJ, 588, 824 [2003])
NASA Astrophysics Data System (ADS)
Yanny, Brian; Newberg, Heidi Jo; Grebel, Eva K.; Kent, Steve; Odenkirchen, Michael; Rockosi, Connie M.; Schlegel, David; Subbarao, Mark; Brinkmann, Jon; Fukugita, Masataka; Ivezic, Željko; Lamb, Don Q.; Schneider, Donald P.; York, Donald G.
2004-04-01
The zero points of the stellar templates used to measure radial velocity in the main body of this paper have been found to be systematically in error. Correction of the radial velocities significantly increases the derived circular velocity of the stars in the planar stream, to 215+/-25 km s-1. The velocity dispersion of the stream is somewhat lower than earlier results with the modified analysis. Two types of stars were studied in this paper. The original template for stars of type F, used to study the ``Monoceros arc'' Galactic structure, was incorrectly zero-pointed by 20 km s-1. The original template for stars of type A, used to measure the Sagittarius dwarf tidal stream, produced radial velocities systematically shifted by 49 km s-1. In both cases, the sign of the error is such that for nearly all stars, the correct values of the heliocentric radial velocities are lower than those originally quoted. A cross-correlation of Sloan Digital Sky Survey (SDSS) spectra with templates from the ELODIE survey (C. Soubiran, D. Katz, & R. Cayrel, ApJ, 588, 824 [2003]) was performed to find new radial velocities for each star (D. Schlegel 2003, private communication). This showed that our radial velocities were systematically shifted by an amount that depends on the type of the star observed and the original template against which it was cross-correlated. To determine the measurement error with the new templates, we identified 445 F-type stars and 1109 A-type stars that had been observed twice by the SDSS. These stars were chosen with the color and magnitude criteria used to select stars in Figures 6 and 9. The errors in the F stars were a good match to a Gaussian with a σ of 28 km s-1. The errors in the A star comparison were significantly non-Gaussian, with large tails. A χ2 fit to a Gaussian (similar to the technique we use in this paper to measure the width of the streams) yielded a σ of 35 km s-1. Dividing by sqrt(2) to reflect two independent measurements, we derive a random error of 20 km s-1 for F stars and 25 km s-1 for A stars. The template matching errors in these blue (type A) stars using ELODIE spectral templates are somewhat larger than the errors with our previous analysis, but we found it useful to use ELODIE spectral templates to ensure that the zero points were accurate. We also examined the measured stellar stream dispersions. Electronic versions of Figures 2, 6, and 9 of our paper are presented here with the corrected radial velocity determinations. The data were selected as described in the original paper. Table 1 has been regenerated in its entirety, replacing columns (8) and (10). The radial velocity in column (8) has been replaced with the radial velocity determined from cross-correlation with ELODIE templates. The status flag in column (10) now indicates stars which were used to generate Figure 2. A ``0'' indicates that the star was either outside the color box or had a high cross-correlation error, and a ``1'' indicates that the star was used to fit stream properties. Table 2 has been regenerated using the new results as well. Column (10) has been added to indicate the estimated number of spectra in the stream component. These numbers are used to compute the error in radial velocity, as described in the original paper. Column (11) shows the corrected circulation velocities, which are now consistent with those given in J. D. Crane, S. R. Majewski, H. J. Rocha-Pinto, P. M. Frinchaboy, M. F. Skrutskie, & D. R. Law (ApJ, 588, 824 [2003]). 
Note that the velocity dispersions of the planar stream are even tighter than originally measured, strengthening the case that the motion is coherent. Note that the mean velocity of the Sagittarius stream in the direction (l,b)=(165deg,-55deg) is -160 km s-1, in line with recent simulations by D. Martinez-Delgado, M. A. Gomez-Flechoso, A. Aparicio, & R. Carrera (2004, ApJ, in press [astro-ph/0308009]). We would like to acknowledge Steve Majewski, who initially pointed out to us that radial velocities for stars he had measured in the halo streams were different from our radial velocities by 20-50 km s-1 (J. D. Crane, S. R. Majewski, H. J. Rocha-Pinto, P. M. Frinchaboy, M. F. Skrutskie, & D. R. Law, ApJ, 588, 824 [2003]). We also acknowledge T. Beers, C. Prieto, and R. Wilhelm for an independent radial velocity analysis, with which we could compare our measured radial velocities.
Spainhour, John Christian G; Janech, Michael G; Schwacke, John H; Velez, Juan Carlos Q; Ramakrishnan, Viswanathan
2014-01-01
Matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) coupled with stable isotope standards (SIS) has been used to quantify native peptides. This peptide quantification by MALDI-TOF approach has difficulty quantifying samples containing peptides with ion currents in overlapping spectra. In these overlapping spectra the currents sum together, which modifies the peak heights and makes normal SIS estimation problematic. An approach using Gaussian mixtures, based on known physical constants, to model the isotopic cluster of a known compound is proposed here. The characteristics of this approach are examined for single and overlapping compounds. The approach is compared to two commonly used SIS quantification methods for single compounds, namely the peak intensity method and the Riemann sum area under the curve (AUC) method. For studying the characteristics of the Gaussian mixture method, Angiotensin II, Angiotensin-2-10, and Angiotensin-1-9 and their associated SIS peptides were used. The findings suggest that the Gaussian mixture method has characteristics similar to those of the two comparison methods when estimating the quantity of isolated isotopic clusters for single compounds. All three methods were tested using MALDI-TOF mass spectra collected for peptides of the renin-angiotensin system. The Gaussian mixture method accurately estimated the native-to-labeled ratio of several isolated angiotensin peptides (5.2% error in ratio estimation), with estimation errors similar to those calculated using the peak intensity and Riemann sum AUC methods (5.9% and 7.7%, respectively). For overlapping angiotensin peptides (where the other two methods are not applicable), the estimation error of the Gaussian mixture method was 6.8%, which is within the acceptable range. In summary, for single compounds the Gaussian mixture method is equivalent or marginally superior to the existing methods of peptide quantification and is capable of quantifying overlapping (convolved) peptides within an acceptable margin of error.
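As a rough sketch of the modelling idea (not the paper's constants), the example below describes two overlapping isotopic clusters, a native peptide and its isotope-labeled standard, as sums of Gaussians with fixed peak spacings and relative abundances, then estimates the native/labeled ratio from the fitted amplitudes. The spacings, widths, abundances, and the +2 Da label shift are all invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def cluster(mz, a, m0, w):
    """Three-peak isotopic cluster: fixed 1 Da spacing and fixed relative abundances."""
    rel = np.array([1.0, 0.6, 0.25])
    return sum(a * r * np.exp(-(mz - (m0 + k)) ** 2 / (2 * w ** 2)) for k, r in enumerate(rel))

def two_clusters(mz, a_nat, a_sis, m0, w):
    # labeled (SIS) cluster assumed shifted by +2 Da relative to the native one
    return cluster(mz, a_nat, m0, w) + cluster(mz, a_sis, m0 + 2.0, w)

mz = np.linspace(1045, 1055, 400)
truth = two_clusters(mz, 3.0, 6.0, 1046.5, 0.08)
noisy = truth + np.random.default_rng(10).normal(0, 0.05, mz.size)

popt, _ = curve_fit(two_clusters, mz, noisy, p0=[1, 1, 1046.4, 0.1])
print("estimated native/labeled ratio:", round(popt[0] / popt[1], 3))   # true ratio is 0.5
```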
Dynamic 2D self-phase-map Nyquist ghost correction for simultaneous multi-slice echo planar imaging.
Yarach, Uten; Tung, Yi-Hang; Setsompop, Kawin; In, Myung-Ho; Chatnuntawech, Itthi; Yakupov, Renat; Godenschweger, Frank; Speck, Oliver
2018-02-09
To develop a reconstruction pipeline that intrinsically accounts for both simultaneous multislice echo planar imaging (SMS-EPI) reconstruction and dynamic slice-specific Nyquist ghosting correction in time-series data. After 1D slice-group average phase correction, the separate-polarity (i.e., even and odd echo) SMS-EPI data were unaliased by slice-GRAPPA (GeneRalized Autocalibrating Partially Parallel Acquisition). Both the slice-unaliased even and odd echoes were jointly reconstructed using a model-based framework, extended for SMS-EPI reconstruction, that estimates a 2D self-phase map, corrects dynamic slice-specific phase errors, and combines data from all coils and echoes to obtain the final images. The percentage ghost-to-signal ratios (%GSRs) and their temporal variations for MB3Ry2 with a field-of-view/4 shift in a human brain obtained by the proposed dynamic 2D and standard 1D phase corrections were 1.37 ± 0.11 and 2.66 ± 0.16, respectively. Even with a large regularization parameter λ applied in the proposed reconstruction, the smoothing effect in fMRI activation maps was comparable to a very small Gaussian kernel of size 1 × 1 × 1 mm³. The proposed reconstruction pipeline reduced slice-specific phase errors in SMS-EPI, resulting in a reduction of the GSR. It is applicable to functional MRI studies because the smoothing effect caused by the regularization parameter selection can be minimal in a blood-oxygen-level-dependent activation map. © 2018 International Society for Magnetic Resonance in Medicine.
Relativistic corrections and non-Gaussianity in radio continuum surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maartens, Roy; Zhao, Gong-Bo; Bacon, David
Forthcoming radio continuum surveys will cover large volumes of the observable Universe and will reach to high redshifts, making them potentially powerful probes of dark energy, modified gravity and non-Gaussianity. We consider the continuum surveys with LOFAR, WSRT and ASKAP, and examples of continuum surveys with the SKA. We extend recent work on these surveys by including redshift space distortions and lensing convergence in the radio source auto-correlation. In addition we compute the general relativistic (GR) corrections to the angular power spectrum. These GR corrections to the standard Newtonian analysis of the power spectrum become significant on scales near and beyond the Hubble scale at each redshift. We find that the GR corrections are at most percent-level in LOFAR, WODAN and EMU surveys, but they can produce O(10%) changes for high enough sensitivity SKA continuum surveys. The signal is however dominated by cosmic variance, and multiple-tracer techniques will be needed to overcome this problem. The GR corrections are suppressed in continuum surveys because of the integration over redshift; we expect that GR corrections will be enhanced for future SKA HI surveys in which the source redshifts will be known. We also provide predictions for the angular power spectra in the case where the primordial perturbations have local non-Gaussianity. We find that non-Gaussianity dominates over GR corrections, and rises above cosmic variance when f_NL ≳ 5 for SKA continuum surveys.
Robustness of composite pulse sequences to time-dependent noise
NASA Astrophysics Data System (ADS)
Kabytayev, Chingiz; Green, Todd J.; Khodjasteh, Kaveh; Viola, Lorenza; Biercuk, Michael J.; Brown, Kenneth R.
2014-03-01
Quantum control protocols can minimize the effect of noise sources that reduce the quality of quantum operations. Originally developed for NMR, composite pulse sequences correct for unknown static control errors. We study these compensating pulses in the general case of time-varying Gaussian control noise using a filter-function approach and detailed numerics. Three different noise models were considered in this work: amplitude noise, detuning noise, and the simultaneous presence of both. Pulse sequences are shown to be robust to noise up to frequencies as high as ~10% of the Rabi frequency. The robustness of pulses designed for amplitude noise is explained using a geometric picture that follows naturally from the filter function. We also discuss future directions, including new pulses that correct for noise at specific frequencies. True J. Merrill and Kenneth R. Brown, arXiv:1203.6392v1, in press, Adv. Chem. Phys. (2013).
Robustly Aligning a Shape Model and Its Application to Car Alignment of Unknown Pose.
Li, Yan; Gu, Leon; Kanade, Takeo
2011-09-01
Precisely localizing in an image a set of feature points that form the shape of an object, such as a car or a face, is called alignment. Previous shape alignment methods attempted to fit a whole shape model to the observed data, based on the assumption of Gaussian observation noise and the associated regularization process. However, such an approach, though able to deal with Gaussian noise in feature detection, turns out not to be robust or precise because it is vulnerable to gross feature detection errors or outliers resulting from partial occlusions or spurious features from the background or neighboring objects. We address this problem by adopting a randomized hypothesis-and-test approach. First, a Bayesian inference algorithm is developed to generate a shape-and-pose hypothesis of the object from a partial shape, or a subset of feature points. For alignment, a large number of hypotheses are generated by randomly sampling subsets of feature points and then evaluated to find the one that minimizes the shape prediction error. This randomized subset-based matching can effectively handle outliers and recover the correct object shape. We apply this approach to a challenging data set of over 5,000 different-posed car images, spanning a wide variety of car types, lighting, background scenes, and partial occlusions. Experimental results demonstrate favorable improvements over previous methods in both accuracy and robustness.
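A minimal sketch of the randomized hypothesis-and-test idea, assuming a 2D similarity transform between the model shape and the detected feature points and a robust (median) score; this is an illustrative RANSAC-style stand-in, not the authors' Bayesian shape-and-pose inference:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform dst ~ s*R@src + t (Umeyama's method)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    cov = (dst - mu_d).T @ (src - mu_s) / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(2)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[1, 1] = -1
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / ((src - mu_s) ** 2).sum(1).mean()
    t = mu_d - s * R @ mu_s
    return s, R, t

def ransac_align(model, detected, n_hyp=500, subset=3, rng=np.random.default_rng(1)):
    """Randomized hypothesis-and-test: fit on random subsets, keep the best hypothesis."""
    best_err, best = np.inf, None
    for _ in range(n_hyp):
        idx = rng.choice(len(model), subset, replace=False)
        s, R, t = fit_similarity(model[idx], detected[idx])
        pred = model @ (s * R).T + t
        err = np.median(np.linalg.norm(pred - detected, axis=1))  # robust to outliers
        if err < best_err:
            best_err, best = err, (s, R, t)
    return best, best_err

# Toy example: a square-plus-centre shape with one grossly wrong detection (outlier).
model = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], float)
true_s, th = 2.0, 0.3
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
detected = model @ (true_s * R_true).T + np.array([3.0, -1.0])
detected[2] += np.array([5.0, 5.0])            # simulated gross detection error
sol, err = ransac_align(model, detected)
print("recovered scale:", sol[0], "median residual:", err)
```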
Linear Space-Variant Image Restoration of Photon-Limited Images
1978-03-01
levels of performance of the wavefront sensor. The parameter represents the residual rms wavefront error (measurement noise plus fitting error) ... known to be optimum only when the signal and noise are uncorrelated stationary random processes and when the noise statistics are Gaussian. In the ... regime of photon-limited imaging, the noise is non-Gaussian and signal-dependent, and it is therefore reasonable to assume that some form of linear
Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina
2016-09-01
The main problem for information reconciliation in continuous variable Quantum Key Distribution (QKD) at low Signal to Noise Ratio (SNR) is the quantization and assignment of labels to the samples of the Gaussian Random Variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization of the Gaussian samples in the very low SNR regime from an information theoretic point of view. We look at the problem of two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the bit strings resulting from this quantization. The quantization threshold for the Most Significant Bit (MSB) should be chosen to maximize the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced, in the sense that their entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond two, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significant-bit level are rather high, demanding very powerful error correcting codes. While our calculations and simulations show that the mutual information between the LSBs at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits. Hence, it is only by looking at the bits jointly that we are able to achieve a mutual information of 0.2217 bits, which is 75.8% of the maximum achievable. The implication is that only by coding both MSB and LSB jointly can we hope to get close to this 75.8% limit; hence, non-binary codes are essential to achieve acceptable performance.
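A minimal numerical sketch of the two-bit quantization problem: correlated Gaussian samples at -3 dB SNR are quantized with a sign (MSB) and a magnitude (LSB) bit, and the mutual information between Alice's and Bob's labels is estimated from the joint histogram while sweeping the magnitude threshold. The threshold scaling and bit-labelling conventions here are assumptions, not the paper's analytic expressions:

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (bits) from a joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def quantize_2bit(z, thr):
    """Label = 2*MSB + LSB with MSB the sign bit and LSB a magnitude bit (illustrative convention)."""
    msb = (z >= 0).astype(int)
    lsb = (np.abs(z) >= thr).astype(int)
    return 2 * msb + lsb

rng = np.random.default_rng(0)
n = 500_000
snr = 10 ** (-3.0 / 10)                           # -3 dB

x = rng.normal(0.0, 1.0, n)                       # Alice's Gaussian samples (unit power)
y = x + rng.normal(0.0, np.sqrt(1.0 / snr), n)    # Bob's noisy observations

best_mi, best_thr = 0.0, None
for thr in np.linspace(0.1, 2.0, 20):
    a = quantize_2bit(x, thr)
    # Scale Bob's magnitude threshold by his larger standard deviation (an assumption).
    b = quantize_2bit(y, thr * np.sqrt(1.0 + 1.0 / snr))
    joint = np.histogram2d(a, b, bins=[4, 4])[0] / n
    mi = mutual_information(joint)
    if mi > best_mi:
        best_mi, best_thr = mi, thr
print(f"best magnitude threshold ~{best_thr:.2f} sigma, I(A;B) ~{best_mi:.3f} bits/sample")
```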
NASA Astrophysics Data System (ADS)
Heavens, A. F.; Seikel, M.; Nord, B. D.; Aich, M.; Bouffanais, Y.; Bassett, B. A.; Hobson, M. P.
2014-12-01
The Fisher Information Matrix formalism (Fisher 1935) is extended to cases where the data are divided into two parts (X, Y), where the expectation value of Y depends on X according to some theoretical model, and X and Y both have errors with arbitrary covariance. In the simplest case, (X, Y) represent data pairs of abscissa and ordinate, in which case the analysis deals with the case of data pairs with errors in both coordinates, but X can be any measured quantities on which Y depends. The analysis applies for arbitrary covariance, provided all errors are Gaussian, and provided the errors in X are small, both in comparison with the scale over which the expected signal Y changes, and with the width of the prior distribution. This generalizes the Fisher Matrix approach, which normally only considers errors in the `ordinate' Y. In this work, we include errors in X by marginalizing over latent variables, effectively employing a Bayesian hierarchical model, and deriving the Fisher Matrix for this more general case. The methods here also extend to likelihood surfaces which are not Gaussian in the parameter space, and so techniques such as DALI (Derivative Approximation for Likelihoods) can be generalized straightforwardly to include arbitrary Gaussian data error covariances. For simple mock data and theoretical models, we compare to Markov Chain Monte Carlo experiments, illustrating the method with cosmological supernova data. We also include the new method in the FISHER4CAST software.
Dynamic imaging model and parameter optimization for a star tracker.
Yan, Jinyun; Jiang, Jie; Zhang, Guangjun
2016-03-21
Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
Feasibility study on the least square method for fitting non-Gaussian noise data
NASA Astrophysics Data System (ADS)
Xu, Wei; Chen, Wen; Liang, Yingjie
2018-02-01
This study investigates the feasibility of the least-squares method for fitting non-Gaussian noise data. We add different levels of two typical non-Gaussian noises, Lévy and stretched-Gaussian noise, to the exact values of selected functions, including linear, polynomial, and exponential equations, and the maximum absolute and mean-square errors are calculated for the different cases. Lévy and stretched-Gaussian distributions have many applications in fractional and fractal calculus. It is observed that the non-Gaussian noises are fitted less accurately than Gaussian noise, but the stretched-Gaussian cases perform better than the Lévy noise cases. It is stressed that the least-squares method is inapplicable to the non-Gaussian noise cases when the noise level is larger than 5%.
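A small illustration of the experiment described above, assuming a linear test function and using Cauchy noise (a heavy-tailed special case of the Lévy-stable family) as a stand-in for Lévy noise; the noise levels and error metrics are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y_exact = 2.0 * x + 1.0                        # a simple linear test function

def fit_and_errors(noise):
    """Ordinary least-squares line fit; return max-absolute and mean-square fit errors."""
    y = y_exact + noise
    coeffs = np.polyfit(x, y, 1)               # least-squares linear fit
    y_fit = np.polyval(coeffs, x)
    resid = y_fit - y_exact
    return np.max(np.abs(resid)), np.mean(resid ** 2)

level = 0.05 * np.mean(np.abs(y_exact))        # roughly a "5% noise level"
gauss = rng.normal(0, level, x.size)
cauchy = level * rng.standard_cauchy(x.size)   # heavy-tailed stand-in for Levy-stable noise

print("Gaussian noise : max|err|=%.3f  MSE=%.4f" % fit_and_errors(gauss))
print("Cauchy noise   : max|err|=%.3f  MSE=%.4f" % fit_and_errors(cauchy))
```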
NASA Astrophysics Data System (ADS)
Park, K.-R.; Kim, K.-h.; Kwak, S.; Svensson, J.; Lee, J.; Ghim, Y.-c.
2017-11-01
A feasibility study of direct spectral measurements of Thomson-scattered photons for fusion-grade plasmas is performed based on a forward model of the KSTAR Thomson scattering system. Expected spectra in the forward model are calculated based on the Selden function, including the relativistic polarization correction. Noise in the signal is modeled with photon noise and Gaussian electrical noise. Electron temperature and density are inferred using Bayesian probability theory. Based on the bias error, the full width at half maximum, and the entropy of the posterior distributions, spectral measurements are found to be feasible. Comparisons between spectrometer-based and polychromator-based Thomson scattering systems are performed with varying quantum efficiency and electrical noise levels.
Modified Gaussian influence function of deformable mirror actuators.
Huang, Linhai; Rao, Changhui; Jiang, Wenhan
2008-01-07
A new deformable mirror influence function based on a Gaussian function is introduced to analyze the fitting capability of a deformable mirror. Modified expressions for both the azimuthal and radial directions are presented based on an analysis of the residual error between a measured influence function and a Gaussian influence function. Using a simplex search method, we further compare the capability of the proposed influence function to fit data produced by a Zygo interferometer with that of a Gaussian influence function. The result indicates that the modified Gaussian influence function provides much better performance in data fitting.
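A minimal sketch of the simplex-search fitting step, assuming a generic super-Gaussian form exp(-|r/w|^p) as the "modified" influence function (the authors' exact azimuthal and radial expressions are not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic radial cross-section of a single actuator's influence function.
r = np.linspace(-3.0, 3.0, 121)                # normalized radial coordinate
measured = np.exp(-np.abs(r / 1.1) ** 1.8) + np.random.default_rng(0).normal(0, 0.01, r.size)

def modified_gaussian(r, amp, w, p):
    """Super-Gaussian form amp*exp(-|r/w|**p); p = 2 recovers the ordinary Gaussian."""
    return amp * np.exp(-np.abs(r / w) ** p)

def rms_residual(params):
    amp, w, p = params
    return np.sqrt(np.mean((modified_gaussian(r, amp, w, p) - measured) ** 2))

# Simplex (Nelder-Mead) search, mirroring the paper's fitting procedure.
fit_mod = minimize(rms_residual, x0=[1.0, 1.0, 2.0], method="Nelder-Mead")
fit_gauss = minimize(lambda q: rms_residual([q[0], q[1], 2.0]), x0=[1.0, 1.0], method="Nelder-Mead")
print("modified-Gaussian RMS residual:", fit_mod.fun)
print("pure-Gaussian RMS residual   :", fit_gauss.fun)
```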
Towards information-optimal simulation of partial differential equations.
Leike, Reimar H; Enßlin, Torsten A
2018-03-01
Most simulation schemes for partial differential equations (PDEs) focus on minimizing a simple error norm of a discretized version of a field. This paper takes a fundamentally different approach; the discretized field is interpreted as data providing information about a real physical field that is unknown. This information is sought to be conserved by the scheme as the field evolves in time. Such an information theoretic approach to simulation was pursued before by information field dynamics (IFD). In this paper we work out the theory of IFD for nonlinear PDEs in a noiseless Gaussian approximation. The result is an action that can be minimized to obtain an information-optimal simulation scheme. It can be brought into a closed form using field operators to calculate the appearing Gaussian integrals. The resulting simulation schemes are tested numerically in two instances for the Burgers equation. Their accuracy surpasses finite-difference schemes on the same resolution. The IFD scheme, however, has to be correctly informed on the subgrid correlation structure. In certain limiting cases we recover well-known simulation schemes like spectral Fourier-Galerkin methods. We discuss implications of the approximations made.
New approaches to probing Minkowski functionals
NASA Astrophysics Data System (ADS)
Munshi, D.; Smidt, J.; Cooray, A.; Renzi, A.; Heavens, A.; Coles, P.
2013-10-01
We generalize the concept of the ordinary skew-spectrum to probe the effect of non-Gaussianity on the morphology of cosmic microwave background (CMB) maps in several domains: in real space (where they are commonly known as cumulant-correlators), and in harmonic and needlet bases. The essential aim is to retain more information than normally contained in these statistics, in order to assist in determining the source of any measured non-Gaussianity, in the same spirit in which the Munshi & Heavens skew-spectra were used to identify foreground contaminants to the CMB bispectrum in Planck data. Using a perturbative series to construct the Minkowski functionals (MFs), we provide a pseudo-C_ℓ based approach in both harmonic and needlet representations to estimate these spectra in the presence of a mask and inhomogeneous noise. Assuming homogeneous noise, we present approximate expressions for the error covariance for the purpose of joint estimation of these spectra. We present specific results for four different models of primordial non-Gaussianity: the local, equilateral, orthogonal and enfolded models, as well as non-Gaussianity caused by unsubtracted point sources. Closed-form results for the next-order corrections to the MFs are also obtained in terms of a quadruplet of kurt-spectra. We also use the method of modal decomposition of the bispectrum and trispectrum to reconstruct the MFs as an alternative method of reconstructing the morphological properties of CMB maps. Finally, we introduce the odd-parity skew-spectra to probe the odd-parity bispectrum and its impact on the morphology of the CMB sky. Although developed for the CMB, the generic results obtained here can be useful in other areas of cosmology.
Synthesis and analysis of discriminators under influence of broadband non-Gaussian noise
NASA Astrophysics Data System (ADS)
Artyushenko, V. M.; Volovach, V. I.
2018-01-01
We consider the problems of the synthesis and analysis of discriminators when the useful signal is exposed to non-Gaussian additive broadband noise. It is shown that in this case the discriminator of the tracking meter should contain a nonlinear transformation unit, the characteristics of which are determined by the Fisher information relative to the probability density function of the mixture of non-Gaussian broadband noise and mismatch errors. The parameters of the discriminatory and phase characteristics of discriminators working under the above conditions are obtained. It is shown that the efficiency of the nonlinear processing depends on the ratio of the power of the FM noise to the power of the Gaussian noise. An analysis is carried out of the information loss in the signal transformation caused by the linear section of the discriminatory characteristic of the discriminator's nonlinear transformation unit. It is shown that the average slope of the nonlinear transformation characteristic is determined by the Fisher information relative to the probability density function of the mixture of non-Gaussian noise and mismatch errors.
The Nature of the Nodes, Weights and Degree of Precision in Gaussian Quadrature Rules
ERIC Educational Resources Information Center
Prentice, J. S. C.
2011-01-01
We present a comprehensive proof of the theorem that relates the weights and nodes of a Gaussian quadrature rule to its degree of precision. This level of detail is often absent in modern texts on numerical analysis. We show that the degree of precision is maximal, and that the approximation error in Gaussian quadrature is minimal, in a…
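A quick numerical illustration of the degree-of-precision result: an n-point Gauss-Legendre rule integrates monomials exactly up to degree 2n - 1 and fails at degree 2n (the nodes and weights come from numpy's leggauss; the chosen n is arbitrary):

```python
import numpy as np

def gauss_legendre_integral(coeffs, n):
    """Integrate a polynomial (numpy coefficient order, highest power first) over [-1, 1]
    with an n-point Gauss-Legendre rule."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    return np.sum(weights * np.polyval(coeffs, nodes))

def exact_integral(coeffs):
    """Exact integral of the polynomial over [-1, 1]."""
    antideriv = np.polyint(coeffs)
    return np.polyval(antideriv, 1.0) - np.polyval(antideriv, -1.0)

n = 4                                   # 4 nodes -> degree of precision 2n - 1 = 7
for degree in range(0, 10):
    coeffs = np.zeros(degree + 1)
    coeffs[0] = 1.0                     # the monomial x**degree
    err = abs(gauss_legendre_integral(coeffs, n) - exact_integral(coeffs))
    # Exact (to round-off) through degree 7; degree 8 shows a genuine error
    # (odd degrees vanish by symmetry on [-1, 1]).
    print(f"x^{degree}: |error| = {err:.2e}")
```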
Analysis of randomly time varying systems by gaussian closure technique
NASA Astrophysics Data System (ADS)
Dash, P. K.; Iyengar, R. N.
1982-07-01
The Gaussian probability closure technique is applied to study the random response of multidegree of freedom stochastically time varying systems under non-Gaussian excitations. Under the assumption that the response, the coefficient and the excitation processes are jointly Gaussian, deterministic equations are derived for the first two response moments. It is further shown that this technique leads to the best Gaussian estimate in a minimum mean square error sense. An example problem is solved which demonstrates the capability of this technique for handling non-linearity, stochastic system parameters and amplitude limited responses in a unified manner. Numerical results obtained through the Gaussian closure technique compare well with the exact solutions.
Terrestrial Water Mass Load Changes from Gravity Recovery and Climate Experiment (GRACE)
NASA Technical Reports Server (NTRS)
Seo, K.-W.; Wilson, C. R.; Famiglietti, J. S.; Chen, J. L.; Rodell M.
2006-01-01
Recent studies show that data from the Gravity Recovery and Climate Experiment (GRACE) is promising for basin- to global-scale water cycle research. This study provides varied assessments of errors associated with GRACE water storage estimates. Thirteen monthly GRACE gravity solutions from August 2002 to December 2004 are examined, along with synthesized GRACE gravity fields for the same period that incorporate simulated errors. The synthetic GRACE fields are calculated using numerical climate models and GRACE internal error estimates. We consider the influence of measurement noise, spatial leakage error, and atmospheric and ocean dealiasing (AOD) model error as the major contributors to the error budget. Leakage error arises from the limited range of GRACE spherical harmonics not corrupted by noise. AOD model error is due to imperfect correction for atmosphere and ocean mass redistribution applied during GRACE processing. Four methods of forming water storage estimates from GRACE spherical harmonics (four different basin filters) are applied to both GRACE and synthetic data. Two basin filters use Gaussian smoothing, and the other two are dynamic basin filters which use knowledge of geographical locations where water storage variations are expected. Global maps of measurement noise, leakage error, and AOD model errors are estimated for each basin filter. Dynamic basin filters yield the smallest errors and highest signal-to-noise ratio. Within 12 selected basins, GRACE and synthetic data show similar amplitudes of water storage change. Using 53 river basins, covering most of Earth's land surface excluding Antarctica and Greenland, we document how error changes with basin size, latitude, and shape. Leakage error is most affected by basin size and latitude, and AOD model error is most dependent on basin latitude.
Scalable video transmission over Rayleigh fading channels using LDPC codes
NASA Astrophysics Data System (ADS)
Bansal, Manu; Kondi, Lisimachos P.
2005-03-01
In this paper, we investigate an important problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining a good decoded video quality and resilience to channel impairments. Our system consists of the video codec based on 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity check (LDPC) codes for channel error protection. The first method uses the serial concatenation of the constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. Cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use the product code structure consisting of a constant rate LDPC/CRC code across the rows of the `blocks' of source data and an erasure-correction systematic Reed-Solomon (RS) code as the column code. In both the schemes introduced here, we use fixed-length source packets protected with unequal forward error correction coding ensuring a strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. The rate-distortion optimization algorithm is developed and carried out for the selection of source coding and channel coding rates using Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions and both the proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform the more conventional schemes such as those employing RCPC/CRC.
Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui
2017-06-13
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.
NASA Astrophysics Data System (ADS)
Semchishen, A. V.; Semchishen, V. A.
2014-01-01
We studied in vitro the response of the topography of the cornea to its full-area laser ablation (the laser beam spot diameter is commensurate with the size of the interface) outside of the central zone with an excimer laser having a Gaussian fluence distribution across the beam. Subject to investigation were the topographically controlled surface changes of the anterior cornea in 60 porcine eyes with a 5 ± 1.25-diopter artificially induced astigmatism, the changes being caused by laser ablation of the stromal collagen in two 3.5-mm-diameter circular areas along the weaker astigmatism axis. Experimental relationships are presented between the actual astigmatism correction and the expected correction for intact optical zones 1, 2, 3, and 4 mm in diameter. The data for each zone were approximated by the least-squares method with the function d = a + bx, and the coefficient b is given with its root-mean-square error. The statistical processing of the data yielded the following results: d = (0.14 ± 0.037)x for the 1-mm-diameter optical zone, (1.10 ± 0.036)x for the 2-mm-diameter optical zone, (1.04 ± 0.020)x for the 3-mm-diameter optical zone, and (0.55 ± 0.04)x for the 4-mm-diameter optical zone. Full astigmatism correction was achieved with ablation performed outside of the 3-mm-diameter optical zone. The surface changes of the cornea are shown to be due not only to the removal of the corneal tissue but also to the biomechanical topographic response of the cornea to the strain caused by the formation of a dense pseudomembrane in the ablation area.
Modeling and validation of spectral BRDF on material surface of space target
NASA Astrophysics Data System (ADS)
Hou, Qingyu; Zhi, Xiyang; Zhang, Huili; Zhang, Wei
2014-11-01
The modeling and validation methods for the spectral BRDF of the material surface of a space target are presented. First, the microscopic characteristics of the space target's material surface were analyzed; a fiber-optic spectrometer was used to measure the directional reflectivity of typical material surfaces. To determine whether the material surface of a space target is isotropic, atomic force microscopy was used to measure the surface structure and to obtain a Gaussian distribution model of the microscopic surface-element heights. Then, a spectral BRDF model was constructed based on the isotropy of the material surface and the measured Gaussian distribution of the surface micro-facets. The model characterizes both smooth and rough surfaces well and thus describes the material surface of a space target appropriately. Finally, a spectral BRDF measurement platform was set up in the laboratory, consisting of a tungsten-halogen lamp illumination system, a fiber-optic spectrometer detection system, and mechanical measurement systems; the entire experimental measurement is controlled and the data collected automatically by computer. A yellow thermal-control material and a solar cell were measured, yielding the relationship between the reflection angle and the BRDF values at three wavelengths (380 nm, 550 nm, and 780 nm), and the difference between the theoretical model values and the measured data was evaluated by the relative RMS error. Data analysis shows that the relative RMS error is less than 6%, which verifies the correctness of the spectral BRDF model.
Can you trust the parametric standard errors in nonlinear least squares? Yes, with provisos.
Tellinghuisen, Joel
2018-04-01
Questions about the reliability of parametric standard errors (SEs) from nonlinear least squares (LS) algorithms have led to a general mistrust of these precision estimators that is often unwarranted. The importance of non-Gaussian parameter distributions is illustrated by converting linear models to nonlinear by substituting e^A, ln A, and 1/A for a linear parameter a. Monte Carlo (MC) simulations characterize parameter distributions in more complex cases, including when data have varying uncertainty and should be weighted, but weights are neglected. This situation leads to loss of precision and erroneous parametric SEs, as is illustrated for the Lineweaver-Burk analysis of enzyme kinetics data and the analysis of isothermal titration calorimetry data. Non-Gaussian parameter distributions are generally asymmetric and biased. However, when the parametric SE is <10% of the magnitude of the parameter, both the bias and the asymmetry can usually be ignored. Sometimes nonlinear estimators can be redefined to give more normal distributions and better convergence properties. Variable data uncertainty, or heteroscedasticity, can sometimes be handled by data transforms but more generally requires weighted LS, which in turn requires knowledge of the data variance. Parametric SEs are rigorously correct in linear LS under the usual assumptions, and are a trustworthy approximation in nonlinear LS provided they are sufficiently small - a condition favored by the abundant, precise data routinely collected in many modern instrumental methods.
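A minimal Monte Carlo sketch in the spirit of the simulations described above: the linear model y = a*x is re-parameterized as y = e^B*x, and the sampling distribution of the fitted B is examined at two noise levels. When the relative SE is small the distribution is nearly Gaussian; at larger noise it becomes visibly skewed. All values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 20)
a_true = 2.0                        # linear parameter; we estimate B = ln(a) instead

def model(x, B):
    return np.exp(B) * x            # nonlinear re-parameterization of y = a*x

def mc_parameter_distribution(sigma, n_trials=2000):
    """Monte Carlo distribution of the fitted B for a given data noise level."""
    estimates = np.empty(n_trials)
    for i in range(n_trials):
        y = a_true * x + rng.normal(0.0, sigma, x.size)
        popt, _ = curve_fit(model, x, y, p0=[np.log(a_true)])
        estimates[i] = popt[0]
    return estimates

for sigma in (0.5, 7.0):            # small vs large data noise
    B = mc_parameter_distribution(sigma)
    skew = np.mean(((B - B.mean()) / B.std()) ** 3)
    print(f"sigma={sigma}: mean(B)={B.mean():.3f} (true {np.log(a_true):.3f}), "
          f"SD={B.std():.3f}, skewness={skew:.2f}")
```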
NASA Technical Reports Server (NTRS)
Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil
2011-01-01
Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors, 1) extended Kalman filter (EKF) augmented with Markov states, 2) Unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.
Pre-correction of distorted Bessel-Gauss beams without wavefront detection
NASA Astrophysics Data System (ADS)
Fu, Shiyao; Wang, Tonglu; Zhang, Zheyuan; Zhai, Yanwang; Gao, Chunqing
2017-12-01
By exploiting the rapid phase retrieval of the Gerchberg-Saxton algorithm, we experimentally demonstrate a scheme to correct, with good performance, distorted Bessel-Gauss beams resulting from propagation through inhomogeneous media such as a weakly turbulent atmosphere. A probe Gaussian beam is employed and propagates coaxially with the Bessel-Gauss modes through the turbulence. No wavefront sensor is used; instead, a matrix detector captures the probe Gaussian beam, and the correction phase mask is then computed by feeding this probe beam into the Gerchberg-Saxton algorithm. The experimental results indicate that both single and multiplexed BG beams can be corrected well, in terms of the improvement in mode purity and the mitigation of inter-channel cross talk.
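A toy sketch of the Gerchberg-Saxton step used to compute a correction phase, assuming a Gaussian probe beam whose near-field amplitude is known and whose far-field intensity is measured by the matrix detector; the geometry and parameters are illustrative, not the experimental setup:

```python
import numpy as np

N = 128
x = np.linspace(-3, 3, N)
X, Y = np.meshgrid(x, x)

probe_amp = np.exp(-(X**2 + Y**2) / 2.0)              # probe Gaussian beam amplitude
true_phase = 1.5 * np.exp(-((X - 0.5)**2 + Y**2))     # unknown distortion (toy "turbulence")

# The camera records only intensities; here we keep the far-field amplitude.
far_amp = np.abs(np.fft.fft2(probe_amp * np.exp(1j * true_phase)))

def gerchberg_saxton(near_amp, far_amp, n_iter=200):
    """Retrieve a near-field phase consistent with the two measured amplitudes."""
    phase = np.zeros_like(near_amp)
    for _ in range(n_iter):
        far = np.fft.fft2(near_amp * np.exp(1j * phase))
        far = far_amp * np.exp(1j * np.angle(far))     # impose measured far-field amplitude
        near = np.fft.ifft2(far)
        phase = np.angle(near)                         # keep phase, re-impose near-field amplitude
    return phase

rec = gerchberg_saxton(probe_amp, far_amp)
synth = np.abs(np.fft.fft2(probe_amp * np.exp(1j * rec)))
print("relative far-field amplitude mismatch after retrieval:",
      np.linalg.norm(synth - far_amp) / np.linalg.norm(far_amp))
# In the experiment, the conjugate of the retrieved phase would be written to a corrector
# (for example a spatial light modulator) to pre-compensate the Bessel-Gauss channels.
```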
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stanke, Monika, E-mail: monika@fizyka.umk.pl; Palikot, Ewa, E-mail: epalikot@doktorant.umk.pl; Adamowicz, Ludwik, E-mail: ludwik@email.arizona.edu
2016-05-07
Algorithms for calculating the leading mass-velocity (MV) and Darwin (D) relativistic corrections are derived for electronic wave functions expanded in terms of n-electron explicitly correlated Gaussian functions with shifted centers and without pre-exponential angular factors. The algorithms are implemented and tested in calculations of MV and D corrections for several points on the ground-state potential energy curves of the H2 and LiH molecules. The algorithms are general and can be applied in calculations of systems with an arbitrary number of electrons.
NASA Astrophysics Data System (ADS)
Sandoz, J.-P.; Steenaart, W.
1984-12-01
The nonuniform sampling digital phase-locked loop (DPLL) with sequential loop filter, in which the correction sizes are controlled by the accumulated differences of two additional phase comparators, is graphically analyzed. In the absence of noise and frequency drift, the analysis gives some physical insight into the acquisition and tracking behavior. Taking noise into account, a mathematical model is derived and a random walk technique is applied to evaluate the rms phase error and the mean acquisition time. Experimental results confirm the appropriate simplifying hypotheses used in the numerical analysis. Two related performance measures defined in terms of the rms phase error and the acquisition time for a given SNR are used. These measures provide a common basis for comparing different digital loops and, to a limited extent, also with a first-order linear loop. Finally, the behavior of a modified DPLL under frequency deviation in the presence of Gaussian noise is tested experimentally and by computer simulation.
Quantitative CT based radiomics as predictor of resectability of pancreatic adenocarcinoma
NASA Astrophysics Data System (ADS)
van der Putten, Joost; Zinger, Svitlana; van der Sommen, Fons; de With, Peter H. N.; Prokop, Mathias; Hermans, John
2018-02-01
In current clinical practice, the resectability of pancreatic ductal adenocarcinoma (PDA) is determined subjectively by a physician, which is an error-prone procedure. In this paper, we present a method for automated determination of the resectability of PDA from a routine abdominal CT, to reduce such decision errors. The tumor features are extracted from a group of patients with both hypo- and iso-attenuating tumors, of which 29 were resectable and 21 were not. The tumor contours are supplied by a medical expert. We present an approach that uses intensity, shape, and texture features to determine tumor resectability. The best classification results are obtained with a fine Gaussian SVM and the L0 feature selection algorithm. Compared to expert predictions made on the same dataset, our method achieves better classification results. We obtain significantly better results in correctly predicting non-resectability (+17%) compared to an expert, which is essential for patient treatment (negative predictive value). Moreover, our predictions of resectability exceed expert predictions by approximately 3% (positive predictive value).
Machine learning models for lipophilicity and their domain of applicability.
Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Laak, Antonius Ter; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert
2007-01-01
Unfavorable lipophilicity and water solubility cause many drug failures; therefore these properties have to be taken into account early on in lead discovery. Commercial tools for predicting lipophilicity usually have been trained on small and neutral molecules, and are thus often unable to accurately predict in-house data. Using a modern Bayesian machine learning algorithm--a Gaussian process model--this study constructs a log D7 model based on 14,556 drug discovery compounds of Bayer Schering Pharma. Performance is compared with support vector machines, decision trees, ridge regression, and four commercial tools. In a blind test on 7013 new measurements from the last months (including compounds from new projects) 81% were predicted correctly within 1 log unit, compared to only 44% achieved by commercial software. Additional evaluations using public data are presented. We consider error bars for each method (model based error bars, ensemble based, and distance based approaches), and investigate how well they quantify the domain of applicability of each model.
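A minimal sketch of Gaussian process regression with per-compound error bars, using scikit-learn on synthetic descriptor data (the in-house data and descriptors referred to above are of course not available, so all values are illustrative):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic stand-in for (descriptor vector, measured logD) pairs.
X = rng.normal(size=(300, 5))
y = X @ np.array([0.8, -0.5, 0.3, 0.0, 1.2]) + 0.3 * np.sin(3 * X[:, 0]) + rng.normal(0, 0.2, 300)

X_train, y_train = X[:250], y[:250]
X_test, y_test = X[250:], y[250:]

# Anisotropic RBF kernel plus a white-noise term for the measurement error.
kernel = 1.0 * RBF(length_scale=np.ones(5)) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

mean, std = gp.predict(X_test, return_std=True)    # predictive mean and per-compound error bar
within_1 = np.mean(np.abs(mean - y_test) <= 1.0)   # fraction predicted within 1 log unit
coverage = np.mean(np.abs(mean - y_test) <= 2 * std)
print(f"within 1 log unit: {within_1:.0%}, within 2-sigma error bars: {coverage:.0%}")
```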
A modified TEW approach to scatter correction for In-111 and Tc-99m dual-isotope small-animal SPECT.
Prior, Paul; Timmins, Rachel; Petryk, Julia; Strydhorst, Jared; Duan, Yin; Wei, Lihui; Glenn Wells, R
2016-10-01
In dual-isotope (Tc-99m/In-111) small-animal single-photon emission computed tomography (SPECT), quantitative accuracy of Tc-99m activity measurements is degraded due to the detection of Compton-scattered photons in the Tc-99m photopeak window, which originate from the In-111 emissions (cross talk) and from the Tc-99m emission (self-scatter). The standard triple-energy window (TEW) estimates the total scatter (self-scatter and cross talk) using one scatter window on either side of the Tc-99m photopeak window, but the estimate is biased due to the presence of unscattered photons in the scatter windows. The authors present a modified TEW method to correct for total scatter that compensates for this bias and evaluate the method in phantoms and in vivo. The number of unscattered Tc-99m and In-111 photons present in each scatter-window projection is estimated based on the number of photons detected in the photopeak of each isotope, using the isotope-dependent energy resolution of the detector. The camera-head-specific energy resolutions for the 140 keV Tc-99m and 171 keV In-111 emissions were determined experimentally by separately sampling the energy spectra of each isotope. Each sampled spectrum was fit with a Linear + Gaussian function. The fitted Gaussian functions were integrated across each energy window to determine the proportion of unscattered photons from each emission detected in the scatter windows. The method was first tested and compared to the standard TEW in phantoms containing Tc-99m:In-111 activity ratios between 0.15 and 6.90. True activities were determined using a dose calibrator, and SPECT activities were estimated from CT-attenuation-corrected images with and without scatter-correction. The method was then tested in vivo in six rats using In-111-liposome and Tc-99m-tetrofosmin to generate cross talk in the area of the myocardium. The myocardium was manually segmented using the SPECT and CT images, and partial-volume correction was performed using a template-based approach. The rat heart was counted in a well-counter to determine the true activity. In the phantoms without correction for Compton-scatter, Tc-99m activity quantification errors as high as 85% were observed. The standard TEW method quantified Tc-99m activity with an average accuracy of -9.0% ± 0.7%, while the modified TEW was accurate within 5% of truth in phantoms with Tc-99m:In-111 activity ratios ≥0.52. Without scatter-correction, In-111 activity was quantified with an average accuracy of 4.1%, and there was no dependence of accuracy on the activity ratio. In rat myocardia, uncorrected images were overestimated by an average of 23% ± 5%, and the standard TEW had an accuracy of -13.8% ± 1.6%, while the modified TEW yielded an accuracy of -4.0% ± 1.6%. Cross talk and self-scatter were shown to produce quantification errors in phantoms as well as in vivo. The standard TEW provided inaccurate results due to the inclusion of unscattered photons in the scatter windows. The modified TEW improved the scatter estimate and reduced the quantification errors in phantoms and in vivo.
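A minimal sketch of the spectral-fitting step behind the modified TEW: a Linear + Gaussian function is fitted to a sampled photopeak spectrum, and the Gaussian component is integrated analytically over a scatter window to estimate the unscattered counts detected there. The energies, resolution, and window edges below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def linear_plus_gaussian(E, a, b, A, E0, sigma):
    """Linear background plus a Gaussian photopeak."""
    return a + b * E + A * np.exp(-0.5 * ((E - E0) / sigma) ** 2)

def gaussian_window_integral(A, E0, sigma, lo, hi):
    """Analytic integral of the Gaussian component over an energy window [lo, hi]."""
    return A * sigma * np.sqrt(np.pi / 2) * (erf((hi - E0) / (np.sqrt(2) * sigma))
                                             - erf((lo - E0) / (np.sqrt(2) * sigma)))

# Synthetic Tc-99m spectrum around the 140 keV photopeak (10% FWHM resolution assumed).
rng = np.random.default_rng(0)
E = np.arange(110.0, 171.0, 1.0)
sigma_true = 0.10 * 140.0 / 2.355
counts = linear_plus_gaussian(E, 200, -0.5, 5000, 140.0, sigma_true) + rng.normal(0, 30, E.size)

popt, _ = curve_fit(linear_plus_gaussian, E, counts, p0=[100, 0, 4000, 140, 6])
a, b, A, E0, sigma = popt

# Unscattered photopeak counts spilling into a lower scatter window (illustrative edges).
lower_window = (120.0, 130.0)
spill = gaussian_window_integral(A, E0, sigma, *lower_window)
print(f"fitted FWHM: {2.355 * sigma:.1f} keV; unscattered counts in {lower_window}: {spill:.0f}")
```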
NASA Astrophysics Data System (ADS)
Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.
2016-10-01
The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
Krylov Deferred Correction Accelerated Method of Lines Transpose for Parabolic Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, Jun; Jingfang, Huang
2008-01-01
In this paper, a new class of numerical methods for the accurate and efficient solution of parabolic partial differential equations is presented. Unlike the traditional method of lines (MoL), the new Krylov deferred correction (KDC) accelerated method of lines transpose (MoL^T) first discretizes the temporal direction using Gaussian-type nodes and spectral integration, and symbolically applies low-order time-marching schemes to form a preconditioned elliptic system, which is then solved iteratively using Newton-Krylov techniques such as the Newton-GMRES or Newton-BiCGStab method. Each function evaluation in the Newton-Krylov method is simply one low-order time-stepping approximation of the error, obtained by solving a decoupled system using available fast elliptic equation solvers. Preliminary numerical experiments show that the KDC accelerated MoL^T technique is unconditionally stable, can be spectrally accurate in both the temporal and spatial directions, and allows optimal time-step sizes in long-time simulations.
Peak-locking centroid bias in Shack-Hartmann wavefront sensing
NASA Astrophysics Data System (ADS)
Anugu, Narsireddy; Garcia, Paulo J. V.; Correia, Carlos M.
2018-05-01
Shack-Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions proposed. But these solutions only allow partial bias corrections. To date, no systematic study of the bias error was conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, the centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ˜7 to values of ≲ 0.02 pix. The computational cost is typically twice of current cross-correlation algorithms.
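A small sketch contrasting two of the centroid estimators discussed above (centre of gravity and a 1D Gaussian fit to the spot's marginal) as the true spot centre is scanned across a pixel, so that the sub-pixel dependence of each estimator's bias can be inspected; the spot model, noise level, and threshold are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

npix, fwhm = 16, 3.0
sigma = fwhm / 2.355
xpix = np.arange(npix)
X, Y = np.meshgrid(xpix, xpix)

def spot(xc, yc, flux=2000.0):
    """Pixel-sampled Gaussian spot plus read noise (toy sub-aperture image)."""
    img = flux * np.exp(-((X - xc) ** 2 + (Y - yc) ** 2) / (2 * sigma ** 2))
    return img + np.random.default_rng(int(1000 * xc)).normal(0, 5.0, img.shape)

def cog(img):
    """Centre-of-gravity estimate of the x coordinate (thresholded to limit noise)."""
    w = np.clip(img - 0.1 * img.max(), 0, None)
    return (w * X).sum() / w.sum()

def gauss1d(x, amp, xc, s, bg):
    return bg + amp * np.exp(-((x - xc) ** 2) / (2 * s ** 2))

def gaussian_fit_x(img):
    """1D Gaussian fit to the x-marginal of the spot."""
    prof = img.sum(axis=0)
    p0 = [prof.max(), float(np.argmax(prof)), sigma, prof.min()]
    popt, _ = curve_fit(gauss1d, xpix, prof, p0=p0)
    return popt[1]

# Scan the true centre across one pixel and record the bias of each estimator.
for frac in np.linspace(0.0, 0.9, 10):
    xc = 7.0 + frac
    img = spot(xc, 7.5)
    print(f"true {xc:5.2f}  CoG bias {cog(img) - xc:+.3f}  "
          f"Gaussian-fit bias {gaussian_fit_x(img) - xc:+.3f}")
```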
Optimization of the reconstruction parameters in [123I]FP-CIT SPECT
NASA Astrophysics Data System (ADS)
Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec
2018-04-01
The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data-sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). The reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from magnetic resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [-0.5, +0.5] in 87% and 92% of the cases for the caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.
Measurement of Hubble constant: non-Gaussian errors in HST Key Project data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Meghendra; Gupta, Shashikant; Pandey, Ashwini
2016-08-01
Assuming the central limit theorem, experimental uncertainties in any data set are expected to follow a Gaussian distribution with zero mean. We propose an elegant method based on the Kolmogorov-Smirnov statistic to test this assumption and apply it to the measurement of the Hubble constant, which determines the expansion rate of the Universe. The measurements were made using the Hubble Space Telescope. Our analysis shows that the uncertainties in this measurement are non-Gaussian.
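A minimal sketch of the test idea, assuming standardized residuals are compared against a standard normal with scipy's Kolmogorov-Smirnov test (the residuals are synthetic, not the HST Key Project data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic "measurement minus best-estimate" residuals, scaled by their quoted uncertainties.
gaussian_resid = rng.normal(0.0, 1.0, 70)
skewed_resid = rng.normal(0.0, 1.0, 70) + 0.8 * rng.exponential(1.0, 70)  # deliberately non-Gaussian

for name, resid in [("Gaussian-like", gaussian_resid), ("skewed", skewed_resid)]:
    z = (resid - resid.mean()) / resid.std(ddof=1)   # standardize
    # (Strictly, estimating the mean and SD first calls for Lilliefors-corrected critical values.)
    D, p = stats.kstest(z, "norm")                   # KS test against N(0, 1)
    print(f"{name:13s}: D = {D:.3f}, p = {p:.3f}")
```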
NASA Technical Reports Server (NTRS)
Mahesh, Ashwin; Spinhirne, James D.; Duda, David P.; Eloranta, Edwin W.; Starr, David O'C (Technical Monitor)
2001-01-01
The altimetry bias in GLAS (Geoscience Laser Altimeter System) or other laser altimeters resulting from atmospheric multiple scattering is studied in relationship to current knowledge of cloud properties over the Antarctic Plateau. Estimates of seasonal and interannual changes in the bias are presented. Results show the bias in altitude from multiple scattering in clouds would be a significant error source without correction. The selective use of low optical depth clouds or cloudfree observations, as well as improved analysis of the return pulse such as by the Gaussian method used here, are necessary to minimize the surface altitude errors. The magnitude of the bias is affected by variations in cloud height, cloud effective particle size and optical depth. Interannual variations in these properties as well as in cloud cover fraction could lead to significant year-to-year variations in the altitude bias. Although cloud-free observations reduce biases in surface elevation measurements from space, over Antarctica these may often include near-surface blowing snow, also a source of scattering-induced delay. With careful selection and analysis of data, laser altimetry specifications can be met.
NASA Astrophysics Data System (ADS)
Liu, Xing-fa; Cen, Ming
2007-12-01
The neural-network system-error correction method is more precise than the least-squares and spherical-harmonic-function system-error correction methods. The accuracy of the neural-network method depends mainly on the architecture of the network. Analysis and simulation show that both the BP and RBF neural-network correction methods have high correction accuracy; for small training samples, the RBF network method is preferable to the BP network method when training speed and network size are taken into account.
Orbit-product representation and correction of Gaussian belief propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jason K; Chertkov, Michael; Chernyak, Vladimir
We present a new interpretation of Gaussian belief propagation (GaBP) based on the 'zeta function' representation of the determinant as a product over orbits of a graph. We show that GaBP captures back-tracking orbits of the graph and consider how to correct this estimate by accounting for non-backtracking orbits. We show that the product over non-backtracking orbits may be interpreted as the determinant of the non-backtracking adjacency matrix of the graph with edge weights based on the solution of GaBP. An efficient method is proposed to compute a truncated correction factor including all non-backtracking orbits up to a specified length.
NASA Astrophysics Data System (ADS)
Fletcher, S. J.; Kleist, D.; Ide, K.
2017-12-01
As the resolution of operational global numerical weather prediction systems approaches the meso-scale, the assumption of Gaussian errors at these scales may no longer be valid. It is also true that synoptic variables that are positive definite in behavior, for example humidity, cannot be optimally analyzed with a Gaussian error structure, where the increment could force the full field to go negative. In this presentation we present initial work on implementing a mixed Gaussian-lognormal approximation for the temperature and moisture variables in both the ensemble and variational components of the NCEP GSI hybrid EnVAR. We shall also lay the foundation for the implementation of the lognormal approximation for cloud-related control variables, to allow a possibly more consistent assimilation of cloudy radiances.
NASA Astrophysics Data System (ADS)
Havelund, R.; Seah, M. P.; Tiddia, M.; Gilmore, I. S.
2018-02-01
A procedure has been established to define the interface position in depth profiles accurately when using secondary ion mass spectrometry and the negative secondary ions. The interface position varies strongly with the extent of the matrix effect and so depends on the secondary ion measured. Intensity profiles have been measured at both fluorenylmethyloxycarbonyl-L-pentafluorophenylalanine (FMOC) to Irganox 1010 and Irganox 1010 to FMOC interfaces for many secondary ions. These profiles show separations of the two interfaces that vary over some 10 nm depending on the secondary ion selected. The shapes of these profiles are strongly governed by matrix effects, slightly weakened by a long wavelength roughening. The matrix effects are separately measured using homogeneous, known mixtures of these two materials. Removal of the matrix and roughening effects gives consistent compositional profiles for all ions that are described by an integrated exponentially modified Gaussian (EMG) profile. Use of a simple integrated Gaussian may lead to significant errors. The average interface positions in the compositional profiles are determined to standard uncertainties of 0.19 and 0.14 nm, respectively, using the integrated EMG function. Alternatively, and more simply, it is shown that interface positions and profiles may be deduced from data for several secondary ions with measured matrix factors by simply extrapolating the result to Ξ = 0. Care must be taken in quoting interface resolutions since those measured for predominantly Gaussian interfaces with Ξ above or below zero, without correction, appear significantly better than the true resolution.
Estimating Soil Hydraulic Parameters using Gradient Based Approach
NASA Astrophysics Data System (ADS)
Rai, P. K.; Tripathi, S.
2017-12-01
The conventional way of estimating the parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from a forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, the setting-up of initial and boundary conditions, and the formation of difference equations in cases where the forward solution is obtained numerically. Gaussian-process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of ordinary differential equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to partial differential equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require setting up initial and boundary conditions explicitly, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
Prostate Brachytherapy Seed Reconstruction with Gaussian Blurring and Optimal Coverage Cost
Lee, Junghoon; Liu, Xiaofeng; Jain, Ameet K.; Song, Danny Y.; Burdette, E. Clif; Prince, Jerry L.; Fichtinger, Gabor
2009-01-01
Intraoperative dosimetry in prostate brachytherapy requires localization of the implanted radioactive seeds. A tomosynthesis-based seed reconstruction method is proposed. A three-dimensional volume is reconstructed from Gaussian-blurred projection images and candidate seed locations are computed from the reconstructed volume. A false positive seed removal process, formulated as an optimal coverage problem, iteratively removes “ghost” seeds that are created by tomosynthesis reconstruction. In an effort to minimize pose errors that are common in conventional C-arms, initial pose parameter estimates are iteratively corrected by using the detected candidate seeds as fiducials, which automatically “focuses” the collected images and improves successive reconstructed volumes. Simulation results imply that the implanted seed locations can be estimated with a detection rate of ≥ 97.9% and ≥ 99.3% from three and four images, respectively, when the C-arm is calibrated and the pose of the C-arm is known. The algorithm was also validated on phantom data sets successfully localizing the implanted seeds from four or five images. In a Phase-1 clinical trial, we were able to localize the implanted seeds from five intraoperative fluoroscopy images with 98.8% (STD=1.6) overall detection rate. PMID:19605321
Asteroid orbital error analysis: Theory and application
NASA Technical Reports Server (NTRS)
Muinonen, K.; Bowell, Edward
1992-01-01
We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation gives the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
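A toy sketch of the linearized (Gaussian) propagation step, cov_y = J cov_x J^T, with a finite-difference Jacobian; the two-parameter circular "orbit" and the element uncertainties are purely illustrative, not an actual asteroid orbit model:

```python
import numpy as np

def propagate_covariance(x0, cov_x, f, eps=1e-6):
    """Linearized error propagation: cov_y = J cov_x J^T with a finite-difference Jacobian."""
    y0 = f(x0)
    J = np.zeros((y0.size, x0.size))
    for j in range(x0.size):
        dx = np.zeros_like(x0)
        dx[j] = eps
        J[:, j] = (f(x0 + dx) - y0) / eps
    return J @ cov_x @ J.T

# Toy "elements": semi-major axis a (AU) and mean anomaly M0 (rad) at epoch;
# f maps them to a heliocentric position on a circular orbit after time t (illustrative only).
def position_after(t_years):
    def f(elem):
        a, M0 = elem
        n = 2 * np.pi / a ** 1.5          # mean motion (Kepler's third law, yr^-1)
        M = M0 + n * t_years
        return a * np.array([np.cos(M), np.sin(M)])
    return f

elements = np.array([2.5, 0.3])
cov_elements = np.diag([1e-6, 1e-8])      # assumed element variances
for t in (0.0, 5.0, 20.0):
    cov_pos = propagate_covariance(elements, cov_elements, position_after(t))
    # The 1-sigma uncertainty ellipse grows along-track as the epoch recedes from the observations.
    print(f"t = {t:5.1f} yr, sqrt eigenvalues of position covariance:",
          np.sqrt(np.linalg.eigvalsh(cov_pos)))
```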
Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian
2016-08-01
In this paper, we study the problem of the cardiac conduction velocity (CCV) estimation for the sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used here for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model with zero-mean white Gaussian noise values with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator, when the synchronization times between various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
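A simplified sketch of the planar-wavefront idea (ordinary least squares with known synchronization, not the paper's maximum-likelihood estimator with unknown synchronization times): activation time is modelled as t_i = t0 + s.r_i, the slowness vector s is fitted from electrode positions and annotated ATs, and the conduction speed is 1/|s|; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Electrode positions (mm) of a mapping catheter and a planar wavefront with speed 0.6 mm/ms.
positions = rng.uniform(-10, 10, size=(12, 2))
speed_true = 0.6
direction = np.array([np.cos(0.4), np.sin(0.4)])
slowness_true = direction / speed_true                     # ms/mm

# Activation times with zero-mean Gaussian annotation errors (SD = 1 ms).
at = 5.0 + positions @ slowness_true + rng.normal(0.0, 1.0, len(positions))

# Least-squares fit of t = t0 + s . r  (design matrix [1, x, y]).
A = np.column_stack([np.ones(len(positions)), positions])
coef, *_ = np.linalg.lstsq(A, at, rcond=None)
slowness_hat = coef[1:]
print("estimated conduction velocity (mm/ms):", 1.0 / np.linalg.norm(slowness_hat),
      " true:", speed_true)
```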
Multilevel geometry optimization
NASA Astrophysics Data System (ADS)
Rodgers, Jocelyn M.; Fast, Patton L.; Truhlar, Donald G.
2000-02-01
Geometry optimization has been carried out for three test molecules using six multilevel electronic structure methods, in particular Gaussian-2, Gaussian-3, multicoefficient G2, multicoefficient G3, and two multicoefficient correlation methods based on correlation-consistent basis sets. In the Gaussian-2 and Gaussian-3 methods, various levels are added and subtracted with unit coefficients, whereas the multicoefficient Gaussian-x methods involve noninteger parameters as coefficients. The multilevel optimizations reduce the average error in the geometry (averaged over the 18 cases) by a factor of about two compared to the single most expensive component of a given multilevel calculation, and in all 18 cases the accuracy of the atomization energy for the three test molecules improves, with an average improvement of 16.7 kcal/mol.
Wigner distribution function of Hermite-cosine-Gaussian beams through an apertured optical system.
Sun, Dong; Zhao, Daomu
2005-08-01
By introducing the hard-aperture function into a finite sum of complex Gaussian functions, the approximate analytical expressions of the Wigner distribution function for Hermite-cosine-Gaussian beams passing through an apertured paraxial ABCD optical system are obtained. The analytical results are compared with the numerically integrated ones, and the absolute errors are also given. It is shown that the analytical results are proper and that the calculation speed for them is much faster than for the numerical results.
Predicting Error Bars for QSAR Models
NASA Astrophysics Data System (ADS)
Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert
2007-09-01
Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from the last months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches.
NASA Astrophysics Data System (ADS)
Žáček, K.
The only way to make an excessively complex velocity model suitable for application of ray-based methods, such as the Gaussian beam or Gaussian packet methods, is to smooth it. We have smoothed the Marmousi model by choosing a coarser grid and by minimizing the second spatial derivatives of the slowness. This was done by minimizing the relevant Sobolev norm of slowness. We show that minimizing the relevant Sobolev norm of slowness is a suitable technique for preparing the optimum models for asymptotic ray theory methods. However, the price we pay for a model suitable for ray tracing is an increase of the difference between the smoothed and original model. Similarly, the estimated error in the travel time also increases due to the difference between the models. In smoothing the Marmousi model, we have found the estimated error of travel times at the verge of acceptability. Due to the low frequencies in the wavefield of the original Marmousi data set, we have found the Gaussian beams and Gaussian packets at the verge of applicability even in models sufficiently smoothed for ray tracing.
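A one-dimensional sketch of the underlying idea, under the assumption that penalizing discrete second differences stands in for the Sobolev-norm minimization applied to the full Marmousi model; the grid, weight and slowness profile are illustrative.

```python
import numpy as np

# Minimal 1-D sketch: smooth a slowness profile s0 by minimizing
#   ||s - s0||^2 + lam * ||D2 s||^2,
# a discrete analogue of penalizing a Sobolev norm of slowness.
rng = np.random.default_rng(2)
n = 200
x = np.linspace(0.0, 1.0, n)
s0 = 1.0 / (1.5 + np.sin(12 * x)) + 0.02 * rng.normal(size=n)  # rough slowness model

D2 = np.diff(np.eye(n), n=2, axis=0)          # second-difference operator, (n-2) x n
lam = 50.0                                    # smoothing weight (illustrative)
s_smooth = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, s0)
```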
Gu, Shoou-Lian Hwang; Gau, Susan Shur-Fen; Tzang, Shyh-Weir; Hsu, Wen-Yau
2013-11-01
We investigated the three parameters (mu, sigma, tau) of the ex-Gaussian distribution of reaction time (RT) derived from the Conners' continuous performance test (CCPT) and examined the moderating effects of the energetic factors (the inter-stimulus intervals (ISIs) and Blocks) among these three parameters, especially tau, an index describing the positive skew of the RT distribution. We assessed 195 adolescents with DSM-IV ADHD and 90 typically developing (TD) adolescents, aged 10-16. Participants and their parents received psychiatric interviews to confirm the diagnosis of ADHD and other psychiatric disorders. Participants also received intelligence (WISC-III) and CCPT assessments. We found that participants with ADHD had a smaller mu and a larger tau. As the ISI/Block increased, the magnitude of the group difference in tau increased. Among the three ex-Gaussian parameters, tau was positively associated with omission errors, and mu was negatively associated with commission errors. The moderating effects of ISIs and Blocks on the tau parameter suggested that the ex-Gaussian parameters could offer more information about the attention state in vigilance tasks, especially in ADHD. Copyright © 2013 Elsevier Ltd. All rights reserved.
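For readers who want to reproduce an ex-Gaussian decomposition on their own RT data, the following sketch fits synthetic reaction times with scipy's exponnorm distribution, whose shape parameter K maps to tau via tau = K * scale; the parameter values are illustrative, not the study's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu, sigma, tau = 450.0, 60.0, 150.0                       # ms, illustrative only
rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

K, loc, scale = stats.exponnorm.fit(rt)                   # exponnorm: tau = K * scale
print(f"mu ~ {loc:.1f} ms, sigma ~ {scale:.1f} ms, tau ~ {K * scale:.1f} ms")
```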
NASA Astrophysics Data System (ADS)
Miyoshi, T.; Teramura, T.; Ruiz, J.; Kondo, K.; Lien, G. Y.
2016-12-01
Convective weather is known to be highly nonlinear and chaotic, and it is hard to predict its location and timing precisely. Our Big Data Assimilation (BDA) effort has been exploring the use of dense and frequent observations to avoid non-Gaussian probability density functions (PDFs) and to apply an ensemble Kalman filter under the Gaussian error assumption. The phased array weather radar (PAWR) can observe a dense three-dimensional volume scan with 100-m range resolution and 100 elevation angles in only 30 seconds. The BDA system assimilates the PAWR reflectivity and Doppler velocity observations every 30 seconds into 100 ensemble members of a storm-scale numerical weather prediction (NWP) model at 100-m grid spacing. The 30-second-update, 100-m-mesh BDA system has been quite successful in multiple case studies of local severe rainfall events. However, with 1000 ensemble members, the reduced-resolution BDA system at 1-km grid spacing showed significant non-Gaussian PDFs with every-30-second updates. With a 10240-member ensemble Kalman filter with a global NWP model at 112-km grid spacing, we found roughly 1000 members to be satisfactory for capturing the non-Gaussian error structures. With these in mind, we explore how the density of observations in space and time affects the non-Gaussianity in an ensemble Kalman filter with a simple toy model. In this presentation, we will present the most up-to-date results of the BDA research, as well as the investigation with the toy model on the non-Gaussianity with dense and frequent observations.
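As a toy illustration of the Gaussian-error update underlying such systems, the following sketch performs a single stochastic (perturbed-observation) ensemble Kalman filter analysis step for a scalar state; it is unrelated to the actual BDA/PAWR implementation, and all values are made up.

```python
import numpy as np

# One perturbed-observation EnKF analysis step for a scalar state (H = 1).
rng = np.random.default_rng(4)
Ne = 100
ensemble = rng.normal(0.0, 1.0, Ne)      # prior (forecast) ensemble
y_obs, r_obs = 1.2, 0.5 ** 2             # observation and its error variance

p_f = ensemble.var(ddof=1)               # forecast error variance
k_gain = p_f / (p_f + r_obs)             # Kalman gain under the Gaussian assumption
y_pert = y_obs + rng.normal(0.0, np.sqrt(r_obs), Ne)
analysis = ensemble + k_gain * (y_pert - ensemble)
print(analysis.mean(), analysis.var(ddof=1))
```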
Ozone Profile Retrievals from the OMPS on Suomi NPP
NASA Astrophysics Data System (ADS)
Bak, J.; Liu, X.; Kim, J. H.; Haffner, D. P.; Chance, K.; Yang, K.; Sun, K.; Gonzalez Abad, G.
2017-12-01
We verify and correct the Ozone Mapping and Profiler Suite (OMPS) Nadir Mapper (NM) L1B v2.0 data with the aim of producing accurate ozone profile retrievals using an optimal estimation based inversion method in the 302.5-340 nm fitting window. The evaluation of available slit functions demonstrates that preflight-measured slit functions represent OMPS measurements well compared to derived Gaussian slit functions. Our OMPS fitting residuals contain significant wavelength and cross-track dependent biases, and thereby serious cross-track striping errors are found in preliminary retrievals, especially in the troposphere. To eliminate the systematic component of the fitting residuals, we apply "soft calibration" to OMPS radiances. With the soft calibration the amplitude of the fitting residuals decreases from 1 % to 0.2 % over low/mid latitudes, and thereby the consistency of tropospheric ozone retrievals between OMPS and the Ozone Monitoring Instrument (OMI) is substantially improved. A common mode correction is implemented for additional radiometric calibration, which improves retrievals especially at high latitudes, where the amplitude of the fitting residuals decreases by a factor of 2. We estimate the floor noise error of OMPS measurements from standard deviations of the fitting residuals. The derived error in the Huggins band (~0.1 %) is 2 times smaller than the OMI floor noise error and 2 times larger than the OMPS L1B measurement error. The OMPS floor noise errors better constrain our retrievals, maximizing measurement information and stabilizing our fitting residuals. The final precision of the fitting residuals is less than 0.1 % at low/mid latitudes, with about 1 degree of freedom for signal for tropospheric ozone, so that we meet the general requirements for successful tropospheric ozone retrievals. To assess whether the quality of OMPS ozone retrievals is acceptable for scientific use, we will characterize OMPS ozone profile retrievals, present an error analysis, and validate the retrievals using a reference dataset. The useful information on the vertical distribution of ozone is limited to below 40 km from OMPS NM measurements alone, due to the absence of the Hartley ozone band. This shortcoming will be improved with the joint ozone profile retrieval using Nadir Profiler (NP) measurements covering the 250 to 310 nm range.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umeh, Obinna; Jolicoeur, Sheean; Maartens, Roy
Next-generation galaxy surveys will increasingly rely on the galaxy bispectrum to improve cosmological constraints, especially on primordial non-Gaussianity. A key theoretical requirement that remains to be developed is the analysis of general relativistic effects on the bispectrum, which arise from observing galaxies on the past lightcone, as well as from relativistic corrections to the dynamics. As an initial step towards a fully relativistic analysis of the galaxy bispectrum, we compute for the first time the local relativistic lightcone effects on the bispectrum, which come from Doppler and gravitational potential contributions. For the galaxy bispectrum, the problem is much more complex than for the power spectrum, since we need the lightcone corrections at second order. Mode-coupling contributions at second order mean that relativistic corrections can be non-negligible at smaller scales than in the case of the power spectrum. In a primordial Gaussian universe, we show that the local lightcone projection effects for squeezed shapes at z ∼ 1 mean that the bispectrum can differ from the Newtonian prediction by ≳ 10% when the short modes are k ≲ (50 Mpc)⁻¹. These relativistic projection effects, if ignored in the analysis of observations, could be mistaken for primordial non-Gaussianity. For upcoming surveys which probe equality scales and beyond, all relativistic lightcone effects and relativistic dynamical corrections should be included for an accurate measurement of primordial non-Gaussianity.
Are extreme events (statistically) special? (Invited)
NASA Astrophysics Data System (ADS)
Main, I. G.; Naylor, M.; Greenhough, J.; Touati, S.; Bell, A. F.; McCloskey, J.
2009-12-01
We address the generic problem of testing for scale-invariance in extreme events, i.e. are the biggest events in a population simply a scaled model of those of smaller size, or are they in some way different? Are large earthquakes for example ‘characteristic’, do they ‘know’ how big they will be before the event nucleates, or is the size of the event determined only in the avalanche-like process of rupture? In either case what are the implications for estimates of time-dependent seismic hazard? One way of testing for departures from scale invariance is to examine the frequency-size statistics, commonly used as a bench mark in a number of applications in Earth and Environmental sciences. Using frequency data however introduces a number of problems in data analysis. The inevitably small number of data points for extreme events and more generally the non-Gaussian statistical properties strongly affect the validity of prior assumptions about the nature of uncertainties in the data. The simple use of traditional least squares (still common in the literature) introduces an inherent bias to the best fit result. We show first that the sampled frequency in finite real and synthetic data sets (the latter based on the Epidemic-Type Aftershock Sequence model) converge to a central limit only very slowly due to temporal correlations in the data. A specific correction for temporal correlations enables an estimate of convergence properties to be mapped non-linearly on to a Gaussian one. Uncertainties closely follow a Poisson distribution of errors across the whole range of seismic moment for typical catalogue sizes. In this sense the confidence limits are scale-invariant. A systematic sample bias effect due to counting whole numbers in a finite catalogue makes a ‘characteristic’-looking type extreme event distribution a likely outcome of an underlying scale-invariant probability distribution. This highlights the tendency of ‘eyeball’ fits to unconsciously (but wrongly in this case) assume Gaussian errors. We develop methods to correct for these effects, and show that the current best fit maximum likelihood regression model for the global frequency-moment distribution in the digital era is a power law, i.e. mega-earthquakes continue to follow the Gutenberg-Richter trend of smaller earthquakes with no (as yet) observable cut-off or characteristic extreme event. The results may also have implications for the interpretation of other time-limited geophysical time series that exhibit power-law scaling.
Testing for scale-invariance in extreme events, with application to earthquake occurrence
NASA Astrophysics Data System (ADS)
Main, I.; Naylor, M.; Greenhough, J.; Touati, S.; Bell, A.; McCloskey, J.
2009-04-01
We address the generic problem of testing for scale-invariance in extreme events, i.e. are the biggest events in a population simply a scaled model of those of smaller size, or are they in some way different? Are large earthquakes for example ‘characteristic', do they ‘know' how big they will be before the event nucleates, or is the size of the event determined only in the avalanche-like process of rupture? In either case what are the implications for estimates of time-dependent seismic hazard? One way of testing for departures from scale invariance is to examine the frequency-size statistics, commonly used as a bench mark in a number of applications in Earth and Environmental sciences. Using frequency data however introduces a number of problems in data analysis. The inevitably small number of data points for extreme events and more generally the non-Gaussian statistical properties strongly affect the validity of prior assumptions about the nature of uncertainties in the data. The simple use of traditional least squares (still common in the literature) introduces an inherent bias to the best fit result. We show first that the sampled frequency in finite real and synthetic data sets (the latter based on the Epidemic-Type Aftershock Sequence model) converge to a central limit only very slowly due to temporal correlations in the data. A specific correction for temporal correlations enables an estimate of convergence properties to be mapped non-linearly on to a Gaussian one. Uncertainties closely follow a Poisson distribution of errors across the whole range of seismic moment for typical catalogue sizes. In this sense the confidence limits are scale-invariant. A systematic sample bias effect due to counting whole numbers in a finite catalogue makes a ‘characteristic'-looking type extreme event distribution a likely outcome of an underlying scale-invariant probability distribution. This highlights the tendency of ‘eyeball' fits unconsciously (but wrongly in this case) to assume Gaussian errors. We develop methods to correct for these effects, and show that the current best fit maximum likelihood regression model for the global frequency-moment distribution in the digital era is a power law, i.e. mega-earthquakes continue to follow the Gutenberg-Richter trend of smaller earthquakes with no (as yet) observable cut-off or characteristic extreme event. The results may also have implications for the interpretation of other time-limited geophysical time series that exhibit power-law scaling.
Comparing fixed and variable-width Gaussian networks.
Kůrková, Věra; Kainen, Paul C
2014-09-01
The role of width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-functions (RBFs) where both widths and centers vary and Gaussian kernel networks which have fixed widths but varying centers. The effect of width on functional equivalence, universal approximation property, and form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described and growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Han, Jianguang; Wang, Yun; Yu, Changqing; Chen, Peng
2017-02-01
An approach for extracting angle-domain common-image gathers (ADCIGs) from anisotropic Gaussian beam prestack depth migration (GB-PSDM) is presented in this paper. The propagation angle is calculated in the process of migration using the real-valued traveltime information of the Gaussian beam. Based on the above, we further investigate the effects of anisotropy on GB-PSDM, where the corresponding ADCIGs are extracted to assess the quality of migration images. The test results of the VTI syncline model and the TTI thrust sheet model show that the anisotropic parameters ε and δ and the tilt angle θ have a great influence on the accuracy of the migrated image in anisotropic media, and ignoring any one of them will cause obvious imaging errors. The anisotropic GB-PSDM with the true anisotropic parameters can obtain more accurate seismic images of subsurface structures in anisotropic media.
Density implications of shift compensation postprocessing in holographic storage systems
NASA Astrophysics Data System (ADS)
Menetrier, Laure; Burr, Geoffrey W.
2003-02-01
We investigate the effect of data page misregistration, and its subsequent correction in postprocessing, on the storage density of holographic data storage systems. A numerical simulation is used to obtain the bit-error rate as a function of hologram aperture, page misregistration, pixel fill factors, and Gaussian additive intensity noise. Postprocessing of simulated data pages is performed by a nonlinear pixel shift compensation algorithm [Opt. Lett. 26, 542 (2001)]. The performance of this algorithm is analyzed in the presence of noise by determining the achievable areal density. The impact of inaccurate measurements of page misregistration is also investigated. Results show that the shift-compensation algorithm can provide almost complete immunity to page misregistration, although at some penalty to the baseline areal density offered by a system with zero tolerance to misalignment.
A Practical Model of Quartz Crystal Microbalance in Actual Applications.
Huang, Xianhe; Bai, Qingsong; Hu, Jianguo; Hou, Dong
2017-08-03
A practical model of the quartz crystal microbalance (QCM) is presented, which considers both the Gaussian distribution characteristic of the mass sensitivity and the influence of the electrodes on the mass sensitivity. The equivalent mass sensitivities of 5 MHz and 10 MHz AT-cut QCMs with different-sized electrodes were calculated according to this practical model. The equivalent mass sensitivity of this practical model is different from Sauerbrey's mass sensitivity, and the error between them increases sharply as the electrode radius decreases. A series of experiments in which rigid gold films were plated onto QCMs was carried out, and the experimental results showed that this practical model is more accurate than the classical Sauerbrey equation. The practical model based on the equivalent mass sensitivity is convenient and accurate in actual measurements.
NASA Technical Reports Server (NTRS)
Reimers, J. R.; Heller, E. J.
1985-01-01
The exact thermal rotational spectrum of a two-dimensional rigid rotor is obtained using Gaussian wave packet dynamics. The spectrum is obtained by propagating, without approximation, infinite sets of Gaussian wave packets. These sets are constructed so that collectively they have the correct periodicity, and indeed, are coherent states appropriate to this problem. Also, simple, almost classical, approximations to full wave packet dynamics are shown to give results which are either exact or very nearly exact. Advantages of the use of Gaussian wave packet dynamics over conventional linear response theory are discussed.
NASA Technical Reports Server (NTRS)
Kogut, A.; Banday, A. J.; Bennett, C. L.; Hinshaw, G.; Lubin, P. M.; Smoot, G. F.
1995-01-01
We use the two-point correlation function of the extrema points (peaks and valleys) in the Cosmic Background Explorer (COBE) Differential Microwave Radiometers (DMR) 2 year sky maps as a test for non-Gaussian temperature distribution in the cosmic microwave background anisotropy. A maximum-likelihood analysis compares the DMR data to n = 1 toy models whose random-phase spherical harmonic components a(sub lm) are drawn from either Gaussian, chi-square, or log-normal parent populations. The likelihood of the 53 GHz (A+B)/2 data is greatest for the exact Gaussian model. There is less than 10% chance that the non-Gaussian models tested describe the DMR data, limited primarily by type II errors in the statistical inference. The extrema correlation function is a stronger test for this class of non-Gaussian models than topological statistics such as the genus.
Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G
2013-10-01
Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for the use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the used data. To estimate the values of the adjustable parameters an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions which are most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented using MATLAB. To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. Function fit parameters and their standard error estimated by using SAAM numerical and NUKFIT showed differences of <1%. The differences for the time-integrated activity coefficients were also <1% (standard error between 0.4% and 3%). In general, the application of the software is user-friendly and the results are mathematically correct and reproducible. An application of NUKFIT is presented for three different clinical examples. The software tool with its underlying methodology can be employed to objectively and reproducibly estimate the time integrated activity coefficient and its standard error for most time activity data in molecular radiotherapy.
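As a rough sketch of the workflow described above (fit a sum of exponentials, integrate it analytically, and propagate the fit uncertainty with Gaussian error propagation), the following Python fragment uses scipy's curve_fit on made-up time-activity data; the function selection by the corrected Akaike information criterion and the error-model handling of NUKFIT are omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, l1, a2, l2):
    return a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t)

t = np.array([1.0, 4.0, 24.0, 48.0, 96.0, 168.0])        # h (made-up sampling times)
a = np.array([0.58, 0.53, 0.32, 0.20, 0.10, 0.05])       # fraction of injected activity

p, cov = curve_fit(biexp, t, a, p0=[0.5, 0.05, 0.2, 0.01], maxfev=10000)
a1, l1, a2, l2 = p
tiac = a1 / l1 + a2 / l2                                 # analytic integral over [0, inf)

grad = np.array([1 / l1, -a1 / l1**2, 1 / l2, -a2 / l2**2])
se = np.sqrt(grad @ cov @ grad)                          # Gaussian error propagation
print(f"time-integrated activity coefficient: {tiac:.1f} +/- {se:.1f} h")
```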
Non-Gaussianity and Excursion Set Theory: Halo Bias
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adshead, Peter; Baxter, Eric J.; Dodelson, Scott
2012-09-01
We study the impact of primordial non-Gaussianity generated during inflation on the bias of halos using excursion set theory. We recapture the familiar result that the bias scales as $$k^{-2}$$ on large scales for local type non-Gaussianity but explicitly identify the approximations that go into this conclusion and the corrections to it. We solve the more complicated problem of non-spherical halos, for which the collapse threshold is scale dependent.
Caravaca-Arens, Esteban; de Fez, Dolores; Blanes-Mompó, Francisco J.
2017-01-01
Purpose: To analyze the errors associated with corneal power calculation using the keratometric approach in keratoconus eyes after accelerated corneal collagen crosslinking (CXL) surgery and to obtain a model for the estimation of an adjusted corneal refractive index (nkadj) minimizing such errors. Methods: Potential differences (ΔPc) between keratometric (Pk) and Gaussian corneal power (PcGauss) were simulated. Three algorithms based on the use of nkadj for the estimation of an adjusted keratometric corneal power (Pkadj) were developed. The agreement between Pk(1.3375) (keratometric power using the keratometric index of 1.3375), PcGauss, and Pkadj was evaluated. The validity of the algorithm developed was investigated in 21 keratoconus eyes undergoing accelerated CXL. Results: Pk(1.3375) overestimated corneal power by between 0.3 and 3.2 D in theoretical simulations and between 0.8 and 2.9 D in the clinical study (ΔPc). Three linear equations were defined for nkadj to be used for different ranges of r1c. In the clinical study, differences between Pkadj and PcGauss did not exceed ±0.8 D, in contrast to the larger errors obtained with nk = 1.3375. No statistically significant differences were found between Pkadj and PcGauss (p > 0.05), whereas the differences between Pk(1.3375) and Pkadj were statistically significant (p < 0.001). Conclusions: The use of the keratometric approach in keratoconus eyes after accelerated CXL can lead to significant clinical errors. These errors can be minimized with an adjusted keratometric approach. PMID:29201459
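For context on the Pk versus PcGauss difference discussed above, here is a small sketch contrasting the single-surface keratometric formula with the Gaussian (thick-lens) corneal power. The refractive indices, corneal thickness and radii are generic textbook-style assumptions rather than values from the study, and the paper's adjusted-index algorithms are not reproduced.

```python
# Minimal sketch contrasting keratometric and Gaussian (thick-lens) corneal power.
# Assumed indices: air 1.000, corneal stroma n_c = 1.376, aqueous n_aq = 1.336.
def keratometric_power(r1c_mm, nk=1.3375):
    return (nk - 1.0) / (r1c_mm / 1000.0)          # dioptres

def gaussian_power(r1c_mm, r2c_mm, d_um=540.0, n_c=1.376, n_aq=1.336):
    p1 = (n_c - 1.0) / (r1c_mm / 1000.0)           # anterior surface power
    p2 = (n_aq - n_c) / (r2c_mm / 1000.0)          # posterior surface power
    d = d_um * 1e-6                                # corneal thickness in metres
    return p1 + p2 - (d / n_c) * p1 * p2           # thick-lens (Gaussian) equation

r1, r2 = 7.2, 5.9                                  # illustrative steep radii (mm)
print(keratometric_power(r1), gaussian_power(r1, r2))
```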
Antunes, Sofia; Esposito, Antonio; Palmisano, Anna; Colantoni, Caterina; Cerutti, Sergio; Rizzo, Giovanna
2016-05-01
Extraction of the cardiac surfaces of interest from multi-detector computed tomographic (MDCT) data is a pre-requisite step for cardiac analysis, as well as for image guidance procedures. Most of the existing methods need manual corrections, which is time-consuming. We present a fully automatic segmentation technique for the extraction of the right ventricle, left ventricular endocardium and epicardium from MDCT images. The method consists of a 3D level set surface evolution approach coupled to a new stopping function based on a multiscale directional second derivative Gaussian filter, which is able to stop propagation precisely on the real boundary of the structures of interest. We validated the segmentation method on 18 MDCT volumes from healthy and pathologic subjects using manual segmentation performed by a team of expert radiologists as the gold standard. Segmentation errors were assessed for each structure, resulting in a surface-to-surface mean error below 0.5 mm and a percentage of surface distances with errors of less than 1 mm above 80%. Moreover, in comparison to other segmentation approaches already proposed in previous work, our method presented improved accuracy (with the percentage of surface distances with errors of less than 1 mm increased by 8-20% for all structures). The obtained results suggest that our approach is accurate and effective for the segmentation of ventricular cavities and myocardium from MDCT images.
Capacity and optimal collusion attack channels for Gaussian fingerprinting games
NASA Astrophysics Data System (ADS)
Wang, Ying; Moulin, Pierre
2007-02-01
In content fingerprinting, the same media covertext - image, video, audio, or text - is distributed to many users. A fingerprint, a mark unique to each user, is embedded into each copy of the distributed covertext. In a collusion attack, two or more users may combine their copies in an attempt to "remove" their fingerprints and forge a pirated copy. To trace the forgery back to members of the coalition, we need fingerprinting codes that can reliably identify the fingerprints of those members. Researchers have been focusing on designing or testing fingerprints for Gaussian host signals and the mean square error (MSE) distortion under some classes of collusion attacks, in terms of the detector's error probability in detecting collusion members. For example, under the assumptions of Gaussian fingerprints and Gaussian attacks (the fingerprinted signals are averaged and then the result is passed through a Gaussian test channel), Moulin and Briassouli1 derived optimal strategies in a game-theoretic framework that uses the detector's error probability as the performance measure for a binary decision problem (whether a user participates in the collusion attack or not); Stone2 and Zhao et al. 3 studied average and other non-linear collusion attacks for Gaussian-like fingerprints; Wang et al. 4 stated that the average collusion attack is the most efficient one for orthogonal fingerprints; Kiyavash and Moulin 5 derived a mathematical proof of the optimality of the average collusion attack under some assumptions. In this paper, we also consider Gaussian cover signals, the MSE distortion, and memoryless collusion attacks. We do not make any assumption about the fingerprinting codes used other than an embedding distortion constraint. Also, our only assumptions about the attack channel are an expected distortion constraint, a memoryless constraint, and a fairness constraint. That is, the colluders are allowed to use any arbitrary nonlinear strategy subject to the above constraints. Under those constraints on the fingerprint embedder and the colluders, fingerprinting capacity is obtained as the solution of a mutual-information game involving probability density functions (pdf's) designed by the embedder and the colluders. We show that the optimal fingerprinting strategy is a Gaussian test channel where the fingerprinted signal is the sum of an attenuated version of the cover signal plus a Gaussian information-bearing noise, and the optimal collusion strategy is to average fingerprinted signals possessed by all the colluders and pass the averaged copy through a Gaussian test channel. The capacity result and the optimal strategies are the same for both the private and public games. In the former scenario, the original covertext is available to the decoder, while in the latter setup, the original covertext is available to the encoder but not to the decoder.
Predicting Error Bars for QSAR Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeter, Timon; Technische Universitaet Berlin, Department of Computer Science, Franklinstrasse 28/29, 10587 Berlin; Schwaighofer, Anton
2007-09-18
Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from the last months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches.
Plechawska, Małgorzata; Polańska, Joanna
2009-01-01
This article presents a method for the processing of mass spectrometry data. Mass spectra are modelled with Gaussian mixture models. Every peak of the spectrum is represented by a single Gaussian, whose parameters describe the location, height and width of the corresponding peak. A custom implementation of the Expectation-Maximisation algorithm was used to perform all calculations. Errors were estimated with a virtual mass spectrometer, a tool originally designed to generate sets of spectra with defined parameters.
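As a generic illustration only (the article's EM variant fits the spectrum signal directly), the sketch below approximates a two-peak synthetic spectrum by intensity-weighted resampling followed by a standard Gaussian-mixture EM fit from scikit-learn; all peak positions and widths are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
mz = np.linspace(1000.0, 1100.0, 2000)
spectrum = (3.0 * np.exp(-0.5 * ((mz - 1030.0) / 2.0) ** 2)
            + 1.5 * np.exp(-0.5 * ((mz - 1065.0) / 3.0) ** 2))

# crude intensity-weighted resampling so a standard (sample-based) EM fit applies
samples = rng.choice(mz, size=20000, p=spectrum / spectrum.sum()).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
print("locations:", gmm.means_.ravel())
print("widths   :", np.sqrt(gmm.covariances_).ravel())
print("weights  :", gmm.weights_)
```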
A novel QC-LDPC code based on the finite field multiplicative group for optical communications
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Xu, Liang; Tong, Qing-zhen
2013-09-01
A novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the finite field multiplicative group, which offers easier construction, more flexible adjustment of code length and code rate, and lower encoding/decoding complexity. Moreover, a regular QC-LDPC(5334,4962) code is constructed. The simulation results show that the constructed QC-LDPC(5334,4962) code achieves better error-correction performance over the additive white Gaussian noise (AWGN) channel with iterative sum-product algorithm (SPA) decoding. At a bit error rate (BER) of 10⁻⁶, the net coding gain (NCG) of the constructed QC-LDPC(5334,4962) code is 1.8 dB, 0.9 dB and 0.2 dB higher than that of the classic RS(255,239) code in ITU-T G.975, the LDPC(32640,30592) code in ITU-T G.975.1 and the SCG-LDPC(3969,3720) code constructed by the random method, respectively. It is therefore more suitable for optical communication systems.
Recurrent Neural Network Applications for Astronomical Time Series
NASA Astrophysics Data System (ADS)
Protopapas, Pavlos
2017-06-01
The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize for irregular time series. In this talk, I will describe two Recurrent Neural Network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations using the error estimates from astronomical light curves. In addition to this, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to correctly set hyperparameters for a stable and performant solution: we circumvent the difficulty of manual tuning by optimizing ESN hyperparameters using Bayesian optimization with Gaussian Process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of the tuning procedure.
Svendsen, M B S; Bushnell, P G; Christensen, E A F; Steffensen, J F
2016-01-01
As intermittent-flow respirometry has become a common method for the determination of resting metabolism or standard metabolic rate (SMR), this study investigated how much of the variability seen in the experiments was due to measurement error. Experiments simulated different constant oxygen consumption rates (M˙O2 ) of a fish, by continuously injecting anoxic water into a respirometer, altering the injection rate to correct for the washout error. The effect of respirometer-to-fish volume ratio (RFR) on SMR measurement and variability was also investigated, using the simulated constant M˙O2 and the M˙O2 of seven roach Rutilus rutilus in respirometers of two different sizes. The results show that higher RFR increases measurement variability but does not change the mean SMR established using a double Gaussian fit. Further, the study demonstrates that the variation observed when determining oxygen consumption rates of fishes in systems with reasonable RFRs mainly comes from the animal, not from the measuring equipment. © 2016 The Fisheries Society of the British Isles.
Performance Bounds on Two Concatenated, Interleaved Codes
NASA Technical Reports Server (NTRS)
Moision, Bruce; Dolinar, Samuel
2010-01-01
A method has been developed for computing bounds on the performance of a code comprised of two linear binary codes generated by two encoders serially concatenated through an interleaver. Originally intended for use in evaluating the performances of some codes proposed for deep-space communication links, the method can also be used in evaluating the performances of short-block-length codes in other applications. The method applies, more specifically, to a communication system in which the following processes take place: At the transmitter, the original binary information that one seeks to transmit is first processed by an encoder into an outer code (Co) characterized by, among other things, a pair of numbers (n,k), where n (n > k) is the total number of code bits associated with k information bits and n - k bits are used for correcting or at least detecting errors. Next, the outer code is processed through either a block or a convolutional interleaver. In the block interleaver, the words of the outer code are processed in blocks of I words. In the convolutional interleaver, the interleaving operation is performed bit-wise in N rows with delays that are multiples of B bits. The output of the interleaver is processed through a second encoder to obtain an inner code (Ci) characterized by (ni,ki). The output of the inner code is transmitted over an additive-white-Gaussian-noise channel characterized by a symbol signal-to-noise ratio (SNR) Es/No and a bit SNR Eb/No. At the receiver, an inner decoder generates estimates of bits. Depending on whether a block or a convolutional interleaver is used at the transmitter, the sequence of estimated bits is processed through a block or a convolutional de-interleaver, respectively, to obtain estimates of code words. Then the estimates of the code words are processed through an outer decoder, which generates estimates of the original information along with flags indicating which estimates are presumed to be correct and which are found to be erroneous. From the perspective of the present method, the topic of major interest is the performance of the communication system as quantified in the word-error rate and the undetected-error rate as functions of the SNRs and the total latency of the interleaver and inner code. The method is embodied in equations that describe bounds on these functions. Throughout the derivation of the equations that embody the method, it is assumed that the decoder for the outer code corrects any error pattern of t or fewer errors, detects any error pattern of s or fewer errors, may detect some error patterns of more than s errors, and does not correct any patterns of more than t errors. Because a mathematically complete description of the equations that embody the method and of the derivation of the equations would greatly exceed the space available for this article, it must suffice to summarize by reporting that the derivation includes consideration of several complex issues, including relationships between latency and memory requirements for block and convolutional codes, burst error statistics, enumeration of error-event intersections, and effects of different interleaving depths. In a demonstration, the method was used to calculate bounds on the performances of several communication systems, each based on serial concatenation of a (63,56) expurgated Hamming code with a convolutional inner code through a convolutional interleaver.
The bounds calculated by use of the method were compared with results of numerical simulations of performances of the systems to show the regions where the bounds are tight (see figure).
Zollanvari, Amin; Dougherty, Edward R
2014-06-01
The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits
NASA Astrophysics Data System (ADS)
Hoogland, Jiri; Kleiss, Ronald
1997-04-01
In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
Gaussian copula as a likelihood function for environmental models
NASA Astrophysics Data System (ADS)
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error-generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in past forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions, and they could help us to better capture the statistical properties of errors and make more reliable predictions.
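A minimal sketch of the idea, using stand-in residuals and omitting the marginal density terms of the full semiparametric likelihood: empirical margins map the errors to Gaussian scores, whose correlation matrix defines the Gaussian copula used to score a new error vector. Everything here is illustrative, not the authors' implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
past_err = rng.gamma(2.0, 1.0, size=(500, 3)) - 2.0      # skewed stand-in residuals

# empirical CDF -> uniform scores -> Gaussian scores -> copula correlation matrix
u = (stats.rankdata(past_err, axis=0) - 0.5) / len(past_err)
z = stats.norm.ppf(u)
corr = np.corrcoef(z, rowvar=False)

def copula_loglik(e):
    """Gaussian-copula log density of one 3-dimensional error vector e
    (marginal density terms of the full likelihood are omitted here)."""
    ue = np.array([stats.percentileofscore(past_err[:, j], e[j]) / 100.0 for j in range(3)])
    ue = np.clip(ue, 1e-3, 1 - 1e-3)
    ze = stats.norm.ppf(ue)
    return (stats.multivariate_normal(mean=np.zeros(3), cov=corr).logpdf(ze)
            - stats.norm.logpdf(ze).sum())

print(copula_loglik(past_err[0]))
```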
Experimental generation of Laguerre-Gaussian beam using digital micromirror device.
Ren, Yu-Xuan; Li, Ming; Huang, Kun; Wu, Jian-Guang; Gao, Hong-Fang; Wang, Zi-Qiang; Li, Yin-Mei
2010-04-01
A digital micromirror device (DMD) modulates laser intensity through computer control of the device. We experimentally investigate the performance of the modulation property of a DMD and optimize the modulation procedure through image correction. Furthermore, Laguerre-Gaussian (LG) beams with different topological charges are generated by projecting a series of forklike gratings onto the DMD. We measure the field distribution with and without correction, the energy of LG beams with different topological charges, and the polarization property in sequence. Experimental results demonstrate that it is possible to generate LG beams with a DMD that allows the use of a high-intensity laser with proper correction to the input images, and that the polarization state of the LG beam differs from that of the input beam.
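As an illustration of the forklike gratings mentioned above, the sketch below builds a binary fork-grating pattern whose first diffraction order carries an l-fold azimuthal phase; the charge, period and grid size are arbitrary choices, and the DMD-specific image corrections described in the paper are not included.

```python
import numpy as np

l = 2                                        # topological charge (illustrative)
n = 1024                                     # square pixel grid (DMD-sized crop)
period = 16                                  # grating period in pixels
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
phi = np.arctan2(y, x)                       # azimuthal angle

phase = 2.0 * np.pi * x / period + l * phi   # blazed carrier plus l-fold dislocation
fork = (np.mod(phase, 2.0 * np.pi) < np.pi).astype(np.uint8)   # binary mirror states
# 'fork' can be displayed on the DMD; the first diffraction order acquires an
# exp(i*l*phi) phase and approximates an LG beam after spatial filtering.
```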
Nernst effect from fluctuating pairs in the pseudogap phase of the cuprates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levchenko, A.; Norman, M. R.; Varlamov, A. A.
2011-01-31
The observation of a large Nernst signal in cuprates above the superconducting transition temperature has attracted much attention. A potential explanation is that it originates from superconducting fluctuations. Although the Nernst signal is indeed consistent with Gaussian fluctuations for overdoped cuprates, Gaussian theory fails to describe the temperature dependence seen for underdoped cuprates. Here, we consider the vertex correction to Gaussian theory resulting from the pseudogap. This yields a Nernst signal in good agreement with the data.
Elegant Ince-Gaussian breathers in strongly nonlocal nonlinear media
NASA Astrophysics Data System (ADS)
Bai, Zhi-Yong; Deng, Dong-Mei; Guo, Qi
2012-06-01
A novel class of optical breathers, called elegant Ince-Gaussian breathers, is presented in this paper. They are exact analytical solutions of Snyder and Mitchell's model in an elliptic coordinate system, and their transverse structures are described by Ince polynomials with complex arguments and a Gaussian function. We provide convincing evidence for the correctness of the solutions and the existence of the breathers by comparing the analytical solutions with numerical simulations of the nonlocal nonlinear Schrödinger equation.
Correction Factor for Gaussian Deconvolution of Optically Thick Linewidths in Homogeneous Sources
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Bhatia, A. K.
1999-01-01
Optically thick, non-Gaussian emission line profiles convoluted with Gaussian instrumental profiles are constructed and are deconvoluted on the usual Gaussian basis to examine the resulting departure from accuracy in "measured" linewidths. It is found that "measured" linewidths underestimate the true linewidths of optically thick lines by a factor which depends on the resolution factor r ≅ (Doppler width)/(instrumental width) and on the optical thickness τ0. An approximating expression is obtained for this factor, applicable at least in the range 0 ≤ τ0 ≤ 10, which can provide estimates of the true linewidth and optical thickness.
Kernel K-Means Sampling for Nyström Approximation.
He, Li; Zhang, Hong
2018-05-01
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest the use of kernel k-means sampling, which is shown in our work to minimize the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius-norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
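The following sketch illustrates the landmark-based Nyström approximation itself, using ordinary k-means centers in input space as a stand-in for the paper's kernel k-means centers; it relies on scikit-learn and NumPy, and all parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(7)
X = rng.normal(size=(1500, 10))
m, gamma = 50, 0.1                                   # landmarks and RBF width (illustrative)

centers = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_
C = rbf_kernel(X, centers, gamma=gamma)              # K_nm
W = rbf_kernel(centers, centers, gamma=gamma)        # K_mm
K_nystrom = C @ np.linalg.pinv(W) @ C.T              # rank-m approximation of K

K_exact = rbf_kernel(X, X, gamma=gamma)
rel_err = np.linalg.norm(K_exact - K_nystrom, "fro") / np.linalg.norm(K_exact, "fro")
print(f"relative Frobenius error: {rel_err:.3e}")
```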
Fixing convergence of Gaussian belief propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jason K; Bickson, Danny; Dolev, Danny
Gaussian belief propagation (GaBP) is an iterative message-passing algorithm for inference in Gaussian graphical models. It is known that when GaBP converges it converges to the correct MAP estimate of the Gaussian random vector, and simple sufficient conditions for its convergence have been established. In this paper we develop a double-loop algorithm for forcing convergence of GaBP. Our method computes the correct MAP estimate even in cases where standard GaBP would not have converged. We further extend this construction to compute least-squares solutions of over-constrained linear systems. We believe that our construction has numerous applications, since the GaBP algorithm is linked to the solution of linear systems of equations, which is a fundamental problem in computer science and engineering. As a case study, we discuss the linear detection problem. We show that using our new construction, we are able to force convergence of Montanari's linear detection algorithm, in cases where it would originally fail. As a consequence, we are able to increase significantly the number of users that can transmit concurrently.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-15
In order to improve the performance of the hard-decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce the decoding complexity, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bits corresponding to the erroneous code word are flipped multiple times, searched in order of the most likely error probability, until the correct code word is found. Simulation results show that the performance of one of the improved schemes is better than that of the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10⁻⁵ over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
Beyond hypercorrection: remembering corrective feedback for low-confidence errors.
Griffiths, Lauren; Higham, Philip A
2018-02-01
Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filipuzzi, M; Garrigo, E; Venencia, C
2014-06-01
Purpose: To calculate the spatial response function of various radiation detectors, to evaluate its dependence on the field size and to analyze the correction of small-field profiles by deconvolution techniques. Methods: Crossline profiles were measured on a Novalis Tx 6 MV beam with an HDMLC. The configuration setup was SSD = 100 cm and depth = 5 cm. Five fields were studied (200×200 mm2, 100×100 mm2, 20×20 mm2, 10×10 mm2 and 5×5 mm2) and measurements were made with passive detectors (EBT3 radiochromic films and TLD700 thermoluminescent detectors), ionization chambers (PTW30013, PTW31003, CC04 and PTW31016) and diodes (PTW60012 and IBA SFD). The results of the passive detectors were adopted as the actual beam profile. To calculate the detectors' kernels, modeled by Gaussian functions, an iterative process based on a least squares criterion was used. The deconvolutions of the measured profiles were calculated with the Richardson-Lucy method. Results: The profiles of the passive detectors corresponded with a difference in the penumbra of less than 0.1 mm. Both diodes resolve the profiles with an overestimation of the penumbra smaller than 0.2 mm. For the other detectors, response functions were calculated and resulted in Gaussian functions with a standard deviation approximately equal to the radius of the detector under study (with a variation of less than 3%). The corrected profiles resolve the penumbra with less than 1% error. Major discrepancies were observed for cases in extreme conditions (PTW31003 and 5×5 mm2 field size). Conclusion: This work concludes that the response function of a radiation detector is independent of the field size, even for small radiation beams. The profile correction, using deconvolution techniques and response functions with a standard deviation equal to the radius of the detector, gives penumbra values with less than 1% difference from the real profile. The implementation of this technique allows estimating the real profile, free from the effects of the detector used for the acquisition.
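A minimal one-dimensional sketch of the correction idea, assuming a Gaussian detector kernel whose sigma equals a nominal detector radius and a plain Richardson-Lucy iteration; detector names, field sizes and the least-squares kernel estimation from the study are not reproduced.

```python
import numpy as np

def richardson_lucy(measured, psf, n_iter=50):
    """Plain 1-D Richardson-Lucy deconvolution (all-positive signals assumed)."""
    est = np.full_like(measured, measured.mean())
    psf_m = psf[::-1]
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        est = est * np.convolve(measured / np.maximum(conv, 1e-12), psf_m, mode="same")
    return est

# Gaussian detector kernel with sigma equal to an assumed detector radius (mm)
dx, sigma = 0.1, 2.0
xk = np.arange(-4 * sigma, 4 * sigma + dx, dx)
kernel = np.exp(-0.5 * (xk / sigma) ** 2)
kernel /= kernel.sum()

x = np.arange(-50.0, 50.0, dx)
true_profile = 1.0 * ((x > -25) & (x < 25)) + 0.02        # idealised field profile
measured = np.convolve(true_profile, kernel, mode="same") # detector-broadened profile
recovered = richardson_lucy(measured, kernel)             # deconvolved estimate
```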
Insar Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (quad)
NASA Astrophysics Data System (ADS)
Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.
2018-04-01
Unwrapping error is a common error in InSAR processing, which can seriously degrade the accuracy of the monitoring results. Based on a gross error correction method, quasi-accurate detection (QUAD), a method for the automatic correction of unwrapping errors is established in this paper. This method identifies and corrects the unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented. The method is then compared with the L1-norm method on simulated data. Results show that both methods can effectively suppress the unwrapping error when the ratio of unwrapping errors is low, and that the two methods can complement each other when the ratio of unwrapping errors is relatively high. Finally, real SAR data are used to test the phase unwrapping error correction. Results show that the new method can successfully correct phase unwrapping errors in practical applications.
Analysis of quantum error correction with symmetric hypergraph states
NASA Astrophysics Data System (ADS)
Wagner, T.; Kampermann, H.; Bruß, D.
2018-03-01
Graph states have been used to construct quantum error correction codes for independent errors. Hypergraph states generalize graph states, and symmetric hypergraph states have been shown to allow for the correction of correlated errors. In this paper, it is shown that symmetric hypergraph states are not useful for the correction of independent errors, at least for up to 30 qubits. Furthermore, error correction for error models with protected qubits is explored. A class of known graph codes for this scenario is generalized to hypergraph codes.
Circular Probable Error for Circular and Noncircular Gaussian Impacts
2012-09-01
[Fragment of MATLAB simulation code from the report: Monte Carlo estimation of the hit frequency within the CEP from simulated impacts, with a plot of the error exponent versus the estimated hit probability.]
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Trinh, Allan K.
2018-05-01
The neighbourhood of the largest eigenvalue λmax in the Gaussian unitary ensemble (GUE) and Laguerre unitary ensemble (LUE) is referred to as the soft edge. It is known that there exists a particular centring and scaling such that the distribution of λmax tends to a universal form, with an error term bounded by 1/N2/3. We take up the problem of computing the exact functional form of the leading error term in a large N asymptotic expansion for both the GUE and LUE—two versions of the LUE are considered, one with the parameter a fixed and the other with a proportional to N. Both settings in the LUE case allow for an interpretation in terms of the distribution of a particular weighted path length in a model involving exponential variables on a rectangular grid, as the grid size gets large. We give operator theoretic forms of the corrections, which are corollaries of knowledge of the first two terms in the large N expansion of the scaled kernel and are readily computed using a method due to Bornemann. We also give expressions in terms of the solutions of particular systems of coupled differential equations, which provide an alternative method of computation. Both characterisations are well suited to a thinned generalisation of the original ensemble, whereby each eigenvalue is deleted independently with probability (1 - ξ). In Sec. V, we investigate using simulation the question of whether upon an appropriate centring and scaling a wider class of complex Hermitian random matrix ensembles have their leading correction to the distribution of λmax proportional to 1/N2/3.
B97-3c: A revised low-cost variant of the B97-D density functional method
NASA Astrophysics Data System (ADS)
Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas; Grimme, Stefan
2018-02-01
A revised version of the well-established B97-D density functional approximation with general applicability for chemical properties of large systems is proposed. Like B97-D, it is based on Becke's power-series ansatz from 1997 and is explicitly parametrized by including the standard D3 semi-classical dispersion correction. The orbitals are expanded in a modified valence triple-zeta Gaussian basis set, which is available for all elements up to Rn. Remaining basis set errors are mostly absorbed in the modified B97 parametrization, while an established atom-pairwise short-range potential is applied to correct for the systematically too long bonds of main group elements which are typical for most semi-local density functionals. The new composite scheme (termed B97-3c) completes the hierarchy of "low-cost" electronic structure methods, which are all mainly free of basis set superposition error and account for most interactions in a physically sound and asymptotically correct manner. B97-3c yields excellent molecular and condensed phase geometries, similar to most hybrid functionals evaluated in a larger basis set expansion. Results on the comprehensive GMTKN55 energy database demonstrate its good performance for main group thermochemistry, kinetics, and non-covalent interactions, when compared to functionals of the same class. This also transfers to metal-organic reactions, which is a major area of applicability for semi-local functionals. B97-3c can be routinely applied to hundreds of atoms on a single processor and we suggest it as a robust computational tool, in particular, for more strongly correlated systems where our previously published "3c" schemes might be problematic.
Preserving the Boltzmann ensemble in replica-exchange molecular dynamics.
Cooke, Ben; Schmidler, Scott C
2008-10-28
We consider the convergence behavior of replica-exchange molecular dynamics (REMD) [Sugita and Okamoto, Chem. Phys. Lett. 314, 141 (1999)] based on properties of the numerical integrators in the underlying isothermal molecular dynamics (MD) simulations. We show that a variety of deterministic algorithms favored by molecular dynamics practitioners for constant-temperature simulation of biomolecules fail either to be measure invariant or irreducible, and are therefore not ergodic. We then show that REMD using these algorithms also fails to be ergodic. As a result, the entire configuration space may not be explored even in an infinitely long simulation, and the simulation may not converge to the desired equilibrium Boltzmann ensemble. Moreover, our analysis shows that for initial configurations with unfavorable energy, it may be impossible for the system to reach a region surrounding the minimum energy configuration. We demonstrate these failures of REMD algorithms for three small systems: a Gaussian distribution (simple harmonic oscillator dynamics), a bimodal mixture of Gaussians distribution, and the alanine dipeptide. Examination of the resulting phase plots and equilibrium configuration densities indicates significant errors in the ensemble generated by REMD simulation. We describe a simple modification to address these failures based on a stochastic hybrid Monte Carlo correction, and prove that this is ergodic.
Yamauchi, Kazuto; Yamamura, Kazuya; Mimura, Hidekazu; Sano, Yasuhisa; Saito, Akira; Endo, Katsuyoshi; Souvorov, Alexei; Yabashi, Makina; Tamasaku, Kenji; Ishikawa, Tetsuya; Mori, Yuzo
2005-11-10
The intensity flatness and wavefront shape in a coherent hard-x-ray beam totally reflected by flat mirrors that have surface bumps modeled by Gaussian functions were investigated by use of a wave-optical simulation code. Simulated results revealed the necessity for peak-to-valley height accuracy of better than 1 nm at a lateral resolution near 0.1 mm to remove high-contrast interference fringes and appreciable wavefront phase errors. Three mirrors that had different surface qualities were tested at the 1 km-long beam line at the SPring-8/Japan Synchrotron Radiation Research Institute. Interference fringes faded when the surface figure was corrected below the subnanometer level to a spatial resolution close to 0.1 mm, as indicated by the simulated results.
Large-scale 3D galaxy correlation function and non-Gaussianity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raccanelli, Alvise; Doré, Olivier; Bertacca, Daniele
We investigate the properties of the 2-point galaxy correlation function at very large scales, including all geometric and local relativistic effects --- wide-angle effects, redshift space distortions, Doppler terms and Sachs-Wolfe type terms in the gravitational potentials. The general three-dimensional correlation function has a nonzero dipole and octupole, in addition to the even multipoles of the flat-sky limit. We study how corrections due to primordial non-Gaussianity and General Relativity affect the multipolar expansion, and we show that they are of similar magnitude (when f_NL is small), so that a relativistic approach is needed. Furthermore, we look at how large-scale corrections depend on the model for the growth rate in the context of modified gravity, and we discuss how a modified growth can affect the non-Gaussian signal in the multipoles.
Skewness in large-scale structure and non-Gaussian initial conditions
NASA Technical Reports Server (NTRS)
Fry, J. N.; Scherrer, Robert J.
1994-01-01
We compute the skewness of the galaxy distribution arising from the nonlinear evolution of arbitrary non-Gaussian initial conditions to second order in perturbation theory including the effects of nonlinear biasing. The result contains a term identical to that for a Gaussian initial distribution plus terms which depend on the skewness and kurtosis of the initial conditions. The results are model dependent; we present calculations for several toy models. At late times, the leading contribution from the initial skewness decays away relative to the other terms and becomes increasingly unimportant, but the contribution from initial kurtosis, previously overlooked, has the same time dependence as the Gaussian terms. Observations of a linear dependence of the normalized skewness on the rms density fluctuation therefore do not necessarily rule out initially non-Gaussian models. We also show that with non-Gaussian initial conditions the first correction to linear theory for the mean square density fluctuation is larger than for Gaussian models.
On the optimization of Gaussian basis sets
NASA Astrophysics Data System (ADS)
Petersson, George A.; Zhong, Shijun; Montgomery, John A.; Frisch, Michael J.
2003-01-01
A new procedure for the optimization of the exponents, α_j, of Gaussian basis functions, Y_lm(ϑ,φ) r^l e^(-α_j r²), is proposed and evaluated. The direct optimization of the exponents is hindered by the very strong coupling between these nonlinear variational parameters. However, expansion of the logarithms of the exponents in the orthonormal Legendre polynomials, P_k, of the index, j: ln α_j = Σ_{k=0}^{k_max} A_k P_k((2j - 2)/(N_prim - 1) - 1), yields a new set of well-conditioned parameters, A_k, and a complete sequence of well-conditioned exponent optimizations proceeding from the even-tempered basis set (k_max = 1) to a fully optimized basis set (k_max = N_prim - 1). The error relative to the exact numerical self-consistent field limit for a six-term expansion is consistently no more than 25% larger than the error for the completely optimized basis set. Thus, there is no need to optimize more than six well-conditioned variational parameters, even for the largest sets of Gaussian primitives.
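As a small illustration of this parametrization (a sketch only; the coefficients A_k below are arbitrary placeholders rather than optimized values, and numpy's Legendre routines stand in for whatever implementation was used), the exponents are generated from a few well-conditioned Legendre coefficients, with k_max = 1 reproducing an even-tempered, i.e. geometric, sequence.

import numpy as np
from numpy.polynomial import legendre

# Sketch: exponents of N_prim Gaussian primitives generated from Legendre coefficients A_k,
#   ln(alpha_j) = sum_k A_k * P_k( (2j - 2)/(N_prim - 1) - 1 ),  j = 1..N_prim.
# Truncating at k_max = 1 gives an even-tempered (geometric) set; larger k_max adds flexibility.
def exponents_from_legendre(A, n_prim):
    j = np.arange(1, n_prim + 1)
    x = (2 * j - 2) / (n_prim - 1) - 1        # maps the index j onto [-1, 1]
    return np.exp(legendre.legval(x, A))      # exponential of the Legendre series

# Illustrative coefficients: A_0 sets the geometric centre, A_1 the even-tempered ratio.
print(exponents_from_legendre([1.0, 3.0], n_prim=6))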
Robust signal recovery using the prolate spherical wave functions and maximum correntropy criterion
NASA Astrophysics Data System (ADS)
Zou, Cuiming; Kou, Kit Ian
2018-05-01
Signal recovery is one of the most important problems in signal processing. This paper proposes a novel signal recovery method based on prolate spherical wave functions (PSWFs). PSWFs are a family of special functions that have been shown to perform well in signal recovery. However, the existing PSWF-based recovery methods use the mean square error (MSE) criterion, which depends on the Gaussianity assumption for the noise distribution. For non-Gaussian noise, such as impulsive noise or outliers, the MSE criterion is sensitive and may lead to large reconstruction errors. Unlike the existing PSWF-based recovery methods, our proposed method employs the maximum correntropy criterion (MCC), which is independent of the noise distribution. The proposed method can reduce the impact of large and non-Gaussian noise. Experimental results on synthetic signals with various types of noise show that the proposed MCC-based signal recovery method is more robust against various noise types than other existing methods.
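A toy comparison of the two criteria (a sketch under simplifying assumptions: a generic cosine basis stands in for the PSWFs, and the correntropy kernel width, noise levels, and optimizer are arbitrary illustrative choices): under impulsive outliers the least-squares (MSE) fit is pulled away from the true coefficients, while maximizing the correntropy of the residuals is far less affected.

import numpy as np
from scipy.optimize import minimize

# Sketch: recover basis coefficients from noisy samples under MSE vs. MCC criteria.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
Phi = np.cos(np.pi * np.outer(t, np.arange(1, 6)))     # stand-in basis (5 functions)
c_true = np.array([1.0, -0.5, 0.3, 0.0, 0.2])
y = Phi @ c_true + 0.05 * rng.normal(size=t.size)
y[rng.choice(t.size, 10, replace=False)] += 5.0        # impulsive outliers

c_mse = np.linalg.lstsq(Phi, y, rcond=None)[0]         # MSE criterion (ordinary least squares)

sigma = 0.2                                            # correntropy kernel width (illustrative)
def neg_correntropy(c):                                # MCC: maximize mean Gaussian kernel of residuals
    e = y - Phi @ c
    return -np.mean(np.exp(-e ** 2 / (2 * sigma ** 2)))
c_mcc = minimize(neg_correntropy, c_mse, method="Nelder-Mead").x

print("MSE max coefficient error:", np.abs(c_mse - c_true).max())
print("MCC max coefficient error:", np.abs(c_mcc - c_true).max())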
Stanke, Monika; Palikot, Ewa; Kȩdziera, Dariusz; Adamowicz, Ludwik
2016-12-14
An algorithm for calculating the first-order electronic orbit-orbit magnetic interaction correction for an electronic wave function expanded in terms of all-electron explicitly correlated molecular Gaussian (ECG) functions with shifted centers is derived and implemented. The algorithm is tested in calculations concerning the H2 molecule. It is also applied in calculations for LiH and H3+ molecular systems. The implementation completes our work on the leading relativistic correction for ECGs and paves the way for very accurate ECG calculations of ground and excited potential energy surfaces (PESs) of small molecules with two and more nuclei and two and more electrons, such as HeH-, H3+, HeH2+, and LiH2+. The PESs will be used to determine rovibrational spectra of the systems.
Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors
NASA Astrophysics Data System (ADS)
Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.
2018-04-01
The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used. A versatile multi-function laser interferometer is used as the Observer to measure the machine's error functions. A systematic error map of the machine's workspace is produced from the error-function measurements, and the error map yields an error correction strategy. The article proposes a new method of forming this error correction strategy, based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
New double-byte error-correcting codes for memory systems
NASA Technical Reports Server (NTRS)
Feng, Gui-Liang; Wu, Xinen; Rao, T. R. N.
1996-01-01
Error-correcting or error-detecting codes have been used in the computer industry to increase reliability, reduce service costs, and maintain data integrity. The single-byte error-correcting and double-byte error-detecting (SbEC-DbED) codes have been successfully used in computer memory subsystems. There are many methods to construct double-byte error-correcting (DBEC) codes. In the present paper we construct a class of double-byte error-correcting codes, which are more efficient than those known to be optimum, and a decoding procedure for our codes is also considered.
Optimal application of Morrison's iterative noise removal for deconvolution. Appendices
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1987-01-01
Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Filters of various lengths are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots of error versus filter length are produced, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise is added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
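A condensed sketch of the inverse-filter construction described above (illustrative sizes and widths, no added noise, and circular convolution via the FFT throughout): the filter is the inverse DFT of the reciprocal of the DFT of the Gaussian response, and convolving the blurred data with it recovers the input up to roundoff. The narrower the response, the larger the reciprocal spectrum becomes, which is why filter length and noise level have to be studied as above.

import numpy as np

# Sketch: inverse filter = inverse DFT of the reciprocal of the DFT of the response;
# deconvolution = circular convolution of the blurred data with that filter.
n = 128
k = np.arange(n)
response = np.exp(-0.5 * ((k - n // 2) / 2.0) ** 2)
response /= response.sum()                                   # Gaussian response function

signal = np.exp(-0.5 * ((k - n // 2) / 10.0) ** 2)           # peak-type input
blurred = np.fft.ifft(np.fft.fft(signal) * np.fft.fft(response)).real

inv_filter = np.fft.ifft(1.0 / np.fft.fft(response)).real    # inverse filter
recovered = np.fft.ifft(np.fft.fft(blurred) * np.fft.fft(inv_filter)).real

print(np.max(np.abs(recovered - signal)))  # tiny without noise; any noise would be strongly amplified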
Improved slow-light performance of 10 Gb/s NRZ, PSBT and DPSK signals in fiber broadband SBS.
Yi, Lilin; Jaouen, Yves; Hu, Weisheng; Su, Yikai; Bigo, Sébastien
2007-12-10
We have demonstrated error-free operation of slow light via stimulated Brillouin scattering (SBS) in optical fiber for 10-Gb/s signals with different modulation formats, including non-return-to-zero (NRZ), phase-shaped binary transmission (PSBT) and differential phase-shift keying (DPSK). The SBS gain bandwidth is broadened by using current noise modulation of the pump laser diode. The gain shape is simply controlled by the noise density function. Super-Gaussian noise modulation of the Brillouin pump allows a flat-top and sharp-edge SBS gain spectrum, which can reduce slow-light-induced distortion in the case of a 10-Gb/s NRZ signal. The corresponding maximal delay time with error-free operation is 35 ps. We then propose the PSBT format to minimize distortions resulting from the SBS filtering effect and the dispersion accompanying slow light, because of its high spectral efficiency and strong dispersion tolerance. The sensitivity of the 10-Gb/s PSBT signal is 5.2 dB better than the NRZ case at the same 35-ps delay. A maximal delay of 51 ps with error-free operation has been achieved. Furthermore, the DPSK format is directly demodulated through a Gaussian-shaped SBS gain, which is achieved using Gaussian-noise modulation of the Brillouin pump. The maximal error-free time delay after demodulation of a 10-Gb/s DPSK signal is as high as 81.5 ps, which is the best demonstrated result for 10-Gb/s slow light.
Balancing aggregation and smoothing errors in inverse models
Turner, A. J.; Jacob, D. J.
2015-06-30
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
Balancing aggregation and smoothing errors in inverse models
NASA Astrophysics Data System (ADS)
Turner, A. J.; Jacob, D. J.
2015-01-01
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
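A compact sketch of the GMM-based reduction (an illustration under simplifying assumptions: a small synthetic 2-D field, scikit-learn's GaussianMixture, and the posterior responsibilities of the fitted Gaussians used as the radial basis functions; the paper's configuration may differ): Gaussians fitted where the field is concentrated define the basis onto which the native-resolution state is projected, and the residual of that projection is the aggregation error of the reduced state vector.

import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch: reduce a native-resolution state vector onto a small set of Gaussian basis functions.
rng = np.random.default_rng(0)
gx, gy = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
coords = np.column_stack([gx.ravel(), gy.ravel()])               # native-resolution grid
field = (np.exp(-((gx - 0.3) ** 2 + (gy - 0.7) ** 2) / 0.02)
         + 0.3 * np.exp(-((gx - 0.7) ** 2 + (gy - 0.2) ** 2) / 0.1)).ravel()

# Fit the GMM to grid points sampled in proportion to the field, so that the
# components concentrate on the major local features.
idx = rng.choice(coords.shape[0], size=5000, p=field / field.sum())
gmm = GaussianMixture(n_components=8, random_state=0).fit(coords[idx])

rbf = gmm.predict_proba(coords)                                  # (n_grid, n_components) basis
reduced, *_ = np.linalg.lstsq(rbf, field, rcond=None)            # project native state onto the basis
print("aggregation RMSE:", np.sqrt(np.mean((rbf @ reduced - field) ** 2)))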
Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irène; Comtat, Claude
2017-09-21
In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach that estimates an AC map from an averaged CT template. As an alternative, we propose to use a zero echo time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram normalized ZTE intensity and measured CT density in Hounsfield units (HU) in bone has been established thanks to a CT-MR database of 16 patients. Continuous AC maps were computed based on the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) to air and soft tissue and by using the linear relationship to generate continuous μ values for the bone. Additionally, for the purpose of comparison, four other AC maps were generated: a ZTE derived AC map with a fixed LAC for the bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue only AC map and, finally, the CT derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into HU was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of 4 mm corresponding to the PET scanner intrinsic resolution. As expected, TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR derived AC methods, reducing the quantification error between the MRAC corrected PET image and the reference CTAC corrected PET image.
NASA Astrophysics Data System (ADS)
Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irène; Comtat, Claude
2017-10-01
In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach that estimates an AC map from an averaged CT template. As an alternative, we propose to use a zero echo time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram normalized ZTE intensity and measured CT density in Hounsfield units (HU) in bone has been established thanks to a CT-MR database of 16 patients. Continuous AC maps were computed based on the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) to air and soft tissue and by using the linear relationship to generate continuous μ values for the bone. Additionally, for the purpose of comparison, four other AC maps were generated: a ZTE derived AC map with a fixed LAC for the bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue only AC map and, finally, the CT derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into HU was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of 4 mm corresponding to the PET scanner intrinsic resolution. As expected, TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR derived AC methods, reducing the quantification error between the MRAC corrected PET image and the reference CTAC corrected PET image.
Is 30-second update fast enough for convection-resolving data assimilation?
NASA Astrophysics Data System (ADS)
Miyoshi, Takemasa; Ruiz, Juan; Lien, Guo-Yuan; Teramura, Toshiki; Kondo, Keiichi; Maejima, Yasumitsu; Honda, Takumi; Otsuka, Shigenori
2017-04-01
For local severe weather forecasting at 100-m resolution with 30-minute lead time, we have been working on the "Big Data Assimilation" (BDA) effort for a super-rapid 30-second cycle of an ensemble Kalman filter. We have presented two papers with the concept and case studies (Miyoshi et al. 2016, BAMS; Proceedings of the IEEE). We focus on the non-Gaussian PDF in this study. We had hoped that we could assume a Gaussian error distribution in 30-second forecasts, before strong nonlinear dynamics distort the error distribution for rapidly changing convective storms. However, using 1000 ensemble members, the reduced-resolution version of the BDA system at 1-km grid spacing with 30-second updates showed ubiquitous, highly non-Gaussian PDFs. Although our results so far with multiple case studies were quite successful, this casts doubt on our Gaussian assumption even if the data assimilation interval is short compared with the system's chaotic time scale. We therefore pose the question of whether the 30-second update is fast enough for convection-resolving data assimilation under the Gaussian assumption. To answer this question, we aim to combine knowledge from BDA case studies, 1000-member experiments, 30-second breeding experiments, and toy-model experiments with dense and frequent observations. In this presentation, we will show the most up-to-date results of the BDA research and discuss whether the 30-second update is fast enough for convective-scale data assimilation.
NASA Astrophysics Data System (ADS)
Murillo Feo, C. A.; Martínez Martinez, L. J.; Correa Muñoz, N. A.
2016-06-01
The accuracy of locating attributes on topographic surfaces when using GPS in mountainous areas is affected by obstacles to wave propagation. As part of this research on the semi-automatic detection of landslides, we evaluate the accuracy and spatial distribution of the horizontal error in GPS positioning in the tertiary road network of six municipalities located in mountainous areas in the department of Cauca, Colombia, using geo-referencing with GPS mapping equipment and static-fast and pseudo-kinematic methods. We obtained quality parameters for the GPS surveys with differential correction, using a post-processing method. The consolidated database underwent exploratory analyses to determine the statistical distribution, a multivariate analysis to establish relationships and associations between the variables, and an analysis of the spatial variability and calculation of accuracy, considering the effect of non-Gaussian distributed errors. The evaluation of the internal validity of the data provides metrics with a confidence level of 95% between 1.24 and 2.45 m in the static-fast mode and between 0.86 and 4.2 m in the pseudo-kinematic mode. The external validity had an absolute error of 4.69 m, indicating that this descriptor is more critical than precision. Based on the ASPRS standard, the scale obtained with the evaluated equipment was on the order of 1:20000, the level of detail expected in the landslide-mapping project. Modelling the spatial variability of the horizontal errors from the empirical semi-variogram analysis showed prediction errors close to the external validity of the devices.
New decoding methods of interleaved burst error-correcting codes
NASA Astrophysics Data System (ADS)
Nakano, Y.; Kasahara, M.; Namekawa, T.
1983-04-01
A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes that constitute the interleaved code, is presented. This method makes it possible to realize a high burst error correction capability with less decoding delay. By generalizing this method, a probabilistic method of multiple (m-fold) burst error correction is obtained. After estimating the burst error positions using the syndrome correlation of subcodes that are interleaved m-fold burst-error-detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
NASA Astrophysics Data System (ADS)
Karagiannis, Dionysios; Lazanu, Andrei; Liguori, Michele; Raccanelli, Alvise; Bartolo, Nicola; Verde, Licia
2018-07-01
We forecast constraints on primordial non-Gaussianity (PNG) and bias parameters from measurements of galaxy power spectrum and bispectrum in future radio continuum and optical surveys. In the galaxy bispectrum, we consider a comprehensive list of effects, including the bias expansion for non-Gaussian initial conditions up to second order, redshift space distortions, redshift uncertainties and theoretical errors. These effects are all combined in a single PNG forecast for the first time. Moreover, we improve the bispectrum modelling over previous forecasts, by accounting for trispectrum contributions. All effects have an impact on final predicted bounds, which varies with the type of survey. We find that the bispectrum can lead to improvements up to a factor ˜5 over bounds based on the power spectrum alone, leading to significantly better constraints for local-type PNG, with respect to current limits from Planck. Future radio and photometric surveys could obtain a measurement error of σ (f_{NL}^{loc}) ≈ 0.2. In the case of equilateral PNG, galaxy bispectrum can improve upon present bounds only if significant improvements in the redshift determinations of future, large volume, photometric or radio surveys could be achieved. For orthogonal non-Gaussianity, expected constraints are generally comparable to current ones.
Target 3-D reconstruction of streak tube imaging lidar based on Gaussian fitting
NASA Astrophysics Data System (ADS)
Yuan, Qingyu; Niu, Lihong; Hu, Cuichun; Wu, Lei; Yang, Hongru; Yu, Bing
2018-02-01
Streak images obtained by the streak tube imaging lidar (STIL) contain the distance-azimuth-intensity information of a scanned target, and a 3-D reconstruction of the target can be carried out by extracting the characteristic data of multiple streak images. Noise and other factors cause significant errors in reconstructions based on simple peak detection. To obtain a more precise 3-D reconstruction, a peak detection method based on trust-region Gaussian fitting is proposed in this work. Gaussian modeling is performed on the returned wave of each single time channel of each frame; the modeling result, which effectively reduces the noise interference and possesses a unique peak, is taken as the new returned waveform, and its feature data are then extracted through peak detection. Experimental data from an aerial target are used to verify this method. This work shows that the peak detection method based on Gaussian fitting reduces the extraction error of the feature data to less than 10%; using this method to extract the feature data and reconstruct the target makes it possible to achieve a spatial resolution as fine as 30 cm in the depth direction, and improves the 3-D imaging accuracy of the STIL.
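A minimal sketch of the Gaussian-fitting peak detection (simulated single-channel waveform with arbitrary pulse parameters; scipy's trust-region least-squares fit is used as a stand-in for the trust-region fitting described above): fitting one Gaussian to the noisy return and taking the fitted centre gives a sub-bin, noise-resistant peak location, unlike the raw argmax.

import numpy as np
from scipy.optimize import curve_fit

# Sketch: model the return of one time channel as a single Gaussian pulse plus offset,
# fit it, and take the fitted centre as the peak (range) estimate.
def gaussian(t, a, t0, w, b):
    return a * np.exp(-(t - t0) ** 2 / (2 * w ** 2)) + b

rng = np.random.default_rng(3)
t = np.arange(200.0)                                   # time bins of one streak-image channel
wave = gaussian(t, 1.0, 87.3, 6.0, 0.05) + 0.15 * rng.normal(size=t.size)

p0 = [wave.max() - wave.min(), t[np.argmax(wave)], 5.0, wave.min()]   # rough initial guess
popt, _ = curve_fit(gaussian, t, wave, p0=p0, method="trf")           # trust-region fit

print("argmax peak bin:", t[np.argmax(wave)])          # raw peak detection (noise-sensitive)
print("fitted peak bin:", popt[1])                     # Gaussian-fitted peak (sub-bin, more robust)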
What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?
NASA Astrophysics Data System (ADS)
Liebovitch, Larry
1998-03-01
The longest term correlations in living systems are the information stored in DNA, which reflects the evolutionary history of an organism. The 4 bases (A,T,G,C) encode sequences of amino acids as well as locations of binding sites for proteins that regulate DNA. The fidelity of this important information is maintained by ANALOG error check mechanisms. When a single strand of DNA is replicated, the complementary base is inserted in the new strand. Sometimes the wrong base is inserted and sticks out, disrupting the phosphate backbone. The new base is not yet methylated, so repair enzymes, which slide along the DNA, can tear out the wrong base and replace it with the right one. The bases in DNA form a sequence of 4 different symbols and so the information is encoded in a DIGITAL form. All the digital codes in our society (ISBN book numbers, UPC product codes, bank account numbers, airline ticket numbers) use error checking codes, where some digits are functions of other digits to maintain the fidelity of transmitted information. Does DNA also utilize a DIGITAL error checking code to maintain the fidelity of its information and increase the accuracy of replication? That is, are some bases in DNA functions of other bases upstream or downstream? This raises the interesting mathematical problem: How does one determine whether some symbols in a sequence of symbols are a function of other symbols? It also bears on the issue of determining algorithmic complexity: What is the function that generates the shortest algorithm for reproducing the symbol sequence? The error checking codes most used in our technology are linear block codes. We developed an efficient method to test for the presence of such codes in DNA. We coded the 4 bases as (0,1,2,3) and used Gaussian elimination, modified for modulus 4, to test whether some bases are linear combinations of other bases. We used this method to analyze the base sequence in the genes from the lac operon and cytochrome C. We did not find evidence for such error correcting codes in these genes. However, we analyzed only a small amount of DNA, and if digital error correcting schemes are present in DNA, they may be more subtle than such simple linear block codes. The basic issue we raise here is how information is stored in DNA and an appreciation that digital symbol sequences, such as DNA, admit of interesting schemes to store and protect the fidelity of their information content. Liebovitch, Tao, Todorov, Levine. 1996. Biophys. J. 71:1539-1544. Supported by NIH grant EY6234.
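To make the question concrete, here is a toy test (much simpler than the modulus-4 Gaussian elimination described above, and run on random bases rather than real genes; the base coding and the codon-wise relation tested are illustrative assumptions): with A, T, G, C coded as 0-3, one can ask whether the third base of each codon is a fixed linear combination of the first two modulo 4. For random sequences no coefficient choice fits much better than chance, which is the flavor of the null result reported above.

import numpy as np
from itertools import product

# Toy test for a linear parity relation: is b3 == (c1*b1 + c2*b2 + c0) mod 4 for some
# fixed coefficients across all codons? Bases are coded A,T,G,C -> 0..3; the sequence
# here is random, standing in for a real gene.
rng = np.random.default_rng(0)
codons = rng.integers(0, 4, size=(500, 3))
b1, b2, b3 = codons[:, 0], codons[:, 1], codons[:, 2]

best = max(
    (np.mean((c1 * b1 + c2 * b2 + c0) % 4 == b3), (c1, c2, c0))
    for c1, c2, c0 in product(range(4), repeat=3)
)
print(best)  # best agreement over all 64 coefficient triples stays near the 0.25 chance level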
Quantum error-correction failure distributions: Comparison of coherent and stochastic error models
NASA Astrophysics Data System (ADS)
Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.
2017-06-01
We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for a d =3 Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.
The Relevance of Second Language Acquisition Theory to the Written Error Correction Debate
ERIC Educational Resources Information Center
Polio, Charlene
2012-01-01
The controversies surrounding written error correction can be traced to Truscott (1996) in his polemic against written error correction. He claimed that empirical studies showed that error correction was ineffective and that this was to be expected "given the nature of the correction process and the nature of language learning" (p. 328, emphasis…
Renyi entropy measures of heart rate Gaussianity.
Lake, Douglas E
2006-01-01
Sample entropy and approximate entropy are measures that have been successfully utilized to study the deterministic dynamics of heart rate (HR). A complementary stochastic point of view and a heuristic argument using the Central Limit Theorem suggests that the Gaussianity of HR is a complementary measure of the physiological complexity of the underlying signal transduction processes. Renyi entropy (or q-entropy) is a widely used measure of Gaussianity in many applications. Particularly important members of this family are differential (or Shannon) entropy (q = 1) and quadratic entropy (q = 2). We introduce the concepts of differential and conditional Renyi entropy rate and, in conjunction with Burg's theorem, develop a measure of the Gaussianity of a linear random process. Robust algorithms for estimating these quantities are presented along with estimates of their standard errors.
Zhou, Mu; Xu, Yu Bin; Ma, Lin; Tian, Shuo
2012-01-01
The expected errors of RADAR sensor networks with linear probabilistic location fingerprints inside buildings with varying Wi-Fi Gaussian strength are discussed. As far as we know, the statistical errors of equal and unequal-weighted RADAR networks have been suggested as a better way to evaluate the behavior of different system parameters and the deployment of reference points (RPs). However, up to now, there is still not enough related work on the relations between the statistical errors, system parameters, number and interval of the RPs, let alone calculating the correlated analytical expressions of concern. Therefore, in response to this compelling problem, under a simple linear distribution model, much attention will be paid to the mathematical relations of the linear expected errors, number of neighbors, number and interval of RPs, parameters in logarithmic attenuation model and variations of radio signal strength (RSS) at the test point (TP) with the purpose of constructing more practical and reliable RADAR location sensor networks (RLSNs) and also guaranteeing the accuracy requirements for the location based services in future ubiquitous context-awareness environments. Moreover, the numerical results and some real experimental evaluations of the error theories addressed in this paper will also be presented for our future extended analysis. PMID:22737027
POWERLIB: SAS/IML Software for Computing Power in Multivariate Linear Models
Johnson, Jacqueline L.; Muller, Keith E.; Slaughter, James C.; Gurka, Matthew J.; Gribbin, Matthew J.; Simpson, Sean L.
2014-01-01
The POWERLIB SAS/IML software provides convenient power calculations for a wide range of multivariate linear models with Gaussian errors. The software includes the Box, Geisser-Greenhouse, Huynh-Feldt, and uncorrected tests in the "univariate" approach to repeated measures (UNIREP), the Hotelling-Lawley Trace, Pillai-Bartlett Trace, and Wilks Lambda tests in the "multivariate" approach (MULTIREP), as well as a limited but useful range of mixed models. The familiar univariate linear model with Gaussian errors is an important special case. For estimated covariance, the software provides confidence limits for the resulting estimated power. All power and confidence limit values can be output to a SAS dataset, which can be used to easily produce plots and tables for manuscripts. PMID:25400516
Correcting for sequencing error in maximum likelihood phylogeny inference.
Kuhner, Mary K; McGill, James
2014-11-04
Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
Electron acceleration by a tightly focused Hermite-Gaussian beam: higher-order corrections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao Zhiguo; Institute of Laser Physics and Chemistry, Sichuan University, Chengdu 610064; Yang Dangxiao
2008-03-15
Taking the TEM_{1,0}-mode Hermite-Gaussian (H-G) beam as a numerical calculation example, and based on the method of the perturbation series expansion, the higher-order field corrections of H-G beams are derived and used to study the electron acceleration by a tightly focused H-G beam in vacuum. For the case of the off-axis injection the field corrections to the terms of order f^3 (f = 1/kw_0, k and w_0 being the wavenumber and waist width, respectively) are considered, and for the case of the on-axis injection the contributions of the terms of higher orders are negligible. By a suitable optimization of injection parameters the energy gain in the giga-electron-volt regime can be achieved.
Simple Form of MMSE Estimator for Super-Gaussian Prior Densities
NASA Astrophysics Data System (ADS)
Kittisuwan, Pichid
2015-04-01
The denoising methods that have become popular in recent years for additive white Gaussian noise (AWGN) are Bayesian estimation techniques, e.g., maximum a posteriori (MAP) and minimum mean square error (MMSE) estimation. For super-Gaussian prior densities, it is well known that the MMSE estimator has a complicated form. In this work, we derive the MMSE estimator with a Taylor series. We show that the proposed estimator also leads to a simple formula. An extension of this estimator to the Pearson type VII prior density is also offered. Experimental results show that the proposed approximation to the original MMSE nonlinearity is reasonably good.
Survey of Radar Refraction Error Corrections
2016-11-01
[Front matter of the report: RCC Document 266-16, Survey of Radar Refraction Error Corrections, November 2016, prepared by the Electronic Trajectory Measurements Group; Distribution A.]
Investigating task inhibition in children versus adults: A diffusion model analysis.
Schuch, Stefanie; Konrad, Kerstin
2017-04-01
One can take n-2 task repetition costs as a measure of inhibition on the level of task sets. When switching back to a Task A after only one intermediate trial (ABA task sequence), Task A is thought to still be inhibited, leading to performance costs relative to task sequences where switching back to Task A is preceded by at least two intermediary trials (CBA). The current study investigated differences in inhibitory ability between children and adults by comparing n-2 task repetition costs in children (9-11 years of age, N=32) and young adults (21-30 years of age, N=32). The mean reaction times and error rate differences between ABA and CBA sequences did not differ between the two age groups. However, diffusion model analysis revealed that different cognitive processes contribute to the inhibition effect in the two age groups: The adults, but not the children, showed a smaller drift rate in ABA than in CBA, suggesting that persisting task inhibition is associated with slower response selection in adults. In children, non-decision time was longer in ABA than in CBA, possibly reflecting longer task preparation in ABA than in CBA. In addition, Ex-Gaussian functions were fitted to the distributions of correct reaction times. In adults, the ABA-CBA difference was reflected in the exponential parameter of the distribution; in children, the ABA-CBA difference was found in the Gaussian mu parameter. Hence, Ex-Gaussian analysis, although noisier, was generally in line with diffusion model analysis. Taken together, the data suggest that the task inhibition effect found in mean performance is mediated by different cognitive processes in children versus adults. Copyright © 2016 Elsevier Inc. All rights reserved.
Edgeworth streaming model for redshift space distortions
NASA Astrophysics Data System (ADS)
Uhlemann, Cora; Kopp, Michael; Haugg, Thomas
2015-09-01
We derive the Edgeworth streaming model (ESM) for the redshift space correlation function starting from an arbitrary distribution function for biased tracers of dark matter by considering its two-point statistics and show that it reduces to the Gaussian streaming model (GSM) when neglecting non-Gaussianities. We test the accuracy of the GSM and ESM independent of perturbation theory using the Horizon Run 2 N-body halo catalog. While the monopole of the redshift space halo correlation function is well described by the GSM, higher multipoles improve upon including the leading order non-Gaussian correction in the ESM: the GSM quadrupole breaks down on scales below 30 Mpc/h whereas the ESM stays accurate to 2% within statistical errors down to 10 Mpc/h. To predict the scale-dependent functions entering the streaming model we employ convolution Lagrangian perturbation theory (CLPT) based on the dust model and local Lagrangian bias. Since dark matter halos carry an intrinsic length scale given by their Lagrangian radius, we extend CLPT to the coarse-grained dust model and consider two different smoothing approaches operating in Eulerian and Lagrangian space, respectively. The coarse graining in Eulerian space features modified fluid dynamics different from dust while the coarse graining in Lagrangian space is performed in the initial conditions with subsequent single-streaming dust dynamics, implemented by smoothing the initial power spectrum in the spirit of the truncated Zel'dovich approximation. Finally, we compare the predictions of the different coarse-grained models for the streaming model ingredients to N-body measurements and comment on the proper choice of both the tracer distribution function and the smoothing scale. Since the perturbative methods we considered are not yet accurate enough on small scales, the GSM is sufficient when applied to perturbation theory.
NASA Astrophysics Data System (ADS)
Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry
2012-05-01
Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to posterior probabilities of the models generating the forecasts, and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described with a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
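For reference, the default BMA combination that the study refines can be written in a few lines (a sketch with made-up forecasts, weights, and spreads; the paper replaces these fixed Gaussian conditional pdfs with filtered, time-variant mixture densities): the predictive pdf is a mixture of Gaussians centred on the member forecasts, weighted by the posterior model probabilities.

import numpy as np
from scipy.stats import norm

# Sketch: standard BMA predictive density as a weighted mixture of Gaussian member pdfs.
forecasts = np.array([2.1, 2.6, 1.8])      # (bias-corrected) member forecasts, e.g. discharge
weights   = np.array([0.5, 0.3, 0.2])      # posterior model probabilities from the training period
sigmas    = np.array([0.4, 0.6, 0.5])      # per-member predictive spreads

def bma_pdf(y):
    return np.sum(weights * norm.pdf(y[:, None], loc=forecasts, scale=sigmas), axis=1)

y = np.linspace(0, 5, 501)
print(np.trapz(bma_pdf(y), y))             # integrates to ~1: a proper mixture density
print(np.sum(weights * forecasts))         # BMA mean forecast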
NASA Astrophysics Data System (ADS)
Eyyuboğlu, Halil T.
2018-05-01
We examine the mode coupling in vortex beams. Mode coupling, also known as crosstalk, takes place due to the turbulent characteristics of the atmospheric communication medium. In this way, the transmitted intrinsic mode of the vortex beam leaks power to other extrinsic modes, thus preventing the correct detection of the transmitted symbol, which is usually encoded in the mode index or the orbital angular momentum state of the vortex beam. Here we investigate the normalized power mode coupling ratios of several types of vortex beams, namely, the Gaussian vortex beam, Bessel Gaussian beam, hypergeometric Gaussian beam and Laguerre Gaussian beam. It is found that smaller mode numbers lead to less mode coupling. The same is partially observed for increasing source sizes. Comparing the vortex beams amongst themselves, it is seen that the hypergeometric Gaussian beam is the one retaining the most power in the intrinsic mode during propagation, but only at the lowest mode index of unity. At higher mode indices this advantage passes to the Gaussian vortex beam.
NASA Astrophysics Data System (ADS)
Thelen, Brian J.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.
2017-04-01
In Bayesian decision theory, there has been a great amount of research into theoretical frameworks and information-theoretic quantities that can be used to provide lower and upper bounds for the Bayes error. These include well-known bounds such as Chernoff, Bhattacharyya, and J-divergence. Part of the challenge of utilizing these various metrics in practice is (i) whether they are "loose" or "tight" bounds, (ii) how they might be estimated via either parametric or non-parametric methods, and (iii) how accurate the estimates are for limited amounts of data. In general what is desired is a methodology for generating relatively tight lower and upper bounds, and then an approach to estimate these bounds efficiently from data. In this paper, we explore the so-called triangle divergence, which has been around for a while but was recently made more prominent in research on non-parametric estimation of information metrics. Part of this work is motivated by applications for quantifying fundamental information content in SAR/LIDAR data, and to help in this, we have developed a flexible multivariate modeling framework based on multivariate Gaussian copula models which can be combined with the triangle divergence framework to quantify this information and provide approximate bounds on Bayes error. In this paper we present an overview of the bounds, including those based on triangle divergence, and verify that under a number of multivariate models the upper and lower bounds derived from triangle divergence are significantly tighter than the other common bounds, and often dramatically so. We also propose some simple but effective means for computing the triangle divergence using Monte Carlo methods, and then discuss estimation of the triangle divergence from empirical data based on Gaussian copula models.
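As a small illustration of the triangle divergence itself (arbitrary Gaussian class densities and sample size; the Bayes-error bound formulas and the Gaussian-copula models are not reproduced here), the quantity D(p,q) = integral of (p - q)^2 / (p + q) can be estimated by Monte Carlo sampling from the equal-weight mixture of the two densities, along the lines proposed above.

import numpy as np
from scipy.stats import multivariate_normal

# Sketch: Monte Carlo estimate of the triangle (triangular) divergence between two
# Gaussian class densities p and q, using samples from the mixture m = (p + q)/2.
rng = np.random.default_rng(0)
p = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.3], [0.3, 1.0]])
q = multivariate_normal(mean=[1.0, 0.5], cov=[[1.2, -0.2], [-0.2, 0.8]])

n = 200_000
z = rng.random(n) < 0.5                                  # which mixture component each draw uses
x = np.where(z[:, None], p.rvs(n, random_state=rng), q.rvs(n, random_state=rng))

pp, qq = p.pdf(x), q.pdf(x)
estimate = np.mean(2 * ((pp - qq) / (pp + qq)) ** 2)     # E_m[2((p-q)/(p+q))^2] = D(p, q)
print(estimate)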
NASA Astrophysics Data System (ADS)
Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko
2015-01-01
Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon, and tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction is often performed, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green vegetation filters) affect the prediction accuracy of classification of tree species at pixel level from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in Central Florida, United States. Species classification using fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy of six species classes is about 75%.
Rate-compatible punctured convolutional codes (RCPC codes) and their applications
NASA Astrophysics Data System (ADS)
Hagenauer, Joachim
1988-04-01
The concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically with period P to obtain a family of codes with rate P/(P + l), where l can be varied between 1 and (N - 1)P. A rate-compatibility restriction on the puncturing tables ensures that all code bits of high rate codes are used by the lower-rate codes. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame. Families of RCPC codes with rates between 8/9 and 1/4 are given for memories M from 3 to 6 (8 to 64 trellis states) together with the relevant distance spectra. These codes are almost as good as the best known general convolutional codes of the respective rates. It is shown that the same Viterbi decoder can be used for all RCPC codes of the same M. The application of RCPC codes to hybrid ARQ/FEC schemes is discussed for Gaussian and Rayleigh fading channels using channel-state information to optimize throughput.
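A compact sketch of the puncturing idea (illustrative generator polynomials and puncturing tables, not the RCPC families tabulated in the paper): a rate-1/2 mother code with period P = 4 is punctured by nested tables, so every bit transmitted by a higher-rate member is also transmitted by the lower-rate members, which is the rate-compatibility restriction that makes incremental-redundancy ARQ/FEC possible.

import numpy as np

# Sketch: puncture a rate-1/2, memory-2 convolutional mother code (generators 7, 5 octal)
# with period-4, rate-compatible puncturing tables (entry 1 = transmit the bit).
G = [0o7, 0o5]

def conv_encode(bits, generators, memory=2):
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | int(b)) & ((1 << (memory + 1)) - 1)
        out.append([bin(state & g).count("1") % 2 for g in generators])
    return np.array(out)                          # shape (len(bits), N)

tables = {                                        # nested: 4/5 keeps a subset of 4/6, which keeps a subset of 4/8
    "4/8": np.array([[1, 1, 1, 1], [1, 1, 1, 1]]),
    "4/6": np.array([[1, 1, 1, 1], [1, 0, 1, 0]]),
    "4/5": np.array([[1, 1, 1, 1], [1, 0, 0, 0]]),
}

def puncture(coded, table):
    P = table.shape[1]
    keep = np.array([table[n, i % P] for i in range(coded.shape[0]) for n in range(coded.shape[1])],
                    dtype=bool)
    return coded.ravel()[keep]

bits = np.random.default_rng(0).integers(0, 2, 16)
coded = conv_encode(bits, G)
for rate, tab in tables.items():
    print(rate, len(puncture(coded, tab)), "transmitted bits")   # 32, 24, 20 bits for 16 info bits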
Kodak, Tiffany; Campbell, Vincent; Bergmann, Samantha; LeBlanc, Brittany; Kurtz-Nelson, Eva; Cariveau, Tom; Haq, Shaji; Zemantic, Patricia; Mahon, Jacob
2016-09-01
Prior research shows that learners have idiosyncratic responses to error-correction procedures during instruction. Thus, assessments that identify error-correction strategies to include in instruction can aid practitioners in selecting individualized, efficacious, and efficient interventions. The current investigation conducted an assessment to compare 5 error-correction procedures that have been evaluated in the extant literature and are common in instructional practice for children with autism spectrum disorder (ASD). Results showed that the assessment identified efficacious and efficient error-correction procedures for all participants, and 1 procedure was efficient for 4 of the 5 participants. To examine the social validity of error-correction procedures, participants selected among efficacious and efficient interventions in a concurrent-chains assessment. We discuss the results in relation to prior research on error-correction procedures and current instructional practices for learners with ASD. © 2016 Society for the Experimental Analysis of Behavior.
NASA Technical Reports Server (NTRS)
Reimers, J. R.; Heller, E. J.
1985-01-01
Exact eigenfunctions for a two-dimensional rigid rotor are obtained using Gaussian wave packet dynamics. The wave functions are obtained by propagating, without approximation, an infinite set of Gaussian wave packets that collectively have the correct periodicity, being coherent states appropriate to this rotational problem. This result leads to a numerical method for the semiclassical calculation of rovibrational, molecular eigenstates. Also, a simple, almost classical, approximation to full wave packet dynamics is shown to give exact results: this leads to an a posteriori justification of the De Leon-Heller spectral quantization method.
Comparing the Effectiveness of Error-Correction Strategies in Discrete Trial Training
ERIC Educational Resources Information Center
Turan, Michelle K.; Moroz, Lianne; Croteau, Natalie Paquet
2012-01-01
Error-correction strategies are essential considerations for behavior analysts implementing discrete trial training with children with autism. The research literature, however, is still lacking in the number of studies that compare and evaluate error-correction procedures. The purpose of this study was to compare two error-correction strategies:…
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-01
To improve the performance of the hard-decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce decoding complexity, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed; this also supports the reliability, stability, and high transmission rates required for 5G mobile communication. The algorithm builds on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate reliabilities, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable nodes is considered and a loop update detection algorithm is introduced. The bits of the erroneous code word are flipped repeatedly, searched in order of most likely error probability, until the correct code word is found. Simulation results show that one of the improved schemes outperforms the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB, respectively, at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963
Least Reliable Bits Coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Wagner, Paul; Budinger, James
1992-01-01
An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off with code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
A comparative study of nonparametric methods for pattern recognition
NASA Technical Reports Server (NTRS)
Hahn, S. F.; Nelson, G. D.
1972-01-01
The applied research discussed in this report determines and compares the correct classification percentage of the nonparametric sign test, Wilcoxon's signed rank test, and K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier which assumes the data to be Gaussian even though it may not be. The K-class classifier has the advantage over the Bayes in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment was always unimodal.
NASA Astrophysics Data System (ADS)
Manzhos, Sergei; Carrington, Tucker
2016-12-01
We demonstrate that it is possible to use basis functions that depend on curvilinear internal coordinates to compute vibrational energy levels without deriving a kinetic energy operator (KEO) and without numerically computing coefficients of a KEO. This is done by using a space-fixed KEO and computing KEO matrix elements numerically. Whenever one has an excellent basis, more accurate solutions to the Schrödinger equation can be obtained by computing the KEO, potential, and overlap matrix elements numerically. Using a Gaussian basis and bond coordinates, we compute vibrational energy levels of formaldehyde. We show, for the first time, that it is possible with a Gaussian basis to solve a six-dimensional vibrational Schrödinger equation. For the zero-point energy (ZPE) and the lowest 50 vibrational transitions of H2CO, we obtain a mean absolute error of less than 1 cm-1; with 200 000 collocation points and 40 000 basis functions, most errors are less than 0.4 cm-1.
Spatio-thermal depth correction of RGB-D sensors based on Gaussian processes in real-time
NASA Astrophysics Data System (ADS)
Heindl, Christoph; Pönitz, Thomas; Stübl, Gernot; Pichler, Andreas; Scharinger, Josef
2018-04-01
Commodity RGB-D sensors capture color images along with dense pixel-wise depth information in real-time. Typical RGB-D sensors are provided with a factory calibration and exhibit erratic depth readings due to coarse calibration values, ageing and thermal influence effects. This limits their applicability in computer vision and robotics. We propose a novel method to accurately calibrate depth considering spatial and thermal influences jointly. Our work is based on Gaussian Process Regression in a four dimensional Cartesian and thermal domain. We propose to leverage modern GPUs for dense depth map correction in real-time. For reproducibility we make our dataset and source code publicly available.
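The paper's kernel and feature choices are not given in the abstract; the sketch below illustrates the general idea with scikit-learn: regress the depth error on pixel position, raw depth, and sensor temperature, then subtract the predicted error from a new reading. All data and hyperparameters are synthetic assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # Synthetic calibration data: pixel coordinates, raw depth, sensor temperature,
    # with a made-up spatially and thermally varying depth error (all values assumed).
    X = np.column_stack([
        rng.uniform(0, 640, 500),    # pixel x
        rng.uniform(0, 480, 500),    # pixel y
        rng.uniform(0.5, 4.0, 500),  # raw depth [m]
        rng.uniform(25, 45, 500),    # sensor temperature [deg C]
    ])
    true_error = 0.01 * X[:, 2] ** 2 + 0.002 * (X[:, 3] - 30) + 0.002 * rng.standard_normal(500)

    # GP regression over the 4-D spatial + thermal domain; hyperparameters are fitted.
    kernel = RBF(length_scale=[100.0, 100.0, 1.0, 5.0]) + WhiteKernel(1e-4)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, true_error)

    # Correct a new depth reading by subtracting the predicted error.
    x_new = np.array([[320.0, 240.0, 2.0, 35.0]])
    corrected_depth = x_new[0, 2] - gp.predict(x_new)[0]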
An unbiased risk estimator for image denoising in the presence of mixed poisson-gaussian noise.
Le Montagner, Yoann; Angelini, Elsa D; Olivo-Marin, Jean-Christophe
2014-03-01
The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and are generally designed to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator in the case when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and different characteristics of the Poisson-Gaussian noise mixture.
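A small sketch of one common parameterization of the mixed Poisson-Gaussian model (gain-scaled shot noise plus additive Gaussian read noise); the parameterization and the numerical values are assumptions, not necessarily those used in the PG-URE paper.

    import numpy as np

    def add_mixed_poisson_gaussian(image, gain=2.0, sigma=5.0, rng=None):
        """Simulate mixed Poisson-Gaussian noise with an assumed parameterization:
        y = gain * Poisson(image / gain) + N(0, sigma^2)."""
        rng = np.random.default_rng(rng)
        shot = gain * rng.poisson(np.clip(image, 0, None) / gain)
        read = rng.normal(0.0, sigma, size=image.shape)
        return shot + read

    # Example: noisy version of a synthetic fluorescence-like spot.
    clean = 100.0 * np.exp(-((np.indices((64, 64)) - 32) ** 2).sum(0) / (2 * 8.0 ** 2))
    noisy = add_mixed_poisson_gaussian(clean, gain=2.0, sigma=5.0, rng=0)
    mse = np.mean((noisy - clean) ** 2)   # the quantity PG-URE aims to estimate without `clean`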
Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators
NASA Astrophysics Data System (ADS)
Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.
2018-03-01
We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction-based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m−1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
Subthreshold muscle twitches dissociate oscillatory neural signatures of conflicts from errors.
Cohen, Michael X; van Gaal, Simon
2014-02-01
We investigated the neural systems underlying conflict detection and error monitoring during rapid online error correction/monitoring mechanisms. We combined data from four separate cognitive tasks and 64 subjects in which EEG and EMG (muscle activity from the thumb used to respond) were recorded. In typical neuroscience experiments, behavioral responses are classified as "error" or "correct"; however, closer inspection of our data revealed that correct responses were often accompanied by "partial errors" - a muscle twitch of the incorrect hand ("mixed correct trials," ~13% of the trials). We found that these muscle twitches dissociated conflicts from errors in time-frequency domain analyses of EEG data. In particular, both mixed-correct trials and full error trials were associated with enhanced theta-band power (4-9Hz) compared to correct trials. However, full errors were additionally associated with power and frontal-parietal synchrony in the delta band. Single-trial robust multiple regression analyses revealed a significant modulation of theta power as a function of partial error correction time, thus linking trial-to-trial fluctuations in power to conflict. Furthermore, single-trial correlation analyses revealed a qualitative dissociation between conflict and error processing, such that mixed correct trials were associated with positive theta-RT correlations whereas full error trials were associated with negative delta-RT correlations. These findings shed new light on the local and global network mechanisms of conflict monitoring and error detection, and their relationship to online action adjustment. © 2013.
On the Bar Pattern Speed Determination of NGC 3367
NASA Astrophysics Data System (ADS)
Gabbasov, R. F.; Repetto, P.; Rosado, M.
2009-09-01
An important dynamical parameter of barred galaxies is the bar pattern speed, Ω_P. Among the several methods used for the determination of Ω_P, the Tremaine-Weinberg method has the advantage of model independence and accuracy. In this work, we apply the method to a simulated bar including gas dynamics and study the effect of two-dimensional spectroscopy data quality on the robustness of the method. We added white noise and a Gaussian random field to the data and measured the corresponding errors in Ω_P. We found that a signal-to-noise ratio of ~5 in surface density introduces errors of ~20% for the Gaussian noise, while for the white noise the corresponding errors reach ~50%. At the same time, the velocity field is less sensitive to contamination. On the basis of this study, we applied the method to the spiral galaxy NGC 3367 using Hα Fabry-Pérot interferometry data. We found Ω_P = 43 ± 6 km s^-1 kpc^-1 for this galaxy.
Performance metrics for the assessment of satellite data products: an ocean color case study
Seegers, Bridget N.; Stumpf, Richard P.; Schaeffer, Blake A.; Loftin, Keith A.; Werdell, P. Jeremy
2018-01-01
Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coefficient of determination (r2), root mean square error, and regression slopes, are most appropriate for Gaussian distributions without outliers and, therefore, are often not ideal for ocean color algorithm performance assessment, which is often limited by sample availability. In contrast, metrics based on simple deviations, such as bias and mean absolute error, as well as pair-wise comparisons, often provide more robust and straightforward quantities for evaluating ocean color algorithms with non-Gaussian distributions and outliers. This study uses a SeaWiFS chlorophyll-a validation data set to demonstrate a framework for satellite data product assessment and recommends a multi-metric and user-dependent approach that can be applied within science, modeling, and resource management communities. PMID:29609296
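A generic sketch of the kind of simple-deviation metrics discussed above, computed in log10 space as is common for chlorophyll-a; the exact metric definitions used in the study may differ, and the sample values are illustrative.

    import numpy as np

    def ocean_color_metrics(obs, sat):
        """Multiplicative bias and mean absolute error in log10 space, plus RMSE for
        comparison (a generic sketch of simple-deviation metrics)."""
        obs, sat = np.asarray(obs, float), np.asarray(sat, float)
        ls, lo = np.log10(sat), np.log10(obs)
        bias = 10 ** np.mean(ls - lo)            # multiplicative bias
        mae = 10 ** np.mean(np.abs(ls - lo))     # multiplicative mean absolute error
        rmse = np.sqrt(np.mean((sat - obs) ** 2))
        return bias, mae, rmse

    # Skewed, chlorophyll-like values (mg m^-3) with one outlier in the satellite data.
    obs = np.array([0.05, 0.1, 0.3, 1.0, 3.0])
    sat = np.array([0.06, 0.09, 0.4, 0.8, 9.0])
    print(ocean_color_metrics(obs, sat))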
A Model of Self-Monitoring Blood Glucose Measurement Error.
Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio
2017-07-01
A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error and zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology makes it possible to derive realistic models of the SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
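A minimal sketch of the per-zone maximum-likelihood fit described above, using SciPy's skew-normal distribution; the 75 mg/dL zone boundary and the synthetic data are assumptions, not values taken from the paper's databases.

    import numpy as np
    from scipy import stats

    def fit_zone_pdfs(reference_bg, smbg, boundary=75.0):
        """Zone 1 (low glucose, assumed boundary): constant-SD absolute error;
        zone 2 (high glucose): constant-SD relative error; each fitted by ML."""
        absolute_err = smbg - reference_bg
        relative_err = absolute_err / reference_bg
        zone1 = absolute_err[reference_bg <= boundary]
        zone2 = relative_err[reference_bg > boundary]
        params1 = stats.skewnorm.fit(zone1)   # (shape a, loc, scale)
        params2 = stats.skewnorm.fit(zone2)
        return params1, params2

    # Synthetic example data standing in for a real SMBG accuracy database.
    rng = np.random.default_rng(1)
    ref = rng.uniform(40, 400, 2000)
    meas = ref + np.where(ref <= 75, rng.normal(2, 6, 2000), ref * rng.normal(0.01, 0.05, 2000))
    p1, p2 = fit_zone_pdfs(ref, meas)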
Non-Gaussianities due to relativistic corrections to the observed galaxy bispectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dio, E. Di; Perrier, H.; Durrer, R.
2017-03-01
High-precision constraints on primordial non-Gaussianity (PNG) will significantly improve our understanding of the physics of the early universe. Among all the subtleties in using large scale structure observables to constrain PNG, accounting for relativistic corrections to the clustering statistics is particularly important for the upcoming galaxy surveys covering progressively larger fraction of the sky. We focus on relativistic projection effects due to the fact that we observe the galaxies through the light that reaches the telescope on perturbed geodesics. These projection effects can give rise to an effective f_NL that can be misinterpreted as the primordial non-Gaussianity signal and hence is a systematic to be carefully computed and accounted for in modelling of the bispectrum. We develop the technique to properly account for relativistic effects in terms of purely observable quantities, namely angles and redshifts. We give some examples by applying this approach to a subset of the contributions to the tree-level bispectrum of the observed galaxy number counts calculated within perturbation theory and estimate the corresponding non-Gaussianity parameter, f_NL, for the local, equilateral and orthogonal shapes. For the local shape, we also compute the local non-Gaussianity resulting from terms obtained using the consistency relation for observed number counts. Our goal here is not to give a precise estimate of f_NL for each shape but rather we aim to provide a scheme to compute the non-Gaussian contamination due to relativistic projection effects. For the terms considered in this work, we obtain contamination of f_NL^loc ∼ O(1).
Time-dependent phase error correction using digital waveform synthesis
Doerry, Armin W.; Buskirk, Stephen
2017-10-10
The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, the amplifier power-droop effect can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified and a corresponding complementary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time domain correction can be applied by a phase error correction look-up table incorporated into a waveform phase generator.
The Measurement and Correction of the Periodic Error of the LX200-16 Telescope Driving System
NASA Astrophysics Data System (ADS)
Jeong, Jang Hae; Lee, Young Sam; Lee, Chung Uk
2000-06-01
We examined and corrected the periodic error of the LX200-16 telescope driving system of the Chungbuk National University Campus Observatory. Before correction, the standard deviation of the periodic error in the east-west direction was 7.″2. After correction, we found that the periodic error was reduced to 1.″2.
Hochman, Eldad Yitzhak; Orr, Joseph M; Gehring, William J
2014-02-01
Cognitive control in the posterior medial frontal cortex (pMFC) is formulated in models that emphasize adaptive behavior driven by a computation evaluating the degree of difference between 2 conflicting responses. These functions are manifested by an event-related brain potential component coined the error-related negativity (ERN). We hypothesized that the ERN represents a regulative rather than evaluative pMFC process, exerted over the error motor representation, expediting the execution of a corrective response. We manipulated the motor representations of the error and the correct response to varying degrees. The ERN was greater when 1) the error response was more potent than when the correct response was more potent, 2) more errors were committed, 3) fewer and slower corrections were observed, and 4) the error response shared fewer motor features with the correct response. In their current forms, several prominent models of the pMFC cannot be reconciled with these findings. We suggest that a prepotent, unintended error is prone to reach the manual motor processor responsible for response execution before a nonpotent, intended correct response. In this case, the correct response is a correction and its execution must wait until the error is aborted. The ERN may reflect pMFC activity that aimed to suppress the error.
Correcting false memories: Errors must be noticed and replaced.
Mullet, Hillary G; Marsh, Elizabeth J
2016-04-01
Memory can be unreliable. For example, after reading The new baby stayed awake all night, people often misremember that the new baby cried all night (Brewer, 1977); similarly, after hearing bed, rest, and tired, people often falsely remember that sleep was on the list (Roediger & McDermott, 1995). In general, such false memories are difficult to correct, persisting despite warnings and additional study opportunities. We argue that errors must first be detected to be corrected; consistent with this argument, two experiments showed that false memories were nearly eliminated when conditions facilitated comparisons between participants' errors and corrective feedback (e.g., immediate trial-by-trial feedback that allowed direct comparisons between their responses and the correct information). However, knowledge that they had made an error was insufficient; unless the feedback message also contained the correct answer, the rate of false memories remained relatively constant. On the one hand, there is nothing special about correcting false memories: simply labeling an error as "wrong" is also insufficient for correcting other memory errors, including misremembered facts or mistranslations. However, unlike these other types of errors--which often benefit from the spacing afforded by delayed feedback--false memories require a special consideration: Learners may fail to notice their errors unless the correction conditions specifically highlight them.
Linear theory for filtering nonlinear multiscale systems with model error
Berry, Tyrus; Harlim, John
2014-01-01
In this paper, we study filtering of multiscale dynamical systems with model error arising from limitations in resolving the smaller scale processes. In particular, the analysis assumes the availability of continuous-time noisy observations of all components of the slow variables. Mathematically, this paper presents new results on higher order asymptotic expansion of the first two moments of a conditional measure. In particular, we are interested in the application of filtering multiscale problems in which the conditional distribution is defined over the slow variables, given noisy observation of the slow variables alone. From the mathematical analysis, we learn that for a continuous time linear model with Gaussian noise, there exists a unique choice of parameters in a linear reduced model for the slow variables which gives the optimal filtering when only the slow variables are observed. Moreover, these parameters simultaneously give the optimal equilibrium statistical estimates of the underlying system, and as a consequence they can be estimated offline from the equilibrium statistics of the true signal. By examining a nonlinear test model, we show that the linear theory extends in this non-Gaussian, nonlinear configuration as long as we know the optimal stochastic parametrization and the correct observation model. However, when the stochastic parametrization model is inappropriate, parameters chosen for good filter performance may give poor equilibrium statistical estimates and vice versa; this finding is based on analytical and numerical results on our nonlinear test model and the two-layer Lorenz-96 model. Finally, even when the correct stochastic ansatz is given, it is imperative to estimate the parameters simultaneously and to account for the nonlinear feedback of the stochastic parameters into the reduced filter estimates. In numerical experiments on the two-layer Lorenz-96 model, we find that the parameters estimated online, as part of a filtering procedure, simultaneously produce accurate filtering and equilibrium statistical prediction. In contrast, an offline estimation technique based on a linear regression, which fits the parameters to a training dataset without using the filter, yields filter estimates which are worse than the observations or even divergent when the slow variables are not fully observed. This finding does not imply that all offline methods are inherently inferior to the online method for nonlinear estimation problems; it only suggests that an ideal estimation technique should estimate all parameters simultaneously whether it is online or offline. PMID:25002829
Camera, Stefano; Santos, Mário G; Ferreira, Pedro G; Ferramacho, Luís
2013-10-25
The large-scale structure of the Universe supplies crucial information about the physical processes at play at early times. Unresolved maps of the intensity of 21 cm emission from neutral hydrogen (HI) at redshifts z ~ 1-5 are the best hope of accessing the ultralarge-scale information, directly related to the early Universe. A purpose-built HI intensity experiment may be used to detect the large-scale effects of primordial non-Gaussianity, placing stringent bounds on different models of inflation. We argue that it may be possible to place tight constraints on the non-Gaussianity parameter f_NL, with an error close to σ(f_NL) ~ 1.
An introduction of component fusion extend Kalman filtering method
NASA Astrophysics Data System (ADS)
Geng, Yue; Lei, Xusheng
2018-05-01
In this paper, the Component Fusion Extended Kalman Filtering (CFEKF) algorithm is proposed. Each component of the propagated error is assumed to be independent and Gaussian distributed. The CFEKF is obtained through maximum-likelihood estimation of the propagation error, which adjusts the state transition matrix and the measurement matrix adaptively. By minimizing the linearization error, the CFEKF can effectively improve the state estimation accuracy for nonlinear systems. The computational cost of the CFEKF is similar to that of the EKF, which makes it easy to apply.
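The CFEKF equations are not given in the abstract; for reference, here is a minimal sketch of the standard EKF predict/update cycle that it modifies (all symbols are generic placeholders, not the paper's notation).

    import numpy as np

    def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
        """One predict/update cycle of a standard extended Kalman filter.
        f, h are the nonlinear state-transition and measurement functions;
        F_jac, H_jac return their Jacobians at the current estimate."""
        # Predict
        x_pred = f(x)
        F = F_jac(x)
        P_pred = F @ P @ F.T + Q
        # Update
        H = H_jac(x_pred)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - h(x_pred))
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new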
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoiber, Eva Maria, E-mail: eva.stoiber@med.uni-heidelberg.de; Department of Medical Physics, German Cancer Research Center, Heidelberg; Giske, Kristina
Purpose: To evaluate local positioning errors of the lumbar spine during fractionated intensity-modulated radiotherapy of patients treated with craniospinal irradiation and to assess the impact of rotational error correction on these uncertainties for one patient setup correction strategy. Methods and Materials: 8 patients (6 adults, 2 children) treated with helical tomotherapy for craniospinal irradiation were retrospectively chosen for this analysis. Patients were immobilized with a deep-drawn Aquaplast head mask. In addition to daily megavoltage control computed tomography scans of the skull, once-a-week positioning of the lumbar spine was assessed. Therefore, patient setup was corrected by a target point correction derived from a registration of the patient's skull. The residual positioning variations of the lumbar spine were evaluated applying a rigid-registration algorithm. The impact of different rotational error corrections was simulated. Results: After target point correction, residual local positioning errors of the lumbar spine varied considerably. Craniocaudal axis rotational error correction did not improve or deteriorate these translational errors, whereas simulation of a rotational error correction of the right-left and anterior-posterior axes increased these errors by a factor of 2 to 3. Conclusion: The patient fixation used allows for deformations between the patient's skull and spine. Therefore, for the setup correction strategy evaluated in this study, generous margins for the lumbar spinal target volume are needed to prevent a local geographic miss. With any applied correction strategy, it needs to be evaluated whether or not a rotational error correction is beneficial.
Linear quadratic Gaussian and feedforward controllers for the DSS-13 antenna
NASA Technical Reports Server (NTRS)
Gawronski, W. K.; Racho, C. S.; Mellstrom, J. A.
1994-01-01
The controller development and the tracking performance evaluation for the DSS-13 antenna are presented. A trajectory preprocessor, linear quadratic Gaussian (LQG) controller, feedforward controller, and their combination were designed, built, analyzed, and tested. The antenna exhibits nonlinear behavior when the input to the antenna and/or the derivative of this input exceeds the imposed limits; for slewing and acquisition commands, these limits are typically violated. A trajectory preprocessor was designed to ensure that the antenna behaves linearly, just to prevent nonlinear limit cycling. The estimator model for the LQG controller was identified from the data obtained from the field test. Based on an LQG balanced representation, a reduced-order LQG controller was obtained. The feedforward controller and the combination of the LQG and feedforward controller were also investigated. The performance of the controllers was evaluated with the tracking errors (due to following a trajectory) and the disturbance errors (due to the disturbances acting on the antenna). The LQG controller has good disturbance rejection properties and satisfactory tracking errors. The feedforward controller has small tracking errors but poor disturbance rejection properties. The combined LQG and feedforward controller exhibits small tracking errors as well as good disturbance rejection properties. However, the cost for this performance is the complexity of the controller.
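The DSS-13 plant model and design weights are not given in the abstract; as a generic illustration of how an LQG design combines an LQR state-feedback gain with a Kalman estimator gain, the following sketch solves the two continuous-time algebraic Riccati equations with SciPy for an assumed two-state plant.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Illustrative 2-state plant (not the DSS-13 antenna model).
    A = np.array([[0.0, 1.0], [-2.0, -0.5]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    Q, R = np.eye(2), np.array([[0.1]])          # LQR weights (assumed)
    W, V = 0.01 * np.eye(2), np.array([[0.1]])   # process / measurement noise covariances

    # LQR state-feedback gain: u = -K x_hat
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)

    # Kalman estimator gain (dual Riccati problem): x_hat' = A x_hat + B u + L (y - C x_hat)
    S = solve_continuous_are(A.T, C.T, W, V)
    L = S @ C.T @ np.linalg.inv(V)

    print("LQR gain K:", K, "\nEstimator gain L:", L)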
Zaboikin, Michail; Freter, Carl
2018-01-01
We describe a method for measuring genome editing efficiency from in silico analysis of high-resolution melt curve data. The melt curve data derived from amplicons of genome-edited or unmodified target sites were processed to remove the background fluorescent signal emanating from free fluorophore and then corrected for temperature-dependent quenching of fluorescence of double-stranded DNA-bound fluorophore. Corrected data were normalized and numerically differentiated to obtain the first derivatives of the melt curves. These were then mathematically modeled as a superposition of a minimal number of Gaussian components. Using Gaussian parameters determined by modeling of melt curve derivatives of unedited samples, we were able to model melt curve derivatives from genetically altered target sites, where the mutant population could be accommodated using an additional Gaussian component. From this, the proportion contributed by the mutant component in the target region amplicon could be accurately determined. Mutant component computations compared well with the mutant frequency determination from next-generation sequencing data. The results were also consistent with our earlier studies that used difference curve areas from high-resolution melt curves for determining the efficiency of genome-editing reagents. The advantage of the described method is that it does not require calibration curves to estimate the proportion of mutants in amplicons of genome-edited target sites. PMID:29300734
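A minimal sketch of the Gaussian-superposition fit and the mutant-fraction estimate described above, on a synthetic melt-curve derivative; the peak positions, widths, and the two-component choice are illustrative assumptions, not values from the paper.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussians(t, *params):
        """Sum of Gaussian components; params = (amplitude, center, width) per component."""
        y = np.zeros_like(t, dtype=float)
        for a, c, w in zip(params[0::3], params[1::3], params[2::3]):
            y += a * np.exp(-0.5 * ((t - c) / w) ** 2)
        return y

    # Synthetic -dF/dT melt-curve derivative: wild-type component at 84 C plus a smaller
    # mutant component at 81 C (temperatures and amplitudes are illustrative only).
    t = np.linspace(70, 95, 500)
    data = gaussians(t, 1.0, 84.0, 0.8, 0.25, 81.0, 0.9) \
           + 0.01 * np.random.default_rng(0).normal(size=t.size)

    # Fit a two-component model; in practice p0 comes from the unedited-sample fit.
    p0 = [1.0, 84.0, 1.0, 0.2, 81.0, 1.0]
    popt, _ = curve_fit(gaussians, t, data, p0=p0)

    # Mutant fraction from the area of the extra component relative to the total.
    areas = np.array([a * w for a, w in zip(popt[0::3], popt[2::3])])  # area proportional to amp * width
    mutant_fraction = areas[1] / areas.sum()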
5 CFR 1601.34 - Error correction.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 3 2011-01-01 2011-01-01 false Error correction. 1601.34 Section 1601.34 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD PARTICIPANTS' CHOICES OF TSP FUNDS... in the wrong investment fund, will be corrected in accordance with the error correction regulations...
5 CFR 1601.34 - Error correction.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Error correction. 1601.34 Section 1601.34 Administrative Personnel FEDERAL RETIREMENT THRIFT INVESTMENT BOARD PARTICIPANTS' CHOICES OF TSP FUNDS... in the wrong investment fund, will be corrected in accordance with the error correction regulations...
A Bayesian Approach to Systematic Error Correction in Kepler Photometric Time Series
NASA Astrophysics Data System (ADS)
Jenkins, Jon Michael; VanCleve, J.; Twicken, J. D.; Smith, J. C.; Kepler Science Team
2011-01-01
In order for the Kepler mission to achieve its required 20 ppm photometric precision for 6.5 hr observations of 12th magnitude stars, the Presearch Data Conditioning (PDC) software component of the Kepler Science Processing Pipeline must reduce systematic errors in flux time series to the limit of stochastic noise for errors with time-scales less than three days, without smoothing or over-fitting away the transits that Kepler seeks. The current version of PDC co-trends against ancillary engineering data and Pipeline-generated data using essentially a least squares (LS) approach. This approach is successful for quiet stars when all sources of systematic error have been identified. If the stars are intrinsically variable or some sources of systematic error are unknown, LS will nonetheless attempt to explain all of a given time series, not just the part the model can explain well. Negative consequences can include loss of astrophysically interesting signal, and injection of high-frequency noise into the result. As a remedy, we present a Bayesian Maximum A Posteriori (MAP) approach, in which a subset of intrinsically quiet and highly-correlated stars is used to establish the probability density function (PDF) of robust fit parameters in a diagonalized basis. The PDFs then determine a "reasonable" range for the fit parameters for all stars, and brake the runaway fitting that can distort signals and inject noise. We present a closed-form solution for Gaussian PDFs, and show examples using publicly available Quarter 1 Kepler data. A companion poster (Van Cleve et al.) shows applications and discusses current work in more detail. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
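As a rough illustration of MAP co-trending with a Gaussian prior on the fit coefficients, the following generic closed-form sketch shows how the prior brakes the fit relative to plain least squares; it is not the Kepler PDC-MAP pipeline itself, and the toy basis vectors and prior values are assumptions.

    import numpy as np

    def map_cotrend(y, X, prior_mean, prior_cov, noise_var=1.0):
        """Closed-form MAP fit of co-trending coefficients under a Gaussian prior:
        w = argmin ||y - X w||^2 / noise_var + (w - mu)^T C^{-1} (w - mu)."""
        Cinv = np.linalg.inv(prior_cov)
        A = X.T @ X / noise_var + Cinv
        b = X.T @ y / noise_var + Cinv @ prior_mean
        w_map = np.linalg.solve(A, b)
        return w_map, y - X @ w_map   # coefficients and systematics-corrected flux

    # Toy example: two ancillary basis vectors, prior taken from an ensemble of quiet stars.
    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 2))
    prior_mean, prior_cov = np.array([0.5, -0.2]), 0.04 * np.eye(2)
    y = X @ np.array([0.6, -0.1]) + 0.05 * rng.normal(size=200)
    w, corrected = map_cotrend(y, X, prior_mean, prior_cov, noise_var=0.05 ** 2)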
Piñero, David P; Camps, Vicente J; Caravaca-Arens, Esteban; de Fez, Dolores; Blanes-Mompó, Francisco J
2017-01-01
To analyze the errors associated with corneal power calculation using the keratometric approach in keratoconus eyes after accelerated corneal collagen crosslinking (CXL) surgery, and to obtain a model for the estimation of an adjusted corneal refractive index (n_kadj) minimizing such errors. Potential differences (ΔP_c) between the keratometric (P_k) and Gaussian corneal power (P_cGauss) were simulated. Three algorithms based on the use of n_kadj for the estimation of an adjusted keratometric corneal power (P_kadj) were developed. The agreement between P_k(1.3375) (keratometric power using the keratometric index of 1.3375), P_cGauss, and P_kadj was evaluated. The validity of the algorithm developed was investigated in 21 keratoconus eyes undergoing accelerated CXL. P_k(1.3375) overestimated corneal power by between 0.3 and 3.2 D in theoretical simulations and between 0.8 and 2.9 D in the clinical study (ΔP_c). Three linear equations were defined for n_kadj to be used for different ranges of r_1c. In the clinical study, differences between P_kadj and P_cGauss did not exceed ±0.8 D, in contrast to the errors obtained with n_k = 1.3375. No statistically significant differences were found between P_kadj and P_cGauss (p > 0.05), whereas the differences between P_k(1.3375) and P_kadj were significant (p < 0.001). The use of the keratometric approach in keratoconus eyes after accelerated CXL can lead to significant clinical errors. These errors can be minimized with an adjusted keratometric approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Zongtang; Both, Johan; Li, Shenggang
The heats of formation and the normalized clustering energies (NCEs) for the group 4 and group 6 transition metal oxide (TMO) trimers and tetramers have been calculated by the Feller-Peterson-Dixon (FPD) method. The heats of formation predicted by the FPD method do not differ much from those previously derived from the NCEs at the CCSD(T)/aT level except for the CrO3 nanoclusters. New and improved heats of formation for Cr3O9 and Cr4O12 were obtained using PW91 orbitals instead of Hartree-Fock (HF) orbitals. Diffuse functions are necessary to predict accurate heats of formation. The fluoride affinities (FAs) are calculated with the CCSD(T) method. The relative energies (REs) of different isomers, NCEs, electron affinities (EAs), and FAs of (MO2)n (M = Ti, Zr, Hf; n = 1–4) and (MO3)n (M = Cr, Mo, W; n = 1–3) clusters have been benchmarked with 55 exchange-correlation DFT functionals including both pure and hybrid types. The absolute errors of the DFT results are mostly less than ±10 kcal/mol for the NCEs and the EAs, and less than ±15 kcal/mol for the FAs. Hybrid functionals usually perform better than the pure functionals for the REs and NCEs. The performance of the two types of functionals in predicting EAs and FAs is comparable. The B1B95 and PBE1PBE functionals provide reliable energetic properties for most isomers. Long range corrected pure functionals usually give poor FAs. The standard deviation of the absolute error is always close to the mean errors and the probability distributions of the DFT errors are often not Gaussian (normal). The breadth of the distribution of errors and the maximum probability are dependent on the energy property and the isomer.
Estimate of higher order ionospheric errors in GNSS positioning
NASA Astrophysics Data System (ADS)
Hoque, M. Mainul; Jakowski, N.
2008-10-01
Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher order ionospheric errors such as the second and third order ionospheric terms in the refractive index formula and errors due to bending of the signal. The total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimations and corrections of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas have been proposed to correct errors due to excess path length in addition to the free-space path length, the TEC difference at the two GNSS frequencies, and the third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected within millimeter-level accuracy using the proposed correction formulas.
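For context, the standard first-order relation and the dual-frequency (ionosphere-free) combination that the paper's higher-order corrections refine can be sketched as follows; the TEC and range values are illustrative only.

    import numpy as np

    F1, F2 = 1575.42e6, 1227.60e6          # GPS L1/L2 carrier frequencies [Hz]

    def first_order_delay(tec, freq):
        """First-order ionospheric group delay [m]; TEC in electrons/m^2 (40.3 in SI units)."""
        return 40.3 * tec / freq ** 2

    def ionosphere_free(p1, p2):
        """Standard dual-frequency (ionosphere-free) pseudorange combination; it removes
        only the first-order term, leaving the higher-order residuals the paper treats."""
        g = F1 ** 2 / (F1 ** 2 - F2 ** 2)
        return g * p1 - (g - 1.0) * p2

    tec = 30e16                             # 30 TECU, an illustrative daytime value
    rho = 22_000_000.0                      # illustrative geometric range [m]
    p1 = rho + first_order_delay(tec, F1)
    p2 = rho + first_order_delay(tec, F2)
    print(ionosphere_free(p1, p2) - rho)    # ~0: first-order error removed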
Automated error correction in IBM quantum computer and explicit generalization
NASA Astrophysics Data System (ADS)
Ghosh, Debjit; Agarwal, Pratik; Pandey, Pratyush; Behera, Bikash K.; Panigrahi, Prasanta K.
2018-06-01
Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error correction code and demonstrate the nondestructive discrimination of GHZ states on the IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain the experimental results with high fidelity. Finally, we generalize the investigated code to the maximally entangled n-qudit case, where it can both detect and automatically correct any arbitrary phase-change error, any phase-flip error, any bit-flip error, or any combination of these error types.
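The paper's automated scheme is built on GHZ-state discrimination; as a simpler, generic illustration of ancilla-based syndrome extraction with coherent (automatic) correction, the sketch below builds a three-qubit bit-flip repetition code in Qiskit (circuit construction only; the injected error on qubit 1 is purely for demonstration).

    from qiskit import QuantumCircuit

    # Qubits 0-2: code block; qubits 3-4: syndrome ancillas.
    qc = QuantumCircuit(5, 2)

    # Encode |psi> (here |+> as an example) into the bit-flip repetition code.
    qc.h(0)
    qc.cx(0, 1)
    qc.cx(0, 2)

    # Channel: a single bit-flip error on qubit 1 (for demonstration).
    qc.x(1)

    # Syndrome extraction: ancilla 3 records the Z0Z1 parity, ancilla 4 records Z1Z2.
    qc.cx(0, 3); qc.cx(1, 3)
    qc.cx(1, 4); qc.cx(2, 4)

    # Automatic correction, done coherently with Toffoli gates:
    # syndrome 11 -> flip qubit 1, 10 -> qubit 0, 01 -> qubit 2.
    qc.ccx(3, 4, 1)
    qc.x(4); qc.ccx(3, 4, 0); qc.x(4)
    qc.x(3); qc.ccx(3, 4, 2); qc.x(3)

    qc.measure([3, 4], [0, 1])   # read out the syndrome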
Error Correcting Optical Mapping Data.
Mukherjee, Kingshuk; Washimkar, Darshan; Muggli, Martin D; Salmela, Leena; Boucher, Christina
2018-05-26
Optical mapping is a unique system that is capable of producing high-resolution, high-throughput genomic map data that gives information about the structure of a genome [21]. Recently it has been used for scaffolding contigs and assembly validation for large-scale sequencing projects, including the maize [32], goat [6], and amborella [4] genomes. However, a major impediment in the use of this data is the variety and quantity of errors in the raw optical mapping data, which are called Rmaps. The challenges associated with using Rmap data are analogous to dealing with insertions and deletions in the alignment of long reads. Moreover, they are arguably harder to tackle since the data is numerical and susceptible to inaccuracy. We develop cOMET to error-correct Rmap data, which to the best of our knowledge is the only optical mapping error correction method. Our experimental results demonstrate that cOMET has high precision and corrects 82.49% of insertion errors and 77.38% of deletion errors in Rmap data generated from the E. coli K-12 reference genome. Out of the deletion errors corrected, 98.26% are true errors. Similarly, out of the insertion errors corrected, 82.19% are true errors. It also successfully scales to large genomes, improving the quality of 78% and 99% of the Rmaps in the plum and goat genomes, respectively. Lastly, we show the utility of error correction by demonstrating how it improves the assembly of Rmap data. Error-corrected Rmap data results in an assembly that is more contiguous and covers a larger fraction of the genome.
Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue
2018-01-01
One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently given the energy limitations of sensor nodes. Network coding increases the network throughput of a WSN dramatically owing to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Because of this error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated errors is urgently needed. Based on the social network characteristics inherent in WSN and L1 optimization, we propose a novel scheme that successfully corrects more than C/2 corrupted errors. What is more, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix that can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristics inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Using social network theory, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of social characteristics, complement each other and can correct propagated errors even when their fraction is exactly 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments. PMID:29401668
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heavens, Alan F.
2018-01-01
We investigate whether a Gaussian likelihood, as routinely assumed in the analysis of cosmological data, is supported by simulated survey data. We define test statistics, based on a novel method that first destroys Gaussian correlations in a data set, and then measures the non-Gaussian correlations that remain. This procedure flags pairs of data points that depend on each other in a non-Gaussian fashion, and thereby identifies where the assumption of a Gaussian likelihood breaks down. Using this diagnosis, we find that non-Gaussian correlations in the CFHTLenS cosmic shear correlation functions are significant. With a simple exclusion of the most contaminated data points, the posterior for σ8 is shifted without broadening, but we find no significant reduction in the tension with σ8 derived from Planck cosmic microwave background data. However, we also show that the one-point distributions of the correlation statistics are noticeably skewed, such that sound weak-lensing data sets are intrinsically likely to lead to a systematically low lensing amplitude being inferred. The detected non-Gaussianities get larger with increasing angular scale such that for future wide-angle surveys such as Euclid or LSST, with their very small statistical errors, the large-scale modes are expected to be increasingly affected. The shifts in posteriors may then not be negligible and we recommend that these diagnostic tests be run as part of future analyses.
On the cause of the non-Gaussian distribution of residuals in geomagnetism
NASA Astrophysics Data System (ADS)
Hulot, G.; Khokhlov, A.
2017-12-01
To describe errors in the data, Gaussian distributions naturally come to mind. In many practical instances, indeed, Gaussian distributions are appropriate. In the broad field of geomagnetism, however, it has repeatedly been noted that residuals between data and models often display much sharper distributions, sometimes better described by a Laplace distribution. In the present study, we make the case that such non-Gaussian behaviors are very likely the result of what is known as mixture of distributions in the statistical literature. Mixtures arise as soon as the data do not follow a common distribution or are not properly normalized, the resulting global distribution being a mix of the various distributions followed by subsets of the data, or even individual datum. We provide examples of the way such mixtures can lead to distributions that are much sharper than Gaussian distributions and discuss the reasons why such mixtures are likely the cause of the non-Gaussian distributions observed in geomagnetism. We also show that when properly selecting sub-datasets based on geophysical criteria, statistical mixture can sometimes be avoided and much more Gaussian behaviors recovered. We conclude with some general recommendations and point out that although statistical mixture always tends to sharpen the resulting distribution, it does not necessarily lead to a Laplacian distribution. This needs to be taken into account when dealing with such non-Gaussian distributions.
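A small numerical illustration (not from the paper) of the mixture effect described above: Gaussians whose standard deviation varies from datum to datum combine into a sharper-peaked, heavier-tailed distribution than a single Gaussian of the same total variance.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 200_000

    # "Mixture" residuals: each datum is zero-mean Gaussian, but the standard deviation
    # varies from datum to datum (e.g. different surveys or normalizations).
    sigmas = rng.choice([0.5, 1.0, 3.0], size=n, p=[0.5, 0.3, 0.2])
    mixture = rng.normal(0.0, sigmas)

    # A single Gaussian with the same total variance, for comparison.
    gaussian = rng.normal(0.0, mixture.std(), size=n)

    print("excess kurtosis, mixture :", stats.kurtosis(mixture))   # > 0: sharp peak, fat tails
    print("excess kurtosis, gaussian:", stats.kurtosis(gaussian))  # ~ 0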
Rational-operator-based depth-from-defocus approach to scene reconstruction.
Li, Ang; Staunton, Richard; Tjahjadi, Tardi
2013-09-01
This paper presents a rational-operator-based approach to depth from defocus (DfD) for the reconstruction of three-dimensional scenes from two-dimensional images, which enables fast DfD computation that is independent of scene textures. Two variants of the approach, one using the Gaussian rational operators (ROs) that are based on the Gaussian point spread function (PSF) and the second based on the generalized Gaussian PSF, are considered. A novel DfD correction method is also presented to further improve the performance of the approach. Experimental results are considered for real scenes and show that both approaches outperform existing RO-based methods.
Aberration analysis and calculation in system of Gaussian beam illuminates lenslet array
NASA Astrophysics Data System (ADS)
Zhao, Zhu; Hui, Mei; Zhou, Ping; Su, Tianquan; Feng, Yun; Zhao, Yuejin
2014-09-01
A low-order aberration was found when a focused Gaussian beam was imaged on a Kodak KAI-16000 image detector, which is integrated with a lenslet array. The effect of the focused Gaussian beam and a numerical simulation of the aberration are presented in this paper. First, we set up a model of the optical imaging system based on a previous experiment: a focused Gaussian beam passed through a pinhole and was received by the Kodak KAI-16000 image detector, whose lenslet-array microlenses are focused exactly on the sensor surface. Then, we describe the characteristics of the focused Gaussian beam and how the relative position of the Gaussian beam waist and the front spherical surface of the microlenses affects the aberration. Finally, we analyze the main contribution to the low-order aberration and calculate the spherical aberration caused by the lenslet array based on the results of the two previous steps. Our theoretical calculations show that the numerical simulation is in good agreement with the experimental result. Our results demonstrate that spherical aberration is the dominant contribution, accounting for about 93.44% of the 48 nm error reported in the previous experiment. The spherical aberration is inversely proportional to the divergence distance between the microlens and the beam waist, and directly proportional to the Gaussian beam waist radius.
A median-Gaussian filtering framework for Moiré pattern noise removal from X-ray microscopy image.
Wei, Zhouping; Wang, Jian; Nichol, Helen; Wiebe, Sheldon; Chapman, Dean
2012-02-01
Moiré pattern noise in Scanning Transmission X-ray Microscopy (STXM) imaging introduces significant errors in qualitative and quantitative image analysis. Due to the complex origin of the noise, it is difficult to avoid Moiré pattern noise during the image data acquisition stage. In this paper, we introduce a post-processing method for filtering Moiré pattern noise from STXM images. The method includes semi-automatic detection of the spectral peaks in the Fourier amplitude spectrum using a local median filter, and elimination of the spectral noise peaks using a Gaussian notch filter. The proposed median-Gaussian filtering framework gives good results for STXM images whose dimensions are powers of two, provided that parameters such as the threshold, the sizes of the median and Gaussian filters, and the size of the low-frequency window are properly selected. Copyright © 2011 Elsevier Ltd. All rights reserved.
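A rough sketch of the two-stage idea: detect Fourier-amplitude outliers with a local median filter, then attenuate them with Gaussian notches. The threshold, filter sizes, and low-frequency guard window below are illustrative assumptions, not the values used in the paper.

    import numpy as np
    from scipy.ndimage import median_filter

    def remove_moire(image, med_size=15, thresh=4.0, notch_sigma=3.0, guard=20):
        """Suppress Moire-like spectral peaks: flag Fourier-amplitude outliers relative
        to a local median, then attenuate them with Gaussian notch filters."""
        F = np.fft.fftshift(np.fft.fft2(image))
        amp = np.abs(F)
        background = median_filter(amp, size=med_size)
        peaks = amp > thresh * (background + 1e-12)

        # Protect the low-frequency window around the DC term.
        cy, cx = np.array(image.shape) // 2
        yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
        peaks &= (np.abs(yy - cy) > guard) | (np.abs(xx - cx) > guard)

        # Multiplicative mask: a Gaussian notch centred on every detected peak.
        mask = np.ones_like(amp)
        for py, px in zip(*np.nonzero(peaks)):
            notch = 1.0 - np.exp(-((yy - py) ** 2 + (xx - px) ** 2) / (2 * notch_sigma ** 2))
            mask *= notch
        return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))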
A new method for the identification of non-Gaussian line profiles in elliptical galaxies
NASA Technical Reports Server (NTRS)
Van Der Marel, Roeland P.; Franx, Marijn
1993-01-01
A new parameterization for the line profiles of elliptical galaxies, the Gauss-Hermite series, is proposed. This approach expands the line profile as a sum of orthogonal functions which minimizes the correlations between the errors in the parameters of the fit. The method also makes use of the fact that Gaussians provide good low-order fits to observed line profiles. The method yields measurements of the line strength, mean radial velocity, and velocity dispersion, as well as two extra parameters, h3 and h4, that measure asymmetric and symmetric deviations of the line profiles from a Gaussian, respectively. The new method was used to derive profiles for three elliptical galaxies, which all have asymmetric line profiles on the major axis with symmetric deviations from a Gaussian. The results confirm that elliptical galaxies have complex structures due to their complex formation history.
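For reference, one common convention for this expansion can be written as below (the exact normalization used by the authors may differ); gamma is the line strength, V the mean velocity, sigma the dispersion, and h3, h4 the asymmetric and symmetric deviation coefficients.

    % Gauss-Hermite expansion of a line-of-sight velocity profile (one common convention)
    \mathcal{L}(v) \propto \frac{\gamma}{\sigma}\, e^{-w^{2}/2}
        \left[ 1 + h_{3} H_{3}(w) + h_{4} H_{4}(w) \right],
        \qquad w = \frac{v - V}{\sigma},
    \quad
    H_{3}(w) = \tfrac{1}{\sqrt{6}}\bigl(2\sqrt{2}\,w^{3} - 3\sqrt{2}\,w\bigr),
    \quad
    H_{4}(w) = \tfrac{1}{\sqrt{24}}\bigl(4 w^{4} - 12 w^{2} + 3\bigr).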
Novel transform for image description and compression with implementation by neural architectures
NASA Astrophysics Data System (ADS)
Ben-Arie, Jezekiel; Rao, Raghunath K.
1991-10-01
A general method for signal representation using nonorthogonal basis functions composed of Gaussians is described. The Gaussians can be combined into groups with a predetermined configuration that can approximate any desired basis function. The same configuration at different scales forms a set of self-similar wavelets. The general scheme is demonstrated by representing a natural signal with an arbitrary basis function. The basic methodology is demonstrated by two novel schemes for efficient representation of 1-D and 2-D signals using Gaussian basis functions (BFs). Special methods are required here since the Gaussian functions are nonorthogonal. The first method employs a paradigm of maximum energy reduction interlaced with the A* heuristic search. The second method uses an adaptive lattice system to find the minimum-squared-error projection of the BFs onto the signal, and a lateral-vertical suppression network to select the most efficient representation in terms of data compression.
ERIC Educational Resources Information Center
Waugh, Rebecca E.
2010-01-01
Simultaneous prompting is an errorless learning strategy designed to reduce the number of errors students make; however, research has shown a disparity in the number of errors students make during instructional versus probe trials. This study directly examined the effects of error correction versus no error correction during probe trials on the…
ERIC Educational Resources Information Center
Waugh, Rebecca E.; Alberto, Paul A.; Fredrick, Laura D.
2011-01-01
Simultaneous prompting is an errorless learning strategy designed to reduce the number of errors students make; however, research has shown a disparity in the number of errors students make during instructional versus probe trials. This study directly examined the effects of error correction versus no error correction during probe trials on the…
Cosmological information in Gaussianized weak lensing signals
NASA Astrophysics Data System (ADS)
Joachimi, B.; Taylor, A. N.; Kiessling, A.
2011-11-01
Gaussianizing the one-point distribution of the weak gravitational lensing convergence has recently been shown to increase the signal-to-noise ratio contained in two-point statistics. We investigate the information on cosmology that can be extracted from the transformed convergence fields. Employing Box-Cox transformations to determine optimal transformations to Gaussianity, we develop analytical models for the transformed power spectrum, including effects of noise and smoothing. We find that optimized Box-Cox transformations perform substantially better than an offset logarithmic transformation in Gaussianizing the convergence, but both yield very similar results for the signal-to-noise ratio. None of the transformations is capable of eliminating correlations of the power spectra between different angular frequencies, which we demonstrate to have a significant impact on the errors in cosmology. Analytic models of the Gaussianized power spectrum yield good fits to the simulations and produce unbiased parameter estimates in the majority of cases, where the exceptions can be traced back to the limitations in modelling the higher order correlations of the original convergence. In the ideal case, without galaxy shape noise, we find an increase in the cumulative signal-to-noise ratio by a factor of 2.6 for angular frequencies up to ℓ= 1500, and a decrease in the area of the confidence region in the Ωm-σ8 plane, measured in terms of q-values, by a factor of 4.4 for the best performing transformation. When adding a realistic level of shape noise, all transformations perform poorly with little decorrelation of angular frequencies, a maximum increase in signal-to-noise ratio of 34 per cent, and even slightly degraded errors on cosmological parameters. We argue that to find Gaussianizing transformations of practical use, it will be necessary to go beyond transformations of the one-point distribution of the convergence, extend the analysis deeper into the non-linear regime and resort to an exploration of parameter space via simulations.
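A toy sketch of the one-point Gaussianization step: because the convergence can be negative, an offset is applied before the Box-Cox transform. The simulated field and offset are illustrative, and scipy's maximum-likelihood choice of the Box-Cox lambda is only a stand-in for the optimization described in the paper.

```python
# One-point Gaussianization of a skewed (lognormal-like) toy convergence field.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
kappa = rng.lognormal(mean=-0.5, sigma=0.5, size=256 * 256) - 0.6   # toy skewed field
offset = 1e-3 - kappa.min()                  # Box-Cox needs strictly positive input
kappa_bc, lam = stats.boxcox(kappa + offset)
print("Box-Cox lambda:", lam,
      "skewness before/after:", stats.skew(kappa), stats.skew(kappa_bc))
```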
Some Modified Integrated Squared Error Procedures for Multivariate Normal Data.
1982-06-01
p-dimensional Gaussian. There are a number of measures of qualitative robustness, but the most important is the influence function. Most of the other measures are derived from the influence function. The influence function is simply proportional to the score function (Huber, 1981, p. 45). The influence function at the p-variate Gaussian distribution N_p(mu, V) is proportional to (1 + c)(p + 2)(x − mu) exp(−(1/2)(x − mu)^T V^−1 (x − mu)) (3.6).
Comparison of Gaussian and non-Gaussian Atmospheric Profile Retrievals from Satellite Microwave Data
NASA Astrophysics Data System (ADS)
Kliewer, A.; Forsythe, J. M.; Fletcher, S. J.; Jones, A. S.
2017-12-01
The Cooperative Institute for Research in the Atmosphere at Colorado State University has recently developed two different versions of a mixed-distribution (lognormal combined with Gaussian) microwave temperature and mixing ratio retrieval system, in addition to the original Gaussian-based approach. These retrieval systems are based upon 1DVAR theory but have been adapted to use different descriptive statistics of the lognormal distribution to minimize the background errors. The input radiance data are from the AMSU-A and MHS instruments on the NOAA series of spacecraft. To help illustrate how the three retrievals are affected by the change in distribution, we are in the process of creating a new website to show the output from the different retrievals. Here we present initial results from different dynamical situations to show how the tool could be used by forecasters as well as by educators. Because the new retrieved values come from a non-Gaussian-based 1DVAR, they display non-Gaussian behaviors that must pass a quality-control measure consistent with this distribution; these new measures are presented here along with initial results for checking the retrievals.
The impact of non-Gaussianity upon cosmological forecasts
NASA Astrophysics Data System (ADS)
Repp, A.; Szapudi, I.; Carron, J.; Wolk, M.
2015-12-01
The primary science driver for 3D galaxy surveys is their potential to constrain cosmological parameters. Forecasts of these surveys' effectiveness typically assume Gaussian statistics for the underlying matter density, despite the fact that the actual distribution is decidedly non-Gaussian. To quantify the effect of this assumption, we employ an analytic expression for the power spectrum covariance matrix to calculate the Fisher information for Baryon Acoustic Oscillation (BAO)-type model surveys. We find that for typical number densities, at k_max = 0.5 h Mpc^-1, Gaussian assumptions significantly overestimate the information on all parameters considered, in some cases by up to an order of magnitude. However, after marginalizing over a six-parameter set, the form of the covariance matrix (dictated by N-body simulations) causes the majority of the effect to shift to the 'amplitude-like' parameters, leaving the others virtually unaffected. We find that Gaussian assumptions at such wavenumbers can underestimate the dark energy parameter errors by well over 50 per cent, producing dark energy figures of merit almost three times too large. Thus, for 3D galaxy surveys probing the non-linear regime, proper consideration of non-Gaussian effects is essential.
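A schematic of the mechanism at work: the same power-spectrum derivatives combined with a diagonal (Gaussian) covariance versus a covariance carrying off-diagonal mode-coupling terms yield different Fisher forecasts. All numbers below are placeholders, not the paper's analytic covariance model.

```python
# Fisher-information comparison, Gaussian vs. mode-coupled covariance (toy numbers).
import numpy as np

nk, npar = 50, 2
k = np.linspace(0.01, 0.5, nk)
dP = np.stack([np.ones(nk), np.log(k)], axis=1)                 # dP/dtheta_i, illustrative
var = (0.1 * (1 + k)) ** 2                                       # Gaussian (diagonal) variances
C_gauss = np.diag(var)
C_nong = C_gauss + 0.5 * np.outer(np.sqrt(var), np.sqrt(var))    # added mode coupling

def fisher(dP, C):
    return dP.T @ np.linalg.inv(C) @ dP

for name, C in [("Gaussian", C_gauss), ("non-Gaussian", C_nong)]:
    F = fisher(dP, C)
    print(name, "marginalized errors:", np.sqrt(np.diag(np.linalg.inv(F))))
```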
Processor register error correction management
Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.
2016-12-27
Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
The calculation of average error probability in a digital fibre optical communication system
NASA Astrophysics Data System (ADS)
Rugemalira, R. A. M.
1980-03-01
This paper deals with the problem of determining the average error probability in a digital fibre optical communication system in the presence of message-dependent inhomogeneous non-stationary shot noise, additive Gaussian noise, and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared: the Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.
Error Characterization of Flight Trajectories Reconstructed Using Structure from Motion
2015-03-27
adjustment using IMU rotation information, the accuracy of the yaw, pitch and roll is limited and numerical errors can be as high as 1e-4 depending on ... due to either zero-mean Gaussian noise and/or bias in the IMU-measured yaw, pitch and roll angles. It is possible that when errors in these ... requires both the information on how the camera is mounted to the IMU/aircraft and the measured yaw, pitch and roll at the time of the first image
NASA Astrophysics Data System (ADS)
Xiong, B.; Oude Elberink, S.; Vosselman, G.
2014-07-01
In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.
Weak constrained localized ensemble transform Kalman filter for radar data assimilation
NASA Astrophysics Data System (ADS)
Janjic, Tijana; Lange, Heiner
2015-04-01
Applications on convective scales require data assimilation with a numerical model at single-digit horizontal resolution in kilometres and with time-evolving error covariances. The ensemble Kalman filter (EnKF) algorithm incorporates these two requirements. However, some challenges for convective-scale applications remain unresolved when using the EnKF approach. These include the need on the convective scale to estimate fields that are nonnegative (such as rain, graupel and snow) and to use data sets, such as radar reflectivity or cloud products, that have the same property. What underlies these examples are errors that are non-Gaussian in nature, causing a problem for the EnKF, which uses Gaussian error assumptions to produce the estimates from the previous forecast and the incoming data. Since proper estimates of hydrometeors are crucial for prediction on convective scales, the question arises whether the EnKF method can be modified to improve these estimates and whether there is a way of optimizing the use of radar observations to initialize NWP models, given the importance of this data set for the prediction of convective storms. In order to deal with non-Gaussian errors, different approaches can be taken in the EnKF framework. For example, variables can be transformed by assuming that the relevant state variables follow an appropriate pre-specified non-Gaussian distribution, such as the lognormal or truncated Gaussian distribution, or, more generally, by carrying out a parameterized change of state variables known as Gaussian anamorphosis. In recent work by Janjic et al. 2014, it was shown on a simple example how conservation of mass can be beneficial for the assimilation of positive variables. The method developed in that paper outperformed the EnKF as well as the EnKF with the lognormal change of variables. As argued in the paper, the reason for this is that each of these methods preserves mass (EnKF) or positivity (lognormal EnKF) but not both. Only once both positivity and mass were preserved in a new algorithm were good estimates of the fields obtained. The alternative to the strong-constraint formulation of Janjic et al. 2014 is to modify the LETKF algorithm to take physical properties into account only approximately. In this work we include weak constraints in the LETKF algorithm for the estimation of hydrometeors. The benefit for prediction is illustrated in an idealized setup (Lange and Craig, 2013). This setup uses the non-hydrostatic COSMO model with a 2 km horizontal resolution, and the LETKF as implemented in the KENDA (Km-scale Ensemble Data Assimilation) system of the German Weather Service (Reich et al. 2011). Due to the Gaussian assumptions that underlie the LETKF algorithm, the analyses of water species can become negative at some grid points of the COSMO model. These values are currently set to zero in KENDA after the LETKF analysis step. Tests done within this setup show that such a procedure introduces a bias in the analysis ensemble with respect to the truth, which increases in time due to the cycled data assimilation. The benefits of including the constraints in the LETKF are illustrated by the bias values during assimilation and by the prediction.
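To illustrate the Gaussian anamorphosis mentioned above for nonnegative fields, the sketch below transforms an ensemble to log space, applies a Gaussian-assumption update there, and transforms back, which guarantees a positive analysis. The "update" is a placeholder, not the LETKF of this study.

```python
# Gaussian anamorphosis sketch for a nonnegative field (e.g. rain water content).
import numpy as np

def anamorphosis_update(ensemble, update_fn, eps=1e-6):
    z = np.log(ensemble + eps)          # forward transform (lognormal -> roughly Gaussian)
    z_analysis = update_fn(z)           # Gaussian-based update acts in z-space
    return np.exp(z_analysis) - eps     # back-transform; positivity is preserved

rng = np.random.default_rng(1)
prior = rng.lognormal(mean=0.0, sigma=1.0, size=(40, 100))    # 40 members, 100 grid points
toy_update = lambda z: z - 0.1 * (z - z.mean(axis=0))         # placeholder shrink toward mean
analysis = anamorphosis_update(prior, toy_update)
assert (analysis >= 0).all()
```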
Parametric Studies Of Lightweight Reflectors Supported On Linear Actuator Arrays
NASA Astrophysics Data System (ADS)
Seibert, George E.
1987-10-01
This paper presents the results of numerous design studies carried out at Perkin-Elmer in support of the design of large diameter controllable mirrors for use in laser beam control, surveillance, and astronomy programs. The results include relationships between actuator location and spacing and the associated degree of correctability attainable for a variety of faceplate configurations subjected to typical disturbance environments. Normalizations and design curves obtained from closed-form equations based on thin shallow shell theory and computer based finite-element analyses are presented for use in preliminary design estimates of actuator count, faceplate structural properties, system performance prediction and weight assessments. The results of the analyses were obtained from a very wide range of mirror configurations, including both continuous and segmented mirror geometries. Typically, the designs consisted of a thin facesheet controlled by point force actuators which in turn were mounted on a structurally efficient base panel, or "reaction structure". The faceplate materials considered were fused silica, ULE fused silica, Zerodur, aluminum and beryllium. Thin solid faceplates as well as rib-reinforced cross-sections were treated, with a wide variation in thickness and/or rib patterns. The magnitude and spatial frequency distribution of the residual or uncorrected errors were related to the input error functions for mirrors of many different diameters and focal ratios. The error functions include simple sphere-to-sphere corrections, "parabolization" of spheres, and higher spatial frequency input error maps ranging from 0.5 to 7.5 cycles per diameter. The parameter which dominates all of the results obtained to date is a structural descriptor of thin shell behavior called the characteristic length. This parameter is a function of the shell's radius of curvature, thickness, and Poisson's ratio of the material used. The value of this constant, in itself, describes the extent to which the deflection under a point force is localized by the shell's curvature. The deflection shape is typically a near-Gaussian "bump" with a zero-crossing at a local radius of approximately 3.5 characteristic lengths. The amplitude is a function of the shell's elastic modulus, radius, and thickness, and is linearly proportional to the applied force. This basic shell behavior is well treated in an excellent set of papers by Eric Reissner entitled "Stresses and Small Displacements of Shallow Spherical Shells". Building on the insight offered by these papers, we developed our design tools around two derived parameters, the ratio of the mirror's diameter to its characteristic length (D/l), and the ratio of the actuator spacing to the characteristic length (b/l). The D/l ratio determines the "finiteness" of the shell, or its dependence on edge boundary conditions. For D/l values greater than 10, the influence of edges is almost totally absent on interior behavior. The b/l ratio, the basis of all our normalizations, is the most universal term in the description of correctability, or the ratio of residual to input errors. The data presented in the paper show that the rms residual error divided by the peak amplitude of the input error function is related to the actuator-spacing-to-characteristic-length ratio by the following expression: RMS Residual Error / Initial Error Amplitude = k (b/l)^3.5 (1). The value of k ranges from approximately 0.001 for low spatial frequency initial errors up to 0.05 for higher error frequencies (e.g. 5 cycles/diameter). The studies also yielded insight into the forces required to produce typical corrections at both the center and edges of the mirror panels. Additionally, the data lends itself to rapid evaluation of the effects of trading faceplate weight for increased actuator count,
Jones, Kevin C; Seghal, Chandra M; Avery, Stephen
2016-03-21
The unique dose deposition of proton beams generates a distinctive thermoacoustic (protoacoustic) signal, which can be used to calculate the proton range. To identify the expected protoacoustic amplitude, frequency, and arrival time for different proton pulse characteristics encountered at hospital-based proton sources, the protoacoustic pressure emissions generated by 150 MeV, pencil-beam proton pulses were simulated in a homogeneous water medium. Proton pulses with Gaussian widths ranging up to 200 μs were considered. The protoacoustic amplitude, frequency, and time-of-flight (TOF) range accuracy were assessed. For TOF calculations, the acoustic pulse arrival time was determined based on multiple features of the wave. Based on the simulations, Gaussian proton pulses can be categorized as Dirac-delta-function-like (FWHM < 4 μs) and longer. For the δ-function-like irradiation, the protoacoustic spectrum peaks at 44.5 kHz and the systematic error in determining the Bragg peak range is <2.6 mm. For longer proton pulses, the spectrum shifts to lower frequencies, and the range calculation systematic error increases (⩽ 23 mm for FWHM of 56 μs). By mapping the protoacoustic peak arrival time to range with simulations, the residual error can be reduced. Using a proton pulse with FWHM = 2 μs results in a maximum signal-to-noise ratio per total dose. Simulations predict that a 300 nA, 150 MeV, FWHM = 4 μs Gaussian proton pulse (8.0 × 10^6 protons, 3.1 cGy dose at the Bragg peak) will generate a 146 mPa pressure wave at 5 cm beyond the Bragg peak. There is an angle-dependent systematic error in the protoacoustic TOF range calculations. Placing detectors along the proton beam axis and beyond the Bragg peak minimizes this error. For clinical proton beams, protoacoustic detectors should be sensitive to <400 kHz (for -20 dB). Hospital-based synchrocyclotrons and cyclotrons are promising sources of proton pulses for generating clinically measurable protoacoustic emissions.
A Gaussian method to improve work-of-breathing calculations.
Petrini, M F; Evans, J N; Wall, M A; Norman, J R
1995-01-01
The work of breathing is a calculated index of pulmonary function in ventilated patients that may be useful in deciding when to wean and when to extubate. However, the accuracy of the calculated work of breathing of the patient (WOBp) can suffer from artifacts introduced by coughing, swallowing, and other non-breathing maneuvers. The WOBp in this case will include not only the usual work of inspiration, but also the work of performing these non-breathing maneuvers. The authors developed a method to objectively eliminate the calculated work of these movements from the work of breathing, based on fitting a Gaussian curve to the variable P, which is obtained from the difference between the esophageal pressure change and the airway pressure change during each breath. In spontaneously breathing adults the normal breaths fit the Gaussian curve, while breaths that contain non-breathing maneuvers do not. In this Gaussian breath-elimination method (GM), breaths that lie more than two standard deviations from the mean obtained by the fit are eliminated. For normally breathing control adult subjects, GM had little effect on WOBp, reducing it from 0.49 to 0.47 J/L (n = 8), while there was a 40% reduction in the coefficient of variation. Non-breathing maneuvers were simulated by coughing, which increased WOBp to 0.88 J/L (n = 6); with the GM correction, WOBp was 0.50 J/L, a value not significantly different from that of normal breathing. Occlusion also increased WOBp to 0.60 J/L, but GM-corrected WOBp was 0.51 J/L, a normal value. As predicted, doubling the respiratory rate did not change the WOBp before or after the GM correction. (ABSTRACT TRUNCATED AT 250 WORDS)
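A minimal sketch of the two-standard-deviation exclusion idea: the Gaussian fit to P is represented here simply by its sample mean and standard deviation, and breaths outside two standard deviations are dropped before averaging the work of breathing. The data are synthetic.

```python
# Gaussian breath-elimination (GM) sketch on synthetic per-breath data.
import numpy as np

rng = np.random.default_rng(2)
wob = rng.normal(0.49, 0.05, size=60)               # per-breath WOB (J/L), synthetic
P = rng.normal(0.0, 1.0, size=60)                   # per-breath pressure variable, synthetic
P[:5] += 6.0; wob[:5] += 0.4                        # simulated coughs / artifacts

mu, sd = P.mean(), P.std(ddof=1)                    # "Gaussian fit" to P
keep = np.abs(P - mu) < 2 * sd                      # exclude non-breathing maneuvers
print("WOBp before GM: %.2f J/L, after GM: %.2f J/L" % (wob.mean(), wob[keep].mean()))
```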
Bijsterbosch, Janine D; Lee, Kwang-Hyuk; Hunter, Michael D; Tsoi, Daniel T; Lankappa, Sudheer; Wilkinson, Iain D; Barker, Anthony T; Woodruff, Peter W R
2011-05-01
Our ability to interact physically with objects in the external world critically depends on temporal coupling between perception and movement (sensorimotor timing) and swift behavioral adjustment to changes in the environment (error correction). In this study, we investigated the neural correlates of the correction of subliminal and supraliminal phase shifts during a sensorimotor synchronization task. In particular, we focused on the role of the cerebellum because this structure has been shown to play a role in both motor timing and error correction. Experiment 1 used fMRI to show that the right cerebellar dentate nucleus and primary motor and sensory cortices were activated during regular timing and during the correction of subliminal errors. The correction of supraliminal phase shifts led to additional activations in the left cerebellum and right inferior parietal and frontal areas. Furthermore, a psychophysiological interaction analysis revealed that supraliminal error correction was associated with enhanced connectivity of the left cerebellum with frontal, auditory, and sensory cortices and with the right cerebellum. Experiment 2 showed that suppression of the left but not the right cerebellum with theta burst TMS significantly affected supraliminal error correction. These findings provide evidence that the left lateral cerebellum is essential for supraliminal error correction during sensorimotor synchronization.
Error Detection/Correction in Collaborative Writing
ERIC Educational Resources Information Center
Pilotti, Maura; Chodorow, Martin
2009-01-01
In the present study, we examined error detection/correction during collaborative writing. Subjects were asked to identify and correct errors in two contexts: a passage written by the subject (familiar text) and a passage written by a person other than the subject (unfamiliar text). A computer program inserted errors in function words prior to the…
Joint Schemes for Physical Layer Security and Error Correction
ERIC Educational Resources Information Center
Adamo, Oluwayomi
2011-01-01
The major challenges facing resource constraint wireless devices are error resilience, security and speed. Three joint schemes are presented in this research which could be broadly divided into error correction based and cipher based. The error correction based ciphers take advantage of the properties of LDPC codes and Nordstrom Robinson code. A…
Error correcting coding-theory for structured light illumination systems
NASA Astrophysics Data System (ADS)
Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben
2017-06-01
Intensity-discrete structured light illumination systems project a series of projection patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error-correcting code is advantageous in many ways: it reduces the effect of random intensity noise, it corrects outliers near the border of the fringe commonly present when using intensity-discrete patterns, and it provides robustness in the case of severe measurement errors (even for burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in safety-critical applications, e.g. the monitoring of deformations of components in nuclear power plants, where high reliability must be ensured even in the case of short measurement disruptions. A special form of burst error is so-called salt-and-pepper noise, which can largely be removed with error-correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
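The abstract does not name a specific code, so the sketch below uses a Hamming(7,4) single-error-correcting code purely as an illustration of pixel-wise correction: the per-pixel temporal bit sequence is encoded over seven projected patterns, and one misread frame can be recovered.

```python
# Illustration only: Hamming(7,4) on a per-pixel temporal bit sequence.
import numpy as np

G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])           # generator matrix (systematic)
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])           # parity-check matrix

def encode(data4):
    return (np.array(data4) @ G) % 2

def decode(received7):
    r = np.array(received7)
    syndrome = (H @ r) % 2
    if syndrome.any():                     # nonzero syndrome -> locate and flip the bad bit
        err_pos = np.argmax((H.T == syndrome).all(axis=1))
        r[err_pos] ^= 1
    return r[:4]                           # systematic code: data bits come first

code = encode([1, 0, 1, 1])                # 7 projected binary patterns for one pixel
corrupted = code.copy(); corrupted[2] ^= 1 # one frame misread at this pixel
assert (decode(corrupted) == [1, 0, 1, 1]).all()
```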
NASA Astrophysics Data System (ADS)
Dwivedi, Prashant Povel; Kumar, Challa Sesha Sai Pavan; Choi, Hee Joo; Cha, Myoungsik
2016-02-01
Random duty-cycle error (RDE) is inherent in the fabrication of ferroelectric quasi-phase-matching (QPM) gratings. Although a small RDE may not affect the nonlinearity of QPM devices, it enhances non-phase-matched parasitic harmonic generations, limiting the device performance in some applications. Recently, we demonstrated a simple method for measuring the RDE in QPM gratings by analyzing the far-field diffraction pattern obtained by uniform illumination (Dwivedi et al. in Opt Express 21:30221-30226, 2013). In the present study, we used a Gaussian beam illumination for the diffraction experiment to measure noise spectra that are less affected by the pedestals of the strong diffraction orders. Our results were compared with our calculations based on a random grating model, demonstrating improved resolution in the RDE estimation.
Reed-Solomon error-correction as a software patch mechanism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pendley, Kevin D.
This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Using the Reed-Solomon code to generate error-correction data for a changed or updated codebase will allow the error-correction data to be applied to an existing codebase to both validate and introduce changes or updates from some upstream source to the existing installed codebase.
76 FR 44010 - Medicare Program; Hospice Wage Index for Fiscal Year 2012; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-22
.... 93.774, Medicare-- Supplementary Medical Insurance Program) Dated: July 15, 2011. Dawn L. Smalls... corrects technical errors that appeared in the notice of CMS ruling published in the Federal Register on... FR 26731), there were technical errors that are identified and corrected in the Correction of Errors...
Frequency of under-corrected refractive errors in elderly Chinese in Beijing.
Xu, Liang; Li, Jianjun; Cui, Tongtong; Tong, Zhongbiao; Fan, Guizhi; Yang, Hua; Sun, Baochen; Zheng, Yuanyuan; Jonas, Jost B
2006-07-01
The aim of the study was to evaluate the prevalence of under-corrected refractive error among elderly Chinese in the Beijing area. The population-based, cross-sectional, cohort study comprised 4,439 subjects out of 5,324 subjects asked to participate (response rate 83.4%) with an age of 40+ years. It was divided into a rural part [1,973 (44.4%) subjects] and an urban part [2,466 (55.6%) subjects]. Habitual and best-corrected visual acuity was measured. Under-corrected refractive error was defined as an improvement in visual acuity of the better eye of at least two lines with best possible refractive correction. The rate of under-corrected refractive error was 19.4% (95% confidence interval, 18.2, 20.6). In a multiple regression analysis, prevalence and size of under-corrected refractive error in the better eye was significantly associated with lower level of education (P<0.001), female gender (P<0.001), and age (P=0.001). Under-correction of refractive error is relatively common among elderly Chinese in the Beijing area when compared with data from other populations.
Augmented burst-error correction for UNICON laser memory. [digital memory
NASA Technical Reports Server (NTRS)
Lim, R. S.
1974-01-01
A single-burst-error correction system is described for data stored in the UNICON laser memory. In the proposed system, a long fire code with code length n greater than 16,768 bits was used as an outer code to augment an existing inner shorter fire code for burst error corrections. The inner fire code is a (80,64) code shortened from the (630,614) code, and it is used to correct a single-burst-error on a per-word basis with burst length b less than or equal to 6. The outer code, with b less than or equal to 12, would be used to correct a single-burst-error on a per-page basis, where a page consists of 512 32-bit words. In the proposed system, the encoding and error detection processes are implemented by hardware. A minicomputer, currently used as a UNICON memory management processor, is used on a time-demanding basis for error correction. Based upon existing error statistics, this combination of an inner code and an outer code would enable the UNICON system to obtain a very low error rate in spite of flaws affecting the recorded data.
Viallon, Magalie; Terraz, Sylvain; Roland, Joerg; Dumont, Erik; Becker, Christoph D; Salomir, Rares
2010-04-01
MR thermometry based on the proton resonance frequency shift (PRFS) is the most commonly used method for the monitoring of thermal therapies. As the chemical shift of water protons is temperature dependent, the local temperature variation (relative to an initial baseline) may be calculated from time-dependent phase changes in gradient-echo (GRE) MR images. Dynamic phase shift in GRE images is also produced by time-dependent changes in the magnetic bulk susceptibility of tissue. Gas bubbles (known as "white cavitation") are frequently visualized near the RF electrode in ultrasonography-guided radio frequency ablation (RFA). This study aimed to investigate RFA-induced cavitation's effects by using simultaneous ultrasonography and MRI, to both visualize the cavitation and quantify the subsequent magnetic susceptibility-mediated errors in concurrent PRFS MR-thermometry (MRT) as well as to propose a first-order correction for the latter errors. RF heating in saline gels and in ex vivo tissues was performed with MR-compatible bipolar and monopolar electrodes inside a 1.5 T MR clinical scanner. Ultrasonography simultaneous to PRFS MRT was achieved using a MR-compatible phased-array ultrasonic transducer. PRFS MRT was performed interleaved in three orthogonal planes and compared to measurements from fluoroptic sensors, under low and, respectively, high RFA power levels. Control experiments were performed to isolate the main source of errors in standard PRFS thermometry. Ultrasonography, MRI and digital camera pictures clearly demonstrated generation of bubbles every time when operating the radio frequency equipment at therapeutic powers (> or = 30 W). Simultaneous bimodal (ultrasonography and MRI) monitoring of high power RF heating demonstrated a correlation between the onset of the PRFS-thermometry errors and the appearance of bubbles around the applicator. In an ex vivo study using a bipolar RF electrode under low power level (5 W), the MR measured temperature curves accurately matched the reference fluoroptic data. In similar ex vivo studies when applying higher RFA power levels (30 W), the correlation plots of MR thermometry versus fluoroptic data showed large errors in PRFS-derived temperature (up to 45 degrees C absolute deviation, positive or negative) depending not only on fluoroptic tip position but also on the RF electrode orientation relative to the B0 axis. Regions with apparent decrease in the PRFS-derived temperature maps as much as 30 degrees C below the initial baseline were visualized during RFA high power application. Ex vivo data were corrected assuming a Gaussian dynamic source of susceptibility, centered in the anode/cathode gap of the RF bipolar electrode. After correction, the temperature maps recovered the revolution symmetry pattern predicted by theory and matched the fluoroptic data within 4.5 degrees C mean offset. RFA induces dynamic changes in magnetic bulk susceptibility in biological tissue, resulting in large and spatially dependent errors of phase-subtraction-only PRFS MRT and unexploitable thermal dose maps. These thermometry artifacts were strongly correlated with the appearance of transient cavitation. A first-order dynamic model of susceptibility provided a useful method for minimizing these artifacts in phantom and ex vivo experiments.
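For reference, the standard PRFS relation used (before any susceptibility correction) to convert a gradient-echo phase difference into a temperature change is sketched below. Field strength, echo time, and the example phase value are illustrative, not the study's acquisition parameters.

```python
# Standard PRFS phase-to-temperature conversion (illustrative parameters).
import numpy as np

GAMMA = 42.576e6        # proton gyromagnetic ratio, Hz/T
ALPHA = -0.01e-6        # PRFS thermal coefficient, approx. -0.01 ppm/degC
B0 = 1.5                # main field, T
TE = 0.015              # echo time, s

def phase_to_delta_T(delta_phi):
    """Temperature change (degC) from the phase difference (rad) vs. baseline."""
    return delta_phi / (2 * np.pi * GAMMA * ALPHA * B0 * TE)

delta_phi = np.deg2rad(-10.0)              # example phase change of -10 degrees
print("Delta T = %.1f degC" % phase_to_delta_T(delta_phi))
```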
Gleeson, Fergus V.; Brady, Michael; Schnabel, Julia A.
2018-01-01
Abstract. Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and provides plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rule out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity preserving prior for motions, such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset. PMID:29662918
Papież, Bartłomiej W; Franklin, James M; Heinrich, Mattias P; Gleeson, Fergus V; Brady, Michael; Schnabel, Julia A
2018-04-01
Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and provides plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rule out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity preserving prior for motions, such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.
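A minimal guided filter in the style of He et al. can stand in for the locally adaptive regularization described above: one displacement-field component is smoothed with the anatomical image as guide, so smoothing respects intensity edges such as sliding interfaces. The registration loop itself is omitted; radius, eps, and the placeholder arrays are illustrative.

```python
# Guided-filter regularization of a displacement component (sketch).
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=7, eps=1e-3):
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    var_I = uniform_filter(guide * guide, size) - mean_I ** 2
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)               # local linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

I = np.random.rand(128, 128)                 # placeholder anatomical image (guide)
u = np.random.randn(128, 128)                # placeholder displacement component
u_smooth = guided_filter(I, u)               # edge-aware replacement for Gaussian smoothing
```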
Parameter estimation for slit-type scanning sensors
NASA Technical Reports Server (NTRS)
Fowler, J. W.; Rolfe, E. G.
1981-01-01
The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources and combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results which are quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.
Absolute judgment for one- and two-dimensional stimuli embedded in Gaussian noise
NASA Technical Reports Server (NTRS)
Kvalseth, T. O.
1977-01-01
This study examines the effect on human performance of adding Gaussian noise or disturbance to the stimuli in absolute judgment tasks involving both one- and two-dimensional stimuli. For each selected stimulus value (both an X-value and a Y-value were generated in the two-dimensional case), 10 values (or 10 pairs of values in the two-dimensional case) were generated from a zero-mean Gaussian variate, added to the selected stimulus value and then served as the coordinate values for the 10 points that were displayed sequentially on a CRT. The results show that human performance, in terms of the information transmitted and rms error as functions of stimulus uncertainty, was significantly reduced as the noise variance increased.
Detecting Non-Gaussian and Lognormal Characteristics of Temperature and Water Vapor Mixing Ratio
NASA Astrophysics Data System (ADS)
Kliewer, A.; Fletcher, S. J.; Jones, A. S.; Forsythe, J. M.
2017-12-01
Many operational data assimilation and retrieval systems assume that the errors and variables come from a Gaussian distribution. This study builds upon previous results showing that positive definite variables, specifically water vapor mixing ratio and temperature, can follow a non-Gaussian and, moreover, a lognormal distribution. Previously, statistical testing procedures, which included the Jarque-Bera test, the Shapiro-Wilk test, the Chi-squared goodness-of-fit test, and a composite test that incorporated the results of the former tests, were employed to determine locations and time spans where atmospheric variables assume a non-Gaussian distribution. These tests are now investigated in a "sliding window" fashion in order to extend the testing procedure to near real time. The analyzed 1-degree resolution data come from the National Oceanic and Atmospheric Administration (NOAA) Global Forecast System (GFS) six-hour forecast from the 0Z analysis. These results indicate the necessity for a data assimilation (DA) system to be able to properly use the lognormally distributed variables in an appropriate Bayesian analysis that does not assume the variables are Gaussian.
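A toy sketch of the sliding-window use of such tests: Jarque-Bera and Shapiro-Wilk p-values are computed over a moving window of recent values and compared against a significance level. The synthetic series, window length, and threshold are illustrative choices only.

```python
# Sliding-window normality testing sketch (synthetic data, illustrative settings).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
series = np.concatenate([rng.normal(280, 2, 300),          # Gaussian-like segment
                         rng.lognormal(1.0, 0.6, 300)])    # lognormal-like segment
window, alpha = 120, 0.05

for start in range(0, len(series) - window + 1, 60):
    x = series[start:start + window]
    jb_stat, jb_p = stats.jarque_bera(x)
    sw_stat, sw_p = stats.shapiro(x)
    gaussian = jb_p > alpha and sw_p > alpha
    print(f"window {start:4d}-{start + window:4d}: Gaussian? {gaussian}")
```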
Spatially coupled low-density parity-check error correction for holographic data storage
NASA Astrophysics Data System (ADS)
Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro
2017-09-01
The spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage. The superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number, and when the lifting number is over 100, SC-LDPC shows better error correctability compared with irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. The error-free point is near 2.8 dB, and raw error rates of over 10^-1 can be corrected in simulation. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that an error rate of 8 × 10^-2 can be corrected; furthermore, it works effectively and shows good error correctability.
Adaptive control for accelerators
Eaton, Lawrie E.; Jachim, Stephen P.; Natter, Eckard F.
1991-01-01
An adaptive feedforward control loop is provided to stabilize accelerator beam loading of the radio frequency field in an accelerator cavity during successive pulses of the beam into the cavity. A digital signal processor enables an adaptive algorithm to generate a feedforward error correcting signal functionally determined by the feedback error obtained by a beam pulse loading the cavity after the previous correcting signal was applied to the cavity. Each cavity feedforward correcting signal is successively stored in the digital processor and modified by the feedback error resulting from its application to generate the next feedforward error correcting signal. A feedforward error correcting signal is generated by the digital processor in advance of the beam pulse to enable a composite correcting signal and the beam pulse to arrive concurrently at the cavity.
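A toy discrete, pulse-to-pulse version of the adaptation described above: the stored feedforward correction for the next beam pulse is updated by the feedback error observed on the current pulse. The scalar plant model and gain are placeholders, not the accelerator-cavity dynamics.

```python
# Pulse-to-pulse adaptive feedforward sketch (placeholder plant and gain).
beam_loading = 1.0          # disturbance each pulse imposes on the cavity field
gain = 0.5                  # adaptation gain
u_ff = 0.0                  # stored feedforward correcting signal

for pulse in range(8):
    cavity_error = beam_loading - u_ff      # residual error seen during this pulse
    u_ff += gain * cavity_error             # update stored correction for the next pulse
    print(f"pulse {pulse}: error = {cavity_error:+.3f}, next feedforward = {u_ff:.3f}")
```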
A systematic comparison of error correction enzymes by next-generation sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.
Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities such as ErrASE preferentially corrects C/G transversions whereas T7 Endonuclease I preferentially corrects A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.
A systematic comparison of error correction enzymes by next-generation sequencing
Lubock, Nathan B.; Zhang, Di; Sidore, Angus M.; ...
2017-08-01
Gene synthesis, the process of assembling gene-length fragments from shorter groups of oligonucleotides (oligos), is becoming an increasingly important tool in molecular and synthetic biology. The length, quality and cost of gene synthesis are limited by errors produced during oligo synthesis and subsequent assembly. Enzymatic error correction methods are cost-effective means to ameliorate errors in gene synthesis. Previous analyses of these methods relied on cloning and Sanger sequencing to evaluate their efficiencies, limiting quantitative assessment. Here, we develop a method to quantify errors in synthetic DNA by next-generation sequencing. We analyzed errors in model gene assemblies and systematically compared six different error correction enzymes across 11 conditions. We find that ErrASE and T7 Endonuclease I are the most effective at decreasing average error rates (up to 5.8-fold relative to the input), whereas MutS is the best for increasing the number of perfect assemblies (up to 25.2-fold). We are able to quantify differential specificities such as ErrASE preferentially corrects C/G transversions whereas T7 Endonuclease I preferentially corrects A/T transversions. More generally, this experimental and computational pipeline is a fast, scalable and extensible way to analyze errors in gene assemblies, to profile error correction methods, and to benchmark DNA synthesis methods.
Error detection and correction unit with built-in self-test capability for spacecraft applications
NASA Technical Reports Server (NTRS)
Timoc, Constantin
1990-01-01
The objective of this project was to research and develop a 32-bit single chip Error Detection and Correction unit capable of correcting all single bit errors and detecting all double bit errors in the memory systems of a spacecraft. We designed the 32-bit EDAC (Error Detection and Correction unit) based on a modified Hamming code and according to the design specifications and performance requirements. We constructed a laboratory prototype (breadboard) which was converted into a fault simulator. The correctness of the design was verified on the breadboard using an exhaustive set of test cases. A logic diagram of the EDAC was delivered to JPL Section 514 on 4 Oct. 1988.
The Topology of Large-Scale Structure in the 1.2 Jy IRAS Redshift Survey
NASA Astrophysics Data System (ADS)
Protogeros, Zacharias A. M.; Weinberg, David H.
1997-11-01
We measure the topology (genus) of isodensity contour surfaces in volume-limited subsets of the 1.2 Jy IRAS redshift survey, for smoothing scales λ = 4, 7, and 12 h^-1 Mpc. At 12 h^-1 Mpc, the observed genus curve has a symmetric form similar to that predicted for a Gaussian random field. At the shorter smoothing lengths, the observed genus curve shows a modest shift in the direction of an isolated cluster or "meatball" topology. We use mock catalogs drawn from cosmological N-body simulations to investigate the systematic biases that affect topology measurements in samples of this size and to determine the full covariance matrix of the expected random errors. We incorporate the error correlations into our evaluations of theoretical models, obtaining both frequentist assessments of absolute goodness of fit and Bayesian assessments of models' relative likelihoods. We compare the observed topology of the 1.2 Jy survey to the predictions of dynamically evolved, unbiased, gravitational instability models that have Gaussian initial conditions. The model with an n = -1 power-law initial power spectrum achieves the best overall agreement with the data, though models with a low-density cold dark matter power spectrum and an n = 0 power-law spectrum are also consistent. The observed topology is inconsistent with an initially Gaussian model that has n = -2, and it is strongly inconsistent with a Voronoi foam model, which has a non-Gaussian, bubble topology.
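For reference, the symmetric genus curve of a Gaussian random field against which such measurements are compared takes the standard form below; the amplitude depends only on the second moment of the smoothed power spectrum.

```latex
% Genus per unit volume of a Gaussian random field (standard result);
% the amplitude is set by the second moment <k^2> of the smoothed power spectrum.
g(\nu) = \frac{1}{(2\pi)^{2}}
         \left(\frac{\langle k^{2}\rangle}{3}\right)^{3/2}
         \left(1-\nu^{2}\right) e^{-\nu^{2}/2}
```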
Kumar, Anil; Adhikary, Amitava; Shamoun, Lance; Sevilla, Michael D
2016-03-10
The solvated electron (e(aq)⁻) is a primary intermediate after an ionization event that produces reductive DNA damage. Accurate standard redox potentials (E(o)) of nucleobases and of e(aq)⁻ determine the extent of reaction of e(aq)⁻ with nucleobases. In this work, E(o) values of e(aq)⁻ and of nucleobases have been calculated employing the accurate ab initio Gaussian 4 theory including the polarizable continuum model (PCM). The Gaussian 4-calculated E(o) of e(aq)⁻ (-2.86 V) is in excellent agreement with the experimental one (-2.87 V). The Gaussian 4-calculated E(o) of nucleobases in dimethylformamide (DMF) lie in the range (-2.36 V to -2.86 V); they are in reasonable agreement with the experimental E(o) in DMF and have a mean unsigned error (MUE) = 0.22 V. However, inclusion of specific water molecules reduces this error significantly (MUE = 0.07). With the use of a model of e(aq)⁻ nucleobase complex with six water molecules, the reaction of e(aq)⁻ with the adjacent nucleobase is investigated using approximate ab initio molecular dynamics (MD) simulations including PCM. Our MD simulations show that e(aq)⁻ transfers to uracil, thymine, cytosine, and adenine, within 10 to 120 fs and e(aq)⁻ reacts with guanine only when a water molecule forms a hydrogen bond to O6 of guanine which stabilizes the anion radical.
IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erwin, Peter; Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München
2015-02-01
I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ² statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ² minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
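The bias mentioned in the last sentence can be seen in a toy setting (this is not IMFIT's code): estimating a constant flux from low-count Poisson pixels with χ² and data-based per-pixel Gaussian errors gives a systematically low answer, whereas the Poisson maximum-likelihood estimate is simply the mean.

```python
# Toy demonstration of chi-square bias with data-based errors in the low-count regime.
import numpy as np

rng = np.random.default_rng(4)
true_flux = 5.0
data = rng.poisson(true_flux, size=10000).astype(float)

# (a) chi^2 with sigma_i^2 = max(data_i, 1): the minimizer is a harmonic-type mean.
sigma2 = np.maximum(data, 1.0)
chi2_estimate = np.sum(data / sigma2) / np.sum(1.0 / sigma2)

# (b) Poisson maximum likelihood: the minimizer is the ordinary mean.
poisson_estimate = data.mean()

print("true %.2f  chi2-with-data-errors %.2f  Poisson ML %.2f"
      % (true_flux, chi2_estimate, poisson_estimate))
```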
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandel, Kaisey S.; Kirshner, Robert P.; Foley, Ryan J., E-mail: kmandel@cfa.harvard.edu
2014-12-20
We investigate the statistical dependence of the peak intrinsic colors of Type Ia supernovae (SNe Ia) on their expansion velocities at maximum light, measured from the Si II λ6355 spectral feature. We construct a new hierarchical Bayesian regression model, accounting for the random effects of intrinsic scatter, measurement error, and reddening by host galaxy dust, and implement a Gibbs sampler and deviance information criteria to estimate the correlation. The method is applied to the apparent colors from BVRI light curves and Si II velocity data for 79 nearby SNe Ia. The apparent color distributions of high-velocity (HV) and normal velocity (NV) supernovae exhibit significant discrepancies for B – V and B – R, but not other colors. Hence, they are likely due to intrinsic color differences originating in the B band, rather than dust reddening. The mean intrinsic B – V and B – R color differences between HV and NV groups are 0.06 ± 0.02 and 0.09 ± 0.02 mag, respectively. A linear model finds significant slopes of –0.021 ± 0.006 and –0.030 ± 0.009 mag (10³ km s⁻¹)⁻¹ for intrinsic B – V and B – R colors versus velocity, respectively. Because the ejecta velocity distribution is skewed toward high velocities, these effects imply non-Gaussian intrinsic color distributions with skewness up to +0.3. Accounting for the intrinsic-color-velocity correlation results in corrections to A_V extinction estimates as large as –0.12 mag for HV SNe Ia and +0.06 mag for NV events. Velocity measurements from SN Ia spectra have the potential to diminish systematic errors from the confounding of intrinsic colors and dust reddening affecting supernova distances.
How EFL Students Can Use Google to Correct Their "Untreatable" Written Errors
ERIC Educational Resources Information Center
Geiller, Luc
2014-01-01
This paper presents the findings of an experiment in which a group of 17 French post-secondary EFL learners used Google to self-correct several "untreatable" written errors. Whether or not error correction leads to improved writing has been much debated, some researchers dismissing it is as useless and others arguing that error feedback…
Critical Neural Substrates for Correcting Unexpected Trajectory Errors and Learning from Them
ERIC Educational Resources Information Center
Mutha, Pratik K.; Sainburg, Robert L.; Haaland, Kathleen Y.
2011-01-01
Our proficiency at any skill is critically dependent on the ability to monitor our performance, correct errors and adapt subsequent movements so that errors are avoided in the future. In this study, we aimed to dissociate the neural substrates critical for correcting unexpected trajectory errors and learning to adapt future movements based on…
ERIC Educational Resources Information Center
Nicewander, W. Alan
2018-01-01
Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either-or-both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient which translates into "increasing the reliability of…
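For reference, the classical form of Spearman's correction referred to here is quoted below.

```latex
% Spearman's correction for attenuation: correlation between true scores,
% given the observed correlation r_xy and the reliabilities r_xx and r_yy.
r_{T_x T_y} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}}
```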
From plane waves to local Gaussians for the simulation of correlated periodic systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booth, George H., E-mail: george.booth@kcl.ac.uk; Tsatsoulis, Theodoros; Grüneis, Andreas, E-mail: a.grueneis@fkf.mpg.de
2016-08-28
We present a simple, robust, and black-box approach to the implementation and use of local, periodic, atom-centered Gaussian basis functions within a plane wave code, in a computationally efficient manner. The procedure outlined is based on the representation of the Gaussians within a finite bandwidth by their underlying plane wave coefficients. The core region is handled within the projector augmented wave framework, by pseudizing the Gaussian functions within a cutoff radius around each nucleus, smoothing the functions so that they are faithfully represented by a plane wave basis with only moderate kinetic energy cutoff. To mitigate the effects of the basis set superposition error and incompleteness at the mean-field level introduced by the Gaussian basis, we also propose a hybrid approach, whereby the complete occupied space is first converged within a large plane wave basis, and the Gaussian basis is used to construct a complementary virtual space for the application of correlated methods. We demonstrate that these pseudized Gaussians yield compact and systematically improvable spaces with an accuracy comparable to their non-pseudized Gaussian counterparts. A key advantage of the described method is its ability to efficiently capture and describe electronic correlation effects of weakly bound and low-dimensional systems, where plane waves are not sufficiently compact or able to be truncated without unphysical artifacts. We investigate the accuracy of the pseudized Gaussians for the water dimer interaction, neon solid, and water adsorption on a LiH surface, at the level of second-order Møller–Plesset perturbation theory.
A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taverniers, Søren; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu
2017-02-01
Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton–Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
The Brera Multiscale Wavelet ROSAT HRI Source Catalog. I. The Algorithm
NASA Astrophysics Data System (ADS)
Lazzati, Davide; Campana, Sergio; Rosati, Piero; Panzera, Maria Rosa; Tagliaferri, Gianpiero
1999-10-01
We present a new detection algorithm based on the wavelet transform for the analysis of high-energy astronomical images. The wavelet transform, because of its multiscale structure, is suited to the optimal detection of pointlike as well as extended sources, regardless of any loss of resolution with the off-axis angle. Sources are detected as significant enhancements in the wavelet space, after the subtraction of the nonflat components of the background. Detection thresholds are computed through Monte Carlo simulations in order to establish the expected number of spurious sources per field. The source characterization is performed through a multisource fitting in the wavelet space. The procedure is designed to correctly deal with very crowded fields, allowing for the simultaneous characterization of nearby sources. To obtain a fast and reliable estimate of the source parameters and related errors, we apply a novel decimation technique that, taking into account the correlation properties of the wavelet transform, extracts a subset of almost independent coefficients. We test the performance of this algorithm on synthetic fields, analyzing with particular care the characterization of sources in poor background situations, where the assumption of Gaussian statistics does not hold. In these cases, for which standard wavelet algorithms generally provide underestimated errors, we infer errors through a procedure that relies on robust basic statistics. Our algorithm is well suited to the analysis of images taken with the new generation of X-ray instruments equipped with CCD technology, which will produce images with very low background and/or high source density.
Verhoeven, Karolien; Weltens, Caroline; Van den Heuvel, Frank
2015-01-01
Quantification of the setup errors is vital to define appropriate setup margins preventing geographical misses. The no‐action–level (NAL) correction protocol reduces the systematic setup errors and, hence, the setup margins. The manual entry of the setup corrections in the record‐and‐verify software, however, increases the susceptibility of the NAL protocol to human errors. Moreover, the impact of the skin mobility on the anteroposterior patient setup reproducibility in whole‐breast radiotherapy (WBRT) is unknown. In this study, we therefore investigated the potential of fixed vertical couch position‐based patient setup in WBRT. The possibility of introducing a threshold for correction of the systematic setup errors was also explored. We measured the anteroposterior, mediolateral, and superior–inferior setup errors during fractions 1–12 and weekly thereafter with tangential angled single modality paired imaging. These setup data were used to simulate the residual setup errors of the NAL protocol, the fixed vertical couch position protocol, and the fixed‐action–level protocol with different correction thresholds. Population statistics of the setup errors of 20 breast cancer patients and 20 breast cancer patients with additional regional lymph node (LN) irradiation were calculated to determine the setup margins of each off‐line correction protocol. Our data showed the potential of the fixed vertical couch position protocol to restrict the systematic and random anteroposterior residual setup errors to 1.8 mm and 2.2 mm, respectively. Compared to the NAL protocol, a correction threshold of 2.5 mm reduced the frequency of mediolateral and superior–inferior setup corrections by 40% and 63%, respectively. The implementation of the correction threshold did not deteriorate the accuracy of the off‐line setup correction compared to the NAL protocol. The combination of the fixed vertical couch position protocol, for correction of the anteroposterior setup error, and the fixed‐action–level protocol with 2.5 mm correction threshold, for correction of the mediolateral and the superior–inferior setup errors, was shown to provide adequate and comparable patient setup accuracy in WBRT and WBRT with additional LN irradiation. PACS numbers: 87.53.Kn, 87.57.‐s
Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers
NASA Technical Reports Server (NTRS)
Ha, Eunho; North, Gerald R.
1995-01-01
Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
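The bias mechanism described above is essentially Jensen's inequality at work and is easy to reproduce in a toy Monte Carlo. The saturating brightness-temperature relation and the mixed-lognormal footprint statistics below are hypothetical stand-ins, not the instrument's actual physics; they only illustrate how inverting the footprint-averaged brightness temperature underestimates the true mean rain rate, and how the bias grows with sub-footprint variability.

```python
# Illustrative Monte Carlo of the beam-filling bias (hypothetical Tb(R) relation).
import numpy as np

rng = np.random.default_rng(1)
T0, dT, R0 = 150.0, 120.0, 10.0                # parameters of a saturating Tb(R)
tb = lambda r: T0 + dT * (1.0 - np.exp(-r / R0))
tb_inv = lambda t: -R0 * np.log(1.0 - (t - T0) / dT)

def footprint_bias(p_rain, mu, sigma, npix=10_000):
    """Mixed lognormal field: rain with probability p_rain, lognormal when raining."""
    raining = rng.random(npix) < p_rain
    rain = np.where(raining, rng.lognormal(mu, sigma, npix), 0.0)
    r_true = rain.mean()                       # true footprint-average rain rate
    r_beam = tb_inv(tb(rain).mean())           # rain inferred from the mean brightness temp
    return r_true, r_beam

for sigma in (0.3, 0.8, 1.5):                  # increasing sub-footprint variability
    r_true, r_beam = footprint_bias(p_rain=0.4, mu=1.0, sigma=sigma)
    print(f"sigma={sigma:.1f}  true mean R={r_true:5.2f}  retrieved R={r_beam:5.2f}"
          f"  bias={(r_beam - r_true) / r_true:+.1%}")
```

Because the assumed Tb(R) is concave, the retrieved rate is always biased low, and the skewer the sub-footprint distribution the larger the bias, mirroring the lognormal results quoted in the abstract.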
NASA Astrophysics Data System (ADS)
Guermoui, Mawloud; Gairaa, Kacem; Rabehi, Abdelaziz; Djafer, Djelloul; Benkaciali, Said
2018-06-01
Accurate estimation of solar radiation is the major concern in renewable energy applications. Over the past few years, a lot of machine learning paradigms have been proposed in order to improve the estimation performances, mostly based on artificial neural networks, fuzzy logic, support vector machines and adaptive neuro-fuzzy inference systems. The aim of this work is the prediction of the daily global solar radiation received on a horizontal surface, through the Gaussian process regression (GPR) methodology. A case study of the Ghardaïa region (Algeria) has been used in order to validate the above methodology. In fact, several combinations have been tested; it was found that the GPR model based on sunshine duration, minimum air temperature and relative humidity gives the best results in terms of mean absolute bias error (MBE), root mean square error (RMSE), relative root mean square error (rRMSE), and correlation coefficient (r). The obtained values of these indicators are 0.67 MJ/m2, 1.15 MJ/m2, 5.2%, and 98.42%, respectively.
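A hedged sketch of this GPR workflow with scikit-learn follows. The three predictors are synthetic stand-ins for sunshine duration, minimum air temperature and relative humidity (the Ghardaïa measurements are not reproduced here), and the kernel choice is an assumption; only the error metrics mirror those quoted in the abstract.

```python
# Hedged GPR sketch on synthetic stand-in data (not the Ghardaia dataset).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
S = rng.uniform(4, 12, n)           # sunshine duration (h), synthetic
Tmin = rng.uniform(2, 28, n)        # minimum air temperature (deg C), synthetic
RH = rng.uniform(10, 70, n)         # relative humidity (%), synthetic
X = np.column_stack([S, Tmin, RH])
H = 8 + 1.5 * S + 0.15 * Tmin - 0.05 * RH + rng.normal(0, 0.8, n)   # toy daily GHI (MJ/m^2)

Xtr, Xte, ytr, yte = train_test_split(X, H, test_size=0.3, random_state=0)
kernel = ConstantKernel() * RBF(length_scale=[1.0, 1.0, 1.0]) + WhiteKernel()
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(Xtr, ytr)
pred = gpr.predict(Xte)

mbe = np.mean(pred - yte)                        # mean bias error
rmse = np.sqrt(np.mean((pred - yte) ** 2))       # root mean square error
rrmse = 100 * rmse / np.mean(yte)                # relative RMSE (%)
r = np.corrcoef(pred, yte)[0, 1]                 # correlation coefficient
print(f"MBE={mbe:.2f} MJ/m2  RMSE={rmse:.2f} MJ/m2  rRMSE={rrmse:.1f}%  r={r:.3f}")
```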
A modified adjoint-based grid adaptation and error correction method for unstructured grid
NASA Astrophysics Data System (ADS)
Cui, Pengcheng; Li, Bin; Tang, Jing; Chen, Jiangtao; Deng, Youqi
2018-05-01
Grid adaptation is an important strategy to improve the accuracy of output functions (e.g. drag, lift, etc.) in computational fluid dynamics (CFD) analysis and design applications. This paper presents a modified robust grid adaptation and error correction method for reducing simulation errors in integral outputs. The procedure is based on discrete adjoint optimization theory in which the estimated global error of output functions can be directly related to the local residual error. According to this relationship, local residual error contribution can be used as an indicator in a grid adaptation strategy designed to generate refined grids for accurately estimating the output functions. This grid adaptation and error correction method is applied to subsonic and supersonic simulations around three-dimensional configurations. Numerical results demonstrate that the sensitive grids to output functions are detected and refined after grid adaptation, and the accuracy of output functions is obviously improved after error correction. The proposed grid adaptation and error correction method is shown to compare very favorably in terms of output accuracy and computational efficiency relative to the traditional featured-based grid adaptation.
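The adjoint relationship that underlies this kind of adaptation indicator can be written generically as follows; the notation is illustrative rather than the paper's. The output error is approximated by an adjoint-weighted residual, and its cell-wise contributions flag where refinement pays off.

```latex
% Generic discrete adjoint error estimate (sign convention depends on how the
% residual R_h is defined). u_H is the coarse solution injected into the fine
% space, \psi_h the discrete adjoint, and J the output functional (drag, lift, ...).
J_h(u_h) \;\approx\; J_h(u_H) \;-\; \psi_h^{\mathsf{T}} R_h(u_H),
\qquad
\left(\frac{\partial R_h}{\partial u_h}\right)^{\!\mathsf{T}} \psi_h
  \;=\; \left(\frac{\partial J_h}{\partial u_h}\right)^{\!\mathsf{T}} .
```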
Kuwabara, Masaru; Mansouri, Farshad A.; Buckley, Mark J.
2014-01-01
Monkeys were trained to select one of three targets by matching in color or matching in shape to a sample. Because the matching rule frequently changed and there were no cues for the currently relevant rule, monkeys had to maintain the relevant rule in working memory to select the correct target. We found that monkeys' error commission was not limited to the period after the rule change and occasionally occurred even after several consecutive correct trials, indicating that the task was cognitively demanding. In trials immediately after such error trials, monkeys' speed of selecting targets was slower. Additionally, in trials following consecutive correct trials, the monkeys' target selections for erroneous responses were slower than those for correct responses. We further found evidence for the involvement of the cortex in the anterior cingulate sulcus (ACCs) in these error-related behavioral modulations. First, ACCs cell activity differed between after-error and after-correct trials. In another group of ACCs cells, the activity differed depending on whether the monkeys were making a correct or erroneous decision in target selection. Second, bilateral ACCs lesions significantly abolished the response slowing both in after-error trials and in error trials. The error likelihood in after-error trials could be inferred by the error feedback in the previous trial, whereas the likelihood of erroneous responses after consecutive correct trials could be monitored only internally. These results suggest that ACCs represent both context-dependent and internally detected error likelihoods and promote modes of response selections in situations that involve these two types of error likelihood. PMID:24872558
Efficient error correction for next-generation sequencing of viral amplicons
2012-01-01
Background Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. Results In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Conclusions Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm PMID:22759430
Efficient error correction for next-generation sequencing of viral amplicons.
Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury
2012-06-25
Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses.The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
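A deliberately minimal k-mer-spectrum corrector conveys the general idea behind such amplicon error correction; this is a generic sketch, not the published KEC or ET implementation, and the k-mer size, threshold, and toy reads are arbitrary. Rarely seen k-mers are treated as erroneous, and a base is corrected when a substitution makes every k-mer covering that position frequent again.

```python
# Toy k-mer-spectrum error corrector (generic idea only, not KEC/ET).
from collections import Counter

K, MIN_COUNT = 7, 3                      # k-mer size and frequency threshold (illustrative)

def kmer_counts(reads):
    counts = Counter()
    for r in reads:
        for i in range(len(r) - K + 1):
            counts[r[i:i + K]] += 1
    return counts

def correct_read(read, counts):
    read = list(read)
    for i, base in enumerate(read):
        covering = range(max(0, i - K + 1), min(i, len(read) - K) + 1)
        if all(counts[''.join(read[s:s + K])] >= MIN_COUNT for s in covering):
            continue                      # position already supported by solid k-mers
        for cand in 'ACGT':
            if cand == base:
                continue
            trial = read[:i] + [cand] + read[i + 1:]
            if all(counts[''.join(trial[s:s + K])] >= MIN_COUNT for s in covering):
                read[i] = cand            # substitution makes all covering k-mers solid
                break
    return ''.join(read)

# toy amplicon data: many copies of the true sequence plus one read with a substitution
true_seq = "ACGTACGGTTCAGGCTAACGTACCGT"
reads = [true_seq] * 20 + [true_seq[:10] + "A" + true_seq[11:]]
counts = kmer_counts(reads)
print(correct_read(reads[-1], counts) == true_seq)   # True for this toy example
```

Real amplicon correctors additionally handle homopolymer indels and position-dependent error rates, which is precisely the calibration issue raised in the abstract.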
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
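The attenuation effect, and the flavour of correcting for it, can be illustrated under classical normality assumptions; this simulation illustrates the problem and a normality-based correction, not the authors' estimator. With additive error W = X + U the observed AUC shrinks towards 0.5, and under these assumptions it can be restored by rescaling its probit with the biomarker's reliability.

```python
# AUC attenuation under classical measurement error, plus a normality-based correction.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 200_000
delta, sigma_x, sigma_u = 1.0, 1.0, 0.8          # case-control shift, true SD, error SD

x_controls = rng.normal(0.0, sigma_x, n)
x_cases = rng.normal(delta, sigma_x, n)
w_controls = x_controls + rng.normal(0.0, sigma_u, n)
w_cases = x_cases + rng.normal(0.0, sigma_u, n)

def empirical_auc(cases, controls):
    # P(case biomarker > control biomarker), estimated from independent pairs
    return np.mean(cases > controls)

auc_true = empirical_auc(x_cases, x_controls)
auc_obs = empirical_auc(w_cases, w_controls)

# Under normality: probit(AUC_true) = probit(AUC_obs) / sqrt(lambda), with
# lambda = sigma_x^2 / (sigma_x^2 + sigma_u^2) the reliability coefficient.
lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)
auc_corrected = norm.cdf(norm.ppf(auc_obs) / np.sqrt(lam))
print(f"true AUC={auc_true:.3f}  observed AUC={auc_obs:.3f}  corrected AUC={auc_corrected:.3f}")
```

The paper's contribution is precisely to avoid the normality assumption used in this toy correction.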
Micheyl, Christophe; Dai, Huanping
2010-01-01
The equal-variance Gaussian signal-detection-theory (SDT) decision model for the dual-pair change-detection (or “4IAX”) paradigm has been described in earlier publications. In this note, we consider the equal-variance Gaussian SDT model for the related dual-pair AB vs BA identification paradigm. The likelihood ratios, optimal decision rules, receiver operating characteristics (ROCs), and relationships between d' and proportion-correct (PC) are analyzed for two special cases: that of statistically independent observations, which is likely to apply in constant-stimuli experiments, and that of highly correlated observations, which is likely to apply in experiments where stimuli are roved widely across trials or pairs. A surprising outcome of this analysis is that although these two situations lead to different optimal decision rules, the predicted ROCs and proportions of correct responses (PCs) for these two cases are not substantially different, and are either identical or similar to those observed in the basic Yes-No paradigm. PMID:19633356
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kutzler, F.W.; Painter, G.S.
1992-02-15
A fully self-consistent series of nonlocal (gradient) density-functional calculations has been carried out using the augmented-Gaussian-orbital method to determine the magnitude of gradient corrections to the potential-energy curves of the first-row diatomics, Li{sub 2} through F{sub 2}. Both the Langreth-Mehl-Hu and the Perdew-Wang gradient-density functionals were used in calculations of the binding energy, bond length, and vibrational frequency for each dimer. Comparison with results obtained in the local-spin-density approximation (LSDA) using the Vosko-Wilk-Nusair functional, and with experiment, reveals that bond lengths and vibrational frequencies are rather insensitive to details of the gradient functionals, including self-consistency effects, but the gradient corrections reduce the overbinding commonly observed in the LSDA calculations of first-row diatomics (with the exception of Li{sub 2}, the gradient-functional binding-energy error is only 50--12 % of the LSDA error). The improved binding energies result from a large differential energy lowering, which occurs in open-shell atoms relative to the diatomics. The stabilization of the atom arises from the use of nonspherical charge and spin densities in the gradient-functional calculations. This stabilization is negligibly small in LSDA calculations performed with nonspherical densities.
Coded throughput performance simulations for the time-varying satellite channel. M.S. Thesis
NASA Technical Reports Server (NTRS)
Han, LI
1995-01-01
The design of a reliable satellite communication link involving the data transfer from a small, low-orbit satellite to a ground station, but through a geostationary satellite, was examined. In such a scenario, the received signal power to noise density ratio increases as the transmitting low-orbit satellite comes into view, and then decreases as it then departs, resulting in a short-duration, time-varying communication link. The optimal values of the small satellite antenna beamwidth, signaling rate, modulation scheme and the theoretical link throughput (in bits per day) have been determined. The goal of this thesis is to choose a practical coding scheme which maximizes the daily link throughput while satisfying a prescribed probability of error requirement. We examine the throughput of both fixed rate and variable rate concatenated forward error correction (FEC) coding schemes for the additive white Gaussian noise (AWGN) channel, and then examine the effect of radio frequency interference (RFI) on the best coding scheme among them. Interleaving is used to mitigate degradation due to RFI. It was found that the variable rate concatenated coding scheme could achieve 74 percent of the theoretical throughput, equivalent to 1.11 Gbits/day based on the cutoff rate R(sub 0). For comparison, 87 percent is achievable for AWGN-only case.
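The throughput bookkeeping based on the cutoff rate can be sketched as below. The bell-shaped SNR profile, symbol rate, and number of visible passes are made-up stand-ins for the thesis link budget; the only grounded ingredient is the standard cutoff rate of binary antipodal signalling on the AWGN channel, R0 = 1 - log2(1 + e^(-Es/N0)).

```python
# Cutoff-rate throughput bound for a hypothetical time-varying pass (illustrative numbers).
import numpy as np

def cutoff_rate(esn0_db):
    esn0 = 10.0 ** (np.asarray(esn0_db) / 10.0)
    return 1.0 - np.log2(1.0 + np.exp(-esn0))     # bits per channel use, binary-input AWGN

# hypothetical pass: SNR rises and falls as the low-orbit satellite crosses the view
t = np.linspace(0.0, 600.0, 601)                  # seconds within one pass
esn0_db = 4.0 - ((t - 300.0) / 150.0) ** 2 * 6.0  # peaks at 4 dB mid-pass (illustrative)
symbol_rate = 1.0e6                               # channel symbols per second (illustrative)

bits_per_pass = np.trapz(cutoff_rate(esn0_db) * symbol_rate, t)
passes_per_day = 12                               # illustrative visibility count
print(f"R0-based bound: {bits_per_pass * passes_per_day / 1e9:.2f} Gbit/day")
```

A variable-rate concatenated scheme, as in the thesis, tries to track this R0 curve during the pass instead of designing for the worst-case SNR.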
NASA Astrophysics Data System (ADS)
Troncossi, M.; Di Sante, R.; Rivola, A.
2016-10-01
In the field of vibration qualification testing, random excitations are typically imposed on the tested system in terms of a power spectral density (PSD) profile. This is one of the most popular ways to control the shaker or slip table for durability tests. However, these excitations (and the corresponding system responses) exhibit a Gaussian probability distribution, whereas not all real-life excitations are Gaussian, causing the response to be also non-Gaussian. In order to introduce non-Gaussian peaks, a further parameter, i.e., kurtosis, has to be controlled in addition to the PSD. However, depending on the specimen behaviour and input signal characteristics, the use of non-Gaussian excitations with high kurtosis and a given PSD does not automatically imply a non-Gaussian stress response. For an experimental investigation of these coupled features, suitable measurement methods need to be developed in order to estimate the stress amplitude response at critical failure locations and consequently evaluate the input signals most representative for real-life, non-Gaussian excitations. In this paper, a simple test rig with a notched cantilevered specimen was developed to measure the response and examine the kurtosis values in the case of stationary Gaussian, stationary non-Gaussian, and burst non-Gaussian excitation signals. The laser Doppler vibrometry technique was used in this type of test for the first time, in order to estimate the specimen stress amplitude response as proportional to the differential displacement measured at the notch section ends. A method based on accelerometer measurements to correct for the occasional signal dropouts occurring during the experiment is described. The results demonstrate the ability of the test procedure to evaluate the output signal features and therefore to select the most appropriate input signal for the fatigue test.
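A hedged sketch of the signal-level distinction discussed above: a band-limited stationary Gaussian excitation versus a burst-type non-Gaussian one obtained by slow random amplitude modulation, which leaves the PSD roughly unchanged but raises the kurtosis above the Gaussian value of 3. The band edges, modulation depth, and filter orders are arbitrary choices, not the paper's test profile.

```python
# Stationary Gaussian vs burst non-Gaussian excitation with (roughly) the same PSD.
import numpy as np
from scipy import signal
from scipy.stats import kurtosis

rng = np.random.default_rng(3)
fs, n = 2048.0, 2 ** 17
white = rng.normal(size=n)
b, a = signal.butter(4, [50 / (fs / 2), 400 / (fs / 2)], btype="bandpass")
gauss = signal.lfilter(b, a, white)              # stationary Gaussian, band 50-400 Hz

# slow, strictly positive modulation envelope -> bursts, kurtosis > 3
env = 1.0 + 2.0 * np.abs(signal.lfilter(*signal.butter(2, 2 / (fs / 2)), rng.normal(size=n)))
burst = gauss * env
burst *= np.std(gauss) / np.std(burst)           # keep the same RMS level

for name, x in (("stationary Gaussian", gauss), ("burst non-Gaussian", burst)):
    k = kurtosis(x, fisher=False)                # Pearson kurtosis: 3 for a Gaussian signal
    print(f"{name:22s}  RMS={np.std(x):.3f}  kurtosis={k:.2f}")
```

Whether the specimen's stress response inherits this excess kurtosis is exactly the question the test rig in the paper is built to answer.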
DNA assembly with error correction on a droplet digital microfluidics platform.
Khilko, Yuliya; Weyman, Philip D; Glass, John I; Adams, Mark D; McNeil, Melanie A; Griffin, Peter B
2018-06-01
Custom synthesized DNA is in high demand for synthetic biology applications. However, current technologies to produce these sequences using assembly from DNA oligonucleotides are costly and labor-intensive. The automation and reduced sample volumes afforded by microfluidic technologies could significantly decrease materials and labor costs associated with DNA synthesis. The purpose of this study was to develop a gene assembly protocol utilizing a digital microfluidic device. Toward this goal, we adapted bench-scale oligonucleotide assembly methods followed by enzymatic error correction to the Mondrian™ digital microfluidic platform. We optimized Gibson assembly, polymerase chain reaction (PCR), and enzymatic error correction reactions in a single protocol to assemble 12 oligonucleotides into a 339-bp double- stranded DNA sequence encoding part of the human influenza virus hemagglutinin (HA) gene. The reactions were scaled down to 0.6-1.2 μL. Initial microfluidic assembly methods were successful and had an error frequency of approximately 4 errors/kb with errors originating from the original oligonucleotide synthesis. Relative to conventional benchtop procedures, PCR optimization required additional amounts of MgCl 2 , Phusion polymerase, and PEG 8000 to achieve amplification of the assembly and error correction products. After one round of error correction, error frequency was reduced to an average of 1.8 errors kb - 1 . We demonstrated that DNA assembly from oligonucleotides and error correction could be completely automated on a digital microfluidic (DMF) platform. The results demonstrate that enzymatic reactions in droplets show a strong dependence on surface interactions, and successful on-chip implementation required supplementation with surfactants, molecular crowding agents, and an excess of enzyme. Enzymatic error correction of assembled fragments improved sequence fidelity by 2-fold, which was a significant improvement but somewhat lower than expected compared to bench-top assays, suggesting an additional capacity for optimization.
NASA Astrophysics Data System (ADS)
Lidar, Daniel A.; Brun, Todd A.
2013-09-01
Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and Harold Baranger; 26. Critique of fault-tolerant quantum information processing Robert Alicki; References; Index.
New class of photonic quantum error correction codes
NASA Astrophysics Data System (ADS)
Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.
We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic ''cat codes'' but have several advantages including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.
Fixing Stellarator Magnetic Surfaces
NASA Astrophysics Data System (ADS)
Hanson, James D.
1999-11-01
Magnetic surfaces are a perennial issue for stellarators. The design heuristic of finding a magnetic field with zero perpendicular component on a specified outer surface often yields inner magnetic surfaces with very small resonant islands. However, magnetic fields in the laboratory are not design fields. Island-causing errors can arise from coil placement errors, stray external fields, and design inadequacies such as ignoring coil leads and incomplete characterization of current distributions within the coil pack. The problem addressed is how to eliminate such error-caused islands. I take a perturbation approach, where the zero order field is assumed to have good magnetic surfaces, and comes from a VMEC equilibrium. The perturbation field consists of error and correction pieces. The error correction method is to determine the correction field so that the sum of the error and correction fields gives zero island size at specified rational surfaces. It is particularly important to correctly calculate the island size for a given perturbation field. The method works well with many correction knobs, and a Singular Value Decomposition (SVD) technique is used to determine minimal corrections necessary to eliminate islands.
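The SVD step can be pictured as a small least-squares problem: if a response matrix maps correction-coil currents to resonant island drives, a truncated-SVD pseudo-inverse picks the minimal currents that cancel the drives produced by the error field. The matrices below are random stand-ins rather than stellarator data, and the truncation threshold is arbitrary; this is only a linear-algebra illustration of the "minimal corrections" idea.

```python
# Truncated-SVD, minimum-norm solve for correction currents (random stand-in data).
import numpy as np

rng = np.random.default_rng(4)
n_surfaces, n_coils = 6, 20                      # resonant surfaces, correction knobs
A = rng.normal(size=(n_surfaces, n_coils))       # island-drive response matrix (stand-in)
b_err = rng.normal(size=n_surfaces)              # error-field island drives (stand-in)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 1e-3 * s[0]                           # drop poorly conditioned directions
x = -(Vt[keep].T @ ((U[:, keep].T @ b_err) / s[keep]))   # minimum-norm correction currents

print("residual island drives:", np.linalg.norm(A @ x + b_err))
print("correction current norm:", np.linalg.norm(x))
```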
On the Limitations of Variational Bias Correction
NASA Technical Reports Server (NTRS)
Moradi, Isaac; Mccarty, Will; Gelaro, Ronald
2018-01-01
Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, forward operator error, and observation error, so all these errors are summed up together and counted as observation error. We identify some sources of observation errors (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
ERIC Educational Resources Information Center
Sampson, Andrew
2012-01-01
This paper reports on a small-scale study into the effects of uncoded correction (writing the correct forms above each error) and coded annotations (writing symbols that encourage learners to self-correct) on Colombian university-level EFL learners' written work. The study finds that while both coded annotations and uncoded correction appear to…
Bulk locality and quantum error correction in AdS/CFT
NASA Astrophysics Data System (ADS)
Almheiri, Ahmed; Dong, Xi; Harlow, Daniel
2015-04-01
We point out a connection between the emergence of bulk locality in AdS/CFT and the theory of quantum error correction. Bulk notions such as Bogoliubov transformations, location in the radial direction, and the holographic entropy bound all have natural CFT interpretations in the language of quantum error correction. We also show that the question of whether bulk operator reconstruction works only in the causal wedge or all the way to the extremal surface is related to the question of whether or not the quantum error correcting code realized by AdS/CFT is also a "quantum secret sharing scheme", and suggest a tensor network calculation that may settle the issue. Interestingly, the version of quantum error correction which is best suited to our analysis is the somewhat nonstandard "operator algebra quantum error correction" of Beny, Kempf, and Kribs. Our proposal gives a precise formulation of the idea of "subregion-subregion" duality in AdS/CFT, and clarifies the limits of its validity.
Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping
NASA Astrophysics Data System (ADS)
Piedrafita, Álvaro; Renes, Joseph M.
2017-12-01
We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.
A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading
NASA Astrophysics Data System (ADS)
Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo
A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA) considering a general pulse shape and flat Nakagami fading channel. First of all, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression of the characteristic function (CF) of MAI is developed in a straightforward manner. Finally, an exact expression of error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be much more easily evaluated than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
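The CF method referred to above rests on inverting the characteristic function of the decision statistic. In generic form (not the paper's final expression), with D standing for the sum of the desired term, the MAI, and the Gaussian noise, the Gil-Pelaez inversion gives the error probability directly:

```latex
% Generic CF-method error probability (Gil-Pelaez inversion); \Phi_D is the
% characteristic function of the decision statistic conditioned on a "+1" bit.
P_e \;=\; \Pr\{D < 0 \mid \text{``+1'' sent}\}
      \;=\; \frac{1}{2} \;-\; \frac{1}{\pi}\int_{0}^{\infty}
            \frac{\operatorname{Im}\!\left[\Phi_D(\omega)\right]}{\omega}\, d\omega .
```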
Modeling coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ɛ^-(d^n-1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
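The coherent-versus-Pauli distinction is easy to see at the single-qubit level; the sketch below illustrates the two error models, not the paper's repetition-code calculation. The Pauli transfer matrix of a coherent X-rotation has off-diagonal entries that a Pauli twirl discards, leaving the stochastic channel with p_X = sin^2(ɛ/2) that a Pauli error model would assume.

```python
# Coherent rotation error vs its Pauli-twirled (stochastic) approximation,
# compared at the level of the single-qubit Pauli transfer matrix (PTM).
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I, X, Y, Z]

def ptm_of_unitary(U):
    """PTM entries R_ij = (1/2) Tr[P_i U P_j U^dagger]."""
    return np.real(np.array([[0.5 * np.trace(Pi @ U @ Pj @ U.conj().T)
                               for Pj in paulis] for Pi in paulis]))

eps = 0.1                                          # rotation-angle error (radians)
U = np.cos(eps / 2) * I - 1j * np.sin(eps / 2) * X  # coherent X rotation by eps

R_coherent = ptm_of_unitary(U)
R_twirled = np.diag(np.diag(R_coherent))           # Pauli twirl keeps only the PTM diagonal

p_x = np.sin(eps / 2) ** 2
print("coherent PTM:\n", np.round(R_coherent, 4))
print("twirled PTM diagonal:", np.round(np.diag(R_twirled), 4))
print("equivalent Pauli model: p_X =", round(p_x, 6), " (1 - 2*p_X =", round(1 - 2 * p_x, 6), ")")
```

The discarded off-diagonal block is the coherent part whose accumulation over many correction cycles is what the paper quantifies at the logical level.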
Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data
NASA Technical Reports Server (NTRS)
Romero-Wolf, A. F.; Jacobs, C. S.
2011-01-01
The standard VLBI analysis models measurement noise as purely thermal errors modeled according to uncorrelated Gaussian distributions. As the price of recording bits steadily decreases, thermal errors will soon no longer dominate. It is therefore expected that troposphere and instrumentation/clock errors will increasingly become more dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become more relevant for optimal analysis. This paper will discuss the advantages of including the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow model pioneered by Treuhaft and Lanyi. We will show examples of applying these correlated noise spectra to the weighting of VLBI data analysis.
Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich
2011-12-01
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.
Vibration Noise Modeling for Measurement While Drilling System Based on FOGs
Zhang, Chunxi; Wang, Lu; Gao, Shuang; Lin, Tie; Li, Xianmu
2017-01-01
Aiming to improve survey accuracy of Measurement While Drilling (MWD) based on Fiber Optic Gyroscopes (FOGs) in the long period, the external aiding sources are fused into the inertial navigation by the Kalman filter (KF) method. The KF method needs to model the inertial sensors’ noise as the system noise model. The system noise is modeled as white Gaussian noise conventionally. However, because of the vibration while drilling, the noise in gyros isn’t white Gaussian noise any more. Moreover, an incorrect noise model will degrade the accuracy of KF. This paper developed a new approach for noise modeling on the basis of dynamic Allan variance (DAVAR). In contrast to conventional white noise models, the new noise model contains both the white noise and the color noise. With this new noise model, the KF for the MWD was designed. Finally, two vibration experiments have been performed. Experimental results showed that the proposed vibration noise modeling approach significantly improved the estimated accuracies of the inertial sensor drifts. Comparing navigation results based on the different noise models, with the DAVAR noise model the position error and the toolface angle error are reduced by more than 90%. The velocity error is reduced by more than 65%. The azimuth error is reduced by more than 50%. PMID:29039815
Vibration Noise Modeling for Measurement While Drilling System Based on FOGs.
Zhang, Chunxi; Wang, Lu; Gao, Shuang; Lin, Tie; Li, Xianmu
2017-10-17
Aiming to improve survey accuracy of Measurement While Drilling (MWD) based on Fiber Optic Gyroscopes (FOGs) in the long period, the external aiding sources are fused into the inertial navigation by the Kalman filter (KF) method. The KF method needs to model the inertial sensors' noise as the system noise model. The system noise is modeled as white Gaussian noise conventionally. However, because of the vibration while drilling, the noise in gyros isn't white Gaussian noise any more. Moreover, an incorrect noise model will degrade the accuracy of KF. This paper developed a new approach for noise modeling on the basis of dynamic Allan variance (DAVAR). In contrast to conventional white noise models, the new noise model contains both the white noise and the color noise. With this new noise model, the KF for the MWD was designed. Finally, two vibration experiments have been performed. Experimental results showed that the proposed vibration noise modeling approach significantly improved the estimated accuracies of the inertial sensor drifts. Comparing navigation results based on the different noise models, with the DAVAR noise model the position error and the toolface angle error are reduced by more than 90%. The velocity error is reduced by more than 65%. The azimuth error is reduced by more than 50%.
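The DAVAR idea, tracking how the Allan variance evolves as vibration changes the noise, can be sketched with a sliding-window Allan variance on synthetic gyro data whose noise floor jumps mid-record. The sampling rate, window sizes, and noise levels are illustrative; this is not the authors' implementation.

```python
# Sliding-window (dynamic) Allan variance on synthetic gyro data with a noise change.
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of rate data y for cluster size m samples."""
    n_clusters = y.size // m
    means = y[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(5)
fs, n = 100.0, 200_000
gyro = rng.normal(0.0, 0.01, n)                  # quiet segment (white noise)
gyro[n // 2:] += rng.normal(0.0, 0.05, n // 2)   # "drilling vibration" raises the noise floor

window, step, m = 40_000, 20_000, 100            # sliding window and cluster size (samples)
for start in range(0, n - window + 1, step):
    avar = allan_variance(gyro[start:start + window], m)
    print(f"t = {start / fs:6.0f}-{(start + window) / fs:6.0f} s   "
          f"sigma_Allan(tau = {m / fs:.1f} s) = {np.sqrt(avar):.4f}")
```

In the paper this time-resolved noise characterization is what feeds the KF system-noise model in place of a fixed white-noise assumption.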
Directionality volatility in electroencephalogram time series
NASA Astrophysics Data System (ADS)
Mansor, Mahayaudin M.; Green, David A.; Metcalfe, Andrew V.
2016-06-01
We compare time series of electroencephalograms (EEGs) from healthy volunteers with EEGs from subjects diagnosed with epilepsy. The EEG time series from the healthy group are recorded during awake state with their eyes open and eyes closed, and the records from subjects with epilepsy are taken from three different recording regions of pre-surgical diagnosis: hippocampal, epileptogenic and seizure zone. The comparisons for these 5 categories are in terms of deviations from linear time series models with constant variance Gaussian white noise error inputs. One feature investigated is directionality, and how this can be modelled by either non-linear threshold autoregressive models or non-Gaussian errors. A second feature is volatility, which is modelled by Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) processes. Other features include the proportion of variability accounted for by time series models, and the skewness and the kurtosis of the residuals. The results suggest these comparisons may have diagnostic potential for epilepsy and provide early warning of seizures.
Inferring time derivatives including cell growth rates using Gaussian processes
NASA Astrophysics Data System (ADS)
Swain, Peter S.; Stevenson, Keiran; Leary, Allen; Montano-Gutierrez, Luis F.; Clark, Ivan B. N.; Vogel, Jackie; Pilizota, Teuta
2016-12-01
Often the time derivative of a measured variable is of as much interest as the variable itself. For a growing population of biological cells, for example, the population's growth rate is typically more important than its size. Here we introduce a non-parametric method to infer first and second time derivatives as a function of time from time-series data. Our approach is based on Gaussian processes and applies to a wide range of data. In tests, the method is at least as accurate as others, but has several advantages: it estimates errors both in the inference and in any summary statistics, such as lag times, and allows interpolation with the corresponding error estimation. As illustrations, we infer growth rates of microbial cells, the rate of assembly of an amyloid fibril and both the speed and acceleration of two separating spindle pole bodies. Our algorithm should thus be broadly applicable.
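A numpy-only sketch of the derivative-inference idea follows (it is not the authors' software): with an RBF kernel the cross-covariance can be differentiated in closed form, so the posterior mean of the first derivative is obtained from the same weight vector as the fitted curve. The hyperparameters are fixed by hand here rather than optimized, and the data are a toy growth curve.

```python
# GP regression with analytic differentiation of the posterior mean (RBF kernel).
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.0, 10.0, 60)
y = 2.0 / (1.0 + np.exp(-(t - 5.0))) + rng.normal(0.0, 0.03, t.size)   # toy growth curve

ell, sig_f, sig_n = 1.5, 1.0, 0.03               # kernel hyperparameters (fixed here)
def k(a, b):
    return sig_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

K = k(t, t) + sig_n**2 * np.eye(t.size)
alpha = np.linalg.solve(K, y)                    # GP weights: K^{-1} y

t_star = np.linspace(0.0, 10.0, 200)
mean = k(t_star, t) @ alpha                      # posterior mean of the curve
# derivative of the RBF cross-covariance w.r.t. t_star, applied to the same weights
dk = -(t_star[:, None] - t[None, :]) / ell**2 * k(t_star, t)
dmean = dk @ alpha                               # posterior mean of the first derivative

i = np.argmax(dmean)
print(f"maximum inferred growth rate {dmean[i]:.3f} at t = {t_star[i]:.2f}")
```

For the logistic toy data the true maximum growth rate is 0.5 at t = 5, which the inferred derivative should approximately recover.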
Bayes classification of terrain cover using normalized polarimetric data
NASA Technical Reports Server (NTRS)
Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.
1988-01-01
The normalized polarimetric classifier (NPC) which uses only the relative magnitudes and phases of the polarimetric data is proposed for discrimination of terrain elements. The probability density functions (PDFs) of polarimetric data are assumed to have a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that for polarimetric data with an arbitrary PDF, the distance measure of NPC will be independent of the normalization function selected even when the classifier is mistrained. A complex Gaussian distribution is assumed for the polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of NPCs is shown to be independent of the normalization function.
2012-08-01
An implication of the compactness of the Hessian is that, for small data noise and model error, the discrete Hessian can be approximated by a low-rank matrix. This in turn enables fast solution of an appropriately ... probability distribution is given by the inverse of the Hessian of the negative log likelihood function. For Gaussian data noise and model error, this ...
Kasuga, Shoko; Kurata, Makiko; Liu, Meigen; Ushiba, Junichi
2015-05-01
Humans' sophisticated motor learning system paradoxically interferes with motor performance when visual information is mirror-reversed (MR), because normal movement error correction further aggravates the error. This error-increasing mechanism makes performing even a simple reaching task difficult, but is overcome by alterations in the error correction rule during the trials. To isolate factors that trigger learners to change the error correction rule, we manipulated the gain of visual angular errors when participants made arm-reaching movements with mirror-reversed visual feedback, and compared the rule alteration timing between groups with normal or reduced gain. Trial-by-trial changes in the visual angular error were tracked to explain the timing of the change in the error correction rule. Under both gain conditions, visual angular errors increased under the MR transformation, and suddenly decreased after 3-5 trials of increase. The increase became degressive at different amplitudes in the two groups, nearly proportional to the visual gain. The findings suggest that the alteration of the error-correction rule is not dependent on the amplitude of visual angular errors, and is possibly determined by the number of trials over which the errors increased or by statistical properties of the environment. The current results encourage future intensive studies focusing on the exact rule-change mechanism. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Asymmetric soft-error resistant memory
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor); Perlman, Marvin (Inventor)
1991-01-01
A memory system is provided, of the type that includes an error-correcting circuit that detects and corrects errors, and that more efficiently utilizes the capacity of a memory formed of groups of binary cells whose states can be inadvertently switched by ionizing radiation. Each memory cell has an asymmetric geometry, so that ionizing radiation causes a significantly greater probability of errors in one state than in the opposite state (e.g., an erroneous switch from '1' to '0' is far more likely than a switch from '0' to '1'). An asymmetric error correcting coding circuit can be used with the asymmetric memory cells, which requires fewer bits than an efficient symmetric error correcting code.
Piñero, David P; Camps, Vicente J; Mateo, Verónica; Ruiz-Fortes, Pedro
2012-08-01
To validate clinically in a normal healthy population an algorithm to correct the error in the keratometric estimation of corneal power based on the use of a variable keratometric index of refraction (n(k)). Medimar International Hospital (Oftalmar) and University of Alicante, Alicante, Spain. Case series. Corneal power was measured with a Scheimpflug photography-based system (Pentacam software version 1.14r01) in healthy eyes with no previous ocular surgery. In all cases, keratometric corneal power was also estimated using an adjusted value of n(k) that is dependent on the anterior corneal radius (r(1c)) as follows: n(kadj) = -0.0064286 r(1c) +1.37688. Agreement between the Gaussian (P(c)(Gauss)) and adjusted keratometric (P(kadj)) corneal power values was evaluated. The study evaluated 92 eyes (92 patients; age range 15 to 64 years). The mean difference between P(c)(Gauss) and P(kadj) was -0.02 diopter (D) ± 0.22 (SD) (P=.43). A very strong, statistically significant correlation was found between both corneal powers (r = .994, P<.01). The range of agreement between P(c)(Gauss) and P(kadj) was 0.44 D, with limits of agreement of -0.46 and +0.42 D. In addition, a very strong, statistically significant correlation of the difference between P(c)(Gauss) and P(kadj) and the posterior corneal radius was found (r = 0.96, P<.01). The imprecision in the calculation of corneal power using keratometric estimation can be minimized in clinical practice by using a variable keratometric index that depends on the radius of the anterior corneal surface. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
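The adjusted-index calculation quoted above can be sketched directly from the formula in the abstract, assuming the anterior radius enters the regression in millimetres (which yields physically plausible index values) and using the standard keratometric relation P = (n_k - 1)/r1c with the radius in metres. The radii below are illustrative, not the study's data, and the classical 1.3375 index is included only for comparison.

```python
# Adjusted keratometric power using the variable index quoted in the abstract.
def keratometric_power(r1c_mm, n_k):
    return (n_k - 1.0) / (r1c_mm / 1000.0)        # dioptres

for r1c_mm in (7.2, 7.8, 8.4):                    # illustrative anterior corneal radii (mm)
    n_adj = -0.0064286 * r1c_mm + 1.37688         # adjusted keratometric index n_kadj
    p_adj = keratometric_power(r1c_mm, n_adj)
    p_classic = keratometric_power(r1c_mm, 1.3375)   # classical fixed index, for comparison
    print(f"r1c={r1c_mm:.1f} mm  n_kadj={n_adj:.4f}  "
          f"P_kadj={p_adj:.2f} D  P(1.3375)={p_classic:.2f} D")
```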
Goos-Hänchen and Imbert-Fedorov shifts for astigmatic Gaussian beams
NASA Astrophysics Data System (ADS)
Ornigotti, Marco; Aiello, Andrea
2015-06-01
In this work we investigate the role of the beam astigmatism in the Goos-Hänchen and Imbert-Fedorov shift. As a case study, we consider a Gaussian beam focused by an astigmatic lens and we calculate explicitly the corrections to the standard formulas for beam shifts due to the astigmatism induced by the lens. Our results show that the different focusing in the longitudinal and transverse direction introduced by an astigmatic lens may enhance the angular part of the shift.
Series approximation to probability densities
NASA Astrophysics Data System (ADS)
Cohen, L.
2018-04-01
One of the historical and fundamental uses of the Edgeworth and Gram-Charlier series is to "correct" a Gaussian density when it is determined that the probability density under consideration has moments that do not correspond to the Gaussian [5, 6]. There is a fundamental difficulty with these methods in that if the series are truncated, then the resulting approximate density is not manifestly positive. The aim of this paper is to attempt to expand a probability density so that if it is truncated it will still be manifestly positive.
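For reference, the truncated Gram-Charlier A series that this discussion concerns has the standard form below (probabilist's Hermite polynomials He_n, skewness gamma_1, excess kurtosis gamma_2). For some moment combinations the bracket becomes negative in the tails, which is exactly the non-positivity problem the paper addresses.

```latex
% Gram-Charlier A series truncated after the skewness and excess-kurtosis terms.
f(x) \;\approx\; \phi(x)\left[\,1
   \;+\; \frac{\gamma_1}{3!}\,\mathrm{He}_3(x)
   \;+\; \frac{\gamma_2}{4!}\,\mathrm{He}_4(x)\right],
\qquad \phi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}.
```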
Parameterization of cloud lidar backscattering profiles by means of asymmetrical Gaussians
NASA Astrophysics Data System (ADS)
del Guasta, Massimo; Morandi, Marco; Stefanutti, Leopoldo
1995-06-01
A fitting procedure for cloud lidar data processing is shown that is based on the computation of the first three moments of the vertical-backscattering (or -extinction) profile. Single-peak clouds or single cloud layers are approximated to asymmetrical Gaussians. The algorithm is particularly stable with respect to noise and processing errors, and it is much faster than the equivalent least-squares approach. Multilayer clouds can easily be treated as a sum of single asymmetrical Gaussian peaks. The method is suitable for cloud-shape parametrization in noisy lidar signatures (like those expected from satellite lidars). It also permits an improvement of cloud radiative-property computations that are based on huge lidar data sets for which storage and careful examination of single lidar profiles can't be carried out.
Evaluate error correction ability of magnetorheological finishing by smoothing spectral function
NASA Astrophysics Data System (ADS)
Wang, Jia; Fan, Bin; Wan, Yongjian; Shi, Chunyan; Zhuo, Bin
2014-08-01
Power Spectral Density (PSD) is well established in optics design and manufacturing as a characterization of mid-to-high spatial frequency (MHSF) errors. The Smoothing Spectral Function (SSF) is a newly proposed parameter, based on the PSD, for evaluating the error correction ability of computer controlled optical surfacing (CCOS) technologies. As a typical deterministic, sub-aperture finishing technology based on CCOS, magnetorheological finishing (MRF) inevitably introduces MHSF errors. SSF is employed here to study the ability of the MRF process to correct errors at different spatial frequencies. The surface figures and PSD curves of work-pieces machined by MRF are presented. By calculating the SSF curve, the correction ability of MRF for errors at different spatial frequencies is expressed as a normalized numerical value.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yanai, Takeshi; Fann, George I.; Beylkin, Gregory
We present a fully numerical multiresolution analysis (MRA) approach to time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) within the Tamm–Dancoff (TD) approximation. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied to calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales, ranging from short-range valence excitations to long-range Rydberg-type ones, are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations, the excited states are correctly bound.
Statistical testing and power analysis for brain-wide association study.
Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng
2018-04-05
The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, the multiple correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. Therefore, we herein report the development of a rigorous statistical framework for connexel-wise significance testing based on the Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis testings using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and false discovery rate (FDR), it can reduce false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need of non-parametric permutation to correct for multiple comparison, thus, it can efficiently tackle large datasets with high resolution fMRI images. The utility of our method is shown in a case-control study. Our approach can identify altered functional connectivities in a major depression disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.
Dissipative quantum error correction and application to quantum sensing with trapped ions.
Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A
2017-11-28
Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.
Design and implementation of an optical Gaussian noise generator
NASA Astrophysics Data System (ADS)
Zão, Leonardo; Loss, Gustavo; Coelho, Rosângela
2009-08-01
A design of a fast and accurate optical Gaussian noise generator is proposed and demonstrated. The noise sample generation is based on the Box-Muller algorithm. The functions implementation was performed on a high-speed Altera Stratix EP1S25 field-programmable gate array (FPGA) development kit. It enabled the generation of 150 million 16-bit noise samples per second. The Gaussian noise generator required only 7.4% of the FPGA logic elements, 1.2% of the RAM memory, 0.04% of the ROM memory, and a laser source. The optical pulses were generated by a laser source externally modulated by the data bit samples using the frequency-shift keying technique. The accuracy of the noise samples was evaluated for different sequences size and confidence intervals. The noise sample pattern was validated by the Bhattacharyya distance (Bd) and the autocorrelation function. The results showed that the proposed design of the optical Gaussian noise generator is very promising to evaluate the performance of optical communications channels with very low bit-error-rate values.
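A plain floating-point sketch of the Box-Muller step named above follows; the paper's pipeline is a 16-bit fixed-point FPGA implementation, which this is not. Two uniform variates are mapped to two independent standard Gaussian samples, whose moments can be checked directly.

```python
# Box-Muller transform: two uniforms -> two independent standard Gaussian samples.
import numpy as np

rng = np.random.default_rng(7)

def box_muller(n):
    u1 = 1.0 - rng.random(n)               # in (0, 1], avoids log(0)
    u2 = rng.random(n)
    r = np.sqrt(-2.0 * np.log(u1))         # Rayleigh-distributed radius
    z0 = r * np.cos(2.0 * np.pi * u2)
    z1 = r * np.sin(2.0 * np.pi * u2)
    return z0, z1

z0, z1 = box_muller(500_000)
z = np.concatenate([z0, z1])
print(f"mean={z.mean():+.4f}  std={z.std():.4f}  "
      f"kurtosis={np.mean(((z - z.mean()) / z.std()) ** 4):.3f} (Gaussian: 3)")
```

In the hardware version described in the paper, the log, square-root, and trigonometric evaluations are replaced by fixed-point table lookups so that the FPGA can sustain the quoted sample rate.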
ERIC Educational Resources Information Center
Munoz, Carlos A.
2011-01-01
Very often, second language (L2) writers commit the same type of errors repeatedly, despite being corrected directly or indirectly by teachers or peers (Semke, 1984; Truscott, 1996). Apart from discouraging teachers from providing error correction feedback, this also makes them hesitant as to what form of corrective feedback to adopt. Ferris…
Continuous quantum error correction for non-Markovian decoherence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oreshkov, Ognyan; Brun, Todd A.; Communication Sciences Institute, University of Southern California, Los Angeles, California 90089
2007-08-15
We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.
Contingent negative variation (CNV) associated with sensorimotor timing error correction.
Jang, Joonyong; Jones, Myles; Milne, Elizabeth; Wilson, Daniel; Lee, Kwang-Hyuk
2016-02-15
Detection and subsequent correction of sensorimotor timing errors are fundamental to adaptive behavior. Using scalp-recorded event-related potentials (ERPs), we sought to find ERP components that are predictive of error correction performance during rhythmic movements. Healthy right-handed participants were asked to synchronize their finger taps to a regular tone sequence (every 600 ms) while EEG data were continuously recorded. Data from 15 participants were analyzed. Occasional irregularities were built into stimulus presentation timing: 90 ms before (advances: negative shift) or after (delays: positive shift) the expected time point. A tapping condition alternated with a listening condition in which an identical stimulus sequence was presented but participants did not tap. Behavioral error correction was observed immediately following a shift, with a degree of over-correction for positive shifts. Our stimulus-locked ERP data analysis revealed (1) increased auditory N1 amplitude for the positive shift condition and decreased auditory N1 modulation for the negative shift condition, and (2) a second enhanced negativity (N2) in the tapping positive condition compared with the tapping negative condition. In response-locked epochs, we observed a CNV (contingent negative variation)-like negativity with earlier latency in the tapping negative condition than in the tapping positive condition. This CNV-like negativity peaked at around the onset of the subsequent tap: the earlier the peak, the better the error correction performance for negative shifts, whereas the later the peak, the better the performance for positive shifts. This study showed that the CNV-like negativity was associated with error correction performance during our sensorimotor synchronization task. Auditory N1 and N2 were differentially involved in negative versus positive shifts; however, we did not find evidence for their involvement in behavioral error correction. Overall, our study provides a basis from which further research on the role of the CNV in perceptual and motor timing can be developed. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Su, Yunquan; Yao, Xuefeng; Wang, Shen; Ma, Yinji
2017-03-01
An effective correction model is proposed to eliminate the refraction error effect caused by an optical window of a furnace in digital image correlation (DIC) deformation measurement under high-temperature environment. First, a theoretical correction model with the corresponding error correction factor is established to eliminate the refraction error induced by double-deck optical glass in DIC deformation measurement. Second, a high-temperature DIC experiment using a chromium-nickel austenite stainless steel specimen is performed to verify the effectiveness of the correction model by the correlation calculation results under two different conditions (with and without the optical glass). Finally, both the full-field and the divisional displacement results with refraction influence are corrected by the theoretical model and then compared to the displacement results extracted from the images without refraction influence. The experimental results demonstrate that the proposed theoretical correction model can effectively improve the measurement accuracy of DIC method by decreasing the refraction errors from measured full-field displacements under high-temperature environment.
NASA Technical Reports Server (NTRS)
Ni, Jianjun David
2011-01-01
This presentation briefly discusses a research effort on mitigation techniques for pulsed radio frequency interference (RFI) on a Low-Density Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to a space vehicle which might suffer severe degradation due to pulsed RFI sources such as large radars. The LDPC code is one of the modern forward-error-correction (FEC) codes whose decoding performance approaches the Shannon limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS), which has been chosen for some spacecraft designs. Even though this code is designed as a powerful FEC code for the additive white Gaussian noise channel, simulation data and test results show that the performance of this LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis (through modeling and simulation) has been conducted to evaluate the impact of the pulsed RFI, and a few implementation techniques have been investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance in terms of codeword error rate (CWER) under pulsed RFI can be improved by up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor of LDPC decoding performance appears around CWER = 1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown, and further investigation is necessary.
Two-dimensional computer simulation of EMVJ and grating solar cells under AMO illumination
NASA Technical Reports Server (NTRS)
Gray, J. L.; Schwartz, R. J.
1984-01-01
A computer program, SCAP2D (Solar Cell Analysis Program in 2-Dimensions), is used to evaluate the Etched Multiple Vertical Junction (EMVJ) and grating solar cells. The aim is to demonstrate how SCAP2D can be used to evaluate cell designs. The cell designs studied are by no means optimal designs. The SCAP2D program solves the three coupled, nonlinear partial differential equations, Poisson's Equation and the hole and electron continuity equations, simultaneously in two-dimensions using finite differences to discretize the equations and Newton's Method to linearize them. The variables solved for are the electrostatic potential and the hole and electron concentrations. Each linear system of equations is solved directly by Gaussian Elimination. Convergence of the Newton Iteration is assumed when the largest correction to the electrostatic potential or hole or electron quasi-potential is less than some predetermined error. A typical problem involves 2000 nodes with a Jacobi matrix of order 6000 and a bandwidth of 243.
Extremal optimization for Sherrington-Kirkpatrick spin glasses
NASA Astrophysics Data System (ADS)
Boettcher, S.
2005-08-01
Extremal Optimization (EO), a new local search heuristic, is used to approximate ground states of the mean-field spin glass model introduced by Sherrington and Kirkpatrick. The implementation extends the applicability of EO to systems with highly connected variables. Approximate ground states of sufficient accuracy and with statistical significance are obtained for systems with more than N=1000 variables using ±J bonds. The data reproduce the well-known Parisi solution for the average ground state energy of the model to about 0.01%, providing a high degree of confidence in the heuristic. The results support, to within 1% accuracy, the rational values ω=2/3 for the finite-size correction exponent and ρ=3/4 for the fluctuation exponent of the ground state energies, neither of which has yet been obtained analytically. The probability density function of ground state energies is highly skewed and identical within numerical error to the one found for Gaussian bonds. However, comparison with infinite-range models of finite connectivity shows that the skewness is connectivity-dependent.
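For orientation only, a generic tau-EO sweep for an SK-type Hamiltonian can be sketched as below; this is not the authors' implementation, and the system size, tau value, couplings and iteration count are illustrative assumptions:

```python
import numpy as np

def tau_eo_sk(N=64, tau=1.4, steps=20_000, seed=0):
    """Generic tau-EO search for low-energy states of an SK model with +/-J bonds."""
    rng = np.random.default_rng(seed)
    J = rng.choice([-1.0, 1.0], size=(N, N))
    J = np.triu(J, 1)
    J = J + J.T                                   # symmetric couplings, zero diagonal
    s = rng.choice([-1.0, 1.0], size=N)

    def energy(spins):
        return -0.5 * spins @ J @ spins / np.sqrt(N)

    best_e, best_s = energy(s), s.copy()
    rank_prob = np.arange(1, N + 1, dtype=float) ** (-tau)
    rank_prob /= rank_prob.sum()                  # power-law selection over fitness ranks
    for _ in range(steps):
        local = s * (J @ s)                       # per-spin fitness (local satisfaction)
        order = np.argsort(local)                 # worst (most frustrated) spin first
        k = rng.choice(N, p=rank_prob)
        s[order[k]] *= -1.0                       # unconditional flip of the selected spin
        e = energy(s)
        if e < best_e:
            best_e, best_s = e, s.copy()
    return best_e / N, best_s

e_per_spin, _ = tau_eo_sk()
print("best energy per spin found:", e_per_spin)  # Parisi value is about -0.763 for large N
```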
Zhan, Yu; Liu, Changsheng; Zhang, Fengpeng; Qiu, Zhaoguo
2016-07-01
The laser ultrasonic generation of Rayleigh surface waves and longitudinal waves in an elastic plate is studied by experiment and the finite element method. In order to eliminate the measurement error and the time delay of the experimental system, a linear fitting of the experimental data is applied. The finite element analysis software ABAQUS is used to simulate the propagation of the Rayleigh surface wave and longitudinal wave caused by laser excitation on the surface of a sheet-metal sample. An equivalent load method is proposed and applied: the pulsed laser is represented as a surface load with a Gaussian profile in both the time and space domains, and the relationship between the physical parameters of the laser and the load is established through a correction factor. The numerical solution is in good agreement with the experimental results. Simple and effective numerical and experimental methods for laser ultrasonic measurement of the elastic constants are demonstrated. Copyright © 2016. Published by Elsevier B.V.
On the performance of energy detection-based CR with SC diversity over IG channel
NASA Astrophysics Data System (ADS)
Verma, Pappu Kumar; Soni, Sanjay Kumar; Jain, Priyanka
2017-12-01
Cognitive radio (CR) is a viable 5G technology to address the scarcity of spectrum. Energy detection-based sensing is known to be the simplest method as far as hardware complexity is concerned. In this paper, the performance of the energy detection-based spectrum sensing technique in CR networks over an inverse Gaussian channel with the selection combining diversity technique is analysed. More specifically, accurate analytical expressions for the average detection probability under different detection scenarios, such as a single channel (no diversity) and diversity reception, are derived and evaluated. Further, the detection threshold parameter is optimised by minimising the probability of error over several diversity branches. The results clearly show a significant improvement in the probability of detection when the optimised threshold parameter is applied. The impact of shadowing parameters on the performance of the energy detector is studied in terms of the complementary receiver operating characteristic curve. To verify the correctness of our analysis, the derived analytical expressions are corroborated via exact results and Monte Carlo simulations.
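A hedged illustration of the kind of Monte Carlo check mentioned at the end of the record is sketched below; it uses a plain AWGN model with selection combining rather than the paper's inverse Gaussian shadowing channel, and all parameter values are assumptions:

```python
import numpy as np

def energy_detector_pd(snr_db=0.0, n_samples=32, n_branches=2, pfa=0.01,
                       trials=50_000, seed=1):
    """Monte Carlo detection probability of an energy detector with selection
    combining (SC): each branch sums |y|^2 over n_samples, SC keeps the maximum."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    shape = (trials, n_branches, n_samples)
    noise = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
    sig = np.sqrt(snr) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
    e_h0 = (np.abs(noise) ** 2).sum(axis=2).max(axis=1)        # noise-only branch energies
    e_h1 = (np.abs(noise + sig) ** 2).sum(axis=2).max(axis=1)  # signal-plus-noise energies
    threshold = np.quantile(e_h0, 1.0 - pfa)                   # fix the false-alarm rate under H0
    return float(np.mean(e_h1 > threshold))

print("Pd at 0 dB SNR, Pfa = 0.01:", energy_detector_pd())
```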
NASA Astrophysics Data System (ADS)
He, Xiaojun; Ma, Haotong; Luo, Chuanxin
2016-10-01
An optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system; the difficulty lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with current methods, the SPGD approach avoids explicitly detecting the co-phase error. This paper analyzes the influence of piston error and tilt error on image quality for a double-aperture imaging system, introduces the basic principle of the SPGD algorithm, and discusses the influence of the algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced. An adaptive gain coefficient can solve this problem appropriately. These results can provide a theoretical reference for the co-phase error correction of multi-aperture imaging systems.
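A minimal sketch of the SPGD update rule (metric ascent with a two-sided Bernoulli perturbation) is given below; the toy "Strehl-like" metric, gain and disturbance amplitude are illustrative assumptions, not values from the paper:

```python
import numpy as np

def spgd(metric, u0, gain=0.5, amp=0.05, iters=500, seed=0):
    """Stochastic parallel gradient descent (ascent on the metric), illustrative sketch."""
    rng = np.random.default_rng(seed)
    u = np.array(u0, dtype=float)
    history = []
    for _ in range(iters):
        du = amp * rng.choice([-1.0, 1.0], size=u.shape)   # Bernoulli perturbation of all channels
        dJ = metric(u + du) - metric(u - du)               # two-sided metric difference
        u += gain * dJ * du                                # parallel update of all channels
        history.append(metric(u))
    return u, history

# toy "Strehl-like" metric peaked at the true piston/tilt values (hypothetical)
true = np.array([0.3, -0.2, 0.1])
metric = lambda u: np.exp(-np.sum((u - true) ** 2))
u_opt, hist = spgd(metric, np.zeros(3))
print(u_opt, hist[-1])   # converges toward `true`, metric toward 1
```

Increasing `gain` or `amp` in this sketch speeds up convergence at the cost of noisier, less stable iterates, mirroring the trade-off described above.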
Non-Gaussian microwave background fluctuations from nonlinear gravitational effects
NASA Technical Reports Server (NTRS)
Salopek, D. S.; Kunstatter, G. (Editor)
1991-01-01
Whether the statistics of primordial fluctuations for structure formation are Gaussian or otherwise may be determined if the Cosmic Background Explorer (COBE) Satellite makes a detection of the cosmic microwave-background temperature anisotropy δT_CMB/T_CMB. Non-Gaussian fluctuations may be generated in the chaotic inflationary model if two scalar fields interact nonlinearly with gravity. Theoretical contour maps are calculated for the resulting Sachs-Wolfe temperature fluctuations at large angular scales (greater than 3 degrees). In the long-wavelength approximation, one can confidently determine the nonlinear evolution of quantum noise with gravity during the inflationary epoch because: (1) different spatial points are no longer in causal contact; and (2) quantum gravity corrections are typically small; it is sufficient to model the system using classical random fields. If the potential for two scalar fields V(φ1, φ2) possesses a sharp feature, then non-Gaussian fluctuations may arise. An explicit model is given where cold spots in δT_CMB/T_CMB maps are suppressed as compared to the Gaussian case. The fluctuations are essentially scale-invariant.
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
NASA Astrophysics Data System (ADS)
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O (100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
NASA Astrophysics Data System (ADS)
Thomas, Christian L.
2006-06-01
Analysis and results (Chapters 2-5) of the full 7-year Macho Project dataset toward the Galactic bulge are presented. A total of 450 high-quality, relatively large signal-to-noise ratio events are found, including several events exhibiting exotic effects, and lensing events on possible Sagittarius dwarf galaxy stars. We examine the problem of blending in our sample and conclude that the subset of red clump giants is minimally blended. Using 42 red clump giant events near the Galactic center we calculate the optical depth toward the Galactic bulge to be t = [Special characters omitted.] × 10^-6 at (l, b) = ([Special characters omitted.]), with a gradient of (1.06 ± 0.71) × 10^-6 deg^-1 in latitude and (0.29 ± 0.43) × 10^-6 deg^-1 in longitude, bringing measurements into consistency with the models for the first time. In Chapter 6 we reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g. Wozniak & Paczynski) that this is a difficult proposition because of the degeneracy of the blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (peak region and wings) of high-magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate the microlensing optical depth. In Chapter 7 we present work in progress on the possibility of correcting standard candle luminosities for the magnification due to weak lensing. We consider the importance of lenses in different mass ranges and look at the contribution from lenses that could not be observed. We conclude that it may be possible to perform this correction with relatively high precision (1-2%) and discuss possible sources of error and methods of improving our model.
2016-01-01
Reports an error in "A violation of the conditional independence assumption in the two-high-threshold model of recognition memory" by Tina Chen, Jeffrey J. Starns and Caren M. Rotello (Journal of Experimental Psychology: Learning, Memory, and Cognition, 2015[Jul], Vol 41[4], 1215-1222). In the article, Chen et al. compared three models: a continuous signal detection model (SDT), a standard two-high-threshold discrete-state model in which detect states always led to correct responses (2HT), and a full-mapping version of the 2HT model in which detect states could lead to either correct or incorrect responses. After publication, Rani Moran (personal communication, April 21, 2015) identified two errors that impact the reported fit statistics for the Bayesian information criterion (BIC) metric of all models as well as the Akaike information criterion (AIC) results for the full-mapping model. The errors are described in the erratum. (The following abstract of the original article appeared in record 2014-56216-001.) The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are independent of the probability that an item yields a particular state (e.g., both strong and weak items that are detected as old have the same probability of producing a highest-confidence "old" response). We tested this conditional independence assumption by presenting nouns 1, 2, or 4 times. To maximize the strength of some items, "superstrong" items were repeated 4 times and encoded in conjunction with pleasantness, imageability, anagram, and survival processing tasks. The 2HT model failed to simultaneously capture the response rate data for all item classes, demonstrating that the data violated the conditional independence assumption. In contrast, a Gaussian signal detection model, which posits that the level of confidence that an item is "old" or "new" is a function of its continuous strength value, provided a good account of the data. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Optimum threshold selection method of centroid computation for Gaussian spot
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2015-10-01
Centroid computation of a Gaussian spot is often conducted to get the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of Gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photonic noise from the environment reduce its accuracy. In order to improve the accuracy, thresholding is unavoidable before centroid computation, and the optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different Signal-to-Noise Ratio (SNR) conditions. Two optimum threshold selection methods are introduced: TmCoG, which uses m% of the maximum intensity of the spot as the threshold, and TkCoG, which uses μn + κσn as the threshold, where μn and σn are the mean value and standard deviation of the background noise. Firstly, their impact on the detection error under various SNR conditions is simulated to find how to decide the value of k or m. Then, a comparison between them is made. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold, and its detection error is also lower.
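For illustration, a thresholded center-of-gravity estimator with both threshold choices described above (TmCoG and TkCoG) can be sketched as follows; the spot size, noise level, m and k values are assumptions:

```python
import numpy as np

def center_of_gravity(img, threshold=0.0):
    """Thresholded center-of-gravity estimate of a spot position (pixel coordinates)."""
    w = np.clip(img - threshold, 0.0, None)      # subtract threshold, clip negatives to zero
    total = w.sum()
    if total == 0:
        raise ValueError("no signal above threshold")
    y, x = np.indices(img.shape)
    return (x * w).sum() / total, (y * w).sum() / total

# synthetic Gaussian spot plus noise (illustration only)
rng = np.random.default_rng(0)
y, x = np.indices((64, 64))
spot = np.exp(-((x - 30.4) ** 2 + (y - 25.7) ** 2) / (2 * 3.0 ** 2))
img = spot + 0.02 * rng.standard_normal(spot.shape)

t_m = 0.2 * img.max()                # TmCoG: threshold at m% of the peak (m = 20 assumed)
bg = img[:10, :10]                   # background patch for noise statistics
t_k = bg.mean() + 3.0 * bg.std()     # TkCoG: mean + k * std of background (k = 3 assumed)
print(center_of_gravity(img, t_m), center_of_gravity(img, t_k))
```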
Adaptive Quadrature Detection for Multicarrier Continuous-Variable Quantum Key Distribution
NASA Astrophysics Data System (ADS)
Gyongyosi, Laszlo; Imre, Sandor
2015-03-01
We propose adaptive quadrature detection for multicarrier continuous-variable quantum key distribution (CVQKD). A multicarrier CVQKD scheme uses Gaussian subcarrier continuous variables for conveying the information and Gaussian sub-channels for the transmission. The proposed multicarrier detection scheme dynamically adapts to the sub-channel conditions using statistics provided by our sophisticated sub-channel estimation procedure. The sub-channel estimation phase determines the transmittance coefficients of the sub-channels, and this information is used further in the adaptive quadrature decoding process. We define a technique called subcarrier spreading to estimate the transmittance conditions of the sub-channels with a theoretical error minimum in the presence of Gaussian noise. We introduce the terms of single and collective adaptive quadrature detection. We also extend the results to a multiuser multicarrier CVQKD scenario. We prove the achievable error probabilities and signal-to-noise ratios, and quantify the attributes of the framework. The adaptive detection scheme makes it possible to utilize the extra resources of multicarrier CVQKD and to maximize the amount of transmittable information. This work was partially supported by the GOP-1.1.1-11-2012-0092 (Secure quantum key distribution between two units on optical fiber network) project sponsored by the EU and European Structural Fund, and by the COST Action MP1006.
NASA Astrophysics Data System (ADS)
Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim
2014-11-01
In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe Efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method of the Box-Cox transformation (BC) parameter is developed to improve the elimination of the heteroscedasticity of model residuals; and (3) three likelihood functions, NSE, the Generalized Error Distribution with BC (BC-GED) and the Skew Generalized Error Distribution with BC (BC-SGED), are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. Performances of the calibrated models are compared using the observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly affects the calibrated parameters and the simulated results of high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, under which the probability of large errors is low but small errors around zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness of the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
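The equivalence claimed in step (1) can be illustrated with a short sketch: with independent, identically distributed Gaussian residuals and the error variance profiled at its maximum-likelihood value, both NSE and the log-likelihood are monotone functions of the sum of squared errors, so they rank candidate parameter sets identically for a fixed observation record. The synthetic series below is an assumption for demonstration only:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def gaussian_iid_loglik(obs, sim):
    """Profile log-likelihood under Gaussian i.i.d. residuals (variance at its MLE)."""
    res = np.asarray(obs, float) - np.asarray(sim, float)
    n = res.size
    sigma2 = np.mean(res ** 2)
    return -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)

rng = np.random.default_rng(3)
obs = 10.0 + np.cumsum(rng.standard_normal(200))          # synthetic "observations"
for scale in (0.5, 1.0, 2.0):                             # progressively worse "simulations"
    sim = obs + scale * rng.standard_normal(200)
    print(scale, round(nse(obs, sim), 3), round(gaussian_iid_loglik(obs, sim), 1))
```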
A Variational Approach to Simultaneous Image Segmentation and Bias Correction.
Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong
2015-08-01
This paper presents a novel variational approach for simultaneous estimation of bias field and segmentation of images with intensity inhomogeneity. We model intensity of inhomogeneous objects to be Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, which combines the bias field, the membership function of the object region, and the constant approximating the true signal from its corresponding object. The energy functional is then extended to the whole image domain by the Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which the image segmentation and bias field correction are simultaneously achieved. Furthermore, the smoothness of the obtained optimal bias field is ensured by the normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm to other state-of-the-art representative methods.
Supporting Dictation Speech Recognition Error Correction: The Impact of External Information
ERIC Educational Resources Information Center
Shi, Yongmei; Zhou, Lina
2011-01-01
Although speech recognition technology has made remarkable progress, its wide adoption is still restricted by notable effort made and frustration experienced by users while correcting speech recognition errors. One of the promising ways to improve error correction is by providing user support. Although support mechanisms have been proposed for…
A Hybrid Approach for Correcting Grammatical Errors
ERIC Educational Resources Information Center
Lee, Kiyoung; Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun
2015-01-01
This paper presents a hybrid approach for correcting grammatical errors in the sentences uttered by Korean learners of English. The error correction system plays an important role in GenieTutor, which is a dialogue-based English learning system designed to teach English to Korean students. During the talk with GenieTutor, grammatical error…
A Comparison of Error-Correction Procedures on Skill Acquisition during Discrete-Trial Instruction
ERIC Educational Resources Information Center
Carroll, Regina A.; Joachim, Brad T.; St. Peter, Claire C.; Robinson, Nicole
2015-01-01
Previous research supports the use of a variety of error-correction procedures to facilitate skill acquisition during discrete-trial instruction. We used an adapted alternating treatments design to compare the effects of 4 commonly used error-correction procedures on skill acquisition for 2 children with attention deficit hyperactivity disorder…
The Effect of Error Correction Feedback on the Collocation Competence of Iranian EFL Learners
ERIC Educational Resources Information Center
Jafarpour, Ali Akbar; Sharifi, Abolghasem
2012-01-01
Collocations are one of the most important elements in language proficiency but the effect of error correction feedback of collocations has not been thoroughly examined. Some researchers report the usefulness and importance of error correction (Hyland, 1990; Bartram & Walton, 1991; Ferris, 1999; Chandler, 2003), while others showed that error…
A Support System for Error Correction Questions in Programming Education
ERIC Educational Resources Information Center
Hachisu, Yoshinari; Yoshida, Atsushi
2014-01-01
For supporting the education of debugging skills, we propose a system for generating error correction questions of programs and checking the correctness. The system generates HTML files for answering questions and CGI programs for checking answers. Learners read and answer questions on Web browsers. For management of error injection, we have…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-02
..., Medicare--Hospital Insurance; and Program No. 93.774, Medicare-- Supplementary Medical Insurance Program.... SUMMARY: This document corrects a typographical error that appeared in the notice published in the Federal... typographical error that is identified and corrected in the Correction of Errors section below. II. Summary of...
Term Cancellations in Computing Floating-Point Gröbner Bases
NASA Astrophysics Data System (ADS)
Sasaki, Tateaki; Kako, Fujio
We discuss the term cancellation which makes floating-point Gröbner basis computation unstable, and show that error accumulation is never negligible in our previous method. Then, we present a new method which removes accumulated errors as far as possible by reducing matrices constructed from coefficient vectors via Gaussian elimination. The method makes manifest the amounts of term cancellation caused by the existence of approximately linearly dependent relations among input polynomials.
Comparative test on several forms of background error covariance in 3DVar
NASA Astrophysics Data System (ADS)
Shao, Aimei
2013-04-01
The background error covariance matrix (hereinafter referred to as the B matrix) plays an important role in the three-dimensional variational (3DVar) data assimilation method. However, it is difficult to obtain the B matrix accurately because the true atmospheric state is unknown. Therefore, several methods have been developed to estimate the B matrix (e.g. the NMC method, the innovation analysis method, recursive filters, and ensemble methods such as the EnKF). Prior to further development and application of these methods, the behaviour in 3DVar of the B matrices they produce is worth studying and evaluating. For this reason, NCEP reanalysis and forecast data are used to test the effectiveness of several B matrices with the VAF (Huang, 1999) method. Here the NCEP analysis is treated as the truth, and in this case the forecast error is known. The data from 2006 to 2007 are used as the samples to estimate the B matrix, and the data from 2008 are used to verify the assimilation effects. The 48-h and 24-h forecasts valid at the same time are used to estimate the B matrix with the NMC method. The B matrix can be represented by a correlation part (a non-diagonal matrix) and a variance part (a diagonal matrix of variances). A Gaussian filter function is used as an approximation to represent the variation of correlation coefficients with distance in numerous 3DVar systems. On the basis of this assumption, the following forms of the B matrix are designed and tested with VAF in the comparative experiments: (1) the error variance and the characteristic lengths are fixed and set to their mean values averaged over the analysis domain; (2) as in (1), but the mean characteristic lengths are reduced to 50 percent of the original for height and 60 percent for temperature; (3) as in (2), but the error variance, calculated directly from the historical data, is space-dependent; (4) the error variance and characteristic lengths are all calculated directly from the historical data; (5) the B matrix is estimated directly from the historical data; (6) as in (5), but a localization process is performed; (7) the B matrix is estimated by the NMC method but the error variance is reduced by a factor of 1.7 so that it is close to the value calculated from the true forecast error samples; (8) as in (7), but the localization of (6) is performed. Experimental results with the different B matrices show that, for the Gaussian-type B matrix, the characteristic lengths calculated from the true error samples do not yield good analysis results. However, the reduced characteristic lengths (about half of the original ones) lead to a good analysis. If the B matrix estimated directly from the historical data is used in 3DVar, the assimilation does not reach its best performance. Better assimilation results are generated with the application of reduced characteristic lengths and localization. Even so, this has no obvious advantage over the Gaussian-type B matrix with the optimal characteristic lengths. This implies that the Gaussian-type B matrix, widely used in operational 3DVar systems, can produce a good analysis with appropriate characteristic lengths. The crucial problem is how to determine the appropriate characteristic lengths. (This work is supported by the National Natural Science Foundation of China (41275102, 40875063), and the Fundamental Research Funds for the Central Universities (lzujbky-2010-9).)
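As a hedged illustration of the Gaussian-correlation assumption discussed above, a one-dimensional B matrix can be assembled from a variance part and a Gaussian correlation part as sketched below; the grid, variances and characteristic lengths are arbitrary choices, not values from the study:

```python
import numpy as np

def gaussian_b_matrix(grid, variances, length_scale):
    """Background-error covariance B = D^(1/2) C D^(1/2) with a Gaussian correlation model."""
    grid = np.asarray(grid, float)
    d = np.abs(grid[:, None] - grid[None, :])          # pairwise grid distances
    C = np.exp(-0.5 * (d / length_scale) ** 2)         # Gaussian correlation function
    d_half = np.sqrt(np.asarray(variances, float))     # standard deviations
    return d_half[:, None] * C * d_half[None, :]

grid = np.linspace(0.0, 1000.0, 50)                    # e.g. km along one dimension (assumed)
B_full = gaussian_b_matrix(grid, variances=np.full(50, 4.0), length_scale=200.0)
B_half = gaussian_b_matrix(grid, variances=np.full(50, 4.0), length_scale=100.0)
print(B_full[0, :5])
print(B_half[0, :5])                                   # shorter characteristic length: faster decay
```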
Counteracting structural errors in ensemble forecast of influenza outbreaks.
Pei, Sen; Shaman, Jeffrey
2017-10-13
For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models. Inaccuracy of influenza forecasts based on dynamical models is partly due to nonlinear error growth. Here the authors address the error structure of a compartmental influenza model, and develop a new improved forecast approach combining dynamical error correction and statistical filtering techniques.
On-board error correction improves IR earth sensor accuracy
NASA Astrophysics Data System (ADS)
Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.
1989-10-01
Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of errors in a scanning infra-red earth sensor are analyzed in this paper. The systematic errors arising from seasonal variation of infra-red radiation, oblate shape of the earth, ambient temperature of sensor, changes in scan/spin rates have been analyzed. Simple relations are derived using least square curve fitting for on-board correction of these errors. Random errors arising out of noise from detector and amplifiers, instability of alignment and localized radiance anomalies are analyzed and possible correction methods are suggested. Sun and Moon interference on earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on-board. It is possible to obtain eight times improvement in sensing accuracy, which will be comparable with ground based post facto attitude refinement.
Peeling Away Timing Error in NetFlow Data
NASA Astrophysics Data System (ADS)
Trammell, Brian; Tellenbach, Bernhard; Schatzmann, Dominik; Burkhart, Martin
In this paper, we characterize, quantify, and correct timing errors introduced into network flow data by collection and export via Cisco NetFlow version 9. We find that while some of these sources of error (clock skew, export delay) are generally implementation-dependent and known in the literature, there is an additional cyclic error of up to one second that is inherent to the design of the export protocol. We present a method for correcting this cyclic error in the presence of clock skew and export delay. In an evaluation using traffic with known timing collected from a national-scale network, we show that this method can successfully correct the cyclic error. However, there can also be other implementation-specific errors for which insufficient information remains for correction. On the routers we have deployed in our network, this limits the accuracy to about 70ms, reinforcing the point that implementation matters when conducting research on network measurement data.
Automatic Recognition of Phonemes Using a Syntactic Processor for Error Correction.
1980-12-01
Thesis, AFIT/GE/EE/80D-45, Robert B. Taylor, 2Lt USAF. Approved for public release; distribution unlimited. [Recovered table-of-contents fragments: Hypothesis Testing; Bayes Decision Rule for Minimum Error; Bayes Decision Rule for Minimum Risk; Minimax Test.]
Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar
Doerry, Armin W [Albuquerque, NM; Heard, Freddie E [Albuquerque, NM; Cordaro, J Thomas [Albuquerque, NM
2008-06-24
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
Error correcting circuit design with carbon nanotube field effect transistors
NASA Astrophysics Data System (ADS)
Liu, Xiaoqiang; Cai, Li; Yang, Xiaokuo; Liu, Baojun; Liu, Zhongyong
2018-03-01
In this work, a parallel error-correcting circuit based on the (7, 4) Hamming code is designed and implemented with carbon nanotube field-effect transistors (CNTFETs), and its function is validated by simulation in HSpice with the Stanford model. A grouping method which is able to correct multiple bit errors in 16-bit and 32-bit applications is proposed, and its error correction capability is analyzed. The performance of circuits implemented with CNTFETs and traditional MOSFETs is also compared; the former shows a 34.4% reduction in layout area and a 56.9% reduction in power consumption.
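Independently of the circuit-level implementation, the (7, 4) Hamming encode/decode logic that such a circuit realizes can be sketched in software as follows (systematic generator and parity-check matrices, single-bit syndrome correction); this is a generic textbook construction, not the authors' circuit:

```python
import numpy as np

# systematic (7, 4) Hamming code: G = [I_4 | P], H = [P^T | I_3]
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def encode(data4):
    """Encode 4 data bits into a 7-bit codeword."""
    return (np.asarray(data4) @ G) % 2

def decode(word7):
    """Correct at most one bit error and return the 4 data bits."""
    r = np.array(word7, dtype=int)
    s = (H @ r) % 2                                   # 3-bit syndrome
    if s.any():
        # the syndrome equals the column of H at the error position
        pos = int(np.argmax((H.T == s).all(axis=1)))
        r[pos] ^= 1                                   # flip the erroneous bit
    return r[:4]                                      # systematic: data bits come first

msg = [1, 0, 1, 1]
cw = encode(msg)
cw_err = cw.copy()
cw_err[5] ^= 1                                        # inject a single bit error
print(decode(cw_err), "recovered from", cw_err)
```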
Beyond multi-fractals: surrogate time series and fields
NASA Astrophysics Data System (ADS)
Venema, V.; Simmer, C.
2007-12-01
Most natural complex systems are characterised by variability on a large range of temporal and spatial scales. The two main methodologies to generate such structures are Fourier/FARIMA-based algorithms and multifractal methods. The former is restricted to Gaussian data, whereas the latter requires the structure to be self-similar. This work presents so-called surrogate data as an alternative that works with any (empirical) distribution and power spectrum. The best-known surrogate algorithm is the iterative amplitude adjusted Fourier transform (IAAFT) algorithm. We have studied six different geophysical time series (two clouds, runoff of a small and a large river, temperature and rain) and their surrogates. The power spectra and consequently the second-order structure functions were replicated accurately. Even the fourth-order structure function was reproduced more accurately by the surrogates than would be possible with a fractal method, because the measured structure deviated too strongly from fractal scaling. Only in the case of the daily rain sums could a fractal method have been more accurate. Just like Fourier and multifractal methods, the current surrogates are not able to model the asymmetric increment distributions observed for runoff, i.e., they cannot reproduce nonlinear dynamical processes that are asymmetric in time. Furthermore, we have found differences in the structure functions on small scales. Surrogate methods are especially valuable for empirical studies, because the generated time series and fields mimic measured variables accurately. Our main application is radiative transfer through structured clouds. Like many geophysical fields, clouds can only be sampled sparsely, e.g. with in-situ airborne instruments. However, for radiative transfer calculations we need full 3-dimensional cloud fields. A first study relating the measured properties of the cloud droplets and the radiative properties of the cloud field by generating surrogate cloud fields yielded good results within the measurement error. A further test of the suitability of the surrogate clouds for radiative transfer is performed by comparing the radiative properties of model cloud fields of sparse cumulus and stratocumulus with their surrogate fields. The bias and root-mean-square error in various radiative properties are small, and the deviations in the radiances and irradiances are not statistically significant, i.e. these deviations can be attributed to the Monte Carlo noise of the radiative transfer calculations. We compared these results with the optical properties of synthetic clouds that have either the correct distribution (but no spatial correlations) or the correct power spectrum (but a Gaussian distribution). These clouds did show statistically significant deviations. For more information see: http://www.meteo.uni-bonn.de/venema/themes/surrogates/
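A compact sketch of the IAAFT algorithm named above is given below; the test series, iteration count and seed are arbitrary assumptions:

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """Iterative amplitude-adjusted Fourier transform surrogate of a 1-D series."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    sorted_x = np.sort(x)                        # target amplitude distribution
    target_amp = np.abs(np.fft.rfft(x))          # target Fourier amplitudes
    s = rng.permutation(x)                       # start from a random shuffle
    for _ in range(n_iter):
        # step 1: impose the target power spectrum, keeping the current phases
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(target_amp * np.exp(1j * phases), n=x.size)
        # step 2: impose the target amplitude distribution by rank ordering
        ranks = np.argsort(np.argsort(s))
        s = sorted_x[ranks]
    return s

x = np.cumsum(np.random.default_rng(1).standard_normal(1024))   # synthetic test series
surr = iaaft(x)
print(np.allclose(np.sort(surr), np.sort(x)))    # identical value distribution: True
```

Because the loop ends with the amplitude-adjustment step, the surrogate matches the empirical distribution exactly while its power spectrum matches the target approximately, as described in the record.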
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parameter bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutants setting.
On codes with multi-level error-correction capabilities
NASA Technical Reports Server (NTRS)
Lin, Shu
1987-01-01
In conventional coding for error control, all the information symbols of a message are regarded as equally significant, and hence codes are devised to provide equal protection for each information symbol against channel errors. However, on some occasions, some information symbols in a message are more significant than the others. As a result, it is desirable to devise codes with multi-level error-correcting capabilities. Another situation where codes with multi-level error-correcting capabilities are desired is in broadcast communication systems. An m-user broadcast channel has one input and m outputs. The single input and each output form a component channel. The component channels may have different noise levels, and hence the messages transmitted over the component channels require different levels of protection against errors. Block codes with multi-level error-correcting capabilities are also known as unequal error protection (UEP) codes. Structural properties of these codes are derived. Based on these structural properties, two classes of UEP codes are constructed.
Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A
2007-02-01
The reliable change index (RCI) expresses change relative to its associated error, and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs that each account for error in different ways. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI that had no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients, but the use of the within-subject standard deviation (WSD), expressing the effects of random error, as an error estimate is a theoretically appropriate denominator when a constant error correction, removing the effects of systematic error, is deducted from the numerator in a RCI.
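For illustration, an RCI with a constant practice-effect correction can be computed as sketched below; the control-group practice effect, within-subject standard deviation (WSD) and patient scores are hypothetical values, not data from the study:

```python
import numpy as np

def rci_with_practice(x_pre, x_post, practice_effect, error_term):
    """Reliable change index with a constant practice-effect correction.
    `error_term` could be, e.g., the within-subject standard deviation (WSD)
    of change scores in a comparable nonsurgical control group."""
    return (np.asarray(x_post, float) - np.asarray(x_pre, float) - practice_effect) / error_term

practice = 2.0                         # hypothetical mean retest gain in controls
wsd = 3.5                              # hypothetical WSD of control change scores
pre = np.array([48.0, 52.0, 45.0])     # hypothetical patient scores, preoperative
post = np.array([41.0, 53.0, 44.0])    # 1 week postoperative
z = rci_with_practice(pre, post, practice, wsd)
print(z, z < -1.645)                   # declines beyond a one-tailed 5% cut-off
```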
Combinatorial neural codes from a mathematical coding theory perspective.
Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L
2013-07-01
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
42 CFR 412.278 - Administrator's review.
Code of Federal Regulations, 2014 CFR
2014-10-01
... or computational errors, or to correct the decision if the evidence that was considered in making the... discretion, may amend the decision to correct mathematical or computational errors, or to correct the...
Phenological features for winter rapeseed identification in Ukraine using satellite data
NASA Astrophysics Data System (ADS)
Kravchenko, Oleksiy
2014-05-01
Winter rapeseed is one of the major oilseed crops in Ukraine; it is characterized by high profitability and is often grown in violation of crop rotation requirements, leading to soil degradation. Therefore, rapeseed identification using satellite data is a promising direction for operational estimation of the crop acreage and rotation control. The crop acreage of rapeseed is about 0.5-3% of the total area of Ukraine, which poses a major problem for identification using satellite data [1]. While winter rapeseed could be classified using biomass features observed during autumn vegetation, these features are quite unstable due to field-to-field differences in planting dates as well as spatial and temporal heterogeneity in soil moisture availability. Because of this, autumn biomass features can be used only locally (at NUTS-3 level) and are not suitable for large-scale, country-wide crop identification. We propose to use crop parameters at the flowering phenological stage for crop identification and present a method for parameter estimation using time series of moderate-resolution data. Rapeseed flowering can be observed as a bell-shaped peak in the red reflectance time series. However, the duration of the flowering period observable by satellite is only about two weeks, which is quite short given inevitable cloud coverage. Thus daily time series are needed to resolve the flowering peak, which limits us to moderate-resolution data. We used daily atmospherically corrected MODIS data from the Terra and Aqua satellites within the 90-160 DOY period to calculate the features. An empirical BRDF correction is used to minimize angular effects. We used Gaussian Processes Regression (GPR) for temporal interpolation to minimize errors due to residual cloud coverage, atmospheric correction, and mixed-pixel problems. We estimate 12 parameters for each time series: the red and near-infrared (NIR) reflectance and the timing at four stages (before and after the flowering, at the peak flowering, and at the maximum NIR level). We used a Support Vector Machine for data classification. The most relevant feature for classification is the flowering peak timing, followed by the flowering peak magnitude. The dependency of the peak time on latitude, used as a sole feature, can reject 90% of non-rapeseed pixels, which greatly reduces the imbalance of the classification problem. To assess the accuracy of our approach we performed a stratified area frame sampling survey in Odessa region (NUTS-2 level) in 2013. The omission error is about 12.6%, while the commission error is higher, at the level of 22%. This is explained by the high viewing angle composition criterion used in our approach to mitigate the cloud coverage problem. However, the errors are quite stable spatially and can easily be corrected by a regression technique. To do this we performed area estimation for Odessa region using a regression estimator and obtained good area estimation accuracy, with a 4.6% error (1σ). [1] Gallego, F.J., et al., Efficiency assessment of using satellite data for crop area estimation in Ukraine. Int. J. Appl. Earth Observ. Geoinf. (2014), http://dx.doi.org/10.1016/j.jag.2013.12.013
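As a hedged sketch of the GPR temporal interpolation step (not the authors' processing chain), daily reflectances with cloud gaps can be interpolated and the flowering peak located as follows; the reflectance model, kernel and dates are illustrative assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(0)
# hypothetical red-reflectance samples with cloud-induced gaps (values made up)
doy = np.sort(rng.choice(np.arange(90, 161), size=35, replace=False)).astype(float)
true_peak = 130.0                                      # assumed flowering peak DOY
red = (0.05 + 0.08 * np.exp(-0.5 * ((doy - true_peak) / 5.0) ** 2)
       + 0.005 * rng.standard_normal(doy.size))

kernel = ConstantKernel(0.01) * RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(doy[:, None], red)

grid = np.arange(90.0, 161.0)                          # daily grid over the 90-160 DOY window
mean, std = gp.predict(grid[:, None], return_std=True)
print("estimated flowering-peak DOY:", grid[np.argmax(mean)])
```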
How to Correct a Task Error: Task-Switch Effects Following Different Types of Error Correction
ERIC Educational Resources Information Center
Steinhauser, Marco
2010-01-01
It has been proposed that switch costs in task switching reflect the strengthening of task-related associations and that strengthening is triggered by response execution. The present study tested the hypothesis that only task-related responses are able to trigger strengthening. Effects of task strengthening caused by error corrections were…
A non-Gaussian approach to risk measures
NASA Astrophysics Data System (ADS)
Bormetti, Giacomo; Cisana, Enrica; Montagna, Guido; Nicrosini, Oreste
2007-03-01
Reliable calculations of financial risk require that the fat-tailed nature of price changes be included in risk measures. To this end, a non-Gaussian approach to financial risk management is presented, modelling the power-law tails of the returns distribution in terms of a Student-t distribution. Non-Gaussian closed-form solutions for value-at-risk and expected shortfall are obtained, and the standard formulae known in the literature under the normality assumption are recovered as a special case. The implications of the approach for risk management are demonstrated through an empirical analysis of financial time series from the Italian stock market and through comparison with the results of the most widely used procedures of quantitative finance. Particular attention is paid to quantifying the size of the errors affecting the market risk measures obtained according to different methodologies, by employing a bootstrap technique.
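A minimal sketch of Student-t value-at-risk and expected shortfall is given below, assuming a location-scale t model for returns and using the standard closed-form ES expression for the t distribution (not necessarily the exact formulae of the paper); the parameter values are arbitrary and a Monte Carlo cross-check is included:

```python
import numpy as np
from scipy import stats

def student_t_var_es(mu, scale, nu, alpha=0.99):
    """VaR and expected shortfall of the loss L = -X, with X ~ mu + scale * t_nu."""
    q = stats.t.ppf(alpha, nu)                                   # standard-t quantile
    var = -mu + scale * q
    es = -mu + scale * stats.t.pdf(q, nu) / (1.0 - alpha) * (nu + q ** 2) / (nu - 1.0)
    return var, es

mu, scale, nu = 0.0005, 0.01, 4.0                                # hypothetical return model
var99, es99 = student_t_var_es(mu, scale, nu)

# Monte Carlo cross-check of both measures
rng = np.random.default_rng(7)
losses = -(mu + scale * rng.standard_t(nu, size=2_000_000))
q_mc = np.quantile(losses, 0.99)
print("VaR 99%:", var99, "MC:", q_mc)
print("ES  99%:", es99, "MC:", losses[losses > q_mc].mean())
```

Setting nu to a large value recovers the Gaussian limits, mirroring the special case mentioned in the abstract.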
Čársky, Petr; Čurík, Roman; Varga, Štefan
2012-03-21
The objective of this paper is to show that density fitting (the resolution-of-the-identity approximation) can also be applied to Coulomb integrals of the type (k_1(1)k_2(1)|g_1(2)g_2(2)), where the k and g symbols refer to plane-wave functions and Gaussians, respectively. We have shown how to achieve the accuracy of these integrals that is needed in wave-function MO and density functional theory-type calculations using mixed Gaussian and plane-wave basis sets. The crucial issues for achieving such high accuracy are the application of constraints for conservation of the number of electrons and the components of the dipole moment, optimization of the auxiliary basis set, and elimination of round-off errors in the matrix inversion. © 2012 American Institute of Physics
Analysis on the misalignment errors between Hartmann-Shack sensor and 45-element deformable mirror
NASA Astrophysics Data System (ADS)
Liu, Lihui; Zhang, Yi; Tao, Jianjun; Cao, Fen; Long, Yin; Tian, Pingchuan; Chen, Shangwu
2017-02-01
Aiming at a 45-element adaptive optics system, a model of the 45-element deformable mirror is built in COMSOL Multiphysics, and each actuator's influence function is obtained by the finite element method. The process by which this system corrects optical aberrations is simulated in software, taking the Strehl ratio of the corrected diffraction spot as the performance metric, and the system's ability to correct wave aberrations described by Zernike polynomials 3-20 is analyzed under different translation and rotation errors between the Hartmann-Shack sensor and the deformable mirror. The computed results show that the correction ability for Zernike polynomials 3-9 is higher than that for Zernike polynomials 10-20, and that this pattern does not change as the misalignment error changes. As the rotation error between the Hartmann-Shack sensor and the deformable mirror increases, the correction ability for Zernike polynomials 3-20 gradually decreases; as the translation error increases, the correction ability for Zernike polynomials 3-9 gradually decreases, while the correction ability for Zernike polynomials 10-20 fluctuates.
Local concurrent error detection and correction in data structures using virtual backpointers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, C.C.J.; Chen, P.P.; Fuchs, W.K.
1989-11-01
A new technique, based on virtual backpointers, is presented in this paper for local concurrent error detection and correction in linked data structures. Two new data structures utilizing virtual backpointers, the Virtual Double-Linked List and the B-Tree with Virtual Backpointers, are described. For these structures, double errors within a fixed-size checking window can be detected in constant time, and single errors detected during forward moves can be corrected in constant time.
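The exact encodings of the Virtual Double-Linked List and the B-Tree with Virtual Backpointers are not given in the abstract, so the toy below only illustrates the general idea behind local concurrent checking: each node carries a small derived check field recomputed from its neighbours, so a pointer corruption can be flagged in O(1) time by inspecting a fixed-size window. The XOR-of-neighbour-ids check, the class names and the demonstration list are assumptions for illustration, not the papers' construction.

```python
class Node:
    """Doubly linked node carrying a derived check field.

    The 'virtual backpointer' is illustrated here as the XOR of the ids of the
    forward and backward neighbours; the papers define their own encoding, so
    treat this purely as a toy stand-in."""
    def __init__(self, node_id, value):
        self.id = node_id
        self.value = value
        self.next = None
        self.prev = None
        self.vbp = 0              # derived check field

    def refresh_check(self):
        nid = self.next.id if self.next else 0
        pid = self.prev.id if self.prev else 0
        self.vbp = nid ^ pid      # recomputed whenever links change

def locally_consistent(node):
    """O(1) local check inside a fixed-size window around 'node'."""
    nid = node.next.id if node.next else 0
    pid = node.prev.id if node.prev else 0
    if node.vbp != nid ^ pid:
        return False
    # structural agreement with immediate neighbours
    if node.next and node.next.prev is not node:
        return False
    if node.prev and node.prev.next is not node:
        return False
    return True

# build a tiny list and corrupt one link
a, b, c = Node(1, 'a'), Node(2, 'b'), Node(3, 'c')
a.next, b.prev, b.next, c.prev = b, a, c, b
for n in (a, b, c):
    n.refresh_check()
b.next = a                        # simulated pointer error
print([locally_consistent(n) for n in (a, b, c)])
```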
NASA Astrophysics Data System (ADS)
Vieira, Daniel; Krems, Roman
2017-04-01
Fine-structure transitions in collisions of O(3Pj) with atomic hydrogen are an important cooling mechanism in the interstellar medium; knowledge of the rate coefficients for these transitions has a wide range of astrophysical applications. The accuracy of the theoretical calculation is limited by inaccuracy in the ab initio interaction potentials used in the coupled-channel quantum scattering calculations from which the rate coefficients can be obtained. In this work we use the latest ab initio results for the O(3Pj) + H interaction potentials to improve on previous calculations of the rate coefficients. We further present a machine-learning technique based on Gaussian Process regression to determine the sensitivity of the rate coefficients to variations of the underlying adiabatic interaction potentials. To account for the inaccuracy inherent in the ab initio calculations we compute error bars for the rate coefficients corresponding to 20% variation in each of the interaction potentials. We obtain these error bars by fitting a Gaussian Process model to a data set of potential curves and rate constants. We use the fitted model to do sensitivity analysis, determining the relative importance of individual adiabatic potential curves to a given fine-structure transition. NSERC.
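As a rough illustration of the sensitivity-analysis step described above, the sketch below fits a Gaussian process (scikit-learn, anisotropic RBF kernel) to a made-up table of potential-scaling factors versus rate coefficients and reads relative importance off the learned length scales. The feature layout, the synthetic data and all numerical values are assumptions; only the general Gaussian Process regression workflow is taken from the abstract.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

# hypothetical training set: each row scales three adiabatic potentials by up to
# +/-20%, and y is the (synthetic) rate coefficient those potentials would produce
X = 1.0 + 0.2 * (2 * rng.random((60, 3)) - 1)
y = 3.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * X[:, 2] + 0.05 * rng.standard_normal(60)

kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0, 1.0])
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-3, normalize_y=True).fit(X, y)

# sensitivity: learned length scales (a shorter scale marks a more influential curve)
print(gp.kernel_.k2.length_scale)

# error bar on the rate coefficient for the nominal (unscaled) potentials
mean, std = gp.predict(np.ones((1, 3)), return_std=True)
print(mean, std)
```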
NASA Astrophysics Data System (ADS)
Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel
2018-05-01
We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
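The abstract does not spell out how the Gaussian mixture enters the estimator, so the following is only a generic sketch of the underlying idea: drawing Monte Carlo samples from a hand-picked Gaussian mixture and using importance weights to estimate a coupling-type correction factor relative to an analytically normalized harmonic (Gaussian) reference. The one-dimensional integrand, the mixture parameters and the sample size are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# toy target: ratio Z / Z0 where Z0 is the harmonic (Gaussian) reference and the
# "coupling" multiplies it by g(x); the quantity to estimate is E_p0[g(x)]
def g(x):
    return np.exp(-0.3 * np.sin(2.0 * x) ** 2)

# sampling distribution: a two-component Gaussian mixture chosen by hand
weights = np.array([0.6, 0.4])
means = np.array([-1.0, 1.5])
sigmas = np.array([0.8, 0.5])

def sample_mixture(n):
    comp = rng.choice(2, size=n, p=weights)
    return rng.normal(means[comp], sigmas[comp])

def mixture_pdf(x):
    return sum(w * stats.norm.pdf(x, m, s) for w, m, s in zip(weights, means, sigmas))

p0 = stats.norm(loc=0.0, scale=1.0)          # normalized harmonic reference density

n = 200_000
x = sample_mixture(n)
w = g(x) * p0.pdf(x) / mixture_pdf(x)        # importance weights
estimate = w.mean()
stderr = w.std(ddof=1) / np.sqrt(n)
print(estimate, stderr)
```

Re-running with a poorly matched mixture visibly inflates the standard error, mirroring the paper's point that the choice of sampling distribution controls the stochastic error.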
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Cooper, Robert; Pawson, Steven; Sun, Zhibin
2009-01-01
We present a source inversion technique for chemical constituents that uses assimilated constituent observations rather than directly using the observations. The method is tested with a simple model problem, which is a two-dimensional Fourier-Galerkin transport model combined with a Kalman filter for data assimilation. Inversion is carried out using a Green's function method, and observations are simulated from a true state with added Gaussian noise. The forecast state uses the same spectral model, but differs by an unbiased Gaussian model error, and emissions models with constant errors. The numerical experiments employ both simulated in situ and satellite observation networks. Source inversion was carried out either by direct use of synthetically generated observations with added noise, or by first assimilating the observations and using the analyses to extract observations. We have conducted 20 identical twin experiments for each set of source and observation configurations, and find that in the limiting cases of very few localized observations, or an extremely large observation network, there is little advantage to carrying out assimilation first. However, at intermediate observation densities, the source inversion error standard deviation decreases by 50% to 95% when the Kalman filter algorithm is followed by the Green's function inversion.
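The sketch below shows the bare bones of a Green's function source inversion of the kind described: synthetic sources propagated through a known response matrix, observed either directly (noisier) or after an idealized assimilation step (less noisy), and recovered by least squares. The response matrix, the noise levels and the assumption that assimilation simply reduces observation noise are placeholders, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

n_obs, n_src = 120, 6
G = rng.random((n_obs, n_src))          # Green's functions: response of each
                                        # observation to a unit source
s_true = np.array([2.0, 0.5, 1.5, 0.0, 3.0, 1.0])

# "observations": either raw (noisier) or analysis values after assimilation
raw = G @ s_true + 0.5 * rng.standard_normal(n_obs)
analysis = G @ s_true + 0.2 * rng.standard_normal(n_obs)   # assimilation assumed
                                                           # to reduce noise

for name, y in [("direct obs", raw), ("assimilated", analysis)]:
    s_hat, *_ = np.linalg.lstsq(G, y, rcond=None)
    print(name, np.round(s_hat - s_true, 3))
```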
ERIC Educational Resources Information Center
Starns, Jeffrey J.; Rotello, Caren M.; Hautus, Michael J.
2014-01-01
We tested the dual process and unequal variance signal detection models by jointly modeling recognition and source confidence ratings. The 2 approaches make unique predictions for the slope of the recognition memory zROC function for items with correct versus incorrect source decisions. The standard bivariate Gaussian version of the unequal…
Local concurrent error detection and correction in data structures using virtual backpointers
NASA Technical Reports Server (NTRS)
Li, C. C.; Chen, P. P.; Fuchs, W. K.
1987-01-01
A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
Local concurrent error detection and correction in data structures using virtual backpointers
NASA Technical Reports Server (NTRS)
Li, Chung-Chi Jim; Chen, Paul Peichuan; Fuchs, W. Kent
1989-01-01
A new technique, based on virtual backpointers, for local concurrent error detection and correction in linked data structures is presented. Two new data structures, the Virtual Double Linked List, and the B-tree with Virtual Backpointers, are described. For these structures, double errors can be detected in O(1) time and errors detected during forward moves can be corrected in O(1) time. The application of a concurrent auditor process to data structure error detection and correction is analyzed, and an implementation is described, to determine the effect on mean time to failure of a multi-user shared database system. The implementation utilizes a Sequent shared memory multiprocessor system operating on a shared database of Virtual Double Linked Lists.
Super-Gaussian laser intensity output formation by means of adaptive optics
NASA Astrophysics Data System (ADS)
Cherezova, T. Y.; Chesnokov, S. S.; Kaptsov, L. N.; Kudryashov, A. V.
1998-10-01
An optical resonator using an intracavity adaptive mirror with three concentric rings of controlling electrodes, which produces low-loss, large-beamwidth super-Gaussian output of order 4, 6, and 8, is analyzed. An inverse propagation method is used to determine the appropriate shape of the adaptive mirror. The mirror reproduces the shape with minimal RMS error by combining weights of experimentally measured response functions of the mirror sample. The voltages applied to each mirror electrode are calculated. Practical design parameters such as the construction of the adaptive mirror, Fresnel numbers, and the geometric factor are discussed.
Asymmetric Memory Circuit Would Resist Soft Errors
NASA Technical Reports Server (NTRS)
Buehler, Martin G.; Perlman, Marvin
1990-01-01
Some nonlinear error-correcting codes are more efficient in the presence of asymmetry. A combination of circuit-design and coding concepts is expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets," due to ionizing radiation). An integrated circuit of the new type is made deliberately more susceptible to one kind of bit error than to the other, and the associated error-correcting code is adapted to exploit this asymmetry in error probabilities.
Detection and correction of prescription errors by an emergency department pharmacy service.
Stasiak, Philip; Afilalo, Marc; Castelino, Tanya; Xue, Xiaoqing; Colacone, Antoinette; Soucy, Nathalie; Dankoff, Jerrald
2014-05-01
Emergency departments (EDs) are recognized as a high-risk setting for prescription errors. Pharmacist involvement may be important in reviewing prescriptions to identify and correct errors. The objectives of this study were to describe the frequency and type of prescription errors detected by pharmacists in EDs, determine the proportion of errors that could be corrected, and identify factors associated with prescription errors. This prospective observational study was conducted in a tertiary care teaching ED on 25 consecutive weekdays. Pharmacists reviewed all documented prescriptions and flagged and corrected errors for patients in the ED. We collected information on patient demographics, details on prescription errors, and the pharmacists' recommendations. A total of 3,136 ED prescriptions were reviewed. The proportion of prescriptions in which a pharmacist identified an error was 3.2% (99 of 3,136; 95% confidence interval [CI] 2.5-3.8). The types of identified errors were wrong dose (28 of 99, 28.3%), incomplete prescription (27 of 99, 27.3%), wrong frequency (15 of 99, 15.2%), wrong drug (11 of 99, 11.1%), wrong route (1 of 99, 1.0%), and other (17 of 99, 17.2%). The pharmacy service intervened and corrected 78 (78 of 99, 78.8%) errors. Factors associated with prescription errors were patient age over 65 (odds ratio [OR] 2.34; 95% CI 1.32-4.13), prescriptions with more than one medication (OR 5.03; 95% CI 2.54-9.96), and those written by emergency medicine residents compared to attending emergency physicians (OR 2.21, 95% CI 1.18-4.14). Pharmacists in a tertiary ED are able to correct the majority of prescriptions in which they find errors. Errors are more likely to be identified in prescriptions written for older patients, those containing multiple medication orders, and those prescribed by emergency residents.
Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check(LDPC) Codes
NASA Astrophysics Data System (ADS)
Jing, Lin; Brun, Todd; Quantum Research Team
Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Manabu Hagiwara et al., 2007 presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
A comparison of gantry-mounted x-ray-based real-time target tracking methods.
Montanaro, Tim; Nguyen, Doan Trang; Keall, Paul J; Booth, Jeremy; Caillet, Vincent; Eade, Thomas; Haddad, Carol; Shieh, Chun-Chien
2018-03-01
Most modern radiotherapy machines are built with a 2D kV imaging system. Combining this imaging system with a 2D-3D inference method would allow for a ready-made option for real-time 3D tumor tracking. This work investigates and compares the accuracy of four existing 2D-3D inference methods using both motion traces inferred from external surrogates and measured internally from implanted beacons. Tumor motion data from 160 fractions (46 thoracic/abdominal patients) of Synchrony traces (inferred traces), and 28 fractions (7 lung patients) of Calypso traces (internal traces) from the LIGHT SABR trial (NCT02514512) were used in this study. The motion traces were used as the ground truth. The ground truth trajectories were used in silico to generate 2D positions projected on the kV detector. These 2D traces were then passed to the 2D-3D inference methods: interdimensional correlation, Gaussian probability density function (PDF), arbitrary-shape PDF, and the Kalman filter. The inferred 3D positions were compared with the ground truth to determine tracking errors. The relationships between tracking error and motion magnitude, interdimensional correlation, and breathing periodicity index (BPI) were also investigated. Larger tracking errors were observed from the Calypso traces, with RMS and 95th percentile 3D errors of 0.84-1.25 mm and 1.72-2.64 mm, compared to 0.45-0.68 mm and 0.74-1.13 mm from the Synchrony traces. The Gaussian PDF method was found to be the most accurate, followed by the Kalman filter, the interdimensional correlation method, and the arbitrary-shape PDF method. Tracking error was found to strongly and positively correlate with motion magnitude for both the Synchrony and Calypso traces and for all four methods. Interdimensional correlation and BPI were found to negatively correlate with tracking error only for the Synchrony traces. The Synchrony traces exhibited higher interdimensional correlation than the Calypso traces, especially in the anterior-posterior direction. Inferred traces often exhibit higher interdimensional correlation, which is not a true representation of thoracic/abdominal motion and may underestimate kV-based tracking errors. The use of internal traces acquired from systems such as Calypso is advised for future kV-based tracking studies. The Gaussian PDF method is the most accurate 2D-3D inference method for tracking thoracic/abdominal targets. Motion magnitude has a significant impact on 2D-3D inference error, and should be considered when estimating kV-based tracking error. © 2018 American Association of Physicists in Medicine.
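The published Gaussian PDF method is not detailed in the abstract; the sketch below only illustrates the generic conditional-Gaussian step that such 2D-3D inference relies on: fit a joint Gaussian to a prior 3D motion trace, then estimate the coordinate not resolved by the kV projection from the two coordinates that are. The synthetic trace, the simplified geometry (detector assumed aligned with two motion axes) and the function name are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# prior 3D tumour-motion trace (synthetic stand-in for a Calypso/Synchrony trace),
# columns: left-right, superior-inferior, anterior-posterior (mm)
t = np.linspace(0.0, 60.0, 3000)
trace = np.column_stack([
    3.0 * np.sin(2 * np.pi * t / 4.0),
    6.0 * np.sin(2 * np.pi * t / 4.0 + 0.4),
    4.0 * np.sin(2 * np.pi * t / 4.0 + 0.9),
]) + 0.3 * rng.standard_normal((3000, 3))

mu = trace.mean(axis=0)
cov = np.cov(trace, rowvar=False)

def infer_unresolved(seen, seen_idx=(0, 1), hidden_idx=2):
    """Conditional-Gaussian estimate of the coordinate not resolved by the
    2D kV projection (simplified: the detector is assumed to see axes 0 and 1)."""
    s, h = list(seen_idx), [hidden_idx]
    cond_mean = mu[h] + cov[np.ix_(h, s)] @ np.linalg.solve(
        cov[np.ix_(s, s)], seen - mu[s])
    return cond_mean.item()

truth = trace[1234]
print("true AP:", truth[2], " inferred AP:", infer_unresolved(truth[:2]))
```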
An Ensemble Method for Spelling Correction in Consumer Health Questions
Kilicoglu, Halil; Fiszman, Marcelo; Roberts, Kirk; Demner-Fushman, Dina
2015-01-01
Orthographic and grammatical errors are a common feature of informal texts written by lay people. Health-related questions asked by consumers are a case in point. Automatic interpretation of consumer health questions is hampered by such errors. In this paper, we propose a method that combines techniques based on edit distance and frequency counts with a contextual similarity-based method for detecting and correcting orthographic errors, including misspellings, word breaks, and punctuation errors. We evaluate our method on a set of spell-corrected questions extracted from the NLM collection of consumer health questions. Our method achieves an F1 score of 0.61, compared to an informed baseline of 0.29, achieved using ESpell, a spelling correction system developed for biomedical queries. Our results show that orthographic similarity is most relevant in spelling error correction in consumer health questions and that frequency and contextual information are complementary to orthographic features. PMID:26958208
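As a minimal illustration of the edit-distance-plus-frequency component mentioned above (not the authors' full ensemble, which also uses contextual similarity), the sketch below ranks in-vocabulary candidates by Levenshtein distance and corpus frequency. The tiny corpus and the distance threshold are assumptions.

```python
import re
from collections import Counter

# tiny stand-in corpus; a real system would use a large health-domain corpus
corpus = """diabetes symptoms blood pressure medicine headache treatment
            doctor pressure insulin symptoms diabetes medication"""
freq = Counter(re.findall(r"[a-z]+", corpus.lower()))

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def correct(word, max_dist=2):
    """Pick the most frequent known word within a small edit distance."""
    if word in freq:
        return word
    candidates = [(edit_distance(word, w), -freq[w], w) for w in freq]
    dist, _, best = min(candidates)
    return best if dist <= max_dist else word

print(correct("diabetis"), correct("presure"), correct("xyzzy"))
```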
Testing 3D landform quantification methods with synthetic drumlins in a real digital elevation model
NASA Astrophysics Data System (ADS)
Hillier, John K.; Smith, Mike J.
2012-06-01
Metrics such as height and volume quantifying the 3D morphology of landforms are important observations that reflect and constrain Earth surface processes. Errors in such measurements are, however, poorly understood. A novel approach, using statistically valid 'synthetic' landscapes to quantify the errors, is presented. The utility of the approach is illustrated using a case study of 184 drumlins observed in Scotland as quantified from a Digital Elevation Model (DEM) by the 'cookie cutter' extraction method. To create the synthetic DEMs, observed drumlins were removed from the measured DEM and replaced by elongate 3D Gaussian ones of equivalent dimensions positioned randomly with respect to the 'noise' (e.g. trees) and regional trends (e.g. hills) that cause the errors. Then, errors in the cookie cutter extraction method were investigated by using it to quantify these 'synthetic' drumlins, whose location and size is known. Thus, the approach determines which key metrics are recovered accurately. For example, a mean height of 6.8 m is recovered poorly at 12.5 ± 0.6 (2σ) m, but mean volume is recovered correctly. Additionally, quantification methods can be compared: a variant on the cookie cutter using an un-tensioned spline induced about twice (× 1.79) as much error. Finally, a previously reported statistically significant (p = 0.007) difference in mean volume between sub-populations of different ages, which may reflect formational processes, is demonstrated to be only 30-50% likely to exist in reality. Critically, the synthetic DEMs are demonstrated to realistically model parameter recovery, primarily because they are still almost entirely the original landscape. Results are insensitive to the exact method used to create the synthetic DEMs, and the approach could be readily adapted to assess a variety of landforms (e.g. craters, dunes and volcanoes).
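A minimal sketch of the synthetic-drumlin idea, assuming a square-grid DEM stored as a NumPy array: an elongate Gaussian bump of known height, length, width and azimuth is added to the measured surface, after which the extraction method can be re-run on a landform whose true metrics are known. The DEM, grid spacing and drumlin parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)

# stand-in DEM: regional trend plus small-scale "noise" (trees, buildings)
ny, nx, cell = 400, 400, 5.0                       # 5 m grid
x, y = np.meshgrid(np.arange(nx) * cell, np.arange(ny) * cell)
dem = 0.02 * x + 1.5 * rng.standard_normal((ny, nx))

def add_gaussian_drumlin(dem, x, y, x0, y0, height, length, width, azimuth):
    """Add an elongate Gaussian bump (the 'synthetic drumlin') to the DEM."""
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    u = (x - x0) * ca + (y - y0) * sa              # along-long-axis coordinate
    v = -(x - x0) * sa + (y - y0) * ca             # across-axis coordinate
    bump = height * np.exp(-0.5 * ((u / length) ** 2 + (v / width) ** 2))
    return dem + bump

synthetic = add_gaussian_drumlin(dem, x, y, 1000.0, 900.0,
                                 height=6.8, length=150.0, width=50.0,
                                 azimuth=np.deg2rad(30))

# the inserted volume is known analytically, so extraction errors can be measured
inserted = (synthetic - dem).sum() * cell ** 2
print("inserted volume (m^3):", inserted,
      " analytic:", 2 * np.pi * 6.8 * 150.0 * 50.0)
```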
NASA Astrophysics Data System (ADS)
Bennett, J.; David, R. E.; Wang, Q.; Li, M.; Shrestha, D. L.
2016-12-01
Flood forecasting in Australia has historically relied on deterministic forecasting models run only when floods are imminent, with considerable forecaster input and interpretation. These now co-exist with a continually available 7-day streamflow forecasting service (also deterministic) aimed at operational water management applications such as environmental flow releases. The 7-day service is not optimised for flood prediction. We describe progress on developing a system for ensemble streamflow forecasting that is suitable for both flood prediction and water management applications. Precipitation uncertainty is handled through post-processing of Numerical Weather Prediction (NWP) output with a Bayesian rainfall post-processor (RPP). The RPP corrects biases, downscales NWP output, and produces reliable ensemble spread. Ensemble precipitation forecasts are used to force a semi-distributed conceptual rainfall-runoff model. Uncertainty in precipitation forecasts is insufficient to reliably describe streamflow forecast uncertainty, particularly at shorter lead-times. We characterise hydrological prediction uncertainty separately with a 4-stage error model. The error model relies on data transformation to ensure residuals are homoscedastic and symmetrically distributed. To ensure streamflow forecasts are accurate and reliable, the residuals are modelled using a mixture-Gaussian distribution with distinct parameters for the rising and falling limbs of the forecast hydrograph. In a case study of the Murray River in south-eastern Australia, we show that ensemble predictions of floods generally have lower errors than deterministic forecasting methods. We also discuss some of the challenges in operationalising short-term ensemble streamflow forecasts in Australia, including meeting the need for accurate predictions across all flow ranges and comparing forecasts generated by event and continuous hydrological models.
The orbit and transit prospects for β pictoris b constrained with one milliarcsecond astrometry
Wang, Jason J.; Graham, James R.; Pueyo, Laurent; ...
2016-10-03
A principal scientific goal of the Gemini Planet Imager (GPI) is obtaining milliarcsecond astrometry to constrain exoplanet orbits. However, astrometry of directly imaged exoplanets is subject to biases, systematic errors, and speckle noise. Here, we describe an analytical procedure to forward model the signal of an exoplanet that accounts for both the observing strategy (angular and spectral differential imaging) and the data reduction method (Karhunen–Loève Image Projection algorithm). We use this forward model to measure the position of an exoplanet in a Bayesian framework employing Gaussian processes and Markov-chain Monte Carlo to account for correlated noise. In the case of GPI data on β Pic b, this technique, which we call Bayesian KLIP-FM Astrometry (BKA), outperforms previous techniques and yields 1σ errors at or below the one milliarcsecond level. We validate BKA by fitting a Keplerian orbit to 12 GPI observations along with previous astrometry from other instruments. The statistical properties of the residuals confirm that BKA is accurate and correctly estimates astrometric errors. Our constraints on the orbit of β Pic b firmly rule out the possibility of a transit of the planet at 10-σ significance. However, we confirm that the Hill sphere of β Pic b will transit, giving us a rare chance to probe the circumplanetary environment of a young, evolving exoplanet. As a result, we provide an ephemeris for photometric monitoring of the Hill sphere transit event, which will begin at the start of April in 2017 and finish at the end of January in 2018.
Effect of single vision soft contact lenses on peripheral refraction.
Kang, Pauline; Fan, Yvonne; Oh, Kelly; Trac, Kevin; Zhang, Frank; Swarbrick, Helen
2012-07-01
To investigate changes in peripheral refraction with under-, full, and over-correction of central refraction with commercially available single vision soft contact lenses (SCLs) in young myopic adults. Thirty-four myopic adult subjects were fitted with Proclear Sphere SCLs to under-correct (+0.75 DS), fully correct, and over-correct (-0.75 DS) their manifest central refractive error. Central and peripheral refraction were measured with no lens wear and subsequently with different levels of SCL central refractive error correction. The uncorrected refractive error was myopic at all locations along the horizontal meridian. Peripheral refraction was relatively hyperopic compared to center at 30 and 35° in the temporal visual field (VF) in low myopes and at 30 and 35° in the temporal VF and 10, 30, and 35° in the nasal VF in moderate myopes. All levels of SCL correction caused a hyperopic shift in refraction at all locations in the horizontal VF. The smallest hyperopic shift was demonstrated with under-correction followed by full correction and then by over-correction of central refractive error. An increase in relative peripheral hyperopia was measured with full correction SCLs compared with no correction in both low and moderate myopes. However, no difference in relative peripheral refraction profiles was found between under-, full, and over-correction. Under-, full, and over-correction of central refractive error with single vision SCLs caused a hyperopic shift in both central and peripheral refraction at all positions in the horizontal meridian. All levels of SCL correction caused the peripheral retina, which initially experienced absolute myopic defocus at baseline with no correction, to experience absolute hyperopic defocus. This peripheral hyperopia may be a possible cause of myopia progression reported with different types and levels of myopia correction.
Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.
2015-01-01
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200
Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M
2015-04-29
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.
Evolution of CMB spectral distortion anisotropies and tests of primordial non-Gaussianity
NASA Astrophysics Data System (ADS)
Chluba, Jens; Dimastrogiovanni, Emanuela; Amin, Mustafa A.; Kamionkowski, Marc
2017-04-01
Anisotropies in distortions to the frequency spectrum of the cosmic microwave background (CMB) can be created through spatially varying heating processes in the early Universe. For instance, the dissipation of small-scale acoustic modes does create distortion anisotropies, in particular for non-Gaussian primordial perturbations. In this work, we derive approximations that allow describing the associated distortion field. We provide a systematic formulation of the problem using Fourier-space window functions, clarifying and generalizing previous approximations. Our expressions highlight the fact that the amplitudes of the spectral-distortion fluctuations induced by non-Gaussianity depend also on the homogeneous value of those distortions. Absolute measurements are thus required to obtain model-independent distortion constraints on primordial non-Gaussianity. We also include a simple description for the evolution of distortions through photon diffusion, showing that these corrections can usually be neglected. Our formulation provides a systematic framework for computing higher order correlation functions of distortions with CMB temperature anisotropies and can be extended to describe correlations with polarization anisotropies.
Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods
Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun
2016-01-01
This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses’ quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, increasing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups’ output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability. PMID:26751455
Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.
Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun
2016-01-07
This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and the quadrature correction system should be arranged independently. The process leading to quadrature error is proposed, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, and this value is 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walking (ARW) value, increasing it from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.
Weighted Feature Gaussian Kernel SVM for Emotion Recognition
Jia, Qingxuan
2016-01-01
Emotion recognition with weighted features based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method that utilizes the subregion recognition rate to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight for each. Then, we obtain a weighted feature Gaussian kernel function and construct a classifier based on a Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted feature Gaussian kernel function performs well in terms of the correct recognition rate in emotion recognition. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to the state-of-the-art methods. PMID:27807443
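A small sketch of a weighted-feature Gaussian kernel used with an SVM, assuming the subregion recognition rates have already been turned into per-feature weights: the custom kernel K(a, b) = exp(-gamma * sum_i w_i (a_i - b_i)^2) is passed to scikit-learn's SVC as a callable. The synthetic features, labels and weight values are placeholders for the facial-expression subregion features described in the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# synthetic "subregion features": 8 features, only the first 3 informative
X = rng.standard_normal((300, 8))
y = (X[:, :3].sum(axis=1) > 0).astype(int)

# weights standing in for per-subregion recognition rates
w = np.array([1.0, 0.9, 0.8, 0.2, 0.2, 0.2, 0.2, 0.2])

def weighted_rbf(A, B, gamma=0.5, weights=w):
    """K(a, b) = exp(-gamma * sum_i weights_i * (a_i - b_i)^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2 * weights).sum(axis=2)
    return np.exp(-gamma * d2)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel=weighted_rbf).fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
```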
An Analysis of College Students' Attitudes towards Error Correction in EFL Context
ERIC Educational Resources Information Center
Zhu, Honglin
2010-01-01
This article is based on a survey on the attitudes towards the error correction by their teachers in the process of teaching and learning and it is intended to improve the language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…
27 CFR 46.119 - Errors disclosed by taxpayers.
Code of Federal Regulations, 2010 CFR
2010-04-01
... that the name and address are correctly stated; if not, the taxpayer must return the stamp to the TTB officer who issued it, with a statement showing the nature of the error and the correct name or address... stamp with that of the Form 5630.5t in TTB files, correct the error if made in the TTB office, and...
ERIC Educational Resources Information Center
Alamri, Bushra; Fawzi, Hala Hassan
2016-01-01
Error correction has been one of the core areas in the field of English language teaching. It is "seen as a form of feedback given to learners on their language use" (Amara, 2015). Many studies investigated the use of different techniques to correct students' oral errors. However, only a few focused on students' preferences and attitude…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojahn, Christopher K.
2015-10-20
This HDL code (hereafter referred to as "software") implements circuitry in Xilinx Virtex-5QV Field Programmable Gate Array (FPGA) hardware. This software allows the device to self-check the consistency of its own configuration memory for radiation-induced errors. The software then provides the capability to correct any single-bit errors detected in the memory using the device's inherent circuitry, or reload corrupted memory frames when larger errors occur that cannot be corrected with the device's built-in error correction and detection scheme.
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2010-07-20
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
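A schematic one-dimensional illustration of the idea in this record, assuming uncompressed phase-history data in which a slowly drifting range error shows up as a pulse-to-pulse shift of the range profile: the shift is estimated by cross-correlating range profiles across slow time and removed as a phase correction before compression. The simulated point target, the omission of the explicit resolution-coarsening step, and all parameter values are simplifications, not the patented algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

n_pulse, n_freq = 64, 256
freqs = np.arange(n_freq)                      # normalized fast-time frequency index

# simulated uncompressed data: one point scatterer at range bin 80 plus a
# slowly drifting uncompensated range (motion) error, measured in range bins
r_err = np.cumsum(0.6 * rng.standard_normal(n_pulse))
target = np.exp(-2j * np.pi * 80.0 * freqs / n_freq)
data = target * np.exp(-2j * np.pi * np.outer(r_err, freqs) / n_freq)

# estimate the pulse-to-pulse range shift by cross-correlating range profiles
# across slow time (a real implementation could first coarsen the resolution)
profiles = np.abs(np.fft.ifft(data, axis=1))
ref_fft = np.fft.fft(profiles[0])
est = np.empty(n_pulse)
for k in range(n_pulse):
    xc = np.fft.ifft(np.fft.fft(profiles[k]) * np.conj(ref_fft))
    shift = int(np.argmax(np.abs(xc)))
    est[k] = shift if shift < n_freq // 2 else shift - n_freq

# apply the corresponding frequency/phase correction to the uncompressed data,
# after which range and azimuth compression would proceed as usual
corrected = data * np.exp(2j * np.pi * np.outer(est, freqs) / n_freq)

peaks_before = np.argmax(np.abs(np.fft.ifft(data, axis=1)), axis=1)
peaks_after = np.argmax(np.abs(np.fft.ifft(corrected, axis=1)), axis=1)
print("peak-position spread before/after correction (bins):",
      np.ptp(peaks_before), np.ptp(peaks_after))
```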
Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas
2010-08-17
Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.
NASA Astrophysics Data System (ADS)
Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.
2017-12-01
Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online correction (i.e., within the model) scheme to correct GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal, and semidiurnal model biases in GFS to reduce both systematic and random errors. As the short-term error growth is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
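A toy sketch of the online-correction logic described above, for a single scalar "grid point": analysis increments are accumulated over a free-running 6-hourly cycle, their time mean divided by 6 hr is taken as the bias-correction forcing, and the cycle is re-run with that term added to the model tendency. The one-variable model, the fixed assimilation gain and the imposed bias are all invented for illustration; none of this is the GFS implementation.

```python
import numpy as np

rng = np.random.default_rng(8)

dt_hours = 6.0
n_cycles = 400

# toy truth and a biased model for a single grid point (e.g. temperature, K)
truth = 280.0 + 2.0 * np.sin(2 * np.pi * np.arange(n_cycles) / 120.0)
model_bias_per_6h = -0.15                       # unknown deficiency to recover

def forecast_6h(state, correction=0.0):
    """One 6-h model step: drifts by the (unknown) bias, plus an optional
    online correction term added to the tendency."""
    return state + model_bias_per_6h + correction * dt_hours

# pass 1: free-running analysis cycle, accumulate analysis increments
increments = []
analysis = truth[0]
for k in range(1, n_cycles):
    background = forecast_6h(analysis)
    obs = truth[k] + 0.3 * rng.standard_normal()
    analysis = background + 0.5 * (obs - background)     # crude assimilation
    increments.append(analysis - background)

bias_correction = np.mean(increments) / dt_hours         # per-hour forcing term

# pass 2: rerun with the estimated correction added online
analysis = truth[0]
errors = []
for k in range(1, n_cycles):
    background = forecast_6h(analysis, correction=bias_correction)
    obs = truth[k] + 0.3 * rng.standard_normal()
    errors.append(background - truth[k])
    analysis = background + 0.5 * (obs - background)

print("estimated 6-h correction (≈ minus the model bias):",
      bias_correction * dt_hours)
print("mean 6-h forecast error after online correction:", np.mean(errors))
```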
ERIC Educational Resources Information Center
Abedi, Razie; Latifi, Mehdi; Moinzadeh, Ahmad
2010-01-01
This study tries to answer some ever-existent questions in writing fields regarding approaching the most effective ways to give feedback to students' errors in writing by comparing the effect of error correction and error detection on the improvement of students' writing ability. In order to achieve this goal, 60 pre-intermediate English learners…
A Systematic Error Correction Method for TOVS Radiances
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)
2000-01-01
Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
Slow-roll corrections in multi-field inflation: a separate universes approach
NASA Astrophysics Data System (ADS)
Karčiauskas, Mindaugas; Kohri, Kazunori; Mori, Taro; White, Jonathan
2018-05-01
In view of cosmological parameters being measured to ever higher precision, theoretical predictions must also be computed to an equally high level of precision. In this work we investigate the impact on such predictions of relaxing some of the simplifying assumptions often used in these computations. In particular, we investigate the importance of slow-roll corrections in the computation of multi-field inflation observables, such as the amplitude of the scalar spectrum Pζ, its spectral tilt ns, the tensor-to-scalar ratio r and the non-Gaussianity parameter fNL. To this end we use the separate universes approach and δ N formalism, which allows us to consider slow-roll corrections to the non-Gaussianity of the primordial curvature perturbation as well as corrections to its two-point statistics. In the context of the δ N expansion, we divide slow-roll corrections into two categories: those associated with calculating the correlation functions of the field perturbations on the initial flat hypersurface and those associated with determining the derivatives of the e-folding number with respect to the field values on the initial flat hypersurface. Using the results of Nakamura & Stewart '96, corrections of the first kind can be written in a compact form. Corrections of the second kind arise from using different levels of slow-roll approximation in solving for the super-horizon evolution, which in turn corresponds to using different levels of slow-roll approximation in the background equations of motion. We consider four different levels of approximation and apply the results to a few example models. The various approximations are also compared to exact numerical solutions.
Quantum steganography and quantum error-correction
NASA Astrophysics Data System (ADS)
Shaw, Bilal A.
Quantum error-correcting codes have been the cornerstone of research in quantum information science (QIS) for more than a decade. Without their conception, quantum computers would be a footnote in the history of science. When researchers embraced the idea that we live in a world where the effects of a noisy environment cannot completely be stripped away from the operations of a quantum computer, the natural way forward was to think about importing classical coding theory into the quantum arena to give birth to quantum error-correcting codes which could help in mitigating the debilitating effects of decoherence on quantum data. We first talk about the six-qubit quantum error-correcting code and show its connections to entanglement-assisted error-correcting coding theory and then to subsystem codes. This code bridges the gap between the five-qubit (perfect) and Steane codes. We discuss two methods to encode one qubit into six physical qubits. Each of the two examples corrects an arbitrary single-qubit error. The first example is a degenerate six-qubit quantum error-correcting code. We explicitly provide the stabilizer generators, encoding circuits, codewords, logical Pauli operators, and logical CNOT operator for this code. We also show how to convert this code into a non-trivial subsystem code that saturates the subsystem Singleton bound. We then prove that a six-qubit code without entanglement assistance cannot simultaneously possess a Calderbank-Shor-Steane (CSS) stabilizer and correct an arbitrary single-qubit error. A corollary of this result is that the Steane seven-qubit code is the smallest single-error correcting CSS code. Our second example is the construction of a non-degenerate six-qubit CSS entanglement-assisted code. This code uses one bit of entanglement (an ebit) shared between the sender (Alice) and the receiver (Bob) and corrects an arbitrary single-qubit error. The code we obtain is globally equivalent to the Steane seven-qubit code and thus corrects an arbitrary error on the receiver's half of the ebit as well. We prove that this code is the smallest code with a CSS structure that uses only one ebit and corrects an arbitrary single-qubit error on the sender's side. We discuss the advantages and disadvantages for each of the two codes. In the second half of this thesis we explore the yet uncharted and relatively undiscovered area of quantum steganography. Steganography is the process of hiding secret information by embedding it in an "innocent" message. We present protocols for hiding quantum information in a codeword of a quantum error-correcting code passing through a channel. Using either a shared classical secret key or shared entanglement Alice disguises her information as errors in the channel. Bob can retrieve the hidden information, but an eavesdropper (Eve) with the power to monitor the channel, but without the secret key, cannot distinguish the message from channel noise. We analyze how difficult it is for Eve to detect the presence of secret messages, and estimate rates of steganographic communication and secret key consumption for certain protocols. We also provide an example of how Alice hides quantum information in the perfect code when the underlying channel between Bob and her is the depolarizing channel. Using this scheme Alice can hide up to four stego-qubits.
Five-wave-packet quantum error correction based on continuous-variable cluster entanglement
Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi
2015-01-01
Quantum error correction protects the quantum state against noise and decoherence in quantum communication and quantum computation, which enables one to perform fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous variable cluster entangled state of light are used for five encoding channels. Especially, in our encoding scheme the information of the input state is only distributed over three of the five channels, and thus any error appearing in the remaining two channels never affects the output state, i.e. the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states and the achieved fidelities of the output states are beyond the corresponding classical limit. PMID:26498395
Mode-medium instability and its correction with a Gaussian-reflectivity mirror
NASA Technical Reports Server (NTRS)
Webster, K. L.; Sung, C. C.
1992-01-01
A high-power CO2 laser beam is known to deteriorate after a few microseconds due to a mode-medium instability (MMI) which results from an intensity-dependent heating rate related to the vibrational-to-translational decay of the upper and lower CO2 lasing levels. An iterative numerical technique is developed to model the time evolution of the beam as it is affected by the MMI. The technique is used to study the MMI in an unstable CO2 resonator with a hard-edge output mirror for different parameters like the Fresnel number and the gas density. The results show that the mode of the hard edge unstable resonator deteriorates because of the diffraction ripples in the mode. A Gaussian-reflectivity mirror was used to correct the MMI. This mirror produces a smoother intensity profile which significantly reduces the effects of the MMI. Quantitative results on peak density variation and beam quality are presented.
Mode-medium instability and its correction with a Gaussian reflectivity mirror
NASA Technical Reports Server (NTRS)
Webster, K. L.; Sung, C. C.
1990-01-01
A high power CO2 laser beam is known to deteriorate after a few microseconds due to a mode-medium instability (MMI) which results from an intensity dependent heating rate related to the vibrational-to-translational decay of the upper and lower CO2 lasing levels. An iterative numerical technique is developed to model the time evolution of the beam as it is affected by the MMI. The technique is used to study the MMI in an unstable CO2 resonator with a hard-edge output mirror for different parameters like the Fresnel number and the gas density. The results show that the mode of the hard edge unstable resonator deteriorates because of the diffraction ripples in the mode. A Gaussian-reflectivity mirror was used to correct the MMI. This mirror produces a smoother intensity profile which significantly reduces the effects of the MMI. Quantitative results on peak density variation and beam quality are presented.
Error Correction for the JLEIC Ion Collider Ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Guohui; Morozov, Vasiliy; Lin, Fanglei
2016-05-01
The sensitivity to misalignment, magnet strength error, and BPM noise is investigated in order to specify design tolerances for the ion collider ring of the Jefferson Lab Electron Ion Collider (JLEIC) project. Those errors, including horizontal, vertical, longitudinal displacement, roll error in transverse plane, strength error of main magnets (dipole, quadrupole, and sextupole), BPM noise, and strength jitter of correctors, cause closed orbit distortion, tune change, beta-beat, coupling, chromaticity problem, etc. These problems generally reduce the dynamic aperture at the Interaction Point (IP). According to real commissioning experiences in other machines, closed orbit correction, tune matching, beta-beat correction, decoupling, and chromaticity correction have been done in the study. Finally, we find that the dynamic aperture at the IP is restored. This paper describes that work.
NASA Astrophysics Data System (ADS)
Zhu, Tao; Wang, Anzhong; Kirsten, Klaus; Cleaver, Gerald; Sheng, Qin
2018-02-01
Loop quantum cosmology provides a resolution of the classical big bang singularity in the deep Planck era. The evolution, prior to the usual slow-roll inflation, naturally generates excited states at the onset of the slow-roll inflation. It is expected that these quantum gravitational effects could leave their fingerprints on the primordial perturbation spectrum and non-Gaussianity, and lead to observational evidence in the cosmic microwave background. While the impact of the quantum effects on the primordial perturbation spectrum has already been studied and constrained by current data, in this paper we continue to study such effects, now on the non-Gaussianity of the primordial curvature perturbations. We present detailed and analytical calculations of the non-Gaussianity and show explicitly that the corrections due to the quantum effects are of the same magnitude as the slow-roll parameters on observable scales and thus are well within current observational constraints. Despite this, we show that the non-Gaussianity in the squeezed limit can be enhanced at superhorizon scales, and it is these effects that can yield a large statistical anisotropy in the power spectrum through the Erickcek-Kamionkowski-Carroll mechanism.
Studies on system and measuring method of far-field beam divergency in near field by Ronchi ruling
NASA Astrophysics Data System (ADS)
Zhou, Chenbo; Yang, Li; Ma, Wenli; Yan, Peiying; Fan, Tianquan; He, Shangfeng
1996-10-01
Until now, a propagation distance of seven Rayleigh ranges or more has been needed to measure the far-field divergence of a Gaussian beam. This is inconvenient for determining the output beam divergence of industrial products such as He-Ne lasers, and the measuring setup occupies a large space, so the measurement and its accuracy are strongly influenced by the environment. The application of a Ronchi ruling to the near-field measurement of the far-field divergence of a Gaussian beam is analyzed in this paper. Theoretical research and experiments show that this measuring method is convenient for industrial application. The measuring system consists of a precision mechanical unit that scans the Gaussian beam with a micro-displaced Ronchi ruling, a signal sampling system, a single-chip microcomputer data processing system, and an electronic unit with a microprinter output. The system is stable and its repeatability errors are low. The spot size and far-field divergence of a visible Gaussian laser beam can be measured with the system.
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
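A minimal sketch of the slope-method fit with inverse-variance weighting that the abstract identifies as the maximum likelihood estimator for Gaussian noise: the log of the range-corrected signal is fit to a straight line using weighted normal equations, and the extinction coefficient is read off the slope. The synthetic lidar profile and the constant additive noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# synthetic single-wavelength lidar return from a homogeneous atmosphere
r = np.linspace(200.0, 2000.0, 200)              # range gates (m)
alpha_true, c_true = 2.0e-4, 1.0e9               # extinction (1/m), system constant
power = c_true * np.exp(-2 * alpha_true * r) / r**2
noise_sd = 5.0                                   # additive Gaussian detector noise
signal = power + noise_sd * rng.standard_normal(r.size)

# slope method: fit ln(S r^2) = ln C - 2 alpha r, weighted by inverse variance
y = np.log(signal * r**2)
y_var = (noise_sd / signal) ** 2                 # variance propagated through the log
w = 1.0 / y_var
A = np.column_stack([np.ones_like(r), r])
W = np.diag(w)
coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y) # weighted normal equations
alpha_hat = -coef[1] / 2.0
print("true:", alpha_true, " retrieved:", alpha_hat)
```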
Clark, Jeremy S C; Kaczmarczyk, Mariusz; Mongiało, Zbigniew; Ignaczak, Paweł; Czajkowski, Andrzej A; Klęsk, Przemysław; Ciechanowicz, Andrzej
2013-08-01
Gompertz-related distributions have dominated mortality studies for 187 years. However, distributions unrelated to the Gompertz also fit mortality data well. These compete with the Gompertz and Gompertz-Makeham distributions when applied to data with varying extents of truncation, with no consensus as to preference. In contrast, Gaussian-related distributions are rarely applied, despite the fact that Lexis in 1879 suggested that the normal distribution itself fits well to the right of the mode. The aims of this study were therefore to compare skew-t fits to Human Mortality Database data with Gompertz-nested distributions, by implementing maximum likelihood estimation functions (mle2, R package bbmle; coding given). Results showed that skew-t fits obtained lower Bayesian information criterion values than Gompertz-nested distributions when applied to low-mortality country data, including the 1711 and 1810 cohorts. As Gaussian-related distributions have now been found to have almost universal application to error theory, one conclusion could be that a Gaussian-related distribution might replace Gompertz-related distributions as the basis for mortality studies.
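The study itself uses R (mle2 from bbmle); the sketch below is a rough Python analogue of one piece of the workflow, assuming SciPy's parameterization of the Gompertz distribution: fit by maximum likelihood, compute the Bayesian information criterion, and compare against a plain normal fit in the spirit of Lexis's observation. The synthetic ages at death and the starting values are placeholders, and the skew-t competitor is omitted since SciPy ships no standard skew-t distribution.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(9)

# synthetic adult ages at death (years past age 30), Gompertz-distributed
ages = stats.gompertz.rvs(c=0.002, scale=10.0, size=5000, random_state=rng)

def neg_loglik(params, data):
    """Negative log-likelihood of a Gompertz model (scipy parameterization:
    shape c and scale; the hazard a*exp(b*x) corresponds to b=1/scale, a=c/scale)."""
    log_c, log_scale = params
    return -np.sum(stats.gompertz.logpdf(data, c=np.exp(log_c),
                                         scale=np.exp(log_scale)))

res = optimize.minimize(neg_loglik, x0=[np.log(0.01), np.log(5.0)],
                        args=(ages,), method="Nelder-Mead")
k = len(res.x)
bic_gompertz = k * np.log(ages.size) + 2 * res.fun

# competing Gaussian-related fit (here a plain normal, per Lexis's observation)
mu, sd = stats.norm.fit(ages)
bic_normal = 2 * np.log(ages.size) - 2 * np.sum(stats.norm.logpdf(ages, mu, sd))

print("BIC Gompertz:", bic_gompertz, " BIC normal:", bic_normal)
```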
New Class of Quantum Error-Correcting Codes for a Bosonic Mode
NASA Astrophysics Data System (ADS)
Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.
2016-07-01
We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
NASA Astrophysics Data System (ADS)
Guo, Jinyun; Li, Wudong; Chang, Xiaotao; Zhu, Guangbin; Liu, Xin; Guo, Bin
2018-04-01
Water resource management is crucial for the economic and social development of Xinjiang, an arid area in Northwest China. In this paper, the time variations of Gravity Recovery and Climate Experiment (GRACE)-derived monthly gravity field models from January 2003 to December 2013 are analysed to study the terrestrial water storage (TWS) changes in Xinjiang using multichannel singular spectrum analysis (MSSA) with a Gaussian smoothing radius of 400 km. As an extension of singular spectrum analysis (SSA), MSSA is more flexible for multivariate time-series in terms of estimating periodic components and trends, reducing noise and identifying patterns of similar spatiotemporal behaviour, thanks to the data-adaptive nature of the base functions. Combining MSSA with a Gaussian filter not only removes the north-south striping errors in the GRACE solutions but also reduces the leakage errors, increasing the signal-to-noise ratio compared with the traditional procedure, that is, the empirical decorrelation method followed by Gaussian filtering. The spatiotemporal characteristics of TWS changes in Xinjiang were validated against the Global Land Data Assimilation System, the Climate Prediction Center and in-situ precipitation data. The water storage in Xinjiang shows relatively large fluctuations from January 2003 to December 2013, with a drop from January 2006 to December 2008 due to a drought event and an obvious rise from January 2009 to December 2010 because of high precipitation. Spatially, the TWS has been increasing in southern Xinjiang but decreasing in northern Xinjiang. The minimum rate of water storage change is -4.4 mm yr^-1, occurring in the central Tianshan Mountains.
Target Uncertainty Mediates Sensorimotor Error Correction.
Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M
2017-01-01
Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects' scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one's response. By suggesting that subjects' decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated.
Refractive error and presbyopia among adults in Fiji.
Brian, Garry; Pearce, Matthew G; Ramke, Jacqueline
2011-04-01
To characterize refractive error, presbyopia and their correction among adults aged ≥ 40 years in Fiji, and contribute to a regional overview of these conditions. A population-based cross-sectional survey using multistage cluster random sampling. Presenting distance and near vision were measured and dilated slitlamp examination performed. The survey achieved 73.0% participation (n=1381). Presenting binocular distance vision ≥ 6/18 was achieved by 1223 participants. Another 79 had vision impaired by refractive error. Three of these were blind. At threshold 6/18, 204 participants had refractive error. Among these, 125 had spectacle-corrected presenting vision ≥ 6/18 ("met refractive error need"); 79 presented wearing no (n=74) or under-correcting (n=5) distance spectacles ("unmet refractive error need"). Presenting binocular near vision ≥ N8 was achieved by 833 participants. At threshold N8, 811 participants had presbyopia. Among these, 336 attained N8 with presenting near spectacles ("met presbyopia need"); 475 presented with no (n=402) or under-correcting (n=73) near spectacles ("unmet presbyopia need"). Rural residence was predictive of unmet refractive error (p=0.040) and presbyopia (p=0.016) need. Gender and household income source were not. Ethnicity-gender-age-domicile-adjusted to the Fiji population aged ≥ 40 years, "met refractive error need" was 10.3% (95% confidence interval [CI] 8.7-11.9%), "unmet refractive error need" was 4.8% (95%CI 3.6-5.9%), "refractive error correction coverage" was 68.3% (95%CI 54.4-82.2%),"met presbyopia need" was 24.6% (95%CI 22.4-26.9%), "unmet presbyopia need" was 33.8% (95%CI 31.3-36.3%), and "presbyopia correction coverage" was 42.2% (95%CI 37.6-46.8%). Fiji refraction and dispensing services should encourage uptake by rural dwellers and promote presbyopia correction. Lack of comparable data from neighbouring countries prevents a regional overview.
Coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ε^{-(d-1)} error correction cycles. Here ε << 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.
ERIC Educational Resources Information Center
Duerdoth, Ian
2009-01-01
The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…
ERIC Educational Resources Information Center
Teba, Sourou Corneille
2017-01-01
The aim of this paper is firstly, to make teachers correct thoroughly students' errors with effective strategies. Secondly, it is an attempt to find out if teachers are interested themselves in errors correction in Beninese secondary schools. Finally, I would like to point out the effective strategies that an EFL teacher can use for errors…
Errors, error detection, error correction and hippocampal-region damage: data and theories.
MacKay, Donald G; Johnson, Laura W
2013-11-01
This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests.
Impact of mismatched and misaligned laser light sheet profiles on PIV performance
NASA Astrophysics Data System (ADS)
Grayson, K.; de Silva, C. M.; Hutchins, N.; Marusic, I.
2018-01-01
The effect of mismatched or misaligned laser light sheet profiles on the quality of particle image velocimetry (PIV) results is considered in this study. Light sheet profiles with differing widths, shapes, or alignment can reduce the correlation between PIV images and increase experimental errors. Systematic PIV simulations isolate these behaviours to assess the sensitivity and implications of light sheet mismatch on measurements. The simulations in this work use flow fields from a turbulent boundary layer; however, the behaviours and impacts of laser profile mismatch are highly relevant to any fluid flow or PIV application. Experimental measurements from a turbulent boundary layer facility are incorporated, as well as additional simulations matched to experimental image characteristics, to validate the synthetic image analysis. Experimental laser profiles are captured using a modular laser profiling camera, designed to quantify the distribution of laser light sheet intensities and inform any corrective adjustments to an experimental configuration. Results suggest that an offset of just 1.35 standard deviations in the Gaussian light sheet intensity distributions can cause a 40% reduction in the average correlation coefficient and a 45% increase in spurious vectors. Errors in measured flow statistics are also amplified when two successive laser profiles are no longer well matched in alignment or intensity distribution. Consequently, an awareness of how laser light sheet overlap influences PIV results can guide faster setup of an experiment, as well as achieve superior experimental measurements.
A Range Correction for Icesat and Its Potential Impact on Ice-sheet Mass Balance Studies
NASA Technical Reports Server (NTRS)
Borsa, A. A.; Moholdt, G.; Fricker, H. A.; Brunt, Kelly M.
2014-01-01
We report on a previously undocumented range error in NASA's Ice, Cloud and land Elevation Satellite (ICESat) that degrades elevation precision and introduces a small but significant elevation trend over the ICESat mission period. This range error (the Gaussian-Centroid or 'G-C' offset) varies on a shot-to-shot basis and exhibits increasing scatter when laser transmit energies fall below 20 mJ. Although the G-C offset is uncorrelated over periods less than 1 day, it evolves over the life of each of ICESat's three lasers in a series of ramps and jumps that give rise to spurious elevation trends of -0.92 to -1.90 cm yr^-1, depending on the time period considered. Using ICESat data over the Ross and Filchner-Ronne ice shelves we show that (1) the G-C offset introduces significant biases in ice-shelf mass balance estimates, and (2) the mass balance bias can vary between regions because of different temporal samplings of ICESat. We can reproduce the effect of the G-C offset over these two ice shelves by fitting trends to sample-weighted mean G-C offsets for each campaign, suggesting that it may not be necessary to fully repeat earlier ICESat studies to determine the impact of the G-C offset on ice-sheet mass balance estimates.
Verveer, P. J; Gemkow, M. J; Jovin, T. M
1999-01-01
We have compared different image restoration approaches for fluorescence microscopy. The most widely used algorithms were classified with a Bayesian theory according to the assumed noise model and the type of regularization imposed. We considered both Gaussian and Poisson models for the noise in combination with Tikhonov regularization, entropy regularization, Good's roughness, and without regularization (maximum likelihood estimation). Simulations of fluorescence confocal imaging were used to examine the different noise models and regularization approaches using the mean squared error criterion. The assumption of a Gaussian noise model yielded only slightly higher errors than the Poisson model. Good's roughness was the best choice for the regularization. Furthermore, we compared simulated confocal and wide-field data. In general, restored confocal data are superior to restored wide-field data, but given a sufficiently higher signal level for the wide-field data, the restoration result may rival confocal data in quality. Finally, a visual comparison of experimental confocal and wide-field data is presented.
Efficiency of single-particle engines
NASA Astrophysics Data System (ADS)
Proesmans, Karel; Driesen, Cedric; Cleuren, Bart; Van den Broeck, Christian
2015-09-01
We study the efficiency of a single-particle Szilard and Carnot engine. Within a first order correction to the quasistatic limit, the work distribution is found to be Gaussian and the correction factor to average work and efficiency only depends on the piston speed. The stochastic efficiency is studied for both models and the recent findings on efficiency fluctuations are confirmed numerically. Special features are revealed in the zero-temperature limit.
A real-time multi-scale 2D Gaussian filter based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin
2014-11-01
Multi-scale 2-D Gaussian filters are widely used in feature extraction (e.g. SIFT, edge detection), image segmentation, image enhancement, image noise removal, and multi-scale shape description. However, their computational complexity remains an issue for real-time image processing systems. To address this problem, we propose a framework for a multi-scale 2-D Gaussian filter based on FPGA. First, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Second, to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Third, a dedicated first-in first-out memory, named CAFIFO (Column Addressing FIFO), was designed to avoid error propagation induced by glitches on the clock. Finally, a shared-memory framework was designed to reduce memory costs. As a demonstration, we realized a three-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute multi-scale 2-D Gaussian filtering within one pixel clock period and is therefore suitable for real-time image processing. Moreover, the main principle can be extended to other convolution-based operators, such as the Gabor filter and the Sobel operator.
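The separability trick mentioned above (the second step) is easy to show in software, independently of the FPGA implementation: a K×K Gaussian convolution factors into a horizontal and a vertical 1-D pass, reducing the per-pixel multiply count from K² to 2K. The sketch below is a plain NumPy illustration with arbitrary scales, not the paper's hardware design.

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def separable_gaussian_filter(img, sigma):
    radius = int(3 * sigma)
    k = gaussian_kernel_1d(sigma, radius)
    pad = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, pad)   # row pass
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, rows)  # column pass

img = np.random.default_rng(0).random((64, 64))
for sigma in (1.0, 2.0, 4.0):            # one pass per scale of the multi-scale filter
    print(sigma, separable_gaussian_filter(img, sigma).shape)
```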
Novel theory for propagation of tilted Gaussian beam through aligned optical system
NASA Astrophysics Data System (ADS)
Xia, Lei; Gao, Yunguo; Han, Xudong
2017-03-01
A novel theory for tilted beam propagation is established in this paper. By setting the propagation direction of the tilted beam as the new optical axis, we establish a virtual optical system that is aligned with the new optical axis. Within the first order approximation of the tilt and off-axis, the propagation of the tilted beam is studied in the virtual system instead of the actual system. To achieve more accurate optical field distributions of tilted Gaussian beams, a complete diffraction integral for a misaligned optical system is derived by using the matrix theory with angular momentums. The theory demonstrates that a tilted TEM00 Gaussian beam passing through an aligned optical element transforms into a decentered Gaussian beam along the propagation direction. The deviations between the peak intensity axis of the decentered Gaussian beam and the new optical axis have linear relationships with the misalignments in the virtual system. ZEMAX simulation of a tilted beam through a thick lens exposed to air shows that the errors between the simulation results and theoretical calculations of the position deviations are less than 2‰ when the misalignments εx, εy, εx', εy' are in the range of [-0.5, 0.5] mm and [-0.5, 0.5]°.
Aperture averaging and BER for Gaussian beam in underwater oceanic turbulence
NASA Astrophysics Data System (ADS)
Gökçe, Muhsin Caner; Baykal, Yahya
2018-03-01
In an underwater wireless optical communication (UWOC) link, power fluctuations over a finite-sized collecting lens are investigated for a horizontally propagating Gaussian beam wave. The power scintillation index, also known as the irradiance flux variance, for the received irradiance is evaluated in weak oceanic turbulence by using the Rytov method. This lets us further quantify the associated performance indicators, namely, the aperture averaging factor and the average bit-error rate (BER).
Image denoising in mixed Poisson-Gaussian noise.
Luisier, Florian; Blu, Thierry; Unser, Michael
2011-03-01
We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
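A toy sketch of the LET idea underlying the method: the estimate is a linear combination of a few elementary thresholding functions, and the combination weights come from a small linear system. For brevity the sketch thresholds directly in the signal domain and minimizes the oracle MSE against a known clean signal instead of the unbiased Poisson-Gaussian risk estimate (PURE) used by the authors, so it only illustrates the structure of the optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 5.0, 0.0, 8.0, 0.0], 200)                 # piecewise-constant signal
noisy = rng.poisson(clean) + rng.normal(0.0, 1.0, clean.size)     # mixed Poisson-Gaussian noise

def soft(y, t):                                                    # soft-threshold function
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

# Linear expansion of thresholds: identity plus two soft-thresholds.
F = np.column_stack([noisy, soft(noisy, 1.0), soft(noisy, 3.0)])

# Weights minimizing ||F a - clean||^2 (3x3 normal equations; PURE would replace 'clean').
a = np.linalg.solve(F.T @ F, F.T @ clean)
denoised = F @ a

mse = lambda x: np.mean((x - clean) ** 2)
print("noisy MSE   :", mse(noisy))
print("denoised MSE:", mse(denoised))
```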
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813
Yanai, Takeshi; Fann, George I.; Beylkin, Gregory; ...
2015-02-25
We present a fully numerical method, based on a multiresolution analysis (MRA) approach, for time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) within the Tamm–Dancoff (TD) approximation. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied to calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales, ranging from short-range valence excitations to long-range Rydberg-type ones, are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations the excited states are correctly bound.
Li, Na; Li, Xiu-Ying; Zou, Zhe-Xiang; Lin, Li-Rong; Li, Yao-Qun
2011-07-07
In the present work, a baseline-correction method based on peak-to-derivative baseline measurement was proposed for the elimination of complex matrix interference that was mainly caused by unknown components and/or background in the analysis of derivative spectra. This novel method was applicable particularly when the matrix interfering components showed a broad spectral band, which was common in practical analysis. The derivative baseline was established by connecting two crossing points of the spectral curves obtained with a standard addition method (SAM). The applicability and reliability of the proposed method were demonstrated through both theoretical simulation and practical application. Firstly, Gaussian bands were used to simulate 'interfering' and 'analyte' bands to investigate the effect of different parameters of the interfering band on the derivative baseline. This simulation analysis verified that the accuracy of the proposed method was remarkably better than that of other conventional methods such as peak-to-zero, tangent, and peak-to-peak measurements. The proposed baseline-correction method was then applied to the determination of benzo(a)pyrene (BaP) in vegetable oil samples by second-derivative synchronous fluorescence spectroscopy. Satisfactory results were obtained by using this new method to analyze a certified reference material (coconut oil, BCR(®)-458), with a relative error of -3.2% from the certified BaP concentration. Potentially, the proposed method can be applied to various types of derivative spectra in different fields such as UV-visible absorption spectroscopy, fluorescence spectroscopy and infrared spectroscopy.
Overcoming Sequence Misalignments with Weighted Structural Superposition
Khazanov, Nickolay A.; Damm-Ganamet, Kelly L.; Quang, Daniel X.; Carlson, Heather A.
2012-01-01
An appropriate structural superposition identifies similarities and differences between homologous proteins that are not evident from sequence alignments alone. We have coupled our Gaussian-weighted RMSD (wRMSD) tool with a sequence aligner and seed extension (SE) algorithm to create a robust technique for overlaying structures and aligning sequences of homologous proteins (HwRMSD). HwRMSD overcomes errors in the initial sequence alignment that would normally propagate into a standard RMSD overlay. SE can generate a corrected sequence alignment from the improved structural superposition obtained by wRMSD. HwRMSD’s robust performance and its superiority over standard RMSD are demonstrated over a range of homologous proteins. Its better overlay results in corrected sequence alignments with good agreement to HOMSTRAD. Finally, HwRMSD is compared to established structural alignment methods: FATCAT, SSM, CE, and Dalilite. Most methods are comparable at placing residue pairs within 2 Å, but HwRMSD places many more residue pairs within 1 Å, providing a clear advantage. Such high accuracy is essential in drug design, where small distances can have a large impact on computational predictions. This level of accuracy is also needed to correct sequence alignments in an automated fashion, especially for omics-scale analysis. HwRMSD can align homologs with low sequence identity and large conformational differences, cases where both sequence-based and structural-based methods may fail. The HwRMSD pipeline overcomes the dependency of structural overlays on initial sequence pairing and removes the need to determine the best sequence-alignment method, substitution matrix, and gap parameters for each unique pair of homologs. PMID:22733542
Bertholet, Jenny; Worm, Esben; Høyer, Morten; Poulsen, Per
2017-06-01
Accurate patient positioning is crucial in stereotactic body radiation therapy (SBRT) due to a high dose regimen. Cone-beam computed tomography (CBCT) is often used for patient positioning based on radio-opaque markers. We compared six CBCT-based set-up strategies with or without rotational correction. Twenty-nine patients with three implanted markers received 3-6 fraction liver SBRT. The markers were delineated on the mid-ventilation phase of a 4D-planning-CT. One pretreatment CBCT was acquired per fraction. Set-up strategy 1 used only translational correction based on manual marker match between the CBCT and planning CT. Set-up strategy 2 used automatic 6 degrees-of-freedom registration of the vertebrae closest to the target. The 3D marker trajectories were also extracted from the projections and the mean position of each marker was calculated and used for set-up strategies 3-6. Translational correction only was used for strategy 3. Translational and rotational corrections were used for strategies 4-6 with the rotation being either vertebrae based (strategy 4), or marker based and constrained to ±3° (strategy 5) or unconstrained (strategy 6). The resulting set-up error was calculated as the 3D root-mean-square set-up error of the three markers. The set-up error of the spinal cord was calculated for all strategies. The bony anatomy set-up (2) had the largest set-up error (5.8 mm). The marker-based set-up with unconstrained rotations (6) had the smallest set-up error (0.8 mm) but the largest spinal cord set-up error (12.1 mm). The marker-based set-up with translational correction only (3) or with bony anatomy rotational correction (4) had equivalent set-up error (1.3 mm) but rotational correction reduced the spinal cord set-up error from 4.1 mm to 3.5 mm. Marker-based set-up was substantially better than bony-anatomy set-up. Rotational correction may improve the set-up, but further investigations are required to determine the optimal correction strategy.
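For clarity, the figure of merit used in this comparison can be sketched as follows: the 3D root-mean-square set-up error over the three implanted markers, i.e. the RMS of the 3D distances between planned and residual (post-correction) marker positions. The coordinates below are invented for illustration.

```python
import numpy as np

def rms_setup_error(planned, observed):
    """3D RMS of per-marker distances between planned and observed positions (mm)."""
    d = np.linalg.norm(observed - planned, axis=1)
    return np.sqrt(np.mean(d**2))

planned = np.array([[12.0, -35.0,  80.0],
                    [18.5, -28.0,  95.0],
                    [ 9.0, -40.5, 110.0]])           # hypothetical marker positions (mm)
residual = np.array([[0.9, -0.4, 0.6],
                     [1.1, -0.2, 0.8],
                     [0.7, -0.6, 0.5]])              # residual displacement after set-up
print(rms_setup_error(planned, planned + residual))  # ~1.2 mm
```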
"Ser" and "Estar": Corrective Input to Children's Errors of the Spanish Copula Verbs
ERIC Educational Resources Information Center
Holtheuer, Carolina; Rendle-Short, Johanna
2013-01-01
Evidence for the role of corrective input as a facilitator of language acquisition is inconclusive. Studies show links between corrective input and grammatical use of some, but not other, language structures. The present study examined relationships between corrective parental input and children's errors in the acquisition of the Spanish copula…
Exposed and Embedded Corrections in Aphasia Therapy: Issues of Voice and Identity
ERIC Educational Resources Information Center
Simmons-Mackie, Nina; Damico, Jack S.
2008-01-01
Background: Because communication after the onset of aphasia can be fraught with errors, therapist corrections are pervasive in therapy for aphasia. Although corrections are designed to improve the accuracy of communication, some corrections can have social and emotional consequences during interactions. That is, exposure of errors can potentially…
Error-correcting codes on scale-free networks
NASA Astrophysics Data System (ADS)
Kim, Jung-Hoon; Ko, Young-Jo
2004-06-01
We investigate the potential of scale-free networks as error-correcting codes. We find that irregular low-density parity-check codes with the highest performance known to date have degree distributions well fitted by a power-law function p (k) ˜ k-γ with γ close to 2, which suggests that codes built on scale-free networks with appropriate power exponents can be good error-correcting codes, with a performance possibly approaching the Shannon limit. We demonstrate for an erasure channel that codes with a power-law degree distribution of the form p (k) = C (k+α)-γ , with k⩾2 and suitable selection of the parameters α and γ , indeed have very good error-correction capabilities.
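A small sketch of drawing node degrees from the truncated power-law profile quoted above, p(k) = C·(k+α)^(−γ) for k ≥ 2; the cutoff k_max and the parameter values are assumptions for illustration, and constructing the actual parity-check graph from these degrees is not shown.

```python
import numpy as np

def sample_degrees(n, alpha, gamma, k_max=200, rng=None):
    rng = rng or np.random.default_rng()
    k = np.arange(2, k_max + 1)
    p = (k + alpha) ** (-gamma)
    p /= p.sum()                                  # plays the role of the constant C
    return rng.choice(k, size=n, p=p)

degrees = sample_degrees(n=10_000, alpha=0.5, gamma=2.1)
print("mean degree:", degrees.mean())
print("fraction of degree-2 nodes:", np.mean(degrees == 2))
```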
Passive quantum error correction of linear optics networks through error averaging
NASA Astrophysics Data System (ADS)
Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.
2018-02-01
We propose and investigate a method of error detection and noise correction for bosonic linear networks based on unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof-of-principle examples, including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and to probe the related error thresholds. Finally, we discuss some of the potential uses of this scheme.
On the Discriminant Analysis in the 2-Populations Case
NASA Astrophysics Data System (ADS)
Rublík, František
2008-01-01
The empirical Bayes Gaussian rule, which in the normal case yields good values of the probability of total error, may yield high values of the maximum probability of error. From this point of view, the presented modified version of the classification rule of Broffitt, Randles and Hogg appears to be superior. The modification included in this paper is termed the WR method, and the choice of its weights is discussed. The mentioned methods are also compared with the K nearest neighbours classification rule.
NASA Technical Reports Server (NTRS)
Warner, Joseph D.; Theofylaktos, Onoufrios
2012-01-01
A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
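A hedged sketch of the final step described above: with the mean and standard deviation in hand, a Gaussian noise model gives the bit error rate through the normal tail integral. Treating the mean-to-standard-deviation ratio as the decision distance is an assumption of this sketch; the paper's exact mapping from S-parameters to that ratio is not reproduced.

```python
import math

def ber_from_gaussian(mu, sigma):
    """BER = Q(mu/sigma) = 0.5*erfc(mu / (sigma*sqrt(2))) for Gaussian noise."""
    return 0.5 * math.erfc(mu / (sigma * math.sqrt(2.0)))

# Example: mean decision distance 0.35 (linear units) with noise std 0.05
print(ber_from_gaussian(0.35, 0.05))   # ~1.3e-12
```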
Errata report on Herbert Goldstein's Classical Mechanics: Second edition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Unseren, M.A.; Hoffman, F.M.
This report describes errors in Herbert Goldstein's textbook Classical Mechanics, Second Edition (Copyright 1980, ISBN 0-201-02918-9). Some of the errors in current printings of the text were corrected in the second printing; however, after communicating with Addison Wesley, the publisher for Classical Mechanics, it was discovered that the corrected galley proofs had been lost by the printer and that no one had complained of any errors in the eleven years since the second printing. The errata sheet corrects errors from all printings of the second edition.
Entanglement renormalization, quantum error correction, and bulk causality
NASA Astrophysics Data System (ADS)
Kim, Isaac H.; Kastoryano, Michael J.
2017-04-01
Entanglement renormalization can be viewed as an encoding circuit for a family of approximate quantum error correcting codes. The logical information becomes progressively more well-protected against erasure errors at larger length scales. In particular, an approximate variant of a holographic quantum error correcting code emerges at low energy for critical systems. This implies that two operators that are largely separated in scales behave as if they are spatially separated operators, in the sense that they obey a Lieb-Robinson type locality bound under a time evolution generated by a local Hamiltonian.
A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy.
Boswell, Sarah A; Jeraj, Robert; Ruchala, Kenneth J; Olivera, Gustavo H; Jaradat, Hazim A; James, Joshua A; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T Rock
2005-06-01
An accurate means of determining and correcting for daily patient setup errors is important to the cancer outcome in radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant, thus rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle.
Taylor, C; Parker, J; Stratford, J; Warren, M
2018-05-01
Although all systematic and random positional setup errors can be corrected for in their entirety during on-line image-guided radiotherapy, the use of a specified action level, below which no correction occurs, is also an option. This service evaluation aimed to investigate the use of a 3 mm action level for on-line image assessment and correction (online, systematic set-up error and weekly evaluation) for lower extremity sarcoma, and to understand the impact on imaging frequency and patient positioning error within one cancer centre. All patients were immobilised using a thermoplastic shell attached to a plastic base and an individual moulded footrest. A retrospective analysis of 30 patients was performed. Patient setup and correctional data derived from cone beam CT analysis were retrieved. The timing, frequency and magnitude of corrections were evaluated, and the population systematic and random errors were derived. 20% of patients had no systematic corrections over the duration of treatment, and 47% had one. The maximum number of systematic corrections per course of radiotherapy was 4, which occurred for 2 patients. 34% of correction episodes occurred within the first 5 fractions. All patients had at least one observed translational error during their treatment greater than 0.3 cm, and 80% of patients had at least one greater than 0.5 cm. The population systematic error was 0.14 cm, 0.10 cm and 0.14 cm and the random error was 0.27 cm, 0.22 cm and 0.23 cm in the lateral, caudocranial and anteroposterior directions, respectively. The required Planning Target Volume margin for the study population was 0.55 cm, 0.41 cm and 0.50 cm in the lateral, caudocranial and anteroposterior directions. The 3 mm action level for image assessment and correction prior to delivery reduced the imaging burden and focussed intervention on patients who exhibited greater positional variability. This strategy could be an efficient deployment of departmental resources if full daily correction of positional setup error is not possible.
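The quoted margins are consistent with the widely used van Herk recipe, margin ≈ 2.5Σ + 0.7σ, with Σ the population systematic error and σ the random error; whether the authors used exactly this recipe is an assumption of the short check below.

```python
systematic = {"lateral": 0.14, "caudocranial": 0.10, "anteroposterior": 0.14}  # Sigma (cm)
random_err = {"lateral": 0.27, "caudocranial": 0.22, "anteroposterior": 0.23}  # sigma (cm)

for axis in systematic:
    margin = 2.5 * systematic[axis] + 0.7 * random_err[axis]
    # Prints 0.54, 0.40, 0.51 cm, matching the quoted 0.55, 0.41, 0.50 cm to within rounding.
    print(f"{axis}: {margin:.2f} cm")
```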
Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions
NASA Astrophysics Data System (ADS)
Chen, N.; Majda, A.
2017-12-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from the traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
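A schematic 1-D toy of the hybrid idea described above: each ensemble member carries an entire Gaussian (standing in for the closed-form conditional Gaussian of the actual algorithm), and the PDF estimate is their equal-weight mixture, which is why far fewer members are needed than for a plain particle estimate. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "conditional Gaussian" statistics carried by a small ensemble.
n_ens = 100
means = rng.standard_t(df=3, size=n_ens)          # heavy-tailed spread of conditional means
variances = 0.3 + 0.2 * rng.random(n_ens)         # one conditional variance per member

def mixture_pdf(x, means, variances):
    x = np.atleast_1d(x)[:, None]
    comps = np.exp(-(x - means) ** 2 / (2 * variances)) / np.sqrt(2 * np.pi * variances)
    return comps.mean(axis=1)                     # equal-weight Gaussian mixture

xs = np.linspace(-10.0, 10.0, 401)
pdf = mixture_pdf(xs, means, variances)
print("integral ~ 1:", np.sum(pdf) * (xs[1] - xs[0]))   # crude normalization check
```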
Development of a 3-D Pen Input Device
2008-09-01
…navigation frame of a unistroke, which can be written on any surface or in the air while correcting integration errors from the measurements of the IMU (Inertial Measurement Unit) of the…
ERIC Educational Resources Information Center
Rice, Bart F.; Wilde, Carroll O.
It is noted that with the prominence of computers in today's technological society, digital communication systems have become widely used in a variety of applications. Some of the problems that arise in digital communications systems are described. This unit presents the problem of correcting errors in such systems. Error correcting codes are…
Quantum cryptography: individual eavesdropping with the knowledge of the error-correcting protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horoshko, D B
2007-12-31
The quantum key distribution protocol BB84 combined with the repetition protocol for error correction is analysed from the point of view of its security against individual eavesdropping relying on quantum memory. It is shown that the mere knowledge of the error-correcting protocol changes the optimal attack and provides the eavesdropper with additional information on the distributed key. (Fifth Seminar in Memory of D.N. Klyshko)
A Gaussian Model-Based Probabilistic Approach for Pulse Transit Time Estimation.
Jang, Dae-Geun; Park, Seung-Hun; Hahn, Minsoo
2016-01-01
In this paper, we propose a new probabilistic approach to pulse transit time (PTT) estimation using a Gaussian distribution model. It is motivated basically by the hypothesis that PTTs normalized by RR intervals follow the Gaussian distribution. To verify the hypothesis, we demonstrate the effects of arterial compliance on the normalized PTTs using the Moens-Korteweg equation. Furthermore, we observe a Gaussian distribution of the normalized PTTs on real data. In order to estimate the PTT using the hypothesis, we first assumed that R-waves in the electrocardiogram (ECG) can be correctly identified. The R-waves limit searching ranges to detect pulse peaks in the photoplethysmogram (PPG) and to synchronize the results with cardiac beats--i.e., the peaks of the PPG are extracted within the corresponding RR interval of the ECG as pulse peak candidates. Their probabilities of being the actual pulse peak are then calculated using a Gaussian probability function. The parameters of the Gaussian function are automatically updated when a new pulse peak is identified. This update makes the probability function adaptive to variations of cardiac cycles. Finally, the pulse peak is identified as the candidate with the highest probability. The proposed approach is tested on a database where ECG and PPG waveforms are collected simultaneously during the submaximal bicycle ergometer exercise test. The results are promising, suggesting that the method provides a simple but more accurate PTT estimation in real applications.
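A minimal sketch of the candidate-scoring idea described above: within each RR interval, PPG peak candidates are scored by a Gaussian likelihood of their normalized transit time (PTT/RR), and the Gaussian parameters are updated after each accepted peak. The peak detection itself, the initial parameter values and the running mean/variance update are simplifications assumed for illustration.

```python
import numpy as np

class GaussianPTTSelector:
    def __init__(self, mu=0.3, var=0.01):
        self.mu, self.var, self.n = mu, var, 1    # assumed initial normalized-PTT statistics

    def score(self, ptt_norm):
        return np.exp(-(ptt_norm - self.mu) ** 2 / (2 * self.var))

    def select(self, r_time, rr, candidate_times):
        ptt_norm = (np.asarray(candidate_times) - r_time) / rr
        best = int(np.argmax(self.score(ptt_norm)))
        self._update(ptt_norm[best])              # adapt to cardiac-cycle variations
        return candidate_times[best]

    def _update(self, x):                         # running mean/variance (simplified update)
        self.n += 1
        delta = x - self.mu
        self.mu += delta / self.n
        self.var += (delta * (x - self.mu) - self.var) / self.n

sel = GaussianPTTSelector()
print(sel.select(r_time=10.00, rr=0.80, candidate_times=[10.18, 10.26, 10.55]))  # -> 10.26
```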