Determination of suitable drying curve model for bread moisture loss during baking
NASA Astrophysics Data System (ADS)
Soleimani Pour-Damanab, A. R.; Jafary, A.; Rafiee, S.
2013-03-01
This study presents mathematical modelling of bread moisture loss, or drying, during baking in a conventional bread baking process. In order to estimate and select the appropriate moisture loss curve equation, 11 different models, semi-theoretical and empirical, were fitted to the experimental data by nonlinear regression analysis and compared according to their correlation coefficients, chi-squared values and root mean square errors. Consequently, of all the drying models, the Page model was selected as the best one, according to the correlation coefficient, chi-squared and root mean square error values and its simplicity. The mean absolute estimation errors of the proposed model, obtained by linear regression analysis for the natural and forced convection modes, were 2.43% and 4.74%, respectively.
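The abstract names the Page model together with correlation coefficient, chi-squared and RMSE as selection criteria. As a hedged illustration (not the paper's code or data), the sketch below fits the standard Page equation MR(t) = exp(-k·t^n) to synthetic moisture-ratio data and computes those statistics; all values are placeholders.

```python
# Sketch: fitting the Page thin-layer drying model MR(t) = exp(-k * t**n)
# to moisture-ratio data by nonlinear least squares, then computing the
# goodness-of-fit statistics named in the abstract (R^2, chi-squared, RMSE).
# The data below are synthetic placeholders, not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    return np.exp(-k * t**n)

rng = np.random.default_rng(0)
t = np.linspace(0, 40, 20)                       # baking time, min (assumed)
mr = np.exp(-0.02 * t**1.3) + rng.normal(0, 0.01, t.size)  # synthetic moisture ratio

(k, n), _ = curve_fit(page_model, t, mr, p0=(0.01, 1.0))
pred = page_model(t, k, n)

resid = mr - pred
rmse = np.sqrt(np.mean(resid**2))
chi2 = np.sum(resid**2) / (t.size - 2)           # reduced chi-squared, 2 parameters
r2 = 1 - np.sum(resid**2) / np.sum((mr - mr.mean())**2)
print(f"k={k:.4f}, n={n:.3f}, RMSE={rmse:.4f}, chi2={chi2:.5f}, R2={r2:.4f}")
```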
[Locally weighted least squares estimation of DPOAE evoked by continuously sweeping primaries].
Han, Xiaoli; Fu, Xinxing; Cui, Jie; Xiao, Ling
2013-12-01
The distortion product otoacoustic emission (DPOAE) signal can be used for the diagnosis of hearing loss and therefore has important clinical value. Using continuously sweeping primaries to measure DPOAE provides an efficient tool for recording DPOAE data rapidly when DPOAE is measured over a large frequency range. In this paper, locally weighted least squares estimation (LWLSE) of the 2f1-f2 DPOAE is presented based on the least-squares-fit (LSF) algorithm, in which DPOAE is evoked by continuously sweeping tones. In our study, we used a weighted error function as the loss function and weighting matrices in the local sense to obtain a smaller estimation variance. Firstly, an ordinary least squares estimate of the DPOAE parameters was obtained. Then the error vectors were grouped and different local weighting matrices were calculated in each group. Finally, the parameters of the DPOAE signal were estimated based on the least squares estimation principle using the local weighting matrices. Simulation results showed that the estimation variance and fluctuation errors were reduced, so the method estimates the DPOAE and stimuli more accurately and stably, which facilitates the extraction of a clearer DPOAE fine structure.
Least Squares Metric, Unidimensional Scaling of Multivariate Linear Models.
ERIC Educational Resources Information Center
Poole, Keith T.
1990-01-01
A general approach to least-squares unidimensional scaling is presented. Ordering information contained in the parameters is used to transform the standard squared error loss function into a discrete rather than continuous form. Monte Carlo tests with 38,094 ratings of 261 senators, and 1,258 representatives demonstrate the procedure's…
Smooth empirical Bayes estimation of observation error variances in linear systems
NASA Technical Reports Server (NTRS)
Martz, H. F., Jr.; Lian, M. W.
1972-01-01
A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.
Ambiguity resolution for satellite Doppler positioning systems
NASA Technical Reports Server (NTRS)
Argentiero, P. D.; Marini, J. W.
1977-01-01
A test for ambiguity resolution was derived which was the most powerful in the sense that it maximized the probability of a correct decision. When systematic error sources were properly included in the least squares reduction process to yield an optimal solution, the test reduced to choosing the solution which provided the smaller valuation of the least squares loss function. When systematic error sources were ignored in the least squares reduction, the most powerful test was a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudo-inverse of a reduced rank square matrix. A formula is presented for computing the power of the most powerful test. A numerical example is included in which the power of the test is computed for a situation which may occur during an actual satellite aided search and rescue mission.
A network application for modeling a centrifugal compressor performance map
NASA Astrophysics Data System (ADS)
Nikiforov, A.; Popova, D.; Soldatova, K.
2017-08-01
The approximation of aerodynamic performance of a centrifugal compressor stage and vaneless diffuser by neural networks is presented. Advantages, difficulties and specific features of the method are described. An example of a neural network and its structure is shown. The performances in terms of efficiency, pressure ratio and work coefficient of 39 model stages within the range of flow coefficient from 0.01 to 0.08 were modeled with mean squared error 1.5 %. In addition, the loss and friction coefficients of vaneless diffusers of relative widths 0.014-0.10 are modeled with mean squared error 2.45 %.
Flood loss model transfer: on the value of additional data
NASA Astrophysics Data System (ADS)
Schröter, Kai; Lüdtke, Stefan; Vogel, Kristin; Kreibich, Heidi; Thieken, Annegret; Merz, Bruno
2017-04-01
The transfer of models across geographical regions and flood events is a key challenge in flood loss estimation. Variations in local characteristics and continuous system changes require regional adjustments and continuous updating with current evidence. However, acquiring data on damage-influencing factors is expensive, and therefore assessing the value of additional data in terms of model reliability and performance improvement is of high relevance. The present study utilizes empirical flood loss data on direct damage to residential buildings available from computer-aided telephone interviews that were carried out after the floods in 2002, 2005, 2006, 2010, 2011 and 2013, mainly in the Elbe and Danube catchments in Germany. Flood loss model performance is assessed for incrementally increased numbers of loss data which are differentiated according to region and flood event. Two flood loss modeling approaches are considered: (i) a multi-variable flood loss model approach using Random Forests and (ii) a uni-variable stage damage function. Both model approaches are embedded in a bootstrapping process which allows the uncertainty of model predictions to be evaluated. The predictive performance of both models is evaluated with regard to mean bias, mean absolute and mean squared errors, as well as hit rate and sharpness. Mean bias and mean absolute error give information about the accuracy of model predictions, mean squared error and sharpness about their precision, and hit rate is an indicator of model reliability. The results of incremental, regional and temporal updating demonstrate the usefulness of additional data to improve model predictive performance and increase model reliability, particularly in a spatial-temporal transfer setting.
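For readers who want to reproduce the kind of evaluation described above, the following sketch computes the named metrics (mean bias, MAE, MSE, hit rate, sharpness) from a bootstrap prediction ensemble. The array shapes, the 90% interval, and the synthetic data are assumptions, not the study's setup.

```python
# Sketch: the validation metrics named in the abstract, computed from an
# ensemble of bootstrap predictions. Shapes and the 90% interval are assumptions.
import numpy as np

def loss_model_metrics(obs, boot_preds, lower_q=0.05, upper_q=0.95):
    """obs: (n,) observed losses; boot_preds: (n_boot, n) bootstrap predictions."""
    point = boot_preds.mean(axis=0)                # ensemble mean as point prediction
    bias = np.mean(point - obs)                    # accuracy
    mae = np.mean(np.abs(point - obs))             # accuracy
    mse = np.mean((point - obs) ** 2)              # precision
    lo = np.quantile(boot_preds, lower_q, axis=0)
    hi = np.quantile(boot_preds, upper_q, axis=0)
    hit_rate = np.mean((obs >= lo) & (obs <= hi))  # reliability
    sharpness = np.mean(hi - lo)                   # width of the predictive interval
    return dict(bias=bias, mae=mae, mse=mse, hit_rate=hit_rate, sharpness=sharpness)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 5000.0, size=200)                      # synthetic losses
boot_preds = obs + rng.normal(0, 2000.0, size=(500, 200))   # synthetic ensemble
print(loss_model_metrics(obs, boot_preds))
```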
Symbol signal-to-noise ratio loss in square-wave subcarrier downconversion
NASA Technical Reports Server (NTRS)
Feria, Y.; Statman, J.
1993-01-01
This article presents the simulated results of the signal-to-noise ratio (SNR) loss in the process of a square-wave subcarrier down conversion. In a previous article, the SNR degradation was evaluated at the output of the down converter based on the signal and noise power change. Unlike in the previous article, the SNR loss is defined here as the difference between the actual and theoretical symbol SNR's for the same symbol-error rate at the output of the symbol matched filter. The results show that an average SNR loss of 0.3 dB can be achieved with tenth-order infinite impulse response (IIR) filters. This loss is a 0.2-dB increase over the SNR degradation in the previous analysis where neither the signal distortion nor the symbol detector was considered.
Super-linear Precision in Simple Neural Population Codes
NASA Astrophysics Data System (ADS)
Schwab, David; Fiete, Ila
2015-03-01
A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher Information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arises because it gives, through the Cramer-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well-known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly when considering the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
Program documentation: Surface heating rate of thin skin models (THNSKN)
NASA Technical Reports Server (NTRS)
Mcbryde, J. D.
1975-01-01
Program THNSKN computes the mean heating rate at a maximum of 100 locations on the surface of thin skin transient heating rate models. Output is printed in tabular form and consists of time history tabulation of temperatures, average temperatures, heat loss without conduction correction, mean heating rate, least squares heating rate, and the percent standard error of the least squares heating rates. The input tape used is produced by the program EHTS03.
NASA Astrophysics Data System (ADS)
Weng, Yi; He, Xuan; Yao, Wang; Pacheco, Michelle C.; Wang, Junyi; Pan, Zhongqi
2017-07-01
In this paper, we explored the performance of a space-time block-coding (STBC) assisted multiple-input multiple-output (MIMO) scheme for modal dispersion and mode-dependent loss (MDL) mitigation in spatial-division multiplexed optical communication systems, in which the weight matrices of the frequency-domain equalization (FDE) were updated heuristically using a decision-directed recursive least squares (RLS) algorithm for convergence and channel estimation. The proposed STBC-RLS algorithm can achieve a 43.6% enhancement in convergence rate over conventional least mean squares (LMS) for quadrature phase-shift keying (QPSK) signals with merely a 16.2% increase in hardware complexity. The overall optical signal-to-noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9 and 7.8 dB for QPSK, 16-quadrature amplitude modulation (QAM) and 64-QAM with the respective bit-error rates (BER) and minimum-mean-square-error (MMSE).
The in vivo wear resistance of 12 composite resins.
Lang, B R; Bloem, T J; Powers, J M; Wang, R F
1992-09-01
The in vivo wear resistance of 12 composite resins was compared with an amalgam control using a Latin square experimental design. Sixteen edentulous patients wearing specially designed complete dentures formed the experimental population. The Michigan Computer Graphics Measurement System was used to digitize the surface of the control and composite resin samples before and after 3-month test periods to obtain wear data. The 12 composite resins selected for this investigation, based on their published composite classification types, were seven fine-particle composites, three blends, and two microfilled composite resins. The Latin square experimental design was found to be valid, with the factor of material being statistically different at the 5% level of significance. Wear was computed as volume loss (mm³/mm²), and all of the composites studied had more wear than the amalgam control (P = .001). After 3 months, the mean (error) wear of the amalgam was 0.028 (0.006). Mean (error) wear values for the 12 composites were ranked from most to least wear by mean wear volume loss. The absence of any relationship between mean wear volume loss and the volume percentage of filler was confirmed by the correlation coefficient r = -0.158.
Comparison of estimators of standard deviation for hydrologic time series
Tasker, Gary D.; Gilroy, Edward J.
1982-01-01
Unbiasing factors as a function of serial correlation, ρ, and sample size, n, for the sample standard deviation of a lag-one autoregressive model were generated by random number simulation. Monte Carlo experiments were used to compare the performance of several alternative methods for estimating the standard deviation σ of a lag-one autoregressive model in terms of bias, root mean square error, probability of underestimation, and expected opportunity design loss. Three methods provided estimates of σ which were much less biased but had greater mean square errors than the usual estimate of σ, s = [∑(xi − x̄)²/(n − 1)]^(1/2). The three methods may be briefly characterized as (1) a method using a maximum likelihood estimate of the unbiasing factor, (2) a method using an empirical Bayes estimate of the unbiasing factor, and (3) a robust nonparametric estimate of σ suggested by Quenouille. Because s tends to underestimate σ, its use as an estimate of a model parameter results in a tendency to underdesign. If underdesign losses are considered more serious than overdesign losses, then the choice of one of the less biased methods may be wise.
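A minimal sketch of the kind of Monte Carlo experiment described, assuming the AR(1) form x_t = ρx_{t-1} + ε_t: it shows empirically that the usual sample standard deviation s underestimates σ for positively correlated series and recovers an approximate unbiasing factor. The sample size, ρ and replication count are illustrative.

```python
# Sketch: Monte Carlo illustration of the downward bias of the usual sample
# standard deviation s for a lag-one autoregressive (AR(1)) series.
# Model assumed: x_t = rho * x_{t-1} + e_t, with sigma the marginal std of x.
import numpy as np

def simulate_ar1(n, rho, sigma, rng):
    e_std = sigma * np.sqrt(1.0 - rho**2)        # innovation std giving Var(x) = sigma^2
    x = np.empty(n)
    x[0] = rng.normal(0, sigma)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal(0, e_std)
    return x

rng = np.random.default_rng(42)
n, rho, sigma, reps = 20, 0.6, 1.0, 5000
s_values = np.array([np.std(simulate_ar1(n, rho, sigma, rng), ddof=1) for _ in range(reps)])
print(f"E[s]/sigma ~ {s_values.mean() / sigma:.3f}")    # < 1: s underestimates sigma
print(f"empirical unbiasing factor ~ {sigma / s_values.mean():.3f}")
```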
Fillmore, Parley D; Sutphin, John E; Goins, Kenneth M
2010-06-01
To report the visual acuity, refractive outcome, and endothelial cell density (ECD) up to 1 year after deep lamellar endothelial keratoplasty (DLEK) in a large prospective series. Eighty-six DLEK procedures were performed and evaluated in a prospective interventional case series. Subgroup analysis was performed to compare results from large-incision (9 mm) DLEK (n = 7), small-incision (5-8 mm) DLEK (n = 70), and penetrating keratoplasty (PKP) conversion (n = 9). Outcome measures included best-corrected visual acuity (BCVA), manifest refraction, corneal topographic astigmatism, and ECD. The percentage of eyes that achieved a BCVA of 20/40 or better after DLEK was 55% at 6 months, increasing to 61% at 1 year. Topographic astigmatism and spherical equivalent were not significantly different from preoperative measurements up to 1 year after DLEK (P > 0.05). An endothelial cell loss of 40% at 6 months and 48% by 1 year was observed. The mean ECD after DLEK was 1831 ± 472 cells per square millimeter at 6 months and 1569 ± 601 cells per square millimeter at 12 months. When evaluated by incision size, the ECD was better at 2066 ± 558 cells per square millimeter with a 9-mm incision compared with only 1516 ± 585 cells per square millimeter with a smaller incision at 1 year, although this did not reach significance (P = 0.075). The endothelial cell loss after penetrating keratoplasty conversion was similar to that in the large-incision group (P > 0.05). DLEK provides good visual acuity (≥20/40) for the majority of patients at 1 year with stable refractive error compared with baseline. Refractive stability was observed with both large- and small-incision DLEKs; however, worrisome endothelial cell loss was observed, especially with a small-incision technique.
Zhou, Yan; Cao, Hui
2013-01-01
We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as substitutes for the unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined by using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment on analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with a Venetian blinds strategy was used to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and the existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss, and that its predictive power is comparable to that of PLS or PCR.
Ambiguity resolution for satellite Doppler positioning systems
NASA Technical Reports Server (NTRS)
Argentiero, P.; Marini, J.
1979-01-01
The implementation of satellite-based Doppler positioning systems frequently requires the recovery of transmitter position from a single pass of Doppler data. The least-squares approach to the problem yields conjugate solutions on either side of the satellite subtrack. It is important to develop a procedure for choosing the proper solution which is correct in a high percentage of cases. A test for ambiguity resolution which is the most powerful in the sense that it maximizes the probability of a correct decision is derived. When systematic error sources are properly included in the least-squares reduction process to yield an optimal solution the test reduces to choosing the solution which provides the smaller valuation of the least-squares loss function. When systematic error sources are ignored in the least-squares reduction, the most powerful test is a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudoinverse of a reduced-rank square matrix. A formula for computing the power of the most powerful test is provided. Numerical examples are included in which the power of the test is computed for situations that are relevant to the design of a satellite-aided search and rescue system.
Davidov, Ori; Rosen, Sophia
2011-04-01
In medical studies, endpoints are often measured for each patient longitudinally. The mixed-effects model has been a useful tool for the analysis of such data. There are situations in which the parameters of the model are subject to some restrictions or constraints. For example, in hearing loss studies, we expect hearing to deteriorate with time. This means that hearing thresholds which reflect hearing acuity will, on average, increase over time. Therefore, the regression coefficients associated with the mean effect of time on hearing ability will be constrained. Such constraints should be accounted for in the analysis. We propose maximum likelihood estimation procedures, based on the expectation-conditional maximization either algorithm, to estimate the parameters of the model while accounting for the constraints on them. The proposed methods improve, in terms of mean square error, on the unconstrained estimators. In some settings, the improvement may be substantial. Hypotheses testing procedures that incorporate the constraints are developed. Specifically, likelihood ratio, Wald, and score tests are proposed and investigated. Their empirical significance levels and power are studied using simulations. It is shown that incorporating the constraints improves the mean squared error of the estimates and the power of the tests. These improvements may be substantial. The methodology is used to analyze a hearing loss study.
Saleem, Muhammad; Sharif, Kashif; Fahmi, Aliya
2018-04-27
Applications of the Pareto distribution are common in reliability, survival and financial studies. In this paper, a Pareto mixture distribution is considered to model a heterogeneous population comprising two subgroups. Each of the two subgroups is characterized by the same functional form with unknown, distinct shape and scale parameters. Bayes estimators have been derived under flat and conjugate priors using the squared error loss function. Standard errors have also been derived for the Bayes estimators. An interesting feature of this study is the preparation of the components of the Fisher information matrix.
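Under squared error loss the Bayes estimator is the posterior mean. As a hedged, simplified illustration of that step (a single Pareto component with known scale and a conjugate Gamma prior on the shape, not the paper's two-subgroup mixture with unknown scales), consider the following sketch; the prior hyperparameters are assumptions.

```python
# Sketch: Bayes estimation under squared error loss for a single Pareto
# population with known scale xm and unknown shape alpha.
# With a conjugate Gamma(a, b) prior, the posterior is
# Gamma(a + n, b + sum(log(x_i / xm))), and the Bayes estimator under
# squared error loss is the posterior mean. (The paper treats a two-component
# mixture; this is only the simplest illustrative case.)
import numpy as np

rng = np.random.default_rng(1)
xm, alpha_true, n = 1.0, 2.5, 200
x = xm * (1.0 + rng.pareto(alpha_true, size=n))   # Pareto(alpha, xm) draws

a, b = 2.0, 1.0                                   # Gamma prior hyperparameters (assumed)
a_post = a + n
b_post = b + np.sum(np.log(x / xm))

alpha_bayes = a_post / b_post                     # posterior mean = Bayes estimate
alpha_sd = np.sqrt(a_post) / b_post               # posterior standard deviation
print(f"Bayes estimate of shape: {alpha_bayes:.3f} (posterior sd {alpha_sd:.3f})")
```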
Lin, Lixin; Wang, Yunjia; Teng, Jiyao; Wang, Xuchen
2016-02-01
Hyperspectral estimation of soil organic matter (SOM) in coal mining regions is an important tool for enhancing fertilization in soil restoration programs. The correlation-partial least squares regression (PLSR) method effectively solves the information loss problem of correlation-multiple linear stepwise regression, but results of the correlation analysis must be optimized to improve precision. This study considers the relationship between spectral reflectance and SOM based on spectral reflectance curves of soil samples collected from coal mining regions. Based on the major absorption troughs in the 400-1006 nm spectral range, PLSR analysis was performed using 289 independent bands of the second derivative (SDR) with three levels and measured SOM values. A wavelet-correlation-PLSR (W-C-PLSR) model was then constructed. By amplifying useful information that was previously obscured by noise, the W-C-PLSR model was optimal for estimating SOM content, with smaller prediction errors in both calibration (R² = 0.970, root mean square error (RMSEC) = 3.10, and mean relative error (MREC) = 8.75) and validation (RMSEV = 5.85 and MREV = 14.32) analyses, as compared with other models. Results indicate that W-C-PLSR has great potential to estimate SOM in coal mining regions.
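The following is a generic PLSR calibration/validation sketch of the kind the abstract reports, using scikit-learn on synthetic spectra; it is not the study's wavelet-correlation (W-C-PLSR) pipeline, and the number of latent variables and the data are assumptions.

```python
# Sketch: a generic PLSR calibration/validation workflow with RMSEC/RMSEV-style
# errors, on synthetic "spectra" (not the study's W-C-PLSR pipeline).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 289                      # 289 bands as in the abstract
X = rng.normal(size=(n_samples, n_bands))          # synthetic second-derivative spectra
true_coef = rng.normal(size=n_bands) * (rng.random(n_bands) < 0.05)
y = X @ true_coef + rng.normal(0, 0.5, n_samples)  # synthetic SOM values

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8).fit(X_cal, y_cal)   # component count assumed

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - np.ravel(y_pred)) ** 2)))

print("RMSEC:", rmse(y_cal, pls.predict(X_cal)))
print("RMSEV:", rmse(y_val, pls.predict(X_val)))
```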
Engdahl, N.B.; Vogler, E.T.; Weissmann, G.S.
2010-01-01
River-aquifer exchange is considered within a transition probability framework along the Rio Grande in Albuquerque, New Mexico, to provide a stochastic estimate of aquifer heterogeneity and river loss. Six plausible hydrofacies configurations were determined using categorized drill core and wetland survey data processed through the TPROGS geostatistical package. A base case homogeneous model was also constructed for comparison. River loss was simulated for low, moderate, and high Rio Grande stages and several different riverside drain stage configurations. Heterogeneity effects were quantified by determining the mean and variance of the K field for each realization compared to the root-mean-square (RMS) error of the observed groundwater head data. Simulation results showed that the heterogeneous models produced smaller estimates of loss than the homogeneous approximation. Differences between heterogeneous and homogeneous model results indicate that the use of a homogeneous K in a regional-scale model may result in an overestimation of loss but comparable RMS error. We find that the simulated river loss is dependent on the aquifer structure and is most sensitive to the volumetric proportion of fines within the river channel. Copyright 2010 by the American Geophysical Union.
A side-by-side comparison of CPV module and system performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, Matthew; Marion, Bill; Kurtz, Sarah
A side-by-side comparison is made between concentrator photovoltaic module and system direct current aperture efficiency data with a focus on quantifying system performance losses. The individual losses measured/calculated, when combined, are in good agreement with the total loss seen between the module and the system. Results indicate that for the given test period, the largest individual loss of 3.7% relative is due to the baseline performance difference between the individual module and the average for the 200 modules in the system. A basic empirical model is derived based on module spectral performance data and the tabulated losses between the module and the system. The model predicts instantaneous system direct current aperture efficiency with a root mean square error of 2.3% relative.
Using Least Squares for Error Propagation
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2015-01-01
The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
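As a brief, hedged illustration of the covariance-matrix route the abstract describes, the sketch below fits a straight line to synthetic data, reads the parameter standard errors from the square roots of the covariance diagonal, and propagates the covariance to a derived quantity; the data and model are illustrative.

```python
# Sketch: parameter standard errors from least squares as the square roots of
# the diagonal of the covariance matrix, here for a straight-line fit to
# synthetic data. Propagating to a derived quantity then uses the same matrix.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 25)
y = 3.0 * x + 1.0 + rng.normal(0, 0.4, x.size)

coef, cov = np.polyfit(x, y, 1, cov=True)       # coef = [slope, intercept]
se = np.sqrt(np.diag(cov))                      # standard errors of the parameters
print(f"slope = {coef[0]:.3f} +/- {se[0]:.3f}, intercept = {coef[1]:.3f} +/- {se[1]:.3f}")

# Propagated error of a predicted value y0 = slope*x0 + intercept at x0 = 5,
# using the full covariance (the "target quantity" idea in the abstract):
x0 = 5.0
g = np.array([x0, 1.0])                         # gradient of y0 w.r.t. [slope, intercept]
var_y0 = g @ cov @ g
print(f"y(5) = {coef @ g:.3f} +/- {np.sqrt(var_y0):.3f}")
```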
Rakkiyappan, R; Sakthivel, N; Cao, Jinde
2015-06-01
This study examines the exponential synchronization of complex dynamical networks with control packet loss and additive time-varying delays. Additionally, a sampled-data controller with a time-varying sampling period is considered and is assumed to switch between m different values in a random way with given probability. Then, a novel Lyapunov-Krasovskii functional (LKF) with triple integral terms is constructed, and by using Jensen's inequality and the reciprocally convex approach, sufficient conditions under which the dynamical network is exponentially mean-square stable are derived. When applying Jensen's inequality to partition double integral terms in the derivation of linear matrix inequality (LMI) conditions, a new kind of linear combination of positive functions weighted by the inverses of squared convex parameters appears. In order to handle such a combination, an effective method is introduced by extending the lower bound lemma. To design the sampled-data controller, the synchronization error system is represented as a switched system. Based on the derived LMI conditions and the average dwell-time method, sufficient conditions for the synchronization of the switched error system are derived in terms of LMIs. Finally, a numerical example is employed to show the effectiveness of the proposed methods. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Salehi, Mohammad Reza; Noori, Leila; Abiri, Ebrahim
2016-11-01
In this paper, a subsystem consisting of a microstrip bandpass filter and a microstrip low noise amplifier (LNA) is designed for WLAN applications. The proposed filter has a small implementation area (49 mm²), small insertion loss (0.08 dB) and wide fractional bandwidth (FBW) (61%). To design the proposed LNA, compact microstrip cells, a field-effect transistor, and only a lumped capacitor are used. It has a low supply voltage and a low return loss (-40 dB) at the operating frequency. The matching condition of the proposed subsystem is predicted using subsystem analysis, an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). To design the proposed filter, the transmission matrix of the proposed resonator is obtained and analysed. The performance of the proposed ANN and ANFIS models is tested against the numerical data using four performance measures, namely the correlation coefficient (CC), the mean absolute error (MAE), the average percentage error (APE) and the root mean square error (RMSE). The obtained results show that these models are in good agreement with the numerical data, and a small error between the predicted values and the numerical solution is obtained.
Robust Transceiver Design for Multiuser MIMO Downlink with Channel Uncertainties
NASA Astrophysics Data System (ADS)
Miao, Wei; Li, Yunzhou; Chen, Xiang; Zhou, Shidong; Wang, Jing
This letter addresses the problem of robust transceiver design for the multiuser multiple-input-multiple-output (MIMO) downlink where the channel state information at the base station (BS) is imperfect. A stochastic approach which minimizes the expectation of the total mean square error (MSE) of the downlink conditioned on the channel estimates under a total transmit power constraint is adopted. The iterative algorithm reported in [2] is improved to handle the proposed robust optimization problem. Simulation results show that our proposed robust scheme effectively reduces the performance loss due to channel uncertainties and outperforms existing methods, especially when the channel errors of the users are different.
Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures
2016-06-01
Keywords: inventory management improvement plan, mean of absolute scaled error, lead time adjusted squared error, forecast accuracy, benchmarking, naïve method.
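As a hedged illustration of one metric this record refers to, the sketch below computes the mean absolute scaled error (MASE) against the in-sample naïve forecast; the data and forecast are synthetic, and the thesis's lead-time adjusted squared error is not reproduced here.

```python
# Sketch: mean absolute scaled error (MASE), scaled by the in-sample one-step
# naive forecast error (Hyndman & Koehler's definition). The lead-time adjusted
# squared error used in the thesis is not reproduced here.
import numpy as np

def mase(y_train, y_test, y_pred):
    naive_mae = np.mean(np.abs(np.diff(y_train)))    # in-sample naive benchmark
    return np.mean(np.abs(y_test - y_pred)) / naive_mae

rng = np.random.default_rng(0)
demand = rng.poisson(20, size=60).astype(float)       # synthetic monthly demand
train, test = demand[:48], demand[48:]
forecast = np.full_like(test, train.mean())           # illustrative mean forecast
print(f"MASE = {mase(train, test, forecast):.3f}")    # < 1 beats the naive method
```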
Measurement System Characterization in the Presence of Measurement Errors
NASA Technical Reports Server (NTRS)
Commo, Sean A.
2012-01-01
In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
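The report's modified least squares method itself is not reproduced here. As a hedged illustration of how a known variance ratio can enter a fit when the factors carry measurement error, the sketch below uses the standard Deming regression estimator, purely to show the role the variance ratio plays; the data and error levels are assumptions.

```python
# Sketch: how a known variance ratio can enter a straight-line fit when the
# factor x also carries measurement error. This is Deming regression with
# ratio lam = var(response error) / var(measurement error); it is NOT the
# report's "modified least squares", only an illustration of the idea that
# the ratio of error variances changes the estimated slope.
import numpy as np

def deming_slope(x, y, lam):
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    d = syy - lam * sxx
    return (d + np.sqrt(d**2 + 4 * lam * sxy**2)) / (2 * sxy)

rng = np.random.default_rng(3)
x_true = np.linspace(0, 10, 50)
y = 2.0 * x_true + 1.0 + rng.normal(0, 0.5, 50)       # response error sd 0.5
x = x_true + rng.normal(0, 0.5, 50)                   # measurement error sd 0.5
lam = (0.5**2) / (0.5**2)                             # known variance ratio = 1

b_ols = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
b_dem = deming_slope(x, y, lam)
print(f"OLS slope (attenuated): {b_ols:.3f}, variance-ratio (Deming) slope: {b_dem:.3f}")
```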
Influence of modulation frequency in rubidium cell frequency standards
NASA Technical Reports Server (NTRS)
Audoin, C.; Viennet, J.; Cyr, N.; Vanier, J.
1983-01-01
The error signal which is used to control the frequency of the quartz crystal oscillator of a passive rubidium cell frequency standard is considered. The value of the slope of this signal, for an interrogation frequency close to the atomic transition frequency is calculated and measured for various phase (or frequency) modulation waveforms, and for several values of the modulation frequency. A theoretical analysis is made using a model which applies to a system in which the optical pumping rate, the relaxation rates and the RF field are homogeneous. Results are given for sine-wave phase modulation, square-wave frequency modulation and square-wave phase modulation. The influence of the modulation frequency on the slope of the error signal is specified. It is shown that the modulation frequency can be chosen as large as twice the non-saturated full-width at half-maximum without a drastic loss of the sensitivity to an offset of the interrogation frequency from center line, provided that the power saturation factor and the amplitude of modulation are properly adjusted.
Channel estimation in few mode fiber mode division multiplexing transmission system
NASA Astrophysics Data System (ADS)
Hei, Yongqiang; Li, Li; Li, Wentao; Li, Xiaohui; Shi, Guangming
2018-03-01
It is abundantly clear that obtaining the channel state information (CSI) is of great importance for equalization and detection in coherent receivers. However, to the best of the authors' knowledge, in most of the existing literature CSI is assumed to be perfectly known at the receiver, and few studies discuss the effects on MDM system performance of imperfect CSI caused by channel estimation. Motivated by that, in this paper channel estimation in few-mode fiber (FMF) mode division multiplexing (MDM) systems is investigated, in which two classical channel estimation methods, i.e., the least squares (LS) method and the minimum mean square error (MMSE) method, are discussed under the assumption of spatially white noise lumped at the receiver side of the MDM system. Both the capacity and the BER performance of the MDM system affected by mode-dependent gain or loss (MDL) under different channel estimation errors have been studied. Simulation results show that the capacity and BER performance of the MDM system can be further degraded by channel estimation errors, and that a 1e-3 variance of the channel estimation error is acceptable in an MDM system with 0-6 dB MDL values.
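A minimal sketch of the two classical estimators named above, for a generic linear pilot model y = Xh + n rather than the paper's FMF/MDM channel; the pilot design, noise variance, and identity channel covariance are assumptions.

```python
# Sketch: the two classical channel estimators named in the abstract, for a
# generic linear pilot model y = X @ h + n (not the paper's FMF/MDM model).
# Assumptions: known pilot matrix X, white noise of variance sigma2, and an
# i.i.d. channel prior R_h = I for the MMSE estimator.
import numpy as np

rng = np.random.default_rng(7)
n_pilots, n_taps, sigma2 = 32, 4, 0.1

X = (rng.standard_normal((n_pilots, n_taps)) + 1j * rng.standard_normal((n_pilots, n_taps))) / np.sqrt(2)
h = (rng.standard_normal(n_taps) + 1j * rng.standard_normal(n_taps)) / np.sqrt(2)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots))
y = X @ h + noise

h_ls = np.linalg.pinv(X) @ y                             # least squares estimate
R_h = np.eye(n_taps)                                     # assumed channel covariance
h_mmse = R_h @ X.conj().T @ np.linalg.solve(X @ R_h @ X.conj().T + sigma2 * np.eye(n_pilots), y)

print("LS   error:", np.linalg.norm(h - h_ls))
print("MMSE error:", np.linalg.norm(h - h_mmse))
```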
Error analysis of stochastic gradient descent ranking.
Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan
2013-06-01
Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis in ranking error.
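As a simplified, hedged illustration of the loss and update rule (with a linear scorer instead of the paper's kernel formulation), the sketch below runs SGD on the pairwise least squares ranking loss; the step size and regularization value are illustrative.

```python
# Sketch: stochastic gradient descent on a pairwise least squares ranking loss,
# (f(x_i) - f(x_j) - (y_i - y_j))^2, with a linear scorer and L2 regularization.
# The paper works with a kernel-based scorer; this linear version only
# illustrates the loss and the update rule.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(0, 0.1, n)       # relevance scores

w = np.zeros(d)
eta, lam = 0.01, 1e-3                        # step size and regularization (assumed)
for t in range(20000):
    i, j = rng.integers(n, size=2)
    diff_x = X[i] - X[j]
    resid = diff_x @ w - (y[i] - y[j])       # pairwise squared-loss residual
    w -= eta * (resid * diff_x + lam * w)    # SGD step

# Fraction of correctly ordered pairs on fresh data
Xt = rng.normal(size=(200, d)); yt = Xt @ w_true
st = Xt @ w
ii, jj = np.triu_indices(200, k=1)
acc = np.mean(np.sign(st[ii] - st[jj]) == np.sign(yt[ii] - yt[jj]))
print(f"pairwise ordering accuracy: {acc:.3f}")
```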
Two Enhancements of the Logarithmic Least-Squares Method for Analyzing Subjective Comparisons
1989-03-25
error term. For this model, the total sum of squares (SSTO), defined as SSTO = Σ_{i=1}^{n} (y_i − ȳ)², can be partitioned into error and regression sums … of the regression line around the mean value. Mathematically, for the model given by equation A.4, SSTO = SSE + SSR (A.6), where SSTO is the total sum of squares (i.e., the variance of the y_i's), SSE is the error sum of squares, and SSR is the regression sum of squares. SSTO, SSE, and SSR are given
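A short sketch verifying the SSTO = SSE + SSR partition for an ordinary least squares line fit on synthetic data; the identity holds whenever the fitted model contains an intercept.

```python
# Sketch: the SSTO = SSE + SSR partition for an ordinary least squares line fit,
# on synthetic data (the identity holds whenever the model includes an intercept).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 30)
y = 1.5 * x + 2.0 + rng.normal(0, 1.0, x.size)

slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept

ssto = np.sum((y - y.mean()) ** 2)      # total sum of squares
sse = np.sum((y - y_hat) ** 2)          # error (residual) sum of squares
ssr = np.sum((y_hat - y.mean()) ** 2)   # regression sum of squares
print(f"SSTO={ssto:.3f}, SSE+SSR={sse + ssr:.3f}, R^2={ssr / ssto:.3f}")
```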
Calculating sediment discharge from a highway construction site in central Pennsylvania
Reed, L.A.; Ward, J.R.; Wetzel, K.L.
1985-01-01
The Pennsylvania Department of Transportation, the Federal Highway Administration, and the U.S. Geological Survey have cooperated in a study to evaluate two methods of predicting sediment yields during highway construction. Sediment yields were calculated using the Universal Soil Loss and the Younkin Sediment Prediction Equations. Results were compared to the actual measured values, and standard errors and coefficients of correlation were calculated. Sediment discharge from the construction area was determined for storms that occurred during construction of Interstate 81 in a 0.38-square mile basin near Harrisburg, Pennsylvania. Precipitation data tabulated included total rainfall, maximum 30-minute rainfall, kinetic energy, and the erosive index of the precipitation. Highway construction data tabulated included the area disturbed by clearing and grubbing, the area in cuts and fills, the average depths of cuts and fills, the area seeded and mulched, and the area paved. Using the Universal Soil Loss Equation, sediment discharge from the construction area was calculated for storms. The standard error of estimate was 0.40 (about 105 percent), and the coefficient of correlation was 0.79. Sediment discharge from the construction area was also calculated using the Younkin Equation. The standard error of estimate of 0.42 (about 110 percent), and the coefficient of correlation of 0.77 are comparable to those from the Universal Soil Loss Equation.
Bär, David; Debus, Heiko; Brzenczek, Sina; Fischer, Wolfgang; Imming, Peter
2018-03-20
Near-infrared spectroscopy is frequently used by the pharmaceutical industry to monitor and optimize several production processes. In combination with chemometrics, a mathematical-statistical technique, near-infrared spectroscopy offers the following advantages: it is a fast, non-destructive, non-invasive, and economical analytical method. One of the most advanced and popular chemometric techniques is the partial least squares algorithm, owing to its applicability in routine use and the quality of its results. The required reference analytics enable the analysis of various parameters of interest, for example moisture content, particle size, and many others. Parameters such as the correlation coefficient, root mean square error of prediction, root mean square error of calibration, and root mean square error of validation have been used for evaluating the applicability and robustness of the analytical methods developed. This study deals with investigating a Naproxen Sodium granulation process using near-infrared spectroscopy and the development of water content and particle-size methods. For the water content method, one should consider a maximum water content of about 21% in the granulation process, which must be confirmed by the loss on drying. Further influences to be considered are the constantly changing product temperature, rising to about 54 °C, the creation of hydrated states of Naproxen Sodium when using a maximum of about 21% water content, and the large quantity of about 87% Naproxen Sodium in the formulation. A combination of these influences was considered in developing the near-infrared spectroscopy method for the water content of Naproxen Sodium granules. The root mean square error was 0.25% for the calibration data set and 0.30% for the validation data set, obtained after different stages of optimization by multiplicative scatter correction and the first derivative. Using laser diffraction, the granules were analyzed for particle size, obtaining the summary sieve sizes of >63 μm and >100 μm. The following influences should be considered for application in routine production: constant changes in water content up to 21% and a product temperature up to 54 °C. The different stages of optimization resulted in a root mean square error of 2.54% for the calibration data set and 3.53% for the validation set, using the Kubelka-Munk conversion and first derivative, for the near-infrared spectroscopy method for a particle size >63 μm. For the near-infrared spectroscopy method using a particle size >100 μm, the root mean square error was 3.47% for the calibration data set and 4.51% for the validation set, using the same pre-treatments. The robustness and suitability of this methodology have already been demonstrated by its recent successful implementation in a routine granulate production process. Copyright © 2018 Elsevier B.V. All rights reserved.
Huang, Chengqiang; Yang, Youchang; Wu, Bo; Yu, Weize
2018-06-01
The sub-pixel arrangement of the RGBG panel and the image with RGB format are different and the algorithm that converts RGB to RGBG is urgently needed to display an image with RGB arrangement on the RGBG panel. However, the information loss is still large although color fringing artifacts are weakened in the published papers that study this conversion. In this paper, an RGB-to-RGBG conversion algorithm with adaptive weighting factors based on edge detection and minimal square error (EDMSE) is proposed. The main points of innovation include the following: (1) the edge detection is first proposed to distinguish image details with serious color fringing artifacts and image details which are prone to be lost in the process of RGB-RGBG conversion; (2) for image details with serious color fringing artifacts, the weighting factor 0.5 is applied to weaken the color fringing artifacts; and (3) for image details that are prone to be lost in the process of RGB-RGBG conversion, a special mechanism to minimize square error is proposed. The experiment shows that the color fringing artifacts are slightly improved by EDMSE, and the values of MSE of the image processed are 19.6% and 7% smaller than those of the image processed by the direct assignment and weighting factor algorithm, respectively. The proposed algorithm is implemented on a field programmable gate array to enable the image display on the RGBG panel.
Wilberg, Dale E.; Stolp, Bernard J.
2005-01-01
This report contains the results of an October 2001 seepage investigation conducted along a reach of the Escalante River in Utah extending from the U.S. Geological Survey streamflow-gaging station near Escalante to the mouth of Stevens Canyon. Discharge was measured at 16 individual sites along 15 consecutive reaches. Total reach length was about 86 miles. A reconnaissance-level sampling of water for tritium and chlorofluorocarbons was also done. In addition, hydrologic and water-quality data previously collected and published by the U.S. Geological Survey for the 2,020-square-mile Escalante River drainage basin were compiled and are presented in 12 tables. These data were collected from 64 surface-water sites and 28 springs from 1909 to 2002. None of the 15 consecutive reaches along the Escalante River had a measured loss or gain that exceeded the measurement error. All discharge measurements taken during the seepage investigation were assigned a qualitative rating of accuracy that ranged from 5 percent to greater than 8 percent of the actual flow. Summing the potential error for each measurement and dividing by the maximum of either the upstream discharge and any tributary inflow, or the downstream discharge, determined the normalized error for a reach. This was compared to the computed loss or gain, which also was normalized to the maximum discharge. A loss or gain for a specified reach is considered significant when the loss or gain (normalized percentage difference) is greater than the measurement error (normalized percentage error). The percentage difference and percentage error were normalized to allow comparison between reaches with different amounts of discharge. The plate that accompanies the report is 36" by 40" and can be printed in 16 tiles, 8.5 by 11 inches. An index for the tiles is located on the lower left-hand side of the plate. Using Adobe Acrobat, the plate can be viewed independent of the report; all Acrobat functions are available.
AKLSQF - LEAST SQUARES CURVE FITTING
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least square fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Sterling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fitting up to a 100 degree polynomial. All computations in the program are carried out under Double Precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
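AKLSQF itself is a Quick Basic program; as a hedged sketch of the fitting loop it describes (raise the polynomial degree until the least squares error meets the user's tolerance), the following uses NumPy's ordinary polynomial fit and omits the orthogonal-factorial-polynomial and Stirling-number steps. The tolerance and data are illustrative.

```python
# Sketch of the fitting loop AKLSQF describes: increase the polynomial degree
# until the least squares fit error meets a user tolerance. The original is a
# Quick Basic program using orthogonal factorial polynomials and Stirling
# numbers; this sketch uses NumPy's ordinary polynomial fit instead.
import numpy as np

def fit_to_tolerance(x, y, tol, max_degree=100):
    for degree in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, degree)
        rms = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
        print(f"degree {degree}: least squares fit error = {rms:.5f}")
        if rms <= tol:
            return coeffs, rms
    return coeffs, rms                          # best effort if tolerance never met

x = np.linspace(0, 2 * np.pi, 50)               # uniformly spaced data, as AKLSQF expects
y = np.sin(x)
coeffs, rms = fit_to_tolerance(x, y, tol=1e-3)
print(f"selected degree: {len(coeffs) - 1}, final error: {rms:.2e}")
```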
An analysis of the least-squares problem for the DSN systematic pointing error model
NASA Technical Reports Server (NTRS)
Alvarez, L. S.
1991-01-01
A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
Validation of Multiple Tools for Flat Plate Photovoltaic Modeling Against Measured Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, J.; Whitmore, J.; Blair, N.
2014-08-01
This report expands upon a previous work by the same authors, published in the 40th IEEE Photovoltaic Specialists conference. In this validation study, comprehensive analysis is performed on nine photovoltaic systems for which NREL could obtain detailed performance data and specifications, including three utility-scale systems and six commercial scale systems. Multiple photovoltaic performance modeling tools were used to model these nine systems, and the error of each tool was analyzed compared to quality-controlled measured performance data. This study shows that, excluding identified outliers, all tools achieve annual errors within +/-8% and hourly root mean squared errors less than 7% for all systems. It is further shown using SAM that module model and irradiance input choices can change the annual error with respect to measured data by as much as 6.6% for these nine systems, although all combinations examined still fall within an annual error range of +/-8.5%. Additionally, a seasonal variation in monthly error is shown for all tools. Finally, the effects of irradiance data uncertainty and the use of default loss assumptions on annual error are explored, and two approaches to reduce the error inherent in photovoltaic modeling are proposed.
Response Surface Analysis of Experiments with Random Blocks
1988-09-01
partitioned into a lack-of-fit sum of squares, SSLOF, and a pure error sum of squares, SSPE. The latter is obtained by pooling the pure error sums of squares … from the blocks. Tests concerning the polynomial effects can then proceed using SSPE as the error term in the denominators of the F test statistics. … the center point in each of the three blocks is equal to SSPE = 2.0127 with 5 degrees of freedom. Hence, the lack-of-fit sum of squares is SSLOF
Least-Squares Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Kantak, Anil V.
1990-01-01
Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler.
Error propagation of partial least squares for parameters optimization in NIR modeling.
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-05
A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. Error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was presented in terms of both type I and type II errors. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrated how, and to what extent, the different modeling parameters affect error propagation of PLS for parameter optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a workable process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, it could provide significant guidance for the selection of modeling parameters for other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
Zollanvari, Amin; Dougherty, Edward R
2014-06-01
The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
Discordance between net analyte signal theory and practical multivariate calibration.
Brown, Christopher D
2004-08-01
Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
Cheng, Jun-Hu; Sun, Da-Wen; Pu, Hongbin
2016-04-15
The potential use of feature wavelengths for predicting drip loss in grass carp fish, as affected by being frozen at -20°C for 24 h and thawed at 4°C for 1, 2, 4, and 6 days, was investigated. Hyperspectral images of frozen-thawed fish were obtained and their corresponding spectra were extracted. Least-squares support vector machine and multiple linear regression (MLR) models were established using five key wavelengths, selected by combining a genetic algorithm and the successive projections algorithm, and showed satisfactory performance in drip loss prediction. The MLR model, with a coefficient of determination of prediction (R²P) of 0.9258 and a lower root mean square error of prediction (RMSEP) of 1.12%, was applied to transform each pixel of the image and generate the distribution maps of exudation changes. The results confirmed that it is feasible to identify the feature wavelengths using variable selection methods and chemometric analysis for developing on-line multispectral imaging. Copyright © 2015 Elsevier Ltd. All rights reserved.
Chemical library subset selection algorithms: a unified derivation using spatial statistics.
Hamprecht, Fred A; Thiel, Walter; van Gunsteren, Wilfred F
2002-01-01
If similar compounds have similar activity, rational subset selection becomes superior to random selection in screening for pharmacological lead discovery programs. Traditional approaches to this experimental design problem fall into two classes: (i) a linear or quadratic response function is assumed, or (ii) some space-filling criterion is optimized. The assumptions underlying the first approach are clear but not always defensible; the second approach yields more intuitive designs but lacks a clear theoretical foundation. We model activity in a bioassay as the realization of a stochastic process and use the best linear unbiased estimator to construct spatial sampling designs that optimize the integrated mean square prediction error, the maximum mean square prediction error, or the entropy. We argue that our approach constitutes a unifying framework encompassing most proposed techniques as limiting cases and sheds light on their underlying assumptions. In particular, vector quantization is obtained, in dimensions up to eight, in the limiting case of very smooth response surfaces for the integrated mean square error criterion. Closest packing is obtained for very rough surfaces under the integrated mean square error and entropy criteria. We suggest using either the integrated mean square prediction error or the entropy as the optimization criterion rather than approximations thereof and propose a scheme for direct iterative minimization of the integrated mean square prediction error. Finally, we discuss how the quality of chemical descriptors manifests itself and clarify the assumptions underlying the selection of diverse or representative subsets.
Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems
NASA Astrophysics Data System (ADS)
Mahdi Alavi, S. M.; Saif, Mehrdad
2013-12-01
This paper focuses on the design of the standard observer for discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are subsequently derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a continuous stirred tank reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed that has been fabricated for performance evaluation of over-wireless-network estimation techniques under realistic radio channel conditions.
NASA Technical Reports Server (NTRS)
Bert, C. W.; Chang, S.
1972-01-01
Elastic and damping analyses resulting in determinations of the various stiffnesses and associated loss tangents for the complete characterization of the elastic and damping behavior of a monofilament composite layer are presented. For the determination of the various stiffnesses, either an elementary mechanics-of-materials formulation or a more rigorous mixed-boundary-value elasticity formulation is used. The solution for the latter formulation is obtained by means of the boundary-point least-square error technique. Kimball-Lovell type damping is assumed for each of the constituent materials. For determining the loss tangents associated with the various stiffnesses, either the viscoelastic correspondence principle or an energy analysis based on the appropriate elastic stress distribution is used.
Advanced Receiver tracking of Voyager 2 near solar conjunction
NASA Technical Reports Server (NTRS)
Brown, D. H.; Hurd, W. J.; Vilnrotter, V. A.; Wiggins, J. D.
1988-01-01
The Advanced Receiver (ARX) was used to track the Voyager 2 spacecraft at low Sun-Earth-Probe (SEP) angles near solar conjunction in December of 1987. The received carrier signal exhibited strong fluctuations in both phase and amplitude. The ARX used spectral estimation and mathematical modeling of the phase and receiver noise processes to set an optimum carrier tracking bandwidth. This minimized the mean square phase error in tracking carrier phase and thus minimized the loss in the telemetry signal-to-noise ratio due to the carrier loop. Recovered symbol SNRs and errors in decoded engineering data for the ARX are compared with those for the current Block 3 telemetry stream. Optimum bandwidths are plotted against SEP angle. Measurements of the power spectral density of the solar phase and amplitude fluctuations are also given.
Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.
Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia
2017-06-01
Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates in the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error in using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data with different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that: (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra, and methane/toluene mixtures gas spectra as measured using FT-IR spectrometry and CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of the SWLS has been presented to tackle the bias error from other components. The SWLS without modification presents the lowest SEP in all cases but not bias and RSS. The modification of SWLS reduced the bias, which showed a lower RSS than CLS, especially for small components.
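A minimal sketch of the selection idea described above, assuming a single-component Beer-Lambert model and inverse-variance weights for the high-absorbance channels; both the weighting scheme and the variable names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# SWLS-style estimate of concentration c from absorbances A ≈ K * c:
# channels below the absorbance threshold get unit weight (CLS behavior),
# channels above it get inverse-noise-variance weights (WLS behavior).
def swls_concentration(A, K, noise_var, threshold):
    w = np.where(A < threshold, 1.0, 1.0 / noise_var)   # per-channel weight
    return (w * K) @ A / ((w * K) @ K)                   # weighted LS estimate

# example usage with synthetic, heteroscedastic data
K = np.linspace(0.01, 2.0, 200)            # pure-component absorptivities
c_true = 0.8
noise_var = (0.02 * K) ** 2 + 1e-6         # noise variance grows with absorbance
A = K * c_true + np.random.normal(0.0, np.sqrt(noise_var))
print(swls_concentration(A, K, noise_var, threshold=0.1))
```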
2016-09-01
For an FMCW signal, it was demonstrated that the system is capable of estimating the angle of arrival (AOA) with a root-mean-square (RMS) error of 0.29° at 1° resolution; for a P4 coded signal, the RMS error in estimating the AOA is 0.32° at 1° resolution. [Remainder of this report abstract is extraction-damaged; only an acronym list (PCB, PD, RF, RMS) survives.]
Correcting Too Much or Too Little? The Performance of Three Chi-Square Corrections.
Foldnes, Njål; Olsson, Ulf Henning
2015-01-01
This simulation study investigates the performance of three test statistics, T1, T2, and T3, used to evaluate structural equation model fit under non-normal data conditions. T1 is the well-known mean-adjusted statistic of Satorra and Bentler. T2 is the mean-and-variance adjusted statistic of Satterthwaite type, in which the degrees of freedom are adjusted. T3 is a recently proposed version of T2 that does not manipulate the degrees of freedom. Discrepancies between these statistics and their nominal chi-square distribution in terms of Type I and Type II errors are investigated. All statistics are shown to be sensitive to increasing kurtosis in the data, with Type I error rates often far off the nominal level. Under excess kurtosis, true models are generally over-rejected by T1 and under-rejected by T2 and T3, which have similar performance in all conditions. Under misspecification there is a loss of power with increasing kurtosis, especially for T2 and T3. The coefficient of variation of the nonzero eigenvalues of a certain matrix is shown to be a reliable indicator of the adequacy of these statistics.
NASA Astrophysics Data System (ADS)
Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica
2005-12-01
This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
ERIC Educational Resources Information Center
Stanley, Julian C.; Livingston, Samuel A.
Besides the ubiquitous Pearson product-moment r, there are a number of other measures of relationship that are attenuated by errors of measurement and for which the relationship between true measures can be estimated. Among these are the correlation ratio (eta squared), Kelley's unbiased correlation ratio (epsilon squared), Hays' omega squared,…
NASA Astrophysics Data System (ADS)
Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong
2013-04-01
Overpressure is an important cause of the domino effect in accidents involving chemical process equipment. Models for the propagation probability and threshold values of the domino effect caused by overpressure have been proposed in a previous study. In order to test the rationality and validity of the models reported in the reference, the two boundary values separating the three reported damage degrees were treated as random variables on the interval [0, 100%]. Based on the reported overpressure data for equipment damage and damage states, and the calculation method given in the references, the mean square errors of the four categories of overpressure damage probability models were calculated with random boundary values, yielding a relationship between the mean square error and the two boundary values, from which the minimum mean square error was obtained. Compared with the result of the present work, the mean square error decreases by about 3%. This error is within the acceptable range for engineering applications, so the reported models can be considered reasonable and valid.
Diffuse-flow conceptualization and simulation of the Edwards aquifer, San Antonio region, Texas
Lindgren, R.J.
2006-01-01
A numerical ground-water-flow model (hereinafter, the conduit-flow Edwards aquifer model) of the karstic Edwards aquifer in south-central Texas was developed for a previous study on the basis of a conceptualization emphasizing conduit development and conduit flow, and included simulating conduits as one-cell-wide, continuously connected features. Uncertainties regarding the degree to which conduits pervade the Edwards aquifer and influence ground-water flow, as well as other uncertainties inherent in simulating conduits, raised the question of whether a model based on the conduit-flow conceptualization was the optimum model for the Edwards aquifer. Accordingly, a model with an alternative hydraulic conductivity distribution without conduits was developed in a study conducted during 2004-05 by the U.S. Geological Survey, in cooperation with the San Antonio Water System. The hydraulic conductivity distribution for the modified Edwards aquifer model (hereinafter, the diffuse-flow Edwards aquifer model), based primarily on a conceptualization in which flow in the aquifer predominantly is through a network of numerous small fractures and openings, includes 38 zones, with hydraulic conductivities ranging from 3 to 50,000 feet per day. Revision of model input data for the diffuse-flow Edwards aquifer model was limited to changes in the simulated hydraulic conductivity distribution. The root-mean-square error for 144 target wells for the calibrated steady-state simulation for the diffuse-flow Edwards aquifer model is 20.9 feet. This error represents about 3 percent of the total head difference across the model area. The simulated springflows for Comal and San Marcos Springs for the calibrated steady-state simulation were within 2.4 and 15 percent of the median springflows for the two springs, respectively. The transient calibration period for the diffuse-flow Edwards aquifer model was 1947-2000, with 648 monthly stress periods, the same as for the conduit-flow Edwards aquifer model. The root-mean-square error for a period of drought (May-November 1956) for the calibrated transient simulation for 171 target wells is 33.4 feet, which represents about 5 percent of the total head difference across the model area. The root-mean-square error for a period of above-normal rainfall (November 1974-July 1975) for the calibrated transient simulation for 169 target wells is 25.8 feet, which represents about 4 percent of the total head difference across the model area. The root-mean-square error ranged from 6.3 to 30.4 feet in 12 target wells with long-term water-level measurements for varying periods during 1947-2000 for the calibrated transient simulation for the diffuse-flow Edwards aquifer model, and these errors represent 5.0 to 31.3 percent of the range in water-level fluctuations of each of those wells. The root-mean-square errors for the five major springs in the San Antonio segment of the aquifer for the calibrated transient simulation, as a percentage of the range of discharge fluctuations measured at the springs, varied from 7.2 percent for San Marcos Springs and 8.1 percent for Comal Springs to 28.8 percent for Leona Springs. The root-mean-square errors for hydraulic heads for the conduit-flow Edwards aquifer model are 27, 76, and 30 percent greater than those for the diffuse-flow Edwards aquifer model for the steady-state, drought, and above-normal rainfall synoptic time periods, respectively. 
The goodness-of-fit between measured and simulated springflows is similar for Comal, San Marcos, and Leona Springs for the diffuse-flow Edwards aquifer model and the conduit-flow Edwards aquifer model. The root-mean-square errors for Comal and Leona Springs were 15.6 and 21.3 percent less, respectively, whereas the root-mean-square error for San Marcos Springs was 3.3 percent greater for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. The root-mean-square errors for San Antonio and San Pedro Springs were appreciably greater, 80.2 and 51.0 percent, respectively, for the diffuse-flow Edwards aquifer model. The simulated water budgets for the diffuse-flow Edwards aquifer model are similar to those for the conduit-flow Edwards aquifer model. Differences in percentage of total sources or discharges for a budget component are 2.0 percent or less for all budget components for the steady-state and transient simulations. The largest difference in terms of the magnitude of water budget components for the transient simulation for 1956 was a decrease of about 10,730 acre-feet per year (about 2 percent) in springflow for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. This decrease in springflow (a water budget discharge) was largely offset by the decreased net loss of water from storage (a water budget source) of about 10,500 acre-feet per year.
Pilot age and error in air taxi crashes.
Rebok, George W; Qiang, Yandong; Baker, Susan P; Li, Guohua
2009-07-01
The associations of pilot error with the type of flight operations and basic weather conditions are well documented. The correlation between pilot characteristics and error is less clear. This study aims to examine whether pilot age is associated with the prevalence and patterns of pilot error in air taxi crashes. Investigation reports from the National Transportation Safety Board for crashes involving non-scheduled Part 135 operations (i.e., air taxis) in the United States between 1983 and 2002 were reviewed to identify pilot error and other contributing factors. Crash circumstances and the presence and type of pilot error were analyzed in relation to pilot age using Chi-square tests. Of the 1751 air taxi crashes studied, 28% resulted from mechanical failure, 25% from loss of control at landing or takeoff, 7% from visual flight rule conditions into instrument meteorological conditions, 7% from fuel starvation, 5% from taxiing, and 28% from other causes. Crashes among older pilots were more likely to occur during the daytime rather than at night and off airport than on airport. The patterns of pilot error in air taxi crashes were similar across age groups. Of the errors identified, 27% were flawed decisions, 26% were inattentiveness, 23% mishandled aircraft kinetics, 15% mishandled wind and/or runway conditions, and 11% were others. Pilot age is associated with crash circumstances but not with the prevalence and patterns of pilot error in air taxi crashes. Lack of age-related differences in pilot error may be attributable to the "safe worker effect."
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
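A generic illustration of building an empirical covariance from weighted-least-squares residuals; this sketch is an assumption for illustration and does not reproduce the paper's specific reinterpretation.

```python
import numpy as np

# WLS estimate of x from y = H x + v, Cov(v) = R, plus an "empirical"
# covariance formed by mapping the observed post-fit residual scatter back
# into state space (illustrative construction, not the paper's derivation).
def wls_with_empirical_covariance(H, y, R):
    W = np.linalg.inv(R)
    P_theory = np.linalg.inv(H.T @ W @ H)        # theoretical covariance
    x_hat = P_theory @ H.T @ W @ y               # WLS state estimate
    r = y - H @ x_hat                            # post-fit residuals
    G = P_theory @ H.T @ W                       # estimate sensitivity to data
    P_emp = G @ np.outer(r, r) @ G.T             # residual-driven covariance
    return x_hat, P_theory, P_emp
```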
Error quantification of abnormal extreme high waves in Operational Oceanographic System in Korea
NASA Astrophysics Data System (ADS)
Jeong, Sang-Hun; Kim, Jinah; Heo, Ki-Young; Park, Kwang-Soon
2017-04-01
In the winter season, large-height swell-like waves occur on the east coast of Korea, causing property damage and loss of human life. These waves are generated by locally strong winds produced by temperate cyclones moving eastward in the East Sea off the Korean peninsula. Because the waves often arrive in clear weather, the damage can be particularly severe, so it is necessary to predict and forecast large-height swell-like waves in order to prevent and respond to coastal damage. In Korea, an operational oceanographic system (KOOS) has been developed by the Korea Institute of Ocean Science and Technology (KIOST); KOOS provides daily 72-hour ocean forecasts of wind, water elevation, sea currents, water temperature, salinity, and waves, computed from meteorological and hydrodynamic models (WRF, ROMS, MOM, and MOHID) as well as wave models (WW-III and SWAN). To evaluate model performance and guarantee a certain level of forecast accuracy, a Skill Assessment (SA) system was established as one of the modules in KOOS. It compares model results with in-situ observation data and quantifies model errors with skill scores. The statistics used in the skill assessment include measures of both error and correlation, such as root-mean-square error (RMSE), root-mean-square error percentage (RMSE%), mean bias (MB), correlation coefficient (R), scatter index (SI), circular correlation (CC), and central frequency (CF), the frequency with which errors lie within acceptable error criteria. These scores are used not only to quantify errors but also to improve forecast accuracy by providing interactive feedback. However, abnormal phenomena such as large-height swell-like waves on the east coast of Korea require a more advanced and optimized error quantification method, one that supports prediction of the abnormal waves and improves forecast accuracy by guiding modification of model physics and numerics through sensitivity tests. In this study, we propose an error quantification method suited to abnormal high waves caused by local weather conditions. Furthermore, we describe how the quantified errors contribute to improved wind-wave modelling by applying data assimilation and utilizing reanalysis data.
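For concreteness, a minimal sketch of the kind of skill scores listed above; the exact RMSE% normalization and the acceptance criterion used for the central frequency are assumptions for illustration.

```python
import numpy as np

# Skill scores comparing model forecasts against in-situ observations.
def skill_scores(obs, model, accept=0.5):
    err = model - obs
    rmse = np.sqrt(np.mean(err ** 2))
    return {
        "RMSE": rmse,
        "RMSE%": 100.0 * rmse / np.mean(np.abs(obs)),   # normalization assumed
        "MB": np.mean(err),                              # mean bias
        "R": np.corrcoef(obs, model)[0, 1],              # correlation coefficient
        "SI": rmse / np.mean(obs),                        # scatter index
        "CF": np.mean(np.abs(err) <= accept),             # central frequency
    }

obs = np.array([1.2, 1.5, 2.1, 3.0, 2.4])
model = np.array([1.0, 1.7, 2.0, 3.4, 2.2])
print(skill_scores(obs, model))
```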
Digital transceiver design for two-way AF-MIMO relay systems with imperfect CSI
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Chou, Yu-Fei; Chen, Kui-He
2013-09-01
In this paper, combined optimization of the terminal precoders/equalizers and the single-relay precoder is proposed for an amplify-and-forward (AF) multiple-input multiple-output (MIMO) two-way single-relay system with correlated channel uncertainties. Both the terminal transceivers and the relay precoding matrix are designed based on the minimum mean square error (MMSE) criterion when the terminals are unable to completely cancel self-interference due to imperfect correlated channel state information (CSI). This robust joint optimization of the beamforming and precoding matrices under power constraints is neither concave nor convex, so a nonlinear matrix-form conjugate gradient (MCG) algorithm is applied to find locally optimal solutions. Simulation results show that the robust transceiver design effectively overcomes the bit-error-rate (BER) loss caused by correlated channel uncertainties and residual self-interference.
Analysis of tractable distortion metrics for EEG compression applications.
Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando
2012-07-01
Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio.
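The two distortion criteria under comparison can be written compactly; a minimal sketch with the usual definitions (the array names are illustrative).

```python
import numpy as np

# Percentage root-mean-square difference (relative, unitless) and
# root-mean-square error (in the signal's own units, e.g. microvolts).
def prd(x, x_rec):
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def rmse(x, x_rec):
    return np.sqrt(np.mean((x - x_rec) ** 2))

x = np.sin(np.linspace(0, 10, 500))
x_rec = x + np.random.normal(0, 0.05, x.shape)   # reconstructed signal
print(prd(x, x_rec), rmse(x, x_rec))
```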
mBEEF-vdW: Robust fitting of error estimation density functionals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes
Here, we propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10% improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.
Fowler, Stephanie M; Schmidt, Heinar; van de Ven, Remy; Wynn, Peter; Hopkins, David L
2014-12-01
A Raman spectroscopic hand-held device was used to predict the shear force (SF) of 80 fresh lamb m. longissimus lumborum (LL) at 1 and 5 days post mortem (PM). Traditional predictors of SF, including sarcomere length (SL), particle size (PS), cooking loss (CL), percentage myofibrillar breaks, and pH, were also measured. SF values were regressed against Raman spectra using partial least squares regression and against the traditional predictors using linear regression. The best prediction of shear force used spectra at 1 day PM to predict shear force at 1 day, giving a root mean square error of prediction (RMSEP) of 13.6 (Null = 14.0) and a cross-validated R² between observed and predicted values (R²cv) of 0.06. Overall, for fresh LL, the predictability of SF by either the Raman hand-held probe or the traditional predictors was low.
A Survey of Terrain Modeling Technologies and Techniques
2007-09-01
ERDC/TEC TR-08-2. Test planning, rehearsal, and distributed test events for Future Combat Systems (FCS) require... [remainder of this report abstract is extraction-damaged; the recoverable fragments are figure captions comparing digital surface model (DSM, original data) and digital terrain model (DTM, bare-earth) errors along five lines of control points].
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng
2006-12-01
An adaptive minimum mean-square error (MMSE) array receiver based on a fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform a nonlinear mapping of the squared error and the squared error variation, (e²(n), Δe²(n)), into a forgetting factor λ(n). For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step size μ(n). This receiver provides both fast convergence/tracking and small steady-state misadjustment compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB in bit-error rate (BER), respectively, for multipath fading channels.
Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera
NASA Astrophysics Data System (ADS)
Rahman, Samiur; Ullah, Sana; Ullah, Sehat
2018-01-01
Obstacle detection can improve the mobility as well as the safety of visually impaired people. In this paper, we present a system using a mobile camera for visually impaired people. The proposed algorithm works in indoor environments and uses a very simple technique based on a few pre-stored floor images. In an indoor environment, all unique floor types are considered and a single image is stored for each unique floor type. These floor images are treated as reference images. The algorithm acquires an input image frame, selects a region of interest, and scans it for obstacles using the pre-stored floor images. The algorithm compares the present frame and the next frame and computes the mean square error between the two frames. If the mean square error is less than a threshold value α, there is no obstacle in the next frame. If the mean square error is greater than α, there are two possibilities: either there is an obstacle or the floor type has changed. To check whether the floor has changed, the algorithm computes the mean square error between the next frame and all stored floor types. If the minimum of these mean square errors is less than the threshold α, the floor has changed; otherwise an obstacle exists. The proposed algorithm works in real time and 96% accuracy has been achieved.
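A minimal sketch of the frame-comparison logic just described; the region-of-interest handling, the threshold α, and the reference-floor set are illustrative assumptions.

```python
import numpy as np

# Decide between "no obstacle", "floor changed", and "obstacle" by comparing
# frame mean square errors against a threshold alpha, as described above.
def detect_obstacle(curr_roi, next_roi, floor_refs, alpha):
    mse = lambda a, b: np.mean((a.astype(float) - b.astype(float)) ** 2)
    if mse(curr_roi, next_roi) < alpha:
        return "no obstacle"
    # large change: either the floor type changed or an obstacle appeared
    if min(mse(next_roi, ref) for ref in floor_refs) < alpha:
        return "floor changed"
    return "obstacle"

# toy usage with synthetic grayscale patches
rng = np.random.default_rng(0)
floor = rng.integers(0, 50, (64, 64))
print(detect_obstacle(floor, floor + 2, [floor], alpha=100.0))
```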
Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error
NASA Astrophysics Data System (ADS)
Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi
2017-12-01
Prediction using a forecasting method is one of the most important activities for an organization. Selecting an appropriate forecasting method is also important, but the error percentage of a method matters even more if decision makers are to act on the forecasts. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least squares method yielded a percentage error of 9.77%, and it was decided that the least squares method is suitable for time series and trend data.
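A minimal sketch of computing MAD and MAPE for a least-squares trend fit, on an illustrative series rather than the study's data.

```python
import numpy as np

# Mean Absolute Deviation and Mean Absolute Percentage Error of a forecast.
def mad(actual, forecast):
    return np.mean(np.abs(actual - forecast))

def mape(actual, forecast):
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

y = np.array([112.0, 118, 132, 129, 121, 135, 148, 148, 136, 119])
t = np.arange(len(y))
slope, intercept = np.polyfit(t, y, 1)        # least squares trend line
fit = slope * t + intercept
print(f"MAD = {mad(y, fit):.2f}, MAPE = {mape(y, fit):.2f}%")
```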
NASA Astrophysics Data System (ADS)
Batra, Arun; Zeidler, James R.; Beex, A. A. Louis
2007-12-01
It has previously been shown that a least-mean-square (LMS) decision-feedback filter can mitigate the effect of narrowband interference (L.-M. Li and L. Milstein, 1983). An adaptive implementation of the filter was shown to converge relatively quickly for mild interference. It is shown here, however, that in the case of severe narrowband interference, the LMS decision-feedback equalizer (DFE) requires a very large number of training symbols for convergence, making it unsuitable for some types of communication systems. This paper investigates the introduction of an LMS prediction-error filter (PEF) as a prefilter to the equalizer and demonstrates that it reduces the convergence time of the two-stage system by as much as two orders of magnitude. It is also shown that the steady-state bit-error rate (BER) performance of the proposed system is still approximately equal to that attained in steady-state by the LMS DFE-only. Finally, it is shown that the two-stage system can be implemented without the use of training symbols. This two-stage structure lowers the complexity of the overall system by reducing the number of filter taps that need to be adapted, while incurring a slight loss in the steady-state BER.
Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki
2014-01-01
Channel estimation is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter controlling three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an invariable step size cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated for MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS and sparse VSS-NLMS algorithms is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results show that the proposed sparse VSS-NLMS algorithms achieve better estimation performance than conventional methods in terms of mean square error (MSE) and bit error rate (BER).
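A minimal sketch of a sparsity-aware NLMS update of the kind discussed above; the zero-attracting l1 penalty and the fixed shrinkage weight are assumptions for illustration, not the authors' exact VSS-NLMS algorithms.

```python
import numpy as np

# One zero-attracting NLMS step: w is the channel estimate, x the input
# (training) vector, d the desired output, mu the step size, rho the
# sparsity-promoting shrinkage weight.
def za_nlms_step(w, x, d, mu, rho, eps=1e-8):
    e = d - w @ x                              # a priori estimation error
    w = w + (mu * e / (eps + x @ x)) * x       # normalized LMS update
    w = w - rho * np.sign(w)                   # zero-attracting shrinkage
    return w, e
```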
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2014 CFR
2014-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2013 CFR
2013-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2012 CFR
2012-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2011 CFR
2011-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
NASA Technical Reports Server (NTRS)
Rahmat-Samii, Y.
1983-01-01
Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
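For context, the classical Ruze relation that underlies such surface-error models (a standard result quoted here for reference, not taken from this abstract) expresses the gain loss of a reflector with uncorrelated random surface error of rms ε at wavelength λ as:

```latex
\frac{G}{G_0} \;=\; \exp\!\left[-\left(\frac{4\pi\,\varepsilon}{\lambda}\right)^{2}\right]
```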
Pilot Age and Error in Air-Taxi Crashes
Rebok, George W.; Qiang, Yandong; Baker, Susan P.; Li, Guohua
2010-01-01
Introduction The associations of pilot error with the type of flight operations and basic weather conditions are well documented. The correlation between pilot characteristics and error is less clear. This study aims to examine whether pilot age is associated with the prevalence and patterns of pilot error in air-taxi crashes. Methods Investigation reports from the National Transportation Safety Board for crashes involving non-scheduled Part 135 operations (i.e., air taxis) in the United States between 1983 and 2002 were reviewed to identify pilot error and other contributing factors. Crash circumstances and the presence and type of pilot error were analyzed in relation to pilot age using Chi-square tests. Results Of the 1751 air-taxi crashes studied, 28% resulted from mechanical failure, 25% from loss of control at landing or takeoff, 7% from visual flight rule conditions into instrument meteorological conditions, 7% from fuel starvation, 5% from taxiing, and 28% from other causes. Crashes among older pilots were more likely to occur during the daytime rather than at night and off airport than on airport. The patterns of pilot error in air-taxi crashes were similar across age groups. Of the errors identified, 27% were flawed decisions, 26% were inattentiveness, 23% mishandled aircraft kinetics, 15% mishandled wind and/or runway conditions, and 11% were others. Conclusions Pilot age is associated with crash circumstances but not with the prevalence and patterns of pilot error in air-taxi crashes. Lack of age-related differences in pilot error may be attributable to the “safe worker effect.” PMID:19601508
Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malinowski, Kathleen T.; Fischell Department of Bioengineering, University of Maryland, College Park, MD; McAvoy, Thomas J.
2012-04-01
Purpose: To investigate the effect of tumor site, measurement precision, tumor-surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor-surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased during the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor-surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3-3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.
Three filters for visualization of phase objects with large variations of phase gradients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sagan, Arkadiusz; Antosiewicz, Tomasz J.; Szoplik, Tomasz
2009-02-20
We propose three amplitude filters for visualization of phase objects. They interact with the spectra of pure-phase objects in the frequency plane and are based on tangent and error functions as well as antisymmetric combination of square roots. The error function is a normalized form of the Gaussian function. The antisymmetric square-root filter is composed of two square-root filters to widen its spatial frequency spectral range. Their advantage over other known amplitude frequency-domain filters, such as linear or square-root graded ones, is that they allow high-contrast visualization of objects with large variations of phase gradients.
Spiral tracing on a touchscreen is influenced by age, hand, implement, and friction.
Heintz, Brittany D; Keenan, Kevin G
2018-01-01
Dexterity impairments are well documented in older adults, though it is unclear how these influence touchscreen manipulation. This study examined age-related differences while tracing on high- and low-friction touchscreens using the finger or stylus. 26 young and 24 older adults completed an Archimedes spiral tracing task on a touchscreen mounted on a force sensor. Root mean square error was calculated to quantify performance. Root mean square error increased by 29.9% for older vs. young adults using the fingertip, but was similar to young adults when using the stylus. Although other variables (e.g., touchscreen usage, sensation, and reaction time) differed between age groups, these variables were not related to increased error in older adults while using their fingertip. Root mean square error also increased on the low-friction surface for all subjects. These findings suggest that utilizing a stylus and increasing surface friction may improve touchscreen use in older adults.
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
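A minimal sketch of estimating the two criteria, assuming observations, fixed-model predictions, and an ensemble of predictions sampled over model structure, inputs, and parameters; the bias-plus-variance decomposition shown is an illustrative reading of the abstract.

```python
import numpy as np

def msep_fixed(obs, pred):
    # mean squared error of prediction for a fixed model
    return np.mean((obs - pred) ** 2)

def msep_uncertain(obs, ensemble):
    # ensemble: array of shape (n_members, n_points)
    mean_pred = ensemble.mean(axis=0)
    bias_sq = np.mean((obs - mean_pred) ** 2)    # squared bias term (hindcasts)
    model_var = ensemble.var(axis=0).mean()      # model variance term (simulation)
    return bias_sq + model_var

obs = np.array([3.1, 2.8, 4.0, 3.5])
pred = np.array([3.0, 3.0, 3.8, 3.6])
ensemble = pred + np.random.normal(0, 0.2, (50, len(pred)))
print(msep_fixed(obs, pred), msep_uncertain(obs, ensemble))
```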
mBEEF-vdW: Robust fitting of error estimation density functionals
NASA Astrophysics Data System (ADS)
Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; Jacobsen, Karsten W.; Bligaard, Thomas
2016-06-01
We propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012), 10.1103/PhysRevB.85.235149; J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014), 10.1063/1.4870397]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10 % improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.
NASA Astrophysics Data System (ADS)
Saathoff, Jonathan
2018-04-01
Dissipative Particle Dynamics (DPD) provides a tool for studying phase behavior and interfacial phenomena for complex mixtures and macromolecules. Methods to quickly and automatically parameterize DPD greatly increase its effectiveness. One such method is to map predicted activity coefficients derived from COSMO-SAC onto DPD parameter sets. However, there are serious limitations to the accuracy of this mapping, including the inability of single DPD beads to reproduce asymmetric infinite dilution activity coefficients, the loss of precision when reusing parameters for different molecular fragments, and the error due to bonding beads together. This report describes these effects in quantitative detail and provides methods to mitigate much of their deleterious impact, including a novel approach to remove errors caused by bonding DPD beads together. Using these methods, logarithmic hexane/water partition coefficients were calculated for 61 molecules. The root mean-squared error of these calculations with respect to the final mapping procedure was 0.14, a very low value. Cognizance of the above limitations can greatly enhance the predictive power of DPD.
Hypothesis Testing Using Factor Score Regression
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2015-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
Some Results on Mean Square Error for Factor Score Prediction
ERIC Educational Resources Information Center
Krijnen, Wim P.
2006-01-01
For the confirmatory factor model, a series of inequalities is given with respect to the mean square error (MSE) of three main factor score predictors. The eigenvalues of these MSE matrices are a monotonic function of the eigenvalues of the matrix Γ_ρ = Θ^(1/2) Λ_ρ′ Ψ_ρ^(…
Weighted linear regression using D2H and D2 as the independent variables
Hans T. Schreuder; Michael S. Williams
1998-01-01
Several error structures for weighted regression equations used for predicting volume were examined for 2 large data sets of felled and standing loblolly pine trees (Pinus taeda L.). The generally accepted model with variance of error proportional to the value of the covariate squared ( D2H = diameter squared times height or D...
ERIC Educational Resources Information Center
Savalei, Victoria
2012-01-01
The fit index root mean square error of approximation (RMSEA) is extremely popular in structural equation modeling. However, its behavior under different scenarios remains poorly understood. The present study generates continuous curves where possible to capture the full relationship between RMSEA and various "incidental parameters," such as…
A method of bias correction for maximal reliability with dichotomous measures.
Penev, Spiridon; Raykov, Tenko
2010-02-01
This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.
Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values
2016-12-01
MMSE (minimum mean-square error) target sample estimation using non-orthogonal basis functions: although the multi-aspect basis functions are not orthogonal, they can still be used in a minimum mean-square error (MMSE) estimator that models the object echo as a weighted sum of the multi-aspect basis functions. Minimum mean-square error estimation is applied to target imaging with synthetic aperture sonar. [Remainder of this report abstract is extraction-damaged; report documentation-page residue removed.]
Pilot error in air carrier accidents: does age matter?
Li, Guohua; Grabowski, Jurek G; Baker, Susan P; Rebok, George W
2006-07-01
The relationship between pilot age and safety performance has been the subject of research and controversy since the "Age 60 Rule" became effective in 1960. This study aimed to examine age-related differences in the prevalence and patterns of pilot error in air carrier accidents. Investigation reports from the National Transportation Safety Board for accidents involving Part 121 operations in the United States between 1983 and 2002 were reviewed to identify pilot error and other contributing factors. Accident circumstances and the presence and type of pilot error were analyzed in relation to pilot age using Chi-square tests. Of the 558 air carrier accidents studied, 25% resulted from turbulence, 21% from mechanical failure, 16% from taxiing events, 13% from loss of control at landing or takeoff, and 25% from other causes. Accidents involving older pilots were more likely to be caused by turbulence, whereas accidents involving younger pilots were more likely to be taxiing events. Pilot error was a contributing factor in 34%, 38%, 35%, and 34% of the accidents involving pilots ages 25-34 yr, 35-44 yr, 45-54 yr, and 55-59 yr, respectively (p = 0.87). The patterns of pilot error were similar across age groups. Overall, 26% of the pilot errors identified were inattentiveness, 22% flawed decisions, 22% mishandled aircraft kinetics, and 11% poor crew interactions. The prevalence and patterns of pilot error in air carrier accidents do not seem to change with pilot age. The lack of association between pilot age and error may be due to the "safe worker effect" resulting from the rigorous selection processes and certification standards for professional pilots.
Longato, Enrico; Garrido, Maria; Saccardo, Desy; Montesinos Guevara, Camila; Mani, Ali R; Bolognesi, Massimo; Amodio, Piero; Facchinetti, Andrea; Sparacino, Giovanni; Montagnese, Sara
2017-01-01
A popular method to estimate proximal/distal temperature (TPROX and TDIST) consists in calculating a weighted average of nine wireless sensors placed on pre-defined skin locations. Specifically, TPROX is derived from five sensors placed on the infra-clavicular and mid-thigh area (left and right) and abdomen, and TDIST from four sensors located on the hands and feet. In clinical practice, the loss/removal of one or more sensors is a common occurrence, but limited information is available on how this affects the accuracy of temperature estimates. The aim of this study was to determine the accuracy of temperature estimates in relation to number/position of sensors removed. Thirteen healthy subjects wore all nine sensors for 24 hours and reference TPROX and TDIST time-courses were calculated using all sensors. Then, all possible combinations of reduced subsets of sensors were simulated and suitable weights for each sensor calculated. The accuracy of TPROX and TDIST estimates resulting from the reduced subsets of sensors, compared to reference values, was assessed by the mean squared error, the mean absolute error (MAE), the cross-validation error and the 25th and 75th percentiles of the reconstruction error. Tables of the accuracy and sensor weights for all possible combinations of sensors are provided. For instance, in relation to TPROX, a subset of three sensors placed in any combination of three non-homologous areas (abdominal, right or left infra-clavicular, right or left mid-thigh) produced an error of 0.13°C MAE, while the loss/removal of the abdominal sensor resulted in an error of 0.25°C MAE, with the greater impact on the quality of the reconstruction. This information may help researchers/clinicians: i) evaluate the expected goodness of their TPROX and TDIST estimates based on the number of available sensors; ii) select the most appropriate subset of sensors, depending on goals and operational constraints.
Liu, Zhijian; Li, Hao; Tang, Xindong; Zhang, Xinyu; Lin, Fan; Cheng, Kewei
2016-01-01
Heat collection rate and heat loss coefficient are crucial indicators for the evaluation of in-service water-in-glass evacuated tube solar water heaters. However, their direct determination requires complex detection devices and a series of standard experiments, which wastes considerable time and manpower. To address this problem, we previously used artificial neural networks and support vector machine to develop precise knowledge-based models for predicting the heat collection rates and heat loss coefficients of water-in-glass evacuated tube solar water heaters, setting the properties measured by "portable test instruments" as the independent variables. Robust software for this determination was also developed. However, in previous results, the prediction accuracy for heat loss coefficients still lagged behind that for heat collection rates. Also, in practical applications, even a small reduction in root mean square errors (RMSEs) can sometimes significantly improve the evaluation and business processes. As a further study, in this short report, we show that a novel and fast machine learning algorithm, the extreme learning machine, can generate better predictions for the heat loss coefficient, reducing the average RMSE to 0.67 in testing.
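For context, the extreme learning machine referred to above is a single-hidden-layer network whose hidden weights are drawn at random, so that only the output weights are fitted, by ordinary least squares. The sketch below is a generic illustration of that idea in Python/numpy; the network size, feature count and synthetic data are placeholders, not the authors' model, inputs or results.

import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    # Basic extreme learning machine: random hidden layer, least-squares output weights.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))    # random input-to-hidden weights (kept fixed)
    b = rng.normal(size=n_hidden)                  # random hidden biases
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights by least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage with synthetic data standing in for the measured heater parameters.
rng = np.random.default_rng(1)
X = rng.random((200, 7))
y = X @ np.array([1.0, 0.5, -0.3, 0.2, 0.1, 0.0, 0.4]) + 0.05 * rng.normal(size=200)
W, b, beta = elm_train(X[:150], y[:150])
rmse = np.sqrt(np.mean((elm_predict(X[150:], W, b, beta) - y[150:]) ** 2))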
NASA Technical Reports Server (NTRS)
Rutledge, Charles K.
1988-01-01
The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
A risk-based prospective payment system that integrates patient, hospital and national costs.
Siegel, C; Jones, K; Laska, E; Meisner, M; Lin, S
1992-05-01
We suggest that a desirable form for prospective payment for inpatient care is hospital average cost plus a linear combination of individual patient and national average cost. When the coefficients are chosen to minimize mean squared error loss between payment and costs, the payment has efficiency and access incentives. The coefficient multiplying patient costs is a hospital specific measure of financial risk of the patient. Access is promoted since providers receive higher reimbursements for risky, high cost patients. Historical cost data can be used to obtain estimates of payment parameters. The method is applied to Medicare data on psychiatric inpatients.
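As a hedged sketch of the general form described (the symbols are mine, not the authors' notation), the payment rule and its fitting criterion can be written as

\[ P_{ij} = \bar{c}_j + \alpha\, c_{ij} + \beta\, \bar{c}, \qquad (\hat{\alpha},\hat{\beta}) = \arg\min_{\alpha,\beta}\; \mathbb{E}\big[(P_{ij} - c_{ij})^{2}\big], \]

where \(c_{ij}\) is the cost of patient i in hospital j, \(\bar{c}_j\) the hospital average cost, and \(\bar{c}\) the national average cost; the fitted coefficient multiplying patient costs then plays the role of the hospital-specific risk measure mentioned above.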
Optimal estimation of the optomechanical coupling strength
NASA Astrophysics Data System (ADS)
Bernád, József Zsolt; Sanavio, Claudio; Xuereb, André
2018-06-01
We apply the formalism of quantum estimation theory to obtain information about the value of the nonlinear optomechanical coupling strength. In particular, we discuss the minimum mean-square error estimator and a quantum Cramér-Rao-type inequality for the estimation of the coupling strength. Our estimation strategy reveals some cases where quantum statistical inference is inconclusive and merely results in the reinforcement of prior expectations. We show that these situations also involve the highest expected information losses. We demonstrate that interaction times on the order of one time period of mechanical oscillations are the most suitable for our estimation scenario, and compare situations involving different photon and phonon excitations.
Eta Squared, Partial Eta Squared, and Misreporting of Effect Size in Communication Research.
ERIC Educational Resources Information Center
Levine, Timothy R.; Hullett, Craig R.
2002-01-01
Alerts communication researchers to potential errors stemming from the use of SPSS (Statistical Package for the Social Sciences) to obtain estimates of eta squared in analysis of variance (ANOVA). Strives to clarify issues concerning the development and appropriate use of eta squared and partial eta squared in ANOVA. Discusses the reporting of…
NASA Technical Reports Server (NTRS)
Amling, G. E.; Holms, A. G.
1973-01-01
A computer program is described that performs a statistical multiple-decision procedure called chain pooling. The number of mean squares assigned to the error variance is conditioned on the relative magnitudes of the mean squares. The model selection is done according to user-specified levels of type 1 or type 2 error probabilities.
Validating Clusters with the Lower Bound for Sum-of-Squares Error
ERIC Educational Resources Information Center
Steinley, Douglas
2007-01-01
Given that a minor condition holds (e.g., the number of variables is greater than the number of clusters), a nontrivial lower bound for the sum-of-squares error criterion in K-means clustering is derived. By calculating the lower bound for several different situations, a method is developed to determine the adequacy of cluster solution based on…
A suggestion for computing objective function in model calibration
Wu, Yiping; Liu, Shuguang
2014-01-01
A parameter-optimization process (model calibration) is usually required for numerical model applications, which involves the use of an objective function to determine the model cost (model-data errors). The sum of squared errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, the ‘square error’ calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies—a hydrological model calibration and a biogeochemical model calibration—to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that the ‘absolute error’ functions (SAR and SARD) are superior to the ‘square error’ functions (SSR and SSRD) as objective functions for model calibration, and SAR behaved the best (with the least error and highest efficiency). This study suggests that SSR might be overused in real applications, and SAR may be a reasonable choice in common optimization implementations without emphasizing either high or low values (e.g., modeling for supporting resources management).
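To make the four candidate objective functions concrete, here is a short Python sketch (variable names are mine, not the paper's) that computes them for a vector of observed and simulated values; a single extreme value inflates the squared-error measures far more than the absolute-error ones.

import numpy as np

def calibration_objectives(obs, sim):
    # SSR, SAR, SSRD and SARD for paired observed/simulated values.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    rel = err / obs                       # relative deviation (assumes obs has no zeros)
    return {
        "SSR":  np.sum(err ** 2),         # sum of squared errors
        "SAR":  np.sum(np.abs(err)),      # sum of absolute errors
        "SSRD": np.sum(rel ** 2),         # sum of squared relative deviations
        "SARD": np.sum(np.abs(rel)),      # sum of absolute relative deviations
    }

print(calibration_objectives([1.0, 2.0, 3.0, 50.0], [1.1, 1.9, 3.2, 40.0]))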
Lim, Jun-Seok; Pang, Hee-Suk
2016-01-01
In this paper an [Formula: see text]-regularized recursive total least squares (RTLS) algorithm is considered for sparse system identification. Although recursive least squares (RLS) has been successfully applied in sparse system identification, the estimation performance of RLS-based algorithms becomes worse when both input and output are contaminated by noise (the error-in-variables problem). We propose an algorithm to handle the error-in-variables problem. The proposed [Formula: see text]-RTLS algorithm is an RLS-like iteration using [Formula: see text] regularization. The proposed algorithm not only gives excellent performance but also reduces the required complexity through effective handling of the matrix inversion. Simulations demonstrate the superiority of the proposed [Formula: see text]-regularized RTLS in the sparse system identification setting.
Al-Samman, A. M.; Rahman, T. A.; Azmi, M. H.; Hindia, M. N.; Khan, I.; Hanafi, E.
2016-01-01
This paper presents an experimental characterization of millimeter-wave (mm-wave) channels in the 6.5 GHz, 10.5 GHz, 15 GHz, 19 GHz, 28 GHz and 38 GHz frequency bands in an indoor corridor environment. More than 4,000 power delay profiles were measured across the bands using an omnidirectional transmitter antenna and a highly directional horn receiver antenna for both co- and cross-polarized antenna configurations. This paper develops a new path-loss model to account for the frequency attenuation with distance, which we term the frequency attenuation (FA) path-loss model; it introduces a frequency-dependent attenuation factor. The large-scale path loss was characterized based on both new and well-known path-loss models. A general and less complex method is also proposed to estimate the cross-polarization discrimination (XPD) factor of the close-in reference distance with the XPD (CIX) and ABG with the XPD (ABGX) path-loss models, avoiding the computational complexity of the minimum mean square error (MMSE) approach. Moreover, small-scale parameters such as root mean square (RMS) delay spread, mean excess (MN-EX) delay, dispersion factors and maximum excess (MAX-EX) delay parameters were used to characterize the multipath channel dispersion. Multiple statistical distributions for RMS delay spread were also investigated. The results show that our proposed models are simpler and more physically based than other well-known models. The path-loss exponents for all studied models are smaller than that of the free-space model by values in the range of 0.1 to 1.4 for all measured frequencies. The RMS delay spread values varied between 0.2 ns and 13.8 ns, and the dispersion factor values were less than 1 for all measured frequencies. The exponential and Weibull probability distribution models best fit the RMS delay spread empirical distribution for all of the measured frequencies in all scenarios. PMID:27654703
An algorithm for propagating the square-root covariance matrix in triangular form
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Choe, C. Y.
1976-01-01
A method for propagating the square root of the state error covariance matrix in lower triangular form is described. The algorithm can be combined with any triangular square-root measurement update algorithm to obtain a triangular square-root sequential estimation algorithm. The triangular square-root algorithm compares favorably with the conventional sequential estimation algorithm with regard to computation time.
Fitting a function to time-dependent ensemble averaged data.
Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias
2018-05-03
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least-squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software package.
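Read as a linear-in-parameters fit, the idea of keeping the full covariance in the error estimate while still fitting by weighted least squares corresponds to the standard "sandwich" covariance; the sketch below is a generic illustration under that reading, not the authors' WLS-ICE code (X, W and C are the design matrix, weight matrix and full data covariance, all assumed given).

import numpy as np

def wls_with_correlated_errors(X, y, W, C):
    # Weighted least-squares fit of y ~ X @ beta, with parameter errors that
    # account for the full (temporally correlated) data covariance C.
    A = X.T @ W @ X
    beta = np.linalg.solve(A, X.T @ W @ y)
    Ainv = np.linalg.inv(A)
    cov_beta = Ainv @ X.T @ W @ C @ W @ X @ Ainv   # sandwich covariance
    return beta, np.sqrt(np.diag(cov_beta))

When the data are uncorrelated and W is chosen as the inverse of C, the sandwich collapses to the familiar (X'WX)^-1 parameter covariance.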
Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan
2016-01-01
Comprehensive two-dimensional gas chromatography and flame ionization detection combined with unfolded-partial least squares is proposed as a simple, fast and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline corrected using a two-dimensional asymmetric least squares algorithm. The number of significant partial least squares components to build the model is determined using the minimum value of the root-mean-square error of leave-one-out cross validation, which was 4. In this regard, blends of gasoline with kerosene, white spirit and paint thinner as frequently used adulterants are used to make calibration samples. Appropriate statistical parameters (regression coefficients of 0.996-0.998, root-mean-square errors of prediction of 0.005-0.010 and relative errors of prediction of 1.54-3.82% for the calibration set) show the reliability of the developed method. In addition, the developed method is externally validated with three samples in the validation set (with a relative error of prediction below 10.0%). Finally, to test the applicability of the proposed strategy for the analysis of real samples, five real gasoline samples collected from gas stations are used for this purpose and the gasoline proportions were in the range of 70-85%. Also, the relative standard deviations were below 8.5% for different samples in the prediction set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
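Choosing the number of components from the minimum leave-one-out RMSECV can be sketched as follows; scikit-learn is an assumption of this illustration (it is not named in the paper), and X and Y stand for the unfolded chromatographic features and the adulterant proportions.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def choose_n_components(X, Y, max_comp=10):
    # Return the component count minimizing leave-one-out RMSECV, plus the RMSECV curve.
    rmsecv = []
    for n in range(1, max_comp + 1):
        pred = cross_val_predict(PLSRegression(n_components=n), X, Y, cv=LeaveOneOut())
        rmsecv.append(np.sqrt(np.mean((pred - Y) ** 2)))
    return int(np.argmin(rmsecv)) + 1, rmsecv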
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1974-01-01
Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.
Least-squares model-based halftoning
NASA Astrophysics Data System (ADS)
Pappas, Thrasyvoulos N.; Neuhoff, David L.
1992-08-01
A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction, by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well-known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be implemented with the Viterbi algorithm. Unfortunately, no closed-form solution can be found in two dimensions. The two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high quality documents using high fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach permits the halftoner to be tuned to the individual printer, whose characteristics may vary considerably from those of other printers, for example, write-black vs. write-white laser printers.
McGregor, Heather R.; Pun, Henry C. H.; Buckingham, Gavin; Gribble, Paul L.
2016-01-01
The human sensorimotor system is routinely capable of making accurate predictions about an object's weight, which allows for energetically efficient lifts and prevents objects from being dropped. Often, however, poor predictions arise when the weight of an object can vary and sensory cues about object weight are sparse (e.g., picking up an opaque water bottle). The question arises, what strategies does the sensorimotor system use to make weight predictions when one is dealing with an object whose weight may vary? For example, does the sensorimotor system use a strategy that minimizes prediction error (minimal squared error) or one that selects the weight that is most likely to be correct (maximum a posteriori)? In this study we dissociated the predictions of these two strategies by having participants lift an object whose weight varied according to a skewed probability distribution. We found, using a small range of weight uncertainty, that four indexes of sensorimotor prediction (grip force rate, grip force, load force rate, and load force) were consistent with a feedforward strategy that minimizes the square of prediction errors. These findings match research in the visuomotor system, suggesting parallels in underlying processes. We interpret our findings within a Bayesian framework and discuss the potential benefits of using a minimal squared error strategy. NEW & NOTEWORTHY Using a novel experimental model of object lifting, we tested whether the sensorimotor system models the weight of objects by minimizing lifting errors or by selecting the statistically most likely weight. We found that the sensorimotor system minimizes the square of prediction errors for object lifting. This parallels the results of studies that investigated visually guided reaching, suggesting an overlap in the underlying mechanisms between tasks that involve different sensory systems. PMID:27760821
NASA Astrophysics Data System (ADS)
Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.
2017-12-01
In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms can adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses in terms of mean and mean square performance for the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve superior performance to the existing DRLS algorithm with a fixed forgetting factor when applied to scenarios of distributed parameter and spectrum estimation. In addition, the simulation results also demonstrate a good match with our analytical expressions.
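A generic recursive least-squares update with a simple error-driven variable forgetting factor is sketched below; the adaptation rule, bounds and variable names are assumptions made for illustration, not the proposed VFF-DRLS algorithm (which additionally diffuses estimates across sensor nodes).

import numpy as np

def vff_rls_step(w, P, x, d, lam, lam_min=0.90, lam_max=0.999, gamma=0.05):
    # One RLS update; initialize with w = zeros, P = large * identity, lam = lam_max.
    e = d - w @ x                                   # a priori error
    k = P @ x / (lam + x @ P @ x)                   # gain vector
    w = w + k * e                                   # parameter update
    P = (P - np.outer(k, x) @ P) / lam              # inverse-correlation update
    lam = np.clip(lam_max - gamma * e * e, lam_min, lam_max)  # larger error, smaller lam, more forgetting
    return w, P, lam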
Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena
2015-04-15
It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. Therefore, the RVA can complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulting prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower than, or of the same order as, the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.
De-noising of 3D multiple-coil MR images using modified LMMSE estimator.
Yaghoobi, Nima; Hasanzadeh, Reza P R
2018-06-20
De-noising is a crucial topic in Magnetic Resonance Imaging (MRI); the goal is to suppress noise while minimizing the loss of Magnetic Resonance (MR) image information and preserving details. Nowadays the multiple-coil MRI system is preferred to the single-coil one due to its acceleration of the imaging process. Because the noise models in single-coil and multiple-coil MRI systems differ, de-noising methods that are mostly adapted to single-coil MRI systems do not work appropriately with multiple-coil ones. The noise in single-coil MRI systems is Rician, while in multiple-coil systems (if no subsampling occurs in k-space or the GRAPPA reconstruction process is done in the coils) it follows a noncentral chi (nc-χ) distribution. In this paper, a new filtering method based on the Linear Minimum Mean Square Error (LMMSE) estimator is proposed for multiple-coil MR images corrupted by nc-χ noise. In the presented method, to obtain an optimum similarity selection of voxels, the Bayesian Mean Square Error (BMSE) criterion is used and proved for the nc-χ noise model, and a nonlocal voxel selection methodology is proposed for the nc-χ distribution. The results illustrate robust and accurate performance compared to the related state-of-the-art methods, either on ideal nc-χ images or GRAPPA reconstructed ones. Copyright © 2018. Published by Elsevier Inc.
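For reference, the generic linear minimum mean square error estimator that such filters specialize (with the moments computed under the nc-χ model in the paper's setting) has the closed form

\[ \hat{\mathbf{x}}_{\mathrm{LMMSE}} = \boldsymbol{\mu}_x + \mathbf{C}_{xy}\,\mathbf{C}_{yy}^{-1}\,(\mathbf{y} - \boldsymbol{\mu}_y), \]

where \(\mathbf{y}\) is the observed noisy signal, \(\boldsymbol{\mu}_x\) and \(\boldsymbol{\mu}_y\) are the prior means, and \(\mathbf{C}_{xy}\), \(\mathbf{C}_{yy}\) are the cross- and auto-covariances estimated from the selected voxels.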
ERIC Educational Resources Information Center
Pan, Tianshu; Yin, Yue
2012-01-01
In the discussion of mean square difference (MSD) and standard error of measurement (SEM), Barchard (2012) concluded that the MSD between 2 sets of test scores is greater than 2(SEM)² and SEM underestimates the score difference between 2 tests when the 2 tests are not parallel. This conclusion has limitations for 2 reasons. First,…
ERIC Educational Resources Information Center
Li, Libo; Bentler, Peter M.
2011-01-01
MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of…
Quantitative susceptibility mapping: Report from the 2016 reconstruction challenge.
Langkammer, Christian; Schweser, Ferdinand; Shmueli, Karin; Kames, Christian; Li, Xu; Guo, Li; Milovic, Carlos; Kim, Jinsuh; Wei, Hongjiang; Bredies, Kristian; Buch, Sagar; Guo, Yihao; Liu, Zhe; Meineke, Jakob; Rauscher, Alexander; Marques, José P; Bilgic, Berkin
2018-03-01
The aim of the 2016 quantitative susceptibility mapping (QSM) reconstruction challenge was to test the ability of various QSM algorithms to recover the underlying susceptibility from phase data faithfully. Gradient-echo images of a healthy volunteer were acquired at 3T in a single orientation with 1.06 mm isotropic resolution. A reference susceptibility map was provided, which was computed using the susceptibility tensor imaging algorithm on data acquired at 12 head orientations. Susceptibility maps calculated from the single orientation data were compared against the reference susceptibility map. Deviations were quantified using the following metrics: root mean squared error (RMSE), structure similarity index (SSIM), high-frequency error norm (HFEN), and the error in selected white and gray matter regions. Twenty-seven submissions were evaluated. Most of the best scoring approaches estimated the spatial frequency content in the ill-conditioned domain of the dipole kernel using compressed sensing strategies. The top 10 maps in each category had similar error metrics but substantially different visual appearance. Because QSM algorithms were optimized to minimize error metrics, the resulting susceptibility maps suffered from over-smoothing and conspicuity loss in fine features such as vessels. As such, the challenge highlighted the need for better numerical image quality criteria. Magn Reson Med 79:1661-1673, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Low-Complexity Polynomial Channel Estimation in Large-Scale MIMO With Arbitrary Statistics
NASA Astrophysics Data System (ADS)
Shariati, Nafiseh; Bjornson, Emil; Bengtsson, Mats; Debbah, Merouane
2014-10-01
This paper considers pilot-based channel estimation in large-scale multiple-input multiple-output (MIMO) communication systems, also known as massive MIMO, where there are hundreds of antennas at one side of the link. Motivated by the fact that computational complexity is one of the main challenges in such systems, a set of low-complexity Bayesian channel estimators, coined Polynomial ExpAnsion CHannel (PEACH) estimators, are introduced for arbitrary channel and interference statistics. While the conventional minimum mean square error (MMSE) estimator has cubic complexity in the dimension of the covariance matrices, due to an inversion operation, our proposed estimators significantly reduce this to square complexity by approximating the inverse by an L-degree matrix polynomial. The coefficients of the polynomial are optimized to minimize the mean square error (MSE) of the estimate. We show numerically that near-optimal MSEs are achieved with low polynomial degrees. We also derive the exact computational complexity of the proposed estimators, in terms of the floating-point operations (FLOPs), by which we prove that the proposed estimators outperform the conventional estimators in large-scale MIMO systems of practical dimensions while providing reasonable MSEs. Moreover, we show that L need not scale with the system dimensions to maintain a certain normalized MSE. By analyzing different interference scenarios, we observe that the relative MSE loss of using the low-complexity PEACH estimators is smaller in realistic scenarios with pilot contamination. On the other hand, PEACH estimators are not well suited for noise-limited scenarios with high pilot power; therefore, we also introduce the low-complexity diagonalized estimator that performs well in this regime. Finally, we ...
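The complexity saving comes from replacing the matrix inverse in the MMSE estimator with a low-degree matrix polynomial. The snippet below illustrates the principle with a truncated Neumann series applied to a vector; its coefficients are fixed rather than MSE-optimized, so it is only a rough stand-in for the PEACH construction.

import numpy as np

def poly_inverse_apply(A, b, degree=4):
    # Approximate A^{-1} b by alpha * sum_{k=0..degree} (I - alpha*A)^k b,
    # which converges when the spectral radius of (I - alpha*A) is below 1
    # (guaranteed here for Hermitian positive-definite A with alpha = 1/||A||_2).
    alpha = 1.0 / np.linalg.norm(A, 2)
    M = np.eye(A.shape[0]) - alpha * A
    term = np.asarray(b, float).copy()
    x = np.zeros_like(term)
    for _ in range(degree + 1):
        x = x + term
        term = M @ term
    return alpha * x

Each added degree costs one extra matrix-vector product, which is the square (rather than cubic) complexity scaling discussed above.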
Tiyip, Tashpolat; Ding, Jianli; Zhang, Dong; Liu, Wei; Wang, Fei; Tashpolat, Nigara
2017-01-01
Effective pretreatment of spectral reflectance is vital to model accuracy in soil parameter estimation. However, the classic integer derivative has some disadvantages, including spectral information loss and the introduction of high-frequency noise. In this paper, the fractional order derivative algorithm was applied to the pretreatment and partial least squares regression (PLSR) was used to assess the clay content of desert soils. Overall, 103 soil samples were collected from the Ebinur Lake basin in the Xinjiang Uighur Autonomous Region of China, and used as data sets for calibration and validation. Following laboratory measurements of spectral reflectance and clay content, the raw spectral reflectance and absorbance data were treated using fractional derivative orders from 0.0 to 2.0 (order interval: 0.2). The ratio of performance to deviation (RPD), determination coefficients of calibration (Rc2), root mean square errors of calibration (RMSEC), determination coefficients of prediction (Rp2), and root mean square errors of prediction (RMSEP) were applied to assess the performance of the predicting models. The results showed that models built on fractional derivative orders performed better than those using the classic integer derivative. Comparison of the predictive effects of 22 models for estimating clay content, calibrated by PLSR, showed that those models based on the 1.8 fractional derivative order of spectral reflectance (Rc2 = 0.907, RMSEC = 0.425%, Rp2 = 0.916, RMSEP = 0.364%, and RPD = 2.484 ≥ 2.000) and absorbance (Rc2 = 0.888, RMSEC = 0.446%, Rp2 = 0.918, RMSEP = 0.383% and RPD = 2.511 ≥ 2.000) were most effective. Furthermore, they performed well in quantitative estimations of the clay content of soils in the study area. PMID:28934274
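A common numerical realization of a fractional order derivative of a uniformly sampled spectrum is the Grünwald-Letnikov form; the sketch below implements that textbook definition (step h and order alpha are user choices) and is not the authors' code.

import numpy as np

def gl_fractional_derivative(y, alpha, h=1.0):
    # Grunwald-Letnikov derivative of order alpha: D^a y_i ~ h^-a * sum_k w_k * y_{i-k},
    # with weights built recursively as w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k.
    y = np.asarray(y, float)
    n = y.size
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    d = np.array([np.dot(w[:i + 1], y[i::-1]) for i in range(n)])
    return d / h ** alpha

With alpha = 1.0 the weights reduce to (1, -1, 0, ...), i.e. the ordinary first difference, while intermediate orders such as 1.8 interpolate between the first and second derivatives.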
Medina, K.D.; Tasker, Gary D.
1987-01-01
This report documents the results of an analysis of the surface-water data network in Kansas for its effectiveness in providing regional streamflow information. The network was analyzed using generalized least squares regression. The correlation and time-sampling error of the streamflow characteristic are considered in the generalized least squares method. Unregulated medium-, low-, and high-flow characteristics were selected to be representative of the regional information that can be obtained from streamflow-gaging-station records for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and (or) adding new stations. The analysis used streamflow records for all currently operated stations that were not affected by regulation and for discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The State was divided into three network areas, western, northeastern, and southeastern Kansas, and analysis was made for the three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean-square error for each cost level could be obtained by adding new stations and discontinuing some current network stations. Large reductions in sampling mean-square error for low-flow information could be achieved in all three network areas, the reduction in western Kansas being the most dramatic. The addition of new stations would be most beneficial for mean-flow information in western Kansas. The reduction of sampling mean-square error for high-flow information would benefit most from the addition of new stations in western Kansas. Southeastern Kansas showed the smallest error reduction in high-flow information. A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas.
1991-09-01
matrix, the Regression Sum of Squares (SSR) and Error Sum of Squares (SSE) are also displayed as a percentage of the Total Sum of Squares (SSTO) ... vector when the student compares the SSR to the SSE. In addition to the plot, the actual values of SSR, SSE, and SSTO are also provided. Figure 3 gives the ... [Figure: projection of Y onto the estimation space and the error space, with SSR, SSE, and SSTO (SSTO = SSR + SSE) labeled.]
Mitigating voltage lead errors of an AC Josephson voltage standard by impedance matching
NASA Astrophysics Data System (ADS)
Zhao, Dongsheng; van den Brom, Helko E.; Houtzager, Ernest
2017-09-01
A pulse-driven AC Josephson voltage standard (ACJVS) generates calculable AC voltage signals at low temperatures, whereas measurements are performed with a device under test (DUT) at room temperature. The voltage leads cause the output voltage to show deviations that scale with the frequency squared. Error correction mechanisms investigated so far allow the ACJVS to be operational for frequencies up to 100 kHz. In this paper, calculations are presented to deal with these errors in terms of reflected waves. Impedance matching at the source side of the system, which is loaded with a high-impedance DUT, is proposed as an accurate method to mitigate these errors for frequencies up to 1 MHz. Simulations show that the influence of non-ideal component characteristics, such as the tolerance of the matching resistor, the capacitance of the load input impedance, losses in the voltage leads, non-homogeneity in the voltage leads, a non-ideal on-chip connection and inductors between the Josephson junction array and the voltage leads, can be corrected for using the proposed procedures. The results show that an expanded uncertainty of 12 parts in 10⁶ (k = 2) at 1 MHz and 0.5 part in 10⁶ (k = 2) at 100 kHz is within reach.
Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions
NASA Astrophysics Data System (ADS)
McCullough, Christopher; Bettadpur, Srinivas
2015-04-01
In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
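The flavour of these effects can be seen on a toy ill-conditioned least-squares problem, comparing an SVD-based solve with the normal equations in single and double precision; this is purely a numerical illustration, unrelated to the actual GRACE processing chain.

import numpy as np

rng = np.random.default_rng(1)
A = np.vander(np.linspace(0.0, 1.0, 50), 8)   # moderately ill-conditioned design matrix
x_true = rng.normal(size=8)
b = A @ x_true

for dtype in (np.float32, np.float64):
    Ad, bd = A.astype(dtype), b.astype(dtype)
    x_svd = np.linalg.lstsq(Ad, bd, rcond=None)[0]   # SVD-based least-squares solve
    x_ne = np.linalg.solve(Ad.T @ Ad, Ad.T @ bd)     # normal equations square the condition number
    print(dtype.__name__, np.linalg.norm(x_svd - x_true), np.linalg.norm(x_ne - x_true))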
NASA Astrophysics Data System (ADS)
Grigorie, Teodor Lucian; Corcau, Ileana Jenica; Tudosie, Alexandru Nicolae
2017-06-01
The paper presents a way to obtain an intelligent miniaturized three-axial accelerometric sensor, based on the on-line estimation and compensation of the sensor errors generated by environmental temperature variation. Because this error is a strongly nonlinear function of both the environmental temperature and the acceleration exciting the sensor, its correction cannot be performed off-line and requires an additional temperature sensor. The proposed identification methodology for the error model is based on the least squares method, which processes off-line the numerical values obtained from experimental testing of the accelerometer for different values of acceleration applied to its axes of sensitivity and for different operating temperatures. A final analysis of the error level after compensation highlights the best variant of the matrix in the error model. The paper presents the results of the experimental testing of the accelerometer on all three sensitivity axes, the identification of the error models on each axis using the least squares method, and the validation of the obtained models with experimental values. For all three detection channels, a reduction of almost two orders of magnitude in the maximum absolute acceleration error due to environmental temperature variation was obtained.
NASA Astrophysics Data System (ADS)
Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong
2018-06-01
An accurate algorithm combining Gram-Schmidt orthonormalization and least-squares ellipse fitting is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. By performing Gram-Schmidt orthonormalization on the pre-processed interferograms, the phase shift error is corrected and a general ellipse form is derived. Then the background intensity error and the remaining error can be compensated by the least-squares ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm can cope with two or three interferograms with environmental disturbance, low fringe number or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
Estimating Glenoid Width for Instability-Related Bone Loss: A CT Evaluation of an MRI Formula.
Giles, Joshua W; Owens, Brett D; Athwal, George S
2015-07-01
Determining the magnitude of glenoid bone loss in cases of shoulder instability is an important step in selecting the optimal reconstructive procedure. Recently, a formula has been proposed that estimates native glenoid width based on magnetic resonance imaging (MRI) measurements of height (1/3 × glenoid height + 15 mm). This technique, however, has not been validated for use with computed tomography (CT), which is often the preferred imaging modality to assess bone deficiencies. The purpose of this project was 2-fold: (1) to determine if the MRI-based formula that predicts glenoid width from height is valid with CT and (2) to determine if a more accurate regression can be resolved for use specifically with CT data. Descriptive laboratory study. Ninety normal shoulder CT scans with preserved osseous anatomy were drawn from an existing database and analyzed. Measurements of glenoid height and width were performed by 2 observers on reconstructed 3-dimensional models. After assessment of reliability, the data were correlated, and regression models were created for male and female shoulders. The accuracy of the MRI-based model's predictions was then compared with that of the CT-based models. Intra- and interrater reliabilities were good to excellent for height and width, with intraclass correlation coefficients of 0.765 to 0.992. The height and width values had a strong correlation of 0.900 (P < .001). Regression analyses for male and female shoulders produced CT-specific formulas: for men, glenoid width = 2/3 × glenoid height + 5 mm; for women, glenoid width = 2/3 × glenoid height + 3 mm. Comparison of predictions from the MRI- and CT-specific formulas demonstrated good agreement (intraclass correlation coefficient = 0.818). The CT-specific formulas produced a root mean squared error of 1.2 mm, whereas application of the MRI-specific formula to CT images resulted in a root mean squared error of 1.5 mm. Use of the MRI-based formula on CT scans to predict glenoid width produced estimates that were nearly as accurate as the CT-specific formulas. The CT-specific formulas, however, are more accurate at predicting native glenoid width when applied to CT data. Imaging-specific (CT and MRI) formulas have been developed to estimate glenoid bone loss in patients with instability. The CT-specific formula can accurately predict native glenoid width, having an error of only 2.2% of average glenoid width. © 2015 The Author(s).
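The reported regressions are simple enough to apply directly; the helper below just encodes the formulas quoted in the abstract (heights and widths in millimetres), with the function name being mine.

def estimated_glenoid_width_mm(height_mm, sex=None):
    # CT-specific formulas for men/women; the MRI-based formula otherwise.
    if sex == "male":
        return (2.0 / 3.0) * height_mm + 5.0
    if sex == "female":
        return (2.0 / 3.0) * height_mm + 3.0
    return (1.0 / 3.0) * height_mm + 15.0

print(estimated_glenoid_width_mm(35.0, "male"))   # about 28.3 mm for a 35 mm high glenoid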
Smith, Erik A.; Kiesling, Richard L.; Ziegeweid, Jeffrey R.
2017-07-20
Fish habitat can degrade in many lakes due to summer blue-green algal blooms. Predictive models are needed to better manage and mitigate loss of fish habitat due to these changes. The U.S. Geological Survey (USGS), in cooperation with the Minnesota Department of Natural Resources, developed predictive water-quality models for two agricultural land-use dominated lakes in Minnesota—Madison Lake and Pearl Lake, which are part of Minnesota’s sentinel lakes monitoring program—to assess algal community dynamics, water quality, and fish habitat suitability of these two lakes under recent (2014) meteorological conditions. The interactions of basin processes with these two lakes, through the delivery of nutrient loads, were simulated using CE-QUAL-W2, a carbon-based, laterally averaged, two-dimensional water-quality model that predicts distribution of temperature and oxygen from interactions between nutrient cycling, primary production, and trophic dynamics. The CE-QUAL-W2 models successfully predicted water temperature and dissolved oxygen on the basis of the two metrics of mean absolute error and root mean square error. For Madison Lake, the mean absolute error and root mean square error were 0.53 and 0.68 degree Celsius, respectively, for the vertical temperature profile comparisons; for Pearl Lake, the mean absolute error and root mean square error were 0.71 and 0.95 degree Celsius, respectively, for the vertical temperature profile comparisons. Temperature and dissolved oxygen were key metrics for calibration targets. These calibrated lake models also simulated algal community dynamics and water quality. The model simulations presented potential explanations for persistently large total phosphorus concentrations in Madison Lake, key differences in nutrient concentrations between these lakes, and summer blue-green algal bloom persistence. Fish habitat suitability simulations for cool-water and warm-water fish indicated that, in general, both lakes contained a large proportion of good-growth habitat and a sustained period of optimal growth habitat in the summer, without any periods of lethal oxythermal habitat. For Madison and Pearl Lakes, examples of important cool-water fish, particularly game fish, include northern pike (Esox lucius), walleye (Sander vitreus), and black crappie (Pomoxis nigromaculatus); examples of important warm-water fish include bluegill (Lepomis macrochirus), largemouth bass (Micropterus salmoides), and smallmouth bass (Micropterus dolomieu). Sensitivity analyses were completed to understand lake response effects through the use of controlled departures on certain calibrated model parameters and input nutrient loads. These sensitivity analyses also operated as land-use change scenarios because alterations in agricultural practices, for example, could potentially increase or decrease nutrient loads.
Intrinsic Raman spectroscopy for quantitative biological spectroscopy Part II
Bechtel, Kate L.; Shih, Wei-Chuan; Feld, Michael S.
2009-01-01
We demonstrate the effectiveness of intrinsic Raman spectroscopy (IRS) at reducing errors caused by absorption and scattering. Physical tissue models, solutions of varying absorption and scattering coefficients with known concentrations of Raman scatterers, are studied. We show significant improvement in prediction error by implementing IRS to predict concentrations of Raman scatterers using both ordinary least squares regression (OLS) and partial least squares regression (PLS). In particular, we show that IRS provides a robust calibration model that does not increase in error when applied to samples with optical properties outside the range of calibration. PMID:18711512
Analytical and experimental design and analysis of an optimal processor for image registration
NASA Technical Reports Server (NTRS)
Mcgillem, C. D. (Principal Investigator); Svedlow, M.; Anuta, P. E.
1976-01-01
The author has identified the following significant results. A quantitative measure of the registration processor accuracy in terms of the variance of the registration error was derived. With the appropriate assumptions, the variance was shown to be inversely proportional to the square of the effective bandwidth times the signal to noise ratio. The final expressions were presented to emphasize both the form and simplicity of their representation. In the situation where relative spatial distortions exist between images to be registered, expressions were derived for estimating the loss in output signal to noise ratio due to these spatial distortions. These results are in terms of a reduction factor.
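In symbols, the reported result for the registration error variance, with B_e the effective bandwidth and SNR the signal-to-noise ratio, reads

\[ \sigma_{\varepsilon}^{2} \;\propto\; \frac{1}{B_{e}^{2}\,\mathrm{SNR}}, \]

so doubling the effective bandwidth reduces the variance by a factor of four, whereas doubling the signal-to-noise ratio reduces it by a factor of two.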
Phase-front measurements of an injection-locked AlGaAs laser-diode array
NASA Technical Reports Server (NTRS)
Cornwell, Donald M., Jr.; Rall, Jonathan A. R.; Abshire, James B.
1989-01-01
The phase-front quality of the primary spatial lobe emitted from an injection-locked gain-guided AlGaAs laser-diode array is measured by using an equal-path, phase-shifting Mach-Zehnder interferometer. Root-mean-square phase errors of 0.037 ± 0.003 wave are measured for the single spatial lobe, which contained 240-mW cw output power in a single longitudinal mode. This phase-front quality corresponds to a Strehl ratio of S = 0.947, which results in a 0.23-dB power loss from the single lobe's ideal diffraction-limited power. These values are comparable with those measured for single-stripe index-guided AlGaAs lasers.
NASA Technical Reports Server (NTRS)
Simon, M.; Tkacenko, A.
2006-01-01
In a previous publication [1], an iterative closed-loop carrier synchronization scheme for binary phase-shift keyed (BPSK) modulation was proposed that was based on feeding back data decisions to the input of the loop, the purpose being to remove the modulation prior to carrier synchronization as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. The idea there was that, with sufficient independence between the received data and the decisions on it that are fed back (as would occur in an error-correction coding environment with sufficient decoding delay), a pure tone in the presence of noise would ultimately be produced (after sufficient iteration and low enough error probability) and thus could be tracked without any squaring loss. This article demonstrates that, with some modification, the same idea of iterative information reduction through decision feedback can be applied to quadrature phase-shift keyed (QPSK) modulation, something that was mentioned in the previous publication but never pursued.
Error Analyses of the North Alabama Lightning Mapping Array (LMA)
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA-MSFC and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
Liu, Zhijian; Liu, Kejun; Li, Hao; Zhang, Xinyu; Jin, Guangya; Cheng, Kewei
2015-01-01
Measurements of heat collection rate and heat loss coefficient are crucial for the evaluation of in-service water-in-glass evacuated tube solar water heaters. However, conventional measurement requires expensive detection devices and undergoes a series of complicated procedures. To simplify the measurement and reduce the cost, software based on artificial neural networks for measuring the heat collection rate and heat loss coefficient of water-in-glass evacuated tube solar water heaters was developed. Using multilayer feed-forward neural networks with a back-propagation algorithm, we developed and tested our program on the basis of 915 measured samples of water-in-glass evacuated tube solar water heaters. This artificial neural networks-based software program automatically obtained accurate heat collection rate and heat loss coefficient using parameters easily acquired with "portable test instruments", including tube length, number of tubes, tube center distance, heat water mass in tank, collector area, angle between tubes and ground and final temperature. Our results show that this software (on both personal computer and Android platforms) is efficient and convenient to predict the heat collection rate and heat loss coefficient due to its low root mean square errors in prediction. The software can now be downloaded from http://t.cn/RLPKF08. PMID:26624613
Synthesis and optimization of four bar mechanism with six design parameters
NASA Astrophysics Data System (ADS)
Jaiswal, Ankur; Jawale, H. P.
2018-04-01
Function generation is the synthesis of a mechanism for a specific task; it becomes especially complex when more than five precision points of the coupler are required, and it is then prone to large structural error. The methodology for arriving at a better-precision solution is to use an optimization technique. The work presented herein considers methods for optimizing the structural error of a closed kinematic chain with a single degree of freedom, generating functions such as log(x), e^x, tan(x) and sin(x) with five precision points. The Freudenstein-Chebyshev equation is used to develop the five-point synthesis of the mechanism. An extended formulation is proposed, and results are obtained to verify existing results in the literature. Optimization of the structural error is carried out using a least-squares approach. A comparative structural error analysis is presented for the error optimized through the least-squares method and through the extended Freudenstein-Chebyshev method.
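Because the Freudenstein equation is linear in its three link-ratio parameters, synthesis with more precision points than unknowns reduces to an overdetermined linear system solved in the least-squares sense. The sketch below illustrates only that step; the sign convention of the Freudenstein form varies between texts, so this is a generic illustration rather than the authors' formulation.

import numpy as np

def freudenstein_lsq(theta_in, theta_out):
    # Least-squares fit of K1, K2, K3 from paired input/output angles (radians),
    # using the form K1*cos(theta_out) - K2*cos(theta_in) + K3 = cos(theta_in - theta_out).
    t2, t4 = np.asarray(theta_in, float), np.asarray(theta_out, float)
    A = np.column_stack([np.cos(t4), -np.cos(t2), np.ones_like(t2)])
    rhs = np.cos(t2 - t4)
    K, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    residual = A @ K - rhs      # structural error at the precision points
    return K, residual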
40 CFR 63.5340 - How do I determine the allowable HAP loss?
Code of Federal Regulations, 2010 CFR
2010-07-01
... appropriate HAP emission limit, expressed in pounds of HAP loss per 1,000 square feet of leather processed... the annual total of leather processed in 1,000's of square feet for each product process operation in... of square feet of leather processed in the previous 12 months in product process operation “i”. HAP...
40 CFR 63.5340 - How do I determine the allowable HAP loss?
Code of Federal Regulations, 2011 CFR
2011-07-01
... appropriate HAP emission limit, expressed in pounds of HAP loss per 1,000 square feet of leather processed... the annual total of leather processed in 1,000's of square feet for each product process operation in... of square feet of leather processed in the previous 12 months in product process operation “i”. HAP...
Theoretical and experimental studies of error in square-law detector circuits
NASA Technical Reports Server (NTRS)
Stanley, W. D.; Hearn, C. P.; Williams, J. B.
1984-01-01
Square-law detector circuits were investigated to determine errors relative to the ideal input/output characteristic function. The nonlinear circuit response is analyzed by a power series expansion containing terms through the fourth degree, from which the significant deviation from square law can be predicted. Both fixed bias current and flexible bias current configurations are considered. The latter case corresponds to the situation where the mean current can change with the application of a signal. Experimental investigations of the circuit arrangements are described. Agreement between the analytical models and the experimental results is established. Factors that contribute to differences under certain conditions are outlined.
NASA Astrophysics Data System (ADS)
Zhou, Y.; Zhang, X.; Xiao, W.
2018-04-01
As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. First, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. The sifter algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method does not need additional equipment or devices, can continuously update the calibration parameters, and compensates the geomagnetic sensor error better than the two-step estimation method.
Peelle's pertinent puzzle using the Monte Carlo technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawano, Toshihiko; Talou, Patrick; Burr, Thomas
2009-01-01
We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form in order to assess the impact of the assumed distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and if the error is proportional to the measured values. The least squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
Error-related negativities elicited by monetary loss and cues that predict loss.
Dunning, Jonathan P; Hajcak, Greg
2007-11-19
Event-related potential studies have reported error-related negativity following both error commission and feedback indicating errors or monetary loss. The present study examined whether error-related negativities could be elicited by a predictive cue presented prior to both the decision and subsequent feedback in a gambling task. Participants were presented with a cue that indicated the probability of reward on the upcoming trial (0, 50, and 100%). Results showed a negative deflection in the event-related potential in response to loss cues compared with win cues; this waveform shared a similar latency and morphology with the traditional feedback error-related negativity.
USDA-ARS?s Scientific Manuscript database
Retention of bolls and squares (referred to as fruit retention), boll damage, and resulting cotton lint and seed weight loss were assessed when two (2010) and three (2011) age classes of sympodial fruiting branches with different ages of squares and bolls were exposed to verde plant bug, Creontiade...
40 CFR 63.5340 - How do I determine the allowable HAP loss?
Code of Federal Regulations, 2013 CFR
2013-07-01
... select the appropriate HAP emission limit, expressed in pounds of HAP loss per 1,000 square feet of... months. Next, determine the annual total of leather processed in 1,000's of square feet for each product... Total of Leather Processed = 1,000's of square feet of leather processed in the previous 12 months in...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Product Process Operation HAP Emission Limit (pounds of HAP loss per 1,000 square feet of leather processed) Existingsources Newsources 1. Upholstery Leather (≥4 grams add-on/square feet) 2.6 0.5 2. Upholstery Leather (square feet) 6.8 2.5 3. Water-resistant (≥5,000 Maeser Flexes)/Specialty...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Product Process Operation HAP Emission Limit (pounds of HAP loss per 1,000 square feet of leather processed) Existingsources Newsources 1. Upholstery Leather (≥4 grams add-on/square feet) 2.6 0.5 2. Upholstery Leather (square feet) 6.8 2.5 3. Water-resistant (≥5,000 Maeser Flexes)/Specialty...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Product Process Operation HAP Emission Limit (pounds of HAP loss per 1,000 square feet of leather processed) Existingsources Newsources 1. Upholstery Leather (≥4 grams add-on/square feet) 2.6 0.5 2. Upholstery Leather (square feet) 6.8 2.5 3. Water-resistant (≥5,000 Maeser Flexes)/Specialty...
40 CFR 63.5340 - How do I determine the allowable HAP loss?
Code of Federal Regulations, 2014 CFR
2014-07-01
... select the appropriate HAP emission limit, expressed in pounds of HAP loss per 1,000 square feet of... months. Next, determine the annual total of leather processed in 1,000's of square feet for each product... Total of Leather Processed = 1,000's of square feet of leather processed in the previous 12 months in...
Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy
Cohen, E. A. K.; Ober, R. J.
2014-01-01
We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise; a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variable problem and linear least squares is inappropriate; the correct method being generalized least squares. To allow for point dependent errors the equivalence of a generalized maximum likelihood and heteroscedastic generalized least squares model is achieved allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE) believed to be useful, especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distribution for the TRE and LRE are themselves Gaussian and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573
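A hedged sketch of the weighting idea for the special case noted above, in which each control point's covariance is a scalar multiple of the identity: a 2-D affine transform is fitted by weighted least squares with inverse-variance weights (the data, noise levels and helper names are illustrative, not the paper's estimator):

# Weighted least-squares fit of y ~= A x + t with point-dependent localization variance
import numpy as np

def affine_wls(x, y, sigma2):
    """x, y: (n, 2) control-point arrays; sigma2: (n,) localization variances."""
    n = x.shape[0]
    X = np.hstack([x, np.ones((n, 1))])          # design matrix rows [x1, x2, 1]
    W = np.diag(1.0 / sigma2)                    # weights = inverse variances
    B = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return B[:2].T, B[2]                         # affine matrix A, translation t

rng = np.random.default_rng(0)
x = rng.uniform(0, 100, size=(20, 2))
A_true, t_true = np.array([[1.01, 0.02], [-0.02, 0.99]]), np.array([5.0, -3.0])
sigma2 = rng.uniform(0.01, 1.0, size=20)         # heteroscedastic CP localization variance
y = x @ A_true.T + t_true + rng.normal(0, np.sqrt(sigma2)[:, None], size=(20, 2))
A_hat, t_hat = affine_wls(x, y, sigma2)
print(A_hat, t_hat)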
A Simple Formula to Calculate Shallow-Water Transmission Loss by Means of a Least-Squares Surface Fit Technique
Hastrup, Ole F.; Akal, Tuncay
1980-09-01
SACLANT ASW Research Centre Memorandum SM-139 (unclassified). This memorandum has been prepared within the...
A Canonical Ensemble Correlation Prediction Model for Seasonal Precipitation Anomaly
NASA Technical Reports Server (NTRS)
Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Guilong
2001-01-01
This report describes an optimal ensemble forecasting model for seasonal precipitation and its error estimation. Each individual forecast is based on the canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is made also using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. This new CCA model includes the following features: (1) the use of area-factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States precipitation field. The predictor is the sea surface temperature.
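The dependence of the optimal weights on each member's mean square error can be illustrated with a simple inverse-MSE weighting rule; this is only a sketch of the idea under an assumption of independent member errors, not the report's EOF-space derivation:

# Inverse-MSE ensemble weighting (illustrative values only)
import numpy as np

def inverse_mse_weights(mse):
    w = 1.0 / np.asarray(mse, dtype=float)
    return w / w.sum()

forecasts = np.array([[1.2, 0.8, -0.3],      # member 1 (e.g., precipitation anomalies)
                      [1.0, 0.5, -0.1],      # member 2
                      [1.6, 1.1, -0.6]])     # member 3
mse = [0.4, 0.9, 0.6]                        # estimated mean square error of each member
w = inverse_mse_weights(mse)
ensemble = w @ forecasts                     # weighted ensemble forecast
print(w, ensemble)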
Optimum nonparametric estimation of population density based on ordered distances
Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.
1982-01-01
The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.
Highly Compact Circulators in Square-Lattice Photonic Crystal Waveguides
Jin, Xin; Ouyang, Zhengbiao; Wang, Qiong; Lin, Mi; Wen, Guohua; Wang, Jingjing
2014-01-01
We propose, demonstrate and investigate highly compact circulators with ultra-low insertion loss in square-lattice square-rod photonic-crystal waveguides. Only a single magneto-optical square rod is required to be inserted into the cross center of waveguides, making the structure very compact and ultra-efficient. The square rods around the center defect rod are replaced by several right-angled-triangle rods, reducing the insertion loss further and promoting the isolations as well. By choosing a linear-dispersion region and considering the mode patterns in the square magneto-optical rod, the operating mechanism of the circulator is analyzed. By applying the finite-element method together with the Nelder-Mead optimization method, an extremely low insertion loss of 0.02 dB for the transmitted wave and ultra-high isolation of 46 dB∼48 dB for the isolated port are obtained. The idea presented can be applied to build circulators in different wavebands, e.g., microwave or Tera-Hertz. PMID:25415417
Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model
NASA Astrophysics Data System (ADS)
Yu, Lean; Wang, Shouyang; Lai, K. K.
Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling and more weights should be given to those classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weights are given to the least squares classification errors of important classes than to the least squares classification errors of unimportant classes, while keeping the regularization term in its original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.
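The class-weighting idea can be illustrated with a plain linear, class-cost-weighted regularized least squares classifier; this is an analogue for illustration only, not the LSSVC dual formulation used in the paper, and all data and cost values are assumptions:

# Class-cost-weighted regularized least squares classification (illustrative analogue)
import numpy as np

def weighted_ls_classifier(X, y, class_cost, lam=1e-2):
    """y in {-1, +1}; class_cost maps label -> squared-error weight."""
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])                   # append bias column
    C = np.diag([class_cost[int(t)] for t in y])
    reg = lam * np.eye(d + 1)
    reg[-1, -1] = 0.0                                       # do not penalize the bias term
    wb = np.linalg.solve(Xb.T @ C @ Xb + reg, Xb.T @ C @ y)
    return wb[:-1], wb[-1]

rng = np.random.default_rng(8)
X_good = rng.normal([0.0, 0.0], 1.0, (200, 2))              # "good" applicants
X_bad = rng.normal([2.0, 2.0], 1.0, (40, 2))                # "bad" (rarer, costlier) applicants
X = np.vstack([X_good, X_bad])
y = np.r_[np.full(200, -1.0), np.full(40, 1.0)]
w, b = weighted_ls_classifier(X, y, class_cost={-1: 1.0, 1: 5.0})
pred = np.sign(X @ w + b)
print("accuracy on the costly class:", (pred[y == 1] == 1).mean())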
Error analysis on squareness of multi-sensor integrated CMM for the multistep registration method
NASA Astrophysics Data System (ADS)
Zhao, Yan; Wang, Yiwen; Ye, Xiuling; Wang, Zhong; Fu, Luhua
2018-01-01
The multistep registration (MSR) method in [1] registers two different classes of sensors deployed on the z-arm of a CMM (coordinate measuring machine): a video camera and a tactile probe sensor. In general, it is difficult to obtain a very precise registration result with a single common standard; instead, the method measures two different standards, fixed on a steel plate, with a constant distance between them. Although many factors have been considered, such as the measuring ability of the sensors, the uncertainty of the machine and the number of data pairs, there has been no exact analysis of the squareness between the x-axis and the y-axis in the xy plane. For this reason, an error analysis of the squareness of the multi-sensor integrated CMM is made here to examine the validity of the MSR method. Synthetic experiments on the squareness in the xy plane for the simplified MSR with an inclination rotation are simulated and lead to a regular result. Experiments have been carried out with the multi-standard device also designed in [1], and inspections of the xy plane have been performed with a laser interferometer. The final results conform to the simulations, and the squareness errors obtained with the MSR method are similar to the interferometer results. In other words, the MSR method can also be used to verify the squareness of a CMM.
Random errors in interferometry with the least-squares method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noises are present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source and between random error and the amplitude of the interference fringe.
ERIC Educational Resources Information Center
Wilson, Celia M.
2010-01-01
Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability.…
Can you trust the parametric standard errors in nonlinear least squares? Yes, with provisos.
Tellinghuisen, Joel
2018-04-01
Questions about the reliability of parametric standard errors (SEs) from nonlinear least squares (LS) algorithms have led to a general mistrust of these precision estimators that is often unwarranted. The importance of non-Gaussian parameter distributions is illustrated by converting linear models to nonlinear by substituting e^a, ln a, and 1/a for a linear parameter a. Monte Carlo (MC) simulations characterize parameter distributions in more complex cases, including when data have varying uncertainty and should be weighted, but weights are neglected. This situation leads to loss of precision and erroneous parametric SEs, as is illustrated for the Lineweaver-Burk analysis of enzyme kinetics data and the analysis of isothermal titration calorimetry data. Non-Gaussian parameter distributions are generally asymmetric and biased. However, when the parametric SE is <10% of the magnitude of the parameter, both the bias and the asymmetry can usually be ignored. Sometimes nonlinear estimators can be redefined to give more normal distributions and better convergence properties. Variable data uncertainty, or heteroscedasticity, can sometimes be handled by data transforms but more generally requires weighted LS, which in turn requires knowledge of the data variance. Parametric SEs are rigorously correct in linear LS under the usual assumptions, and are a trustworthy approximation in nonlinear LS provided they are sufficiently small - a condition favored by the abundant, precise data routinely collected in many modern instrumental methods. Copyright © 2018 Elsevier B.V. All rights reserved.
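A hedged Monte Carlo sketch in the spirit of the study: the linear parameter A is estimated through the nonlinear form A = exp(a), and the parametric SE reported by the fit is compared with the Monte Carlo spread of a (sample sizes, noise level and starting value are assumptions):

# Monte Carlo characterization of a nonlinear LS parameter distribution
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
x = np.linspace(1, 10, 20)
A_true, sigma = 2.0, 0.2
model = lambda x, a: np.exp(a) * x          # A re-parameterized as exp(a)

a_hat, se_hat = [], []
for _ in range(2000):
    y = A_true * x + rng.normal(0, sigma, x.size)
    popt, pcov = curve_fit(model, x, y, p0=[0.5])
    a_hat.append(popt[0])
    se_hat.append(np.sqrt(pcov[0, 0]))

a_hat = np.array(a_hat)
print("MC std of a:", a_hat.std(ddof=1))
print("mean parametric SE:", np.mean(se_hat))
print("rough asymmetry (skew):", ((a_hat - a_hat.mean()) ** 3).mean() / a_hat.std() ** 3)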
Application of Ruze Equation for Inflatable Aperture Antennas
NASA Technical Reports Server (NTRS)
Welch, Bryan W.
2008-01-01
Inflatable aperture reflector antennas are an emerging technology that NASA is investigating for potential uses in science and exploration missions. As inflatable aperture antennas have not been proven fully qualified for space missions, they must be characterized properly so that the behavior of the antennas can be known in advance. To properly characterize the inflatable aperture antenna, testing must be performed in a relevant environment, such as a vacuum chamber. Since the capability of having a radiofrequency (RF) test facility inside a vacuum chamber did not exist at NASA Glenn Research Center, a different methodology had to be utilized. The proposal to test an inflatable aperture antenna in a vacuum chamber entailed performing a photogrammetry study of the antenna surface by using laser ranging measurements. A root-mean-square (rms) error term was derived from the photogrammetry study to calculate the antenna surface loss as described by the Ruze equation. However, initial testing showed that problems existed in using the Ruze equation to calculate the loss due to errors on the antenna surface. This study utilized RF measurements obtained in a near-field antenna range and photogrammetry data taken from a laser range scanner to compare the expected performance of the test antenna (via the Ruze equation) with the actual RF patterns and directivity measurements. Results showed that the Ruze equation overstated the degradation in the directivity calculation. Therefore, when the photogrammetry study is performed on the test antennas in the vacuum chamber, a more complex equation must be used in light of the fact that the Ruze theory overstates the loss in directivity for inflatable aperture reflector antennas.
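For reference, the Ruze relation mentioned above can be written as a one-line calculation: the efficiency factor is exp(-(4*pi*eps/lambda)^2) for rms surface error eps at wavelength lambda, and the corresponding directivity loss in dB follows directly (the numerical example is illustrative only):

# Ruze equation: directivity loss from rms surface error
import numpy as np

def ruze_loss_db(eps_rms, wavelength):
    eta = np.exp(-(4.0 * np.pi * eps_rms / wavelength) ** 2)   # surface-error efficiency
    return -10.0 * np.log10(eta)                               # positive dB of directivity loss

# e.g., 1 mm rms surface error at 8.4 GHz (lambda ~ 35.7 mm); illustrative numbers only
print(ruze_loss_db(1.0e-3, 3.0e8 / 8.4e9))                     # ~0.5 dB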
Robust radio interferometric calibration using the t-distribution
NASA Astrophysics Data System (ADS)
Kazemi, S.; Yatawatta, S.
2013-10-01
A major stage of radio interferometric data processing is calibration or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration such as the loss in flux or coherence, and the appearance of spurious sources, could be attributed to the deviations of the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
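A simplified sketch of the robustness mechanism, assuming a linear model and plain iteratively reweighted least squares with the Student's t EM weight (nu + 1)/(nu + (r/s)^2); the paper itself uses an expectation-conditional-maximization variant with Levenberg-Marquardt on a nonlinear calibration model:

# Iteratively reweighted least squares with Student's t weights (illustration only)
import numpy as np

def t_irls(A, y, nu=3.0, n_iter=30):
    x = np.linalg.lstsq(A, y, rcond=None)[0]           # Gaussian (ordinary LS) start
    for _ in range(n_iter):
        r = y - A @ x
        s2 = np.median(r ** 2) / 0.4549                # robust scale estimate (assumption)
        w = (nu + 1.0) / (nu + r ** 2 / s2)            # t-distribution EM weights
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return x

rng = np.random.default_rng(2)
A = np.column_stack([np.ones(200), rng.uniform(-1, 1, 200)])
y = A @ np.array([1.0, 2.0]) + 0.1 * rng.standard_t(df=2, size=200)   # heavy-tailed noise
print(t_irls(A, y))                                    # close to [1.0, 2.0] despite outliers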
Medina, K.D.; Tasker, Gary D.
1985-01-01
The surface water data network in Kansas was analyzed using generalized least squares regression for its effectiveness in providing regional streamflow information. The correlation and time-sampling error of the streamflow characteristic are considered in the generalized least squares method. Unregulated medium-flow, low-flow and high-flow characteristics were selected to be representative of the regional information that can be obtained from streamflow gaging station records for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and/or adding new stations. The analysis used streamflow records for all currently operated stations that were not affected by regulation and discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The state was divided into three network areas, western, northeastern, and southeastern Kansas, and analysis was made for three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean square error for each cost level could be obtained by adding new stations and discontinuing some of the present network stations. Large reductions in sampling mean square error for low-flow information could be accomplished in all three network areas, with western Kansas having the most dramatic reduction. The addition of new stations would be most beneficial for medium-flow information in western Kansas, and to lesser degrees in the other two areas. The reduction of sampling mean square error for high-flow information would benefit most from the addition of new stations in western Kansas, and the effect diminishes to lesser degrees in the other two areas. Southeastern Kansas showed the smallest error reduction in high-flow information. A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas. (Author's abstract)
Bayesian-based localization of wireless capsule endoscope using received signal strength.
Nadimi, Esmaeil S; Blanes-Vidal, Victoria; Tarokh, Vahid; Johansen, Per Michael
2014-01-01
In wireless body area sensor networking (WBASN) applications such as gastrointestinal (GI) tract monitoring using wireless video capsule endoscopy (WCE), the performance of out-of-body wireless link propagating through different body media (i.e. blood, fat, muscle and bone) is still under investigation. Most of the localization algorithms are vulnerable to the variations of path-loss coefficient resulting in unreliable location estimation. In this paper, we propose a novel robust probabilistic Bayesian-based approach using received-signal-strength (RSS) measurements that accounts for Rayleigh fading, variable path-loss exponent and uncertainty in location information received from the neighboring nodes and anchors. The results of this study showed that the localization root mean square error of our Bayesian-based method was 1.6 mm which was very close to the optimum Cramer-Rao lower bound (CRLB) and significantly smaller than that of other existing localization approaches (i.e. classical MDS (64.2mm), dwMDS (32.2mm), MLE (36.3mm) and POCS (2.3mm)).
A Parametric k-Means Algorithm
Tarpey, Thaddeus
2007-01-01
Summary The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
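A minimal sketch of the parametric k-means idea for a univariate normal distribution: estimate the parameters by maximum likelihood, simulate a very large sample from the fitted distribution, and run ordinary k-means on it (data and sample sizes are illustrative assumptions):

# Parametric k-means: k-means on a large sample simulated from the fitted distribution
import numpy as np
from scipy.cluster.vq import kmeans

rng = np.random.default_rng(3)
data = rng.normal(loc=170.0, scale=8.0, size=500)          # e.g., observed measurements
mu_hat, sigma_hat = data.mean(), data.std(ddof=0)          # Gaussian MLEs
big_sample = rng.normal(mu_hat, sigma_hat, size=200_000)   # large simulated sample
centroids, _ = kmeans(big_sample.reshape(-1, 1), 3)        # estimate k = 3 principal points
print(np.sort(centroids.ravel()))
# For a normal distribution the 3 principal points are approximately mu and mu +/- 1.224*sigma.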
Planting data and wheat yield models. [Kansas, South Dakota, and U.S.S.R.
NASA Technical Reports Server (NTRS)
Feyerherm, A. M. (Principal Investigator)
1977-01-01
The author has identified the following significant results. A variable date starter model for spring wheat depending on temperature was more precise than a fixed date model. The same conclusions for fall-planted wheat were not reached. If the largest and smallest of eight temperatures were used to estimate daily maximum and minimum temperatures, respectively, a 1-4 F bias would be introduced into these extremes. For Kansas, a reduction of 0.5 bushels/acre in the root-mean-square-error between model and SRS yields was achieved by a sixfold increase (7 to 42) in the density of weather stations. An additional reduction of 0.3 b/A was achieved by incorporating losses due to rusts in the model.
Mishra, Vishal
2015-01-01
The interchange of the protons with the cell wall-bound calcium and magnesium ions at the interface of solution/bacterial cell surface in the biosorption system at various concentrations of protons has been studied in the present work. A mathematical model for establishing the correlation between concentration of protons and active sites was developed and optimized. The sporadic limited residence time reactor was used to titrate the calcium and magnesium ions at the individual data point. The accuracy of the proposed mathematical model was estimated using error functions such as nonlinear regression, adjusted nonlinear regression coefficient, the chi-square test, P-test and F-test. The values of the chi-square test (0.042-0.017), P-test (<0.001-0.04), sum of square errors (0.061-0.016), root mean square error (0.01-0.04) and F-test (2.22-19.92) reported in the present research indicated the suitability of the model over a wide range of proton concentrations. The zeta potential of the bacterium surface at various concentrations of protons was observed to validate the denaturation of active sites.
Simple Forest Canopy Thermal Exitance Model
NASA Technical Reports Server (NTRS)
Smith J. A.; Goltz, S. M.
1999-01-01
We describe a model to calculate brightness temperature and surface energy balance for a forest canopy system. The model is an extension of an earlier vegetation only model by inclusion of a simple soil layer. The root mean square error in brightness temperature for a dense forest canopy was 2.5 C. Surface energy balance predictions were also in good agreement. The corresponding root mean square errors for net radiation, latent, and sensible heat were 38.9, 30.7, and 41.4 W/sq m respectively.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Operation HAP Emission Limit (pounds of HAP loss per 1,000 square feet of leather processed) Existingsources Newsources 1. Upholstery Leather (≥4 grams add-on/square feet) 2.6 0.5 2. Upholstery Leather (square feet) 6.8 2.5 3. Water-resistant (≥5,000 Maeser Flexes)/Specialty Leather 5.6 4.9 4. Nonwater...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Operation HAP Emission Limit (pounds of HAP loss per 1,000 square feet of leather processed) Existingsources Newsources 1. Upholstery Leather (≥4 grams add-on/square feet) 2.6 0.5 2. Upholstery Leather (square feet) 6.8 2.5 3. Water-resistant (≥5,000 Maeser Flexes)/Specialty Leather 5.6 4.9 4. Nonwater...
Measuring Dispersion Effects of Factors in Factorial Experiments.
1988-01-01
The mean square error is MSE = SSE/(N - p), the sum of squares of pure error is SSPE = Σ_{i=1..n} Σ_{j=1..r} (y_ij - ȳ_i)², and the mean square of pure error is MSPE = SSPE/(n(r - 1)) ... the level of the factor in the ith run is 0. 3.1. First Measure. We have SSPE = Σ_{i=1..n} Σ_{j=1..r} δ_i (y_ij - ȳ_i)² + Σ_{i=1..n} Σ_{j=1..r} (1 - δ_i)(y_ij - ȳ_i)², where δ_i indicates the level of the factor in the ith run. The first component in SSPE corresponds to level 1 of the factor and has (Σ_{i=1..n} δ_i)(r - 1) degrees of freedom. The second component corresponds to ...
Error behaviors associated with loss of competency in Alzheimer's disease.
Marson, D C; Annis, S M; McInturff, B; Bartolucci, A; Harrell, L E
1999-12-10
To investigate qualitative behavioral changes associated with declining medical decision-making capacity (competency) in patients with AD. Qualitative measures can yield clinical information about functional changes in neurologic disease not available through quantitative measures. Normal older controls (n = 21) and patients with mild and moderate probable AD (n = 72) were compared using a standardized competency measure and neuropsychological measures. A system of 16 qualitative error scores representing conceptual domains of language, executive dysfunction, affective dysfunction, and compensatory responses was used to analyze errors produced on the competency measure. Patterns of errors were examined across groups. Relationships between error behaviors and competency performance were determined, and neurocognitive correlates of specific error behaviors were identified. AD patients demonstrated more miscomprehension, factual confusion, intrusions, incoherent responses, nonresponsive answers, loss of task, and delegation than controls. Errors in the executive domain (loss of task, nonresponsive answer, and loss of detachment) were key predictors of declining competency performance by AD patients. Neuropsychological analyses in the AD group generally confirmed the conceptual domain assignments of the qualitative scores. Loss of task, nonresponsive answers, and loss of detachment were key behavioral changes associated with declining competency of AD patients and with neurocognitive measures of executive dysfunction. These findings support the growing linkage between executive dysfunction and competency loss.
Bernard R. Parresol
1993-01-01
In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
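One common way to carry out this program, sketched here under assumed data and an assumed power-function variance model var(e_i) = sigma^2 * x_i^k (not necessarily the specification used in the paper), is to estimate the exponent from the OLS residuals and refit by weighted least squares:

# OLS fit, variance-model estimation, then weighted least squares
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(5, 50, 300)                                    # e.g., tree diameter
beta0, beta1, k_true = 2.0, 0.8, 1.5
y = beta0 + beta1 * x + rng.normal(0, 0.05 * x ** (k_true / 2))  # heteroscedastic errors

X = np.column_stack([np.ones_like(x), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_ols

# estimate the variance-model exponent k from log(residual^2) vs log(x)
Z = np.column_stack([np.ones_like(x), np.log(x)])
k_hat = np.linalg.lstsq(Z, np.log(resid ** 2), rcond=None)[0][1]

W = np.diag(1.0 / x ** k_hat)                                  # WLS weights
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print("OLS:", beta_ols, "WLS:", beta_wls, "k_hat:", k_hat)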
Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Nagchaudhuri, Abhijit
1998-01-01
This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrumental pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
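A minimal LMS sketch for the first step, identifying an unknown FIR response from input/output records with the standard update w <- w + mu * e * x; the plant, step size and tap count are assumptions, and the Filtered-X stage is not shown:

# Basic LMS system identification
import numpy as np

def lms_identify(x, d, n_taps=8, mu=0.02):
    """x: input signal; d: desired (measured) output; returns adapted FIR weights."""
    w = np.zeros(n_taps)
    xbuf = np.zeros(n_taps)
    for xn, dn in zip(x, d):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = xn
        e = dn - w @ xbuf            # a-priori error
        w = w + mu * e * xbuf        # LMS weight update
    return w

rng = np.random.default_rng(5)
h_true = np.array([0.5, -0.3, 0.2, 0.1])            # unknown plant (assumed)
x = rng.normal(size=5000)
d = np.convolve(x, h_true)[:x.size] + 0.01 * rng.normal(size=x.size)
print(lms_identify(x, d, n_taps=4))                 # should approach h_true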
Decay in blood loss estimation skills after web-based didactic training.
Toledo, Paloma; Eosakul, Stanley T; Goetz, Kristopher; Wong, Cynthia A; Grobman, William A
2012-02-01
Accuracy in blood loss estimation has been shown to improve immediately after didactic training. The objective of this study was to evaluate retention of blood loss estimation skills 9 months after a didactic web-based training. Forty-four participants were recruited from a cohort that had undergone web-based training and testing in blood loss estimation. The web-based posttraining test, consisting of pictures of simulated blood loss, was repeated 9 months after the initial training and testing. The primary outcome was the difference in accuracy of estimated blood loss (percent error) at 9 months compared with immediately posttraining. At the 9-month follow-up, the median error in estimation worsened to -34.6%. Although better than the pretraining error of -47.8% (P = 0.003), the 9-month error was significantly less accurate than the immediate posttraining error of -13.5% (P = 0.01). Decay in blood loss estimation skills occurs by 9 months after didactic training.
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
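Only the generic multi-line PLS calibration step is sketched below, with synthetic arrays standing in for LIBS line intensities and copper concentrations; the paper's normalization additionally models the plasma characteristic parameters:

# Generic PLS calibration from multi-line spectral intensities (synthetic data)
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(9)
n_shots, n_lines = 120, 12
conc = rng.uniform(55, 75, n_shots)                         # Cu concentration, wt.% (assumed)
X = np.outer(conc, rng.uniform(0.5, 1.5, n_lines))          # idealized line intensities
X *= rng.normal(1.0, 0.05, (n_shots, 1))                    # shot-to-shot fluctuation
X += rng.normal(0, 0.5, X.shape)                            # additive noise

pls = PLSRegression(n_components=3)
pls.fit(X[:80], conc[:80])
pred = pls.predict(X[80:]).ravel()
print("RMSEP:", np.sqrt(np.mean((pred - conc[80:]) ** 2)))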
NASA Astrophysics Data System (ADS)
Gidey, Amanuel
2018-06-01
Determining suitability and vulnerability of groundwater quality for irrigation use is a key alarm and first aid for careful management of groundwater resources to diminish the impacts on irrigation. This study was conducted to determine the overall suitability of groundwater quality for irrigation use and to generate their spatial distribution maps in Elala catchment, Northern Ethiopia. Thirty-nine groundwater samples were collected to analyze and map the water quality variables. Atomic absorption spectrophotometer, ultraviolet spectrophotometer, titration and calculation methods were used for laboratory groundwater quality analysis. Arc GIS, geospatial analysis tools, semivariogram model types and interpolation methods were used to generate geospatial distribution maps. Twelve and eight water quality variables were used to produce weighted overlay and irrigation water quality index models, respectively. Root-mean-square error, mean square error, absolute square error, mean error, root-mean-square standardized error, measured values versus predicted values were used for cross-validation. The overall weighted overlay model result showed that 146 km2 areas are highly suitable, 135 km2 moderately suitable and 60 km2 area unsuitable for irrigation use. The result of irrigation water quality index confirms 10.26% with no restriction, 23.08% with low restriction, 20.51% with moderate restriction, 15.38% with high restriction and 30.76% with the severe restriction for irrigation use. GIS and irrigation water quality index are better methods for irrigation water resources management to achieve a full yield irrigation production to improve food security and to sustain it for a long period, to avoid the possibility of increasing environmental problems for the future generation.
Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.
2015-09-28
Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all percentage of annual exceedance probabilities and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use for stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.
[NIR Assignment of Magnolol by 2D-COS Technology and Model Application to Huoxiangzhengqi Oral Liquid].
Pei, Yan-ling; Wu, Zhi-sheng; Shi, Xin-yuan; Pan, Xiao-ning; Peng, Yan-fang; Qiao, Yan-jiang
2015-08-01
Near infrared (NIR) spectroscopy assignment of Magnolol was performed using deuterated chloroform solvent and two-dimensional correlation spectroscopy (2D-COS) technology. According to the synchronous spectra of deuterated chloroform solvent and Magnolol, the 1365~1455, 1600~1720, 2000~2181 and 2275~2465 nm bands were the characteristic absorptions of Magnolol. In connection with the structure of Magnolol, 1440 nm was the stretching vibration of the phenolic O-H group, 1679 nm was the stretching vibration of the aryl group and of the methyl connected with it, 2117, 2304, 2339 and 2370 nm were combinations of the stretching, bending and deformation vibrations of aryl C-H, and 2445 nm was the bending vibration of the methyl linked with the aryl group; these bands are attributed to the characteristics of Magnolol. Huoxiangzhengqi Oral Liquid was adopted to study Magnolol: the characteristic band from the spectral assignment and the bands selected by interval Partial Least Squares (iPLS) and Synergy interval Partial Least Squares (SiPLS) were used to establish Partial Least Squares (PLS) quantitative models; the coefficients of determination Rcal² and Rpre² were greater than 0.99, and the Root Mean Square Error of Calibration (RMSEC), Root Mean Square Error of Cross Validation (RMSECV) and Root Mean Square Error of Prediction (RMSEP) were very small. This indicated that the characteristic band from the spectral assignment gives the same results as the chemometric band selection in the PLS model. It provides a reference for NIR spectral assignment of chemical compositions in Chinese Materia Medica, and the NIR band filters were interpreted.
Use of satellite and modeled soil moisture data for predicting event soil loss at plot scale
NASA Astrophysics Data System (ADS)
Todisco, F.; Brocca, L.; Termite, L. F.; Wagner, W.
2015-09-01
The potential of coupling soil moisture and a Universal Soil Loss Equation-based (USLE-based) model for event soil loss estimation at plot scale is carefully investigated at the Masse area, in central Italy. The derived model, named Soil Moisture for Erosion (SM4E), is applied by considering the unavailability of in situ soil moisture measurements, by using the data predicted by a soil water balance model (SWBM) and derived from satellite sensors, i.e., the Advanced SCATterometer (ASCAT). The soil loss estimation accuracy is validated using in situ measurements in which event observations at plot scale are available for the period 2008-2013. The results showed that including soil moisture observations in the event rainfall-runoff erosivity factor of the USLE enhances the capability of the model to account for variations in event soil losses, the soil moisture being an effective alternative to the estimated runoff, in the prediction of the event soil loss at Masse. The agreement between observed and estimated soil losses (through SM4E) is fairly satisfactory with a determination coefficient (log-scale) equal to ~ 0.35 and a root mean square error (RMSE) of ~ 2.8 Mg ha-1. These results are particularly significant for the operational estimation of soil losses. Indeed, currently, soil moisture is a relatively simple measurement at the field scale and remote sensing data are also widely available on a global scale. Through satellite data, there is the potential of applying the SM4E model for large-scale monitoring and quantification of the soil erosion process.
An empirical model for estimating solar radiation in the Algerian Sahara
NASA Astrophysics Data System (ADS)
Benatiallah, Djelloul; Benatiallah, Ali; Bouchouicha, Kada; Hamouda, Messaoud; Nasri, Bahous
2018-05-01
The present work aims to apply the empirical model R.sun to evaluate the solar radiation fluxes on a horizontal plane under clear sky for the city of Adrar, Algeria (27°18 N and 0°11 W), and to compare them with the results measured at the site. The expected results of this comparison are of importance for the investment study of solar systems (solar power plants for electricity production, CSP) and also for the design and performance analysis of any system using solar energy. The statistical indicators used to evaluate the accuracy of the model were the mean bias error (MBE), root mean square error (RMSE) and coefficient of determination. The results show that for global radiation, the daily correlation coefficient is 0.9984. The mean absolute percentage error is 9.44 %. The daily mean bias error is -7.94 %. The daily root mean square error is 12.31 %.
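The statistical indicators named above can be computed directly; the relative forms below are expressed as percentages of the measured mean, which is one common convention (the sample numbers are illustrative only):

# Mean bias error, root mean square error and mean absolute percentage error
import numpy as np

def mbe(measured, estimated):
    return 100.0 * np.mean(estimated - measured) / np.mean(measured)

def rmse(measured, estimated):
    return 100.0 * np.sqrt(np.mean((estimated - measured) ** 2)) / np.mean(measured)

def mape(measured, estimated):
    return 100.0 * np.mean(np.abs((estimated - measured) / measured))

measured = np.array([5.1, 6.3, 7.0, 6.7, 5.9])    # e.g., daily global radiation, kWh/m^2
estimated = np.array([5.0, 6.6, 6.8, 7.1, 5.7])
print(mbe(measured, estimated), rmse(measured, estimated), mape(measured, estimated))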
Response Surface Modeling Using Multivariate Orthogonal Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; DeLoach, Richard
2001-01-01
A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one factor at a time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
NASA Astrophysics Data System (ADS)
Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying
2010-04-01
In phase-shifting interferometry (PSI) the reference wave is usually supposed to be an on-axis plane wave. But in practice a slight tilt of reference wave often occurs, and this tilt will introduce unexpected errors of the reconstructed object wave-front. Usually the least-square method with iterations, which is time consuming, is employed to analyze the phase errors caused by the tilt of reference wave. Here a simple effective algorithm is suggested to detect and then correct this kind of errors. In this method, only some simple mathematic operation is used, avoiding using least-square equations as needed in most methods reported before. It can be used for generalized phase-shifting interferometry with two or more frames for both smooth and diffusing objects, and the excellent performance has been verified by computer simulations. The numerical simulations show that the wave reconstruction errors can be reduced by 2 orders of magnitude.
Validation of Core Temperature Estimation Algorithm
2016-01-29
[Figure captions: (a) observed versus estimated core temperature, and observed versus estimated PSI, each shown with the line of identity (dashed), the least squares regression line (solid) and the regression line equation; (b) Bland-Altman plot for comparison. The root mean squared error (RMSE) was also computed, as given by Equation 2.]
Systematic Error Modeling and Bias Estimation
Zhang, Feihu; Knoll, Alois
2016-01-01
This paper analyzes the statistical properties of the systematic error in terms of range and bearing during the transformation process. Furthermore, we rely on a weighted nonlinear least squares method to calculate the biases based on the proposed models. The results show the high performance of the proposed approach for error modeling and bias estimation. PMID:27213386
Ultrasound monitoring of inter-knee distances during gait.
Lai, Daniel T H; Wrigley, Tim V; Palaniswami, M
2009-01-01
Knee osteoarthritis is an extremely common, debilitating disease associated with pain and loss of function. There is considerable interest in monitoring lower limb alignment due to its close association with joint overload leading to disease progression. The effects of gait modifications that can lower joint loading are of particular interest. Here we describe an ultrasound-based system for monitoring an important aspect of dynamic lower limb alignment, the inter-knee distance during walking. Monitoring this gait parameter should facilitate studies in reducing knee loading, a primary risk factor of knee osteoarthritis progression. The portable device is composed of an ultrasound sensor connected to an Intel iMote2 equipped with Bluetooth wireless capability. Static tests and calibration results show that the sensor possesses an effective beam envelope of 120 degrees, with maximum distance errors of 10% at the envelope edges. Dynamic walking trials reveal close correlation of inter-knee distance trends between that measured by an optical system (Optotrak Certus NDI) and the sensor device. The maximum average root mean square error was found to be 1.46 cm. Future work will focus on improving the accuracy of the device.
Optimized satellite image compression and reconstruction via evolution strategies
NASA Astrophysics Data System (ADS)
Babb, Brendan; Moore, Frank; Peterson, Michael
2009-05-01
This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
Multivariate Welch t-test on distances
2016-01-01
Motivation: Permutational non-Euclidean analysis of variance, PERMANOVA, is routinely used in exploratory analysis of multivariate datasets to draw conclusions about the significance of patterns visualized through dimension reduction. This method recognizes that pairwise distance matrix between observations is sufficient to compute within and between group sums of squares necessary to form the (pseudo) F statistic. Moreover, not only Euclidean, but arbitrary distances can be used. This method, however, suffers from loss of power and type I error inflation in the presence of heteroscedasticity and sample size imbalances. Results: We develop a solution in the form of a distance-based Welch t-test, TW2, for two sample potentially unbalanced and heteroscedastic data. We demonstrate empirically the desirable type I error and power characteristics of the new test. We compare the performance of PERMANOVA and TW2 in reanalysis of two existing microbiome datasets, where the methodology has originated. Availability and Implementation: The source code for methods and analysis of this article is available at https://github.com/alekseyenko/Tw2. Further guidance on application of these methods can be obtained from the author. Contact: alekseye@musc.edu PMID:27515741
Multivariate Welch t-test on distances.
Alekseyenko, Alexander V
2016-12-01
Permutational non-Euclidean analysis of variance, PERMANOVA, is routinely used in exploratory analysis of multivariate datasets to draw conclusions about the significance of patterns visualized through dimension reduction. This method recognizes that pairwise distance matrix between observations is sufficient to compute within and between group sums of squares necessary to form the (pseudo) F statistic. Moreover, not only Euclidean, but arbitrary distances can be used. This method, however, suffers from loss of power and type I error inflation in the presence of heteroscedasticity and sample size imbalances. We develop a solution in the form of a distance-based Welch t-test, TW2, for two sample potentially unbalanced and heteroscedastic data. We demonstrate empirically the desirable type I error and power characteristics of the new test. We compare the performance of PERMANOVA and TW2 in reanalysis of two existing microbiome datasets, where the methodology has originated. The source code for methods and analysis of this article is available at https://github.com/alekseyenko/Tw2. Further guidance on application of these methods can be obtained from the author. alekseye@musc.edu. © The Author 2016. Published by Oxford University Press.
Hammes, Florian; Hille, Thomas; Kissel, Thomas
2014-02-01
A process analytical method using reflectance infrared spectrometry was developed for the in-line monitoring of the amount of the active pharmaceutical ingredient (API) nicotine during a coating process for an oral thin film (OTF). In-line measurements were made using a reflectance infrared (RI) sensor positioned after the last drying zone of the coating line. Real-time spectra from the coating process were used for modelling the nicotine content. Partial least squares (PLS1) calibration models with different data pre-treatments were generated. The calibration model with the most comparable standard error of calibration (SEC) and the standard error of cross validation (SECV) was selected for an external validation run on the production coating line with an independent laminate. Good correlations could be obtained between values estimated from the reflectance infrared data and the reference HPLC test method, respectively. With in-line measurements it was possible to allow real-time adjustments during the production process to keep product specifications within predefined limits hence avoiding loss of material and batch. Copyright © 2013 Elsevier B.V. All rights reserved.
Photogrammetric Method and Software for Stream Planform Identification
NASA Astrophysics Data System (ADS)
Stonedahl, S. H.; Stonedahl, F.; Lohberg, M. M.; Lusk, K.; Miller, D.
2013-12-01
Accurately characterizing the planform of a stream is important for many purposes, including recording measurement and sampling locations, monitoring change due to erosion or volumetric discharge, and spatial modeling of stream processes. While expensive surveying equipment or high resolution aerial photography can be used to obtain planform data, our research focused on developing a close-range photogrammetric method (and accompanying free/open-source software) to serve as a cost-effective alternative. This method involves securing and floating a wooden square frame on the stream surface at several locations, taking photographs from numerous angles at each location, and then post-processing and merging data from these photos using the corners of the square for reference points, unit scale, and perspective correction. For our test field site we chose a ~35m reach along Black Hawk Creek in Sunderbruch Park (Davenport, IA), a small, slow-moving stream with overhanging trees. To quantify error we measured 88 distances between 30 marked control points along the reach. We calculated error by comparing these 'ground truth' distances to the corresponding distances extracted from our photogrammetric method. We placed the square at three locations along our reach and photographed it from multiple angles. The square corners, visible control points, and visible stream outline were hand-marked in these photos using the GIMP (open-source image editor). We wrote an open-source GUI in Java (hosted on GitHub), which allows the user to load marked-up photos, designate square corners and label control points. The GUI also extracts the marked pixel coordinates from the images. We also wrote several scripts (currently in MATLAB) that correct the pixel coordinates for radial distortion using Brown's lens distortion model, correct for perspective by forcing the four square corner pixels to form a parallelogram in 3-space, and rotate the points in order to correctly orient all photos of the same square location. Planform data from multiple photos (and multiple square locations) are combined using weighting functions that mitigate the error stemming from the markup-process, imperfect camera calibration, etc. We have used our (beta) software to mark and process over 100 photos, yielding an average error of only 1.5% relative to our 88 measured lengths. Next we plan to translate the MATLAB scripts into Python and release their source code, at which point only free software, consumer-grade digital cameras, and inexpensive building materials will be needed for others to replicate this method at new field sites. [Figure: three sample photographs of the square with the created planform and control points.]
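A hedged sketch of one of the corrections mentioned, Brown's radial lens distortion model with a simple fixed-point inversion for undistorting marked pixel coordinates; the coefficients, principal point and normalized coordinates are assumptions, not the project's calibration values:

# Brown radial distortion model and iterative undistortion
import numpy as np

def distort(xy, k1, k2, center):
    d = xy - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return center + d * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort(xy_d, k1, k2, center, n_iter=10):
    """Fixed-point inversion: iterate x_u = center + (x_d - center) / factor(r2(x_u))."""
    xy_u = xy_d.copy()
    for _ in range(n_iter):
        d = xy_u - center
        r2 = np.sum(d ** 2, axis=1, keepdims=True)
        xy_u = center + (xy_d - center) / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return xy_u

center = np.array([0.0, 0.0])                  # principal point in normalized coordinates (assumed)
pts = np.array([[0.3, 0.1], [-0.2, 0.4], [0.5, -0.5]])
k1, k2 = -0.15, 0.02                           # illustrative distortion coefficients
pts_d = distort(pts, k1, k2, center)
print(np.abs(undistort(pts_d, k1, k2, center) - pts).max())   # should be ~0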
Cursor Control Device Test Battery
NASA Technical Reports Server (NTRS)
Holden, Kritina; Sandor, Aniko; Pace, John; Thompson, Shelby
2013-01-01
The test battery was developed to provide a standard procedure for cursor control device evaluation. The software was built in Visual Basic and consists of nine tasks and a main menu that integrates the set-up of the tasks. The tasks can be used individually, or in a series defined in the main menu. Task 1, the Unidirectional Pointing Task, tests the speed and accuracy of clicking on targets. Two rectangles with an adjustable width and adjustable center- to-center distance are presented. The task is to click back and forth between the two rectangles. Clicks outside of the rectangles are recorded as errors. Task 2, Multidirectional Pointing Task, measures speed and accuracy of clicking on targets approached from different angles. Twenty-five numbered squares of adjustable width are arranged around an adjustable diameter circle. The task is to point and click on the numbered squares (placed on opposite sides of the circle) in consecutive order. Clicks outside of the squares are recorded as errors. Task 3, Unidirectional (horizontal) Dragging Task, is similar to dragging a file into a folder on a computer desktop. Task 3 requires dragging a square of adjustable width from one rectangle and dropping it into another. The width of each rectangle is adjustable, as well as the distance between the two rectangles. Dropping the square outside of the rectangles is recorded as an error. Task 4, Unidirectional Path Following, is similar to Task 3. The task is to drag a square through a tunnel consisting of two lines. The size of the square and the width of the tunnel are adjustable. If the square touches any of the lines, it is counted as an error and the task is restarted. Task 5, Text Selection, involves clicking on a Start button, and then moving directly to the underlined portion of the displayed text and highlighting it. The pointing distance to the text is adjustable, as well as the to-be-selected font size and the underlined character length. If the selection does not include all of the underlined characters, or includes non-underlined characters, it is recorded as an error. Task 6, Multi-size and Multi-distance Pointing, presents the participant with 24 consecutively numbered buttons of different sizes (63 to 163 pixels), and at different distances (60 to 80 pixels) from the Start button. The task is to click on the Start button, and then move directly to, and click on, each numbered target button in consecutive order. Clicks outside of the target area are errors. Task 7, Standard Interface Elements Task, involves interacting with standard interface elements as instructed in written procedures, including: drop-down menus, sliders, text boxes, radio buttons, and check boxes. Task completion time is recorded. In Task 8, a circular track is presented with a disc in it at the top. Track width and disc size are adjustable. The task is to move the disc with circular motion within the path without touching the boundaries of the track. Time and errors are recorded. Task 9 is a discrete task that allows evaluation of discrete cursor control devices that tab from target to target, such as a castle switch. The task is to follow a predefined path and to click on the yellow targets along the path.
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
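As a concrete illustration of the kind of comparison described above, the following sketch runs a small Monte Carlo experiment contrasting the maximum likelihood (sample mean) estimator of a Poisson rate with the Bayes estimator under a gamma prior. The prior parameters and sample size are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_mse(lam=2.0, n=10, a=2.0, b=1.0, trials=20000):
    """Monte Carlo mean-squared-error comparison for the Poisson rate.

    MLE / minimum variance unbiased estimator: the sample mean.
    Bayes estimator under a gamma(a, b) prior (shape a, rate b):
    the posterior mean (a + sum x) / (b + n).
    """
    x = rng.poisson(lam, size=(trials, n))
    s = x.sum(axis=1)
    mle = s / n
    bayes = (a + s) / (b + n)
    return np.mean((mle - lam) ** 2), np.mean((bayes - lam) ** 2)

print(mc_mse())  # (MSE of MLE, MSE of Bayes estimator)
```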
Insights into the Earth System mass variability from CSR-RL05 GRACE gravity fields
NASA Astrophysics Data System (ADS)
Bettadpur, S.
2012-04-01
The next-generation Release-05 GRACE gravity field data products are the result of extensive effort applied to the improvements to the GRACE Level-1 (tracking) data products, and to improvements in the background gravity models and processing methodology. As a result, the squared-error upper-bound in RL05 fields is half or less than the squared-error upper-bound in RL04 fields. The CSR-RL05 field release consists of unconstrained gravity fields as well as a regularized gravity field time-series that can be used for several applications without any post-processing error reduction. This paper will describe the background and the nature of these improvements in the data products, and provide an error characterization. We will describe the insights these new series offer in measuring the mass flux due to diverse Hydrologic, Oceanographic and Cryospheric processes.
Retrieval of the aerosol optical thickness from UV global irradiance measurements
NASA Astrophysics Data System (ADS)
Costa, M. J.; Salgueiro, V.; Bortoli, D.; Obregón, M. A.; Antón, M.; Silva, A. M.
2015-12-01
The UV irradiance is measured at Évora since several years, where a CIMEL sunphotometer integrated in AERONET is also installed. In the present work, measurements of UVA (315 - 400 nm) irradiances taken with Kipp&Zonen radiometers, as well as satellite data of ozone total column values, are used in combination with radiative transfer calculations, to estimate the aerosol optical thickness (AOT) in the UV. The retrieved UV AOT in Évora is compared with AERONET AOT (at 340 and 380 nm) and a fairly good agreement is found with a root mean square error of 0.05 (normalized root mean square error of 8.3%) and a mean absolute error of 0.04 (mean percentage error of 2.9%). The methodology is then used to estimate the UV AOT in Sines, an industrialized site on the Atlantic western coast, where the UV irradiance is monitored since 2013 but no aerosol information is available.
Uncertainty based pressure reconstruction from velocity measurement with generalized least squares
NASA Astrophysics Data System (ADS)
Zhang, Jiacheng; Scalo, Carlo; Vlachos, Pavlos
2017-11-01
A method using generalized least squares reconstruction of instantaneous pressure field from velocity measurement and velocity uncertainty is introduced and applied to both planar and volumetric flow data. Pressure gradients are computed on a staggered grid from flow acceleration. The variance-covariance matrix of the pressure gradients is evaluated from the velocity uncertainty by approximating the pressure gradient error to a linear combination of velocity errors. An overdetermined system of linear equations which relates the pressure and the computed pressure gradients is formulated and then solved using generalized least squares with the variance-covariance matrix of the pressure gradients. By comparing the reconstructed pressure field against other methods such as solving the pressure Poisson equation, the omni-directional integration, and the ordinary least squares reconstruction, generalized least squares method is found to be more robust to the noise in velocity measurement. The improvement on pressure result becomes more remarkable when the velocity measurement becomes less accurate and more heteroscedastic. The uncertainty of the reconstructed pressure field is also quantified and compared across the different methods.
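A minimal numpy sketch of the generalized least squares step is given below; the construction of the staggered-grid gradient operator and the propagation of velocity uncertainty into the gradient covariance are assumed to be done elsewhere.

```python
import numpy as np

def gls(A, b, cov_b):
    """Generalized least squares: minimize (Ax - b)^T W (Ax - b), W = cov_b^{-1}.

    A     : (m, n) operator relating the unknown pressures to the computed gradients
    b     : (m,) pressure-gradient "observations" from the flow acceleration
    cov_b : (m, m) variance-covariance matrix of those gradients
    Returns the pressure estimate and its covariance.
    """
    W = np.linalg.inv(cov_b)
    N = A.T @ W @ A
    x = np.linalg.solve(N, A.T @ W @ b)
    cov_x = np.linalg.inv(N)   # uncertainty of the reconstructed pressure
    return x, cov_x
```

Setting cov_b to the identity recovers the ordinary least squares reconstruction, which is why the benefit grows as the velocity errors become more heteroscedastic.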
An Empirical State Error Covariance Matrix for Batch State Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless as to the source of the uncertainty. Also, in its most straight forward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problems, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two observer, triangulation problem with range only measurements. 
Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
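The following sketch shows one plausible reading of the construction: a standard weighted least-squares solution whose theoretical covariance is rescaled by the average weighted residual variance, so that unmodeled error sources present in the residuals inflate the reported uncertainty. It is an illustration of the idea, not the paper's exact algorithm.

```python
import numpy as np

def wls_with_empirical_covariance(H, y, W):
    """Weighted least squares with an empirically scaled state covariance.

    H : (m, n) observation partials, y : (m,) measurements, W : (m, m) weights.
    The theoretical covariance (H^T W H)^{-1} is scaled by the average weighted
    residual variance computed from the actual measurement residuals.
    """
    m, n = H.shape
    P_theory = np.linalg.inv(H.T @ W @ H)
    x = P_theory @ (H.T @ W @ y)
    r = y - H @ x                              # actual measurement residuals
    avg_wrv = (r.T @ W @ r) / (m - n)          # average weighted residual variance
    return x, avg_wrv * P_theory               # empirical state error covariance
```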
NASA Astrophysics Data System (ADS)
Endelt, B.
2017-09-01
Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually. Thus, an in-process feedback control scheme may not be necessary to stabilize the process; an alternative approach is to apply an iterative learning algorithm that can learn from previously produced parts, i.e. a self-learning system which gradually reduces error based on historical process information. The paper proposes a simple algorithm of this kind that can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-squares error between the current flange geometry and a reference geometry using a non-linear least-squares algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet'08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
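A heavily simplified sketch of such a batch-to-batch learning update is shown below. The sensitivity matrix, the fixed gain, and the linearized pseudo-inverse correction are illustrative stand-ins for the paper's non-linear least-squares step.

```python
import numpy as np

def ilc_update(u_k, flange_k, flange_ref, S, gain=0.5):
    """One batch-to-batch learning update, u_{k+1} = u_k + gain * S^+ e_k.

    u_k        : process parameters used for part k (e.g. blank-holder forces)
    flange_k   : measured flange-edge geometry of part k (sampled points)
    flange_ref : reference flange geometry (same sampling)
    S          : assumed sensitivity matrix d(flange)/d(parameters), e.g.
                 estimated once from finite-difference FE simulations
    The correction here is a linearized pseudo-inverse step; the paper obtains
    it from a non-linear least-squares solve on the geometry error.
    """
    e_k = flange_ref - flange_k            # geometry error of the last produced part
    return u_k + gain * np.linalg.pinv(S) @ e_k
```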
Online measurement of urea concentration in spent dialysate during hemodialysis.
Olesberg, Jonathon T; Arnold, Mark A; Flanigan, Michael J
2004-01-01
We describe online optical measurements of urea in the effluent dialysate line during regular hemodialysis treatment of several patients. Monitoring urea removal can provide valuable information about dialysis efficiency. Spectral measurements were performed with a Fourier-transform infrared spectrometer equipped with a flow-through cell. Spectra were recorded across the 5000-4000 cm(-1) (2.0-2.5 microm) wavelength range at 1-min intervals. Savitzky-Golay filtering was used to remove baseline variations attributable to the temperature dependence of the water absorption spectrum. Urea concentrations were extracted from the filtered spectra by use of partial least-squares regression and the net analyte signal of urea. Urea concentrations predicted by partial least-squares regression matched concentrations obtained from standard chemical assays with a root mean square error of 0.30 mmol/L (0.84 mg/dL urea nitrogen) over an observed concentration range of 0-11 mmol/L. The root mean square error obtained with the net analyte signal of urea was 0.43 mmol/L with a calibration based only on a set of pure-component spectra. The error decreased to 0.23 mmol/L when a slope and offset correction were used. Urea concentrations can be continuously monitored during hemodialysis by near-infrared spectroscopy. Calibrations based on the net analyte signal of urea are particularly appealing because they do not require a training step, as do statistical multivariate calibration procedures such as partial least-squares regression.
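For readers wanting to reproduce the multivariate calibration step, the sketch below fits a partial least-squares regression to spectra with scikit-learn and reports an RMSEP on a held-out set. The array shapes, the number of latent variables, and the random data are placeholders, not the study's values.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

# X: filtered absorbance spectra (n_samples x n_wavenumbers);
# y: reference urea concentrations (mmol/L) from the chemical assay.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 500))
y = rng.uniform(0, 11, size=60)

pls = PLSRegression(n_components=8)          # number of latent variables is illustrative
pls.fit(X[:40], y[:40])                      # calibration set
rmsep = mean_squared_error(y[40:], pls.predict(X[40:]).ravel()) ** 0.5
print(f"RMSEP = {rmsep:.2f} mmol/L")
```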
Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh
2017-03-01
Patient safety is one of the main objectives in healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors lead to an increase in patient mortality and to challenges such as prolonged inpatient stays and increased costs. Controlling medical errors is very important, because these errors, besides being costly, threaten patient safety. The aim was to evaluate the attitudes of nurses and midwives toward the causes and rates of medical error reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad Public Hospitals. Data collection was done through the Goldstone 2001 revised questionnaire. SPSS 11.5 software was used for data analysis. To analyze the data, descriptive and inferential statistics were used: standard deviation and relative frequency distributions were used together with the means, and the results were presented as tables and charts. The chi-square test was used for the inferential analysis of the data. Most of the midwives and nurses (39.4%) were in the age range of 25 to 34 years and the lowest percentage (2.2%) were in the age range of 55-59 years. The highest average of medical errors was related to employees with three to four years of work experience, while the lowest average was related to those with one to two years of work experience. The highest average of medical errors occurred during the evening shift, while the lowest occurred during the night shift. Three main causes of medical errors were identified: illegible physician prescription orders, similarity of names of different drugs, and nurse fatigue. The most important causes of medical errors from the viewpoints of nurses and midwives are illegible physician's orders, drug name similarity with other drugs, nurse fatigue and damaged label or packaging of the drug, respectively. Head nurse feedback, peer feedback, and fear of punishment or job loss were considered as reasons for underreporting of medical errors. This research demonstrates the need for greater attention to be paid to the causes of medical errors.
Criterion Predictability: Identifying Differences Between R-Squares
ERIC Educational Resources Information Center
Malgady, Robert G.
1976-01-01
An analysis of variance procedure for testing differences in r-squared, the coefficient of determination, across independent samples is proposed and briefly discussed. The principal advantage of the procedure is to minimize Type I error for follow-up tests of pairwise differences. (Author/JKS)
Piccolo, Brian D; Keim, Nancy L; Fiehn, Oliver; Adams, Sean H; Van Loan, Marta D; Newman, John W
2015-04-01
Total weight loss induced by energy restriction is highly variable even under tightly controlled conditions. Identifying weight-loss discriminants would provide a valuable weight management tool and insights into body weight regulation. This study characterized responsiveness to energy restriction in adults from variables including the plasma metabolome, endocrine and inflammatory markers, clinical indices, body composition, diet, and physical activity. Data were derived from a controlled feeding trial investigating the effect of 3-4 dairy product servings in an energy-restricted diet (2092 kJ/d reduction) over 12 wk. Partial least squares regression was used to identify weight-loss discriminants in 67 overweight and obese adults. Linear mixed models were developed to identify discriminant variable differences in high- vs. low-weight-loss responders. Both pre- and postintervention variables (n = 127) were identified as weight-loss discriminants (root mean squared error of prediction = 1.85 kg; Q(2) = 0.43). Compared with low-responders (LR), high-responders (HR) had greater decreases in body weight (LR: 2.7 ± 1.6 kg; HR: 9.4 ± 1.8 kg, P < 0.01), BMI (in kg/m(2); LR: 1.0 ± 0.6; HR: 3.3 ± 0.5, P < 0.01), and total fat (LR: 2.2 ± 1.1 kg; HR: 8.0 ± 2.1 kg, P < 0.01). Significant group effects unaffected by the intervention were determined for the respiratory exchange ratio (LR: 0.86 ± 0.05; HR: 0.82 ± 0.03, P < 0.01), moderate physical activity (LR: 127 ± 52 min; HR: 167 ± 68 min, P = 0.02), sedentary activity (LR: 1090 ± 99 min; HR: 1017 ± 110 min, P = 0.02), and plasma stearate [LR: 102,000 ± 21,000 quantifier ion peak height (QIPH); HR: 116,000 ± 24,000 QIPH, P = 0.01]. Overweight and obese individuals highly responsive to energy restriction had accelerated reductions in adiposity, likely supported in part by higher lipid mobilization and combustion. A novel observation was that person-to-person differences in habitual physical activity and magnitude of weight loss were accompanied by unique blood metabolite signatures. This trial was registered at clinicaltrials.gov as NCT00858312. © 2015 American Society for Nutrition.
Performance of the Generalized S-X[squared] Item Fit Index for the Graded Response Model
ERIC Educational Resources Information Center
Kang, Taehoon; Chen, Troy T.
2011-01-01
The utility of Orlando and Thissen's ("2000", "2003") S-X[squared] fit index was extended to the model-fit analysis of the graded response model (GRM). The performance of a modified S-X[squared] in assessing item-fit of the GRM was investigated in light of empirical Type I error rates and power with a simulation study having…
NASA Astrophysics Data System (ADS)
Yehia, Ali M.; Mohamed, Heba M.
2016-01-01
Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly without any preliminary separation step and were successfully applied to pharmaceutical formulation analysis, showing no excipient interference.
1984-12-01
total sum of squares at the center points minus the correction factor for the mean at the center points (SSpe = Y'Y − n1·Ȳ²), where n1 is the number of ... (SSlof = SSres − SSpe). The sum of squares due to pure error estimates σ² and the sum of squares due to lack-of-fit estimates σ² plus a bias term if ... Response Surface Methodology ANOVA (Source, d.f., SS, MS): Regression, n, b'X'Y, b'X'Y/n; Residual, m − n, Y'Y − b'X'Y, (Y'Y − b'X'Y)/(m − n); Pure Error, n1 − 1, Y'Y − n1·Ȳ², SSpe/(n1 ...
NASA Technical Reports Server (NTRS)
Rice, R. F.
1976-01-01
The root-mean-square error performance measure is used to compare the relative performance of several widely known source coding algorithms with the RM2 image data compression system. The results demonstrate that RM2 has a uniformly significant performance advantage.
Teng, C-C; Chai, H; Lai, D-M; Wang, S-F
2007-02-01
Previous research has shown that there is no significant relationship between the degree of structural degeneration of the cervical spine and neck pain. We therefore sought to investigate the potential role of sensory dysfunction in chronic neck pain. Cervicocephalic kinesthetic sensibility, expressed by how accurately an individual can reposition the head, was studied in three groups of individuals, a control group of 20 asymptomatic young adults and two groups of middle-aged adults (20 subjects in each group) with or without a history of mild neck pain. An ultrasound-based three-dimensional coordinate measuring system was used to measure the position of the head and to test the accuracy of repositioning. Constant error (indicating that the subject overshot or undershot the intended position) and root mean square errors (representing total errors of accuracy and variability) were measured during repositioning of the head to the neutral head position (Head-to-NHP) and repositioning of the head to the target (Head-to-Target) in three cardinal planes (sagittal, transverse, and frontal). Analysis of covariance (ANCOVA) was used to test the group effect, with age used as a covariate. The constant errors during repositioning from a flexed position and from an extended position to the NHP were significantly greater in the middle-aged subjects than in the control group (beta=0.30 and beta=0.60, respectively; P<0.05 for both). In addition, the root mean square errors during repositioning from a flexed or extended position to the NHP were greater in the middle-aged subjects than in the control group (beta=0.27 and beta=0.49, respectively; P<0.05 for both). The root mean square errors also increased during Head-to-Target in left rotation (beta=0.24;P<0.05), but there was no difference in the constant errors or root mean square errors during Head-to-NHP repositioning from other target positions (P>0.05). The results indicate that, after controlling for age as a covariate, there was no group effect. Thus, age appears to have a profound effect on an individual's ability to accurately reposition the head toward the neutral position in the sagittal plane and repositioning the head toward left rotation. A history of mild chronic neck pain alone had no significant effect on cervicocephalic kinesthetic sensibility.
What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2013-01-01
This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…
Vector quantizer designs for joint compression and terrain categorization of multispectral imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Lyons, Daniel F.
1994-01-01
Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.
Quantitative Modelling of Trace Elements in Hard Coal.
Smoliński, Adam; Howaniec, Natalia
2016-01-01
The significance of coal in the world economy remains unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for contaminated data set and the correct identification of outlying objects based on the robust scales were required. These enabled the development of the correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all except for one the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% only for three models constructed. The study is of both cognitive and applicative importance. It presents the unique application of the chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools of coal quality assessment.
NASA Astrophysics Data System (ADS)
Schaffrin, Burkhard; Felus, Yaron A.
2008-06-01
The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y − E_Y = (X − E_X)·Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach, where only the observation matrix Y is perturbed by random errors, and, on the other hand, the data least-squares approach, where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix, are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
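For the special case of equal, uncorrelated errors in all entries, the 'closed form' total least-squares solution can be written directly from the singular-value decomposition of the augmented matrix [X Y]; a short sketch follows. This is the classical construction, not the authors' more general MTLS algorithm.

```python
import numpy as np

def tls(X, Y):
    """Total least squares solution of (X - E_X) Xi = (Y - E_Y) via SVD.

    X : (m, n) coefficient/observation matrix, Y : (m, d) observation matrix.
    Classical Golub-Van Loan construction assuming equal, uncorrelated errors
    in every entry of X and Y.
    """
    n = X.shape[1]
    _, _, Vt = np.linalg.svd(np.hstack([X, Y]))
    V = Vt.T
    V12 = V[:n, n:]          # top-right block of the right singular vectors
    V22 = V[n:, n:]          # bottom-right block
    return -V12 @ np.linalg.inv(V22)   # (n, d) parameter matrix Xi
```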
Use of satellite and modelled soil moisture data for predicting event soil loss at plot scale
NASA Astrophysics Data System (ADS)
Todisco, F.; Brocca, L.; Termite, L. F.; Wagner, W.
2015-03-01
The potential of coupling soil moisture and a USLE-based model for event soil loss estimation at plot scale is carefully investigated at the Masse area, in Central Italy. The derived model, named Soil Moisture for Erosion (SM4E), is applied by considering the unavailability of in situ soil moisture measurements, using instead the data predicted by a soil water balance model (SWBM) and derived from satellite sensors, i.e. the Advanced SCATterometer (ASCAT). The soil loss estimation accuracy is validated using in situ measurements for which event observations at plot scale are available for the period 2008-2013. The results showed that including soil moisture observations in the event rainfall-runoff erosivity factor of the RUSLE/USLE enhances the capability of the model to account for variations in event soil losses, with soil moisture being an effective alternative to estimated runoff in the prediction of event soil loss at Masse. The agreement between observed and estimated soil losses (through SM4E) is fairly satisfactory, with a determination coefficient (log-scale) of ~0.35 and a root-mean-square error (RMSE) of ~2.8 Mg ha-1. These results are particularly significant for the operational estimation of soil losses. Indeed, soil moisture is currently a relatively simple measurement at the field scale, and remote sensing data are also widely available on a global scale. Through satellite data, there is the potential of applying the SM4E model for large-scale monitoring and quantification of the soil erosion process.
NASA Astrophysics Data System (ADS)
Liu, Xing-fa; Cen, Ming
2007-12-01
The neural network system error correction method is more precise than the least-squares system error correction method and the spherical harmonics function system error correction method. The accuracy of the neural network method is mainly determined by the network architecture. Analysis and simulation show that both the BP neural network and the RBF neural network error correction methods achieve high correction accuracy; for small training sample sets, the RBF network method is preferable to the BP network method when training speed and network scale are taken into account.
The Influence of Dimensionality on Estimation in the Partial Credit Model.
ERIC Educational Resources Information Center
De Ayala, R. J.
1995-01-01
The effect of multidimensionality on partial credit model parameter estimation was studied with noncompensatory and compensatory data. Analysis results, consisting of root mean square error, bias, Pearson product-moment correlations, standardized root mean squared differences, standardized differences between means, and descriptive statistics…
Voss, Frank D.; Curran, Christopher A.; Mastin, Mark C.
2008-01-01
A mechanistic water-temperature model was constructed by the U.S. Geological Survey for use by the Bureau of Reclamation for studying the effect of potential water management decisions on water temperature in the Yakima River between Roza and Prosser, Washington. Flow and water temperature data for model input were obtained from the Bureau of Reclamation Hydromet database and from measurements collected by the U.S. Geological Survey during field trips in autumn 2005. Shading data for the model were collected by the U.S. Geological Survey in autumn 2006. The model was calibrated with data collected from April 1 through October 31, 2005, and tested with data collected from April 1 through October 31, 2006. Sensitivity analysis results showed that for the parameters tested, daily maximum water temperature was most sensitive to changes in air temperature and solar radiation. Root mean squared error for the five sites used for model calibration ranged from 1.3 to 1.9 degrees Celsius (°C) and mean error ranged from −1.3 to 1.6°C. The root mean squared error for the five sites used for testing simulation ranged from 1.6 to 2.2°C and mean error ranged from 0.1 to 1.3°C. The accuracy of the stream temperatures estimated by the model is limited by four errors (model error, data error, parameter error, and user error).
Fiyadh, Seef Saadi; AlSaadi, Mohammed Abdulhakim; AlOmar, Mohamed Khalid; Fayaed, Sabah Saadi; Hama, Ako R; Bee, Sharifah; El-Shafie, Ahmed
2017-11-01
The main challenge in the lead removal simulation is the non-linear behaviour of the relationships between the process parameters. The conventional modelling technique usually deals with this problem by a linear method. The substitute modelling technique is an artificial neural network (ANN) system, and it is selected to reflect the non-linearity in the interaction among the variables in the function. Herein, synthesized deep eutectic solvents were used as a functionalized agent with carbon nanotubes as adsorbents of Pb(2+). Different parameters were used in the adsorption study including pH (2.7 to 7), adsorbent dosage (5 to 20 mg), contact time (3 to 900 min) and Pb(2+) initial concentration (3 to 60 mg/l). The system was fed and trained with 158 experimental runs carried out at laboratory scale. Two ANN types were designed in this work, the feed-forward back-propagation and the layer-recurrent network; both methods are compared based on their predictive proficiency in terms of the mean square error (MSE), root mean square error, relative root mean square error, mean absolute percentage error and determination coefficient (R(2)) based on the testing dataset. The ANN model of lead removal was subjected to accuracy determination and the results showed an R(2) of 0.9956 with an MSE of 1.66 × 10(-4). The maximum relative error is 14.93% for the feed-forward back-propagation neural network model.
Rapid Detection of Volatile Oil in Mentha haplocalyx by Near-Infrared Spectroscopy and Chemometrics.
Yan, Hui; Guo, Cheng; Shao, Yang; Ouyang, Zhen
2017-01-01
Near-infrared spectroscopy combined with partial least squares regression (PLSR) and support vector machine (SVM) was applied for the rapid determination of the volatile oil content in Mentha haplocalyx. The effects of data pre-processing methods on the accuracy of the PLSR calibration models were investigated. The performance of the final model was evaluated according to the correlation coefficient (R) and root mean square error of prediction (RMSEP). For the PLSR model, the best preprocessing method combination was first-order derivative, standard normal variate transformation (SNV), and mean centering, which gave calibration and prediction correlation coefficients of 0.8805 and 0.8719, an RMSEC of 0.091, and an RMSEP of 0.097, respectively. The wave-number variables linked to volatile oil lie in the range from 5500 to 4000 cm-1, as identified by analyzing the loading weights and variable importance in projection (VIP) scores. For the SVM model, six LVs (fewer than the seven LVs in the PLSR model) were adopted, and the result was better than the PLSR model: the calibration and prediction correlation coefficients were 0.9232 and 0.9202, respectively, with RMSEC and RMSEP of 0.084 and 0.082, respectively, which indicated that the predicted values were accurate and reliable. This work demonstrated that near infrared reflectance spectroscopy with chemometrics could be used to rapidly detect the main volatile oil content in M. haplocalyx. The quality of medicine directly links to clinical efficacy; thus, it is important to control the quality of Mentha haplocalyx. Near-infrared spectroscopy combined with partial least squares regression (PLSR) and support vector machine (SVM) was applied for the rapid determination of the volatile oil content in Mentha haplocalyx. For the SVM model, 6 LVs (fewer than the 7 LVs in the PLSR model) were adopted, and the result was better than the PLSR model. It demonstrated that near infrared reflectance spectroscopy with chemometrics could be used to rapidly detect the main volatile oil content in Mentha haplocalyx. Abbreviations used: 1st der: First-order derivative; 2nd der: Second-order derivative; LOO: Leave-one-out; LVs: Latent variables; MC: Mean centering; NIR: Near-infrared; NIRS: Near infrared spectroscopy; PCR: Principal component regression; PLSR: Partial least squares regression; RBF: Radial basis function; RMSECV: Root mean square error of cross-validation; RMSEC: Root mean square error of calibration; RMSEP: Root mean square error of prediction; SNV: Standard normal variate transformation; SVM: Support vector machine; VIP: Variable importance in projection.
Navy Fuel Composition and Screening Tool (FCAST) v2.8
2016-05-10
allowed us to develop partial least squares (PLS) models based on gas chromatography-mass spectrometry (GC-MS) data that predict fuel properties. The ... Chemometric property modeling; partial least squares (PLS); compositional profiler; Naval Air Systems Command Air-4.4.5; Patuxent River Naval Air Station, Patuxent ... Cumulative predicted residual error sum of squares; DiEGME (diethylene glycol monomethyl ether); FCAST (Fuel Composition and Screening Tool); FFP (Fit for ...
Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Bentler, Peter M.
2000-01-01
Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
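A generic illustration of the reweighting idea, applied to a simple regression rather than to a covariance structure model, is sketched below: each pass solves an ordinary weighted least-squares problem with Huber-type weights computed from the previous residuals. The tuning constant and scale estimate are conventional choices, not taken from the article.

```python
import numpy as np

def irls(X, y, c=1.345, n_iter=50):
    """Robust fitting by iteratively reweighted least squares with Huber weights.

    Each iteration reuses ordinary (weighted) least-squares machinery: residuals
    from the previous solution determine case weights, and the weighted normal
    equations are solved again until the coefficients settle.
    """
    b = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS start
    for _ in range(n_iter):
        r = y - X @ b
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale estimate (MAD)
        u = np.abs(r / scale)
        w = np.where(u <= c, 1.0, c / u)               # Huber weights
        Xw = X * w[:, None]
        b = np.linalg.solve(Xw.T @ X, Xw.T @ y)        # weighted normal equations
    return b
```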
Latin-square three-dimensional gage master
Jones, L.
1981-05-12
A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.
Latin square three dimensional gage master
Jones, Lynn L.
1982-01-01
A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.
Effect of nonideal square-law detection on static calibration in noise-injection radiometers
NASA Technical Reports Server (NTRS)
Hearn, C. P.
1984-01-01
The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.
VizieR Online Data Catalog: delta Cep VEGA/CHARA observing log (Nardetto+, 2016)
NASA Astrophysics Data System (ADS)
Nardetto, N.; Merand, A.; Mourard, D.; Storm, J.; Gieren, W.; Fouque, P.; Gallenne, A.; Graczyk, D.; Kervella, P.; Neilson, H.; Pietrzynski, G.; Pilecki, B.; Breitfelder, J.; Berio, P.; Challouf, M.; Clausse, J.-M.; Ligi, R.; Mathias, P.; Meilland, A.; Perraut, K.; Poretti, E.; Rainer, M.; Spang, A.; Stee, P.; Tallon-Bosc, I.; Ten Brummelaar, T.
2016-07-01
The columns give, respectively, the date, the RJD, the hour angle (HA), the minimum and maximum wavelengths over which the squared visibility is calculated, the projected baseline length Bp and its orientation PA, the signal-to-noise ratio on the fringe peak; the last column provides the calibrated squared visibility V2 together with the statistical error on V2, and the systematic error on V2 (see text for details). The data are available on the Jean-Marie Mariotti Center OiDB service (Available at http://oidb.jmmc.fr). (1 data file).
Tissue resistivity estimation in the presence of positional and geometrical uncertainties.
Baysal, U; Eyüboğlu, B M
2000-08-01
Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with the results of conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3% compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.
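The core of such a statistically constrained estimator is a linear minimum mean squared error update; a compact sketch is given below. The sensitivity matrix and the covariance inputs are assumed to come from the linearized forward model and the a priori constraints described above, not from the paper's actual code.

```python
import numpy as np

def mmse_resistivity_estimate(S, v, cov_rho, cov_noise, rho0):
    """Linear minimum mean squared error estimate of tissue resistivities.

    S         : sensitivity matrix mapping resistivity perturbations to
                boundary-voltage perturbations (linearized forward model)
    v         : measured minus reference boundary voltages
    cov_rho   : prior covariance of the resistivity perturbation (encodes the
                resistivity-range and geometry constraints)
    cov_noise : covariance of instrumentation, linearization and geometry errors
    rho0      : reference resistivity distribution
    """
    gain = cov_rho @ S.T @ np.linalg.inv(S @ cov_rho @ S.T + cov_noise)
    return rho0 + gain @ v
```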
A nonlinear model of gold production in Malaysia
NASA Astrophysics Data System (ADS)
Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi
2014-06-01
Malaysia is a country rich in natural resources, one of which is gold, and gold has become an important national commodity. This study was conducted to determine a model that fits the gold production in Malaysia over the years 1995-2010. Five nonlinear models are considered: the Logistic, Gompertz, Richards, Weibull and Chapman-Richards models. These models are used to fit the cumulative gold production in Malaysia, and the best model is then selected based on model performance, measured by the sum of squared errors, root mean square error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. The study found that the Weibull model significantly outperforms the other models. To confirm that the Weibull model is the best, the latest data were fitted to the model; once again, the Weibull model gives the lowest values for all of the error measures. We conclude that future gold production in Malaysia can be predicted with the Weibull model, which could be an important finding for planning Malaysia's economic activities.
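As an illustration of fitting such a growth curve, the sketch below fits a Weibull model of the form a − b·exp(−c·t^d) to a cumulative series with scipy and reports the RMSE. The data values and starting guesses are placeholders, not the study's production figures.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_growth(t, a, b, c, d):
    """Weibull growth curve: a - b * exp(-c * t**d)."""
    return a - b * np.exp(-c * t**d)

# Illustrative cumulative production values for 16 years (placeholders).
t = np.arange(1, 17, dtype=float)
y = np.array([2, 5, 9, 14, 20, 27, 33, 38, 43, 47, 50, 53, 55, 57, 58, 59], float)

popt, _ = curve_fit(weibull_growth, t, y, p0=[60, 60, 0.05, 1.5], maxfev=10000)
rmse = np.sqrt(np.mean((y - weibull_growth(t, *popt)) ** 2))
print("fitted parameters:", popt, "RMSE:", rmse)
```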
Comparison of structural and least-squares lines for estimating geologic relations
Williams, G.P.; Troutman, B.M.
1990-01-01
Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. Number of data points, slope and intercept of the true relation, and variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as error in X becomes small relative to error in Y. Regarding the second goal-predicting the dependent variable-OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. ?? 1990 International Association for Mathematical Geology.
Identification of Carbon loss in the production of pilot-scale Carbon nanotube using gauze reactor
NASA Astrophysics Data System (ADS)
Wulan, P. P. D. K.; Purwanto, W. W.; Yeni, N.; Lestari, Y. D.
2018-03-01
Carbon loss of more than 65% was the major obstacle in Carbon Nanotube (CNT) production using a gauze pilot-scale reactor. The results showed an initial carbon loss calculation of 27.64%. The carbon loss calculation was then refined with several correction parameters: product flow rate measurement error, feed flow rate changes, gas product composition by Gas Chromatography Flame Ionization Detector (GC FID), and carbon particulate captured by glass fiber filters. The error in the product flow rate due to measurement with bubble soap contributes a carbon loss calculation error of about ±4.14%. Changes in the feed flow rate due to CNT growth in the reactor reduce the calculated carbon loss by 4.97%. The detection of secondary hydrocarbons with GC FID during the CNT production process reduces the carbon loss by 5.14%. Particulates carried by the product stream are very few and correct the carbon loss by only about 0.05%. Taking all the factors into account, the carbon loss in this study is (17.21 ± 4.14)%. Assuming that 4.14% of the carbon loss is due to the product flow rate measurement error, the carbon loss is 13.07%. This means that more than 57% of the carbon loss in this study has been identified.
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average model, future water quality parameter values have been estimated. It is observed that the predictive model is useful at 95 % confidence limits and that the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agriculture and industrial use.
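A minimal example of the ARIMA forecasting step with confidence limits, using statsmodels on a synthetic monthly series, is shown below; the order (1, 1, 1) and the data are illustrative, not the fitted Yamuna model.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Illustrative monthly series of one water-quality parameter (synthetic placeholder).
y = pd.Series(
    5 + 0.02 * np.arange(96) + np.random.default_rng(2).normal(0, 0.3, 96),
    index=pd.date_range("2004-01-01", periods=96, freq="MS"),
)

model = ARIMA(y, order=(1, 1, 1)).fit()       # illustrative ARIMA order
forecast = model.get_forecast(steps=12)       # 12-month-ahead prediction
print(forecast.predicted_mean)
print(forecast.conf_int(alpha=0.05))          # 95 % confidence limits
```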
Estimating errors in least-squares fitting
NASA Technical Reports Server (NTRS)
Richter, P. H.
1995-01-01
While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained form a given set of measurements.
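For the ordinary linear case, the standard errors of the fitted parameters and of the fitted function follow directly from the residual variance and the normal-equations matrix; a short numpy sketch is given below under the usual equal-weight assumptions.

```python
import numpy as np

def lsq_with_errors(X, y):
    """Ordinary least squares with standard errors of the parameters and of the fit.

    X : (n, p) design matrix (e.g. polynomial basis evaluated at the data points)
    y : (n,) observations with equal, independent random errors.
    """
    n, p = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ b
    s2 = r @ r / (n - p)                         # residual variance estimate
    cov_b = s2 * np.linalg.inv(X.T @ X)          # parameter covariance matrix
    se_b = np.sqrt(np.diag(cov_b))               # standard errors of the parameters
    se_fit = np.sqrt(np.einsum("ij,jk,ik->i", X, cov_b, X))  # std. error of the fitted curve
    return b, se_b, se_fit
```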
Blackout detection as a multiobjective optimization problem.
Chaudhary, A M; Trachtenberg, E A
1991-01-01
We study new fast computational procedures for pilot blackout (total loss of vision) detection in real time. Their validity is demonstrated by data acquired during experiments with volunteer pilots on a human centrifuge. A new systematic class of very fast suboptimal group filters is employed. The utilization of various inherent group invariancies of the signals involved allows us to solve the detection problem via estimation with respect to many performance criteria. The complexity of the procedures, in terms of the number of computer operations required for their implementation, is investigated. Various classes of such prediction procedures are investigated and analyzed, and trade-offs are established. We also investigated the validity of suboptimal filtering using different group filters for different performance criteria, namely: the number of false detections, the number of missed detections, the accuracy of detection and the closeness of all procedures to a certain benchmark technique in terms of dispersion squared (mean square error). The results are compared to recent studies of detection of evoked potentials using estimation. The group filters compare favorably with conventional techniques in many cases with respect to the above-mentioned criteria. Their main advantage is the fast computational processing.
Quantum-classical boundary for precision optical phase estimation
NASA Astrophysics Data System (ADS)
Birchall, Patrick M.; O'Brien, Jeremy L.; Matthews, Jonathan C. F.; Cable, Hugo
2017-12-01
Understanding the fundamental limits on the precision to which an optical phase can be estimated is of key interest for many investigative techniques utilized across science and technology. We study the estimation of a fixed optical phase shift due to a sample which has an associated optical loss, and compare phase estimation strategies using classical and nonclassical probe states. These comparisons are based on the attainable (quantum) Fisher information calculated per number of photons absorbed or scattered by the sample throughout the sensing process. We find that for a given number of incident photons upon the unknown phase, nonclassical techniques in principle provide less than a 20 % reduction in root-mean-square error (RMSE) in comparison with ideal classical techniques in multipass optical setups. Using classical techniques in a different optical setup that we analyze, which incorporates additional stages of interference during the sensing process, the achievable reduction in RMSE afforded by nonclassical techniques falls to only ≃4 % . We explain how these conclusions change when nonclassical techniques are compared to classical probe states in nonideal multipass optical setups, with additional photon losses due to the measurement apparatus.
Lee, Sheila; McMullen, D.; Brown, G. L.; Stokes, A. R.
1965-01-01
1. A theoretical analysis of the errors in multicomponent spectrophotometric analysis of nucleoside mixtures, by a least-squares procedure, has been made to obtain an expression for the error coefficient, relating the error in calculated concentration to the error in extinction measurements. 2. The error coefficients, which depend only on the `library' of spectra used to fit the experimental curves, have been computed for a number of `libraries' containing the following nucleosides found in s-RNA: adenosine, guanosine, cytidine, uridine, 5-ribosyluracil, 7-methylguanosine, 6-dimethylaminopurine riboside, 6-methylaminopurine riboside and thymine riboside. 3. The error coefficients have been used to determine the best conditions for maximum accuracy in the determination of the compositions of nucleoside mixtures. 4. Experimental determinations of the compositions of nucleoside mixtures have been made and the errors found to be consistent with those predicted by the theoretical analysis. 5. It has been demonstrated that, with certain precautions, the multicomponent spectrophotometric method described is suitable as a basis for automatic nucleotide-composition analysis of oligonucleotides containing nine nucleotides. Used in conjunction with continuous chromatography and flow chemical techniques, this method can be applied to the study of the sequence of s-RNA. PMID:14346087
High-resolution global maps of 21st-century forest cover change
Hansen, M.C.; Potapov, P.V.; Moore, R.; Hancher, M.; Turubanova, S.A.; Tyukavina, A.; Thau, D.; Stehman, S.V.; Goetz, S.J.; Loveland, Thomas R.; Kommareddy, A.; Egorov, Alexey; Chini, L.; Justice, C.O.; Townshend, J.R.G.
2013-01-01
Quantification of global forest change has been lacking despite the recognized importance of forest ecosystem services. In this study, Earth observation satellite data were used to map global forest loss (2.3 million square kilometers) and gain (0.8 million square kilometers) from 2000 to 2012 at a spatial resolution of 30 meters. The tropics were the only climate domain to exhibit a trend, with forest loss increasing by 2101 square kilometers per year. Brazil’s well-documented reduction in deforestation was offset by increasing forest loss in Indonesia, Malaysia, Paraguay, Bolivia, Zambia, Angola, and elsewhere. Intensive forestry practiced within subtropical forests resulted in the highest rates of forest change globally. Boreal forest loss due largely to fire and forestry was second to that in the tropics in absolute and proportional terms. These results depict a globally consistent and locally relevant record of forest change.
From least squares to multilevel modeling: A graphical introduction to Bayesian inference
NASA Astrophysics Data System (ADS)
Loredo, Thomas J.
2016-01-01
This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.
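In the spirit of the tutorial's connection between chi-squared and Bayesian inference, the sketch below evaluates a grid posterior for a straight-line fit with known Gaussian errors and marginalizes over the intercept; the flat priors, grid ranges, and synthetic data are illustrative choices, not material from the talk.

```python
import numpy as np

# Grid-based posterior for a straight-line fit with known Gaussian errors:
# the least-squares chi-squared reappears in the log-likelihood, and
# marginalization is just a sum over one grid axis.
rng = np.random.default_rng(3)
x = np.linspace(0, 1, 20)
y = 1.0 + 2.0 * x + rng.normal(0, 0.2, x.size)
sigma = 0.2

a = np.linspace(0, 2, 201)[:, None, None]      # intercept grid
b = np.linspace(1, 3, 201)[None, :, None]      # slope grid
chi2 = np.sum((y - (a + b * x)) ** 2, axis=-1) / sigma**2
logpost = -0.5 * chi2                          # flat priors assumed
post = np.exp(logpost - logpost.max())
post /= post.sum()
p_slope = post.sum(axis=0)                     # marginalize over the intercept
print("posterior mean slope:", np.sum(b.ravel() * p_slope))
```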
CARS Spectral Fitting with Multiple Resonant Species using Sparse Libraries
NASA Technical Reports Server (NTRS)
Cutler, Andrew D.; Magnotti, Gaetano
2010-01-01
The dual pump CARS technique is often used in the study of turbulent flames. Fast and accurate algorithms are needed for fitting dual-pump CARS spectra for temperature and multiple chemical species. This paper describes the development of such an algorithm. The algorithm employs sparse libraries, whose size grows much more slowly with number of species than a conventional library. The method was demonstrated by fitting synthetic "experimental" spectra containing 4 resonant species (N2, O2, H2 and CO2), both with noise and without it, and by fitting experimental spectra from a H2-air flame produced by a Hencken burner. In both studies, weighted least squares fitting of signal, as opposed to least squares fitting signal or square-root signal, was shown to produce the least random error and minimize bias error in the fitted parameters.
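The weighted least squares fitting referred to here weights each spectral point by its assumed noise variance. A hedged, generic illustration with scipy is sketched below; it uses a synthetic single-line model as a stand-in for an actual CARS spectral library fit, with shot-noise-like weights.

```python
import numpy as np
from scipy.optimize import curve_fit

def line_shape(x, amplitude, center, width):
    """Toy single-line model standing in for a real CARS spectral fit."""
    return amplitude * np.exp(-0.5 * ((x - center) / width) ** 2)

rng = np.random.default_rng(2)
x = np.linspace(-5, 5, 200)
truth = (10.0, 0.3, 1.2)
# Shot-noise-like errors: standard deviation grows with the square root of the signal.
sigma = np.sqrt(line_shape(x, *truth) + 1.0)
y = line_shape(x, *truth) + sigma * rng.normal(size=x.size)

# Weighted fit of the signal (sigma supplied) versus an unweighted fit of the same signal.
p_weighted, _ = curve_fit(line_shape, x, y, p0=(5, 0, 1), sigma=sigma, absolute_sigma=True)
p_unweighted, _ = curve_fit(line_shape, x, y, p0=(5, 0, 1))
print("weighted  :", p_weighted)
print("unweighted:", p_unweighted)
```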
NASA Astrophysics Data System (ADS)
Ying, Yibin; Liu, Yande; Tao, Yang
2005-09-01
This research evaluated the feasibility of using Fourier-transform near-infrared (FT-NIR) spectroscopy to quantify the soluble-solids content (SSC) and the available acidity (VA) in intact apples. Partial least-squares calibration models, obtained from several preprocessing techniques (smoothing, derivative, etc.) in several wave-number ranges, were compared. The best models were obtained with a high coefficient of determination (r²) of 0.940 for the SSC and a moderate r² of 0.801 for the VA, root-mean-square errors of prediction of 0.272% and 0.053%, and root-mean-square errors of calibration of 0.261% and 0.046%, respectively. The results indicate that FT-NIR spectroscopy yields good predictions of the SSC and also show the feasibility of using it to predict the VA of apples.
Weighted Least Squares Fitting Using Ordinary Least Squares Algorithms.
ERIC Educational Resources Information Center
Kiers, Henk A. L.
1997-01-01
A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. The approach consists of iteratively performing steps of existing algorithms for ordinary least squares fitting of the same model and is based on minimizing a function that majorizes the WLS loss function. (Author/SLD)
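A hedged sketch of the iterative-majorization idea follows, assuming a low-rank matrix approximation as the model and a truncated SVD as the ordinary least squares step; the paper treats general models, and the function name and specifics below are illustrative only.

```python
import numpy as np

def wls_lowrank(X, W, rank, n_iter=200):
    """Weighted least squares low-rank fit of X via repeated ordinary (unweighted) steps.

    Each iteration refits the model by plain truncated SVD to the working data
    Z = Wn*X + (1 - Wn)*M, with Wn = W / max(W); this majorization step never
    increases the weighted loss sum(W * (X - M)**2).
    """
    Wn = W / W.max()
    M = np.zeros_like(X)
    for _ in range(n_iter):
        Z = Wn * X + (1.0 - Wn) * M
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        M = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return M

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 8))
W = rng.uniform(0.1, 1.0, size=X.shape)   # element-wise weights
M = wls_lowrank(X, W, rank=2)
print("weighted loss:", np.sum(W * (X - M) ** 2))
```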
Huang, Xinchuan; Schwenke, David W; Lee, Timothy J
2011-01-28
In this work, we build upon our previous work on the theoretical spectroscopy of ammonia, NH(3). Compared to our 2008 study, we include more physics in our rovibrational calculations and more experimental data in the refinement procedure, and these enable us to produce a potential energy surface (PES) of unprecedented accuracy. We call this the HSL-2 PES. The additional physics we include is a second-order correction for the breakdown of the Born-Oppenheimer approximation, and we find it to be critical for improved results. By including experimental data for higher rotational levels in the refinement procedure, we were able to greatly reduce our systematic errors for the rotational dependence of our predictions. These additions together lead to a significantly improved total angular momentum (J) dependence in our computed rovibrational energies. The root-mean-square error between our predictions using the HSL-2 PES and the reliable energy levels from the HITRAN database for J = 0-6 and J = 7∕8 for (14)NH(3) is only 0.015 cm(-1) and 0.020∕0.023 cm(-1), respectively. The root-mean-square errors for the characteristic inversion splittings are approximately 1∕3 smaller than those for energy levels. The root-mean-square error for the 6002 J = 0-8 transition energies is 0.020 cm(-1). Overall, for J = 0-8, the spectroscopic data computed with HSL-2 is roughly an order of magnitude more accurate relative to our previous best ammonia PES (denoted HSL-1). These impressive numbers are eclipsed only by the root-mean-square error between our predictions for purely rotational transition energies of (15)NH(3) and the highly accurate Cologne database (CDMS): 0.00034 cm(-1) (10 MHz), in other words, 2 orders of magnitude smaller. In addition, we identify a deficiency in the (15)NH(3) energy levels determined from a model of the experimental data.
Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W
2013-08-01
Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.
NASA Astrophysics Data System (ADS)
Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad
2018-02-01
The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, which can be applied whether noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence on the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51 and 59%, respectively. In addition, the RMS of the LSC prediction error at the data points and the RMS of the estimated observation noise are decreased by 39 and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is reduced by only about 4%, which is a consequence of the sparse distribution of data points in this case study. The influence of gross errors on the LSC prediction results is also investigated by lower cutoff CVEs. It is shown that after elimination of outliers, the RMS of this type of error is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumably different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for the groups with fewer noisy data points.
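Least squares collocation is formally a linear (kriging/Gaussian-process-type) predictor, for which leave-one-out cross-validation errors can be obtained without refitting. The sketch below uses the standard closed-form leave-one-out identity CVE_i = (K^-1 y)_i / (K^-1)_ii for such predictors as an illustration; it is not necessarily the exact formula derived in the paper, and the covariance model is a made-up example.

```python
import numpy as np

def loo_cv_errors(K, y):
    """Leave-one-out prediction errors of a collocation-type predictor y_hat = C @ inv(K) @ y
    at the data points, via the closed-form identity CVE_i = (K^-1 y)_i / (K^-1)_ii."""
    Kinv = np.linalg.inv(K)
    return (Kinv @ y) / np.diag(Kinv)

# Toy 1-D example: squared-exponential signal covariance plus white observation noise.
rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 10, 30))
C_signal = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2)
K = C_signal + 0.1 * np.eye(t.size)                 # signal + noise covariance
y = rng.multivariate_normal(np.zeros(t.size), K)

cve_fast = loo_cv_errors(K, y)

# Brute-force check: actually delete each point and re-predict it.
cve_slow = np.empty_like(y)
for i in range(t.size):
    keep = np.arange(t.size) != i
    w = np.linalg.solve(K[np.ix_(keep, keep)], y[keep])
    cve_slow[i] = y[i] - C_signal[i, keep] @ w
print("max |fast - brute force|:", np.max(np.abs(cve_fast - cve_slow)))
```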
Basalekou, M.; Pappas, C.; Kotseridis, Y.; Tarantilis, P. A.; Kontaxakis, E.
2017-01-01
Color, phenolic content, and chemical age values of red wines made from Cretan grape varieties (Kotsifali, Mandilari) were evaluated over nine months of maturation in different containers for two vintages. The wines differed greatly in their anthocyanin profiles. Mid-IR spectra were also recorded with the use of a Fourier Transform Infrared Spectrophotometer in ZnSe disk mode. Analysis of Variance was used to explore each parameter's dependency on time. Determination models were developed for the chemical age indexes using Partial Least Squares (PLS) (TQ Analyst software) considering the spectral region 1830–1500 cm−1. The correlation coefficients (r) for chemical age index i were 0.86 for Kotsifali (Root Mean Square Error of Calibration (RMSEC) = 0.067, Root Mean Square Error of Prediction (RMSEP) = 0.115, and Root Mean Square Error of Cross-Validation (RMSECV) = 0.164) and 0.90 for Mandilari (RMSEC = 0.050, RMSEP = 0.040, and RMSECV = 0.089). For chemical age index ii the correlation coefficients (r) were 0.86 and 0.97 for Kotsifali (RMSEC = 0.044, RMSEP = 0.087, and RMSECV = 0.214) and Mandilari (RMSEC = 0.024, RMSEP = 0.033, and RMSECV = 0.078), respectively. The proposed method is simpler, less time consuming, and more economical and does not require chemical reagents. PMID:29225994
Application of near-infrared spectroscopy for the rapid quality assessment of Radix Paeoniae Rubra
NASA Astrophysics Data System (ADS)
Zhan, Hao; Fang, Jing; Tang, Liying; Yang, Hongjun; Li, Hua; Wang, Zhuju; Yang, Bin; Wu, Hongwei; Fu, Meihong
2017-08-01
Near-infrared (NIR) spectroscopy with multivariate analysis was used to quantify gallic acid, catechin, albiflorin, and paeoniflorin in Radix Paeoniae Rubra, and the feasibility of classifying samples originating from different areas was investigated. A new high-performance liquid chromatography method was developed and validated to analyze gallic acid, catechin, albiflorin, and paeoniflorin in Radix Paeoniae Rubra as the reference. Partial least squares (PLS), principal component regression (PCR), and stepwise multivariate linear regression (SMLR) were performed to calibrate the regression model. Different data pretreatments such as derivatives (1st and 2nd), multiplicative scatter correction, standard normal variate, Savitzky-Golay filter, and Norris derivative filter were applied to remove the systematic errors. The performance of the model was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of prediction (RMSEP), root mean square error of cross-validation (RMSECV), and correlation coefficient (r). The results show that compared to PCR and SMLR, PLS had a lower RMSEC, RMSECV, and RMSEP and a higher r for all four analytes. PLS coupled with proper pretreatments showed good performance in both the fitting and predicting results. Furthermore, the original areas of the Radix Paeoniae Rubra samples were partly distinguished by principal component analysis. This study shows that NIR with PLS is a reliable, inexpensive, and rapid tool for the quality assessment of Radix Paeoniae Rubra.
Koláčková, Pavla; Růžičková, Gabriela; Gregor, Tomáš; Šišperová, Eliška
2015-08-30
Calibration models for the Fourier transform-near infrared (FT-NIR) instrument were developed for quick and non-destructive determination of oil and fatty acids in whole achenes of milk thistle. Samples with a range of oil and fatty acid levels were collected and their transmittance spectra were obtained by the FT-NIR instrument. Based on these spectra and data gained by means of the reference methods - Soxhlet extraction and gas chromatography (GC) - calibration models were created by means of partial least squares (PLS) regression analysis. Precision and accuracy of the calibration models were verified via cross-validation of validation samples whose spectra were not part of the calibration model, and also according to the root mean square error of prediction (RMSEP), root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV), and the validation coefficient of determination (R²). R² values for whole seeds were 0.96, 0.96, 0.83, and 0.67, and the RMSEP values were 0.76, 1.68, 1.24, and 0.54 for oil, linoleic (C18:2), oleic (C18:1), and palmitic (C16:0) acids, respectively. The calibration models are appropriate for the non-destructive determination of oil and fatty acid levels in whole seeds of milk thistle. © 2014 Society of Chemical Industry.
da Silva, Fabiana E B; Flores, Érico M M; Parisotto, Graciele; Müller, Edson I; Ferrão, Marco F
2016-03-01
An alternative method for the quantification of sulphamethoxazole (SMZ) and trimethoprim (TMP) using diffuse reflectance infrared Fourier-transform spectroscopy (DRIFTS) and partial least squares regression (PLS) was developed. Interval Partial Least Squares (iPLS) and Synergy Interval Partial Least Squares (siPLS) were applied to select a spectral range that provided the lowest prediction error in comparison to the full-spectrum model. Fifteen commercial tablet formulations and forty-nine synthetic samples were used. The concentration ranges considered were 400 to 900 mg g-1 SMZ and 80 to 240 mg g-1 TMP. Spectral data were recorded between 600 and 4000 cm-1 with a 4 cm-1 resolution by DRIFTS. The proposed procedure was compared to high performance liquid chromatography (HPLC). The results obtained from the root mean square error of prediction (RMSEP) during validation of the models for samples of sulphamethoxazole (SMZ) and trimethoprim (TMP) using siPLS demonstrate that this approach is a valid technique for use in quantitative analysis of pharmaceutical formulations. The selected-interval algorithm allowed building regression models with smaller errors when compared to the full-spectrum PLS model. An RMSEP of 13.03 mg g-1 for SMZ and 4.88 mg g-1 for TMP was obtained after selection of the best spectral regions by siPLS.
NASA Technical Reports Server (NTRS)
Choe, C. Y.; Tapley, B. D.
1975-01-01
A method proposed by Potter of applying the Kalman-Bucy filter to the problem of estimating the state of a dynamic system is described, in which the square root of the state error covariance matrix is used to process the observations. A new technique which propagates the covariance square root matrix in lower triangular form is given for the discrete observation case. The technique is faster than previously proposed algorithms and is well-adapted for use with the Carlson square root measurement algorithm.
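As a hedged illustration of square-root covariance processing, the sketch below implements Potter's scalar measurement update, which works directly on a square-root factor S of the covariance (the Carlson algorithm and the lower-triangular propagation described in the abstract are not reproduced); variable names and the tiny example are illustrative.

```python
import numpy as np

def potter_scalar_update(x, S, z, H, R):
    """Potter square-root measurement update for a scalar observation z = H @ x + v, var(v) = R.

    S is any square root of the covariance, P = S @ S.T; the update returns the posterior
    state and an updated square root, avoiding direct subtraction on P itself.
    """
    F = S.T @ H                                  # n-vector
    alpha = 1.0 / (F @ F + R)
    gamma = alpha / (1.0 + np.sqrt(alpha * R))
    K = alpha * (S @ F)                          # Kalman gain
    x_new = x + K * (z - H @ x)
    S_new = S - gamma * np.outer(S @ F, F)
    return x_new, S_new

# Tiny example: two-state system, one noisy scalar measurement of the first state.
x = np.array([0.0, 1.0])
S = np.linalg.cholesky(np.diag([4.0, 1.0]))
H = np.array([1.0, 0.0])
x_new, S_new = potter_scalar_update(x, S, z=2.5, H=H, R=0.5)
print("posterior state:", x_new)
print("posterior covariance:\n", S_new @ S_new.T)
```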
A Comparison of Normal and Elliptical Estimation Methods in Structural Equation Models.
ERIC Educational Resources Information Center
Schumacker, Randall E.; Cheevatanarak, Suchittra
Monte Carlo simulation compared chi-square statistics, parameter estimates, and root mean square error of approximation values using normal and elliptical estimation methods. Three research conditions were imposed on the simulated data: sample size, population contamination percent, and kurtosis. A Bentler-Weeks structural model established the…
Phase modulation for reduced vibration sensitivity in laser-cooled clocks in space
NASA Technical Reports Server (NTRS)
Klipstein, W.; Dick, G.; Jefferts, S.; Walls, F.
2001-01-01
The standard interrogation technique in atomic beam clocks is square-wave frequency modulation (SWFM), which suffers a first-order sensitivity to vibrations, as changes in the transit time of the atoms translate into perceived frequency errors. Square-wave phase modulation (SWPM) interrogation eliminates sensitivity to this noise.
An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models
ERIC Educational Resources Information Center
Prindle, John J.; McArdle, John J.
2012-01-01
This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…
NASA Astrophysics Data System (ADS)
See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.
2018-04-01
This research aims to estimate the parameters of the Monod model for the growth of the microalga Botryococcus braunii sp. by the least-squares method. The Monod equation is non-linear but can be transformed into a linear form and solved by least-squares linear regression. Alternatively, the Gauss-Newton method solves the non-linear least-squares problem directly, obtaining the Monod model parameters by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for Botryococcus braunii sp. can be estimated by the least-squares method; however, the parameter values obtained by the non-linear least-squares method are more accurate than those from the linear least-squares method, since the SSE of the non-linear method is smaller than that of the linear method.
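A hedged sketch of the two routes described above, using made-up substrate/growth-rate data rather than the study's measurements: a linearized (Lineweaver-Burk-style) least-squares fit versus a direct non-linear least-squares fit of the Monod equation mu = mu_max * S / (Ks + S).

```python
import numpy as np
from scipy.optimize import curve_fit

def monod(S, mu_max, Ks):
    return mu_max * S / (Ks + S)

# Made-up data (not the Botryococcus braunii measurements).
rng = np.random.default_rng(5)
S = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
mu = monod(S, mu_max=1.2, Ks=1.5) + rng.normal(0, 0.02, S.size)

# Route 1: linearize as 1/mu = 1/mu_max + (Ks/mu_max)*(1/S) and fit a straight line.
slope, intercept = np.polyfit(1.0 / S, 1.0 / mu, 1)
mu_max_lin, Ks_lin = 1.0 / intercept, slope / intercept

# Route 2: direct non-linear least squares (Levenberg-Marquardt inside curve_fit).
(mu_max_nl, Ks_nl), _ = curve_fit(monod, S, mu, p0=(1.0, 1.0))

for name, mm, ks in [("linearized", mu_max_lin, Ks_lin), ("non-linear", mu_max_nl, Ks_nl)]:
    sse = np.sum((mu - monod(S, mm, ks)) ** 2)
    print(f"{name:>10}: mu_max={mm:.3f}, Ks={ks:.3f}, SSE={sse:.5f}")
```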
2009-07-16
Statistical analysis: coefficient of correlation. SSTO = SSR + SSE and R² = SSR/SSTO = 1 − SSE/SSTO, where SSR = Σ(Ŷi − Ȳ)² is the regression sum of squares, SSE = Σ(Yi − Ŷi)² is the error sum of squares, SSTO is the total sum of squares, Ȳ is the mean value, and Ŷi is the value from the fitted line.
Analysis of counting errors in the phase/Doppler particle analyzer
NASA Technical Reports Server (NTRS)
Oldenburg, John R.
1987-01-01
NASA is investigating the application of the Phase Doppler measurement technique to provide improved drop sizing and liquid water content measurements in icing research. The magnitude of counting errors was analyzed because these errors contribute to inaccurate liquid water content measurements. The Phase Doppler Particle Analyzer counting errors due to data transfer losses and coincidence losses were analyzed for data input rates from 10 samples/sec to 70,000 samples/sec. Coincidence losses were calculated by determining the Poisson probability of having more than one event occur during the droplet signal time. The magnitude of the coincidence loss can be determined, and for less than a 15 percent loss, corrections can be made. The data transfer losses were estimated for representative data transfer rates. With direct memory access enabled, data transfer losses are less than 5 percent for input rates below 2000 samples/sec. With direct memory access disabled, losses exceeded 20 percent at a rate of 50 samples/sec, preventing accurate number density or mass flux measurements. The data transfer losses of a new signal processor were analyzed and found to be less than 1 percent for rates under 65,000 samples/sec.
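A hedged sketch of the coincidence-loss calculation described above: with droplet arrivals modeled as a Poisson process of a given rate, the probability that more than one event falls within one signal (gate) time follows directly from the Poisson distribution. The gate time below is an illustrative assumption, not the instrument's value.

```python
import numpy as np

def coincidence_loss(rate_hz, gate_time_s):
    """Probability of more than one Poisson arrival within one signal gate time.

    P(N > 1) = 1 - exp(-m) - m*exp(-m), with mean count m = rate * gate_time.
    """
    m = rate_hz * gate_time_s
    return 1.0 - np.exp(-m) * (1.0 + m)

gate_time = 20e-6                                  # illustrative 20-microsecond signal time
for rate in (10, 1_000, 10_000, 70_000):           # samples per second
    print(f"{rate:>6} samples/s -> coincidence loss {100 * coincidence_loss(rate, gate_time):.3f}%")
```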
Synthesis of hover autopilots for rotary-wing VTOL aircraft
NASA Technical Reports Server (NTRS)
Hall, W. E.; Bryson, A. E., Jr.
1972-01-01
The practical situation is considered where imperfect information on only a few rotor and fuselage state variables is available. Filters are designed to estimate all the state variables from noisy measurements of fuselage pitch/roll angles and from noisy measurements of both fuselage and rotor pitch/roll angles. The mean square response of the vehicle to a very gusty, random wind is computed using various filter/controllers and is found to be quite satisfactory although, of course, not so good as when one has perfect information (idealized case). The second part of the report considers precision hover over a point on the ground. A vehicle model without rotor dynamics is used and feedback signals in position and integral of position error are added. The mean square response of the vehicle to a very gusty, random wind is computed, assuming perfect information feedback, and is found to be excellent. The integral error feedback gives zero position error for a steady wind, and smaller position error for a random wind.
NASA Astrophysics Data System (ADS)
Reis, D. S.; Stedinger, J. R.; Martins, E. S.
2005-10-01
This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
Robustness study of the pseudo open-loop controller for multiconjugate adaptive optics.
Piatrou, Piotr; Gilles, Luc
2005-02-20
Robustness of the recently proposed "pseudo open-loop control" algorithm against various system errors has been investigated for the representative example of the Gemini-South 8-m telescope multiconjugate adaptive-optics system. The existing model to represent the adaptive-optics system with pseudo open-loop control has been modified to account for misalignments, noise and calibration errors in deformable mirrors, and wave-front sensors. Comparison with the conventional least-squares control model has been done. We show with the aid of both transfer-function pole-placement analysis and Monte Carlo simulations that POLC remains remarkably stable and robust against very large levels of system errors and outperforms in this respect least-squares control. Approximate stability margins as well as performance metrics such as Strehl ratios and rms wave-front residuals averaged over a 1-arc min field of view have been computed for different types and levels of system errors to quantify the expected performance degradation.
A Comprehensive Study of Gridding Methods for GPS Horizontal Velocity Fields
NASA Astrophysics Data System (ADS)
Wu, Yanqiang; Jiang, Zaisen; Liu, Xiaoxia; Wei, Wenxin; Zhu, Shuang; Zhang, Long; Zou, Zhenyu; Xiong, Xiaohui; Wang, Qixin; Du, Jiliang
2017-03-01
Four gridding methods for GPS velocities are compared in terms of their precision, applicability and robustness by analyzing simulated data with uncertainties from 0.0 to ±3.0 mm/a. When the input data are 1° × 1° grid sampled and the uncertainty of the additional error is greater than ±1.0 mm/a, the gridding results show that the least-squares collocation method is highly robust while the robustness of the Kriging method is low. In contrast, the spherical harmonics and the multi-surface function are moderately robust, and the regional singular values for the multi-surface function method and the edge effects for the spherical harmonics method become more significant with increasing uncertainty of the input data. When the input data (with additional errors of ±2.0 mm/a) are decimated by 50% from the 1° × 1° grid data and then erased in three 6° × 12° regions, the gridding results in these three regions indicate that the least-squares collocation and the spherical harmonics methods have good performances, while the multi-surface function and the Kriging methods may lead to singular values. The gridding techniques are also applied to GPS horizontal velocities with an average error of ±0.8 mm/a over the Chinese mainland and the surrounding areas, and the results show that the least-squares collocation method has the best performance, followed by the Kriging and multi-surface function methods. Furthermore, the edge effects of the spherical harmonics method are significantly affected by the sparseness and geometric distribution of the input data. In general, the least-squares collocation method is superior in terms of its robustness, edge effect, error distribution and stability, while the other methods have several positive features.
Analysis of S-box in Image Encryption Using Root Mean Square Error Method
NASA Astrophysics Data System (ADS)
Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan
2012-07-01
The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving and many variants appear in the literature, including the advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence for distinguishing an S-box for application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to the existing algebraic and statistical analyses already used for image encryption applications, we propose an application of the root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of root mean square error analysis in statistics has proven effective in determining the difference between original and processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of S-boxes.
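A hedged illustration of the root mean square error measure applied to image encryption: the RMSE between the original image and its encrypted version indicates how strongly the cipher has disturbed the pixel values. The random byte-substitution table below is a placeholder standing in for a real S-box, used purely to exercise the metric.

```python
import numpy as np

def rmse(original, processed):
    """Root mean square error between two equally sized 8-bit images."""
    diff = original.astype(float) - processed.astype(float)
    return np.sqrt(np.mean(diff ** 2))

rng = np.random.default_rng(6)
image = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)   # placeholder plaintext image

# Placeholder "encryption": a random byte-substitution table standing in for an S-box.
sbox = rng.permutation(256).astype(np.uint8)
encrypted = sbox[image]

print("RMSE(original, encrypted):", rmse(image, encrypted))
print("RMSE(original, original): ", rmse(image, image))
```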
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yunlong; Wang, Aiping; Guo, Lei
This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
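To make the Parzen-window step concrete, the sketch below estimates the Renyi quadratic entropy of a set of tracking errors with a Gaussian kernel; the kernel width and error samples are illustrative assumptions, and the controller design itself is not reproduced.

```python
import numpy as np

def renyi_quadratic_entropy(errors, sigma=0.25):
    """Renyi quadratic entropy of error samples via a Parzen window with Gaussian kernel.

    H2 = -log V, where the information potential V is the average of Gaussian kernels
    of width sigma*sqrt(2) evaluated at all pairwise error differences.
    """
    e = np.asarray(errors, dtype=float)
    diffs = e[:, None] - e[None, :]
    s2 = 2.0 * sigma ** 2
    kernels = np.exp(-diffs ** 2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    return -np.log(kernels.mean())

rng = np.random.default_rng(7)
narrow_errors = rng.normal(0.0, 0.1, 500)    # tightly concentrated tracking errors
wide_errors = rng.normal(0.0, 1.0, 500)      # widely spread tracking errors
print("entropy (narrow):", renyi_quadratic_entropy(narrow_errors))
print("entropy (wide):  ", renyi_quadratic_entropy(wide_errors))
```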
Error-Based Design Space Windowing
NASA Technical Reports Server (NTRS)
Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman
2002-01-01
Windowing of the design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) uses a region of interest defined by setting a requirement on the response level and checks it with global RS predictions over the design space. This approach, however, is vulnerable, since RS modeling errors may lead to zooming in on the wrong region. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ping; Wang, Chenyu; Li, Mingjie
In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by gradient descent. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability compared with conventional WNN modeling based on MSE criteria. Furthermore, the proposed method gives a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution whose shape is high and narrow.
A Bayesian Approach to Systematic Error Correction in Kepler Photometric Time Series
NASA Astrophysics Data System (ADS)
Jenkins, Jon Michael; VanCleve, J.; Twicken, J. D.; Smith, J. C.; Kepler Science Team
2011-01-01
In order for the Kepler mission to achieve its required 20 ppm photometric precision for 6.5 hr observations of 12th magnitude stars, the Presearch Data Conditioning (PDC) software component of the Kepler Science Processing Pipeline must reduce systematic errors in flux time series to the limit of stochastic noise for errors with time-scales less than three days, without smoothing or over-fitting away the transits that Kepler seeks. The current version of PDC co-trends against ancillary engineering data and Pipeline-generated data using essentially a least squares (LS) approach. This approach is successful for quiet stars when all sources of systematic error have been identified. If the stars are intrinsically variable or some sources of systematic error are unknown, LS will nonetheless attempt to explain all of a given time series, not just the part the model can explain well. Negative consequences can include loss of astrophysically interesting signal and injection of high-frequency noise into the result. As a remedy, we present a Bayesian Maximum A Posteriori (MAP) approach, in which a subset of intrinsically quiet and highly correlated stars is used to establish the probability density function (PDF) of robust fit parameters in a diagonalized basis. The PDFs then determine a "reasonable" range for the fit parameters for all stars, and brake the runaway fitting that can distort signals and inject noise. We present a closed-form solution for Gaussian PDFs, and show examples using publicly available Quarter 1 Kepler data. A companion poster (Van Cleve et al.) shows applications and discusses current work in more detail. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
Error in telemetry studies: Effects of animal movement on triangulation
Schmutz, Joel A.; White, Gary C.
1990-01-01
We used Monte Carlo simulations to investigate the effects of animal movement on error of estimated animal locations derived from radio-telemetry triangulation of sequentially obtained bearings. Simulated movements of 0-534 m resulted in up to 10-fold increases in average location error but <10% decreases in location precision when observer-to-animal distances were <1,000 m. Location error and precision were minimally affected by censorship of poor locations with Chi-square goodness-of-fit tests. Location error caused by animal movement can only be eliminated by taking simultaneous bearings.
Fuzzy-Estimation Control for Improvement Microwave Connection for Iraq Electrical Grid
NASA Astrophysics Data System (ADS)
Hoomod, Haider K.; Radi, Mohammed
2018-05-01
The demand for broadband wireless services (internet, radio broadcast, TV, etc.) is increasing day by day, and the available bandwidth should be exploited as fully as possible despite problems in the communication channels. In this paper, we propose an estimation technique that estimates current and near-future channel availability, i.e., the error in the channel, in order to control how data are transferred through it. The proposed estimation is based on a combination of the least mean squares (LMS) algorithm, the standard Kalman filter, and a modified Kalman filter. The estimated channel error is used as a control parameter in fuzzy rules that adjust the rate and size of data sent through the network channel and rearrange the priorities of the buffered data (workstation control parameters, texts, phone calls, images, and camera video) for the worst cases of channel error. The proposed system is designed to manage data communications through the channels connecting the Iraqi electrical grid stations. The results show that the modified Kalman filter gives the best performance in time and noise estimation (from 0.1109 at 5% noise to 0.3211 at 90% noise) and that the packet loss rate is reduced by a ratio of 35% to 385%.
The theory precision analyse of RFM localization of satellite remote sensing imagery
NASA Astrophysics Data System (ADS)
Zhang, Jianqing; Xv, Biao
2009-11-01
The traditional method of assessing the precision of the Rational Function Model (RFM) uses a large number of check points and computes the mean square error by comparing calculated coordinates with known coordinates. This method comes from probability theory: the mean square error is estimated statistically from a large number of samples, and the estimate can be considered to approach its true value when enough samples are available. This paper instead approaches the problem from the perspective of survey adjustment, taking the law of propagation of error as the theoretical basis, and calculates the theoretical precision of RFM localization. SPOT5 three-line-array imagery is then used as experimental data, and the results of the traditional method and the method described in this paper are compared; this confirms that the traditional method is feasible and answers its theoretical precision question from the survey-adjustment point of view.
NASA Technical Reports Server (NTRS)
Whitlock, C. H., III
1977-01-01
Constituents with linear radiance gradients with concentration may be quantified from signals which contain nonlinear atmospheric and surface reflection effects, for both homogeneous and non-homogeneous water bodies, provided accurate data can be obtained and the nonlinearities are constant with wavelength. Statistical parameters must be used which give an indication of bias as well as total squared error to ensure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least squares fitting process and to increase the number of points required to obtain a satisfactory fit. The problem of obtaining a multiple regression equation that is extremely sensitive to error is discussed.
Loss Tolerance in One-Way Quantum Computation via Counterfactual Error Correction
NASA Astrophysics Data System (ADS)
Varnava, Michael; Browne, Daniel E.; Rudolph, Terry
2006-09-01
We introduce a scheme for fault tolerantly dealing with losses (or other “leakage” errors) in cluster state computation that can tolerate up to 50% qubit loss. This is achieved passively using an adaptive strategy of measurement—no coherent measurements or coherent correction is required. Since the scheme relies on inferring information about what would have been the outcome of a measurement had one been able to carry it out, we call this counterfactual error correction.
NASA Astrophysics Data System (ADS)
Yan, Hong; Song, Xiangzhong; Tian, Kuangda; Chen, Yilin; Xiong, Yanmei; Min, Shungeng
2018-02-01
A novel method, mid-infrared (MIR) spectroscopy, which enables the determination of Chlorantraniliprole in Abamectin within minutes, is proposed. We further evaluate the prediction ability of four wavelength selection methods: the bootstrapping soft shrinkage approach (BOSS), Monte Carlo uninformative variable elimination (MCUVE), genetic algorithm partial least squares (GA-PLS), and competitive adaptive reweighted sampling (CARS). The results showed that the BOSS method obtained the lowest root mean squared error of cross-validation (RMSECV) (0.0245) and root mean squared error of prediction (RMSEP) (0.0271), as well as the highest coefficient of determination of cross-validation (Q²cv) (0.9998) and coefficient of determination of the test set (Q²test) (0.9989), demonstrating that mid-infrared spectroscopy can be used to detect Chlorantraniliprole in Abamectin conveniently. Meanwhile, a suitable wavelength selection method (BOSS) is essential for conducting the component spectral analysis.
RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization
NASA Astrophysics Data System (ADS)
Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond that possible with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) based on the MMSE criterion. Using MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using a one-tap recursive least squares (RLS) algorithm, where the forgetting factor is adapted to the changing channel condition by the least mean square (LMS) algorithm, for DS-CDMA with FDE. We evaluate the BER performance using RLS-CE with an adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.
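A hedged sketch of one-tap RLS channel estimation with a time-varying forgetting factor follows. The forgetting-factor adaptation shown is a crude heuristic stand-in (it nudges the factor down when the squared prediction error grows and up when it shrinks), not the LMS gradient rule of the paper; the pilot sequence, channel model, and all parameter values are illustrative assumptions.

```python
import numpy as np

def one_tap_rls(pilots, received, lam=0.95, mu=1e-3, lam_bounds=(0.90, 0.9999)):
    """One-tap RLS channel estimation with an (optionally) time-varying forgetting factor."""
    h_hat = 0.0 + 0.0j           # channel estimate
    P = 1.0                      # scalar inverse weighted correlation (one tap)
    err_avg = None
    estimates = []
    for p, r in zip(pilots, received):
        e = r - h_hat * p                                 # a priori prediction error
        g = P * np.conj(p) / (lam + abs(p) ** 2 * P)      # gain
        h_hat = h_hat + g * e
        P = (P - g * p * P) / lam
        # Heuristic adaptation: move lam opposite to the trend of the squared error.
        err_avg = abs(e) ** 2 if err_avg is None else 0.9 * err_avg + 0.1 * abs(e) ** 2
        lam = float(np.clip(lam + mu * (err_avg - abs(e) ** 2), *lam_bounds))
        estimates.append(h_hat)
    return np.array(estimates)

# Illustrative fading channel: a slowly rotating complex gain observed through QPSK pilots.
rng = np.random.default_rng(8)
n = 400
pilots = np.exp(1j * np.pi / 2 * rng.integers(0, 4, n))
h_true = np.exp(1j * 2 * np.pi * 0.002 * np.arange(n))
received = h_true * pilots + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
h_est = one_tap_rls(pilots, received)
print("final estimation error:", abs(h_est[-1] - h_true[-1]))
```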
Accuracy of least-squares methods for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Bochev, Pavel B.; Gunzburger, Max D.
1993-01-01
Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is the fact that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well-documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. It is found, despite the failure of standard methods for deriving error estimates, that computational evidence suggests that these methods are, at the least, nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.
Aerodynamic coefficient identification package dynamic data accuracy determinations: Lessons learned
NASA Technical Reports Server (NTRS)
Heck, M. L.; Findlay, J. T.; Compton, H. R.
1983-01-01
The errors in the dynamic data output from the Aerodynamic Coefficient Identification Packages (ACIP) flown on Shuttle flights 1, 3, 4, and 5 were determined using the output from the Inertial Measurement Units (IMU). A weighted least-squares batch algorithm was employed. Using an averaging technique, signal detection was enhanced; this allowed improved calibration solutions. Global errors as large as 0.04 deg/sec for the ACIP gyros, 30 mg for the linear accelerometers, and 0.5 deg/sec squared in the angular accelerometer channels were detected and removed with a combination of bias, scale factor, misalignment, and g-sensitive calibration constants. No attempt was made to minimize local ACIP dynamic data deviations representing sensed high-frequency vibration or instrument noise. Resulting 1-sigma calibrated ACIP global accuracies were within 0.003 deg/sec, 1.0 mg, and 0.05 deg/sec squared for the gyros, linear accelerometers, and angular accelerometers, respectively.
NASA Technical Reports Server (NTRS)
Lin, Qian; Allebach, Jan P.
1990-01-01
An adaptive vector linear minimum mean-squared error (LMMSE) filter for multichannel images with multiplicative noise is presented. It is shown theoretically that the mean-squared error in the filter output is reduced by making use of the correlation between image bands. The vector and conventional scalar LMMSE filters are applied to a three-band SIR-B SAR image, and their performance is compared. Based on a multiplicative noise model, the per-pel maximum likelihood classifier was derived. The authors extend this to the design of sequential and robust classifiers. These classifiers are also applied to the three-band SIR-B SAR image.
An empirical Bayes approach for the Poisson life distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1973-01-01
A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
Reliable and accurate extraction of Hamaker constants from surface force measurements.
Miklavcic, S J
2018-08-15
A simple and accurate closed-form expression for the Hamaker constant that best represents experimental surface force data is presented. Numerical comparisons are made with the current standard least squares approach, which falsely assumes error-free separation measurements, and with a nonlinear version assuming independent measurements of force and separation are subject to error. The comparisons demonstrate that not only is the proposed formula easily implemented, it is also considerably more accurate. This option is appropriate for any value of the Hamaker constant, high or low, and certainly for any interacting system exhibiting an inverse-square distance-dependent van der Waals force. Copyright © 2018 Elsevier Inc. All rights reserved.
Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.
Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov-Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted ℓ2-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted ℓ2-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
[Gaussian process regression and its application in near-infrared spectroscopy analysis].
Feng, Ai-Ming; Fang, Li-Min; Lin, Min
2011-06-01
Gaussian process (GP) regression is applied in the present paper as a chemometric method to explore the complicated relationship between near infrared (NIR) spectra and ingredients. After the outliers were detected by the Monte Carlo cross-validation (MCCV) method and removed from the dataset, different preprocessing methods, such as multiplicative scatter correction (MSC), smoothing, and derivatives, were tried for the best performance of the models. Furthermore, uninformative variable elimination (UVE) was introduced as a variable selection technique and the characteristic wavelengths obtained were further employed as input for modeling. A public dataset with 80 NIR spectra of corn was introduced as an example for evaluating the new algorithm. The optimal models for oil, starch, and protein were obtained by the GP regression method. The performance of the final models was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV), root mean square error of prediction (RMSEP), and correlation coefficient (r). The models give good calibration ability, with r values above 0.99, and the prediction ability is also satisfactory, with r values higher than 0.96. The overall results demonstrate that the GP algorithm is an effective chemometric method and is promising for NIR analysis.
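A hedged sketch of GP regression as a calibration model, using scikit-learn on simulated spectra; the corn dataset, MCCV outlier removal, and UVE variable selection steps of the study are not reproduced, and the kernel and data below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_squared_error

# Simulated stand-in for NIR spectra (80 samples x 150 variables) and one ingredient value.
rng = np.random.default_rng(9)
X = rng.normal(size=(80, 150))
true_coef = np.zeros(150)
true_coef[:10] = rng.normal(size=10)          # only a few informative wavelengths
y = X @ true_coef + 0.1 * rng.normal(size=80)

X_cal, X_test, y_cal, y_test = X[:60], X[60:], y[:60], y[60:]
gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(1e-2),
                              normalize_y=True).fit(X_cal, y_cal)
y_pred = gp.predict(X_test)
rmsep = np.sqrt(mean_squared_error(y_test, y_pred))
r, _ = pearsonr(y_test, y_pred)
print(f"RMSEP = {rmsep:.3f}, r = {r:.3f}")
```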
Optimal least-squares finite element method for elliptic problems
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Povinelli, Louis A.
1991-01-01
An optimal least squares finite element method is proposed for two dimensional and three dimensional elliptic problems and its advantages are discussed over the mixed Galerkin method and the usual least squares finite element method. In the usual least squares finite element method, the second order equation (−∇·(∇u) + u = f) is recast as a first order system (−∇·p + u = f, ∇u − p = 0). The error analysis and numerical experiment show that, in this usual least squares finite element method, the rate of convergence for the flux p is one order lower than optimal. In order to get an optimal least squares method, the irrotationality condition ∇ × p = 0 should be included in the first order system.
Performance of the S-χ² Statistic for Full-Information Bifactor Models
ERIC Educational Resources Information Center
Li, Ying; Rupp, Andre A.
2011-01-01
This study investigated the Type I error rate and power of the multivariate extension of the S-χ² statistic using unidimensional and multidimensional item response theory (UIRT and MIRT, respectively) models as well as full-information bifactor (FI-bifactor) models through simulation. Manipulated factors included test length, sample…
F-Test Alternatives to Fisher's Exact Test and to the Chi-Square Test of Homogeneity in 2x2 Tables.
ERIC Educational Resources Information Center
Overall, John E.; Starbuck, Robert R.
1983-01-01
An alternative to Fisher's exact test and the chi-square test for homogeneity in two-by-two tables is developed. The method provides for Type I error rates which are closer to the stated alpha level than either of the alternatives. (JKS)
Multilevel Modeling and Ordinary Least Squares Regression: How Comparable Are They?
ERIC Educational Resources Information Center
Huang, Francis L.
2018-01-01
Studies analyzing clustered data sets using both multilevel models (MLMs) and ordinary least squares (OLS) regression have generally concluded that resulting point estimates, but not the standard errors, are comparable with each other. However, the accuracy of the estimates of OLS models is important to consider, as several alternative techniques…
Discrete Tchebycheff orthonormal polynomials and applications
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
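A hedged sketch of the idea: build polynomials that are orthonormal over uniformly spaced sample points (discrete Tchebycheff/Gram polynomials, obtained here numerically by Gram-Schmidt via QR rather than by the classical recurrence) and use them for a least squares polynomial fit of noisy data. Orthonormality makes the normal equations trivial, which is what keeps the fit fast and numerically well behaved.

```python
import numpy as np

def discrete_orthonormal_polys(n_points, degree):
    """Columns are polynomials of degree 0..degree, orthonormal over n uniformly spaced
    sample points (a numerical stand-in for discrete Tchebycheff polynomials)."""
    x = np.linspace(-1, 1, n_points)
    V = np.vander(x, degree + 1, increasing=True)   # 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)                          # Gram-Schmidt via QR
    return x, Q

# Least squares fit of noisy data by projection onto the orthonormal basis.
rng = np.random.default_rng(10)
x, Q = discrete_orthonormal_polys(n_points=101, degree=5)
y = np.sin(2.5 * x) + 0.05 * rng.normal(size=x.size)
coeffs = Q.T @ y                 # orthonormal basis: projection gives the LS coefficients
fit = Q @ coeffs
print("RMS residual of degree-5 fit:", np.sqrt(np.mean((y - fit) ** 2)))
```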
Nondestructive evaluation of soluble solid content in strawberry by near infrared spectroscopy
NASA Astrophysics Data System (ADS)
Guo, Zhiming; Huang, Wenqian; Chen, Liping; Wang, Xiu; Peng, Yankun
This paper demonstrates the feasibility of using near infrared (NIR) spectroscopy combined with synergy interval partial least squares (siPLS) algorithms as a rapid nondestructive method to estimate the soluble solid content (SSC) in strawberry. Spectral preprocessing methods were optimally selected by cross-validation in the model calibration. The partial least squares (PLS) algorithm was used for calibration of the regression model. The performance of the final model was evaluated according to the root mean square error of calibration (RMSEC) and correlation coefficient (R²c) in the calibration set, and tested by the root mean square error of prediction (RMSEP) and correlation coefficient (R²p) in the prediction set. The optimal siPLS model was obtained after first-derivative spectral preprocessing. The measurement results of the best model were as follows: RMSEC = 0.2259, R²c = 0.9590 in the calibration set; and RMSEP = 0.2892, R²p = 0.9390 in the prediction set. This work demonstrated that NIR spectroscopy and siPLS with efficient spectral preprocessing is a useful tool for nondestructive evaluation of SSC in strawberry.
NASA Astrophysics Data System (ADS)
Deng, Xiao; Ma, Tianyu; Lecomte, Roger; Yao, Rutao
2011-10-01
To expand the availability of SPECT for biomedical research, we developed a SPECT imaging system on an existing animal PET detector by adding a slit-slat collimator. As the detector crystals are pixelated, the relative slat-to-crystal position (SCP) in the axial direction affects the photon flux distribution onto the crystals. Accurate knowledge of the SCP is important to the axial resolution and sensitivity of the system. This work presents a method for optimizing the SCP in system design and for determining the SCP in system geometrical calibration. The optimization was achieved by finding the SCP that provides higher spatial resolution in terms of the average root-mean-square (RMS) width of the axial point spread function (PSF) without loss of sensitivity. The calibration was based on the least-square-error method, which minimizes the difference between the measured and modeled axial point spread projections. The uniqueness and accuracy of the calibration results were validated through a singular value decomposition (SVD) based approach. Both the optimization and calibration techniques were evaluated with Monte Carlo (MC) simulated data. We showed that the RMS width was improved by about 15% with the optimal SCP as compared to the least-optimal SCP, and that the system sensitivity was not affected by the SCP. The SCP error achieved by the proposed calibration method was less than 0.04 mm. The calibrated SCP value was used in MC simulation to generate the system matrix, which was used for image reconstruction. The images of simulated phantoms showed the expected resolution performance and were artifact free. We conclude that the proposed optimization and calibration method is effective for slit-slat collimator based SPECT systems.
Optical PAyload for Lasercomm Science (OPALS) link validation
NASA Technical Reports Server (NTRS)
Biswas, Abhijit; Oaida, Bogdan V.; Andrews, Kenneth S.; Kovalik, Joseph M.; Abrahamson, Matthew J.; Wright, Malcolm W.
2015-01-01
Recently several day and nighttime links under diverse atmospheric conditions were completed using the Optical Payload for Lasercomm Science (OPALS) flight system on board the International Space Station (ISS). In this paper we compare the measured optical power and its variance at either end of the link with predictions that include atmospheric propagation models. For the 976 nm laser beacon power transmitted from the ground to the ISS, the mean irradiance of tens of microwatts per square meter close to zenith, and its decrease with range and increased air mass, showed good agreement with predictions. The irradiance fluctuations sampled at 100 Hz also followed the expected increase in scintillation with air mass, representative of atmospheric coherence lengths at zenith of 3-8 cm at 500 nm. The predicted downlink power of hundreds of nanowatts was also reconciled within the uncertainty of the atmospheric losses. The expected link performance, with uncoded bit-error rates less than 1E-4 required for the Reed-Solomon code to correct errors for video, text, and file transmission, was verified. The results of predicted and measured powers and fluctuations suggest the need for further study and refinement.
Hypoglycemia early alarm systems based on recursive autoregressive partial least squares models.
Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick
2013-01-01
Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems for potential hypoglycemia. A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Glucose prediction models based on real-time filtered data have a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm system based on RARPLS shows good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positives/day are obtained for the early alarm system based on six-step-ahead predicted glucose values, with an average early detection time of 25.25 min. The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia that will occur unless preventive action is taken far in advance. © 2012 Diabetes Technology Society.
Hypoglycemia Early Alarm Systems Based on Recursive Autoregressive Partial Least Squares Models
Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick
2013-01-01
Background Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems for potential hypoglycemia. Methods A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Results Glucose prediction models based on real-time filtered data have a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm system based on RARPLS shows good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positives/day are obtained for the early alarm system based on six-step-ahead predicted glucose values, with an average early detection time of 25.25 min. Conclusions The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia that will occur unless preventive action is taken far in advance. PMID:23439179
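A rough sketch of the prediction-plus-alarm idea follows. It is not the authors' RARPLS implementation: the PLS model here is simply refit on a moving window of lagged glucose values at each step rather than updated recursively, and the alarm threshold (70 mg/dl), window length, lag order, and number of latent variables are all assumptions made for illustration.

```python
# Illustrative moving-window autoregressive PLS predictor with a simple
# hypoglycemia alarm.  Not the RARPLS algorithm itself: the model is refit
# on each window instead of being updated recursively, and the threshold,
# window size and lag/horizon settings are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def lagged_matrix(cgm, n_lags):
    """Row t: [g_t, g_{t-1}, ..., g_{t-n_lags+1}] built from past CGM samples."""
    cols = [cgm[i: len(cgm) - n_lags + 1 + i] for i in range(n_lags)]
    return np.column_stack(cols[::-1])

def rolling_predict(cgm, n_lags=6, horizon=6, window=60, threshold=70.0):
    """cgm: 1-D array of glucose values (mg/dl assumed), one per 5-min sample."""
    preds, alarms = [], []
    for t in range(window + n_lags + horizon, len(cgm)):
        hist = cgm[:t]
        X = lagged_matrix(hist, n_lags)
        y = hist[n_lags - 1 + horizon:]          # horizon-step-ahead targets
        X = X[:len(y)]                           # keep only rows with a target
        Xw, yw = X[-window:], y[-window:]        # moving training window
        pls = PLSRegression(n_components=3).fit(Xw, yw)
        x_now = hist[-n_lags:][::-1].reshape(1, -1)
        y_hat = float(pls.predict(x_now).ravel()[0])
        preds.append(y_hat)
        alarms.append(y_hat < threshold)         # raise an early alarm
    return np.array(preds), np.array(alarms)
```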
NASA Astrophysics Data System (ADS)
Hernández, Mario R.; Francés, Félix
2015-04-01
One phase of the hydrological model implementation process that contributes significantly to the uncertainty of hydrological predictions is the calibration phase, in which values of the unknown model parameters are tuned by optimizing an objective function. An unsuitable error model (e.g. Standard Least Squares, SLS) introduces noise into the estimation of the parameters. The main sources of this noise are input errors and structural deficiencies of the hydrological model. The biased calibrated parameters thus cause the model divergence phenomenon, in which the error variance of the (spatially and temporally) forecasted flows far exceeds the error variance in the fitting period, and they provoke the loss of part or all of the physical meaning of the modeled processes. In other words, the result is a calibrated hydrological model that works well, but not for the right reasons. Besides, an unsuitable error model yields an unreliable predictive uncertainty assessment. Hence, with the aim of preventing all these undesirable effects, this research focuses on the Bayesian joint inference (BJI) of both the hydrological and error model parameters, considering a general additive (GA) error model that allows for correlation, non-stationarity (in variance and bias) and non-normality of model residuals. The hydrological model used is TETIS, a conceptual distributed model with a particular split structure of the effective model parameters. Bayesian inference has been performed with the aid of a Markov Chain Monte Carlo (MCMC) algorithm called DREAM-ZS. The MCMC algorithm quantifies the uncertainty of the hydrological and error model parameters by obtaining their joint posterior probability distribution, conditioned on the observed flows. The BJI methodology is a very powerful and reliable tool, but it must be used correctly; that is, if non-stationarity in error variance and bias is modeled, the Total Laws must be taken into account. The results of this research show that the application of BJI with a GA error model improves the robustness of the hydrological parameters (diminishing the model divergence phenomenon) and improves the reliability of the streamflow predictive distribution, with respect to the results of an unsuitable error model such as SLS. Finally, the most likely prediction in a validation period shows a similar performance for both the BJI+GA and SLS error models.
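To make concrete why the choice of error model matters for joint inference, the following is a minimal sketch contrasting an SLS (homoscedastic Gaussian) log-likelihood with a simple heteroscedastic Gaussian error model in which the residual standard deviation grows with simulated flow. The linear sigma form and its parameters are assumptions; the full GA error model (bias and correlation terms) and the DREAM-ZS sampler are not reproduced.

```python
# Contrast between an SLS log-likelihood and a simple non-stationary
# (heteroscedastic) Gaussian error model.  Only an illustration of why the
# error model matters; the GA error model and DREAM-ZS are not reproduced.
import numpy as np

def loglik_sls(q_obs, q_sim, sigma):
    """Standard least squares: constant residual standard deviation."""
    r = np.asarray(q_obs) - np.asarray(q_sim)
    return float(np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * r**2 / sigma**2))

def loglik_heteroscedastic(q_obs, q_sim, a, b):
    """Residual standard deviation grows linearly with simulated flow."""
    q_sim = np.asarray(q_sim, float)
    sigma = a + b * q_sim
    r = np.asarray(q_obs) - q_sim
    return float(np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * r**2 / sigma**2))
```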
Determining the Uncertainty of X-Ray Absorption Measurements
Wojcik, Gary S.
2004-01-01
X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well-approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2 % by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10 % in any of the tests conducted. PMID:27366627
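The Poisson check described above is easy to reproduce: for repeated counts at a fixed location, the normalized root mean square error should be close to one over the square root of the mean count. The counts below are simulated placeholders, not measured data.

```python
# Sketch: compare the observed normalized RMS error of repeated x-ray
# counts at one location with the Poisson estimate 1/sqrt(mean counts).
# The counts here are simulated placeholders, not measured data.
import numpy as np

rng = np.random.default_rng(1)
mean_counts = 4000.0
repeats = rng.poisson(mean_counts, size=200)      # repeated point measurements

mean = repeats.mean()
normalized_rmse = np.sqrt(np.mean((repeats - mean) ** 2)) / mean
poisson_estimate = 1.0 / np.sqrt(mean)

print(f"normalized RMSE  = {normalized_rmse:.4f}")
print(f"Poisson estimate = {poisson_estimate:.4f}")   # ~0.016 for ~4000 counts
```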
Pappas, Christos; Kyraleou, Maria; Voskidi, Eleni; Kotseridis, Yorgos; Tarantilis, Petros A; Kallithraka, Stamatina
2015-02-01
The mean degree of polymerization (mDP) and the degree of galloylation (%G) in grape seeds were determined directly and simultaneously using diffuse reflectance infrared Fourier transform spectroscopy and partial least squares (PLS) regression. The results were compared with those obtained using the conventional analysis employing phloroglucinolysis as pretreatment followed by high performance liquid chromatography with UV and mass spectrometry detection. Infrared spectra were recorded on solid-state samples after freeze drying. The 2nd derivative of the 1832 to 1416 and 918 to 739 cm(-1) spectral regions was used for the quantification of mDP, and the 2nd derivative of the 1813 to 607 cm(-1) spectral region for the determination of %G, together with PLS regression. The determination coefficients (R(2)) of mDP and %G were 0.99 and 0.98, respectively. The corresponding values of the root-mean-square error of calibration were 0.506 and 0.692, of the root-mean-square error of cross-validation 0.811 and 0.921, and of the root-mean-square error of prediction 0.612 and 0.801. Compared with the conventional method, the proposed method is simpler, less time consuming, more economical, and requires smaller quantities of chemical reagents and fewer sample pretreatment steps. It could be a starting point for the design of more specific models according to the requirements of the wineries. © 2015 Institute of Food Technologists®
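A minimal sketch of the preprocessing pipeline (second-derivative spectra restricted to selected wavenumber regions, followed by PLS) is given below. The Savitzky-Golay window length and polynomial order, the placeholder data, and the number of latent variables are assumptions; only the region limits come from the abstract.

```python
# Sketch: second-derivative (Savitzky-Golay) preprocessing of IR spectra,
# restriction to selected wavenumber regions, then PLS regression.
# Window, polynomial order, data and latent variables are assumptions.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

def second_derivative(spectra, window=11, polyorder=3):
    return savgol_filter(spectra, window_length=window, polyorder=polyorder,
                         deriv=2, axis=1)

def select_regions(spectra, wavenumbers, regions):
    mask = np.zeros_like(wavenumbers, dtype=bool)
    for lo, hi in regions:
        mask |= (wavenumbers >= lo) & (wavenumbers <= hi)
    return spectra[:, mask]

# Placeholder data: 40 samples over 600-2300 cm^-1.
rng = np.random.default_rng(2)
wn = np.linspace(600, 2300, 1700)
X = rng.normal(size=(40, wn.size))
y_mdp = rng.normal(loc=5.0, scale=1.0, size=40)        # placeholder mDP values

X_d2 = second_derivative(X)
X_sel = select_regions(X_d2, wn, regions=[(1416, 1832), (739, 918)])
pls = PLSRegression(n_components=6).fit(X_sel, y_mdp)
rmsec = float(np.sqrt(np.mean((pls.predict(X_sel).ravel() - y_mdp) ** 2)))
```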
Durakli Velioglu, Serap; Ercioglu, Elif; Boyaci, Ismail Hakki
2017-05-01
This research paper describes the potential of synchronous fluorescence (SF) spectroscopy for the authentication of buffalo milk, a favourable raw material in the production of some premium dairy products. Buffalo milk is subjected to fraudulent activities like many other high-priced foodstuffs. The current methods widely used for the detection of adulteration of buffalo milk have various disadvantages that make them unattractive for routine analysis. Thus, the aim of the present study was to assess the potential of SF spectroscopy in combination with multivariate methods for rapid discrimination between buffalo and cow milk and for detection of the adulteration of buffalo milk with cow milk. SF spectra of cow and buffalo milk samples were recorded over the 400-550 nm excitation range with Δλ of 10-100 nm, in steps of 10 nm. The data obtained for Δλ = 10 nm were used to classify the samples using principal component analysis (PCA) and to detect the adulteration level of buffalo milk with cow milk using partial least squares (PLS) regression. Successful discrimination of samples and detection of adulteration of buffalo milk with a limit of detection (LOD) of 6% were achieved with models having root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP) values of 2, 7, and 4%, respectively. The results reveal the potential of SF spectroscopy for rapid authentication of buffalo milk.
Local Linear Regression for Data with AR Errors.
Li, Runze; Li, Yan
2009-07-01
In many statistical applications, data are collected over time, and they are likely correlated. In this paper, we investigate how to incorporate the correlation information into local linear regression. Under the assumption that the error process is an autoregressive process, a new estimation procedure is proposed for the nonparametric regression by using the local linear regression method and profile least squares techniques. We further propose the SCAD-penalized profile least squares method to determine the order of the autoregressive process. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed procedure and to compare it with the existing one. From our empirical studies, the newly proposed procedures can dramatically improve the accuracy of naive local linear regression with a working-independence error structure. We illustrate the proposed methodology by an analysis of a real data set.
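For reference, the naive (working-independence) local linear estimator that the paper improves on is just a kernel-weighted straight-line fit at each evaluation point; a minimal sketch follows. The AR(1) coefficient, bandwidth and simulated regression function are placeholders, and the profile least squares and SCAD steps are not reproduced.

```python
# Minimal local linear regression (kernel-weighted least squares).  This is
# the naive working-independence estimator; the profile least squares step
# that incorporates the AR error structure is not reproduced here.
import numpy as np

def local_linear(x, y, x0, bandwidth):
    """Estimate m(x0) by a kernel-weighted straight-line fit centred at x0."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)    # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0]                                    # local intercept = fit at x0

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 200))
eps = np.zeros(200)
for t in range(1, 200):                               # AR(1) errors
    eps[t] = 0.6 * eps[t - 1] + rng.normal(scale=0.1)
y = np.sin(2 * np.pi * x) + eps

grid = np.linspace(0, 1, 50)
m_hat = np.array([local_linear(x, y, g, bandwidth=0.05) for g in grid])
```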
A study of GPS measurement errors due to noise and multipath interference for CGADS
NASA Technical Reports Server (NTRS)
Axelrad, Penina; MacDoran, Peter F.; Comp, Christopher J.
1996-01-01
This report describes a study performed by the Colorado Center for Astrodynamics Research (CCAR) on GPS measurement errors in the Codeless GPS Attitude Determination System (CGADS) due to noise and multipath interference. Preliminary simulation models of the CGADS receiver and orbital multipath are described. The standard FFT algorithm for processing the codeless data is described, and two alternative algorithms - an auto-regressive/least squares (AR-LS) method and a combined adaptive notch filter/least squares (ANF-ALS) method - are also presented. Effects of system noise, quantization, baseband frequency selection, and Doppler rates on the accuracy of phase estimates with each of the processing methods are shown. Typical electrical phase errors for the AR-LS method are 0.2 degrees, compared to 0.3 and 0.5 degrees for the FFT and ANF-ALS algorithms, respectively. Doppler rate was found to have the largest effect on the performance.
Linear regression in astronomy. II
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.; Babu, Gutti J.
1992-01-01
A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.
NASA Astrophysics Data System (ADS)
Adineh-Vand, A.; Torabi, M.; Roshani, G. H.; Taghipour, M.; Feghhi, S. A. H.; Rezaei, M.; Sadati, S. M.
2013-09-01
This paper presents a soft-computing-based artificial intelligence technique, the adaptive neuro-fuzzy inference system (ANFIS), to predict the neutron production rate (NPR) of the IR-IECF device over wide discharge current and voltage ranges. A hybrid learning algorithm consisting of back-propagation and least-squares estimation is used for training the ANFIS model. The performance of the proposed ANFIS model is tested against the experimental data using four performance measures: correlation coefficient, mean absolute error, mean relative error percentage (MRE%) and root mean square error. The obtained results show that the proposed ANFIS model achieves good agreement with the experimental results. In comparison with the experimental data, the proposed ANFIS model has MRE% values below 1.53% and 2.85% for training and testing data, respectively. Therefore, this model can be used as an efficient tool to predict the NPR of the IR-IECF device.
Niazi, Ali; Zolgharnein, Javad; Afiuni-Zadeh, Somaie
2007-11-01
Ternary mixtures of thiamin, riboflavin and pyridoxal have been simultaneously determined in synthetic and real samples by application of spectrophotometry and least-squares support vector machines (LS-SVM). The calibration graphs were linear in the ranges of 1.0-20.0, 1.0-10.0 and 1.0-20.0 microg ml(-1), with detection limits of 0.6, 0.5 and 0.7 microg ml(-1) for thiamin, riboflavin and pyridoxal, respectively. The experimental calibration matrix was designed with 21 mixtures of these chemicals, with concentrations varied within the linear ranges of the calibration graphs. The simultaneous determination of these vitamin mixtures by spectrophotometric methods is a difficult problem due to spectral interferences. Partial least squares (PLS) modeling and least-squares support vector machines were used for the multivariate calibration of the spectrophotometric data. An excellent model was built using LS-SVM, with low prediction errors and superior performance relative to PLS. The root mean square errors of prediction (RMSEP) for thiamin, riboflavin and pyridoxal were 0.6926, 0.3755 and 0.4322 with PLS, and 0.0421, 0.0318 and 0.0457 with LS-SVM, respectively. The proposed method was satisfactorily applied to the rapid simultaneous determination of thiamin, riboflavin and pyridoxal in commercial pharmaceutical preparations and human plasma samples.
Modeling phosphorus losses from soils amended with cattle manures and chemical fertilizers.
Wang, Zhaozhi; Zhang, T Q; Tan, C S; Vadas, P; Qi, Z M; Wellen, C
2018-05-22
While applied manure/fertilizer is an important source of phosphorus (P) loss in surface runoff, few models simulate the direct transfer of P from soil-surface-applied manure/fertilizer to surface runoff. The SurPhos model was tested with 2008-2010 growing season daily surface runoff data from clay loam experimental plots subject to different manure/fertilizer applications. Model performance was evaluated on the basis of the coefficient of determination (R²), Nash-Sutcliffe efficiency (NSE), percent bias (PBIAS), and the ratio of the root mean square error to the standard deviation of observed values (RSR). The model offered an acceptable performance in simulating soil labile P dynamics (R² = 0.75, NSE = 0.55, PBIAS = 10.43%, and RSR = 0.67) and dissolved reactive P (DRP) loss in surface runoff (R² ≥ 0.74 and NSE ≥ 0.69) for both solid and liquid cattle manure, as well as inorganic fertilizer. Simulated direct P loss in surface runoff from solid and liquid cattle manure accounted for 39% and 40% of total growing season DRP losses in surface runoff. To compensate for the unavailability of daily surface runoff observations under snowmelt conditions, the whole four years' (2008-2011) daily surface runoff predicted by EPIC (Environmental Policy Integrated Climate) was used as SurPhos input. The accuracy of simulated DRP loss in surface runoff under the different manure/fertilizer treatments was acceptable (R² ≥ 0.55 and NSE ≥ 0.50). For the solid cattle manure treatment, 19% of all annual DRP losses were derived directly from the manure. Beyond offering a reliable prediction of manure/fertilizer P loss in surface runoff, SurPhos quantified different sources of DRP loss and dynamic labile P in soil, allowing a better critical assessment of the effectiveness of different P management measures in mitigating DRP losses. Copyright © 2018 Elsevier B.V. All rights reserved.
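The performance statistics used above are standard and straightforward to compute from paired observed/simulated series; a small sketch follows. Note that conventions vary slightly (for example, whether the observation standard deviation uses a sample or population divisor), so the values here are illustrative.

```python
# Sketch of the model-performance statistics used above (NSE, PBIAS, RSR)
# for arrays of observed and simulated values (e.g., DRP loss in runoff).
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return rmse / np.std(obs)

obs = np.array([0.12, 0.30, 0.08, 0.45, 0.22])   # placeholder observations
sim = np.array([0.10, 0.34, 0.09, 0.40, 0.25])   # placeholder simulations
print(nse(obs, sim), pbias(obs, sim), rsr(obs, sim))
```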
A priori discretization error metrics for distributed hydrologic modeling applications
NASA Astrophysics Data System (ADS)
Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar
2016-12-01
Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.
Modeling error analysis of stationary linear discrete-time filters
NASA Technical Reports Server (NTRS)
Patel, R.; Toda, M.
1977-01-01
The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates, for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter when only the range of errors in the elements of the model matrices is available.
NASA Technical Reports Server (NTRS)
Chang, Ching L.; Jiang, Bo-Nan
1990-01-01
A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formula. The 2D Stokes problem is analyzed to define the product space and its inner product, and the a priori estimates are derived to give the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.
Coupling finite element and spectral methods: First results
NASA Technical Reports Server (NTRS)
Bernardi, Christine; Debit, Naima; Maday, Yvon
1987-01-01
A Poisson equation on a rectangular domain is solved by coupling two methods: the domain is divided in two squares, a finite element approximation is used on the first square and a spectral discretization is used on the second one. Two kinds of matching conditions on the interface are presented and compared. In both cases, error estimates are proved.
Albustanji, Yusuf M; Albustanji, Mahmoud M; Hegazi, Mohamed M; Amayreh, Mousa M
2014-10-01
The purpose of this study was to assess the prevalence and types of consonant production errors and phonological processes in Saudi Arabic-speaking children with repaired cleft lip and palate, and to determine the relationship between the frequency of errors and the type of cleft. A possible relationship between age, gender and frequency of errors was also investigated. Eighty Saudi children with repaired cleft lip and palate, aged 6-15 years (mean 6.7 years), underwent speech, language, and hearing evaluation. The diagnosis of articulation deficits was based on the results of an Arabic articulation test. Phonological processes were reported based on a productivity criterion of at least 20% occurrence. Diagnosis of nasality was based on a 5-point scale reflecting severity from 0 through 4. All participants underwent intraoral examination, informal language assessment, and hearing evaluation to assess their speech and language abilities. The chi-square test for independence was used to analyze the results of consonant production as a function of type of CLP and age. Of the 80 participants with CLP, 21 had normal articulation and resonance, while 59 (74%) showed speech abnormalities. Twenty-one of these 59 participants showed only articulation errors, 17 showed only hypernasality, and 21 showed both articulation and resonance deficits. CAs were observed in 20 participants. The productive phonological processes were consonant backing, final consonant deletion, gliding, and stopping. At age 6 and older, 37% of participants had persisting hearing loss. Despite the early age at time of surgery (mean 6.7 months) of the studied CLP participants, a substantial number of them demonstrated articulation errors and hypernasality. The results are in line with findings reported for diverse languages. It is especially interesting to consider the prevalence of glottal stops and pharyngeal fricatives in a population for whom these sounds are phonemic. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Building on crossvalidation for increasing the quality of geostatistical modeling
Olea, R.A.
2012-01-01
The random function is a mathematical model commonly used in the assessment of uncertainty associated with a spatially correlated attribute that has been partially sampled. There are multiple algorithms for modeling such random functions, all sharing the requirement of specifying various parameters that have a critical influence on the results. The importance of finding ways to compare the methods and to set parameters that better model uncertainty has increased as these algorithms have grown in number and complexity. Crossvalidation has been used in spatial statistics, mostly in kriging, for the analysis of mean square errors. An appeal of this approach is its ability to work with the same empirical sample available for running the algorithms. This paper goes beyond checking estimates by formulating a function sensitive to conditional bias. Under ideal conditions, such a function turns into a straight line, which can be used as a reference for preparing measures of performance. Applied to kriging, deviations from the ideal line provide a sensitivity to the semivariogram that is lacking in crossvalidation of kriging errors, and they are more sensitive to conditional bias than analyses of errors. In terms of stochastic simulation, in addition to finding better parameters, the deviations allow comparison of the realizations resulting from the application of different methods. Examples show improvements of about 30% in the deviations and approximately 10% in the square root of the mean square errors between a reasonable starting model and the solutions obtained according to the new criteria. © 2011 US Government.
Rosen, Sophia; Davidov, Ori
2012-07-20
Multivariate outcomes are often measured longitudinally. For example, in hearing loss studies, hearing thresholds for each subject are measured repeatedly over time at several frequencies. Thus, each patient is associated with a multivariate longitudinal outcome. The multivariate mixed-effects model is a useful tool for the analysis of such data. There are situations in which the parameters of the model are subject to some restrictions or constraints. For example, it is known that hearing thresholds, at every frequency, increase with age. Moreover, this age-related threshold elevation is monotone in frequency, that is, the higher the frequency, the higher, on average, is the rate of threshold elevation. This means that there is a natural ordering among the different frequencies in the rate of hearing loss. In practice, this amounts to imposing a set of constraints on the different frequencies' regression coefficients modeling the mean effect of time and age at entry to the study on hearing thresholds. The aforementioned constraints should be accounted for in the analysis. The result is a multivariate longitudinal model with restricted parameters. We propose estimation and testing procedures for such models. We show that ignoring the constraints may lead to misleading inferences regarding the direction and the magnitude of various effects. Moreover, simulations show that incorporating the constraints substantially improves the mean squared error of the estimates and the power of the tests. We used this methodology to analyze a real hearing loss study. Copyright © 2012 John Wiley & Sons, Ltd.
Analysis of Students' Error in Learning of Quadratic Equations
ERIC Educational Resources Information Center
Zakaria, Effandi; Ibrahim; Maat, Siti Mistima
2010-01-01
The purpose of the study was to determine students' errors in learning quadratic equations. The samples were 30 Form Three students from a secondary school in Jambi, Indonesia. A diagnostic test was used as the instrument of this study; it included three components: factorization, completing the square and the quadratic formula. Diagnostic interview…
Rosenblum, Uri; Melzer, Itshak
2017-01-01
About 90% of people with multiple sclerosis (PwMS) have gait instability and 50% fall. Reliable and clinically feasible methods of gait instability assessment are needed. The study investigated the reliability and validity of the Narrow Path Walking Test (NPWT) under single-task (ST) and dual-task (DT) conditions for PwMS. Thirty PwMS performed the NPWT on 2 different occasions, a week apart. Number of Steps, Trial Time, Trial Velocity, Step Length, Number of Step Errors, Number of Cognitive Task Errors, and Number of Balance Losses were measured. Intraclass correlation coefficients (ICC2,1) were calculated from the average values of NPWT parameters. Absolute reliability was quantified from the standard error of measurement (SEM) and the smallest real difference (SRD). Concurrent validity of the NPWT with the Functional Reach Test, Four Square Step Test (FSST), 12-item Multiple Sclerosis Walking Scale (MSWS-12), and 2 Minute Walking Test (2MWT) was determined using partial correlations. Intraclass correlation coefficients (ICCs) for most NPWT parameters during ST and DT ranged from 0.46 to 0.94 and 0.55 to 0.95, respectively. The highest relative reliability was found for Number of Step Errors (ICC = 0.94 and 0.93 for ST and DT, respectively) and Trial Velocity (ICC = 0.83 and 0.86 for ST and DT, respectively). Absolute reliability was high for Number of Step Errors in ST (SEM% = 19.53%) and DT (SEM% = 18.14%) and low for Trial Velocity in ST (SEM% = 6.88%) and DT (SEM% = 7.29%). Significant correlations of Number of Step Errors and Trial Velocity were found with the FSST, MSWS-12, and 2MWT. In PwMS performing the NPWT, Number of Step Errors and Trial Velocity were highly reliable parameters. Based on correlations with other measures of gait instability, Number of Step Errors was the most valid parameter of dynamic balance under the conditions of our test. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, available at: http://links.lww.com/JNPT/A159).
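As a reference for the reliability statistics quoted above, the following is a small sketch of ICC(2,1), SEM and SRD computed from a subjects-by-sessions score matrix using the standard two-way ANOVA decomposition (Shrout and Fleiss). The data are placeholders, and this is not necessarily the exact software procedure the authors used.

```python
# Sketch: ICC(2,1) (two-way random effects, absolute agreement, single
# measurement), SEM and SRD for a subjects x sessions score matrix.
# Formulas follow Shrout & Fleiss (1979); the data are placeholders.
import numpy as np

def icc_2_1(scores):
    scores = np.asarray(scores, float)
    n, k = scores.shape                      # subjects x sessions
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((scores - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                  # between-subjects mean square
    msc = ss_cols / (k - 1)                  # between-sessions mean square
    mse = ss_err / ((n - 1) * (k - 1))       # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def sem_srd(scores, icc):
    sd = np.std(np.asarray(scores, float), ddof=1)   # pooled SD of all scores
    sem = sd * np.sqrt(1.0 - icc)
    srd = 1.96 * np.sqrt(2.0) * sem
    return sem, srd

rng = np.random.default_rng(4)
true = rng.normal(20, 5, size=30)            # 30 subjects, 2 sessions (placeholders)
scores = np.column_stack([true + rng.normal(0, 2, 30), true + rng.normal(0, 2, 30)])
icc = icc_2_1(scores)
sem, srd = sem_srd(scores, icc)
```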
Dissociating error-based and reinforcement-based loss functions during sensorimotor learning
McGregor, Heather R.; Mohatarem, Ayman
2017-01-01
It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback. PMID:28753634
Dissociating error-based and reinforcement-based loss functions during sensorimotor learning.
Cashaback, Joshua G A; McGregor, Heather R; Mohatarem, Ayman; Gribble, Paul L
2017-07-01
It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback.
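The core dissociation (mean versus mode of a skewed shift distribution) can be illustrated with a toy simulation: under a squared-error loss the best aim point cancels the mean shift, while under a hit/miss (reinforcement) loss it cancels the mode. The gamma shift distribution and target width below are placeholders, not the study's experimental values.

```python
# Toy illustration: with a skewed distribution of lateral cursor shifts, a
# squared-error loss is minimized by aiming to cancel the MEAN shift, while
# a hit/miss (reinforcement) loss is maximized by cancelling the MODE.
# The skewed distribution and target half-width are placeholders.
import numpy as np

rng = np.random.default_rng(5)
shifts = rng.gamma(shape=2.0, scale=0.8, size=100_000)   # skewed: mode < mean
target_halfwidth = 0.5

aims = np.linspace(-4.0, 1.0, 501)                       # candidate hand aim points
# the cursor lands at aim + shift; the target is at 0
sq_loss = [np.mean((a + shifts) ** 2) for a in aims]
hit_rate = [np.mean(np.abs(a + shifts) < target_halfwidth) for a in aims]

best_sq = aims[int(np.argmin(sq_loss))]
best_hit = aims[int(np.argmax(hit_rate))]
gamma_mode = (2.0 - 1.0) * 0.8                           # mode of gamma(k, theta) = (k-1)*theta
print("aim minimizing squared error:", best_sq, " (≈ -mean shift =", -shifts.mean(), ")")
print("aim maximizing hit rate:     ", best_hit, " (≈ -mode       =", -gamma_mode, ")")
```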
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm is proposed to choose the scaling factor automatically. The second nonparametric probability density estimator uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
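A minimal Gaussian kernel estimator with an automatically chosen scaling factor is sketched below; Silverman's rule is used here as a simple stand-in for the sample-based selection procedure studied in the report, and the integrated squared error is evaluated against a known density only for illustration.

```python
# Sketch of a Gaussian kernel density estimator whose scaling factor
# (bandwidth) is chosen automatically from the sample.  Silverman's rule is
# a stand-in for a data-driven selection procedure; the ISE check below
# uses a known true density purely for illustration.
import numpy as np

def silverman_bandwidth(sample):
    n = sample.size
    return 1.06 * sample.std(ddof=1) * n ** (-1.0 / 5.0)

def kde(sample, grid, h=None):
    h = silverman_bandwidth(sample) if h is None else h
    u = (grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (sample.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(6)
sample = rng.normal(size=400)
grid = np.linspace(-4, 4, 200)
density = kde(sample, grid)

# Integrated squared error against the true standard normal density
true = np.exp(-0.5 * grid ** 2) / np.sqrt(2 * np.pi)
ise = float(np.sum((density - true) ** 2) * (grid[1] - grid[0]))
```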
Discrete-time state estimation for stochastic polynomial systems over polynomial observations
NASA Astrophysics Data System (ADS)
Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.
2018-07-01
This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations corrupted by additive white Gaussian noises. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show the effectiveness of the proposed filter compared to the extended Kalman filter.
Wang, Rong
2015-01-01
In real-world applications, images of faces vary with illumination, facial expression, and pose. More training samples are therefore able to reveal more of the possible appearances of the faces. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the problem of a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generated the mirror faces from the original training samples and put these two kinds of samples into a new training set. The face recognition experiments show that our method obtains high classification accuracy.
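A minimal sketch of the idea is given below: each training image is horizontally flipped to create a virtual sample, and a minimum squared error classifier (least squares regression onto one-hot class targets, with a small ridge term added here for numerical stability) is trained on the augmented set. This is an illustration of the approach, not the authors' exact formulation.

```python
# Minimal sketch: augment training faces with their horizontal mirrors and
# classify with a minimum squared error (least squares to one-hot targets)
# classifier.  The small ridge term is an added assumption for stability.
import numpy as np

def mirror_faces(images):
    """images: (n, h, w) array -> horizontally flipped virtual samples."""
    return images[:, :, ::-1]

def train_msec(images, labels, n_classes, lam=1e-3):
    X = np.vstack([images.reshape(len(images), -1),
                   mirror_faces(images).reshape(len(images), -1)])
    y = np.concatenate([labels, labels])          # integer labels 0..n_classes-1
    T = np.eye(n_classes)[y]                      # one-hot targets
    X = np.hstack([X, np.ones((len(X), 1))])      # bias column
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ T)
    return W

def predict_msec(W, images):
    X = images.reshape(len(images), -1)
    X = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(X @ W, axis=1)
```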
Cortical dipole imaging using truncated total least squares considering transfer matrix error.
Hori, Junichi; Takeuchi, Kosuke
2013-01-01
Cortical dipole imaging has been proposed as a method to visualize the electroencephalogram with high spatial resolution. We investigated an inverse technique for cortical dipole imaging using truncated total least squares (TTLS). The TTLS is a regularization technique that reduces the influence of both the measurement noise and the transfer matrix error caused by head model distortion. The estimation of the regularization parameter was also investigated, based on the L-curve. Computer simulation suggested that the estimation accuracy was improved by the TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials. We confirmed that the TTLS provided high spatial resolution for cortical dipole imaging.
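For reference, a generic truncated total least squares solution for a single measurement vector can be written from the SVD of the augmented matrix [A b] (following the formulation of Fierro, Golub, Hansen and O'Leary); a sketch follows. The transfer matrix, data and truncation level below are placeholders, and the L-curve selection of the truncation level is not shown.

```python
# Generic truncated total least squares (TTLS) solution of A x ≈ b via the
# SVD of the augmented matrix [A  b] (single right-hand side), with the
# truncation level k as the regularization parameter.  The matrix, data and
# k are placeholders; L-curve selection of k is not shown.
import numpy as np

def ttls(A, b, k):
    m, n = A.shape
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C, full_matrices=False)
    V = Vt.T
    V12 = V[:n, k:]                      # top block of discarded right singular vectors
    V22 = V[n:, k:]                      # bottom (last-row) block, shape (1, n+1-k)
    return -V12 @ np.linalg.pinv(V22)    # (n, 1) TTLS solution

rng = np.random.default_rng(7)
A = rng.normal(size=(64, 32))            # placeholder (noisy) transfer matrix
x_true = rng.normal(size=(32, 1))
b = A @ x_true + 0.01 * rng.normal(size=(64, 1))
x_ttls = ttls(A, b, k=20)
```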
Spectral combination of spherical gravitational curvature boundary-value problems
NASA Astrophysics Data System (ADS)
Pitoňák, Martin; Eshagh, Mehdi; Šprlák, Michal; Tenzer, Robert; Novák, Pavel
2018-04-01
Four solutions of the spherical gravitational curvature boundary-value problems can be exploited for the determination of the Earth's gravitational potential. In this article we discuss the combination of simulated satellite gravitational curvatures, i.e., components of the third-order gravitational tensor, by merging these solutions using the spectral combination method. For this purpose, integral estimators of biased and unbiased types are derived. In numerical studies, we investigate the performance of the developed mathematical models for gravitational field modelling in the area of Central Europe based on simulated satellite measurements. Firstly, we verify the correctness of the integral estimators for the spectral downward continuation by a closed-loop test. Estimated errors of the combined solution are about eight orders of magnitude smaller than those from the individual solutions. Secondly, we perform a numerical experiment by considering Gaussian noise with a standard deviation of 6.5 × 10⁻¹⁷ m⁻¹ s⁻² in the input data at the satellite altitude of 250 km above the mean Earth sphere. This value of standard deviation is equivalent to a signal-to-noise ratio of 10. Superior results with respect to the global geopotential model TIM-r5 are obtained by the spectral downward continuation of the vertical-vertical-vertical component, with a standard deviation of 2.104 m² s⁻², but its root mean square error is the largest and reaches 9.734 m² s⁻². Using the spectral combination of all gravitational curvatures, the root mean square error is more than 400 times smaller, but the standard deviation reaches 17.234 m² s⁻². The combination of more components decreases the root mean square error of the corresponding solutions, while the standard deviations of the combined solutions do not improve compared to the solution from the vertical-vertical-vertical component. The presented method represents a weighted mean in the spectral domain that minimizes the root mean square error of the combined solutions and improves the standard deviation of the solution based only on the least accurate components.
Roso, V M; Schenkel, F S; Miller, S P; Schaeffer, L R
2005-08-01
Breed additive, dominance, and epistatic loss effects are of concern in the genetic evaluation of a multibreed population. Multiple regression equations used for fitting these effects may show a high degree of multicollinearity among predictor variables. Typically, when strong linear relationships exist, the regression coefficients have large SE and are sensitive to changes in the data file and to the addition or deletion of variables in the model. Generalized ridge regression methods were applied to obtain stable estimates of direct and maternal breed additive, dominance, and epistatic loss effects in the presence of multicollinearity among predictor variables. Preweaning weight gains of beef calves in Ontario, Canada, from 1986 to 1999 were analyzed. The genetic model included fixed direct and maternal breed additive, dominance, and epistatic loss effects, fixed environmental effects of age of the calf, contemporary group, and age of the dam x sex of the calf, random additive direct and maternal genetic effects, and random maternal permanent environment effect. The degree and the nature of the multicollinearity were identified and ridge regression methods were used as an alternative to ordinary least squares (LS). Ridge parameters were obtained using two different objective methods: 1) generalized ridge estimator of Hoerl and Kennard (R1); and 2) bootstrap in combination with cross-validation (R2). Both ridge regression methods outperformed the LS estimator with respect to mean squared error of predictions (MSEP) and variance inflation factors (VIF) computed over 100 bootstrap samples. The MSEP of R1 and R2 were similar, and they were 3% less than the MSEP of LS. The average VIF of LS, R1, and R2 were equal to 26.81, 6.10, and 4.18, respectively. Ridge regression methods were particularly effective in decreasing the multicollinearity involving predictor variables of breed additive effects. Because of a high degree of confounding between estimates of maternal dominance and direct epistatic loss effects, it was not possible to compare the relative importance of these effects with a high level of confidence. The inclusion of epistatic loss effects in the additive-dominance model did not cause noticeable reranking of sires, dams, and calves based on across-breed EBV. More precise estimates of breed effects as a result of this study may result in more stable across-breed estimated breeding values over the years.
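The multicollinearity diagnosis and the ridge alternative to ordinary least squares described above can be sketched briefly: variance inflation factors are the diagonal of the inverse correlation matrix of the predictors, and the ridge estimator shrinks the coefficients of a standardized design. The collinear predictors, response and ridge parameter below are placeholders; the objective ridge-parameter choices used in the study (Hoerl-Kennard; bootstrap with cross-validation) are not reproduced.

```python
# Sketch: variance inflation factors (VIF) for the predictors of a multiple
# regression, and a ridge estimator with a fixed ridge parameter.
import numpy as np

def vif(X):
    """VIF of each column of a predictor matrix X (standardized internally)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    corr = np.corrcoef(Xs, rowvar=False)
    return np.diag(np.linalg.inv(corr))

def ridge_fit(X, y, lam):
    """Ridge coefficients for standardized predictors and a centred response."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yc = y - y.mean()
    p = Xs.shape[1]
    return np.linalg.solve(Xs.T @ Xs + lam * np.eye(p), Xs.T @ yc)

rng = np.random.default_rng(8)
z = rng.normal(size=(300, 1))
X = np.hstack([z + 0.05 * rng.normal(size=(300, 1)) for _ in range(4)])  # collinear
y = X @ np.array([1.0, -0.5, 0.8, 0.2]) + rng.normal(size=300)
print("VIF of the predictors:", vif(X))
print("ridge coefficients   :", ridge_fit(X, y, lam=10.0))
```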
Distribution of kriging errors, the implications and how to communicate them
NASA Astrophysics Data System (ADS)
Li, Hong Yi; Milne, Alice; Webster, Richard
2016-04-01
Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which itself is estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there and compares the kriged value with the value observed to obtain the deviation or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σ_K²), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σ_K² ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, and so the kriging variances are unbiased predictors of the squared error across the region. The method is based on the assumption that the errors have a normal distribution. The squared deviation ratio (SDR) should therefore be distributed as χ² with one degree of freedom, with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites. The uncertainty is typically under-estimated for the extreme observations and compensated for by over-estimating it for other observations. Statisticians must tell users of this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (ECa) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed ECas were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic, with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993, but with a MedSDR of 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the 0 of a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.
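The diagnostics described above are simple to compute once leave-one-out cross-validation has produced an error and a kriging variance at each point; the sketch below assumes those two arrays are already available (the kriging itself is not shown).

```python
# Computing the cross-validation diagnostics described above from arrays of
# leave-one-out kriging errors and kriging variances: ME, MSE, the mean
# squared deviation ratio MSDR and the median SDR, compared with the
# chi-square(1) median of about 0.455.
import numpy as np
from scipy.stats import chi2

def kriging_diagnostics(errors, kriging_variances):
    errors = np.asarray(errors, float)
    kv = np.asarray(kriging_variances, float)
    sdr = errors ** 2 / kv
    return {
        "ME": errors.mean(),
        "MSE": np.mean(errors ** 2),
        "MSDR": sdr.mean(),              # should be close to 1
        "MedSDR": np.median(sdr),        # should be close to the chi2(1) median
        "chi2_median": chi2.median(1),   # ≈ 0.455
    }
```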
Distribution of kriging errors, the implications and how to communicate them
NASA Astrophysics Data System (ADS)
Li, HongYi; Milne, Alice; Webster, Richard
2015-04-01
Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which itself is estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there and compares the kriged value with the value observed to obtain the deviation or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σ_K²), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σ_K² ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, and so the kriging variances are unbiased predictors of the squared error across the region. The method is based on the assumption that the errors have a normal distribution. The squared deviation ratio (SDR) should therefore be distributed as χ² with one degree of freedom, with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites. The uncertainty is typically under-estimated for the extreme observations and compensated for by over-estimating it for other observations. Statisticians must tell users of this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (ECa) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed ECas were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic, with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993, but with a MedSDR of 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the 0 of a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.
Skinner, Kenneth D.
2009-01-01
Elevation data in riverine environments can be used in various applications for which different levels of accuracy are required. The Experimental Advanced Airborne Research LiDAR (Light Detection and Ranging) - or EAARL - system was used to obtain topographic and bathymetric data along the lower Boise River, southwestern Idaho, for use in hydraulic and habitat modeling. The EAARL data were post-processed into bare-earth and bathymetric raster and point datasets. Concurrently with the EAARL data collection, real-time kinematic global positioning system and total station ground-survey data were collected in three areas within the lower Boise River basin to assess the accuracy of the EAARL elevation data in different hydrogeomorphic settings. The accuracies of the EAARL-derived elevation data, determined in open, flat terrain to provide an optimal vertical comparison surface, had root mean square errors ranging from 0.082 to 0.138 m. Accuracies for bank, floodplain, and in-stream bathymetric data had root mean square errors ranging from 0.090 to 0.583 m. The greater root mean square errors for the latter data are the result of high levels of turbidity in the downstream ground-survey area, dense tree canopy, and horizontal location discrepancies between the EAARL and ground-survey data in steeply sloping areas such as riverbanks. The EAARL point to ground-survey comparisons produced results similar to those for the EAARL raster to ground-survey comparisons, indicating that the interpolation of the EAARL points to rasters did not introduce significant additional error. The mean percent error for the wetted cross-sectional areas of the two upstream ground-survey areas was 1 percent. The mean percent error increases to -18 percent if the downstream ground-survey area is included, reflecting the influence of turbidity in that area.
Recursive least-squares learning algorithms for neural networks
NASA Astrophysics Data System (ADS)
Lewis, Paul S.; Hwang, Jenq N.
1990-11-01
This paper presents the development of a pair of recursive least squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second order information about the training error surface in order to achieve faster learning rates than are possible using first order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation, either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is O(N²), where N is the number of network parameters. This is due to the estimation of the N x N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can easily be derived by using only block diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two dimensional Gaussian bump. In this example RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6…). 1 BACKGROUND Artificial neural networks (ANNs) offer an interesting and potentially useful paradigm for signal processing and pattern recognition. The majority of ANN applications employ the feed-forward multilayer perceptron (MLP) network architecture, in which network parameters are "trained" by a supervised learning algorithm employing the generalized delta rule (GDR) [1, 2]. The GDR algorithm approximates a fixed-step steepest descent algorithm using derivatives computed by error backpropagation. The GDR algorithm is sometimes referred to as the backpropagation algorithm. However, in this paper we will use the term backpropagation to refer only to the process of computing error derivatives. While multilayer perceptrons provide a very powerful nonlinear modeling capability, GDR training can be very slow and inefficient. In linear adaptive filtering, the analog of the GDR algorithm is the least-mean-squares (LMS) algorithm. Steepest descent-based algorithms such as GDR or LMS are first order because they use only first derivative, or gradient, information about the training error to be minimized. To speed up the training process, second order algorithms may be employed that take advantage of second derivative, or Hessian matrix, information. Second order information can be incorporated into MLP training in different ways. In many applications, especially in the area of pattern recognition, the training set is finite. In these cases block learning can be applied using standard nonlinear optimization techniques [3, 4, 5].
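For reference, the standard RLS recursion for a model that is linear in its parameters is sketched below. Applying it to a multilayer perceptron, as the paper does, additionally requires linearizing each pattern error about the current weights (with derivatives obtained by backpropagation); that linearization step is not shown, and the forgetting factor and initialization below are conventional assumptions.

```python
# Standard recursive least squares (RLS) update for a linear-in-parameters
# model: gain vector, a priori error, weight update, and inverse-Hessian
# (covariance) update.  Forgetting factor and initialization are assumptions.
import numpy as np

class RLS:
    def __init__(self, n_params, forgetting=1.0, delta=100.0):
        self.w = np.zeros(n_params)
        self.P = delta * np.eye(n_params)    # inverse Hessian / covariance estimate
        self.lam = forgetting

    def update(self, x, d):
        """x: input (or linearized derivative) vector, d: desired output."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)         # gain vector
        err = d - self.w @ x                 # a priori error
        self.w += k * err
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return err

rng = np.random.default_rng(9)
true_w = np.array([0.5, -1.2, 2.0])
rls = RLS(n_params=3)
for _ in range(500):
    x = rng.normal(size=3)
    d = true_w @ x + rng.normal(scale=0.05)
    rls.update(x, d)
```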
ERIC Educational Resources Information Center
Ding, Cody S.; Davison, Mark L.
2010-01-01
Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…
Linhart, S. Mike; Nania, Jon F.; Sanders, Curtis L.; Archfield, Stacey A.
2012-01-01
The U.S. Geological Survey (USGS) maintains approximately 148 real-time streamgages in Iowa for which daily mean streamflow information is available, but daily mean streamflow data commonly are needed at locations where no streamgages are present. Therefore, the USGS conducted a study as part of a larger project in cooperation with the Iowa Department of Natural Resources to develop methods to estimate daily mean streamflow at locations in ungaged watersheds in Iowa by using two regression-based statistical methods. The regression equations for the statistical methods were developed from historical daily mean streamflow and basin characteristics from streamgages within the study area, which includes the entire State of Iowa and adjacent areas within a 50-mile buffer of Iowa in neighboring states. Results of this study can be used with other techniques to determine the best method for application in Iowa and can be used to produce a Web-based geographic information system tool to compute streamflow estimates automatically. The Flow Anywhere statistical method is a variation of the drainage-area-ratio method, which transfers same-day streamflow information from a reference streamgage to another location by using the daily mean streamflow at the reference streamgage and the drainage-area ratio of the two locations. The Flow Anywhere method modifies the drainage-area-ratio method in order to regionalize the equations for Iowa and determine the best reference streamgage from which to transfer same-day streamflow information to an ungaged location. Data used for the Flow Anywhere method were retrieved for 123 continuous-record streamgages located in Iowa and within a 50-mile buffer of Iowa. The final regression equations were computed by using either left-censored regression techniques with a low limit threshold set at 0.1 cubic feet per second (ft3/s) and the daily mean streamflow for the 15th day of every other month, or by using an ordinary-least-squares multiple linear regression method and the daily mean streamflow for the 15th day of every other month. The Flow Duration Curve Transfer method was used to estimate unregulated daily mean streamflow from the physical and climatic characteristics of gaged basins. For the Flow Duration Curve Transfer method, daily mean streamflow quantiles at the ungaged site were estimated with the parameter-based regression model, which results in a continuous daily flow-duration curve (the relation between exceedance probability and streamflow for each day of observed streamflow) at the ungaged site. By the use of a reference streamgage, the Flow Duration Curve Transfer is converted to a time series. Data used in the Flow Duration Curve Transfer method were retrieved for 113 continuous-record streamgages in Iowa and within a 50-mile buffer of Iowa. The final statewide regression equations for Iowa were computed by using a weighted-least-squares multiple linear regression method and were computed for the 0.01-, 0.05-, 0.10-, 0.15-, 0.20-, 0.30-, 0.40-, 0.50-, 0.60-, 0.70-, 0.80-, 0.85-, 0.90-, and 0.95-exceedance probability statistics determined from the daily mean streamflow with a reporting limit set at 0.1 ft3/s. The final statewide regression equation for Iowa computed by using left-censored regression techniques was computed for the 0.99-exceedance probability statistic determined from the daily mean streamflow with a low limit threshold and a reporting limit set at 0.1 ft3/s. 
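The basic drainage-area-ratio transfer that the Flow Anywhere method modifies can be written in a couple of lines; a sketch follows before the validation results. The regionalized regression refinements and reference-gage selection of Flow Anywhere are not reproduced, and the areas and flows below are placeholders.

```python
# The drainage-area-ratio transfer underlying the Flow Anywhere method:
# same-day streamflow at an ungaged site estimated from a reference
# streamgage scaled by the ratio of drainage areas.  The regionalized
# regression refinements are not reproduced; values are placeholders.
import numpy as np

def drainage_area_ratio(q_ref, area_ref, area_ungaged):
    """Estimate daily mean streamflow (same units as q_ref) at an ungaged site."""
    return np.asarray(q_ref, float) * (area_ungaged / area_ref)

q_reference = np.array([120.0, 95.0, 310.0, 88.0])   # ft3/s at the reference gage
q_estimated = drainage_area_ratio(q_reference, area_ref=450.0, area_ungaged=210.0)
```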
For the Flow Anywhere method, results of the validation study conducted by using six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 1,016 to 138 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 1,690 to 237 ft3/s. Values of the percent root-mean-square error ranged from 115 percent to 26.2 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 13.0 to 5.3 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.80 to 0.40. Percent-bias values ranged from 25.4 to 4.0 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.35. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.86 to 0.56. For the streamgage with the best agreement between observed and estimated streamflow, higher streamflows appear to be underestimated. For the streamgage with the worst agreement between observed and estimated streamflow, low flows appear to be overestimated whereas higher flows seem to be underestimated. Estimated cumulative streamflows for the period October 1, 2004, to September 30, 2009, are underestimated by -25.8 and -7.4 percent for the closest and poorest comparisons, respectively. For the Flow Duration Curve Transfer method, results of the validation study conducted by using the same six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 437 to 93.9 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 906 to 169 ft3/s. Values of the percent root-mean-square error ranged from 67.0 to 25.6 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 12.5 to 4.4 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.79 to 0.40. Percent-bias values ranged from 22.7 to 0.94 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.38. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.89 to 0.48. For the streamgage with the closest agreement between observed and estimated streamflow, there is relatively good agreement between observed and estimated streamflows. For the streamgage with the poorest agreement between observed and estimated streamflow, streamflows appear to be substantially underestimated for much of the time period. Estimated cumulative streamflow for the period October 1, 2004, to September 30, 2009, is underestimated by -9.3 and -22.7 percent for the closest and poorest comparisons, respectively.
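The drainage-area-ratio transfer that the Flow Anywhere method builds on can be sketched in a few lines of Python; the function name and the numerical values below are illustrative only and are not taken from the report.

    # Minimal sketch of the basic drainage-area-ratio transfer that the
    # Flow Anywhere method extends; names and numbers are illustrative.
    def transfer_daily_flow(q_ref_cfs, area_ref_sqmi, area_ungaged_sqmi):
        """Estimate same-day streamflow at an ungaged site from a reference streamgage."""
        return q_ref_cfs * (area_ungaged_sqmi / area_ref_sqmi)

    # Example: reference gage reports 250 ft3/s; drainage areas are 430 and 180 square miles.
    q_est = transfer_daily_flow(250.0, 430.0, 180.0)
    print(round(q_est, 1))  # about 104.7 ft3/s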
A fast least-squares algorithm for population inference
2013-01-01
Background Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual’s genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. Results We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. Conclusions The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate. PMID:23343408
A fast least-squares algorithm for population inference.
Parry, R Mitchell; Wang, May D
2013-01-23
Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual's genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate.
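The least-squares simplification described above can be illustrated with a toy alternating least-squares factorization of a genotype matrix; the sketch below conveys the general idea only, is not the authors' algorithm, and its projection steps and data are assumptions.

    # Toy alternating least-squares sketch for admixture-style inference, assuming
    # genotypes g_ij in {0, 1, 2} are modeled as 2 * sum_k Q[i, k] * P[k, j].
    # This illustrates the idea only; it is not the published algorithm.
    import numpy as np

    def als_admixture(G, K, n_iter=50, seed=0):
        rng = np.random.default_rng(seed)
        n, m = G.shape
        T = G / 2.0                                  # target allele dosages in [0, 1]
        Q = rng.dirichlet(np.ones(K), size=n)        # admixture proportions (rows sum to 1)
        P = rng.uniform(0.05, 0.95, size=(K, m))     # population allele frequencies
        for _ in range(n_iter):
            # Update P with Q fixed, then clip to valid frequencies.
            P = np.linalg.lstsq(Q, T, rcond=None)[0].clip(1e-6, 1 - 1e-6)
            # Update Q with P fixed, then project rows back onto the simplex (crudely).
            Q = np.linalg.lstsq(P.T, T.T, rcond=None)[0].T.clip(1e-6, None)
            Q /= Q.sum(axis=1, keepdims=True)
        return Q, P

    G = np.random.default_rng(1).integers(0, 3, size=(60, 200)).astype(float)
    Q, P = als_admixture(G, K=3)
    print(Q.shape, P.shape)  # (60, 3) (3, 200)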
Moreira, Luiz Felipe Pompeu Prado; Ferrari, Adriana Cristina; Moraes, Tiago Bueno; Reis, Ricardo Andrade; Colnago, Luiz Alberto; Pereira, Fabíola Manhas Verbi
2016-05-19
Time-domain nuclear magnetic resonance and chemometrics were used to predict color parameters, such as lightness (L*), redness (a*), and yellowness (b*), of beef (Longissimus dorsi muscle) samples. By analyzing the relaxation decays with multivariate models built with partial least-squares regression, color quality parameters were predicted. The partial least-squares models showed low errors independent of the sample size, indicating the potential of the method. Mincing and weighing were not necessary to improve the predictive performance of the models. The reduction of the transverse relaxation time (T2) measured by the Carr-Purcell-Meiboom-Gill pulse sequence in darker beef compared with lighter beef can be explained by the lower relaxivity of Fe2+, present in deoxymyoglobin and oxymyoglobin (red beef), relative to the higher relaxivity of Fe3+, present in metmyoglobin (brown beef). These results indicate that time-domain nuclear magnetic resonance spectroscopy can become a useful tool for quality assessment of beef on the bulk of the sample and through packaging, because this technique is also widely applied to measure sensorial parameters, such as flavor, juiciness, and tenderness, and physicochemical parameters, such as cooking loss, fat and moisture content, and instrumental tenderness using Warner-Bratzler shear force. Copyright © 2016 John Wiley & Sons, Ltd.
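A minimal sketch of the kind of model described above, mapping relaxation decays to L*, a*, and b* with partial least-squares regression, is shown below; the data are simulated and the component count is an assumption.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 512))   # simulated CPMG relaxation decays (80 samples x 512 echoes)
    Y = rng.normal(size=(80, 3))     # simulated L*, a*, b* color values
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

    pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
    rmsep = np.sqrt(mean_squared_error(Y_te, pls.predict(X_te)))
    print(f"RMSEP: {rmsep:.3f}")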
Dehesh, Tania; Zare, Najaf; Ayatollahi, Seyyed Mohammad Taghi
2015-01-01
The univariate meta-analysis (UM) procedure, a technique that provides a single overall result, has become increasingly popular. Neglecting other concomitant covariates in the models, however, leads to a loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least squares (MGLS) method as a multivariate meta-analysis approach. We evaluated the efficiency of the four new approaches, namely zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC), in terms of estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazards model coefficients in a simulation study. Comparing the simulation results on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared with the EC, CC, and ZC procedures. The precision ranking of the four approaches across all of the above settings was MMC ≥ EC ≥ CC ≥ ZC. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggest the use of the MMC procedure to overcome the lack of information needed for a complete covariance matrix of the coefficients.
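The MGLS pooling step that the four approaches approximate can be written compactly; the sketch below assumes the study-level covariance blocks are already available (which is precisely the quantity being approximated) and uses illustrative numbers.

    # Hedged sketch of multivariate generalized least-squares pooling of
    # study-level coefficient vectors with a block-diagonal covariance matrix.
    import numpy as np
    from scipy.linalg import block_diag

    def mgls_pool(coefs, covs):
        """coefs: list of (p,) coefficient vectors; covs: list of (p, p) covariance matrices."""
        p = coefs[0].shape[0]
        y = np.concatenate(coefs)                  # stacked coefficients
        X = np.vstack([np.eye(p)] * len(coefs))    # each study estimates the same beta
        V_inv = np.linalg.inv(block_diag(*covs))
        precision = X.T @ V_inv @ X
        beta = np.linalg.solve(precision, X.T @ V_inv @ y)
        return beta, np.linalg.inv(precision)

    b1, b2 = np.array([0.50, -0.20]), np.array([0.65, -0.10])
    V1, V2 = np.diag([0.04, 0.02]), np.diag([0.09, 0.03])
    print(mgls_pool([b1, b2], [V1, V2])[0])        # pooled coefficient vector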
Ultimate limits for quantum magnetometry via time-continuous measurements
NASA Astrophysics Data System (ADS)
Albarelli, Francesco; Rossi, Matteo A. C.; Paris, Matteo G. A.; Genoni, Marco G.
2017-12-01
We address the estimation of the magnetic field B acting on an ensemble of atoms with total spin J subjected to collective transverse noise. By preparing an initial spin coherent state, for any measurement performed after the evolution, the mean-square error of the estimate is known to scale as 1/J, i.e. no quantum enhancement is obtained. Here, we consider the possibility of continuously monitoring the atomic environment, and conclusively show that strategies based on time-continuous non-demolition measurements followed by a final strong measurement may achieve Heisenberg-limited scaling 1/J^2 and also a monitoring-enhanced scaling in terms of the interrogation time. We also find that time-continuous schemes are robust against detection losses, as we prove that the quantum enhancement can be recovered also for finite measurement efficiency. Finally, we analytically prove the optimality of our strategy.
da Silva, Neirivaldo Cavalcante; Honorato, Ricardo Saldanha; Pimentel, Maria Fernanda; Garrigues, Salvador; Cervera, Maria Luisa; de la Guardia, Miguel
2015-09-01
There is an increasing demand for herbal medicines in weight-loss treatment. Some synthetic chemicals, such as sibutramine (SB), have been detected as adulterants in herbal formulations. In this study, two strategies using near infrared (NIR) spectroscopy were developed to evaluate potential adulteration of herbal medicines with SB: a qualitative screening approach and a quantitative methodology based on multivariate calibration. Samples comprised products commercialized as herbal medicines as well as laboratory-adulterated samples. Spectra were obtained in the range of 14,000-4000 cm-1. Using PLS-DA, a correct classification of 100% was achieved for the external validation set. In the quantitative approach, the root mean square error of prediction (RMSEP), for both PLS and MLR models, was 0.2% w/w. The results prove the potential of NIR spectroscopy and multivariate calibration for quantifying sibutramine in adulterated herbal medicine samples. © 2015 American Academy of Forensic Sciences.
NASA Technical Reports Server (NTRS)
Stankiewicz, N.
1982-01-01
The multiple channel input signal to a soft limiter amplifier, such as a traveling wave tube, is represented as a finite, linear sum of Gaussian functions in the frequency domain. Linear regression is used to fit the channel shapes to a least squares residual error. Distortions in the output signal, namely intermodulation products, are produced by the nonlinear gain characteristic of the amplifier and constitute the principal noise analyzed in this study. The signal to noise ratios are calculated for various input powers from saturation to 10 dB below saturation for two specific distributions of channels. A criterion for the truncation of the series expansion of the nonlinear transfer characteristic is given. It is found that the signal to noise ratios are very sensitive to the coefficients used in this expansion. Improper or incorrect truncation of the series leads to ambiguous results in the signal to noise ratios.
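Fitting a channel shape as a finite linear sum of Gaussian functions is an ordinary linear least-squares problem once the Gaussian centers and widths are fixed; the sketch below illustrates that step with an assumed basis placement and a synthetic target shape.

    import numpy as np

    f = np.linspace(-1.0, 1.0, 400)      # normalized frequency axis (illustrative)
    target = np.exp(-((f - 0.1) / 0.15) ** 2) + 0.5 * np.exp(-((f + 0.3) / 0.2) ** 2)

    centers = np.linspace(-0.8, 0.8, 9)  # assumed Gaussian centers
    width = 0.15                         # assumed common width
    basis = np.exp(-((f[:, None] - centers[None, :]) / width) ** 2)   # 400 x 9 design matrix

    coeffs, residual, *_ = np.linalg.lstsq(basis, target, rcond=None)
    fit = basis @ coeffs
    print(f"RMS residual: {np.sqrt(np.mean((target - fit) ** 2)):.4f}")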
Empirical State Error Covariance Matrix for Batch Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
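As a generic illustration of the idea (not the paper's derivation), one can solve a weighted batch least-squares problem and scale the formal covariance by the post-fit residuals so that unmodeled error sources inflate the reported uncertainty; the matrix names and noise values below are assumptions.

    import numpy as np

    def weighted_batch_ls(H, y, W):
        """H: (m, n) measurement mapping, y: (m,) observations, W: (m, m) weight matrix."""
        N = H.T @ W @ H                              # normal matrix
        x_hat = np.linalg.solve(N, H.T @ W @ y)
        P_formal = np.linalg.inv(N)                  # formal (theoretical) covariance
        r = y - H @ x_hat                            # post-fit residuals
        dof = y.size - x_hat.size
        P_empirical = P_formal * (r @ W @ r) / dof   # residual-scaled covariance (one common choice)
        return x_hat, P_formal, P_empirical

    rng = np.random.default_rng(0)
    H = rng.normal(size=(50, 3))
    x_true = np.array([1.0, -2.0, 0.5])
    y = H @ x_true + rng.normal(scale=0.3, size=50)
    x_hat, P_f, P_e = weighted_batch_ls(H, y, np.eye(50) / 0.3**2)
    print(x_hat.round(3))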
Optimal estimation of large structure model errors. [in Space Shuttle controller design
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1979-01-01
In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.
Study of Factors Related to Army Delayed-Entry Program Attrition
1985-11-01
level, gender, and tenure in DEP. Military classification and assignment are determined almost solely on cognitive factors and physical examinations. Appendix tables report chi-square tests for independence between gender and responses to Question 13 for voluntary DEP losses and for DEP accession/voluntary active duty losses.
Implementation of neural network for color properties of polycarbonates
NASA Astrophysics Data System (ADS)
Saeed, U.; Ahmad, S.; Alsadi, J.; Ross, D.; Rizvi, G.
2014-05-01
In the present paper, the applicability of artificial neural networks (ANN) is investigated for the color properties of plastics. The neural networks toolbox of Matlab 6.5 is used to develop and test the ANN model on a personal computer. An optimal design is completed for 10, 12, 14, 16, 18, and 20 hidden neurons on a single hidden layer with five different algorithms: batch gradient descent (GD), batch variable learning rate (GDX), resilient back-propagation (RP), scaled conjugate gradient (SCG), and Levenberg-Marquardt (LM) in the feed-forward back-propagation neural network model. The training data for the ANN are obtained from experimental measurements. There were twenty-two inputs, including resins, additives, and pigments, while the three tristimulus color values L*, a*, and b* were used as the output layer. Statistical analysis in terms of root-mean-squared (RMS) error, absolute fraction of variance (R squared), and mean square error is used to investigate the performance of the ANN. The LM algorithm with fourteen neurons on the hidden layer in the feed-forward back-propagation ANN model showed the best results in the present study. The accuracy of the ANN model in reducing errors proved acceptable in all statistical analyses, as shown in the results. It was concluded that the ANN provides a feasible method for reducing error in specific color tristimulus values.
Development of a Nonlinear Soft-Sensor Using a GMDH Network for a Refinery Crude Distillation Tower
NASA Astrophysics Data System (ADS)
Fujii, Kenzo; Yamamoto, Toru
In atmospheric distillation processes, stabilization of the process is required in order to optimize the crude-oil composition that corresponds to product market conditions. However, the process control systems sometimes fall into unstable states when unexpected disturbances are introduced, and these unusual phenomena have had an undesirable effect on certain products. Furthermore, a useful chemical engineering model has not yet been established for these phenomena. This remains a serious problem in the atmospheric distillation process. This paper describes a new modeling scheme to predict unusual phenomena in the atmospheric distillation process using the GMDH (Group Method of Data Handling) network, which is one type of network model. With the GMDH network, the model structure can be determined systematically. However, the least squares method has commonly been utilized in determining the weight coefficients (model parameters), and sufficient estimation accuracy cannot always be expected because the sum of squared errors between the measured values and estimates is evaluated. Therefore, instead of evaluating the sum of squared errors, the sum of the absolute values of the errors is introduced, and the Levenberg-Marquardt method is employed in order to determine the model parameters. The effectiveness of the proposed method is evaluated by foaming prediction in the crude-oil switching operation in the atmospheric distillation process.
Li, Wen-bing; Yao, Lin-tao; Liu, Mu-hua; Huang, Lin; Yao, Ming-yin; Chen, Tian-bing; He, Xiu-wen; Yang, Ping; Hu, Hui-qin; Nie, Jiang-hui
2015-05-01
Cu in navel orange was detected rapidly by laser-induced breakdown spectroscopy (LIBS) combined with partial least squares (PLS) for quantitative analysis, and the effect of different spectral data pretreatment methods on the detection accuracy of the model was explored. Spectral data for the 52 Gannan navel orange samples were pretreated by different data smoothing, mean centering, and standard normal variate transforms. The 319-338 nm wavelength section containing characteristic spectral lines of Cu was then selected to build PLS models, and the main evaluation indexes of the models, such as the regression coefficient (r), root mean square error of cross validation (RMSECV), and root mean square error of prediction (RMSEP), were compared and analyzed. The three indicators of the PLS model built after 13-point smoothing and mean centering reached 0.9928, 3.43, and 3.4, respectively, and the average relative error of the prediction model was only 5.55%; in short, this model gave the best calibration and prediction quality. The results show that, by selecting the appropriate data pre-processing method, the prediction accuracy of a PLS quantitative model for fruits and vegetables analyzed by LIBS can be improved effectively, providing a new method for fast and accurate detection of fruits and vegetables by LIBS.
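A minimal sketch of the pretreatment-then-PLS workflow described above is given below; all spectra and concentrations are simulated, the wavelength windowing step is omitted, and a 13-point Savitzky-Golay filter stands in for whatever smoothing was actually used.

    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(52, 300))        # simulated LIBS spectra for 52 samples
    y = rng.uniform(1.0, 20.0, size=52)   # simulated Cu contents

    X_smooth = savgol_filter(X, window_length=13, polyorder=2, axis=1)   # 13-point smoothing
    X_centered = X_smooth - X_smooth.mean(axis=0)                        # mean centering
    y_centered = y - y.mean()

    pls = PLSRegression(n_components=4).fit(X_centered[:40], y_centered[:40])
    rmsep = np.sqrt(mean_squared_error(y_centered[40:], pls.predict(X_centered[40:])))
    print(f"RMSEP: {rmsep:.3f}")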
Robust Variable Selection with Exponential Squared Loss.
Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping
2013-04-01
Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods.
Robust Variable Selection with Exponential Squared Loss
Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping
2013-01-01
Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods. PMID:23913996
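For intuition, the exponential squared loss can be minimized directly with a general-purpose optimizer; the sketch below fits an unpenalized regression with this loss, taking the common form phi_gamma(r) = 1 - exp(-r^2/gamma), while the variable-selection penalty and the data-driven tuning of gamma used in the paper are omitted.

    import numpy as np
    from scipy.optimize import minimize

    def exp_squared_fit(X, y, gamma=1.0):
        """Minimize sum_i [1 - exp(-(y_i - x_i'b)^2 / gamma)] over b."""
        def loss(beta):
            r = y - X @ beta
            return np.sum(1.0 - np.exp(-r**2 / gamma))
        beta0 = np.linalg.lstsq(X, y, rcond=None)[0]   # start from the OLS solution
        return minimize(loss, beta0, method="BFGS").x

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
    y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.2, size=100)
    y[:5] += 15.0                                      # a few gross outliers
    print(exp_squared_fit(X, y).round(2))              # stays close to [1, 2, -1]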
Lee, J S; Kawakubo, K; Miyamoto, K; Sasaki, M
1999-05-01
Body weight change (gain or loss) and fluctuation have been associated with an increased risk of cardiovascular disease and death from all causes. In this study, to determine the effects of weight change and fluctuation on coronary risk factors (CRF) in Japanese, 242 middle-aged men aged 35-58 years, white-collar workers in Tokyo, were followed for 6 years (1990-1995). Height, body weight, systolic (SBP) and diastolic (DBP) blood pressure, total cholesterol (TC), high (HDL-C) and low (LDL-C) density lipoprotein cholesterol, triglyceride (TG), and fasting plasma glucose (FPG) were measured each year at an annual health check-up. Over the 6 time periods, the mean body mass index (BMI: body weight (kg) divided by the square of the height (m)) was calculated as the value of the BMI level for each subject (BMImean). The direction and magnitude of the change in a subject's BMI were determined by regressing the 6 BMI values on time for each individual (BMIslope). BMI fluctuation was defined as the root mean square error (BMIRMSE) of the regression line fitted to each individual's BMI values over time. The slopes of the 7 CRF were calculated using each regression equation over time. The CRF slopes were most strongly related to the baseline value of each risk factor, BMImean and BMIslope. BMIRMSE was not related to the CRF slopes. In this population, BMI fluctuation had little impact on CRF compared with BMIslope and BMImean. These results indicate that weight gain and loss have a much greater effect on CRF than does weight fluctuation in Japanese middle-aged men.
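The per-subject BMI summaries described above reduce to a simple linear regression of BMI on time for each individual; the sketch below computes BMImean, BMIslope, and BMIRMSE for one illustrative subject.

    import numpy as np

    years = np.arange(6)                                    # 1990-1995 coded 0..5
    bmi = np.array([23.1, 23.4, 23.0, 23.9, 24.1, 24.3])    # illustrative annual BMI values

    slope, intercept = np.polyfit(years, bmi, deg=1)
    fitted = slope * years + intercept
    bmi_mean = bmi.mean()
    bmi_rmse = np.sqrt(np.mean((bmi - fitted) ** 2))        # fluctuation around the trend line

    print(f"BMImean={bmi_mean:.2f}, BMIslope={slope:.3f} per year, BMIRMSE={bmi_rmse:.3f}")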
Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems
Li, Zhining; Zhang, Yingtang; Yin, Gang
2018-01-01
The measurement error of the differencing (i.e., using two homogenous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, nonorthogonality of the single magnetic sensor, and the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, the error parameter linear equations are constructed based on the single-sensor’s system error model to obtain the artificial ideal vector output of the platform, with the total magnetic intensity (TMI) scalar as a reference by two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 parameters of the system is estimated simultaneously. The calibrated system outputs along the reference platform-orthogonal coordinate system. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of TMI accurately, effectively avoiding the “overcalibration” problem. The accuracy of the error parameters’ estimation in the simulation is close to 100%. The experimental root-mean-square error (RMSE) of the TMI and tensor components is less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544
Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard
2011-01-01
In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables regression (EIV) allows for correction of bias caused by measurement error in predictor variables, based on the knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by the standard procedures. Results of the simulations show that Ordinary Least Squares (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are less precise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS to estimate the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
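The essence of the correction can be seen in the single-predictor case: estimate a reliability coefficient from the instrument-reported error variances and disattenuate the OLS slope. The sketch below is a simplified illustration of that idea with simulated data, not the authors' full EIV procedure.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    true_lead = rng.normal(10.0, 4.0, size=n)              # true bone lead (arbitrary units)
    meas_sd = rng.uniform(2.0, 4.0, size=n)                # per-measurement KXRF uncertainty
    measured_lead = true_lead + rng.normal(0.0, meas_sd)   # error-contaminated exposure
    outcome = 0.3 * true_lead + rng.normal(0.0, 1.0, size=n)

    beta_ols = np.polyfit(measured_lead, outcome, 1)[0]
    reliability = 1.0 - np.mean(meas_sd**2) / np.var(measured_lead, ddof=1)
    beta_corrected = beta_ols / reliability

    print(f"OLS slope: {beta_ols:.3f}, corrected slope: {beta_corrected:.3f} (true value 0.3)")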
Identifying model error in metabolic flux analysis - a generalized least squares approach.
Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G
2016-09-13
The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit outside of identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2-4 fold larger error (if measurement uncertainty is in the 5-10 % range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.
Tests of Independence in Contingency Tables with Small Samples: A Comparison of Statistical Power.
ERIC Educational Resources Information Center
Parshall, Cynthia G.; Kromrey, Jeffrey D.
1996-01-01
Power and Type I error rates were estimated for contingency tables with small sample sizes for the following four types of tests: (1) Pearson's chi-square; (2) chi-square with Yates's continuity correction; (3) the likelihood ratio test; and (4) Fisher's Exact Test. Various marginal distributions, sample sizes, and effect sizes were examined. (SLD)
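All four tests are available in scipy.stats; the sketch below applies them to one illustrative 2x2 table (the likelihood ratio, or G, test is obtained through the log-likelihood option of chi2_contingency).

    import numpy as np
    from scipy.stats import chi2_contingency, fisher_exact

    table = np.array([[8, 4],
                      [3, 10]])                                        # illustrative counts

    pearson_p = chi2_contingency(table, correction=False)[1]           # Pearson chi-square
    yates_p = chi2_contingency(table, correction=True)[1]              # Yates continuity correction
    lr_p = chi2_contingency(table, correction=False,
                            lambda_="log-likelihood")[1]               # likelihood ratio (G) test
    fisher_p = fisher_exact(table)[1]                                  # Fisher's Exact Test

    print(f"Pearson p={pearson_p:.3f}, Yates p={yates_p:.3f}, "
          f"LR p={lr_p:.3f}, Fisher p={fisher_p:.3f}")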
ERIC Educational Resources Information Center
Maggin, Daniel M.; Swaminathan, Hariharan; Rogers, Helen J.; O'Keeffe, Breda V.; Sugai, George; Horner, Robert H.
2011-01-01
A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of…
Wu, Jibo
2016-01-01
In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. Its mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is illustrated by a simulation study and a numerical example.
Liu, Xue-song; Sun, Fen-fang; Jin, Ye; Wu, Yong-jiang; Gu, Zhi-xin; Zhu, Li; Yan, Dong-lan
2015-12-01
A novel method was developed for the rapid determination of multi-indicators in corni fructus by means of near infrared (NIR) spectroscopy. Particle swarm optimization (PSO) based least squares support vector machine was investigated to increase the levels of quality control. The calibration models of moisture, extractum, morroniside and loganin were established using the PSO-LS-SVM algorithm. The performance of PSO-LS-SVM models was compared with partial least squares regression (PLSR) and back propagation artificial neural network (BP-ANN). The calibration and validation results of PSO-LS-SVM were superior to both PLS and BP-ANN. For PSO-LS-SVM models, the correlation coefficients (r) of calibrations were all above 0.942. The optimal prediction results were also achieved by PSO-LS-SVM models with the RMSEP (root mean square error of prediction) and RSEP (relative standard errors of prediction) less than 1.176 and 15.5% respectively. The results suggest that PSO-LS-SVM algorithm has a good model performance and high prediction accuracy. NIR has a potential value for rapid determination of multi-indicators in Corni Fructus.
Castillo, Edward; Castillo, Richard; White, Benjamin; Rojo, Javier; Guerrero, Thomas
2012-01-01
Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel that lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. PMID:22797602
Jiang, Jie; Yu, Wenbo; Zhang, Guangjun
2017-01-01
Navigation accuracy is one of the key performance indicators of an inertial navigation system (INS). Requirements for an accuracy assessment of an INS in a real work environment are exceedingly urgent because of enormous differences between real work and laboratory test environments. An attitude accuracy assessment of an INS based on the intensified high dynamic star tracker (IHDST) is particularly suitable for a real complex dynamic environment. However, the coupled systematic coordinate errors of an INS and the IHDST severely decrease the attitude assessment accuracy of an INS. Given that, a high-accuracy decoupling estimation method of the above systematic coordinate errors based on the constrained least squares (CLS) method is proposed in this paper. The reference frame of the IHDST is firstly converted to be consistent with that of the INS because their reference frames are completely different. Thereafter, the decoupling estimation model of the systematic coordinate errors is established and the CLS-based optimization method is utilized to estimate errors accurately. After compensating for error, the attitude accuracy of an INS can be assessed based on IHDST accurately. Both simulated experiments and real flight experiments of aircraft are conducted, and the experimental results demonstrate that the proposed method is effective and shows excellent performance for the attitude accuracy assessment of an INS in a real work environment. PMID:28991179
The GEOS Ozone Data Assimilation System: Specification of Error Statistics
NASA Technical Reports Server (NTRS)
Stajner, Ivanka; Riishojgaard, Lars Peter; Rood, Richard B.
2000-01-01
A global three-dimensional ozone data assimilation system has been developed at the Data Assimilation Office of the NASA/Goddard Space Flight Center. The Total Ozone Mapping Spectrometer (TOMS) total ozone and the Solar Backscatter Ultraviolet (SBUV) or (SBUV/2) partial ozone profile observations are assimilated. The assimilation, into an off-line ozone transport model, is done using the global Physical-space Statistical Analysis Scheme (PSAS). This system became operational in December 1999. A detailed description of the statistical analysis scheme, and in particular, the forecast and observation error covariance models is given. A new global anisotropic horizontal forecast error correlation model accounts for a varying distribution of observations with latitude. Correlations are largest in the zonal direction in the tropics where data is sparse. The forecast error variance model is proportional to the ozone field. The forecast error covariance parameters were determined by maximum likelihood estimation. The error covariance models are validated using chi-squared statistics. The analyzed ozone fields in the winter of 1992 are validated against independent observations from ozone sondes and HALOE. There is better than 10% agreement between mean Halogen Occultation Experiment (HALOE) and analysis fields between 70 and 0.2 hPa. The global root-mean-square (RMS) difference between TOMS observed and forecast values is less than 4%. The global RMS difference between SBUV observed and analyzed ozone between 50 and 3 hPa is less than 15%.
Kinetic Modeling of a Silicon Refining Process in a Moist Hydrogen Atmosphere
NASA Astrophysics Data System (ADS)
Chen, Zhiyuan; Morita, Kazuki
2018-03-01
We developed a kinetic model that considers both silicon loss and boron removal in a metallurgical grade silicon refining process. This model was based on the hypotheses of reversible reactions. The reaction rate coefficient keeps the same form, but error in the terminal boron concentration can be introduced when the reactions are instead treated as irreversible. Experimental data from published studies were used to develop a model that fit the existing data. At 1500 °C, our kinetic analysis suggested that refining silicon in a moist hydrogen atmosphere generates several primary volatile species, including SiO, SiH, HBO, and HBO2. Using the experimental data and the kinetic analysis of volatile species, we developed a model that predicts a linear relationship between the reaction rate coefficient k and both the quadratic function of p(H2O) and the square root of p(H2). Moreover, the model predicted the partial pressure values for the predominant volatile species and the prediction was confirmed by the thermodynamic calculations, indicating the reliability of the model. We believe this model provides a foundation for designing a silicon refining process with a fast boron removal rate and low silicon loss.
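The reported relationship between k, p(H2O), and p(H2) can be fitted by ordinary linear least squares once the regressors are formed; the sketch below assumes the additive form k = c0 + c1*p(H2O)^2 + c2*sqrt(p(H2)), and all pressures and rate coefficients are illustrative rather than the paper's data.

    import numpy as np

    p_h2o = np.array([0.02, 0.04, 0.06, 0.08, 0.10])              # atm, illustrative
    p_h2 = np.array([0.5, 0.6, 0.7, 0.8, 0.9])                    # atm, illustrative
    k_obs = np.array([1.1e-5, 1.6e-5, 2.3e-5, 3.2e-5, 4.3e-5])    # illustrative rate coefficients

    A = np.column_stack([np.ones_like(p_h2o), p_h2o**2, np.sqrt(p_h2)])
    coeffs, *_ = np.linalg.lstsq(A, k_obs, rcond=None)
    print(coeffs)   # c0, c1, c2 of the assumed form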
A Multivariate Quality Loss Function Approach for Optimization of Spinning Processes
NASA Astrophysics Data System (ADS)
Chakraborty, Shankar; Mitra, Ankan
2018-05-01
Recent advancements in textile industry have given rise to several spinning techniques, such as ring spinning, rotor spinning etc., which can be used to produce a wide variety of textile apparels so as to fulfil the end requirements of the customers. To achieve the best out of these processes, they should be utilized at their optimal parametric settings. However, in presence of multiple yarn characteristics which are often conflicting in nature, it becomes a challenging task for the spinning industry personnel to identify the best parametric mix which would simultaneously optimize all the responses. Hence, in this paper, the applicability of a new systematic approach in the form of multivariate quality loss function technique is explored for optimizing multiple quality characteristics of yarns while identifying the ideal settings of two spinning processes. It is observed that this approach performs well against the other multi-objective optimization techniques, such as desirability function, distance function and mean squared error methods. With slight modifications in the upper and lower specification limits of the considered quality characteristics, and constraints of the non-linear optimization problem, it can be successfully applied to other processes in textile industry to determine their optimal parametric settings.
Kinetic Modeling of a Silicon Refining Process in a Moist Hydrogen Atmosphere
NASA Astrophysics Data System (ADS)
Chen, Zhiyuan; Morita, Kazuki
2018-06-01
We developed a kinetic model that considers both silicon loss and boron removal in a metallurgical grade silicon refining process. This model was based on the hypotheses of reversible reactions. The reaction rate coefficient keeps the same form, but error in the terminal boron concentration can be introduced when the reactions are instead treated as irreversible. Experimental data from published studies were used to develop a model that fit the existing data. At 1500 °C, our kinetic analysis suggested that refining silicon in a moist hydrogen atmosphere generates several primary volatile species, including SiO, SiH, HBO, and HBO2. Using the experimental data and the kinetic analysis of volatile species, we developed a model that predicts a linear relationship between the reaction rate coefficient k and both the quadratic function of p(H2O) and the square root of p(H2). Moreover, the model predicted the partial pressure values for the predominant volatile species and the prediction was confirmed by the thermodynamic calculations, indicating the reliability of the model. We believe this model provides a foundation for designing a silicon refining process with a fast boron removal rate and low silicon loss.
Expanded prediction equations of human sweat loss and water needs.
Gonzalez, R R; Cheuvront, S N; Montain, S J; Goodman, D A; Blanchard, L A; Berglund, L G; Sawka, M N
2009-08-01
The Institute of Medicine expressed a need for improved sweating rate (msw) prediction models that calculate hourly and daily water needs based on metabolic rate, clothing, and environment. More than 25 years ago, the original Shapiro prediction equation (OSE) was formulated as msw (g·m^-2·h^-1) = 27.9·Ereq·(Emax)^-0.455, where Ereq is required evaporative heat loss and Emax is maximum evaporative power of the environment; OSE was developed for a limited set of environments, exposure times, and clothing systems. Recent evidence shows that OSE often overpredicts fluid needs. Our study developed a corrected OSE and a new msw prediction equation by using independent data sets from a wide range of environmental conditions, metabolic rates (rest to
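The OSE quoted above is a one-line calculation; the sketch below simply transcribes it, with illustrative input values.

    def ose_sweat_rate(e_req, e_max):
        """Original Shapiro equation: predicted sweating rate in g per m^2 per hour."""
        return 27.9 * e_req * e_max ** -0.455

    # Illustrative inputs: Ereq = 200 W/m^2, Emax = 400 W/m^2.
    print(round(ose_sweat_rate(200.0, 400.0), 1))   # roughly 365 g.m^-2.h^-1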
Lognormal Kalman filter for assimilating phase space density data in the radiation belts
NASA Astrophysics Data System (ADS)
Kondrashov, D.; Ghil, M.; Shprits, Y.
2011-11-01
Data assimilation combines a physical model with sparse observations and has become an increasingly important tool for scientists and engineers in the design, operation, and use of satellites and other high-technology systems in the near-Earth space environment. Of particular importance is predicting fluxes of high-energy particles in the Van Allen radiation belts, since these fluxes can damage spaceborne platforms and instruments during strong geomagnetic storms. In transiting from a research setting to operational prediction of these fluxes, improved data assimilation is of the essence. The present study is motivated by the fact that phase space densities (PSDs) of high-energy electrons in the outer radiation belt, both simulated and observed, are subject to spatiotemporal variations that span several orders of magnitude. Standard data assimilation methods that are based on least squares minimization of normally distributed errors may not be adequate for handling the range of these variations. We propose herein a modification of Kalman filtering that uses a log-transformed, one-dimensional radial diffusion model for the PSDs and includes parameterized losses. The proposed methodology is first verified on model-simulated, synthetic data and then applied to actual satellite measurements. When the model errors are sufficiently smaller than the observational errors, our methodology can significantly improve analysis and prediction skill for the PSDs compared to those of the standard Kalman filter formulation. This improvement is documented by monitoring the variance of the innovation sequence.
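The core of the log-transform idea can be shown with a scalar Kalman filter run on log10(PSD), so that order-of-magnitude variations become additive; the random-walk model and noise levels below are illustrative stand-ins for the paper's radial diffusion model with parameterized losses.

    import numpy as np

    def lognormal_kf(psd_observations, x0=0.0, p0=1.0, q=0.05, r=0.1):
        """Filter positive PSD observations in log space; returns estimates in physical units."""
        x, p = x0, p0
        estimates = []
        for z in np.log10(psd_observations):   # assimilate in log space
            p = p + q                          # predict (random-walk state model)
            k = p / (p + r)                    # Kalman gain
            x = x + k * (z - x)                # update with the log-space innovation
            p = (1.0 - k) * p
            estimates.append(10.0 ** x)        # report back in physical units
        return np.array(estimates)

    rng = np.random.default_rng(0)
    truth = 10.0 ** np.linspace(2.0, 5.0, 40)              # PSD varying over three decades
    obs = truth * 10.0 ** rng.normal(0.0, 0.3, size=40)    # multiplicative observation noise
    print(lognormal_kf(obs)[:5].round(1))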
Allegrini, Franco; Braga, Jez W B; Moreira, Alessandro C O; Olivieri, Alejandro C
2018-06-29
A new multivariate regression model, named Error Covariance Penalized Regression (ECPR) is presented. Following a penalized regression strategy, the proposed model incorporates information about the measurement error structure of the system, using the error covariance matrix (ECM) as a penalization term. Results are reported from both simulations and experimental data based on replicate mid and near infrared (MIR and NIR) spectral measurements. The results for ECPR are better under non-iid conditions when compared with traditional first-order multivariate methods such as ridge regression (RR), principal component regression (PCR) and partial least-squares regression (PLS). Copyright © 2018 Elsevier B.V. All rights reserved.
Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui
2017-06-13
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.
Comparison of two algorithms in the automatic segmentation of blood vessels in fundus images
NASA Astrophysics Data System (ADS)
LeAnder, Robert; Chowdary, Myneni Sushma; Mokkapati, Swapnasri; Umbaugh, Scott E.
2008-03-01
Effective timing and treatment are critical to saving the sight of patients with diabetes. Lack of screening, as well as a shortage of ophthalmologists, help contribute to approximately 8,000 cases per year of people who lose their sight to diabetic retinopathy, the leading cause of new cases of blindness [1] [2]. Timely treatment for diabetic retinopathy prevents severe vision loss in over 50% of eyes tested [1]. Fundus images can provide information for detecting and monitoring eye-related diseases, like diabetic retinopathy, which if detected early, may help prevent vision loss. Damaged blood vessels can indicate the presence of diabetic retinopathy [9]. So, early detection of damaged vessels in retinal images can provide valuable information about the presence of disease, thereby helping to prevent vision loss. Purpose: The purpose of this study was to compare the effectiveness of two blood vessel segmentation algorithms. Methods: Fifteen fundus images from the STARE database were used to develop two algorithms using the CVIPtools software environment. Another set of fifteen images were derived from the first fifteen and contained ophthalmologists' hand-drawn tracings over the retinal vessels. The ophthalmologists' tracings were used as the "gold standard" for perfect segmentation and compared with the segmented images that were output by the two algorithms. Comparisons between the segmented and the hand-drawn images were made using Pratt's Figure of Merit (FOM), Signal-to-Noise Ratio (SNR) and Root Mean Square (RMS) Error. Results: Algorithm 2 has an FOM that is 10% higher than Algorithm 1. Algorithm 2 has a 6%-higher SNR than Algorithm 1. Algorithm 2 has only 1.3% more RMS error than Algorithm 1. Conclusions: Algorithm 1 extracted most of the blood vessels with some missing intersections and bifurcations. Algorithm 2 extracted all the major blood vessels, but eradicated some vessels as well. Algorithm 2 outperformed Algorithm 1 in terms of visual clarity, FOM and SNR. The performances of these algorithms show that they have an appreciable amount of potential in helping ophthalmologists detect the severity of eye-related diseases and prevent vision loss.
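Two of the comparison metrics used above can be computed directly from a segmented binary image and the hand-traced reference; the definitions below (and the random placeholder images) are illustrative, and the SNR form shown is one common choice rather than the specific CVIPtools formula.

    import numpy as np

    def rms_error(segmented, reference):
        return np.sqrt(np.mean((segmented.astype(float) - reference.astype(float)) ** 2))

    def snr_db(segmented, reference):
        noise = segmented.astype(float) - reference.astype(float)
        return 10.0 * np.log10(np.sum(reference.astype(float) ** 2) / np.sum(noise ** 2))

    rng = np.random.default_rng(0)
    reference = rng.integers(0, 2, size=(64, 64))   # placeholder "gold standard" vessel mask
    segmented = reference.copy()
    flip = rng.random((64, 64)) < 0.05              # corrupt 5% of pixels
    segmented[flip] = 1 - segmented[flip]

    print(f"RMS error: {rms_error(segmented, reference):.3f}, SNR: {snr_db(segmented, reference):.1f} dB")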
Error in Dasibi flight measurements of atmospheric ozone due to instrument wall-loss
NASA Technical Reports Server (NTRS)
Ainsworth, J. E.; Hagemeyer, J. R.; Reed, E. I.
1981-01-01
Theory suggests that in laminar flow the percent loss of a trace constituent to the walls of a measuring instrument varies as P to the -2/3, where P is the total gas pressure. Preliminary laboratory ozone wall-loss measurements confirm this P to the -2/3 dependence. Accurate assessment of wall-loss is thus of particular importance for those balloon-borne instruments utilizing laminar flow at ambient pressure, since the ambient pressure decreases by a factor of 350 during ascent to 40 km. Measurements and extrapolations made for a Dasibi ozone monitor modified for balloon flight indicate that the wall-loss error at 40 km was between 6 and 30 percent and that the wall-loss error in the derived total ozone column-content for the region from the surface to 40 km altitude was between 2 and 10 percent. At 1000 mb, turbulence caused an order of magnitude increase in the Dasibi wall-loss.
Tan, Jin; Li, Rong; Jiang, Zi-Tao; Tang, Shu-Hua; Wang, Ying; Shi, Meng; Xiao, Yi-Qian; Jia, Bin; Lu, Tian-Xiang; Wang, Hao
2017-02-15
Synchronous front-face fluorescence spectroscopy has been developed for the discrimination of used frying oil (UFO) from edible vegetable oil (EVO), the estimation of the using time of UFO, and the determination of the adulteration of EVO with UFO. Both the heating time of laboratory prepared UFO and the adulteration of EVO with UFO could be determined by partial least squares regression (PLSR). To simulate the EVO adulteration with UFO, for each kind of oil, fifty adulterated samples at the adulterant amounts range of 1-50% were prepared. PLSR was then adopted to build the model and both full (leave-one-out) cross-validation and external validation were performed to evaluate the predictive ability. Under the optimum condition, the plots of observed versus predicted values exhibited high linearity (R(2)>0.96). The root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP) were both lower than 3%. Copyright © 2016 Elsevier Ltd. All rights reserved.
The effects of window shape and reticle presence on performance in a vertical alignment task
NASA Technical Reports Server (NTRS)
Rosenberg, Erika L.; Haines, Richard F.; Jordan, Kevin
1989-01-01
This study was conducted to evaluate the effect of selected interior work-station orientational cuing upon the ability to align a target image with local vertical in the frontal plane. Angular error from gravitational vertical in an alignment task was measured for 20 observers viewing through two window shapes (square, round), two initial orientations of a computer-generated space shuttle image, and the presence or absence of a stabilized optical alignment reticle. In terms of overall accuracy, it was found that observer error was significantly smaller for the square window and reticle-present conditions than for the round window and reticle-absent conditions. Response bias data reflected an overall tendency to undershoot and greater variability of response in the round window/no reticle condition. These results suggest that environmental cuing information, such as that provided by square window frames and alignment reticles, may aid in subjective orientation and increase accuracy of response in a Space Station proximity operations alignment task.
Kuriakose, Saji; Joe, I Hubert
2013-11-01
Determination of the authenticity of essential oils has become more significant, in recent years, following some illegal adulteration and contamination scandals. The present investigative study focuses on the application of near infrared spectroscopy to detect sample authenticity and quantify economic adulteration of sandalwood oils. Several data pre-treatments are investigated for calibration and prediction using partial least square regression (PLSR). The quantitative data analysis is done using a new spectral approach - full spectrum or sequential spectrum. The optimum number of PLS components is obtained according to the lowest root mean square error of calibration (RMSEC=0.00009% v/v). The lowest root mean square error of prediction (RMSEP=0.00016% v/v) in the test set and the highest coefficient of determination (R(2)=0.99989) are used as the evaluation tools for the best model. A nonlinear method, locally weighted regression (LWR), is added to extract nonlinear information and to compare with the linear PLSR model. Copyright © 2013 Elsevier B.V. All rights reserved.
Pellegrino Vidal, Rocío B; Allegrini, Franco; Olivieri, Alejandro C
2018-03-20
Multivariate curve resolution-alternating least-squares (MCR-ALS) is the model of choice when dealing with some non-trilinear arrays, specifically when the data are of chromatographic origin. To drive the iterative procedure to chemically interpretable solutions, the use of constraints becomes essential. In this work, both simulated and experimental data have been analyzed by MCR-ALS, applying chemically reasonable constraints, and investigating the relationship between selectivity, analytical sensitivity (γ) and root mean square error of prediction (RMSEP). As the selectivity in the instrumental modes decreases, the estimated values for γ did not fully represent the predictive model capabilities, judged from the obtained RMSEP values. Since the available sensitivity expressions have been developed by error propagation theory in unconstrained systems, there is a need of developing new expressions or analytical indicators. They should not only consider the specific profiles retrieved by MCR-ALS, but also the constraints under which the latter ones have been obtained. Copyright © 2017 Elsevier B.V. All rights reserved.
Miaw, Carolina Sheng Whei; Assis, Camila; Silva, Alessandro Rangel Carolino Sales; Cunha, Maria Luísa; Sena, Marcelo Martins; de Souza, Scheilla Vitorino Carvalho
2018-07-15
Grape, orange, peach and passion fruit nectars were formulated and adulterated by dilution with syrup, apple and cashew juices at 10 levels for each adulterant. Attenuated total reflectance Fourier transform mid infrared (ATR-FTIR) spectra were obtained. Partial least squares (PLS) multivariate calibration models allied to different variable selection methods, such as interval partial least squares (iPLS), ordered predictors selection (OPS) and genetic algorithm (GA), were used to quantify the main fruits. PLS improved by iPLS-OPS variable selection showed the highest predictive capacity to quantify the main fruit contents. The selected variables in the final models varied from 72 to 100; the root mean square errors of prediction were estimated from 0.5 to 2.6%; the correlation coefficients of prediction ranged from 0.948 to 0.990; and, the mean relative errors of prediction varied from 3.0 to 6.7%. All of the developed models were validated. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kuriakose, Saji; Joe, I. Hubert
2013-11-01
Determination of the authenticity of essential oils has become more significant, in recent years, following some illegal adulteration and contamination scandals. The present investigative study focuses on the application of near infrared spectroscopy to detect sample authenticity and quantify economic adulteration of sandalwood oils. Several data pre-treatments are investigated for calibration and prediction using partial least square regression (PLSR). The quantitative data analysis is done using a new spectral approach - full spectrum or sequential spectrum. The optimum number of PLS components is obtained according to the lowest root mean square error of calibration (RMSEC = 0.00009% v/v). The lowest root mean square error of prediction (RMSEP = 0.00016% v/v) in the test set and the highest coefficient of determination (R2 = 0.99989) are used as the evaluation tools for the best model. A nonlinear method, locally weighted regression (LWR), is added to extract nonlinear information and to compare with the linear PLSR model.
A New Test of Linear Hypotheses in OLS Regression under Heteroscedasticity of Unknown Form
ERIC Educational Resources Information Center
Cai, Li; Hayes, Andrew F.
2008-01-01
When the errors in an ordinary least squares (OLS) regression model are heteroscedastic, hypothesis tests involving the regression coefficients can have Type I error rates that are far from the nominal significance level. Asymptotically, this problem can be rectified with the use of a heteroscedasticity-consistent covariance matrix (HCCM)…
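The truncated abstract above turns on the use of a heteroscedasticity-consistent covariance matrix (HCCM) for OLS coefficients. As a hedged illustration only, not the specific test proposed by Cai and Hayes, the widely used HC3 sandwich estimator can be sketched as follows; the data and model are hypothetical.

```python
import numpy as np

def ols_with_hc3(X, y):
    """OLS fit plus an HC3 heteroscedasticity-consistent covariance matrix.
    X is an (n, p) design matrix that already includes an intercept column."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y                          # OLS coefficients
    resid = y - X @ beta
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)       # leverages (hat-matrix diagonal)
    omega = (resid / (1.0 - h)) ** 2                  # HC3 rescaled squared residuals
    cov_hc3 = XtX_inv @ (X.T @ (omega[:, None] * X)) @ XtX_inv   # sandwich estimator
    return beta, np.sqrt(np.diag(cov_hc3))

# toy regression with error variance that grows with x (heteroscedastic)
rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 200)
y = 1.0 + 0.5 * x + rng.normal(scale=0.2 * x)
X = np.column_stack([np.ones_like(x), x])
beta, se_hc3 = ols_with_hc3(X, y)
print(beta, se_hc3)     # robust standard errors for intercept and slope
```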
An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.
ERIC Educational Resources Information Center
De Ayala, R. J.; And Others
Expected a posteriori has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…
ERIC Educational Resources Information Center
Rocconi, Louis M.
2011-01-01
Hierarchical linear models (HLM) solve the problems associated with the unit of analysis problem such as misestimated standard errors, heterogeneity of regression and aggregation bias by modeling all levels of interest simultaneously. Hierarchical linear modeling resolves the problem of misestimated standard errors by incorporating a unique random…
Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches
NASA Astrophysics Data System (ADS)
Mohammed, E.; Wang, S.; Yu, J.
2017-05-01
Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid method of VSTWPP based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS) and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: 1) transform the time-series data of actual wind power into the power ratio, and then predict the power ratio; 2) use the predicted power ratio to predict the wind power. In addition, the proposed method can include two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The predictions are compared against an auto-regressive moving average (ARMA) model in terms of predicted values and errors. The validity of the proposed hybrid method is confirmed by error analysis using the probability density function (PDF), mean absolute percent error (MAPE) and mean square error (MSE). Meanwhile, comparison of the correlation coefficients between the actual and predicted values for different prediction times and windows confirms that the MSP approach with the hybrid model is the most accurate, compared with the SSP approach and ARMA. The MLR&LS method is accurate and promising for solving problems in WPP.
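The error measures used above (MAPE and MSE, plus the correlation between actual and predicted values) are straightforward to compute; the following sketch uses hypothetical wind-power numbers, not data from the study.

```python
import numpy as np

def mape(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def mse(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean((actual - predicted) ** 2)

# hypothetical one-step-ahead wind power forecasts (MW)
actual    = [12.1, 13.4, 11.8, 10.2, 9.7]
predicted = [11.8, 13.9, 12.3, 10.0, 9.1]
print(f"MAPE = {mape(actual, predicted):.2f} %")
print(f"MSE  = {mse(actual, predicted):.3f} MW^2")
print(f"corr = {np.corrcoef(actual, predicted)[0, 1]:.3f}")
```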
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, J; Hu, W; Xing, Y
Purpose: Different particle scanning beam delivery systems have different delivery accuracies. This study was performed to determine, for our particle treatment system, an appropriate composition (n = FWHM/GS) of spot size (FWHM) and grid size (GS), which can provide homogeneous delivered dose distributions for both proton and heavy ion scanning beam radiotherapy. Methods: We analyzed the delivery errors of our beam delivery system using log files from the treatment of 28 patients. We used a homemade program to simulate square fields for different n values with and without considering the delivery errors and analyzed the homogeneity. All spots were located on a rectilinear grid with equal spacing in the x and y directions. After that, we selected 7 energy levels for both proton and carbon ions. For each energy level, we made 6 square field plans with different n values (1, 1.5, 2, 2.5, 3, 3.5). Then we delivered those plans and used films to measure the homogeneity of each field. Results: For program simulations without delivery errors, the homogeneity can be within ±3% when n ≥ 1.1. For both proton and carbon program simulations with delivery errors and film measurements, the homogeneity can be within ±3% when n ≥ 2.5. Conclusion: For our facility with system errors, n ≥ 2.5 is appropriate for maintaining homogeneity within ±3%.
Improving the Glucose Meter Error Grid With the Taguchi Loss Function.
Krouwer, Jan S
2016-07-01
Glucose meters often have similar performance when compared by error grid analysis. This is one reason that other statistics such as mean absolute relative deviation (MARD) are used to further differentiate performance. The problem with MARD is that too much information is lost. But additional information is available within the A zone of an error grid by using the Taguchi loss function. Applying the Taguchi loss function gives each glucose meter difference from reference a value ranging from 0 (no error) to 1 (error reaches the A zone limit). Values are averaged over all data which provides an indication of risk of an incorrect medical decision. This allows one to differentiate glucose meter performance for the common case where meters have a high percentage of values in the A zone and no values beyond the B zone. Examples are provided using simulated data. © 2015 Diabetes Technology Society.
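A minimal sketch of the idea, assuming a quadratic (Taguchi) loss scaled so that the loss reaches 1 at the A-zone boundary; the 15% A-zone half-width used below is hypothetical, since real error grids define the boundary piecewise over the glucose range.

```python
import numpy as np

def taguchi_loss(meter, reference, a_zone_limit):
    """Quadratic (Taguchi) loss, scaled so loss = 1 when the difference from
    reference reaches the A-zone boundary.  `a_zone_limit` is the absolute
    A-zone half-width at each reference value (assumed, user-supplied)."""
    meter, reference = np.asarray(meter, float), np.asarray(reference, float)
    d = np.abs(meter - reference)
    loss = (d / np.asarray(a_zone_limit, float)) ** 2
    return np.clip(loss, 0.0, 1.0)       # values beyond the A zone saturate at 1

# simulated data: reference values (mg/dL), meter readings, assumed A-zone half-widths
reference = np.array([ 80., 120., 200., 300.])
meter     = np.array([ 85., 110., 210., 290.])
limit     = 0.15 * reference             # hypothetical 15 % A-zone width
mean_loss = taguchi_loss(meter, reference, limit).mean()
print(f"average Taguchi loss = {mean_loss:.3f}")
```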
Koch, Cosima; Posch, Andreas E; Goicoechea, Héctor C; Herwig, Christoph; Lendl, Bernhard
2014-01-07
This paper presents the quantification of Penicillin V and phenoxyacetic acid, a precursor, inline during Pencillium chrysogenum fermentations by FTIR spectroscopy and partial least squares (PLS) regression and multivariate curve resolution - alternating least squares (MCR-ALS). First, the applicability of an attenuated total reflection FTIR fiber optic probe was assessed offline by measuring standards of the analytes of interest and investigating matrix effects of the fermentation broth. Then measurements were performed inline during four fed-batch fermentations with online HPLC for the determination of Penicillin V and phenoxyacetic acid as reference analysis. PLS and MCR-ALS models were built using these data and validated by comparison of single analyte spectra with the selectivity ratio of the PLS models and the extracted spectral traces of the MCR-ALS models, respectively. The achieved root mean square errors of cross-validation for the PLS regressions were 0.22 g L(-1) for Penicillin V and 0.32 g L(-1) for phenoxyacetic acid and the root mean square errors of prediction for MCR-ALS were 0.23 g L(-1) for Penicillin V and 0.15 g L(-1) for phenoxyacetic acid. A general work-flow for building and assessing chemometric regression models for the quantification of multiple analytes in bioprocesses by FTIR spectroscopy is given. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
Seyssel, Kevin; Suter, Michel; Pattou, François; Caiazzo, Robert; Verkindt, Helene; Raverdy, Violeta; Jolivet, Mathieu; Disse, Emmanuel; Robert, Maud; Giusti, Vittorio
2018-06-19
Different factors, such as age, gender, preoperative weight but also the patient's motivation, are known to impact outcomes after Roux-en-Y gastric bypass (RYGBP). Weight loss prediction is helpful to define realistic expectations and maintain motivation during follow-up, but also to select good candidates for surgery and limit failures. Therefore, developing a realistic predictive tool appears interesting. A Swiss cohort (n = 444), who underwent RYGBP, was used, with multiple linear regression models, to predict weight loss up to 60 months after surgery considering age, height, gender and weight at baseline. We then applied our model to two French cohorts and compared the predicted weight to the one finally reached. Accuracy of our model was controlled using root mean square error (RMSE). Mean weight loss was 43.6 ± 13.0 and 40.8 ± 15.4 kg at 12 and 60 months respectively. The model was reliable for predicting weight loss (0.37 < R² < 0.48), with RMSE between 5.0 and 12.2 kg. High preoperative weight and young age were positively correlated to weight loss, as well as male gender. Correlations between predicted weight and real weight were highly significant in both validation cohorts (R ≥ 0.7 and P < 0.01) and RMSE increased throughout follow-up between 6.2 and 15.4 kg. Our statistical model to predict weight loss outcomes after RYGBP seems accurate. It could be a valuable tool to define realistic weight loss expectations and to improve patient selection and outcomes during follow-up. Further research is needed to demonstrate the interest of this model in improving patients' motivation and results and limiting failures.
Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M
2015-04-29
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.
Lankford, Christopher L; Does, Mark D
2018-02-01
Quantitative MRI may require correcting for nuisance parameters which can or must be constrained to independently measured or assumed values. The noise and/or bias in these constraints propagate to fitted parameters. For example, the case of refocusing pulse flip angle constraint in multiple spin echo T2 mapping is explored. An analytical expression for the mean-squared error of a parameter of interest was derived as a function of the accuracy and precision of an independent estimate of a nuisance parameter. The expression was validated by simulations and then used to evaluate the effects of flip angle (θ) constraint on the accuracy and precision of T̂2 for a variety of multi-echo T2 mapping protocols. Constraining θ improved T̂2 precision when the θ-map signal-to-noise ratio was greater than approximately one-half that of the first spin echo image. For many practical scenarios, constrained fitting was calculated to reduce not just the variance but the full mean-squared error of T̂2, for bias in θ̂ ≲ 6%. The analytical expression derived in this work can be applied to inform experimental design in quantitative MRI. The example application to T2 mapping provided specific cases, depending on θ̂ accuracy and precision, in which θ̂ measurement and constraint would be beneficial to T̂2 variance or mean-squared error. Magn Reson Med 79:673-682, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Medium-range Performance of the Global NWP Model
NASA Astrophysics Data System (ADS)
Kim, J.; Jang, T.; Kim, J.; Kim, Y.
2017-12-01
The medium-range performance of the global numerical weather prediction (NWP) model of the Korea Meteorological Administration (KMA) is investigated, based on the prediction of the extratropical circulation. The mean square error is expressed as the sum of the spatial variance of the discrepancy between forecasts and observations and the square of the mean error (ME). Thus, it is important to investigate the ME effect in order to understand the model performance. The ME is obtained by subtracting the anomaly from the forecast difference against the real climatology. It is found that the global model suffers from a severe systematic ME in medium-range forecasts. The systematic ME is dominant throughout the entire troposphere in all months and can explain at most 25% of the root mean square error. We also compare the extratropical ME distribution with that of other NWP centers; the NWP models exhibit spatial ME structures similar to each other. The spatial ME pattern is highly correlated with that of the anomaly, implying that the ME varies with the seasons; for example, the correlation coefficient between ME and anomaly ranges from -0.51 to -0.85 by month. The pattern of the extratropical circulation also has a high correlation with the anomaly. The global model has trouble faithfully simulating extratropical cyclones and blockings in the medium-range forecast, and in particular has a hard time simulating anomalous events. If an anomalous period is chosen for a test-bed experiment, a large error due to the anomaly will result.
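The decomposition used above, in which the mean square error equals the variance of the forecast-minus-observation differences plus the square of the mean error, can be checked numerically; the numbers below are illustrative only.

```python
import numpy as np

# forecast-minus-observation differences over a spatial grid (toy numbers)
diff = np.array([1.2, -0.3, 0.8, 1.5, 0.1, 0.9])

mse = np.mean(diff ** 2)
me = np.mean(diff)            # mean error (systematic part)
var = np.var(diff)            # spatial variance of the differences

# MSE = variance of the differences + square of the mean error
assert np.isclose(mse, var + me ** 2)
print(f"MSE = {mse:.3f}, ME^2 = {me**2:.3f} ({100*me**2/mse:.0f}% of MSE)")
```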
Park, Sangsoo; Spirduso, Waneen; Eakin, Tim; Abraham, Lawrence
2018-01-01
The authors investigated how varying the required low-level forces and the direction of force change affect accuracy and variability of force production in a cyclic isometric pinch force tracking task. Eighteen healthy right-handed adult volunteers performed the tracking task over 3 different force ranges. Root mean square error and coefficient of variation were higher at lower force levels and during minimum reversals compared with maximum reversals. Overall, the thumb showed greater root mean square error and coefficient of variation scores than did the index finger during maximum reversals, but not during minimum reversals. The observed impaired performance during minimum reversals might originate from history-dependent mechanisms of force production and highly coupled 2-digit performance.
NASA Astrophysics Data System (ADS)
Yamada, Y.; Shimokawa, T.; Shinomoto, S.; Yano, T.; Gouda, N.
2009-09-01
For the purpose of determining the celestial coordinates of stellar positions, consecutive observational images are laid overlapping each other with clues of stars belonging to multiple plates. In the analysis, one has to estimate not only the coordinates of individual plates, but also the possible expansion and distortion of the frame. This problem reduces to a least-squares fit that can in principle be solved by a huge matrix inversion, which is, however, impracticable. Here, we propose using Kalman filtering to perform the least-squares fit and implement a practical iterative algorithm. We also estimate errors associated with this iterative method and suggest a design of overlapping plates to minimize the error.
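As a hedged sketch of the idea, not the authors' plate-overlap implementation, a Kalman-style sequential update can solve a static linear least-squares problem one observation at a time, avoiding a single large matrix inversion; the straight-line fit below is a toy example.

```python
import numpy as np

def sequential_lsq(H_rows, z_obs, prior_var=1e6, noise_var=1.0, n_params=2):
    """Kalman-style sequential update for a static linear least-squares problem.
    Processes one scalar observation z = h @ x + noise at a time."""
    x = np.zeros(n_params)                      # parameter estimate
    P = prior_var * np.eye(n_params)            # estimate covariance (diffuse prior)
    for h, z in zip(H_rows, z_obs):
        h = np.asarray(h, float)
        S = h @ P @ h + noise_var               # innovation variance (scalar obs)
        K = P @ h / S                           # Kalman gain
        x = x + K * (z - h @ x)                 # measurement update
        P = P - np.outer(K, h) @ P              # covariance update
    return x, P

# toy problem: fit z = a + b*t from sequentially processed measurements
t = np.linspace(0, 1, 50)
rng = np.random.default_rng(1)
z = 2.0 + 3.0 * t + rng.normal(scale=0.1, size=t.size)
H = np.column_stack([np.ones_like(t), t])
x_hat, P_hat = sequential_lsq(H, z, noise_var=0.01)
print(x_hat)        # close to [2, 3]
```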
Adaptive control strategies for flexible robotic arm
NASA Technical Reports Server (NTRS)
Bialasiewicz, Jan T.
1993-01-01
The motivation of this research came about when a neural network direct adaptive control scheme was applied to control the tip position of a flexible robotic arm. Satisfactory control performance was not attainable due to the inherent non-minimum phase characteristics of the flexible robotic arm tip. Most of the existing neural network control algorithms are based on the direct method and exhibit very high sensitivity if not unstable closed-loop behavior. Therefore a neural self-tuning control (NSTC) algorithm is developed and applied to this problem and showed promising results. Simulation results of the NSTC scheme and the conventional self-tuning (STR) control scheme are used to examine performance factors such as control tracking mean square error, estimation mean square error, transient response, and steady state response.
A study of image quality for radar image processing. [synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
King, R. W.; Kaupp, V. H.; Waite, W. P.; Macdonald, H. C.
1982-01-01
Methods developed for image quality metrics are reviewed with focus on basic interpretation or recognition elements including: tone or color; shape; pattern; size; shadow; texture; site; association or context; and resolution. Seven metrics are believed to show promise as a way of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized-mean-square-error as a measure of geometric fidelity; (6) the perceptual mean square error; and (7) the radar threshold quality factor. Selective levels of degradation are being applied to simulated synthetic radar images to test the validity of these metrics.
Increasing point-count duration increases standard error
Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.
1998-01-01
We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased as point count duration increased both for cumulative number of individuals and species in both locations. Although point counts appear to yield data with standard errors proportional to means, a square root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.
Maritime Adaptive Optics Beam Control
2010-09-01
LC: Liquid Crystal; LMS: Least Mean Square; MIMO: Multiple-Input Multiple-Output; MMDM: Micromachined Membrane Deformable Mirror; MSE: Mean Square Error. ... determine how the beam is distorted, a control computer to calculate the correction to be applied, and a corrective element, usually a deformable mirror ... during this research, an overview of the system modification is provided here. Using additional mirrors and reflecting the beam to and from an ...
The microcomputer scientific software series 3: general linear model--analysis of variance.
Harold M. Rauscher
1985-01-01
A BASIC language set of programs, designed for use on microcomputers, is presented. This set of programs will perform the analysis of variance for any statistical model describing either balanced or unbalanced designs. The program computes and displays the degrees of freedom, Type I sum of squares, and the mean square for the overall model, the error, and each factor...
Selection of lasing direction in single mode semiconductor square ring cavities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jin-Woong; Kim, Kyoung-Youm; Moon, Hee-Jong
We propose and demonstrate a selection scheme of lasing direction by imposing a loss imbalance structure into the single mode square ring cavity. The control of the traveling direction is realized by introducing a taper-step section in one of the straight waveguides of the square ring cavity. It was shown by semi-analytic calculation that the taper-step section in the cavity provides effective loss imbalance between the two travelling directions as the round trip repeats. Various kinds of square cavities were fabricated using InGaAsP/InGaAs multiple quantum well semiconductor materials in order to test the direction selectivity while maintaining the single mode. We also measured the pump power dependent lasing spectra to investigate the maintenance property of the lasing direction. The experimental results demonstrated that the proposed scheme is an efficient means for unidirectional lasing in a single mode laser.
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
NASA Technical Reports Server (NTRS)
Fromme, J. A.; Golberg, M. A.
1979-01-01
Lift interference effects are discussed based on Bland's (1968) integral equation. A mathematical existence theory is utilized for which convergence of the numerical method has been proved for general (square-integrable) downwashes. Airloads are computed using orthogonal airfoil polynomial pairs in conjunction with a collocation method which is numerically equivalent to Galerkin's method and complex least squares. Convergence exhibits exponentially decreasing error with the number n of collocation points for smooth downwashes, whereas errors are proportional to 1/n for discontinuous downwashes. The latter can be reduced to 1/n^(m+1) with mth-order Richardson extrapolation (by using m = 2, hundredfold error reductions were obtained with only a 13% increase of computer time). Numerical results are presented showing acoustic resonance, as well as the effect of Mach number, ventilation, height-to-chord ratio, and mode shape on wind-tunnel interference. Excellent agreement with experiment is obtained in steady flow, and good agreement is obtained for unsteady flow.
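A generic sketch of repeated Richardson extrapolation in the variable 1/n, under the assumption that the quantity of interest has an error expansion in powers of 1/n; the toy sequence below is hypothetical and unrelated to the aerodynamic computation.

```python
import numpy as np

def richardson(ns, values):
    """Extrapolate a sequence A(n) = A + c1/n + c2/n^2 + ... to n -> infinity
    using Neville's algorithm in h = 1/n.  Each added level cancels one more
    power of 1/n (mth-order extrapolation from m+1 resolutions)."""
    h = 1.0 / np.asarray(ns, float)
    T = np.array(values, float)
    for k in range(1, len(T)):
        # combine neighbouring estimates to cancel the leading error term
        T = (h[k:] * T[:-1] - h[:-k] * T[1:]) / (h[k:] - h[:-k])
    return T[0]

# toy example: estimates whose error decays like 1/n (plus a 1/n^2 term)
def A(n):
    return np.pi + 1.0 / n - 0.5 / n**2

ns = [8, 16, 32]
print(A(32), richardson(ns, [A(n) for n in ns]))   # extrapolation is far closer to pi
```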
Precise calibration of spatial phase response nonuniformity arising in liquid crystal on silicon.
Xu, Jingquan; Qin, SiYi; Liu, Chen; Fu, Songnian; Liu, Deming
2018-06-15
In order to calibrate the spatial phase response nonuniformity of liquid crystal on silicon (LCoS), we propose to use a Twyman-Green interferometer to characterize the wavefront distortion, due to the inherent curvature of the device. During the characterization, both the residual carrier frequency introduced by the Fourier transform evaluation method and the lens aberration are error sources. For the tilted phase error introduced by residual carrier frequency, the least mean square fitting method is used to obtain the tilted phase error. Meanwhile, we use Zernike polynomials fitting based on plane mirror calibration to mitigate the lens aberration. For a typical LCoS with 1×12,288 pixels after calibration, the peak-to-valley value of the inherent wavefront distortion is approximately 0.25λ at 1550 nm, leading to a half-suppression of wavefront distortion. All efforts can suppress the root mean squares value of the inherent wavefront distortion to approximately λ/34.
NASA Astrophysics Data System (ADS)
Sun, Li-wei; Ye, Xin; Fang, Wei; He, Zhen-lei; Yi, Xiao-long; Wang, Yu-peng
2017-11-01
A hyper-spectral imaging spectrometer has high spatial and spectral resolution. Its radiometric calibration requires knowledge of the sources used at high spectral resolution. In order to satisfy this requirement, an on-orbit radiometric calibration method is designed in this paper. The calibration chain is based on the spectral inversion accuracy of the calibration light source. A genetic algorithm program is compiled to optimize the channel design of the transfer radiometer while considering the degradation of the halogen lamp, thus realizing high-accuracy inversion of the spectral curve over the whole working time. The experimental results show that the average root mean squared error is 0.396%, the maximum root mean squared error is 0.448%, and the relative errors at all wavelengths are within 1% in the spectral range from 500 nm to 900 nm during 100 h of operating time. The design lays a foundation for the high-accuracy calibration of imaging spectrometers.
Research on the infiltration processes of lawn soils of the Babao River in the Qilian Mountain.
Li, GuangWen; Feng, Qi; Zhang, FuPing; Cheng, AiFang
2014-01-01
Using a Guelph Permeameter, the soil water infiltration processes were analyzed in the Babao River of the Qilian Mountain in China. The results showed that the average soil initial infiltration and the steady infiltration rates in the upstream reaches of the Babao River are 1.93 and 0.99 cm/min, whereas those of the middle area are 0.48 cm/min and 0.21 cm/min, respectively. The infiltration processes can be divided into three stages: the rapidly changing stage (0-10 min), the slowly changing stage (10-30 min) and the stabilization stage (after 30 min). We used field data collected from lawn soils and evaluated the performances of the infiltration models of Philip, Kostiakov and Horton with the sum of squared error, the root mean square error, the coefficient of determination, the mean error, the model efficiency and Willmott's index of agreement. The results indicated that the Kostiakov model was most suitable for studying the infiltration process in the alpine lawn soils.
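A hedged sketch of the goodness-of-fit statistics listed above; "model efficiency" is assumed to mean Nash-Sutcliffe efficiency, Willmott's index of agreement uses its standard definition, and the infiltration values are invented for illustration.

```python
import numpy as np

def fit_metrics(obs, pred):
    """Common goodness-of-fit statistics for infiltration-model evaluation."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    sse = np.sum(err ** 2)                                        # sum of squared error
    rmse = np.sqrt(np.mean(err ** 2))                             # root mean square error
    me = np.mean(err)                                             # mean error (bias)
    r2 = np.corrcoef(obs, pred)[0, 1] ** 2                        # coefficient of determination
    nse = 1.0 - sse / np.sum((obs - obs.mean()) ** 2)             # model efficiency (Nash-Sutcliffe, assumed)
    d = 1.0 - sse / np.sum((np.abs(pred - obs.mean())
                            + np.abs(obs - obs.mean())) ** 2)     # Willmott's index of agreement
    return dict(SSE=sse, RMSE=rmse, ME=me, R2=r2, NSE=nse, d=d)

# toy infiltration rates (cm/min): observed vs. values predicted by some model
obs  = [1.90, 1.20, 0.80, 0.55, 0.40, 0.30]
pred = [1.80, 1.25, 0.85, 0.50, 0.42, 0.33]
print(fit_metrics(obs, pred))
```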
Uncertainties in extracted parameters of a Gaussian emission line profile with continuum background.
Minin, Serge; Kamalabadi, Farzad
2009-12-20
We derive analytical equations for uncertainties in parameters extracted by nonlinear least-squares fitting of a Gaussian emission function with an unknown continuum background component in the presence of additive white Gaussian noise. The derivation is based on the inversion of the full curvature matrix (equivalent to the Fisher information matrix) of the least-squares error, χ², in a four-variable fitting parameter space. The derived uncertainty formulas (equivalent to Cramér-Rao error bounds) are found to be in good agreement with the numerically computed uncertainties from a large ensemble of simulated measurements. The derived formulas can be used for estimating minimum achievable errors for a given signal-to-noise ratio and for investigating some aspects of measurement setup trade-offs and optimization. While the intended application is Fabry-Perot spectroscopy for wind and temperature measurements in the upper atmosphere, the derivation is generic and applicable to other spectroscopy problems with a Gaussian line shape.
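The closed-form expressions themselves are not reproduced here, but comparable bounds can be obtained numerically by inverting the Fisher information matrix built from the model Jacobian. The sketch below assumes the four-parameter model of amplitude A, center x0, width w and constant background B with noise standard deviation sigma; all numbers are hypothetical.

```python
import numpy as np

def gaussian_crlb(x, A, x0, w, B, sigma):
    """Cramer-Rao lower bounds for the parameters of a Gaussian line with a
    constant background, y = A*exp(-(x-x0)^2/(2 w^2)) + B, under additive
    white Gaussian noise with standard deviation sigma."""
    g = np.exp(-(x - x0) ** 2 / (2.0 * w ** 2))
    # Jacobian of the model with respect to (A, x0, w, B)
    J = np.column_stack([
        g,
        A * g * (x - x0) / w ** 2,
        A * g * (x - x0) ** 2 / w ** 3,
        np.ones_like(x),
    ])
    fisher = J.T @ J / sigma ** 2        # Fisher information (curvature) matrix
    cov = np.linalg.inv(fisher)          # lower bound on the estimator covariance
    return np.sqrt(np.diag(cov))         # 1-sigma bounds for (A, x0, w, B)

x = np.linspace(-10, 10, 201)
print(gaussian_crlb(x, A=100.0, x0=0.0, w=2.0, B=10.0, sigma=5.0))
```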
Combining forecast weights: Why and how?
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim
2012-09-01
This paper proposes a procedure called forecast weight averaging, a specific combination of forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that, under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by a t statistic or a z statistic, provided the significance level is within the 10% range. By theoretical proofs and a simulation study, we show that model averaging methods such as variance model averaging, simple model averaging and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds true, marginally, when applied to empirical business and economic data sets: the Gross Domestic Product (GDP) growth rate, the Consumer Price Index (CPI) and the Average Lending Rate (ALR) of Malaysia.
Ebtehaj, Isa; Bonakdari, Hossein
2014-01-01
The existence of sediments in wastewater greatly affects the performance of the sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. The article reviews the performance of the genetic algorithm (GA) and the imperialist competitive algorithm (ICA) in minimizing the target function (the mean square error of observed and predicted Froude number). To study the impact of bed load transport parameters, using four non-dimensional groups, six different models have been presented. Moreover, the roulette wheel selection method is used to select the parents. For the selected model, the ICA (root mean square error (RMSE) = 0.007, mean absolute percentage error (MAPE) = 3.5%) shows better results than the GA (RMSE = 0.007, MAPE = 5.6%). In all six models, the ICA returns better results than the GA. Also, the results of these two algorithms were compared with a multi-layer perceptron and existing equations.
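Roulette wheel (fitness-proportionate) selection, mentioned above for choosing parents, can be sketched as follows; converting an MSE-type objective into a fitness value is one common convention and is assumed here.

```python
import numpy as np

def roulette_wheel_select(fitness, n_parents, rng=None):
    """Select parent indices with probability proportional to fitness.
    For a minimisation target such as MSE, fitness should first be converted
    so that better solutions get larger values (e.g. 1 / (1 + mse))."""
    if rng is None:
        rng = np.random.default_rng()
    fitness = np.asarray(fitness, float)
    p = fitness / fitness.sum()
    return rng.choice(len(fitness), size=n_parents, p=p)

# toy population: MSE of each candidate solution, converted to a fitness value
mse = np.array([0.007, 0.012, 0.020, 0.009, 0.030])
fitness = 1.0 / (1.0 + mse)
parents = roulette_wheel_select(fitness, n_parents=4, rng=np.random.default_rng(0))
print(parents)
```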
Performance metrics for the assessment of satellite data products: an ocean color case study
Seegers, Bridget N.; Stumpf, Richard P.; Schaeffer, Blake A.; Loftin, Keith A.; Werdell, P. Jeremy
2018-01-01
Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coefficient of determination (r2), root mean square error, and regression slopes, are most appropriate for Gaussian distributions without outliers and, therefore, are often not ideal for ocean color algorithm performance assessment, which is often limited by sample availability. In contrast, metrics based on simple deviations, such as bias and mean absolute error, as well as pair-wise comparisons, often provide more robust and straightforward quantities for evaluating ocean color algorithms with non-Gaussian distributions and outliers. This study uses a SeaWiFS chlorophyll-a validation data set to demonstrate a framework for satellite data product assessment and recommends a multi-metric and user-dependent approach that can be applied within science, modeling, and resource management communities. PMID:29609296
Zhao, Haiquan; Zhang, Jiashu
2009-04-01
This paper proposes a novel computationally efficient adaptive nonlinear equalizer based on a combination of a finite impulse response (FIR) filter and a functional link artificial neural network (CFFLANN) to compensate for linear and nonlinear distortions in a nonlinear communication channel. This convex nonlinear combination improves convergence speed while retaining a low steady-state error. In addition, since the CFFLANN does not need the hidden layers that exist in conventional neural-network-based equalizers, it exhibits a simpler structure than traditional neural networks (NNs) and requires less computational burden during the training mode. Moreover, an appropriate adaptation algorithm for the proposed equalizer is derived from the modified least mean square (MLMS) algorithm. Simulation results clearly show that the proposed equalizer using the MLMS algorithm can effectively eliminate linear and nonlinear distortions of various intensities, and provides better anti-jamming performance. Furthermore, comparisons of the mean squared error (MSE), the bit error rate (BER), and the effect of the eigenvalue ratio (EVR) of the input correlation matrix are presented.
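The CFFLANN/MLMS equalizer itself is not reproduced here; as a baseline illustration of adaptive equalization, a plain LMS-adapted FIR equalizer over a hypothetical two-tap channel can be sketched as follows.

```python
import numpy as np

def lms_equalizer(received, desired, n_taps=8, mu=0.01):
    """Baseline LMS adaptation of an FIR equalizer.
    Returns the tap weights and the squared-error learning curve."""
    w = np.zeros(n_taps)
    errors = []
    for n in range(n_taps - 1, len(received)):
        x = received[n - n_taps + 1:n + 1][::-1]   # current and past samples
        y = w @ x                                  # equalizer output
        e = desired[n] - y                         # instantaneous error
        w = w + mu * e * x                         # LMS weight update
        errors.append(e ** 2)
    return w, np.array(errors)

# toy linear channel: +/-1 symbols distorted by a 2-tap channel plus noise
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=5000)
received = np.convolve(symbols, [1.0, 0.4])[:symbols.size] \
           + rng.normal(scale=0.05, size=symbols.size)
w, err = lms_equalizer(received, symbols)
print(err[:200].mean(), err[-200:].mean())         # mean squared error should drop
```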
Effects of Mesh Irregularities on Accuracy of Finite-Volume Discretization Schemes
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2012-01-01
The effects of mesh irregularities on accuracy of unstructured node-centered finite-volume discretizations are considered. The focus is on an edge-based approach that uses unweighted least-squares gradient reconstruction with a quadratic fit. For inviscid fluxes, the discretization is nominally third order accurate on general triangular meshes. For viscous fluxes, the scheme is an average-least-squares formulation that is nominally second order accurate and contrasted with a common Green-Gauss discretization scheme. Gradient errors, truncation errors, and discretization errors are separately studied according to a previously introduced comprehensive methodology. The methodology considers three classes of grids: isotropic grids in a rectangular geometry, anisotropic grids typical of adapted grids, and anisotropic grids over a curved surface typical of advancing layer grids. The meshes within the classes range from regular to extremely irregular including meshes with random perturbation of nodes. Recommendations are made concerning the discretization schemes that are expected to be least sensitive to mesh irregularities in applications to turbulent flows in complex geometries.
Performance Metrics, Error Modeling, and Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling
2016-01-01
A common set of statistical metrics has been used to summarize the performance of models or measurements- the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
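A numerical illustration of the stated connection, assuming the simple additive linear error model x = a + b*t + e with Gaussian noise: the bias, mean square error and correlation computed directly from synthetic data match the values derived from the error-model parameters (a, b, sigma) and the reference statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# reference ("truth") values and measurements following a linear error model
t = rng.normal(10.0, 2.0, 100000)
a, b, sigma = 0.5, 0.9, 1.0                       # error-model parameters
x = a + b * t + rng.normal(0.0, sigma, t.size)

# metrics computed directly from the data
bias_direct = np.mean(x - t)
mse_direct = np.mean((x - t) ** 2)
corr_direct = np.corrcoef(x, t)[0, 1]

# the same metrics derived from the error-model parameters
mu_t, var_t = t.mean(), t.var()
bias_model = a + (b - 1.0) * mu_t
mse_model = bias_model ** 2 + (b - 1.0) ** 2 * var_t + sigma ** 2
corr_model = b * np.sqrt(var_t) / np.sqrt(b ** 2 * var_t + sigma ** 2)

print(bias_direct, bias_model)
print(mse_direct, mse_model)
print(corr_direct, corr_model)
```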
The cost of misremembering: Inferring the loss function in visual working memory.
Sims, Chris R
2015-03-04
Visual working memory (VWM) is a highly limited storage system. A basic consequence of this fact is that visual memories cannot perfectly encode or represent the veridical structure of the world. However, in natural tasks, some memory errors might be more costly than others. This raises the intriguing possibility that the nature of memory error reflects the costs of committing different kinds of errors. Many existing theories assume that visual memories are noise-corrupted versions of afferent perceptual signals. However, this additive noise assumption oversimplifies the problem. Implicit in the behavioral phenomena of visual working memory is the concept of a loss function: a mathematical entity that describes the relative cost to the organism of making different types of memory errors. An optimally efficient memory system is one that minimizes the expected loss according to a particular loss function, while subject to a constraint on memory capacity. This paper describes a novel theoretical framework for characterizing visual working memory in terms of its implicit loss function. Using inverse decision theory, the empirical loss function is estimated from the results of a standard delayed recall visual memory experiment. These results are compared to the predicted behavior of a visual working memory system that is optimally efficient for a previously identified natural task, gaze correction following saccadic error. Finally, the approach is compared to alternative models of visual working memory, and shown to offer a superior account of the empirical data across a range of experimental datasets. © 2015 ARVO.
NASA Astrophysics Data System (ADS)
Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd
2017-08-01
The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation. In other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its property of minimum variance in the class of linear and unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and would give highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that the factors affecting paddy production are the mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and the amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
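A minimal weighted least squares sketch in the spirit of the study, assuming weights inversely proportional to the (known) error variance at each observation; the quadratic model and data below are synthetic, not the paddy data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 60)
sigma = 0.2 + 0.1 * x                                  # non-constant error standard deviation
y = 1.0 + 0.8 * x - 0.05 * x**2 + rng.normal(scale=sigma)

X = np.column_stack([np.ones_like(x), x, x**2])        # design matrix for a quadratic
w = 1.0 / sigma**2                                     # weights = 1 / variance
W = np.diag(w)

beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # weighted normal equations
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]        # ordinary LS for comparison
print(beta_wls, beta_ols)
```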
Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang
2016-03-01
An analysis of binary mixtures of hydroxyl compounds by Attenuated Total Reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model errors due to the presence of unmodeled components such as H-bonded components. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP assumes that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, forming two TLSP versions (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. Comparison and analyses of the four models are made from two aspects: absorbance prediction and concentration prediction. The results for the water-ethanol and ethanol-ethyl lactate solutions show that LSP, TLSP-LBFGS, and TLSP-LM can, for both absorbance prediction and concentration prediction, obtain a smaller root mean square error of prediction than CLS. Additionally, they can also greatly enhance the accuracy of the estimated pure component spectra. However, from the view of concentration prediction, the Wilcoxon signed rank test shows that there is no statistically significant difference between each nonlinear model and CLS. © The Author(s) 2016.
[Can the scattering of differences from the target refraction be avoided?].
Janknecht, P
2008-10-01
We wanted to check how the stochastic error is affected by two lens formulae. The power of the intraocular lens was calculated using the SRK-II formula and the Haigis formula after eye length measurement with ultrasound and the IOL Master. Both lens formulae were partially derived and Gauss error analysis was used for examination of the propagated error. 61 patients with a mean age of 73.8 years were analysed. The postoperative refraction differed from the calculated refraction after ultrasound biometry using the SRK-II formula by 0.05 D (-1.56 to +1.31, S.D.: 0.59 D; 92% within ±1.0 D), after IOL Master biometry using the SRK-II formula by -0.15 D (-1.18 to +1.25, S.D.: 0.52 D; 97% within ±1.0 D), and after IOL Master biometry using the Haigis formula by -0.11 D (-1.14 to +1.14, S.D.: 0.48 D; 95% within ±1.0 D). The results did not differ from one another. The propagated error of the Haigis formula can be calculated as ΔP = √[(ΔL × (-4.206))² + (ΔVK × 0.9496)² + (ΔDC × (-1.4950))²] (ΔL: error in measuring axial length, ΔVK: error in measuring anterior chamber depth, ΔDC: error in measuring corneal power), and the propagated error of the SRK-II formula as ΔP = √[(ΔL × (-2.5))² + (ΔDC × (-0.9))²]. The propagated error of the Haigis formula is always larger than the propagated error of the SRK-II formula. Scattering of the postoperative difference from the expected refraction cannot be avoided completely. It is possible to limit the systematic error by developing complicated formulae like the Haigis formula. However, increasing the number of parameters which need to be measured increases the dispersion of the calculated postoperative refraction. A compromise has to be found, and therefore the SRK-II formula is not outdated.
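The two propagated-error expressions quoted above translate directly into code; the measurement uncertainties passed in below are hypothetical values chosen only to illustrate that the Haigis propagated error exceeds the SRK-II one.

```python
import numpy as np

def delta_p_haigis(d_len, d_acd, d_cornea):
    """Propagated IOL-power error for the Haigis formula (coefficients as quoted
    in the abstract); inputs are the measurement errors in axial length,
    anterior chamber depth and corneal power."""
    return np.sqrt((d_len * -4.206) ** 2
                   + (d_acd * 0.9496) ** 2
                   + (d_cornea * -1.4950) ** 2)

def delta_p_srk2(d_len, d_cornea):
    """Propagated IOL-power error for the SRK-II formula."""
    return np.sqrt((d_len * -2.5) ** 2 + (d_cornea * -0.9) ** 2)

# hypothetical measurement uncertainties: 0.1 mm axial length, 0.2 mm ACD, 0.3 D corneal power
print(delta_p_haigis(0.1, 0.2, 0.3))   # always >= the SRK-II propagated error
print(delta_p_srk2(0.1, 0.3))
```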
Tamburini, Elena; Mamolini, Elisabetta; De Bastiani, Morena; Marchetti, Maria Gabriella
2016-07-15
Fusarium proliferatum is considered to be a pathogen of many economically important plants, including garlic. The objective of this research was to apply near-infrared spectroscopy (NIRS) to rapidly determine fungal concentration in intact garlic cloves, avoiding the laborious and time-consuming procedures of traditional assays. Preventive detection of infection before seeding is of great interest for farmers, because it could avoid serious losses of yield during harvesting and storage. Spectra were collected on 95 garlic cloves, divided in five classes of infection (from 1-healthy to 5-very highly infected) in the range of fungal concentration 0.34-7231.15 ppb. Calibration and cross validation models were developed with partial least squares regression (PLSR) on pretreated spectra (standard normal variate, SNV, and derivatives), providing good accuracy in prediction, with a coefficient of determination (R²) of 0.829 and 0.774, respectively, a standard error of calibration (SEC) of 615.17 ppb, and a standard error of cross validation (SECV) of 717.41 ppb. The calibration model was then used to predict fungal concentration in unknown samples, peeled and unpeeled. The results showed that NIRS could be used as a reliable tool to directly detect and quantify F. proliferatum infection in peeled intact garlic cloves, but the presence of the external peel strongly affected the prediction reliability.
NASA Astrophysics Data System (ADS)
Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho
2017-03-01
So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are still limited to the district level. Sample sizes for smaller area levels are insufficient, so direct estimation of poverty indicators produces high standard errors, and analysis based on it is unreliable. To solve this problem, an estimation method that can provide better accuracy by combining survey data with other auxiliary data is required. One method often used for this purpose is Small Area Estimation (SAE). Among the many methods used in SAE is the Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method with the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposed EBLUP with the REML procedure for estimating poverty indicators by modeling average household expenditure per capita, and implemented a bootstrap procedure to calculate the mean square error (MSE) in order to compare the accuracy of the EBLUP method with that of the direct estimation method. Results show that the EBLUP method reduced the MSE in small area estimation.
First-Order System Least-Squares for Second-Order Elliptic Problems with Discontinuous Coefficients
NASA Technical Reports Server (NTRS)
Manteuffel, Thomas A.; McCormick, Stephen F.; Starke, Gerhard
1996-01-01
The first-order system least-squares methodology represents an alternative to standard mixed finite element methods. Among its advantages is the fact that the finite element spaces approximating the pressure and flux variables are not restricted by the inf-sup condition and that the least-squares functional itself serves as an appropriate error measure. This paper studies the first-order system least-squares approach for scalar second-order elliptic boundary value problems with discontinuous coefficients. Ellipticity of an appropriately scaled least-squares bilinear form is established independently of the size of the jumps in the coefficients, leading to adequate finite element approximation results. The occurrence of singularities at interface corners and cross-points is discussed, and a weighted least-squares functional is introduced to handle such cases. Numerical experiments are presented for two test problems to illustrate the performance of this approach.
Wang, Pengju; Li, Xiaozhu; Wang, Jiwei; Gao, Dongling; Li, Yuenan; Li, Haoze; Chu, Yongchao; Zhang, Zhongxian; Liu, Hongtao; Jiang, Guozhong; Cheng, Zhenguo; Wang, Shengdian; Dong, Jianzeng; Feng, Baisui; Chard, Louisa S; Lemoine, Nicholas R; Wang, Yaohe
2018-01-10
The originally published version of this Article contained errors in Figure 4. In panel b, the square and diamond labels associated with the uppermost survival curve were incorrectly displayed as 'n' and 'u', respectively. These errors have now been corrected in both the PDF and HTML versions of the Article.
λ elements for singular problems in CFD: Viscoelastic fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, K.K.; Surana, K.S.
1996-10-01
This paper presents a two-dimensional λ element formulation for viscoelastic fluid flow containing point singularities in the flow field. Flows of viscoelastic fluids, even without singularities, are a difficult class of problems at increasing Deborah or Weissenberg numbers due to the increased dominance of convective terms and thus increased hyperbolicity. In the present work the equations of fluid motion and the constitutive laws are recast in the form of a first-order system of coupled equations with the use of auxiliary variables. The velocity, pressure and stresses are interpolated using equal-order C⁰ λ element approximations. The Least Squares Finite Element Method (LSFEM) is used to construct the integral form (error functional I) corresponding to these equations. The error functional is constructed by taking the integrated sum of the squares of the errors or residuals (over the whole discretization) resulting when the element approximation is substituted into these equations. The conditions resulting from the minimization of the error functional are satisfied by using Newton's method with line search. LSFEM has much superior performance when dealing with non-linear and convection-dominated problems.
Controlled sound field with a dual layer loudspeaker array
NASA Astrophysics Data System (ADS)
Shin, Mincheol; Fazi, Filippo M.; Nelson, Philip A.; Hirono, Fabio C.
2014-08-01
Controlled sound interference has been extensively investigated using a prototype dual layer loudspeaker array composed of 16 loudspeakers. Results are presented for measures of array performance such as input signal power, directivity of sound radiation and accuracy of sound reproduction resulting from the application of conventional control methods such as minimization of error in mean squared pressure, maximization of energy difference and minimization of weighted pressure error and energy. Procedures for selecting the tuning parameters have also been introduced. With these conventional concepts aimed at the production of acoustically bright and dark zones, all the control methods used require a trade-off between radiation directivity and reproduction accuracy in the bright zone. An alternative solution is proposed which can achieve better performance on the presented measures simultaneously by inserting a low-priority zone named the “gray” zone. This involves the weighted minimization of mean-squared errors in both bright and dark zones together with the gray zone, in which the minimization error is given less importance. This results in the production of a directional bright zone in which the accuracy of sound reproduction is maintained with less input power required. The results of simulations and experiments are shown to be in excellent agreement.
Swing arm profilometer: analytical solutions of misalignment errors for testing axisymmetric optics
NASA Astrophysics Data System (ADS)
Xiong, Ling; Luo, Xiao; Liu, Zhenyu; Wang, Xiaokun; Hu, Haixiang; Zhang, Feng; Zheng, Ligong; Zhang, Xuejun
2016-07-01
The swing arm profilometer (SAP) has been playing a very important role in testing large aspheric optics. As one of most significant error sources that affects the test accuracy, misalignment error leads to low-order errors such as aspherical aberrations and coma apart from power. In order to analyze the effect of misalignment errors, the relation between alignment parameters and test results of axisymmetric optics is presented. Analytical solutions of SAP system errors from tested mirror misalignment, arm length L deviation, tilt-angle θ deviation, air-table spin error, and air-table misalignment are derived, respectively; and misalignment tolerance is given to guide surface measurement. In addition, experiments on a 2-m diameter parabolic mirror are demonstrated to verify the model; according to the error budget, we achieve the SAP test for low-order errors except power with accuracy of 0.1 μm root-mean-square.
Steiger-Ronay, Valerie; Tektas, Sibel; Attin, Thomas; Lussi, Adrian; Becker, Klaus; Wiedemeier, Daniel B; Beyeler, Barbara; Carvalho, Thiago S
2018-06-07
The aim of this in vitro study was to investigate the impact of saliva on the abrasion of eroded enamel using two measuring methods. A total of 80 bovine enamel specimens from 20 bovine incisors were allocated to four experimental groups (n = 20 specimens per group). After baseline surface microhardness (SMH) measurements and profilometry all specimens were subjected to erosion (2 min, 1% citric acid, pH: 3.6, 37°C). SMH was determined again, and the depths of the Knoop indentations were calculated. Thereafter, specimens were incubated in human saliva (group 1 - no incubation/control, group 2 - 0.5 h, group 3 - 1 h, group 4 - 2 h) before toothbrush abrasion was performed. After final SMH measurements and profilometry, indentations were remeasured, and surface loss was calculated. SMH did not return to baseline values regardless of the length of saliva incubation. Further, an irreversible substance loss was observed for all specimens. With the indentation method, significantly (p < 0.05) more substance loss was found for controls (least square means ± standard error of 198 ± 19 nm) than for groups 2-4 (110 ± 10, 114 ± 11, and 105 ± 14 nm). Profilometric assessment showed significantly more substance loss for controls (122 ± 8 nm) than for group 4 (106 ± 5 nm). Intraclass correlation for interrater reliability between measurement methods was low (0.21, CI: 0.1-0.3), indicating poor agreement. Exposure of eroded enamel to saliva for up to 2 h could not re-establish the original SMH. The amount of measured substance loss depended on the measurement method applied. © 2018 S. Karger AG, Basel.
Lim, Magdalene Yeok Leng; Loo, Jenny Hooi Yin
2018-07-01
To determine if there is an association between hearing loss and poorer cognitive scores on Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA) and to determine if poor hearing acuity affects scoring on the cognitive screening tests of MMSE and MoCA. One hundred fourteen elderly patients (Singapore residents) aged between 55 and 86 years were sampled. Participants completed a brief history questionnaire, pure tone audiometry, and 2 cognitive screening tests-the MMSE and MoCA. Average hearing thresholds of the better ear in the frequencies of 0.5, 1, 2, and 4 kHz were used for data analysis. Hearing loss was significantly associated with poorer cognitive scores in Poisson regression models adjusted for age. Mini-Mental State Examination scores were shown to decrease by 2.8% (P = .029), and MoCA scores by 3.5% (P = .013) for every 10 dB of hearing loss. Analysis of hearing-sensitive components of "Registration" and "Recall" in MMSE and MoCA using chi-square tests showed significantly poorer performance in the hearing loss group as compared to the normal hearing group. Phonetic analysis of target words with high error rates shows that the poor performance was likely contributed by decreased hearing acuity, on top of a possible true deficit in cognition in the hearing impaired. Hearing loss is associated with poorer cognitive scores on MMSE and MoCA, and cognitive scoring is likely confounded by poor hearing ability. This highlights an important, often overlooked aspect of sensory impairment during cognitive screening. Provisions should be made when testing for cognition in the hearing-impaired population to avoid over-referral and subsequent misdiagnoses of cognitive impairment. Copyright © 2018 John Wiley & Sons, Ltd.
Quantifying soil carbon loss and uncertainty from a peatland wildfire using multi-temporal LiDAR
Reddy, Ashwan D.; Hawbaker, Todd J.; Wurster, F.; Zhu, Zhiliang; Ward, S.; Newcomb, Doug; Murray, R.
2015-01-01
Peatlands are a major reservoir of global soil carbon, yet account for just 3% of global land cover. Human impacts like draining can hinder the ability of peatlands to sequester carbon and expose their soils to fire under dry conditions. Estimating soil carbon loss from peat fires can be challenging due to uncertainty about pre-fire surface elevations. This study uses multi-temporal LiDAR to obtain pre- and post-fire elevations and estimate soil carbon loss caused by the 2011 Lateral West fire in the Great Dismal Swamp National Wildlife Refuge, VA, USA. We also determine how LiDAR elevation error affects uncertainty in our carbon loss estimate by randomly perturbing the LiDAR point elevations and recalculating elevation change and carbon loss, iterating this process 1000 times. We calculated a total loss using LiDAR of 1.10 Tg C across the 25 km2 burned area. The fire burned an average of 47 cm deep, equivalent to 44 kg C/m2, a value larger than the 1997 Indonesian peat fires (29 kg C/m2). Carbon loss via the First-Order Fire Effects Model (FOFEM) was estimated to be 0.06 Tg C. Propagating the LiDAR elevation error to the carbon loss estimates, we calculated a standard deviation of 0.00009 Tg C, equivalent to 0.008% of total carbon loss. We conclude that LiDAR elevation error is not a significant contributor to uncertainty in soil carbon loss under severe fire conditions with substantial peat consumption. However, uncertainties may be more substantial when soil elevation loss is of a similar or smaller magnitude than the reported LiDAR error.
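The Monte Carlo error-propagation idea described above can be sketched in a few lines. The following is a minimal illustration only: the elevation grids, LiDAR error level, bulk density, carbon fraction, and cell area are all hypothetical placeholders, not the study's values.

```python
# Monte Carlo propagation of LiDAR elevation error into a soil-carbon-loss
# estimate (illustrative parameters, not the study's data).
import numpy as np

rng = np.random.default_rng(0)

pre_fire_z = rng.uniform(2.0, 3.0, size=(100, 100))                      # pre-fire elevations (m)
post_fire_z = pre_fire_z - rng.uniform(0.3, 0.6, size=pre_fire_z.shape)  # burned surface (m)

lidar_sigma = 0.15       # assumed per-point elevation error (m)
bulk_density = 0.1       # g/cm^3, illustrative peat value
carbon_fraction = 0.5    # carbon mass fraction of dry peat
cell_area = 1.0          # m^2 per raster cell

def carbon_loss(pre, post):
    depth = np.clip(pre - post, 0.0, None)                               # consumed depth (m)
    kg_c_per_m2 = depth * 100.0 * bulk_density * 10.0 * carbon_fraction  # kg C per m^2
    return (kg_c_per_m2 * cell_area).sum()                               # total kg C

losses = []
for _ in range(1000):    # iterate the random perturbation, as in the study
    pre_p = pre_fire_z + rng.normal(0.0, lidar_sigma, pre_fire_z.shape)
    post_p = post_fire_z + rng.normal(0.0, lidar_sigma, post_fire_z.shape)
    losses.append(carbon_loss(pre_p, post_p))

print("mean loss (kg C):", np.mean(losses), " std (kg C):", np.std(losses))
```

The standard deviation of the simulated losses plays the role of the elevation-error-induced uncertainty reported above.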
NASA Astrophysics Data System (ADS)
Okamoto, R. J.; Clayton, E. H.; Bayly, P. V.
2011-10-01
Magnetic resonance elastography (MRE) is used to quantify the viscoelastic shear modulus, G*, of human and animal tissues. Previously, values of G* determined by MRE have been compared to values from mechanical tests performed at lower frequencies. In this study, a novel dynamic shear test (DST) was used to measure G* of a tissue-mimicking material at higher frequencies for direct comparison to MRE. A closed-form solution, including inertial effects, was used to extract G* values from DST data obtained between 20 and 200 Hz. MRE was performed using cylindrical 'phantoms' of the same material in an overlapping frequency range of 100-400 Hz. Axial vibrations of a central rod caused radially propagating shear waves in the phantom. Displacement fields were fit to a viscoelastic form of Navier's equation using a total least-squares approach to obtain local estimates of G*. DST estimates of the storage G' (Re[G*]) and loss modulus G'' (Im[G*]) for the tissue-mimicking material increased with frequency from 0.86 to 0.97 kPa (20-200 Hz, n = 16), while MRE estimates of G' increased from 1.06 to 1.15 kPa (100-400 Hz, n = 6). The loss factor (Im[G*]/Re[G*]) also increased with frequency for both test methods: 0.06-0.14 (20-200 Hz, DST) and 0.11-0.23 (100-400 Hz, MRE). Close agreement between MRE and DST results at overlapping frequencies indicates that G* can be locally estimated with MRE over a wide frequency range. Low signal-to-noise ratio, long shear wavelengths and boundary effects were found to increase residual fitting error, reinforcing the use of an error metric to assess confidence in local parameter estimates obtained by MRE.
NASA Astrophysics Data System (ADS)
Gao, Jing; Burt, James E.
2017-12-01
This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method that originated in machine learning and has not been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging), and inform efficient training sample allocation - training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
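A per-sample bias-variance decomposition can be illustrated with bootstrap replicates of a regression tree. The sketch below uses synthetic data in place of Landsat predictors and ignores the irreducible-noise term; all names and settings are illustrative, not the paper's implementation.

```python
# Per-sample bias-variance decomposition for a regression tree over bootstrap
# replicates (synthetic stand-in data).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 4))
y = 100 * X[:, 0] + rng.normal(0, 5, size=500)     # imperviousness-like target (0-100)

X_test, y_test = X[:100], y[:100]
X_train, y_train = X[100:], y[100:]

preds = []
for _ in range(50):                                 # bootstrap replicates
    idx = rng.integers(0, len(X_train), len(X_train))
    tree = DecisionTreeRegressor(max_depth=6).fit(X_train[idx], y_train[idx])
    preds.append(tree.predict(X_test))
preds = np.array(preds)                             # shape: (replicates, test samples)

bias_sq = (preds.mean(axis=0) - y_test) ** 2        # per-sample squared bias
variance = preds.var(axis=0)                        # per-sample variance
print("mean bias^2:", bias_sq.mean(), " mean variance:", variance.mean())
```

Mapping `bias_sq` and `variance` back to pixel locations gives the bias and variance maps discussed above.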
Anandakrishnan, Ramu; Onufriev, Alexey
2008-03-01
In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first sub-divide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity for the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive, error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications, it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between the error bound and the root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms for practical applications. An example of error analysis for such an application (computation of the average charge of ionizable amino acids in proteins) is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
NASA Astrophysics Data System (ADS)
Pei, Yong; Modestino, James W.
2004-12-01
Digital video delivered over wired-to-wireless networks is expected to suffer quality degradation from both packet loss and bit errors in the payload. In this paper, the quality degradation due to packet loss and bit errors in the payload are quantitatively evaluated and their effects are assessed. We propose the use of a concatenated forward error correction (FEC) coding scheme employing Reed-Solomon (RS) codes and rate-compatible punctured convolutional (RCPC) codes to protect the video data from packet loss and bit errors, respectively. Furthermore, the performance of a joint source-channel coding (JSCC) approach employing this concatenated FEC coding scheme for video transmission is studied. Finally, we describe an improved end-to-end architecture using an edge proxy in a mobile support station to implement differential error protection for the corresponding channel impairments expected on the two networks. Results indicate that with an appropriate JSCC approach and the use of an edge proxy, FEC-based error-control techniques together with passive error-recovery techniques can significantly improve the effective video throughput and lead to acceptable video delivery quality over time-varying heterogeneous wired-to-wireless IP networks.
Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models
NASA Astrophysics Data System (ADS)
Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini
2014-12-01
The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, model and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies-both real as well as synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under and over prediction of high and low flows) in outputs from a hydrologic model. The real time case studies investigated in this study included simulation results of both the process-based Soil Water Assessment Tool (SWAT) model, as well as statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from Wainganga and Sind Basin (India) were used, while for the Wavelet Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and Fraser River (Canada) were used. The study also explored the effect of the choice of the wavelets in multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are a more reliable measure than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical models), and ii) help in model calibration.
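A minimal sketch of the multiscale Nash-Sutcliffe idea follows, assuming PyWavelets' stationary wavelet transform as a stand-in for the à trous decomposition named above; the series, wavelet choice, and number of levels are illustrative.

```python
# Multiscale Nash-Sutcliffe criterion: decompose observed and simulated series
# with a stationary wavelet transform and score each scale separately.
import numpy as np
import pywt

def nse(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(2)
observed = np.sin(np.linspace(0, 20, 512)) + 0.1 * rng.normal(size=512)
simulated = observed + 0.2 * rng.normal(size=512)     # stand-in model output

levels = 4
obs_coeffs = pywt.swt(observed, 'haar', level=levels)   # list of (approx, detail) pairs
sim_coeffs = pywt.swt(simulated, 'haar', level=levels)

# One NSE per scale, computed on the detail coefficients of each level.
for lvl, ((_, obs_d), (_, sim_d)) in enumerate(zip(obs_coeffs, sim_coeffs), start=1):
    print(f"scale {lvl}: NSE = {nse(obs_d, sim_d):.3f}")
print(f"original series: NSE = {nse(observed, simulated):.3f}")
```

Replacing the NSE function with a normalized RMSE gives the corresponding MNRMSE-style measure.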
Wood, Clive; Alwati, Abdolati; Halsey, Sheelagh; Gough, Tim; Brown, Elaine; Kelly, Adrian; Paradkar, Anant
2016-09-10
The use of near infra red spectroscopy to predict the concentration of two pharmaceutical co-crystals; 1:1 ibuprofen-nicotinamide (IBU-NIC) and 1:1 carbamazepine-nicotinamide (CBZ-NIC) has been evaluated. A partial least squares (PLS) regression model was developed for both co-crystal pairs using sets of standard samples to create calibration and validation data sets with which to build and validate the models. Parameters such as the root mean square error of calibration (RMSEC), root mean square error of prediction (RMSEP) and correlation coefficient were used to assess the accuracy and linearity of the models. Accurate PLS regression models were created for both co-crystal pairs which can be used to predict the co-crystal concentration in a powder mixture of the co-crystal and the active pharmaceutical ingredient (API). The IBU-NIC model had smaller errors than the CBZ-NIC model, possibly due to the complex CBZ-NIC spectra which could reflect the different arrangement of hydrogen bonding associated with the co-crystal compared to the IBU-NIC co-crystal. These results suggest that NIR spectroscopy can be used as a PAT tool during a variety of pharmaceutical co-crystal manufacturing methods and the presented data will facilitate future offline and in-line NIR studies involving pharmaceutical co-crystals. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
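The calibration/validation workflow with RMSEC and RMSEP can be sketched with scikit-learn's PLS implementation; the "spectra" below are synthetic placeholders, and the number of latent variables is arbitrary rather than optimized as in the study.

```python
# PLS calibration and validation with RMSEC, RMSEP, and correlation
# (synthetic spectra standing in for NIR measurements).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
concentration = rng.uniform(0, 100, size=120)                  # % co-crystal in mixture
spectra = np.outer(concentration, rng.normal(size=200)) \
          + rng.normal(scale=5.0, size=(120, 200))             # fake absorbance matrix

X_cal, X_val, y_cal, y_val = train_test_split(spectra, concentration,
                                              test_size=0.3, random_state=0)

pls = PLSRegression(n_components=3).fit(X_cal, y_cal)

rmsec = np.sqrt(np.mean((pls.predict(X_cal).ravel() - y_cal) ** 2))
rmsep = np.sqrt(np.mean((pls.predict(X_val).ravel() - y_val) ** 2))
r = np.corrcoef(pls.predict(X_val).ravel(), y_val)[0, 1]
print(f"RMSEC={rmsec:.2f}  RMSEP={rmsep:.2f}  r={r:.3f}")
```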
Quantitative transmission Raman spectroscopy of pharmaceutical tablets and capsules.
Johansson, Jonas; Sparén, Anders; Svensson, Olof; Folestad, Staffan; Claybourn, Mike
2007-11-01
Quantitative analysis of pharmaceutical formulations using the new approach of transmission Raman spectroscopy has been investigated. For comparison, measurements were also made in conventional backscatter mode. The experimental setup consisted of a Raman probe-based spectrometer with 785 nm excitation for measurements in backscatter mode. In transmission mode the same system was used to detect the Raman scattered light, while an external diode laser of the same type was used as excitation source. Quantitative partial least squares models were developed for both measurement modes. The results for tablets show that the prediction error for an independent test set was lower for the transmission measurements with a relative root mean square error of about 2.2% as compared with 2.9% for the backscatter mode. Furthermore, the models were simpler in the transmission case, for which only a single partial least squares (PLS) component was required to explain the variation. The main reason for the improvement using the transmission mode is a more representative sampling of the tablets compared with the backscatter mode. Capsules containing mixtures of pharmaceutical powders were also assessed by transmission only. The quantitative results for the capsules' contents were good, with a prediction error of 3.6% w/w for an independent test set. The advantage of transmission Raman over backscatter Raman spectroscopy has been demonstrated for quantitative analysis of pharmaceutical formulations, and the prospects for reliable, lean calibrations for pharmaceutical analysis is discussed.
Wang, L; Qin, X C; Lin, H C; Deng, K F; Luo, Y W; Sun, Q R; Du, Q X; Wang, Z Y; Tuo, Y; Sun, J H
2018-02-01
To analyse the relationship between Fourier transform infrared (FTIR) spectrum of rat's spleen tissue and postmortem interval (PMI) for PMI estimation using FTIR spectroscopy combined with data mining method. Rats were sacrificed by cervical dislocation, and the cadavers were placed at 20 ℃. The FTIR spectrum data of rats' spleen tissues were taken and measured at different time points. After pretreatment, the data was analysed by data mining method. The absorption peak intensity of rat's spleen tissue spectrum changed with the PMI, while the absorption peak position was unchanged. The results of principal component analysis (PCA) showed that the cumulative contribution rate of the first three principal components was 96%. There was an obvious clustering tendency for the spectrum sample at each time point. The methods of partial least squares discriminant analysis (PLS-DA) and support vector machine classification (SVMC) effectively divided the spectrum samples with different PMI into four categories (0-24 h, 48-72 h, 96-120 h and 144-168 h). The determination coefficient ( R ²) of the PMI estimation model established by PLS regression analysis was 0.96, and the root mean square error of calibration (RMSEC) and root mean square error of cross validation (RMSECV) were 9.90 h and 11.39 h respectively. In prediction set, the R ² was 0.97, and the root mean square error of prediction (RMSEP) was 10.49 h. The FTIR spectrum of the rat's spleen tissue can be effectively analyzed qualitatively and quantitatively by the combination of FTIR spectroscopy and data mining method, and the classification and PLS regression models can be established for PMI estimation. Copyright© by the Editorial Department of Journal of Forensic Medicine.
Analysis of Learning Curve Fitting Techniques.
1987-09-01
Random errors are assumed to be normally distributed when using ordinary least-squares. For a more detailed explanation of the ordinary least-squares technique, see Neter et al., Applied Linear Regression Models (Homewood, IL: Irwin) and SAS User's Guide: Basics, Version 5 Edition.
First-Order System Least-Squares for the Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Bochev, P.; Cai, Z.; Manteuffel, T. A.; McCormick, S. F.
1996-01-01
This paper develops a least-squares approach to the solution of the incompressible Navier-Stokes equations in primitive variables. As with our earlier work on Stokes equations, we recast the Navier-Stokes equations as a first-order system by introducing a velocity flux variable and associated curl and trace equations. We show that the resulting system is well-posed, and that an associated least-squares principle yields optimal discretization error estimates in the H(sup 1) norm in each variable (including the velocity flux) and optimal multigrid convergence estimates for the resulting algebraic system.
Mirjankar, Nikhil S; Fraga, Carlos G; Carman, April J; Moran, James J
2016-02-02
Chemical attribution signatures (CAS) for chemical threat agents (CTAs), such as cyanides, are being investigated to provide an evidentiary link between CTAs and specific sources to support criminal investigations and prosecutions. Herein, stocks of KCN and NaCN were analyzed for trace anions by high performance ion chromatography (HPIC), carbon stable isotope ratio (δ(13)C) by isotope ratio mass spectrometry (IRMS), and trace elements by inductively coupled plasma optical emission spectroscopy (ICP-OES). The collected analytical data were evaluated using hierarchical cluster analysis (HCA), Fisher-ratio (F-ratio), interval partial least-squares (iPLS), genetic algorithm-based partial least-squares (GAPLS), partial least-squares discriminant analysis (PLSDA), K nearest neighbors (KNN), and support vector machines discriminant analysis (SVMDA). HCA of anion impurity profiles from multiple cyanide stocks from six reported countries of origin resulted in cyanide samples clustering into three groups, independent of the associated alkali metal (K or Na). The three groups were independently corroborated by HCA of cyanide elemental profiles and corresponded to countries each having one known solid cyanide factory: Czech Republic, Germany, and United States. Carbon stable isotope measurements resulted in two clusters: Germany and United States (the single Czech stock grouped with United States stocks). Classification errors for two validation studies using anion impurity profiles collected over five years on different instruments were as low as zero for KNN and SVMDA, demonstrating the excellent reliability associated with using anion impurities for matching a cyanide sample to its factory using our current cyanide stocks. Variable selection methods reduced errors for those classification methods having errors greater than zero; iPLS-forward selection and F-ratio typically provided the lowest errors. Finally, using anion profiles to classify cyanides to a specific stock or stock group for a subset of United States stocks resulted in cross-validation errors ranging from 0 to 5.3%.
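The classification-with-cross-validation step described above can be illustrated as follows; the anion-impurity profiles, number of sources, and classifier settings are synthetic and illustrative, not the study's data or tuned parameters.

```python
# KNN and SVM source classification scored by cross-validation on synthetic
# anion-impurity profiles grouped by notional factory of origin.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(12)
n_per_source, n_anions = 20, 6
centers = rng.uniform(0, 10, size=(3, n_anions))           # 3 notional factories
X = np.vstack([c + rng.normal(0, 0.5, size=(n_per_source, n_anions))
               for c in centers])
y = np.repeat([0, 1, 2], n_per_source)                      # source labels

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=3)),
                  ("SVM", SVC(kernel='rbf', C=10.0))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: cross-validation error = {100 * (1 - acc):.1f}%")
```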
NASA Astrophysics Data System (ADS)
Wang, Yan-Jun; Liu, Qun
1999-03-01
Analysis of stock-recruitment (SR) data is most often done by fitting various SR relationship curves to the data. Fish population dynamics data often have stochastic variations and measurement errors, which usually result in a biased regression analysis. This paper presents a robust regression method, least median of squared orthogonal distance (LMD), which is insensitive to abnormal values in the dependent and independent variables in a regression analysis. Outliers that have significantly different variance from the rest of the data can be identified in a residual analysis. Then, the least squares (LS) method is applied to the SR data with defined outliers being down weighted. The application of LMD and LMD-based Reweighted Least Squares (RLS) method to simulated and real fisheries SR data is explored.
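A rough sketch of a least-median-of-squares style fit followed by reweighted least squares is shown below. The random-sampling search, outlier threshold, and simulated stock-recruitment data are illustrative assumptions, not the paper's exact procedure.

```python
# LMedS-style robust line fit: search random 2-point candidate lines for the
# smallest median squared residual, then refit with outliers down-weighted.
import numpy as np

rng = np.random.default_rng(4)
stock = rng.uniform(1, 10, size=60)
recruit = 2.0 * stock + rng.normal(0, 1, size=60)
recruit[:5] += 20.0                                    # a few gross outliers

best_params, best_median = None, np.inf
for _ in range(500):                                   # random candidate fits
    i, j = rng.choice(len(stock), size=2, replace=False)
    slope = (recruit[j] - recruit[i]) / (stock[j] - stock[i])
    intercept = recruit[i] - slope * stock[i]
    med = np.median((recruit - (slope * stock + intercept)) ** 2)
    if med < best_median:
        best_median, best_params = med, (slope, intercept)

slope, intercept = best_params
resid = recruit - (slope * stock + intercept)
scale = 1.4826 * np.sqrt(best_median)                  # robust scale estimate
weights = (np.abs(resid) < 2.5 * scale).astype(float)  # 0/1 weights: drop outliers

A = np.column_stack([stock, np.ones_like(stock)])
w_slope, w_intercept = np.linalg.lstsq(A * weights[:, None],
                                       recruit * weights, rcond=None)[0]
print("reweighted LS fit:", w_slope, w_intercept)
```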
Network Adjustment of Orbit Errors in SAR Interferometry
NASA Astrophysics Data System (ADS)
Bahr, Hermann; Hanssen, Ramon
2010-03-01
Orbit errors can induce significant long wavelength error signals in synthetic aperture radar (SAR) interferograms and thus bias estimates of wide-scale deformation phenomena. The presented approach aims to correct orbit errors in a preprocessing step to deformation analysis by modifying state vectors. Whereas absolute errors in the orbital trajectory are negligible, the influence of relative errors (baseline errors) is parametrised by their parallel and perpendicular components as a linear function of time. As the sensitivity of the interferometric phase is only significant with respect to the perpendicular baseline and the rate of change of the parallel baseline, the algorithm focuses on estimating updates to these two parameters. This is achieved by a least squares approach, where the unwrapped residual interferometric phase is observed and atmospheric contributions are considered to be stochastic with constant mean. To enhance reliability, baseline errors are adjusted in an overdetermined network of interferograms, yielding individual orbit corrections per acquisition.
Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Moision, Bruce E.
2010-01-01
Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
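The maximum-likelihood estimate from Poisson photon counts can be sketched as a grid search over candidate delays, maximizing the Poisson log-likelihood of the binned counts. The pulse shape, rates, and grid below are illustrative assumptions.

```python
# ML pulse arrival-time estimation from photon-counting (Poisson) data.
import numpy as np

rng = np.random.default_rng(5)
dt = 1e-3                                   # bin width (arbitrary time units)
t = np.arange(0, 1, dt)

def rate(delay, signal=300.0, background=20.0, width=0.05):
    pulse = np.exp(-0.5 * ((t - delay) / width) ** 2)
    return background + signal * pulse      # photon rate per unit time

true_delay = 0.42
counts = rng.poisson(rate(true_delay) * dt)             # observed photon counts per bin

candidates = np.arange(0.0, 1.0, 0.005)
loglik = [np.sum(counts * np.log(rate(d) * dt) - rate(d) * dt)   # Poisson log-likelihood
          for d in candidates]
print("ML delay estimate:", candidates[int(np.argmax(loglik))])
```

Repeating this over many simulated realizations and tabulating the squared estimation error reproduces the kind of mean-square-error-versus-signal-power curves discussed above.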
Evrendilek, Fatih
2007-12-12
This study aims at quantifying spatio-temporal dynamics of monthly mean daily incident photosynthetically active radiation (PAR) over a vast and complex terrain such as Turkey. The spatial interpolation method of universal kriging, and the combination of multiple linear regression (MLR) models and map algebra techniques were implemented to generate surface maps of PAR with a grid resolution of 500 x 500 m as a function of five geographical and 14 climatic variables. Performance of the geostatistical and MLR models was compared using mean prediction error (MPE), root-mean-square prediction error (RMSPE), average standard prediction error (ASE), mean standardized prediction error (MSPE), root-mean-square standardized prediction error (RMSSPE), and adjusted coefficient of determination (R² adj.). The best-fit MLR- and universal kriging-generated models of monthly mean daily PAR were validated against an independent 37-year observed dataset of 35 climate stations derived from 160 stations across Turkey by the Jackknifing method. The spatial variability patterns of monthly mean daily incident PAR were more accurately reflected in the surface maps created by the MLR-based models than in those created by the universal kriging method, in particular, for spring (May) and autumn (November). The MLR-based spatial interpolation algorithms of PAR described in this study indicated the significance of the multifactor approach to understanding and mapping spatio-temporal dynamics of PAR for a complex terrain over meso-scales.
Neural self-tuning adaptive control of non-minimum phase system
NASA Technical Reports Server (NTRS)
Ho, Long T.; Bialasiewicz, Jan T.; Ho, Hai T.
1993-01-01
The motivation for this research came about when a neural network direct adaptive control scheme was applied to control the tip position of a flexible robotic arm. Satisfactory control performance was not attainable due to the inherent non-minimum phase characteristics of the flexible robotic arm tip. Most of the existing neural network control algorithms are based on the direct method and exhibit highly sensitive, if not unstable, closed-loop behavior. Therefore, a neural self-tuning control (NSTC) algorithm was developed and applied to this problem, showing promising results. Simulation results of the NSTC scheme and the conventional self-tuning (STR) control scheme are used to examine performance factors such as control tracking mean square error, estimation mean square error, transient response, and steady state response.
A channel estimation scheme for MIMO-OFDM systems
NASA Astrophysics Data System (ADS)
He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen
2017-08-01
In view of the contradiction of the time-domain least squares (LS) channel estimation performance and the practical realization complexity, a reduced complexity channel estimation method for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) based on pilot is obtained. This approach can transform the complexity of MIMO-OFDM channel estimation problem into a simple single input single output-orthogonal frequency division multiplexing (SISO-OFDM) channel estimation problem and therefore there is no need for large matrix pseudo-inverse, which greatly reduces the complexity of algorithms. Simulation results show that the bit error rate (BER) performance of the obtained method with time orthogonal training sequences and linear minimum mean square error (LMMSE) criteria is better than that of time-domain LS estimator and nearly optimal performance.
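A toy pilot-based least-squares channel estimate for a single OFDM symbol is sketched below: H_ls = Y/X on pilot subcarriers followed by interpolation to data subcarriers, which avoids any large matrix pseudo-inverse. The subcarrier count, pilot spacing, channel, and noise level are illustrative.

```python
# Toy LS pilot-based channel estimation for one OFDM symbol.
import numpy as np

rng = np.random.default_rng(6)
n_sub = 64
h_true = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)   # 4-tap channel
H_true = np.fft.fft(h_true, n_sub)                                     # frequency response

X = np.sign(rng.normal(size=n_sub)) + 0j             # BPSK symbols on all subcarriers
noise = (rng.normal(size=n_sub) + 1j * rng.normal(size=n_sub)) * 0.05
Y = H_true * X + noise                                # received frequency-domain symbol

pilot_idx = np.arange(0, n_sub, 8)                    # every 8th subcarrier is a pilot
H_ls_pilots = Y[pilot_idx] / X[pilot_idx]             # LS estimate at pilot positions

# Linear interpolation of real and imaginary parts to all subcarriers.
H_hat = np.interp(np.arange(n_sub), pilot_idx, H_ls_pilots.real) \
        + 1j * np.interp(np.arange(n_sub), pilot_idx, H_ls_pilots.imag)
print("mean squared channel error:", np.mean(np.abs(H_hat - H_true) ** 2))
```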
On the robustness of a Bayes estimate. [in reliability theory
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1974-01-01
This paper examines the robustness of a Bayes estimator with respect to the assigned prior distribution. A Bayesian analysis for a stochastic scale parameter of a Weibull failure model is summarized in which the natural conjugate is assigned as the prior distribution of the random parameter. The sensitivity analysis is carried out by the Monte Carlo method in which, although an inverted gamma is the assigned prior, realizations are generated using distribution functions of varying shape. For several distributional forms and even for some fixed values of the parameter, simulated mean squared errors of Bayes and minimum variance unbiased estimators are determined and compared. Results indicate that the Bayes estimator remains squared-error superior and appears to be largely robust to the form of the assigned prior distribution.
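A Monte Carlo comparison of a Bayes estimator against the unbiased sample-mean estimator can be sketched for the simplified case of an exponential scale parameter (Weibull with shape fixed at 1) under an inverse-gamma conjugate prior; the hyperparameters and sample size are illustrative assumptions, not the paper's settings.

```python
# Monte Carlo comparison of Bayes (posterior-mean) and unbiased sample-mean
# estimators of an exponential scale parameter.
import numpy as np

rng = np.random.default_rng(7)
theta_true = 2.0            # true scale parameter
n, reps = 10, 20000
alpha, beta = 3.0, 4.0      # assumed inverse-gamma prior hyperparameters

mse_bayes = mse_mvu = 0.0
for _ in range(reps):
    x = rng.exponential(theta_true, size=n)
    bayes = (beta + x.sum()) / (alpha + n - 1)       # posterior mean of the scale
    mvu = x.mean()                                   # unbiased estimator
    mse_bayes += (bayes - theta_true) ** 2
    mse_mvu += (mvu - theta_true) ** 2

print("Bayes MSE:", mse_bayes / reps, " sample-mean MSE:", mse_mvu / reps)
```

Repeating the loop with data drawn from priors of varying shape mirrors the robustness check described above.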
Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators
NASA Astrophysics Data System (ADS)
Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.
2018-03-01
We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.
Radiation-induced refraction artifacts in the optical CT readout of polymer gel dosimeters.
Campbell, Warren G; Wells, Derek M; Jirasek, Andrew
2014-11-01
The objective of this work is to demonstrate imaging artifacts that can occur during the optical computed tomography (CT) scanning of polymer gel dosimeters due to radiation-induced refractive index (RI) changes in polyacrylamide gels. A 1 L cylindrical polyacrylamide gel dosimeter was irradiated with 3 × 3 cm(2) square beams of 6 MV photons. A prototype fan-beam optical CT scanner was used to image the dosimeter. Investigative optical CT scans were performed to examine two types of rayline bending: (i) bending within the plane of the fan-beam and (ii) bending out the plane of the fan-beam. To address structured errors, an iterative Savitzky-Golay (ISG) filtering routine was designed to filter 2D projections in sinogram space. For comparison, 2D projections were alternatively filtered using an adaptive-mean (AM) filter. In-plane rayline bending was most notably observed in optical CT projections where rays of the fan-beam confronted a sustained dose gradient that was perpendicular to their trajectory but within the fan-beam plane. These errors caused distinct streaking artifacts in image reconstructions due to the refraction of higher intensity rays toward more opaque regions of the dosimeter. Out-of-plane rayline bending was observed in slices of the dosimeter that featured dose gradients perpendicular to the plane of the fan-beam. These errors caused widespread, severe overestimations of dose in image reconstructions due to the higher-than-actual opacity that is perceived by the scanner when light is bent off of the detector array. The ISG filtering routine outperformed AM filtering for both in-plane and out-of-plane rayline errors caused by radiation-induced RI changes. For in-plane rayline errors, streaks in an irradiated region (>7 Gy) were as high as 49% for unfiltered data, 14% for AM, and 6% for ISG. For out-of-plane rayline errors, overestimations of dose in a low-dose region (∼50 cGy) were as high as 13 Gy for unfiltered data, 10 Gy for AM, and 3.1 Gy for ISG. The ISG routine also addressed unrelated artifacts that previously needed to be manually removed in sinogram space. However, the ISG routine blurred reconstructions, causing losses in spatial resolution of ∼5 mm in the plane of the fan-beam and ∼8 mm perpendicular to the fan-beam. This paper reveals a new category of imaging artifacts that can affect the optical CT readout of polyacrylamide gel dosimeters. Investigative scans show that radiation-induced RI changes can cause significant rayline errors when rays confront a prolonged dose gradient that runs perpendicular to their trajectory. In fan-beam optical CT, these errors manifested in two ways: (1) distinct streaking artifacts caused by in-plane rayline bending and (2) severe overestimations of opacity caused by rays bending out of the fan-beam plane and missing the detector array. Although the ISG filtering routine mitigated these errors better than an adaptive-mean filtering routine, it caused unacceptable losses in spatial resolution.
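The iterative Savitzky-Golay idea can be sketched on a single 1D projection: points that deviate strongly from the smoothed curve are replaced by the smoothed value and the filter is reapplied. The window length, polynomial order, threshold, and iteration count below are illustrative, not the paper's values, and the real routine operates on 2D projections in sinogram space.

```python
# Iterative Savitzky-Golay (ISG-style) repair of spike artifacts in a
# 1D projection (illustrative parameters).
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(8)
projection = np.sin(np.linspace(0, np.pi, 200)) + 0.02 * rng.normal(size=200)
projection[[40, 41, 120]] += 0.8           # simulated refraction spikes

repaired = projection.copy()
for _ in range(5):                                          # iterate the filter
    smooth = savgol_filter(repaired, window_length=15, polyorder=3)
    resid = repaired - smooth
    bad = np.abs(resid) > 3.0 * np.std(resid)               # flag outlier samples
    repaired[bad] = smooth[bad]                             # replace, then re-smooth

print("max spike before:", np.abs(projection - savgol_filter(projection, 15, 3)).max())
print("max spike after: ", np.abs(repaired - savgol_filter(repaired, 15, 3)).max())
```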
NASA Astrophysics Data System (ADS)
Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.
2012-08-01
Due to the health impacts caused by exposures to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool based on the measured values of CO concentrations. The CO concentrations of Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS-SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS-SVM has better accuracy. In the analysis presented in this paper, statistical estimators including relative mean error, root mean squared error and mean absolute relative error have been employed to compare performances of the models. It has been concluded that the errors decrease after size reduction and coefficients of determination increase from 56-81% for the SVM model to 65-85% for the hybrid PLS-SVM model. Also it was found that the hybrid PLS-SVM model required lower computational time than the SVM model as expected, hence supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.
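The hybrid idea, PLS as a dimension-reduction step followed by a support vector regressor on the reduced scores, can be sketched as below. The features, target, and model settings are synthetic placeholders, not the Tehran CO dataset or the tuned models.

```python
# Hybrid PLS-SVR: reduce the predictor space with PLS, then regress with SVR.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
X = rng.normal(size=(800, 30))                            # meteorological + lagged features
y = X[:, :3].sum(axis=1) + 0.3 * rng.normal(size=800)     # CO-like target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=5).fit(X_tr, y_tr)       # size reduction step
Z_tr, Z_te = pls.transform(X_tr), pls.transform(X_te)

svr = SVR(kernel='rbf', C=10.0).fit(Z_tr, y_tr)           # predictor on PLS scores
pred = svr.predict(Z_te)
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
print("hybrid PLS-SVR RMSE:", rmse)
```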
Four Types of Pulse Oximeters Accurately Detect Hypoxia during Low Perfusion and Motion.
Louie, Aaron; Feiner, John R; Bickler, Philip E; Rhodes, Laura; Bernstein, Michael; Lucero, Jennifer
2018-03-01
Pulse oximeter performance is degraded by motion artifacts and low perfusion. Manufacturers developed algorithms to improve instrument performance during these challenges. There have been no independent comparisons of these devices. We evaluated the performance of four pulse oximeters (Masimo Radical-7, USA; Nihon Kohden OxyPal Neo, Japan; Nellcor N-600, USA; and Philips Intellivue MP5, USA) in 10 healthy adult volunteers. Three motions were evaluated: tapping, pseudorandom, and volunteer-generated rubbing, adjusted to produce photoplethysmogram disturbance similar to arterial pulsation amplitude. During motion, inspired gases were adjusted to achieve stable target plateaus of arterial oxygen saturation (SaO2) at 75%, 88%, and 100%. Pulse oximeter readings were compared with simultaneous arterial blood samples to calculate bias (oxygen saturation measured by pulse oximetry [SpO2] - SaO2), mean, SD, 95% limits of agreement, and root mean square error. Receiver operating characteristic curves were determined to detect mild (SaO2 < 90%) and severe (SaO2 < 80%) hypoxemia. Pulse oximeter readings corresponding to 190 blood samples were analyzed. All oximeters detected hypoxia but motion and low perfusion degraded performance. Three of four oximeters (Masimo, Nellcor, and Philips) had root mean square error greater than 3% for SaO2 70 to 100% during any motion, compared to a root mean square error of 1.8% for the stationary control. A low perfusion index increased error. All oximeters detected hypoxemia during motion and low-perfusion conditions, but motion impaired performance at all ranges, with less accuracy at lower SaO2. Lower perfusion degraded performance in all but the Nihon Kohden instrument. We conclude that different types of pulse oximeters can be similarly effective in preserving sensitivity to clinically relevant hypoxia.
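The agreement statistics named above (bias, SD, 95% limits of agreement, root mean square error) are straightforward to compute for paired readings; the numbers below are fabricated placeholders, not study data.

```python
# Agreement statistics for paired SpO2 (pulse oximeter) and SaO2 (blood gas)
# readings: bias, SD, 95% limits of agreement, and RMSE.
import numpy as np

spo2 = np.array([98, 90, 76, 88, 74, 99, 87, 77, 92, 80], dtype=float)
sao2 = np.array([97, 92, 75, 86, 76, 98, 88, 75, 91, 82], dtype=float)

diff = spo2 - sao2
bias = diff.mean()                            # mean difference (SpO2 - SaO2)
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)    # 95% limits of agreement
rmse = np.sqrt(np.mean(diff ** 2))            # root mean square error
print("bias:", bias, " SD:", sd, " LoA:", loa, " RMSE:", rmse)
```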
Estimation of water level and steam temperature using ensemble Kalman filter square root (EnKF-SR)
NASA Astrophysics Data System (ADS)
Herlambang, T.; Mufarrikoh, Z.; Karya, D. F.; Rahmalia, D.
2018-04-01
The equipment unit with the most vital role in a steam-powered electric power plant is the boiler. The steam drum of a boiler is a tank that separates the fluid into gas and liquid phases, and it plays a vital role in the boiler system. The controlled variables in the steam drum boiler are the water level and the steam temperature. If the water level rises above the set level, the gas phase will carry liquid along with the steam, endangering the downstream process, reducing the steam delivered to the turbine, and causing damage to pipes in the boiler. Conversely, if the water level falls below the set level, dry steam results, which is likely to damage the steam drum. This paper studied the implementation of the Ensemble Kalman Filter Square Root (EnKF-SR) method on a nonlinear model of the steam drum boiler equations. The water level and steam temperature were estimated by simulation using Matlab software, and the error between the set values and the estimated water level and steam temperature was examined. The simulation of EnKF-SR on the nonlinear steam drum boiler model showed that the error was less than 2%. The implementation of EnKF-SR on the steam drum boiler model comprised three simulations, generating 200, 300 and 400 ensemble members; the best result, with the smallest error between the real condition and the estimate, was obtained with 400 ensemble members. The water level error was on the order of 0.00002145 m, while the steam temperature error was about 0.00002121 kelvin.
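For illustration, a single ensemble Kalman filter analysis step for a two-state system (water level, steam temperature) is sketched below. This is the basic perturbed-observation EnKF, shown as a simplified stand-in for the square-root variant (EnKF-SR) used in the study; the dynamics are omitted and all values are illustrative.

```python
# One perturbed-observation EnKF analysis step for a two-state boiler model.
import numpy as np

rng = np.random.default_rng(10)
n_ens, n_state = 200, 2
truth = np.array([1.2, 520.0])                       # water level (m), steam temperature (K)

ensemble = truth + rng.normal(0, [0.05, 2.0], size=(n_ens, n_state))   # forecast ensemble

H = np.eye(n_state)                                  # both states observed directly
R = np.diag([0.02 ** 2, 1.0 ** 2])                   # observation error covariance
obs = truth + rng.multivariate_normal(np.zeros(2), R)

mean = ensemble.mean(axis=0)
A = ensemble - mean                                  # ensemble anomalies
P = A.T @ A / (n_ens - 1)                            # sample forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain

perturbed_obs = obs + rng.multivariate_normal(np.zeros(2), R, size=n_ens)
analysis = ensemble + (perturbed_obs - ensemble @ H.T) @ K.T

print("analysis mean:", analysis.mean(axis=0), " truth:", truth)
```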
Demand forecasting of electricity in Indonesia with limited historical data
NASA Astrophysics Data System (ADS)
Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif
2018-03-01
Demand forecasting of electricity is an important activity for electricity providers to understand future electricity demand. Electricity demand can be predicted using time series models. In this paper, the double moving average model, Holt's exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The results show that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
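A minimal GM(1,1) sketch with the error measures named above follows; the five-point demand series is a made-up placeholder, chosen only to show the accumulated-generating-operation and least-squares steps of the grey model.

```python
# GM(1,1) grey forecasting with MAE, MSE, and MAPE on a placeholder series.
import numpy as np

x0 = np.array([120.0, 130.0, 141.0, 151.0, 163.0])      # observed demand
x1 = np.cumsum(x0)                                      # accumulated series (AGO)
z1 = 0.5 * (x1[1:] + x1[:-1])                           # mean generating sequence

B = np.column_stack([-z1, np.ones_like(z1)])
Y = x0[1:]
a, b = np.linalg.lstsq(B, Y, rcond=None)[0]             # developing and grey input coefficients

def gm11_forecast(k):                                   # k = 0, 1, 2, ...
    if k == 0:
        return x0[0]
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_hat - x1_prev                             # restore by differencing

fitted = np.array([gm11_forecast(k) for k in range(len(x0))])
err = x0 - fitted
print("MAE:", np.abs(err).mean(),
      " MSE:", (err ** 2).mean(),
      " MAPE (%):", np.abs(err / x0).mean() * 100)
print("next-period forecast:", gm11_forecast(len(x0)))
```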
Low complexity adaptive equalizers for underwater acoustic communications
NASA Astrophysics Data System (ADS)
Soflaei, Masoumeh; Azmi, Paeiz
2014-08-01
Interference signals due to scattering from surface and reflecting from bottom is one of the most important problems of reliable communications in shallow water channels. To solve this problem, one of the best suggested ways is to use adaptive equalizers. Convergence rate and misadjustment error in adaptive algorithms play important roles in adaptive equalizer performance. In this paper, affine projection algorithm (APA), selective regressor APA(SR-APA), family of selective partial update (SPU) algorithms, family of set-membership (SM) algorithms and selective partial update selective regressor APA (SPU-SR-APA) are compared with conventional algorithms such as the least mean square (LMS) in underwater acoustic communications. We apply experimental data from the Strait of Hormuz for demonstrating the efficiency of the proposed methods over shallow water channel. We observe that the values of the steady-state mean square error (MSE) of SR-APA, SPU-APA, SPU-normalized least mean square (SPU-NLMS), SPU-SR-APA, SM-APA and SM-NLMS algorithms decrease in comparison with the LMS algorithm. Also these algorithms have better convergence rates than LMS type algorithm.
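The normalized LMS baseline against which the APA/SPU/SM variants above are compared can be sketched as a simple adaptive equalizer; the channel, step size, tap count, and decision delay are illustrative.

```python
# Basic NLMS adaptive equalizer on a BPSK sequence through a toy multipath channel.
import numpy as np

rng = np.random.default_rng(11)
n_sym, n_taps, mu, eps = 5000, 11, 0.5, 1e-6
symbols = np.sign(rng.normal(size=n_sym))                # BPSK source
channel = np.array([0.1, 0.9, 0.3, -0.1])                # simple multipath channel
received = np.convolve(symbols, channel, mode='full')[:n_sym]
received += 0.05 * rng.normal(size=n_sym)                # additive noise

w = np.zeros(n_taps)
delay = 5                                                # decision delay
mse_log = []
for n in range(n_taps - 1, n_sym):
    x = received[n - n_taps + 1:n + 1][::-1]             # regressor, most recent sample first
    e = symbols[n - delay] - w @ x                       # a priori error
    w += mu * e * x / (eps + x @ x)                      # NLMS coefficient update
    mse_log.append(e ** 2)

print("steady-state MSE:", np.mean(mse_log[-500:]))
```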
Multiple point least squares equalization in a room
NASA Technical Reports Server (NTRS)
Elliott, S. J.; Nelson, P. A.
1988-01-01
Equalization filters designed to minimize the mean square error between a delayed version of the original electrical signal and the equalized response at a point in a room have previously been investigated. In general, such a strategy degrades the response at positions in a room away from the equalization point. A method is presented for designing an equalization filter by adjusting the filter coefficients to minimize the sum of the squares of the errors between the equalized responses at multiple points in the room and delayed versions of the original, electrical signal. Such an equalization filter can give a more uniform frequency response over a greater volume of the enclosure than can the single point equalizer above. Computer simulation results are presented of equalizing the frequency responses from a loudspeaker to various typical ear positions, in a room with dimensions and acoustic damping typical of a car interior, using the two approaches outlined above. Adaptive filter algorithms, which can automatically adjust the coefficients of a digital equalization filter to achieve this minimization, will also be discussed.
A deep proper motion catalog within the Sloan digital sky survey footprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munn, Jeffrey A.; Harris, Hugh C.; Tilleman, Trudy M.
2014-12-01
A new proper motion catalog is presented, combining the Sloan Digital Sky Survey (SDSS) with second epoch observations in the r band within a portion of the SDSS imaging footprint. The new observations were obtained with the 90prime camera on the Steward Observatory Bok 90 inch telescope, and the Array Camera on the U.S. Naval Observatory, Flagstaff Station, 1.3 m telescope. The catalog covers 1098 square degrees to r = 22.0, an additional 1521 square degrees to r = 20.9, plus a further 488 square degrees of lesser quality data. Statistical errors in the proper motions range from 5 mas year⁻¹ at the bright end to 15 mas year⁻¹ at the faint end, for a typical epoch difference of six years. Systematic errors are estimated to be roughly 1 mas year⁻¹ for the Array Camera data, and as much as 2–4 mas year⁻¹ for the 90prime data (though typically less). The catalog also includes a second epoch of r band photometry.
Hao, Z Q; Li, C M; Shen, M; Yang, X Y; Li, K H; Guo, L B; Li, X Y; Lu, Y F; Zeng, X Y
2015-03-23
Laser-induced breakdown spectroscopy (LIBS) with partial least squares regression (PLSR) has been applied to measuring the acidity of iron ore, which can be defined by the concentrations of oxides: CaO, MgO, Al₂O₃, and SiO₂. With the conventional internal standard calibration, it is difficult to establish the calibration curves of CaO, MgO, Al₂O₃, and SiO₂ in iron ore due to the serious matrix effects. PLSR is effective to address this problem due to its excellent performance in compensating the matrix effects. In this work, fifty samples were used to construct the PLSR calibration models for the above-mentioned oxides. These calibration models were validated by the 10-fold cross-validation method with the minimum root-mean-square errors (RMSE). Another ten samples were used as a test set. The acidities were calculated according to the estimated concentrations of CaO, MgO, Al₂O₃, and SiO₂ using the PLSR models. The average relative error (ARE) and RMSE of the acidity achieved 3.65% and 0.0048, respectively, for the test samples.
Liu, Yan-de; Ying, Yi-bin; Fu, Xia-ping
2005-11-01
A nondestructive method for quantifying sugar content (SC) and available acid (VA) of intact apples using diffuse near infrared reflectance and optical fiber sensing techniques was explored in the present research. The standard sample sets and prediction models were established by partial least squares analysis (PLS). A total of 120 Shandong Fuji apples were tested in the wave number range of 12,500 - 4000 cm(-1) using Fourier transform near infrared spectroscopy. The results of the research indicated that the nondestructive quantification of SC and VA gave high correlation coefficients of 0.970 and 0.906, low root mean square errors of prediction (RMSEP) of 0.272 and 0.0562, low root mean square errors of calibration (RMSEC) of 0.261 and 0.0677, and small differences between RMSEP and RMSEC of 0.011 and 0.0115. It was suggested that the diffuse near infrared reflectance technique is feasible for nondestructive determination of apple sugar content in the wave number range of 10,341 - 5461 cm(-1) and for available acid in the wave number range of 10,341 - 3818 cm(-1).
Waskitho, Dri; Lukitaningsih, Endang; Sudjadi; Rohman, Abdul
2016-01-01
Analysis of lard extracted from a lipstick formulation containing castor oil has been performed using an FTIR spectroscopic method combined with multivariate calibration. Three different extraction methods were compared, namely a saponification method followed by liquid/liquid extraction with hexane/dichloromethane/ethanol/water, a saponification method followed by liquid/liquid extraction with dichloromethane/ethanol/water, and the Bligh & Dyer method using chloroform/methanol/water as extracting solvent. Qualitative and quantitative analyses of lard were performed using principal component analysis (PCA) and partial least squares (PLS) analysis, respectively. The results showed that, in all samples prepared by the three extraction methods, PCA was capable of identifying lard in the wavelength region of 1200-800 cm⁻¹, with the best result obtained by the Bligh & Dyer method. Furthermore, PLS analysis at the same wavelength region used for qualification showed that Bligh & Dyer was the most suitable extraction method, with the highest coefficient of determination (R²) and the lowest root mean square error of calibration (RMSEC) as well as root mean square error of prediction (RMSEP) values.
Fadzillah, Nurrulhidayah Ahmad; Man, Yaakob bin Che; Rohman, Abdul; Rosman, Arieff Salleh; Ismail, Amin; Mustafa, Shuhaimi; Khatib, Alfi
2015-01-01
The authentication of food products with respect to the presence of components not allowed for certain religions, such as lard, is very important. In this study, we used proton Nuclear Magnetic Resonance ((1)H-NMR) spectroscopy for the analysis of butter adulterated with lard by simultaneous quantification of all proton-bearing compounds, and consequently all relevant sample classes. Since the spectra obtained were too complex to be analyzed visually by the naked eye, classification of the spectra was carried out. The multivariate calibration method of partial least squares (PLS) regression was used for modelling the relationship between the actual and predicted lard values. The model yielded the highest regression coefficient (R(2)) of 0.998 and the lowest root mean square error of calibration (RMSEC) of 0.0091% and root mean square error of prediction (RMSEP) of 0.0090, respectively. Cross-validation testing evaluated the predictive power of the model. The PLS model was shown to be a good model, as the intercepts of R(2)Y and Q(2)Y were 0.0853 and -0.309, respectively.
Single-Event Effect Performance of a Conductive-Bridge Memory EEPROM
NASA Technical Reports Server (NTRS)
Chen, Dakai; Wilcox, Edward; Berg, Melanie; Kim, Hak; Phan, Anthony; Figueiredo, Marco; Seidleck, Christina; LaBel, Kenneth
2015-01-01
We investigated the heavy ion single-event effect (SEE) susceptibility of the industry's first stand-alone memory based on conductive-bridge memory (CBRAM) technology. The device is available as an electrically erasable programmable read-only memory (EEPROM). We found that single-event functional interrupt (SEFI) is the dominant SEE type for each operational mode (standby, dynamic read, and dynamic write/read). SEFIs occurred even while the device is statically biased in standby mode. Worst case SEFIs resulted in errors that filled the entire memory space. Power cycling did not always clear the errors, so the corrupted cells had to be reprogrammed in some cases. The device is also vulnerable to bit upsets during dynamic write/read tests, although the frequency of the upsets is relatively low. The linear energy transfer threshold for cell upset is between 10 and 20 megaelectron volts per square centimeter per milligram, with an upper limit cross section of 1.6 × 10⁻¹¹ square centimeters per bit (95 percent confidence level) at 10 megaelectron volts per square centimeter per milligram.
Li, Kaiyue; Wang, Weiying; Liu, Yanping; Jiang, Su; Huang, Guo; Ye, Liming
2017-01-01
The active ingredients and thus pharmacological efficacy of traditional Chinese medicine (TCM) at different degrees of the parching process vary greatly. Near-infrared spectroscopy (NIR) was used to develop a new method for rapid online analysis of the TCM parching process, using two kinds of chemical indicators (5-(hydroxymethyl) furfural [5-HMF] content and 420 nm absorbance) as reference values, which are readily observed and change in most TCM parching processes. Three representative TCMs, Areca (Areca catechu L.), Malt (Hordeum vulgare L.), and Hawthorn (Crataegus pinnatifida Bge.), were used in this study. With partial least squares regression, NIR calibration models were generated based on two kinds of reference values, i.e. 5-HMF contents measured by high-performance liquid chromatography (HPLC) and 420 nm absorbance measured by ultraviolet-visible spectroscopy (UV/Vis), respectively. In the optimized models for 5-HMF, the root mean square errors of prediction (RMSEP) for Areca, Malt, and Hawthorn were 0.0192, 0.0301, and 0.2600 and correlation coefficients (R cal) were 99.86%, 99.88%, and 99.88%, respectively. Moreover, in the optimized models using 420 nm absorbance as reference values, the RMSEP for Areca, Malt, and Hawthorn were 0.0229, 0.0096, and 0.0409 and R cal were 99.69%, 99.81%, and 99.62%, respectively. NIR models with 5-HMF content and 420 nm absorbance as reference values can rapidly and effectively identify the three kinds of TCM in different parching processes. This method has great promise to replace current subjective color judgment and time-consuming HPLC or UV/Vis methods and is suitable for rapid online analysis and quality control in TCM industrial manufacturing processes. Near-infrared spectroscopy (NIR) was used to develop a new method for online analysis of the traditional Chinese medicine (TCM) parching process. Calibration and validation models of Areca, Malt, and Hawthorn were generated by partial least squares regression using 5-(hydroxymethyl) furfural contents and 420 nm absorbance as reference values, respectively, which were the main indicator components during the parching process of most TCMs. The established NIR models of the three TCMs had low root mean square errors of prediction and high correlation coefficients. The NIR method has great promise for use in TCM industrial manufacturing processes for rapid online analysis and quality control. Abbreviations used: NIR: Near-infrared spectroscopy; TCM: Traditional Chinese medicine; Areca: Areca catechu L.; Hawthorn: Crataegus pinnatifida Bge.; Malt: Hordeum vulgare L.; 5-HMF: 5-(hydroxymethyl) furfural; PLS: Partial least squares; D: Dimension faction; SLS: Straight line subtraction; MSC: Multiplicative scatter correction; VN: Vector normalization; RMSECV: Root mean square errors of cross-validation; RMSEP: Root mean square errors of validation; R cal: Correlation coefficients; RPD: Residual predictive deviation; PAT: Process analytical technology; FDA: Food and Drug Administration; ICH: International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use.
Simulator Study of Helmet-Mounted Symbology System Concepts in Degraded Visual Environments.
Cheung, Bob; McKinley, Richard A; Steels, Brad; Sceviour, Robert; Cosman, Vaughn; Holst, Peter
2015-07-01
A sudden loss of external visual cues during critical phases of flight results in spatial disorientation. This is due to undetected horizontal and vertical drift when there is little tolerance for error and correction delay as the helicopter is close to the ground. Three helmet-mounted symbology system concepts were investigated in the simulator as potential solutions for the legacy Griffon helicopters. Thirteen Royal Canadian Air Force (RCAF) Griffon pilots were exposed to the Helmet Display Tracking System for Degraded Visual Environments (HDTS), the BrownOut Symbology System (BOSS), and the current RCAF AVS7 symbology system. For each symbology system, the pilot performed a two-stage departure and a single-stage approach. The presentation order of the symbology systems was randomized. Objective performance metrics included aircraft speed, altitude, attitude, and distance from the landing point. Subjective measurements included situation awareness, mental effort, perceived performance, perceptual cue rating, and NASA Task Load Index. Repeated measures analysis of variance and subsequent planned comparison for all the objective and subjective measurements were performed between the AVS7, HDTS, and BOSS. Our results demonstrated that HDTS and BOSS showed general improvement over AVS7 in two-stage departure. However, only HDTS performed significantly better in heading error than AVS7. During the single-stage approach, BOSS performed worse than AVS7 in heading root mean square error, and only HDTS performed significantly better in distance to landing point and approach heading than the others. Both the HDTS and BOSS possess their own limitations; however, HDTS is the pilots' preferred flight display.
In-Flight Study of Helmet-Mounted Symbology System Concepts in Degraded Visual Environments.
Cheung, Bob; Craig, Gregory; Steels, Brad; Sceviour, Robert; Cosman, Vaughn; Jennings, Sion; Holst, Peter
2015-08-01
During approach and departure in rotary wing aircraft, a sudden loss of external visual reference precipitates spatial disorientation. There were 10 Royal Canadian Air Force (RCAF) Griffon pilots who participated in an in-flight investigation of a 3-dimensional conformal Helmet Display Tracking System (HDTS) and the BrownOut Symbology System (BOSS) aboard an Advanced System Research Aircraft. For each symbology system, pilots performed a two-stage departure followed by a single-stage approach. The presentation order of the two symbology systems was randomized across the pilots. Subjective measurements included situation awareness, mental effort, perceived performance, perceptual cue rating, NASA Task Load Index, and physiological response. Objective performance included aircraft speed, altitude, attitude, and distance from the landing point, control position, and control activity. Repeated measures analysis of variance and planned comparison tests for the subjective and objective responses were performed. For both maneuvers, the HDTS system afforded better situation awareness, lower workload, better perceptual cueing in attitude, horizontal and vertical translation, and lower overall workload index. During the two-stage departure, HDTS achieved less lateral drift from initial takeoff and hover, lower root mean square error (RMSE) in altitude during hover, and lower track error during the acceleration to forward flight. During the single-stage approach, HDTS achieved less error in lateral and longitudinal position offset from the landing point and lower RMSE in heading. In both maneuvers, pilots exhibited higher control activity when using HDTS, which suggested that more pertinent information was available to the pilots. Pilots preferred the HDTS system.
Modeling level change in Lake Urmia using hybrid artificial intelligence approaches
NASA Astrophysics Data System (ADS)
Esbati, M.; Ahmadieh Khanesar, M.; Shahzadi, Ali
2017-06-01
The investigation of water level fluctuations in lakes, given the national and regional importance of these water bodies, has attracted increasing attention in recent years. Predicting the water level balance of Lake Urmia is particularly important because of the several-meter fluctuations of the last decade, and reliable predictions can help prevent possible future losses. For this purpose, this paper studies the performance of an adaptive neuro-fuzzy inference system (ANFIS) for predicting the lake water level balance. For training the ANFIS, particle swarm optimization (PSO) and a hybrid backpropagation-recursive least squares algorithm have been used. Moreover, a hybrid training algorithm based on particle swarm optimization and recursive least squares (PSO-RLS) is introduced for the ANFIS structure. For a fairer comparison, hybrid particle swarm optimization with gradient descent is also applied. The models have been trained, tested, and validated on lake level data between 1991 and 2014, and the methods are compared for performance evaluation. Numerical results show that the proposed methods predict the water level balance well, with reasonable error. It is also clear that, if the current trend continues, Lake Urmia will experience a further drop in water level balance in the upcoming years.
[Determination of Hard Rate of Alfalfa (Medicago sativa L.) Seeds with Near Infrared Spectroscopy].
Wang, Xin-xun; Chen, Ling-ling; Zhang, Yun-wei; Mao, Pei-sheng
2016-03-01
Alfalfa (Medicago sativa L.) is the most commonly grown forage crop in China owing to its good quality characteristics and high adaptability. However, 20%-80% of alfalfa seeds are hard seeds, which cannot easily be distinguished from non-hard seeds and which reduce seed utilization value and plant production. This experiment used 121 alfalfa seed samples collected from different regions, harvest years and varieties. Thirty-one samples were artificially blended to hard seed rates ranging from 20% to 80% in order to establish a model for hard seed rate by near infrared spectroscopy (NIRS) with partial least squares (PLS). The objective of this study was to establish such a model and to assess the efficiency of NIRS for determining the hard seed rate of alfalfa. The results showed that the correlation coefficient (R2cal) of the calibration model was 0.9816, the root mean square error of cross validation (RMSECV) was 5.32, and the ratio of prediction to deviation (RPD) was 3.58. The forecast model in this experiment showed satisfactory precision. The proposed NIRS method is feasible for identification and classification of hard seeds in alfalfa, and it provides a theoretical basis for fast, nondestructive detection of hard seed rates in alfalfa.
Costa, Márcio Holsbach
2017-12-01
Feedback cancellation in a hearing aid is essential for achieving high maximum stable gain to compensate for hearing losses in people with severe to profound hearing impairment. The performance of adaptive feedback cancellers has been studied by assuming that the feedback path can be modeled as a linear system. However, limited dynamic range, low-cost loudspeakers, and nonlinear power amplifiers may distort the hearing aid output signal. As a result, linear-model predictions of the canceller performance may deviate significantly from its actual behavior. This work presents a theoretical performance analysis of a Least Mean Square (LMS) based shadow filter that is applied to set up the coefficients of a feedback canceller, which is subject to a static saturation-type nonlinearity at the output of the direct path. Deterministic recursive equations are derived to predict the mean square feedback error and the mean coefficient vector evolution between updates of the feedback canceller. These models are defined as functions of the canceller parameters and input signal statistics. Comparisons with Monte Carlo simulations show that the provided models are highly accurate under the considered assumptions. The developed models allow inferences about the potential impact of an overdriven loudspeaker on the transient performance of the direct-method feedback canceller, serving as insightful tools for understanding the involved mechanisms. Copyright © 2017 Elsevier Ltd. All rights reserved.
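To make the scenario concrete, the sketch below shows an LMS estimator of a feedback path whose loudspeaker output is saturated, so the identified coefficients are biased relative to the true path; this illustrates the effect the paper analyzes rather than reproducing its shadow-filter analysis, and the path, gains, and saturation model are illustrative assumptions.

```python
# Minimal sketch: LMS identification of a feedback path with a saturating output stage.
import numpy as np

rng = np.random.default_rng(1)
N = 20000
g_true = np.array([0.0, 0.2, -0.1, 0.05])   # assumed "true" feedback path (FIR)
L = len(g_true)

u = rng.normal(size=N)                      # loudspeaker drive signal (white, for simplicity)
y = np.tanh(u)                              # static saturation of the output stage
fb = np.convolve(y, g_true)[:N]             # feedback picked up at the microphone
d = fb + 0.01 * rng.normal(size=N)          # microphone signal (feedback plus noise)

w = np.zeros(L)                             # canceller coefficients
mu = 0.01                                   # LMS step size (assumed)
for n in range(L, N):
    x = u[n - L + 1:n + 1][::-1]            # regressor built from the (linear) drive signal
    e = d[n] - w @ x                        # feedback-cancellation error
    w += mu * e * x                         # LMS update

print("estimated path:", np.round(w, 3))    # biased w.r.t. g_true because of the saturation
```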
Li, Qing-bo; Liu, Jie-qiang; Li, Xiang
2012-03-01
A small non-invasive measurement system for human blood glucose has been developed, which can achieve fast, real-time and non-invasive measurement of human blood glucose. The device is mainly composed of four parts, i.e., a fixture, the light system, the data acquisition and processing system, and a spectrometer. A new light-source driving scheme was proposed that meets the light-source requirements under a variety of spectral-acquisition conditions. An integrated fixture design was proposed, which not only simplifies the optical structure of the system but also improves the reproducibility of measurement conditions. The microcontrol system mainly provides control, data processing, and data storage functions. As the most important component, the DSP microprocessor TMS320F2812 offers low power consumption, high processing speed, and high computing capability. Wavelet denoising is used to pretreat the spectral data, which can decrease the loss of incident light and improve the signal-to-noise ratio. A kernel partial least squares method was adopted to build the mathematical model, which improves the precision of the system. In the calibration experiment of the system, the standard values were measured with a One-Touch meter. The correlation coefficient between the standard blood glucose values and the values measured by the system is 0.95. The root mean square error of measurement is 0.6 mmol/L. The system has good reproducibility.
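The wavelet-denoising pretreatment mentioned above can be sketched as follows with PyWavelets; the wavelet family, threshold rule, and synthetic spectrum are assumptions for illustration only.

```python
# Minimal sketch of wavelet soft-threshold denoising of a synthetic spectrum.
import numpy as np
import pywt

rng = np.random.default_rng(11)
x = np.linspace(0, 1, 512)
clean = np.exp(-((x - 0.3) / 0.05) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.1) ** 2)
noisy = clean + 0.05 * rng.normal(size=x.size)

coeffs = pywt.wavedec(noisy, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # robust noise estimate from finest scale
thr = sigma * np.sqrt(2 * np.log(noisy.size))            # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[:noisy.size]

print("RMSE before:", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE after: ", np.sqrt(np.mean((denoised - clean) ** 2)))
```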
A second-order all-digital phase-locked loop
NASA Technical Reports Server (NTRS)
Holmes, J. K.; Tegnelia, C. R.
1974-01-01
A simple second-order digital phase-locked loop has been designed to synchronize itself to a square-wave subcarrier. Analysis and experimental performance are given for both acquisition behavior and steady-state phase error performance. In addition, the damping factor and the noise bandwidth are derived analytically. Although all the data are given for the square-wave subcarrier case, the results are applicable to arbitrary subcarriers that are odd symmetric about their transition region.
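A second-order loop like the one described combines a proportional and an integral correction of the phase detector output. The sketch below tracks the phase of a sinusoidal input rather than the report's square-wave subcarrier, and the sample rate, loop gains, and signal model are illustrative assumptions.

```python
# Minimal sketch of a second-order (proportional-plus-integral) digital PLL.
import numpy as np

fs = 1000.0                      # sample rate (Hz), assumed
f0 = 50.0                        # nominal subcarrier frequency (Hz), assumed
n = np.arange(5000)
phi_in = 2 * np.pi * f0 * n / fs + 0.8        # input phase with a constant offset
x = np.sin(phi_in)

k_p, k_i = 0.2, 0.01             # proportional and integral loop gains (assumed)
phi = 0.0                        # NCO phase estimate
freq = 2 * np.pi * f0 / fs       # NCO frequency estimate (rad/sample)
err_hist = []
for xn in x:
    err = xn * np.cos(phi)       # mixer phase detector (double-frequency term left unfiltered here)
    freq += k_i * err            # integral path adjusts frequency -> second-order behaviour
    phi += freq + k_p * err      # proportional path plus NCO phase advance
    err_hist.append(err)

print("steady-state mean |phase-detector output|:", np.mean(np.abs(err_hist[-500:])))
```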
A weak Galerkin least-squares finite element method for div-curl systems
NASA Astrophysics Data System (ADS)
Li, Jichun; Ye, Xiu; Zhang, Shangyou
2018-06-01
In this paper, we introduce a weak Galerkin least-squares method for solving div-curl problem. This finite element method leads to a symmetric positive definite system and has the flexibility to work with general meshes such as hybrid mesh, polytopal mesh and mesh with hanging nodes. Error estimates of the finite element solution are derived. The numerical examples demonstrate the robustness and flexibility of the proposed method.
Certification in Structural Health Monitoring Systems
2011-09-01
validation [3,8]. This may be accomplished by computing the sum of squares of pure error (SSPE) and its associated squared correlation [3,8]. To compute...these values, a cross-validation sample must be established. In general, if the SSPE is high, the model does not predict well on independent data...plethora of cross-validation methods, some of which are more useful for certain models than others [3,8]. When possible, a disclosure of the SSPE
Ameliorating slice gaps in multislice magnetic resonance images: an interpolation scheme.
Kashou, Nasser H; Smith, Mark A; Roberts, Cynthia J
2015-01-01
Standard two-dimensional (2D) magnetic resonance imaging (MRI) clinical acquisition protocols utilize orthogonal plane images which contain slice gaps (SG). The purpose of this work is to introduce a novel interpolation method for these orthogonal plane MRI 2D datasets. Three goals can be achieved: (1) increasing the resolution based on a priori knowledge of the scanning protocol, (2) ameliorating the loss of data as a result of SG and (3) reconstructing a three-dimensional (3D) dataset from 2D images. MRI data were collected using a 3T GE scanner and simulated using Matlab. The procedure for validating the MRI data combination algorithm was performed using a Shepp-Logan and a Gaussian phantom in both 2D and 3D of varying matrix sizes (64-512), as well as on one MRI dataset of a human brain and on an American College of Radiology magnetic resonance accreditation phantom. The squared error and mean squared error were computed in comparing this scheme to common interpolating functions employed in MR consoles and workstations. The mean structure similarity matrix was computed in 2D as a means of qualitative image assessment. Additionally, MRI scans were used for qualitative assessment of the method. This new scheme was consistently more accurate than upsampling each orientation separately and averaging the upsampled data. An efficient new interpolation approach to resolve SG was developed. This scheme effectively fills in the missing data points by using orthogonal plane images. To date, there have been few attempts to combine the information of three MRI plane orientations using brain images. This has specific applications for clinical MRI, functional MRI, diffusion-weighted imaging/diffusion tensor imaging and MR angiography where 2D slice acquisitions are used. In these cases, the 2D data can be combined using our method in order to obtain a 3D volume.
NASA Astrophysics Data System (ADS)
Weng, Yi; He, Xuan; Wang, Junyi; Pan, Zhongqi
2017-01-01
Spatial-division multiplexing (SDM) techniques have been proposed to increase the capacity of optical fiber transmission links by utilizing multicore fibers or few-mode fibers (FMF). The most challenging impairments of SDM-based long-haul optical links include modal dispersion and mode-dependent loss (MDL); MDL arises from inline component imperfections and breaks modal orthogonality, thus degrading the capacity of multiple-input multiple-output (MIMO) receivers. To reduce MDL, optical approaches include mode scramblers and specialty fiber designs, yet these methods are burdened with high cost and cannot completely remove the accumulated MDL in the link. Besides, space-time trellis codes (STTC) were proposed to lessen MDL, but suffered from high complexity. In this work, we investigated the performance of a space-time block-coding (STBC) scheme to mitigate MDL in SDM-based optical communication by exploiting space and delay diversity, where the weight matrices of frequency-domain equalization (FDE) were updated heuristically using a decision-directed recursive-least-squares (RLS) algorithm for convergence and channel estimation. The STBC was evaluated in a six-mode multiplexed system over 30-km FMF via 6×6 MIMO FDE, with modal gain offset 3 dB, core refractive index 1.49, and numerical aperture 0.5. Results show that the optical-signal-to-noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16- and 64-QAM at their respective bit-error-rates (BER) with minimum-mean-square-error (MMSE) equalization. Besides, we also evaluate the complexity optimization of the STBC decoding scheme with a zero-forcing decision feedback (ZFDF) equalizer by shortening the coding slot length, which is robust to frequency-selective fading channels and can be scaled up for SDM systems with more dynamic channels.
Mutual information estimation for irregularly sampled time series
NASA Astrophysics Data System (ADS)
Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.
2012-04-01
For the automated, objective and joint analysis of time series, similarity measures are crucial. Used in the analysis of climate records, they allow for a complementary, unbiased view onto sparse datasets. The irregular sampling of many of these time series, however, makes it necessary to either perform signal reconstruction (e.g. interpolation) or to develop and use adapted measures. Standard linear interpolation comes with an inevitable loss of information and bias effects. We have recently developed a Gaussian kernel-based correlation algorithm with which the interpolation error can be substantially lowered, but this would not work should the functional relationship in a bivariate setting be non-linear. We therefore propose an algorithm to estimate lagged auto and cross mutual information from irregularly sampled time series. We have extended the standard and adaptive binning histogram estimators and use Gaussian distributed weights in the estimation of the (joint) probabilities. To test our method we have simulated linear and nonlinear auto-regressive processes with Gamma-distributed inter-sampling intervals. We have then performed a sensitivity analysis for the estimation of actual coupling length, the lag of coupling and the decorrelation time in the synthetic time series and contrast our results to the performance of a signal reconstruction scheme. Finally we applied our estimator to speleothem records. We compare the estimated memory (or decorrelation time) to that from a least-squares estimator based on fitting an auto-regressive process of order 1. The calculated (cross) mutual information results are compared for the different estimators (standard or adaptive binning) and contrasted with results from signal reconstruction. We find that the kernel-based estimator has a significantly lower root mean square error and less systematic sampling bias than the interpolation-based method. It is possible that these encouraging results could be further improved by using non-histogram mutual information estimators, like k-Nearest Neighbor or Kernel-Density estimators, but for short (<1000 points) and irregularly sampled datasets the proposed algorithm is already a great improvement.
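The kernel-weighting idea behind such estimators can be illustrated with a Gaussian-kernel lagged correlation for two irregularly sampled series; the kernel width, sampling model, and signals below are illustrative assumptions, and the sketch shows the weighting principle rather than the authors' mutual-information estimator.

```python
# Minimal sketch: Gaussian-kernel lagged correlation for irregularly sampled series.
import numpy as np

def kernel_correlation(tx, x, ty, y, lag, h):
    """Weight every pair (x_i, y_j) by how close t_y[j] - t_x[i] is to `lag`."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    dt = ty[None, :] - tx[:, None] - lag           # pairwise time differences minus the lag
    w = np.exp(-0.5 * (dt / h) ** 2)               # Gaussian weights
    return np.sum(w * np.outer(x, y)) / np.sum(w)  # weighted product-moment estimate

rng = np.random.default_rng(2)
tx = rng.gamma(2.0, 1.0, 300).cumsum()             # Gamma-distributed sampling intervals
ty = rng.gamma(2.0, 1.0, 300).cumsum()
x = np.sin(0.3 * tx) + 0.2 * rng.normal(size=300)
y = np.sin(0.3 * (ty - 2.0)) + 0.2 * rng.normal(size=300)   # y lags x by ~2 time units

lags = np.arange(-5, 6)
c = [kernel_correlation(tx, x, ty, y, lag, h=0.5) for lag in lags]
print("best lag:", lags[int(np.argmax(c))])        # expected to be near 2
```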
Effects of free convection and friction on heat-pulse flowmeter measurement
NASA Astrophysics Data System (ADS)
Lee, Tsai-Ping; Chia, Yeeping; Chen, Jiun-Szu; Chen, Hongey; Liu, Chen-Wuing
2012-03-01
The heat-pulse flowmeter can be used to measure low flow velocities in a borehole; however, bias in the results due to measurement error is often encountered. A carefully designed water circulation system was established in the laboratory to evaluate the accuracy and precision of flow velocity measured by the heat-pulse flowmeter under various conditions. Test results indicated that the coefficient of variation for repeated measurements, ranging from 0.4% to 5.8%, tends to increase with flow velocity. The measurement error increases from 4.6% to 94.4% as the average flow velocity decreases from 1.37 cm/s to 0.18 cm/s. We found that the error resulted primarily from free convection and frictional loss. Free convection plays an important role in heat transport at low flow velocities. The frictional effect varies with the position of measurement and the geometric shape of the inlet and flow-through cell of the flowmeter. Based on the laboratory test data, a calibration equation for the measured flow velocity was derived by least-squares regression analysis. When the flowmeter is used with a diverter, the range of measured flow velocity can be extended, but the measurement error and the coefficient of variation due to friction increase significantly. At higher velocities under turbulent flow conditions, the measurement error is greater than 100%. Our laboratory experimental results suggested that, to avoid large errors, the heat-pulse flowmeter measurement is better conducted in laminar flow and the effect of free convection should be eliminated at all flow velocities. Field measurement of the vertical flow velocity using the heat-pulse flowmeter was tested in a monitoring well. The calibration of measured velocities not only improved the contrast in hydraulic conductivity between permeable and less permeable layers, but also corrected the inconsistency between the pumping rate and the measured flow rate. We identified two highly permeable sections where the horizontal hydraulic conductivity is 3.7-6.4 times the equivalent hydraulic conductivity obtained from the pumping test. The field test results indicated that, with proper calibration, the flowmeter measurement is capable of characterizing the vertical distribution of preferential flow or hydraulic conductivity.
Circulation patterns in the deep Subtropical Northeast Atlantic with ARGO data
NASA Astrophysics Data System (ADS)
Calheiros, Tomas; Bashmachnikov, Igor
2014-05-01
In this work we study the dominant circulation patterns in the Subtropical Northeast Atlantic [25-45° N, 5-35° W] using ARGO data. The data were obtained from the Coriolis operational data center (ftp://ftp.ifremer.fr) for the years 1999-2013. During this period, 376 floats with a total of 15,062 float-months were available in the study area. The floats were launched in the depth range between 300 and 2000 m, but most of the floats were concentrated at 1000 m (2000 float-months) and 1500 m (3400 float-months). In the upper 400-m layer there were also about 1000 float-months, but their number and distribution did not allow analysis of the mean currents over the study region. For each float position, the Lagrangian current velocity was computed as the difference between the position where the buoy started sinking to the reference depth and the subsequent surfacing position of the float, divided by the respective time interval. This allowed reducing the noise related to sea-surface drift of the buoys during the data-transmission periods. The mean Eulerian velocity and its error were computed in each 2°×2° square. Whenever more than 150 observations of the Lagrangian velocity were available in a 2°×2° square, the square was split into four smaller 1°×1° squares, in each of which the mean Eulerian velocities and their errors were estimated. Eulerian currents at 1000 m, as well as at 1500 m depth, formed an overall anticyclonic circulation pattern in the study region. The modal velocity of all buoys at the 1000 m level was 4 cm/s with an error of the mean of 1.8 cm/s. The modal velocity of all buoys at 1500 m was 3 cm/s with an error of the mean of 1.4 cm/s. The southwestward flows near Madeira Island and the further westward flow along the zonal band of 25-30° N at 1500 m depth corresponded well to the extension of the deep fraction of the Mediterranean Water salt tongue.
On-board error correction improves IR earth sensor accuracy
NASA Astrophysics Data System (ADS)
Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.
1989-10-01
Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of errors in a scanning infra-red earth sensor are analyzed in this paper. The systematic errors arising from seasonal variation of infra-red radiation, oblate shape of the earth, ambient temperature of sensor, changes in scan/spin rates have been analyzed. Simple relations are derived using least square curve fitting for on-board correction of these errors. Random errors arising out of noise from detector and amplifiers, instability of alignment and localized radiance anomalies are analyzed and possible correction methods are suggested. Sun and Moon interference on earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on-board. It is possible to obtain eight times improvement in sensing accuracy, which will be comparable with ground based post facto attitude refinement.
Simulation of rare events in quantum error correction
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Vargo, Alexander
2013-12-01
We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances where logical errors are extremely unlikely we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability PL for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d≤20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay PL˜exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
Perceptual Color Characterization of Cameras
Vazquez-Corral, Javier; Connah, David; Bertalmío, Marcelo
2014-01-01
Color camera characterization, mapping outputs from the camera sensors to an independent color space, such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures. PMID:25490586
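The least-squares baseline that the paper starts from (a 3 × 3 matrix mapping camera RGB to XYZ) can be sketched as below; the training patches are synthetic placeholders, and only the baseline step is shown, not the perceptual spherical-sampling search.

```python
# Minimal sketch of the least-squares 3x3 colour characterization baseline.
import numpy as np

rng = np.random.default_rng(3)
rgb = rng.uniform(size=(24, 3))                 # e.g. 24 colour-checker patches (synthetic)
M_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
xyz = rgb @ M_true.T + 0.01 * rng.normal(size=(24, 3))   # "measured" XYZ with noise

# Solve min || rgb @ M.T - xyz ||_F^2 for the 3x3 matrix M.
B, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M_ls = B.T
delta = xyz - rgb @ M_ls.T
print("RMS XYZ fitting error:", np.sqrt(np.mean(delta ** 2)))
```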
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
Eliciting Naturalistic Cortical Responses with a Sensory Prosthesis via Optimized Microstimulation
2016-08-12
error and correlation as metrics amenable to highly efficient convex optimization. This study concentrates on characterizing the neural responses to both...spiking signal. For LFP, distance measures such as the traditional mean-squared error and cross-correlation can be used, whereas distances between spike...with parameters that describe their associated temporal dynamics and relations to the observed output. A description of the model follows, but we
Theoretical Bounds of Direct Binary Search Halftoning.
Liao, Jan-Ray
2015-11-01
Direct binary search (DBS) produces the best-quality images among halftoning algorithms. The reason is that it minimizes the total squared perceived error instead of using heuristic approaches. The search for the optimal solution involves two operations: (1) toggle and (2) swap. Both operations try to find the binary states for each pixel that minimize the total squared perceived error. This error energy minimization leads to a conjecture that the absolute value of the filtered error after DBS converges is bounded by half of the peak value of the autocorrelation filter. However, a proof of the bound's existence has not yet been found. In this paper, we present a proof that the bound exists as conjectured, under the condition that at least one swap occurs after the toggle operation converges. The theoretical analysis also indicates that a swap with a pixel farther away from the center of the autocorrelation filter results in a tighter bound. Therefore, we propose a new DBS algorithm which considers toggle and swap separately, with the swap operations considered in order from the edge to the center of the filter. Experimental results show that the new algorithm is more efficient than the previous algorithm and can produce halftoned images of the same quality as the previous algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirchhoff, William H.
2012-09-15
The extended logistic function provides a physically reasonable description of interfaces such as depth profiles or line scans of surface topological or compositional features. It describes these interfaces with the minimum number of parameters, namely, position, width, and asymmetry. Logistic Function Profile Fit (LFPF) is a robust, least-squares fitting program in which the nonlinear extended logistic function is linearized by a Taylor series expansion (equivalent to a Newton-Raphson approach) with no apparent introduction of bias in the analysis. The program provides reliable confidence limits for the parameters when systematic errors are minimal and provides a display of the residuals from the fit for the detection of systematic errors. The program will aid researchers in applying ASTM E1636-10, 'Standard practice for analytically describing sputter-depth-profile and linescan-profile data by an extended logistic function,' and may also prove useful in applying ISO 18516: 2006, 'Surface chemical analysis-Auger electron spectroscopy and x-ray photoelectron spectroscopy-determination of lateral resolution.' Examples are given of LFPF fits to a secondary ion mass spectrometry depth profile, an Auger surface line scan, and synthetic data generated to exhibit known systematic errors for examining the significance of such errors to the extrapolation of partial profiles.
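A nonlinear least-squares fit of a logistic profile to depth-profile data can be sketched as follows; the simple symmetric form below omits the asymmetry parameter of the full extended logistic function, the data are synthetic, and this is not the LFPF program itself.

```python
# Minimal sketch of fitting a logistic interface profile by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, y_lo, y_hi, x0, w):
    """Sigmoidal step from y_lo to y_hi centred at x0 with width parameter w."""
    return y_lo + (y_hi - y_lo) / (1.0 + np.exp(-(x - x0) / w))

rng = np.random.default_rng(4)
depth = np.linspace(0, 100, 200)                       # sputter depth (arbitrary units)
signal = logistic(depth, 1.0, 0.1, 55.0, 4.0) + 0.02 * rng.normal(size=depth.size)

p0 = [signal.max(), signal.min(), depth.mean(), 5.0]   # rough initial guess
popt, pcov = curve_fit(logistic, depth, signal, p0=p0)
perr = np.sqrt(np.diag(pcov))                          # 1-sigma parameter uncertainties
print("position = %.2f +/- %.2f, width = %.2f +/- %.2f" % (popt[2], perr[2], popt[3], perr[3]))
```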
Two-body potential model based on cosine series expansion for ionic materials
Oda, Takuji; Weber, William J.; Tanigawa, Hisashi
2015-09-23
We examine a method for constructing a two-body potential model for ionic materials using a Fourier series basis. In this method, the coefficients of the cosine basis functions are uniquely determined by solving simultaneous linear equations that minimize the sum of weighted mean square errors in energy, force and stress, where first-principles calculation results are used as the reference data. As a validation test of the method, potential models for magnesium oxide are constructed. The mean square errors converge appropriately with respect to the truncation of the cosine series. This result mathematically indicates that the constructed potential model is sufficiently close to the one that would be achieved with the non-truncated Fourier series and demonstrates that this potential virtually provides the minimum error from the reference data within the two-body representation. The constructed potential models work appropriately in both molecular statics and dynamics simulations, especially if a two-step correction to revise errors expected in the reference data is performed, and the models clearly outperform two existing Buckingham potential models that were tested. Moreover, the good agreement with first-principles calculations over a broad range of energies and forces should enable the prediction of materials behavior away from equilibrium conditions, such as a system under irradiation.
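The linear-least-squares character of the coefficient determination can be illustrated on a one-dimensional pair potential; the reference energies, cutoff, and number of cosine terms below are assumptions, and the force and stress terms of the full weighted fit are omitted.

```python
# Minimal sketch: cosine-series coefficients from a linear least-squares fit to reference energies.
import numpy as np

r = np.linspace(1.5, 6.0, 200)                         # pair separations (arbitrary units, assumed)
E_ref = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)       # stand-in "reference" pair energies

n_terms = 12
r_cut = 6.0
# Cosine basis on [0, r_cut]: phi_k(r) = cos(k * pi * r / r_cut)
Phi = np.cos(np.outer(r, np.arange(n_terms)) * np.pi / r_cut)

# Coefficients minimizing the sum of squared energy errors.
c, *_ = np.linalg.lstsq(Phi, E_ref, rcond=None)
E_fit = Phi @ c
print("RMS energy error:", np.sqrt(np.mean((E_fit - E_ref) ** 2)))
```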
Post-processing through linear regression
NASA Astrophysics Data System (ADS)
van Schaeybroeck, B.; Vannitsem, S.
2011-03-01
Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-squares (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-squares method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz 63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR), which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.
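The simplest of the compared schemes, OLS post-processing of a forecast against observations, can be sketched as below; the forecast and observation series are synthetic placeholders.

```python
# Minimal sketch of ordinary least-squares post-processing of a forecast.
import numpy as np

rng = np.random.default_rng(5)
obs = rng.normal(size=500)
forecast = 0.8 * obs + 0.5 + 0.3 * rng.normal(size=500)   # biased, noisy forecast

# Regress observations on the forecast: obs ~ a + b * forecast.
A = np.column_stack([np.ones_like(forecast), forecast])
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
corrected = A @ coef

print("raw RMSE:      ", np.sqrt(np.mean((forecast - obs) ** 2)))
print("corrected RMSE:", np.sqrt(np.mean((corrected - obs) ** 2)))
```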
Modified fast frequency acquisition via adaptive least squares algorithm
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor)
1992-01-01
A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor)
1991-01-01
A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS) where the signal parameters are directly related to the position, velocity and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, selected to be a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator resulting in higher rms estimation errors but with a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency, provides relatively coarse estimates of the frequency and its derivatives. The second stage of the estimator, an extended Kalman filter (EKF), operates on the error signal available from the first stage refining the overall estimates of the phase along with a more refined estimate of frequency as well and in the process also reduces the number of cycle slips.
Jiménez-Carvelo, Ana M; González-Casado, Antonio; Cuadros-Rodríguez, Luis
2017-03-01
A new analytical method was developed for the quantification of olive oil and palm oil in blends with other edible vegetable oils (canola, safflower, corn, peanut, seeds, grapeseed, linseed, sesame and soybean) using normal-phase liquid chromatography and chemometric tools. The procedure for obtaining the chromatographic fingerprint of the methyl-transesterified fraction of each blend is described. The multivariate quantification methods used were Partial Least Squares Regression (PLS-R) and Support Vector Regression (SVR). The quantification results were evaluated by several parameters, such as the Root Mean Square Error of Validation (RMSEV), Mean Absolute Error of Validation (MAEV) and Median Absolute Error of Validation (MdAEV). It has to be highlighted that in the newly proposed analytical method the chromatographic analysis takes only eight minutes; the results obtained showed the potential of this method and allowed quantification of mixtures of olive oil and palm oil with other vegetable oils. Copyright © 2016 Elsevier B.V. All rights reserved.
Application of least-squares fitting of ellipse and hyperbola for two dimensional data
NASA Astrophysics Data System (ADS)
Lawiyuniarti, M. P.; Rahmadiantri, E.; Alamsyah, I. M.; Rachmaputri, G.
2018-01-01
Least-squares fitting of ellipses and hyperbolas to two-dimensional data has been applied to analyze the spatial continuity of coal deposits in a mining field, using the fitting method introduced by Fitzgibbon, Pilu, and Fisher in 1996. This method uses 4a₀a₂ - a₁² = 1 as the constraint function. In 1994, Gander, Golub and Strebel introduced ellipse and hyperbola fitting methods based on the singular value decomposition (SVD); this SVD approach can be generalized to three-dimensional fitting. In this research, we discuss these two fitting methods and apply them to four coal-content variables, namely ash, calorific value, sulfur and seam thickness, so as to produce ellipse or hyperbola fits. In addition, we compute the error of each method and conclude that, although the errors are not very different, the error of the method introduced by Fitzgibbon et al. is smaller than that of the fitting method introduced by Gander et al.
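The Fitzgibbon-Pilu-Fisher direct ellipse fit referenced above enforces the constraint 4a₀a₂ - a₁² = 1 through a generalized eigenvalue problem; a minimal sketch follows, with synthetic sample points and no special handling of ill-conditioned scatter matrices.

```python
# Minimal sketch of the Fitzgibbon-Pilu-Fisher direct least-squares ellipse fit.
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 2 * np.pi, 100)
x = 3.0 * np.cos(t) + 0.5 + 0.05 * rng.normal(size=t.size)
y = 1.5 * np.sin(t) - 1.0 + 0.05 * rng.normal(size=t.size)

# Design matrix for the conic a0*x^2 + a1*x*y + a2*y^2 + a3*x + a4*y + a5 = 0.
D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
S = D.T @ D                                  # scatter matrix
C = np.zeros((6, 6))                         # constraint matrix for 4*a0*a2 - a1^2 = 1
C[0, 2] = C[2, 0] = 2.0
C[1, 1] = -1.0

eigval, eigvec = np.linalg.eig(np.linalg.solve(S, C))
a = np.real(eigvec[:, np.argmax(eigval.real)])   # eigenvector of the unique positive eigenvalue
print("conic coefficients:", np.round(a / np.linalg.norm(a), 4))
```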
Utilization of electrical impedance imaging for estimation of in-vivo tissue resistivities
NASA Astrophysics Data System (ADS)
Eyuboglu, B. Murat; Pilkington, Theo C.
1993-08-01
In order to determine the in vivo resistivity of tissues in the thorax, the possibility of combining electrical impedance imaging (EII) techniques with (1) anatomical data extracted from high resolution images, (2) a priori knowledge of tissue resistivities, and (3) a priori noise information was assessed in this study. A Least Square Error Estimator (LSEE) and a statistically constrained Minimum Mean Square Error Estimator (MiMSEE) were implemented to estimate regional electrical resistivities from potential measurements made on the body surface. A two-dimensional boundary element model of the human thorax, which consists of four different conductivity regions (the skeletal muscle, the heart, the right lung, and the left lung), was adopted to simulate the measured EII torso potentials. The calculated potentials were then perturbed by simulated instrumentation noise. The signal information used to form the statistical constraint for the MiMSEE was obtained from a priori knowledge of the physiological range of tissue resistivities. The noise constraint was determined from a priori knowledge of errors due to linearization of the forward problem and to the instrumentation noise.
NASA Astrophysics Data System (ADS)
Lee, Eunji; Park, Sang-Young; Shin, Bumjoon; Cho, Sungki; Choi, Eun-Jung; Jo, Junghyun; Park, Jang-Hyun
2017-03-01
The optical wide-field patrol network (OWL-Net) is a Korean optical surveillance system that tracks and monitors domestic satellites. In this study, a batch least squares algorithm was developed for optical measurements and verified by Monte Carlo simulation and covariance analysis. Potential error sources of OWL-Net, such as noise, bias, and clock errors, were analyzed. There is a linear relation between the estimation accuracy and the noise level, and the accuracy significantly depends on the declination bias. In addition, the time-tagging error significantly degrades the observation accuracy, while the time-synchronization offset corresponds to the orbital motion. The Cartesian state vector and measurement bias were determined using the OWL-Net tracking data of the KOMPSAT-1 and Cryosat-2 satellites. The comparison with known orbital information based on two-line elements (TLE) and the consolidated prediction format (CPF) shows that the orbit determination accuracy is similar to that of TLE. Furthermore, the precision and accuracy of OWL-Net observation data were determined to be tens of arcsec and sub-degree level, respectively.
NASA Technical Reports Server (NTRS)
Kumar, Rajendra (Inventor)
1990-01-01
A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS) where the signal parameters are directly related to the position, velocity and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, selected to be a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator resulting in higher rms estimation errors but with a relatively small probability of the frequency estimation error exceeding one-half of the sampling frequency, provides relatively coarse estimates of the frequency and its derivatives. The second stage of the estimator, an extended Kalman filter (EKF), operates on the error signal available from the first stage refining the overall estimates of the phase along with a more refined estimate of frequency as well and in the process also reduces the number of cycle slips.
NASA Astrophysics Data System (ADS)
Guermoui, Mawloud; Gairaa, Kacem; Rabehi, Abdelaziz; Djafer, Djelloul; Benkaciali, Said
2018-06-01
Accurate estimation of solar radiation is a major concern in renewable energy applications. Over the past few years, many machine learning paradigms have been proposed to improve the estimation performance, mostly based on artificial neural networks, fuzzy logic, support vector machines and adaptive neuro-fuzzy inference systems. The aim of this work is the prediction of the daily global solar radiation received on a horizontal surface using the Gaussian process regression (GPR) methodology. A case study of the Ghardaïa region (Algeria) has been used to validate the methodology. Several input combinations have been tested; it was found that the GPR model based on sunshine duration, minimum air temperature and relative humidity gives the best results in terms of mean absolute bias error (MBE), root mean square error (RMSE), relative root mean square error (rRMSE), and correlation coefficient (r). The obtained values of these indicators are 0.67 MJ/m2, 1.15 MJ/m2, 5.2%, and 98.42%, respectively.
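A GPR model with the same three inputs can be sketched as below using scikit-learn; the data are synthetic and the kernel choice is an assumption, not the authors' configuration.

```python
# Minimal sketch of Gaussian process regression for daily solar radiation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
n = 365
X = np.column_stack([
    rng.uniform(0, 12, n),     # sunshine duration (h)
    rng.uniform(-5, 25, n),    # minimum air temperature (deg C)
    rng.uniform(10, 90, n),    # relative humidity (%)
])
H = 5.0 + 1.8 * X[:, 0] + 0.1 * X[:, 1] - 0.05 * X[:, 2] + rng.normal(0, 1.0, n)  # synthetic MJ/m^2

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=[5.0, 10.0, 30.0]) + WhiteKernel(),
                               normalize_y=True)
gpr.fit(X[:300], H[:300])
H_pred = gpr.predict(X[300:])

rmse = np.sqrt(np.mean((H_pred - H[300:]) ** 2))
r = np.corrcoef(H_pred, H[300:])[0, 1]
print(f"RMSE = {rmse:.2f} MJ/m^2, r = {r:.3f}")
```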
Modeling of surface dust concentrations using neural networks and kriging
NASA Astrophysics Data System (ADS)
Buevich, Alexander G.; Medvedev, Alexander N.; Sergeev, Alexander P.; Tarasov, Dmitry A.; Shichkin, Andrey V.; Sergeeva, Marina V.; Atanasova, T. B.
2016-12-01
Creating models that can accurately predict the distribution of pollutants from a limited set of input data is an important task in environmental studies. In this paper two neural approaches (the multilayer perceptron (MLP) and the generalized regression neural network (GRNN)) and two geostatistical approaches (kriging and cokriging) are used for modeling and forecasting of dust concentrations in snow cover. The study area is under the influence of dust emissions from a copper quarry and several industrial companies. A comparison of the two groups of approaches is conducted. Three indices are used as indicators of model accuracy: the mean absolute error (MAE), root mean square error (RMSE) and relative root mean square error (RRMSE). Models based on artificial neural networks (ANN) showed better accuracy. Considering all indices, the most precise model was the GRNN, which uses as input parameters the coordinates of the sampling points and the distance to the probable emission source. The results confirm that a trained ANN may be a more suitable tool for modeling dust concentrations in snow cover.
Xia, Youshen; Kamel, Mohamed S
2007-06-01
Identification of a general nonlinear noisy system viewed as an estimation of a predictor function is studied in this article. A measurement fusion method for the predictor function estimate is proposed. In the proposed scheme, observed data are first fused by using an optimal fusion technique, and then the optimal fused data are incorporated in a nonlinear function estimator based on a robust least squares support vector machine (LS-SVM). A cooperative learning algorithm is proposed to implement the proposed measurement fusion method. Compared with related identification methods, the proposed method can minimize both the approximation error and the noise error. The performance analysis shows that the proposed optimal measurement fusion function estimate has a smaller mean square error than the LS-SVM function estimate. Moreover, the proposed cooperative learning algorithm can converge globally to the optimal measurement fusion function estimate. Finally, the proposed measurement fusion method is applied to ARMA signal and spatial temporal signal modeling. Experimental results show that the proposed measurement fusion method can provide a more accurate model.
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
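The quantization rule described above (grow the network only when a new input is farther than a quantization radius from every existing centre, otherwise update the nearest centre's coefficient) can be sketched as follows; the step size, kernel width, quantization radius, and data are illustrative assumptions.

```python
# Minimal sketch of the quantized kernel least mean square (QKLMS) idea.
import numpy as np

def gauss_kernel(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(8)
N = 2000
x = rng.uniform(-3, 3, size=(N, 1))
d = np.sinc(x[:, 0]) + 0.05 * rng.normal(size=N)   # target: noisy sinc function

eta, eps_q = 0.5, 0.3        # step size and quantization radius (assumed)
centers, alphas = [x[0]], [eta * d[0]]
for n in range(1, N):
    k = np.array([gauss_kernel(c, x[n]) for c in centers])
    e = d[n] - k @ np.array(alphas)                     # prediction error
    dists = [np.linalg.norm(c - x[n]) for c in centers]
    j = int(np.argmin(dists))
    if dists[j] <= eps_q:
        alphas[j] += eta * e                            # "redundant" input: update closest centre
    else:
        centers.append(x[n]); alphas.append(eta * e)    # novel input: grow the network

print("final network size:", len(centers), "of", N, "samples")
```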
Genetic Algorithm for Initial Orbit Determination with Too Short Arc (Continued)
NASA Astrophysics Data System (ADS)
Li, X. R.; Wang, X.
2016-03-01
When the genetic algorithm is used to solve the problem of too-short-arc (TSA) determination, the classical methods for outlier editing are no longer applicable because the computing process of the genetic algorithm differs from that of the classical method. In the genetic algorithm, robust estimation is achieved by using different loss functions in the fitness function, which solves the outlier problem of TSAs. Compared with the classical method, the application of loss functions in the genetic algorithm is greatly simplified. A comparison of the results for different loss functions shows that the least median squares and least trimmed squares methods can greatly improve the robustness of TSA determination and have a high breakdown point.
The interaction of spacecraft high voltage power systems with the space plasma environment
NASA Technical Reports Server (NTRS)
Domitz, S.; Grier, N. T.
1974-01-01
Research work has shown that the interaction of a spacecraft and its high voltage power systems with the space plasma environment can result in harmful power loss and damage to insulators and metal surfaces. Insulator and solar panel tests were performed and flight tests are planned. High voltage power processing equipment was shown to be affected by power loss and by transients due to plasma interactions. Power loss was determined to be roughly proportional to the square of the voltage and to increase approximately as the square root of the area. Kapton, Teflon, and glass were found to be satisfactory insulating materials, and it is concluded that designs for large space power stations should consider the effect of large pinhole currents.
Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing
2014-10-01
Four sets of orthogonal polynomials for reconstructing a wavefront over a square aperture based on the modal method are currently available, namely, the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials and Numerical polynomials. They are all orthogonal over the full unit square domain. 2D Chebyshev polynomials are defined by the product of Chebyshev polynomials in the x and y variables, as are 2D Legendre polynomials. Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, where the integration region is the full unit square circumscribed outside the unit circle. Numerical polynomials are obtained by numerical calculation. The present study compares these four orthogonal polynomial sets by theoretical analysis and numerical experiments in terms of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomials are superior to the other three because of their high accuracy and robustness, even in the case of a wavefront with incomplete data.
Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng
2018-05-31
For a nonlinear system, the cubature Kalman filter (CKF) and its square-root version are useful methods to solve the state estimation problems, and both can obtain good performance in Gaussian noises. However, their performances often degrade significantly in the face of non-Gaussian noises, particularly when the measurements are contaminated by some heavy-tailed impulsive noises. By utilizing the maximum correntropy criterion (MCC) to improve the robust performance instead of traditional minimum mean square error (MMSE) criterion, a new square-root nonlinear filter is proposed in this study, named as the maximum correntropy square-root cubature Kalman filter (MCSCKF). The new filter not only retains the advantage of square-root cubature Kalman filter (SCKF), but also exhibits robust performance against heavy-tailed non-Gaussian noises. A judgment condition that avoids numerical problem is also given. The results of two illustrative examples, especially the SINS/GPS integrated systems, demonstrate the desirable performance of the proposed filter. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Regression-assisted deconvolution.
McIntyre, Julie; Stefanski, Leonard A
2011-06-30
We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.
Zeb, Salman; Yousaf, Muhammad
2017-01-01
In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.
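The problem class addressed above, min ||Ax - b|| subject to Cx = d, can be solved with a QR-based null-space reduction as sketched below; this illustrates the constrained least-squares problem itself rather than the authors' specific updating algorithm, and the matrices are random placeholders.

```python
# Minimal sketch of equality-constrained linear least squares via a QR null-space method.
import numpy as np

rng = np.random.default_rng(9)
m, n, p = 30, 8, 3
A, b = rng.normal(size=(m, n)), rng.normal(size=m)
C, d = rng.normal(size=(p, n)), rng.normal(size=p)

Q, R = np.linalg.qr(C.T, mode="complete")      # C^T = Q R, so C = R^T Q^T
z1 = np.linalg.solve(R[:p, :p].T, d)           # constraint fixes the first p components of z = Q^T x
AQ = A @ Q
z2, *_ = np.linalg.lstsq(AQ[:, p:], b - AQ[:, :p] @ z1, rcond=None)   # free components minimize ||Ax-b||
x = Q @ np.concatenate([z1, z2])

print("constraint residual:", np.linalg.norm(C @ x - d))   # ~ machine precision
print("objective:          ", np.linalg.norm(A @ x - b))
```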
Explicit least squares system parameter identification for exact differential input/output models
NASA Technical Reports Server (NTRS)
Pearson, A. E.
1993-01-01
The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
Evidence of Selection against Complex Mitotic-Origin Aneuploidy during Preimplantation Development
McCoy, Rajiv C.; Demko, Zachary P.; Ryan, Allison; Banjevic, Milena; Hill, Matthew; Sigurjonsson, Styrmir; Rabinowitz, Matthew; Petrov, Dmitri A.
2015-01-01
Whole-chromosome imbalances affect over half of early human embryos and are the leading cause of pregnancy loss. While these errors frequently arise in oocyte meiosis, many such whole-chromosome abnormalities affecting cleavage-stage embryos are the result of chromosome missegregation occurring during the initial mitotic cell divisions. The first wave of zygotic genome activation at the 4–8 cell stage results in the arrest of a large proportion of embryos, the vast majority of which contain whole-chromosome abnormalities. Thus, the full spectrum of meiotic and mitotic errors can only be detected by sampling after the initial cell divisions, but prior to this selective filter. Here, we apply 24-chromosome preimplantation genetic screening (PGS) to 28,052 single-cell day-3 blastomere biopsies and 18,387 multi-cell day-5 trophectoderm biopsies from 6,366 in vitro fertilization (IVF) cycles. We precisely characterize the rates and patterns of whole-chromosome abnormalities at each developmental stage and distinguish errors of meiotic and mitotic origin without embryo disaggregation, based on informative chromosomal signatures. We show that mitotic errors frequently involve multiple chromosome losses that are not biased toward maternal or paternal homologs. This outcome is characteristic of spindle abnormalities and chaotic cell division detected in previous studies. In contrast to meiotic errors, our data also show that mitotic errors are not significantly associated with maternal age. PGS patients referred due to previous IVF failure had elevated rates of mitotic error, while patients referred due to recurrent pregnancy loss had elevated rates of meiotic error, controlling for maternal age. These results support the conclusion that mitotic error is the predominant mechanism contributing to pregnancy losses occurring prior to blastocyst formation. This high-resolution view of the full spectrum of whole-chromosome abnormalities affecting early embryos provides insight into the cytogenetic mechanisms underlying their formation and the consequences for human fertility. PMID:26491874
Metameric MIMO-OOK transmission scheme using multiple RGB LEDs.
Bui, Thai-Chien; Cusani, Roberto; Scarano, Gaetano; Biagi, Mauro
2018-05-28
In this work, we propose a novel visible light communication (VLC) scheme that uses multiple red-green-blue (RGB) LED triplets, each with a different emission spectrum, to mitigate the interference between colors that arises under spatial multiplexing. On-off keying modulation is considered, and its effect on light emission in terms of flickering, dimming and color rendering is discussed to demonstrate how metameric properties have been taken into account. At the receiver, multiple photodiodes are employed, each with a color filter tuned to one transmitting light-emitting diode (LED). Three detection mechanisms are then proposed: color zero forcing, minimum mean square error estimation, and minimum mean square error equalization. The system performance of the proposed scheme is evaluated both with computer simulations and with tests on an Arduino board implementation.
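For readers unfamiliar with the linear detectors named above, the sketch below contrasts zero forcing with the linear minimum mean square error equalizer on a generic small MIMO model y = H x + n with on-off keying symbols. The channel matrix, noise level, and decision threshold are invented, and the example ignores the optical and metameric details of the actual system.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_rx, n_sym, sigma2 = 3, 3, 10000, 0.05

H = np.eye(n_tx) + 0.3 * rng.random((n_rx, n_tx))   # crosstalk between color channels (invented)
x = rng.integers(0, 2, size=(n_tx, n_sym)).astype(float)   # OOK symbols in {0, 1}
y = H @ x + np.sqrt(sigma2) * rng.standard_normal((n_rx, n_sym))

W_zf = np.linalg.pinv(H)                                        # zero forcing
W_mmse = np.linalg.solve(H.T @ H + sigma2 * np.eye(n_tx), H.T)  # linear MMSE (unit-power symbols assumed)

for name, W in [("ZF", W_zf), ("MMSE", W_mmse)]:
    x_hat = (W @ y) > 0.5                 # threshold detection for OOK
    ber = np.mean(x_hat != x.astype(bool))
    print(name, "bit error rate:", ber)
```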
A comparative study of optimum and suboptimum direct-detection laser ranging receivers
NASA Technical Reports Server (NTRS)
Abshire, J. B.
1978-01-01
A summary of previously proposed receiver strategies for direct-detection laser ranging receivers is presented. Computer simulations are used to compare performance of candidate implementation strategies in the 1- to 100-photoelectron region. Under the condition of no background radiation, the maximum-likelihood and minimum mean-square error estimators were found to give the same performance for both bell-shaped and rectangular optical-pulse shapes. For signal energies greater than 100 photoelectrons, the root-mean-square range error is shown to decrease as Q to the -1/2 power for bell-shaped pulses and Q to the -1 power for rectangular pulses, where Q represents the average pulse energy. Of several receiver implementations presented, the matched-filter peak detector was found to be preferable. A similar configuration, using a constant-fraction discriminator, exhibited a signal-level dependent time bias.
NASA Astrophysics Data System (ADS)
Bunai, Tasya; Rokhmatuloh; Wibowo, Adi
2018-05-01
In this paper, two methods of retrieving Land Surface Temperature (LST) from the thermal infrared data supplied by bands 10 and 11 of the Thermal Infrared Sensor (TIRS) onboard Landsat 8 are compared. The first is the mono window algorithm developed by Qin et al. and the second is the split window algorithm by Rozenstein et al. The purpose of this study is to map the spatial distribution of land surface temperature and to determine the more accurate retrieval algorithm by means of the root mean square error (RMSE). Finally, we compare the spatial distributions of land surface temperature obtained with both algorithms; the split window algorithm is the more accurate, with an RMSE of 7.69 °C.
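The accuracy comparison reduces to computing the root mean square error of each retrieval against reference temperatures and preferring the algorithm with the smaller value; a minimal sketch with placeholder numbers follows.

```python
import numpy as np

def rmse(estimate, reference):
    # Root mean square error between a retrieval and reference values.
    return np.sqrt(np.mean((np.asarray(estimate) - np.asarray(reference)) ** 2))

reference_lst = np.array([30.1, 31.4, 29.8, 33.0])     # reference LST in deg C (placeholder values)
mono_window_lst = np.array([38.0, 40.2, 36.9, 41.5])
split_window_lst = np.array([36.5, 38.8, 36.0, 40.3])

print("mono window RMSE :", rmse(mono_window_lst, reference_lst))
print("split window RMSE:", rmse(split_window_lst, reference_lst))
```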
NASA Astrophysics Data System (ADS)
Zimina, S. V.
2015-06-01
We present the results of statistical analysis of an adaptive antenna array tuned using the least-mean-square error algorithm with quadratic constraint on the useful-signal amplification with allowance for the weight-coefficient fluctuations. Using the perturbation theory, the expressions for the correlation function and power of the output signal of the adaptive antenna array, as well as the formula for the weight-vector covariance matrix are obtained in the first approximation. The fluctuations are shown to lead to the signal distortions at the antenna-array output. The weight-coefficient fluctuations result in the appearance of additional terms in the statistical characteristics of the antenna array. It is also shown that the weight-vector fluctuations are isotropic, i.e., identical in all directions of the weight-coefficient space.
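A much-simplified numerical stand-in for the class of algorithms analyzed above is sketched below: LMS-style weight updates that reduce the array output power, with the gain toward the useful signal renormalized after every step as a crude surrogate for the quadratic constraint. The array geometry, signal scenario, and step size are invented, and the sketch does not reproduce the paper's perturbation-theory analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
n_el, n_snap, mu = 8, 5000, 5e-4

def steering(theta_deg, n):
    # Uniform linear array with half-wavelength spacing.
    k = np.arange(n)
    return np.exp(1j * np.pi * k * np.sin(np.deg2rad(theta_deg)))

a_sig, a_int = steering(0.0, n_el), steering(40.0, n_el)
w = a_sig.copy() / n_el                      # start from the conventional beamformer

for _ in range(n_snap):
    s = rng.standard_normal() + 0j           # useful-signal sample
    j = 3.0 * (rng.standard_normal() + 1j * rng.standard_normal())   # strong interferer
    noise = 0.1 * (rng.standard_normal(n_el) + 1j * rng.standard_normal(n_el))
    x = s * a_sig + j * a_int + noise        # array snapshot
    y = np.vdot(w, x)                        # array output  w^H x
    w = w - mu * np.conj(y) * x              # LMS step toward smaller output power
    w = w / np.abs(np.vdot(w, a_sig))        # re-impose unit gain on the useful signal

print("gain toward signal    :", np.abs(np.vdot(w, a_sig)))
print("gain toward interferer:", np.abs(np.vdot(w, a_int)))
```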
Hazard Function Estimation with Cause-of-Death Data Missing at Random.
Wang, Qihua; Dinse, Gregg E; Liu, Chunling
2012-04-01
Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.
Weissman-Miller, Deborah
2013-11-02
Point estimation is particularly important in predicting weight loss in individuals or small groups. In this analysis, a new health response function is based on a model of human response over time to estimate long-term health outcomes from a change point in short-term linear regression. This estimation capability is important for small groups and single-subject designs in pilot studies for clinical trials and in medical and therapeutic clinical practice. The estimations are based on a change point given by parameters derived from short-term participant data in ordinary least squares (OLS) regression. The development of the change point in the initial OLS data and the point estimations are given in a new semiparametric ratio estimator (SPRE) model. The new response function is taken as a ratio of two-parameter Weibull distributions times a prior outcome value, which steps estimated outcomes forward in time; the shape and scale parameters are estimated at the change point. The Weibull distributions used in this ratio are derived from a Kelvin model in mechanics, taken here to represent human beings. A distinct feature of the SPRE model in this article is that initial treatment response for a small group or a single subject is reflected in long-term response to treatment. The model is applied to weight loss in obesity in a secondary analysis of data from a classic weight loss study, selected because of the dramatic increase in obesity in the United States over the past 20 years. A very small relative error between estimated and test data is shown for obesity treatment with the weight loss medication phentermine or with placebo. An application of SPRE in clinical medicine or occupational therapy is to estimate long-term weight loss for a single subject or a small group near the beginning of treatment.
Soni, Kirti; Parmar, Kulwinder Singh; Kapoor, Sangeeta; Kumar, Nishant
2016-05-15
Many studies of Aerosol Optical Depth (AOD) have used data derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), but the accuracy of such satellite data relative to ground data from the AErosol RObotic NETwork (AERONET) has always been questionable. To address this, a comparative study of comprehensive ground-based and satellite data for the period 2001-2012 is modelled. A time series model is used for prediction of AOD, and the statistical variability is compared to assess the performance of the model in both cases. Root mean square error (RMSE), mean absolute percentage error (MAPE), stationary R-squared, R-squared, maximum absolute percentage error, normalized Bayesian information criterion (NBIC) and the Ljung-Box test are used to check the applicability and validity of the developed ARIMA models, revealing significant precision in the model performance. It was found that it is possible to predict AOD by statistical modelling using time series of past MODIS and AERONET data as input. Moreover, the results show that MODIS values can be obtained from AERONET data by adding 0.251627 ± 0.133589, and vice versa by subtracting. From the forecast of AOD for the following years (2013-2017) obtained with the developed ARIMA model, it is concluded that the forecasted ground AOD has an increasing trend. Copyright © 2016 Elsevier B.V. All rights reserved.
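A hedged sketch of this kind of workflow, using statsmodels with a synthetic monthly AOD series and a placeholder (p, d, q) order rather than the study's fitted model, is given below.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
months = pd.date_range("2001-01", periods=144, freq="MS")
aod = pd.Series(0.4 + 0.1 * np.sin(2 * np.pi * np.arange(144) / 12)
                + 0.05 * rng.standard_normal(144), index=months)   # synthetic monthly AOD

train, test = aod[:-12], aod[-12:]
fit = ARIMA(train, order=(1, 0, 1)).fit()      # placeholder order, not the paper's model
pred = fit.forecast(steps=12)

rmse = np.sqrt(np.mean((pred.values - test.values) ** 2))
mape = 100 * np.mean(np.abs((pred.values - test.values) / test.values))
print(f"RMSE={rmse:.4f}  MAPE={mape:.2f}%")
print(fit.forecast(steps=48).head())           # forecast beyond the record, as in the study
```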
The Estimation of Gestational Age at Birth in Database Studies.
Eberg, Maria; Platt, Robert W; Filion, Kristian B
2017-11-01
Studies on the safety of prenatal medication use require valid estimation of the pregnancy duration. However, gestational age is often incompletely recorded in administrative and clinical databases. Our objective was to compare different approaches to estimating the pregnancy duration. Using data from the Clinical Practice Research Datalink and Hospital Episode Statistics, we examined the following four approaches to estimating missing gestational age: (1) generalized estimating equations for longitudinal data; (2) multiple imputation; (3) estimation based on fetal birth weight and sex; and (4) conventional approaches that assigned a fixed value (39 weeks for all or 39 weeks for full term and 35 weeks for preterm). The gestational age recorded in Hospital Episode Statistics was considered the gold standard. We conducted a simulation study comparing the described approaches in terms of estimated bias and mean square error. A total of 25,929 infants from 22,774 mothers were included in our "gold standard" cohort. The smallest average absolute bias was observed for the generalized estimating equation that included birth weight, while the largest absolute bias occurred when assigning 39-week gestation to all those with missing values. The smallest mean square errors were detected with generalized estimating equations while multiple imputation had the highest mean square errors. The use of generalized estimating equations resulted in the most accurate estimation of missing gestational age when birth weight information was available. In the absence of birth weight, assignment of fixed gestational age based on term/preterm status may be the optimal approach.
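A rough sketch of approach (1), with hypothetical column names and simulated data rather than the study's variables, might look like the following: fit a generalized estimating equation for gestational age with clustering on the mother, then predict the missing values.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500
df = pd.DataFrame({
    "mother_id": rng.integers(0, 300, n),                 # hypothetical cluster identifier
    "birth_weight_kg": rng.normal(3.3, 0.5, n),
    "male": rng.integers(0, 2, n),
})
df["gest_weeks"] = 39 + 2.0 * (df["birth_weight_kg"] - 3.3) + rng.normal(0, 1.0, n)
df.loc[rng.random(n) < 0.2, "gest_weeks"] = np.nan        # simulate missing gestational age

observed = df.dropna(subset=["gest_weeks"])
model = smf.gee("gest_weeks ~ birth_weight_kg + male", groups="mother_id",
                data=observed, cov_struct=sm.cov_struct.Exchangeable()).fit()

missing = df[df["gest_weeks"].isna()]
df.loc[df["gest_weeks"].isna(), "gest_weeks"] = model.predict(missing)
print(model.summary().tables[1])
```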
Schleier, Jerome J.; Peterson, Robert K.D.; Irvine, Kathryn M.; Marshall, Lucy M.; Weaver, David K.; Preftakes, Collin J.
2012-01-01
One of the more effective ways of managing high densities of adult mosquitoes that vector human and animal pathogens is ultra-low-volume (ULV) aerosol application of insecticides. The U.S. Environmental Protection Agency performs its human and ecological risk assessments using models and exposure assumptions that have not been validated for ULV insecticide applications. Currently, there is no validated model that can accurately predict the deposition of insecticides applied using ULV technology for adult mosquito management. In addition, little is known about the deposition and drift of the small droplets encountered during ULV applications. The objective of this study was to perform field studies to measure environmental concentrations of insecticides and to develop a validated model to predict the deposition of ULV insecticides. The final regression model was selected by minimizing the Bayesian Information Criterion, and its prediction performance was evaluated using k-fold cross validation. The coefficients for formulation density and for the density-by-CMD interaction were the largest in the model. The results showed that as the density of the formulation decreases, deposition increases. The interaction of density and CMD showed that higher density formulations and larger droplets resulted in greater deposition. These results are supported by the aerosol physics literature. A k-fold cross validation demonstrated that the mean square error of the selected regression model is not biased, and the mean square error and mean square prediction error indicated good predictive ability.
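The model-validation step described above can be expressed compactly with scikit-learn; the sketch below uses invented placeholder features (including a density-by-CMD interaction term) and simply reports the k-fold mean square prediction error.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(5)
n = 200
density = rng.uniform(0.8, 1.2, n)            # formulation density (placeholder units)
cmd = rng.uniform(5, 30, n)                   # droplet size metric (placeholder)
wind = rng.uniform(0.5, 5.0, n)
deposition = (2.0 - 1.5 * density + 0.03 * cmd + 0.04 * density * cmd
              - 0.1 * wind + rng.normal(0, 0.2, n))   # synthetic response

X = np.column_stack([density, cmd, wind, density * cmd])   # includes the interaction term
mse_scores = -cross_val_score(LinearRegression(), X, deposition,
                              scoring="neg_mean_squared_error",
                              cv=KFold(n_splits=10, shuffle=True, random_state=0))
print("k-fold mean square prediction error:", mse_scores.mean())
```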
Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams
NASA Astrophysics Data System (ADS)
Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng
2006-12-01
This paper deals with the optimal packet loss protection issue for streaming fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely cancels the dependency within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boehnke, E McKenzie; DeMarco, J; Steers, J
2016-06-15
Purpose: To examine both the IQM's sensitivity and false positive rate for varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An unmodified SBRT liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1 mm to ±5 mm). These unmodified and modified plans were each measured multiple times by the IQM (a large-area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field's delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p < 0.005) ability to predict MLC errors. Using the area under the curve, we show that the IQM's ability to detect errors increases with increasing MLC error (Spearman's Rho = 0.8056, p < 0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.
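The ROC-based part of such an analysis can be illustrated with a small, entirely synthetic example: label each measurement as coming from an unmodified or an error-containing plan, use the relative IQM deviation as the score, and summarize detectability with the area under the ROC curve. The numbers below are placeholders, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(6)
n = 200
has_error = rng.integers(0, 2, n)                      # 1 = plan with perturbed MLC positions
# Pretend the percentage deviation of the IQM signal from baseline grows with the introduced error.
deviation = np.abs(rng.normal(0, 1.0, n) + 3.0 * has_error)

auc = roc_auc_score(has_error, deviation)
fpr, tpr, thresholds = roc_curve(has_error, deviation)
lr = LogisticRegression().fit(deviation.reshape(-1, 1), has_error)   # threshold-free detectability model

print("AUC:", auc)
print("threshold at ~95% specificity:", thresholds[np.searchsorted(fpr, 0.05)])
```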
NASA Astrophysics Data System (ADS)
Wu, Kang-Hung; Su, Ching-Lun; Chu, Yen-Hsyang
2015-03-01
In this article, we use the International Reference Ionosphere (IRI) model to simulate the temporal and spatial distributions of global E region electron densities retrieved by the FORMOSAT-3/COSMIC satellites by means of the GPS radio occultation (RO) technique. Despite regional discrepancies in the magnitudes of the E region electron density, the IRI model simulations can, on the whole, describe the COSMIC measurements in quality and quantity. On the basis of the global ionosonde network and the IRI model, the retrieval errors of the global COSMIC-measured E region peak electron density (NmE) from July 2006 to July 2011 are examined and simulated. The COSMIC measurements and the IRI model simulations both reveal that the magnitudes of the percentage error (PE) and root-mean-square error (RMSE) of the relative RO retrieval errors of the NmE values depend on local time (LT) and geomagnetic latitude, with minima in the early morning and at high latitudes and maxima in the afternoon and at middle latitudes. In addition, the seasonal variation of the PE and RMSE values seems to be latitude dependent. After removing the IRI model-simulated GPS RO retrieval errors from the original COSMIC measurements, the average values of the annual and monthly mean percentage errors of the RO retrieval errors of the COSMIC-measured E region electron density are substantially reduced, by factors of about 2.95 and 3.35, respectively, and the corresponding root-mean-square errors show average decreases of 15.6% and 15.4%, respectively. It is found that, with this process, the largest reduction in the PE and RMSE of the COSMIC-measured NmE occurs at the equatorial anomaly latitudes 10°N-30°N in the afternoon from 14 to 18 LT, by factors of 25 and 2, respectively. Statistics show that the residual errors remaining in the corrected COSMIC-measured NmE vary in a range of -20% to 38%, which is comparable to or larger than the percentage errors of the IRI-predicted NmE, which fluctuate in a range of -6.5% to 20%.
Filter Tuning Using the Chi-Squared Statistic
NASA Technical Reports Server (NTRS)
Lilly-Salkowski, Tyler B.
2017-01-01
This paper examines the use of the Chi-square statistic as a means of evaluating filter performance. The goal of the process is to characterize the filter performance in the metric of covariance realism. The Chi-squared statistic is the value calculated to determine the realism of a covariance based on the prediction accuracy and the covariance values at a given point in time. Once calculated, it is the distribution of this statistic that provides insight on the accuracy of the covariance. The process of tuning an Extended Kalman Filter (EKF) for Aqua and Aura support is described, including examination of the measurement errors of available observation types, and methods of dealing with potentially volatile atmospheric drag modeling. Predictive accuracy and the distribution of the Chi-squared statistic, calculated from EKF solutions, are assessed.
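A hedged sketch of the underlying check follows: for each epoch, form the statistic eps^T P^{-1} eps from the prediction error eps and the filter-reported covariance P, and compare its empirical distribution with a chi-squared distribution having dim(eps) degrees of freedom. The covariances and errors below are simulated, not EKF output for Aqua or Aura.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
dim, n_epochs = 3, 1000

chi2_vals = []
for _ in range(n_epochs):
    P = np.diag(rng.uniform(0.5, 2.0, dim))              # filter-reported covariance (illustrative)
    eps = rng.multivariate_normal(np.zeros(dim), P)      # prediction error drawn consistently with P
    chi2_vals.append(eps @ np.linalg.solve(P, eps))      # chi-squared statistic for this epoch

# Compare the empirical distribution with the theoretical chi-squared(dim) distribution.
ks = stats.kstest(chi2_vals, "chi2", args=(dim,))
print("mean (should be about", dim, "):", np.mean(chi2_vals))
print("Kolmogorov-Smirnov p-value:", ks.pvalue)
```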
Optimal wavefront control for adaptive segmented mirrors
NASA Technical Reports Server (NTRS)
Downie, John D.; Goodman, Joseph W.
1989-01-01
A ground-based astronomical telescope with a segmented primary mirror will suffer image-degrading wavefront aberrations from at least two sources: (1) atmospheric turbulence and (2) segment misalignment or figure errors of the mirror itself. This paper describes the derivation of a mirror control feedback matrix that assumes the presence of both types of aberration and is optimum in the sense that it minimizes the mean-squared residual wavefront error. Assumptions of the statistical nature of the wavefront measurement errors, atmospheric phase aberrations, and segment misalignment errors are made in the process of derivation. Examples of the degree of correlation are presented for three different types of wavefront measurement data and compared to results of simple corrections.
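In the same spirit, a generic minimum-mean-square-error reconstructor for a linear measurement model s = G a + n can be written directly from the prior and noise covariances; the sketch below uses invented matrices and checks by Monte Carlo that the resulting control matrix leaves a smaller mean-squared residual than a plain pseudo-inverse. It is a textbook construction, not the paper's derivation for segmented mirrors.

```python
import numpy as np

rng = np.random.default_rng(8)
n_modes, n_meas = 6, 12
G = rng.standard_normal((n_meas, n_modes))          # aberration-to-measurement matrix (invented)
C_a = np.diag(rng.uniform(0.5, 2.0, n_modes))       # prior covariance of the aberrations
C_n = 0.1 * np.eye(n_meas)                          # measurement-noise covariance

M = C_a @ G.T @ np.linalg.inv(G @ C_a @ G.T + C_n)  # MMSE estimator / control matrix
G_pinv = np.linalg.pinv(G)                          # noise-blind alternative for comparison

res_mmse, res_pinv = [], []
for _ in range(2000):
    a = rng.multivariate_normal(np.zeros(n_modes), C_a)         # true aberration state
    s = G @ a + rng.multivariate_normal(np.zeros(n_meas), C_n)  # noisy wavefront measurement
    res_mmse.append(np.sum((a - M @ s) ** 2))
    res_pinv.append(np.sum((a - G_pinv @ s) ** 2))

print("mean squared residual, MMSE matrix   :", np.mean(res_mmse))
print("mean squared residual, pseudo-inverse:", np.mean(res_pinv))
```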
Estimating standard errors in feature network models.
Frank, Laurence E; Heiser, Willem J
2007-05-01
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
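A small sketch of the constrained-regression viewpoint follows: fit the parameters with non-negative least squares and obtain empirical standard errors by Monte Carlo resampling of the residuals. The feature matrix, true parameters, and noise level are placeholders, and the resampling scheme is a generic choice rather than the procedure used in the paper.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(9)
n_pairs, n_features = 60, 4
X = rng.integers(0, 2, size=(n_pairs, n_features)).astype(float)   # feature incidence matrix (invented)
beta_true = np.array([1.0, 0.5, 2.0, 0.0])
y = X @ beta_true + rng.normal(0, 0.3, n_pairs)                    # observed proximities/distances

beta_hat, _ = nnls(X, y)                                           # non-negative least squares fit
resid = y - X @ beta_hat

# Monte Carlo (residual resampling) standard errors for the constrained estimates.
boot = np.array([nnls(X, X @ beta_hat + rng.choice(resid, n_pairs, replace=True))[0]
                 for _ in range(1000)])
print("estimates        :", np.round(beta_hat, 3))
print("empirical std err:", np.round(boot.std(axis=0), 3))
```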
A hybrid frame concealment algorithm for H.264/AVC.
Yan, Bo; Gharavi, Hamid
2010-01-01
In packet-based video transmission, packet loss due to channel errors may result in the loss of a whole video frame. Recently, many error concealment algorithms have been proposed to combat channel errors; however, most of the existing algorithms can only deal with the loss of macroblocks and are not able to conceal a whole missing frame. In order to resolve this problem, in this paper we propose a new hybrid motion vector extrapolation (HMVE) algorithm to recover the whole missing frame; it is able to provide more accurate estimation of the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms other existing frame recovery methods.
Wang, Gang; Teng, Chaolin; Li, Kuo; Zhang, Zhonglin; Yan, Xiangguo
2016-09-01
The recorded electroencephalography (EEG) signals are usually contaminated by electrooculography (EOG) artifacts. In this paper, using independent component analysis (ICA) and multivariate empirical mode decomposition (MEMD), an ICA-based MEMD method is proposed to remove EOG artifacts (EOAs) from multichannel EEG signals. First, the EEG signals were decomposed by MEMD into multiple multivariate intrinsic mode functions (MIMFs). The EOG-related components were then extracted by reconstructing the MIMFs corresponding to EOAs. After performing ICA on the EOG-related signals, the EOG-linked independent components were identified and rejected. Finally, the clean EEG signals were reconstructed by applying the inverse transforms of ICA and MEMD. The results on simulated and real data suggest that the proposed method can successfully eliminate EOAs from EEG signals and preserve useful EEG information with little loss. Compared with other existing techniques, the proposed method achieved a clear improvement in terms of the increase in signal-to-noise ratio and the decrease in mean square error after removing EOAs.
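A reduced sketch of the ICA stage alone (the MEMD step is omitted and all signals are synthetic) is given below: decompose the multichannel record with FastICA, zero out the component most correlated with a blink-like EOG reference, and reconstruct the cleaned channels.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(10)
t = np.arange(0, 10, 1 / 250.0)                       # 250 Hz, 10 s
eeg_sources = np.vstack([np.sin(2 * np.pi * 10 * t),  # alpha-like activity
                         np.sin(2 * np.pi * 4 * t)])  # theta-like activity
eog = (np.abs(np.sin(2 * np.pi * 0.3 * t)) > 0.95).astype(float)   # blink-like bursts
sources = np.vstack([eeg_sources, eog])

mixing = rng.uniform(0.5, 1.5, size=(6, 3))           # 6 "scalp" channels (invented mixing)
eeg = (mixing @ sources).T + 0.05 * rng.standard_normal((len(t), 6))

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(eeg)                   # shape (samples, components)

# Identify and remove the component that tracks the EOG reference.
corr = [abs(np.corrcoef(components[:, k], eog)[0, 1]) for k in range(3)]
components[:, int(np.argmax(corr))] = 0.0
clean = ica.inverse_transform(components)

mse = np.mean((clean - (mixing[:, :2] @ eeg_sources).T) ** 2)
print("mean square error after artifact removal:", mse)
```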