Two Enhancements of the Logarithmic Least-Squares Method for Analyzing Subjective Comparisons
1989-03-25
error term. For this model, the total sum of squares (SSTO), defined as SSTO = Σ_{i=1}^{n} (y_i - ȳ)², can be partitioned into error and regression sums...of the regression line around the mean value. Mathematically, for the model given by equation A.4, SSTO = SSE + SSR (A.6), where SSTO is the total...sum of squares (i.e., the variance of the y_i's), SSE is the error sum of squares, and SSR is the regression sum of squares. SSTO, SSE, and SSR are given
Response Surface Analysis of Experiments with Random Blocks
1988-09-01
partitioned into a lack of fit sum of squares, SSLOF, and a pure error sum of squares, SSPE. The latter is obtained by pooling the pure error sums of squares...from the blocks. Tests concerning the polynomial effects can then proceed using SSPE as the error term in the denominators of the F test statistics. ...the center point in each of the three blocks is equal to SSPE = 2.0127 with 5 degrees of freedom. Hence, the lack of fit sum of squares is SSLOF
1991-09-01
matrix, the Regression Sum of Squares (SSR) and Error Sum of Squares (SSE) are also displayed as a percentage of the Total Sum of Squares (SSTO)...vector when the student compares the SSR to the SSE. In addition to the plot, the actual values of SSR, SSE, and SSTO are also provided. Figure 3 gives the...[figure residue: projection of Y onto the estimation space and the error space, with SSR, SSE, and SSTO labeled]
Validating Clusters with the Lower Bound for Sum-of-Squares Error
ERIC Educational Resources Information Center
Steinley, Douglas
2007-01-01
Given that a minor condition holds (e.g., the number of variables is greater than the number of clusters), a nontrivial lower bound for the sum-of-squares error criterion in K-means clustering is derived. By calculating the lower bound for several different situations, a method is developed to determine the adequacy of a cluster solution based on…
A suggestion for computing objective function in model calibration
Wu, Yiping; Liu, Shuguang
2014-01-01
A parameter-optimization process (model calibration) is usually required for numerical model applications, which involves the use of an objective function to determine the model cost (model-data errors). The sum of square errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, the ‘square error’ calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies, a hydrological model calibration and a biogeochemical model calibration, to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that the ‘absolute error’ functions (SAR and SARD) are superior to the ‘square error’ functions (SSR and SSRD) as objective functions for model calibration, and SAR behaved best (with the least error and highest efficiency). This study suggests that SSR might be overused in real applications, and that SAR may be a reasonable choice in common optimization implementations that do not emphasize either high or low values (e.g., modeling to support resources management).
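As a minimal illustration of the objective functions compared above (not code from the paper), the sketch below computes SSR, SAR, SSRD, and SARD for a synthetic observation/simulation pair; the data and function names are assumptions made here for demonstration.

```python
import numpy as np

def objective_functions(observed, simulated):
    """Return the four calibration objectives discussed above.

    SSR  : sum of squared errors
    SAR  : sum of absolute errors
    SSRD : sum of squared relative deviations
    SARD : sum of absolute relative deviations
    """
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    err = simulated - observed
    rel = err / observed          # assumes observed values are nonzero
    return {
        "SSR": np.sum(err ** 2),
        "SAR": np.sum(np.abs(err)),
        "SSRD": np.sum(rel ** 2),
        "SARD": np.sum(np.abs(rel)),
    }

# Illustrative data: one extreme value dominates SSR far more than SAR.
obs = np.array([1.0, 2.0, 3.0, 4.0, 50.0])
sim = np.array([1.1, 1.9, 3.2, 3.8, 40.0])
print(objective_functions(obs, sim))
```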
2009-07-16
R² = SSR/SSTO = 1 - SSE/SSTO, where SSR = Σ(Ŷ_i - Ȳ)² is the regression sum of squares (Ȳ: mean value, Ŷ_i: value from the fitted line), SSE = Σ(Y_i - Ŷ_i)² is the error sum of squares, and SSTO = SSE + SSR is the total sum of squares. Statistical analysis: coefficient of correlation
1984-12-01
total sum of squares at the center points minus the correction factor for the mean at the center points (SS_PE = Y'Y - n_1 Ȳ², where n_1 is the number of...SS_LOF = SS_RES - SS_PE). The sum of squares due to pure error estimates σ², and the sum of squares due to lack-of-fit estimates σ² plus a bias term if...Response Surface Methodology ANOVA: Source, d.f., SS, MS; Regression: n, b'X'Y, b'X'Y/n; Residual: m - n, Y'Y - b'X'Y, (Y'Y - b'X'Y)/(m - n); Pure Error: n_1 - 1, Y'Y - n_1 Ȳ², SS_PE/(n_1
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency component of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of basic RKHSs. For sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we tradeoff the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
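The following sketch illustrates the general idea of regression in a sum of RKHSs with one large-scale and one small-scale Gaussian kernel. It is a simplified kernel ridge regression with a single combined kernel and a single regularization parameter, not the authors' exact algorithm or learning-rate analysis; all parameter values are assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    """Gaussian kernel matrix between 1-D sample vectors x and y."""
    d = x[:, None] - y[None, :]
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

def sum_space_ridge(x_train, y_train, sigmas=(1.0, 0.05), lam=1e-3):
    """Ridge regression with a sum of Gaussian kernels of different scales.

    The large-scale kernel captures the low-frequency component and the
    small-scale kernel the high-frequency component of the target.
    """
    K = sum(gaussian_kernel(x_train, x_train, s) for s in sigmas)
    n = len(x_train)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y_train)

    def predict(x_new):
        Kx = sum(gaussian_kernel(x_new, x_train, s) for s in sigmas)
        return Kx @ alpha

    return predict

# Nonflat target: smooth trend plus a localized high-frequency wiggle.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = (np.sin(2 * np.pi * x) + 0.3 * np.sin(40 * np.pi * x) * (x > 0.7)
     + 0.05 * rng.standard_normal(x.size))
predict = sum_space_ridge(x, y)
print(np.mean((predict(x) - y) ** 2))  # training mean squared error
```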
Development of a Nonlinear Soft-Sensor Using a GMDH Network for a Refinery Crude Distillation Tower
NASA Astrophysics Data System (ADS)
Fujii, Kenzo; Yamamoto, Toru
In atmospheric distillation processes, process stabilization is required in order to optimize the crude-oil composition according to product market conditions. However, the process control systems sometimes fall into unstable states when unexpected disturbances are introduced, and these unusual phenomena have had an undesirable effect on certain products. Furthermore, a useful chemical engineering model has not yet been established for these phenomena. This remains a serious problem in the atmospheric distillation process. This paper describes a new modeling scheme to predict unusual phenomena in the atmospheric distillation process using the GMDH (Group Method of Data Handling) network, which is one type of network model. With the GMDH network, the model structure can be determined systematically. However, the least squares method has been commonly utilized in determining the weight coefficients (model parameters), and sufficient estimation accuracy cannot always be expected because only the sum of squared errors between the measured values and the estimates is evaluated. Therefore, instead of evaluating the sum of squared errors, the sum of absolute values of errors is introduced and the Levenberg-Marquardt method is employed to determine the model parameters. The effectiveness of the proposed method is evaluated by foaming prediction in the crude oil switching operation in the atmospheric distillation process.
An Analysis of Escort Formations
1992-03-01
error sum of squares" is denoted by SSPE and calculated by SSpE (yjj -) 2 J (5.13) where j = denotes unique design points, and i = denotes the...observations The difference between SSE and SSPE represents the deviation between the observations and the model due to inadequacies in the model. This...difference is called sum of squares due to lack of fit and denoted by SSLF. 5.18 The ratio of SSLF to SSPE , each divided by its respective degrees of freedom
A Strategy for Replacing Sum Scoring
ERIC Educational Resources Information Center
Ramsay, James O.; Wiberg, Marie
2017-01-01
This article promotes the use of modern test theory in testing situations where sum scores for binary responses are now used. It directly compares the efficiencies and biases of classical and modern test analyses and finds an improvement in the root mean squared error of ability estimates of about 5% for two designed multiple-choice tests and…
Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values
2016-12-01
MMSE (minimum mean-square error) target sample estimation using non-orthogonal basis...orthogonal, they can still be used in a minimum mean-square error (MMSE) estimator that models the object echo as a weighted sum of the multi-aspect basis...problem. Minimum mean-square error (MMSE) estimation is applied to target imaging with synthetic aperture
Navy Fuel Composition and Screening Tool (FCAST) v2.8
2016-05-10
allowed us to develop partial least squares (PLS) models based on gas chromatography–mass spectrometry (GC-MS) data that predict fuel properties. The...Chemometric property modeling Partial least squares PLS Compositional profiler Naval Air Systems Command Air-4.4.5 Patuxent River Naval Air Station Patuxent...Cumulative predicted residual error sum of squares DiEGME Diethylene glycol monomethyl ether FCAST Fuel Composition and Screening Tool FFP Fit for
The microcomputer scientific software series 3: general linear model--analysis of variance.
Harold M. Rauscher
1985-01-01
A BASIC language set of programs, designed for use on microcomputers, is presented. This set of programs will perform the analysis of variance for any statistical model describing either balanced or unbalanced designs. The program computes and displays the degrees of freedom, Type I sum of squares, and the mean square for the overall model, the error, and each factor...
Measuring Dispersion Effects of Factors in Factorial Experiments.
1988-01-01
error is MSE = SSE/(N-p), the sum of squares of pure error is SS_PE = Σ_{i=1}^{n} Σ_{j=1}^{r} (y_ij - ȳ_i)², and the mean square of pure error is MS_PE = SS_PE/(n...the level of the factor in the ith run is 0. 3.1. First Measure. We have SS_PE = Σ_{i=1}^{n} Σ_{j=1}^{r} δ_i (y_ij - ȳ_i)² + Σ_{i=1}^{n} Σ_{j=1}^{r} (1 - δ_i)(y_ij - ȳ_i)²...The first component in SS_PE corresponds to level 1 of the factor and has (Σ_{i=1}^{n} δ_i)(r-1) degrees of freedom. The second component corresponds to
MIMO radar waveform design with peak and sum power constraints
NASA Astrophysics Data System (ADS)
Arulraj, Merline; Jeyaraman, Thiruvengadam S.
2013-12-01
Optimal power allocation for multiple-input multiple-output radar waveform design subject to combined peak and sum power constraints using two different criteria is addressed in this paper. The first criterion maximizes the mutual information between the random target impulse response and the reflected waveforms, and the second minimizes the mean square error in estimating the target impulse response. It is assumed that the radar transmitter has knowledge of the target's second-order statistics. Conventionally, power is allocated to the transmit antennas based on the sum power constraint at the transmitter. However, the wide power variations across the transmit antennas pose a severe constraint on the dynamic range and peak power of the power amplifier at each antenna. In practice, each antenna has the same absolute peak power limitation, so it is desirable to consider the peak power constraint on the transmit antennas. A generalized constraint that jointly meets both the peak power constraint and the average sum power constraint to bound the dynamic range of the power amplifier at each transmit antenna has recently been proposed. The optimal power allocation using the concept of waterfilling, based on the sum power constraint, is the special case of p = 1. The optimal solutions for maximizing the mutual information and minimizing the mean square error are obtained through the Karush-Kuhn-Tucker (KKT) approach, and the numerical solutions are found through a nested Newton-type algorithm. The simulation results show that the system with both sum and peak power constraints gives better detection performance than considering only the sum power constraint at low signal-to-noise ratio.
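As background for the sum-power special case mentioned above, here is a hedged sketch of classical water-filling power allocation (not the paper's joint peak-and-sum-power solution); the channel gains and total power below are illustrative assumptions.

```python
import numpy as np

def waterfill(gains, total_power):
    """Classical water-filling: p_k = max(mu - 1/g_k, 0) with sum(p_k) = total_power.

    The water level mu is found by bisection.
    """
    inv_g = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = inv_g.min(), inv_g.max() + total_power
    for _ in range(100):                      # bisection on the water level
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - inv_g, 0.0)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv_g, 0.0)

# Example: eigenvalues of the target covariance act as channel "gains".
gains = np.array([2.0, 1.0, 0.5, 0.1])
p = waterfill(gains, total_power=4.0)
print(p, p.sum())   # stronger modes receive more power; total is ~4.0
```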
ISOFIT - A PROGRAM FOR FITTING SORPTION ISOTHERMS TO EXPERIMENTAL DATA
Isotherm expressions are important for describing the partitioning of contaminants in environmental systems. ISOFIT (ISOtherm FItting Tool) is a software program that fits isotherm parameters to experimental data via the minimization of a weighted sum of squared error (WSSE) obje...
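ISOFIT itself is a standalone program, but the underlying idea of fitting isotherm parameters by minimizing a weighted sum of squared errors can be sketched as below; the Langmuir form, the weights, and the data are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def langmuir(c, qmax, k):
    """Langmuir isotherm: sorbed concentration as a function of aqueous c."""
    return qmax * k * c / (1.0 + k * c)

def wsse(params, c, q_obs, weights):
    """Weighted sum of squared errors between observed and modeled sorption."""
    qmax, k = params
    return np.sum(weights * (q_obs - langmuir(c, qmax, k)) ** 2)

# Hypothetical batch sorption data (aqueous c, sorbed q) and 1/q^2 weights.
c = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
q = np.array([0.8, 1.3, 1.9, 2.6, 3.0])
w = 1.0 / q ** 2
res = minimize(wsse, x0=[3.0, 1.0], args=(c, q, w), method="Nelder-Mead")
print(res.x)  # fitted (qmax, k)
```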
A novel beamformer design method for medical ultrasound. Part I: Theory.
Ranganathan, Karthik; Walker, William F
2003-01-01
The design of transmit and receive aperture weightings is a critical step in the development of ultrasound imaging systems. Current design methods are generally iterative, and consequently time consuming and inexact. We describe a new and general ultrasound beamformer design method, the minimum sum squared error (MSSE) technique. The MSSE technique enables aperture design for arbitrary beam patterns (within fundamental limitations imposed by diffraction). It uses a linear algebra formulation to describe the system point spread function (psf) as a function of the aperture weightings. The sum squared error (SSE) between the system psf and the desired or goal psf is minimized, yielding the optimal aperture weightings. We present detailed analysis for continuous wave (CW) and broadband systems. We also discuss several possible applications of the technique, such as the design of aperture weightings that improve the system depth of field, generate limited diffraction transmit beams, and improve the correlation depth of field in translated aperture system geometries. Simulation results are presented in an accompanying paper.
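A hedged sketch of the core least-squares step: if the psf is linear in the aperture weights through a propagation matrix, the weights minimizing the sum squared error to a goal psf follow from an ordinary least-squares solve. The narrowband far-field matrix and the Gaussian goal beam below are assumptions, not the paper's exact CW or broadband formulation.

```python
import numpy as np

def msse_weights(A, psf_goal):
    """Least-squares aperture weights minimizing ||A @ w - psf_goal||^2.

    A maps aperture weights to samples of the system point spread function;
    here it is a generic complex propagation matrix.
    """
    w, *_ = np.linalg.lstsq(A, psf_goal, rcond=None)
    return w

# Toy narrowband example: far-field psf samples are a Fourier-like transform
# of the element weights for a half-wavelength-spaced linear array.
n_elem, n_angles = 16, 181
theta = np.linspace(-np.pi / 2, np.pi / 2, n_angles)
k_d = np.pi  # element spacing of half a wavelength
A = np.exp(1j * k_d * np.outer(np.sin(theta), np.arange(n_elem)))
psf_goal = np.exp(-(np.degrees(theta) / 5.0) ** 2)  # narrow Gaussian goal beam
w = msse_weights(A, psf_goal)
print(np.linalg.norm(A @ w - psf_goal))  # square root of the residual SSE
```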
Certification in Structural Health Monitoring Systems
2011-09-01
validation [3,8]. This may be accomplished by computing the sum of squares of pure error (SSPE) and its associated squared correlation [3,8]. To compute...these values, a cross-validation sample must be established. In general, if the SSPE is high, the model does not predict well on independent data...plethora of cross-validation methods, some of which are more useful for certain models than others [3,8]. When possible, a disclosure of the SSPE
NASA Technical Reports Server (NTRS)
Pierson, W. J., Jr.
1984-01-01
Backscatter measurements at upwind and crosswind are simulated for five incidence angles by means of the SASS-1 model function. The effects of communication noise and attitude errors are simulated by Monte Carlo methods, and the winds are recovered by both the Sum of Squares (SOS) algorithm and a Maximum Likelihood Estimator (MLE). The SOS algorithm is shown to fail for light enough winds at all incidence angles and to fail to show areas of calm because backscatter estimates that were negative or that produced incorrect values of K_p greater than one were discarded. The MLE performs well for all input backscatter estimates and returns calm when both are negative. The use of the SOS algorithm is shown to have introduced errors in the SASS-1 model function that, in part, cancel out the errors that result from using it, but that also cause disagreement with other data sources such as the AAFE circle flight data at light winds. Implications for future scatterometer systems are given.
A Comparison of Latent Growth Models for Constructs Measured by Multiple Items
ERIC Educational Resources Information Center
Leite, Walter L.
2007-01-01
Univariate latent growth modeling (LGM) of composites of multiple items (e.g., item means or sums) has been frequently used to analyze the growth of latent constructs. This study evaluated whether LGM of composites yields unbiased parameter estimates, standard errors, chi-square statistics, and adequate fit indexes. Furthermore, LGM was compared…
Optimal estimation of large structure model errors. [in Space Shuttle controller design
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1979-01-01
In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.
NASA Astrophysics Data System (ADS)
See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.
2018-04-01
This research aims to estimate the parameters of the Monod model for the growth of the microalga Botryococcus Braunii sp by the Least-Squares method. The Monod equation is a non-linear equation that can be transformed into a linear form and solved by the Least-Squares linear regression method. Alternatively, the Gauss-Newton method solves the non-linear Least-Squares problem directly, obtaining the parameter values of the Monod model by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for the microalga Botryococcus Braunii sp can be estimated by the Least-Squares method. However, the parameter values obtained by the non-linear Least-Squares method are more accurate than those from the linear Least-Squares method, since the SSE of the non-linear Least-Squares method is smaller.
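A minimal sketch of the two estimation routes described above, assuming the standard Monod form μ = μ_max S/(K_s + S): a linearized (Lineweaver-Burk style) least-squares fit versus a direct non-linear least-squares fit. The substrate and growth-rate data are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def monod(S, mu_max, Ks):
    """Monod growth model: specific growth rate as a function of substrate S."""
    return mu_max * S / (Ks + S)

# Hypothetical substrate concentrations and measured specific growth rates.
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
mu = np.array([0.11, 0.18, 0.27, 0.34, 0.40, 0.43])

# Linearized least squares (Lineweaver-Burk): 1/mu = (Ks/mu_max)(1/S) + 1/mu_max
slope, intercept = np.polyfit(1.0 / S, 1.0 / mu, 1)
mu_max_lin, Ks_lin = 1.0 / intercept, slope / intercept

# Non-linear least squares (Gauss-Newton/Levenberg-Marquardt type) on the
# original equation.
(mu_max_nl, Ks_nl), _ = curve_fit(monod, S, mu, p0=[0.5, 2.0])

for name, m, k in [("linearized", mu_max_lin, Ks_lin), ("non-linear", mu_max_nl, Ks_nl)]:
    sse = np.sum((mu - monod(S, m, k)) ** 2)
    print(f"{name}: mu_max={m:.3f}, Ks={k:.3f}, SSE={sse:.5f}")
```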
Mbah, Chamberlain; De Ruyck, Kim; De Schrijver, Silke; De Sutter, Charlotte; Schiettecatte, Kimberly; Monten, Chris; Paelinck, Leen; De Neve, Wilfried; Thierens, Hubert; West, Catharine; Amorim, Gustavo; Thas, Olivier; Veldeman, Liv
2018-05-01
Evaluation of patient characteristics inducing toxicity in breast radiotherapy, using simultaneous modeling of multiple endpoints. In 269 early-stage breast cancer patients treated with whole-breast irradiation (WBI) after breast-conserving surgery, toxicity was scored, based on five dichotomized endpoints. Five logistic regression models were fitted, one for each endpoint and the effect sizes of all variables were estimated using maximum likelihood (MLE). The MLEs are improved with James-Stein estimates (JSEs). The method combines all the MLEs, obtained for the same variable but from different endpoints. Misclassification errors were computed using MLE- and JSE-based prediction models. For associations, p-values from the sum of squares of MLEs were compared with p-values from the Standardized Total Average Toxicity (STAT) Score. With JSEs, 19 highest ranked variables were predictive of the five different endpoints. Important variables increasing radiation-induced toxicity were chemotherapy, age, SATB2 rs2881208 SNP and nodal irradiation. Treatment position (prone position) was most protective and ranked eighth. Overall, the misclassification errors were 45% and 34% for the MLE- and JSE-based models, respectively. p-Values from the sum of squares of MLEs and p-values from STAT score led to very similar conclusions, except for the variables nodal irradiation and treatment position, for which STAT p-values suggested an association with radiosensitivity, whereas p-values from the sum of squares indicated no association. Breast volume was ranked as the most significant variable in both strategies. The James-Stein estimator was used for selecting variables that are predictive for multiple toxicity endpoints. With this estimator, 19 variables were predictive for all toxicities of which four were significantly associated with overall radiosensitivity. JSEs led to almost 25% reduction in the misclassification error rate compared to conventional MLEs. Finally, patient characteristics that are associated with radiosensitivity were identified without explicitly quantifying radiosensitivity.
Choosing the Number of Clusters in K-Means Clustering
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2011-01-01
Steinley (2007) provided a lower bound for the sum-of-squares error criterion function used in K-means clustering. In this article, on the basis of the lower bound, the authors propose a method to distinguish between 1 cluster (i.e., a single distribution) versus more than 1 cluster. Additionally, conditional on indicating there are multiple…
Mishra, Vishal
2015-01-01
The interchange of the protons with the cell wall-bound calcium and magnesium ions at the interface of solution/bacterial cell surface in the biosorption system at various concentrations of protons has been studied in the present work. A mathematical model for establishing the correlation between concentration of protons and active sites was developed and optimized. The sporadic limited residence time reactor was used to titrate the calcium and magnesium ions at the individual data point. The accuracy of the proposed mathematical model was estimated using error functions such as nonlinear regression, adjusted nonlinear regression coefficient, the chi-square test, P-test and F-test. The values of the chi-square test (0.042-0.017), P-test (<0.001-0.04), sum of square errors (0.061-0.016), root mean square error (0.01-0.04) and F-test (2.22-19.92) reported in the present research indicated the suitability of the model over a wide range of proton concentrations. The zeta potential of the bacterium surface at various concentrations of protons was observed to validate the denaturation of active sites.
Oceanographic and meteorological research based on the data products of SEASAT
NASA Technical Reports Server (NTRS)
Pierson, W. J., Jr.
1985-01-01
Reservations were expressed concerning the sum of squares wind recovery algorithm and the power law model function. The SAS sum of squares (SOS) method for recovering winds from backscatter data leads to inconsistent results when V pol and H pol winds are compared. A model function that does not use a power law and that accounts for sea surface temperature is needed and is under study both theoretically and by means of the SASS mode 4 data. Aspects of the determination of winds by means of scatterometry and of the utilization of vector wind data for meteorological forecasts are elaborated. The operational aspect of an intermittent assimilation scheme currently utilized for the specification of the initial value field is considered with focus on quantifying the absolute 12-hour linear displacement error of the movement of low centers.
ERIC Educational Resources Information Center
DeTemple, Duane
2010-01-01
Purely combinatorial proofs are given for the sum of squares formula, 1² + 2² + ... + n² = n(n + 1)(2n + 1)/6, and the sum of sums of squares formula, 1² + (1² + 2²) + ... + (1² + 2² + ... + n²) = n(n + 1)²…
Teaching SSE and Some of Its Applications in Junior High School.
ERIC Educational Resources Information Center
Bernard, John E.
This paper describes the development of curriculum materials for teaching the Sum of Squared Errors (SSE) to one class of 25 eighth graders in Hawaii. Microcomputers were used in class. Prior to the explicit introduction of the SSE, students were given repeated contact with a data base of various statistics collected from members of their class. Then a…
Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.
Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia
2017-06-01
Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates in the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error in using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data with different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that: (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra, and methane/toluene mixtures gas spectra as measured using FT-IR spectrometry and CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of the SWLS has been presented to tackle the bias error from other components. The SWLS without modification presents the lowest SEP in all cases but not bias and RSS. The modification of SWLS reduced the bias, which showed a lower RSS than CLS, especially for small components.
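A minimal sketch contrasting the two estimators discussed above on a toy heteroscedastic calibration problem: ordinary (classical) least squares versus weighted least squares with inverse-variance weights. The synthetic spectra and noise model are assumptions, and the per-wavenumber selection step of SWLS is not reproduced here.

```python
import numpy as np

def cls_fit(X, y):
    """Classical (ordinary) least squares: solve (X'X) beta = X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def wls_fit(X, y, noise_std):
    """Weighted least squares with weights 1/sigma_i^2 for heteroscedastic noise."""
    w = 1.0 / noise_std ** 2
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

# Toy calibration: absorbance at m wavenumbers vs. two component concentrations.
rng = np.random.default_rng(1)
m = 200
X = rng.random((m, 2))                       # assumed pure-component "spectra"
beta_true = np.array([0.7, 0.3])             # concentrations to recover
noise_std = 0.001 + 0.05 * X.sum(axis=1)     # heteroscedastic noise model
y = X @ beta_true + noise_std * rng.standard_normal(m)

print("CLS:", cls_fit(X, y))
print("WLS:", wls_fit(X, y, noise_std))
```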
A nonlinear model of gold production in Malaysia
NASA Astrophysics Data System (ADS)
Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi
2014-06-01
Malaysia is a country which is rich in natural resources, and one of them is gold. Gold has already become an important national commodity. This study is conducted to determine a model that fits the gold production in Malaysia from 1995 to 2010 well. Five nonlinear models are presented in this study: the Logistic, Gompertz, Richard, Weibull and Chapman-Richard models. These models are used to fit the cumulative gold production in Malaysia. The best model is then selected based on model performance. The performance of the fitted model is measured by the sum of squares error, root mean square error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. This study found that the Weibull model significantly outperforms the other models. To confirm that Weibull is the best model, the latest data are fitted to the model. Once again, the Weibull model gives the lowest readings for all types of measurement error. We conclude that future gold production in Malaysia can be predicted according to the Weibull model, and this could be an important finding for Malaysia in planning its economic activities.
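A hedged sketch of fitting one common Weibull growth form to a cumulative production series and reporting a few of the error measures listed above; the functional form, starting values, and data are assumptions, not the study's actual series.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_growth(t, a, b, c):
    """One common Weibull growth form: cumulative output a*(1 - exp(-(t/b)**c))."""
    return a * (1.0 - np.exp(-(t / b) ** c))

# Hypothetical cumulative production series (index years 1..16, arbitrary units).
t = np.arange(1, 17, dtype=float)
y = np.array([2, 5, 9, 14, 20, 27, 33, 40, 46, 51, 56, 60, 63, 66, 68, 70],
             dtype=float)

params, _ = curve_fit(weibull_growth, t, y, p0=[80.0, 8.0, 1.5], maxfev=10000)
fit = weibull_growth(t, *params)
sse = np.sum((y - fit) ** 2)                       # sum of squares error
rmse = np.sqrt(np.mean((y - fit) ** 2))            # root mean square error
mape = np.mean(np.abs((y - fit) / y)) * 100.0      # mean absolute percentage error
print(params, sse, rmse, mape)
```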
Research on the infiltration processes of lawn soils of the Babao River in the Qilian Mountain.
Li, GuangWen; Feng, Qi; Zhang, FuPing; Cheng, AiFang
2014-01-01
Using a Guelph Permeameter, the soil water infiltration processes were analyzed in the Babao River of the Qilian Mountain in China. The results showed that the average soil initial infiltration and the steady infiltration rates in the upstream reaches of the Babao River are 1.93 and 0.99 cm/min, whereas those of the middle area are 0.48 cm/min and 0.21 cm/min, respectively. The infiltration processes can be divided into three stages: the rapidly changing stage (0-10 min), the slowly changing stage (10-30 min) and the stabilization stage (after 30 min). We used field data collected from lawn soils and evaluated the performances of the infiltration models of Philip, Kostiakov and Horton with the sum of squared error, the root mean square error, the coefficient of determination, the mean error, the model efficiency and Willmott's index of agreement. The results indicated that the Kostiakov model was most suitable for studying the infiltration process in the alpine lawn soils.
1987-12-01
[Garbled simulation program listing; the recoverable output is an analysis-of-variance summary for the variable inventory-to-sales ratio: Source, DF, Sum of Squares, Mean Square, F Value; Model: 17, 5.54073711, 0.32592574, 83685.81; Error: 72, 0.00029041]
Matching methods evaluation framework for stereoscopic breast x-ray images.
Rousson, Johanna; Naudin, Mathieu; Marchessoux, Cédric
2016-01-01
Three-dimensional (3-D) imaging has been intensively studied in the past few decades. Depth information is an important added value of 3-D systems over two-dimensional systems. Special attention has been devoted to the development of stereo matching methods for the generation of disparity maps (i.e., depth information within a 3-D scene). Dedicated frameworks have been designed to evaluate and rank the performance of different stereo matching methods, but never considering x-ray medical images. Yet, 3-D x-ray acquisition systems and 3-D medical displays have already been introduced into the diagnostic market. To access the depth information within x-ray stereoscopic images, computing accurate disparity maps is essential. We aimed at developing a framework dedicated to x-ray stereoscopic breast images used to evaluate and rank several stereo matching methods. A multiresolution pyramid optimization approach was integrated into the framework to increase the accuracy and the efficiency of the stereo matching techniques. Finally, a metric was designed to score the results of the stereo matching compared with the ground truth. Eight methods were evaluated, and four of them [locally scaled sum of absolute differences (LSAD), zero mean sum of absolute differences, zero mean sum of squared differences, and locally scaled mean sum of squared differences] appeared to perform equally well, with an average error score of 0.04 (0 is the perfect matching). LSAD was selected for generating the disparity maps.
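A minimal sketch of the locally scaled sum of absolute differences (LSAD) cost and a winner-takes-all disparity search, under the simplifying assumptions of rectified images and a single pixel of interest; the synthetic shift example is illustrative, not the paper's evaluation framework.

```python
import numpy as np

def lsad_cost(patch_left, patch_right, eps=1e-9):
    """Locally scaled sum of absolute differences between two image patches.

    The right patch is scaled by the ratio of local means before the SAD is
    taken, which compensates for local gain differences between the views.
    """
    scale = patch_left.mean() / (patch_right.mean() + eps)
    return np.abs(patch_left - scale * patch_right).sum()

def best_disparity(left, right, row, col, half=3, max_disp=16):
    """Winner-takes-all disparity for one pixel using the LSAD cost."""
    tpl = left[row - half:row + half + 1, col - half:col + half + 1]
    costs = []
    for d in range(max_disp + 1):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        costs.append(lsad_cost(tpl, cand))
    return int(np.argmin(costs))

# Toy example with a synthetic 5-pixel horizontal shift and a gain change.
rng = np.random.default_rng(2)
right = rng.random((64, 64))
left = np.roll(right, 5, axis=1) * 1.1
print(best_disparity(left, right, row=32, col=40))  # expected disparity: 5
```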
Counting Triangles to Sum Squares
ERIC Educational Resources Information Center
DeMaio, Joe
2012-01-01
Counting complete subgraphs of three vertices in complete graphs yields combinatorial arguments for identities for sums of squares of integers, odd integers, even integers and sums of the triangular numbers.
Optimal generalized multistep integration formulae for real-time digital simulation
NASA Technical Reports Server (NTRS)
Moerder, D. D.; Halyo, N.
1985-01-01
The problem of discretizing a dynamical system for real-time digital simulation is considered. Treating the system and its simulation as stochastic processes leads to a statistical characterization of simulator fidelity. A plant discretization procedure based on an efficient matrix generalization of explicit linear multistep discrete integration formulae is introduced, which minimizes a weighted sum of the mean squared steady-state and transient error between the system and simulator outputs.
A Comparison of Heuristic Procedures for Minimum within-Cluster Sums of Squares Partitioning
ERIC Educational Resources Information Center
Brusco, Michael J.; Steinley, Douglas
2007-01-01
Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviation from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical…
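For reference, the within-cluster sum of squares criterion itself is straightforward to compute for any given partition, as in this small sketch (the toy data and the two candidate partitions are assumptions):

```python
import numpy as np

def wcss(X, labels):
    """Within-cluster sum of squared deviations from cluster centroids."""
    total = 0.0
    for k in np.unique(labels):
        cluster = X[labels == k]
        total += np.sum((cluster - cluster.mean(axis=0)) ** 2)
    return total

# Compare a deliberately bad partition with a sensible one on toy 2-D data.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
good = np.repeat([0, 1], 50)          # split along the true groups
bad = np.tile([0, 1], 50)             # interleaved assignment
print(wcss(X, good), wcss(X, bad))    # the good partition has much lower WCSS
```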
Kumar, K Vasanth; Porkodi, K; Rocha, F
2008-01-15
A comparison of linear and non-linear regression methods in selecting the optimum isotherm was made using the experimental equilibrium data of basic red 9 sorption by activated carbon. The r² was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r²), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to predict the parameters involved in the two- and three-parameter isotherms and also to predict the optimum isotherm. Non-linear regression was found to be a better way to obtain the parameters involved in the isotherms and also the optimum isotherm. For the two-parameter isotherms, MPSD was found to be the best error function for minimizing the error distribution between the experimental equilibrium data and the predicted isotherms. In the case of the three-parameter isotherms, r² was found to be the best error function for minimizing the error distribution between the experimental equilibrium data and the theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of the experimental data when selecting the optimum isotherm. A coefficient of non-determination, K², is explained and was found to be very useful in identifying the best error function when selecting the optimum isotherm.
Multiple point least squares equalization in a room
NASA Technical Reports Server (NTRS)
Elliott, S. J.; Nelson, P. A.
1988-01-01
Equalization filters designed to minimize the mean square error between a delayed version of the original electrical signal and the equalized response at a point in a room have previously been investigated. In general, such a strategy degrades the response at positions in a room away from the equalization point. A method is presented for designing an equalization filter by adjusting the filter coefficients to minimize the sum of the squares of the errors between the equalized responses at multiple points in the room and delayed versions of the original, electrical signal. Such an equalization filter can give a more uniform frequency response over a greater volume of the enclosure than can the single point equalizer above. Computer simulation results are presented of equalizing the frequency responses from a loudspeaker to various typical ear positions, in a room with dimensions and acoustic damping typical of a car interior, using the two approaches outlined above. Adaptive filter algorithms, which can automatically adjust the coefficients of a digital equalization filter to achieve this minimization, will also be discussed.
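A hedged sketch of the multiple-point idea: stack the convolution matrices of the room responses at several points and solve one least-squares problem for the equalizer taps. The impulse responses below are synthetic assumptions, and the adaptive algorithms mentioned in the abstract are not shown.

```python
import numpy as np
from scipy.linalg import toeplitz

def multipoint_equalizer(room_irs, n_taps, delay):
    """Least-squares FIR equalizer for several receiver positions.

    Stacks the convolution matrices of the room impulse responses and solves
    for the filter minimizing the summed squared error between each equalized
    response and a delayed unit impulse (a simplified sketch of the idea).
    """
    blocks, targets = [], []
    for h in room_irs:
        n_out = len(h) + n_taps - 1
        col = np.concatenate([h, np.zeros(n_taps - 1)])
        row = np.zeros(n_taps)
        row[0] = h[0]
        blocks.append(toeplitz(col, row))          # convolution matrix
        d = np.zeros(n_out)
        d[delay] = 1.0                             # delayed impulse target
        targets.append(d)
    A, b = np.vstack(blocks), np.concatenate(targets)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Toy room responses at three listening points (assumed, not measured data).
rng = np.random.default_rng(4)
irs = [np.r_[1.0, 0.6 * rng.standard_normal(31)] * np.exp(-0.1 * np.arange(32))
       for _ in range(3)]
w = multipoint_equalizer(irs, n_taps=64, delay=16)
print(w.shape)
```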
λ elements for singular problems in CFD: Viscoelastic fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, K.K.; Surana, K.S.
1996-10-01
This paper presents a two-dimensional λ element formulation for viscoelastic fluid flow containing point singularities in the flow field. Flows of viscoelastic fluids, even without singularities, are a difficult class of problems for increasing Deborah or Weissenberg number due to the increased dominance of convective terms and thus increased hyperbolicity. In the present work the equations of fluid motion and the constitutive laws are recast in the form of a first order system of coupled equations with the use of auxiliary variables. The velocity, pressure and stresses are interpolated using equal order C⁰ λ element approximations. The Least Squares Finite Element Method (LSFEM) is used to construct the integral form (error functional I) corresponding to these equations. The error functional is constructed by taking the integrated sum of the squares of the errors or residuals (over the whole discretization) resulting when the element approximation is substituted into these equations. The conditions resulting from the minimization of the error functional are satisfied by using Newton's method with line search. LSFEM has much superior performance when dealing with non-linear and convection dominated problems.
A Scale-Independent Clustering Method with Automatic Variable Selection Based on Trees
2014-03-01
veterans fought. They then clustered the data and were able to identify three distinct post-combat syndromes associated with different eras...granting some legitimacy to proposed medical conditions such as the Gulf War Syndrome (Jones et al., 2002, pp. 321–324) D. MEASURING DISTANCES BETWEEN...chosen so as to minimize the sum of squared errors of the response across the two regions (Equation 2.1). The average y for the left and right child
A tight Cramér-Rao bound for joint parameter estimation with a pure two-mode squeezed probe
NASA Astrophysics Data System (ADS)
Bradshaw, Mark; Assad, Syed M.; Lam, Ping Koy
2017-08-01
We calculate the Holevo Cramér-Rao bound for estimation of the displacement experienced by one mode of a two-mode squeezed vacuum state with squeezing r, and find that it is equal to 4 exp(-2r). This equals the sum of the mean squared errors obtained from a dual homodyne measurement, indicating that the bound is tight and that the dual homodyne measurement is optimal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ping; Wang, Chenyu; Li, Mingjie
In general, the modeling errors of a dynamic system model are a set of random variables. The traditional performance indices of modeling, such as mean square error (MSE) and root mean square error (RMSE), cannot fully express the connotation of modeling errors with stochastic characteristics both in the dimension of time domain and space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors in both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of modeling errors. First, the modeling error PDF by the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF, eventually. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has a higher modeling precision and better generalization ability compared with the conventional WNN modeling based on MSE criteria. Furthermore, the proposed method has a more desirable estimation for the modeling error PDF that approximates to a Gaussian distribution whose shape is high and narrow.
Zhou, Ping; Wang, Chenyu; Li, Mingjie; ...
2018-01-31
In general, the modeling errors of dynamic system model are a set of random variables. The traditional performance index of modeling such as means square error (MSE) and root means square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics both in the dimension of time domain and space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors in both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of modeling errors. First, the modeling error PDF by the traditional WNN is estimated using data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of 2D deviation between the modeling error PDF and the target PDF is utilized as performance index to optimize the WNN model parameters by gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF, eventually. Simulation example and application in a blast furnace ironmaking process show that the proposed method has a higher modeling precision and better generalization ability compared with the conventional WNN modeling based on MSE criteria. However, the proposed method has more desirable estimation for modeling error PDF that approximates to a Gaussian distribution whose shape is high and narrow.
Kinetic modelling for zinc (II) ions biosorption onto Luffa cylindrica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oboh, I., E-mail: innocentoboh@uniuyo.edu.ng; Aluyor, E.; Audu, T.
The biosorption of Zinc (II) ions onto a biomaterial - Luffa cylindrica has been studied. This biomaterial was characterized by elemental analysis, surface area, pore size distribution, scanning electron microscopy, and the biomaterial before and after sorption, was characterized by Fourier Transform Infra Red (FTIR) spectrometer. The kinetic nonlinear models fitted were Pseudo-first order, Pseudo-second order and Intra-particle diffusion. A comparison of non-linear regression method in selecting the kinetic model was made. Four error functions, namely coefficient of determination (R²), hybrid fractional error function (HYBRID), average relative error (ARE), and sum of the errors squared (ERRSQ), were used to predict the parameters of the kinetic models. The strength of this study is that a biomaterial with wide distribution particularly in the tropical world and which occurs as waste material could be put into effective utilization as a biosorbent to address a crucial environmental problem.
Kinetic modelling for zinc (II) ions biosorption onto Luffa cylindrica
NASA Astrophysics Data System (ADS)
Oboh, I.; Aluyor, E.; Audu, T.
2015-03-01
The biosorption of Zinc (II) ions onto a biomaterial - Luffa cylindrica has been studied. This biomaterial was characterized by elemental analysis, surface area, pore size distribution, scanning electron microscopy, and the biomaterial before and after sorption, was characterized by Fourier Transform Infra Red (FTIR) spectrometer. The kinetic nonlinear models fitted were Pseudo-first order, Pseudo-second order and Intra-particle diffusion. A comparison of non-linear regression method in selecting the kinetic model was made. Four error functions, namely coefficient of determination (R2), hybrid fractional error function (HYBRID), average relative error (ARE), and sum of the errors squared (ERRSQ), were used to predict the parameters of the kinetic models. The strength of this study is that a biomaterial with wide distribution particularly in the tropical world and which occurs as waste material could be put into effective utilization as a biosorbent to address a crucial environmental problem.
A comparison of methods for DPLL loop filter design
NASA Technical Reports Server (NTRS)
Aguirre, S.; Hurd, W. J.; Kumar, R.; Statman, J.
1986-01-01
Four design methodologies for loop filters for a class of digital phase-locked loops (DPLLs) are presented. The first design maps an optimum analog filter into the digital domain; the second approach designs a filter that minimizes, in discrete time, a weighted combination of the variance of the phase error due to noise and the sum square of the deterministic phase error component; the third method uses Kalman filter estimation theory to design a filter composed of a least squares fading memory estimator and a predictor. The last design relies on classical theory, including rules for the design of compensators. Linear analysis is used throughout the article to compare the different designs, and includes stability, steady state performance and transient behavior of the loops. Design methodology is not critical when the loop update rate can be made high relative to the loop bandwidth, as the performance approaches that of continuous time. For low update rates, however, the minimization method is significantly superior to the other methods.
Hypoglycemia early alarm systems based on recursive autoregressive partial least squares models.
Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick
2013-01-01
Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems of potential hypoglycemia. A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Glucose prediction models based on real-time filtered data has a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm systems based on RARPLS shows good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positive/day are obtained for the early alarm system based on six-step-ahead predicted glucose values with an average early detection time of 25.25 min. The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia unless preventive action is taken far in advance. © 2012 Diabetes Technology Society.
Hypoglycemia Early Alarm Systems Based on Recursive Autoregressive Partial Least Squares Models
Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick
2013-01-01
Background Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems of potential hypoglycemia. Methods A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Results Glucose prediction models based on real-time filtered data has a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm systems based on RARPLS shows good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positive/day are obtained for the early alarm system based on six-step-ahead predicted glucose values with an average early detection time of 25.25 min. Conclusions The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia unless preventive action is taken far in advance. PMID:23439179
Two-body potential model based on cosine series expansion for ionic materials
Oda, Takuji; Weber, William J.; Tanigawa, Hisashi
2015-09-23
We examine a method to construct a two-body potential model for ionic materials with a Fourier series basis. In this method, the coefficients of the cosine basis functions are uniquely determined by solving simultaneous linear equations to minimize the sum of weighted mean square errors in energy, force and stress, where first-principles calculation results are used as the reference data. As a validation test of the method, potential models for magnesium oxide are constructed. The mean square errors converge appropriately with respect to the truncation of the cosine series. This result mathematically indicates that the constructed potential model is sufficiently close to the one that would be achieved with the non-truncated Fourier series and demonstrates that this potential virtually provides the minimum error from the reference data within the two-body representation. The constructed potential models work appropriately in both molecular statics and dynamics simulations, especially if a two-step correction to revise errors expected in the reference data is performed, and the models clearly outperform the two existing Buckingham potential models that were tested. Moreover, the good agreement over a broad range of energies and forces with first-principles calculations should enable the prediction of materials behavior away from equilibrium conditions, such as a system under irradiation.
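A simplified sketch of the linear least-squares step, assuming a cosine basis on [0, r_cut] and an energy-only objective (the actual method also weights forces and stresses from first-principles reference data):

```python
import numpy as np

def fit_cosine_potential(r, e_ref, n_terms, r_cut):
    """Least-squares fit of a truncated cosine series to reference pair energies.

    V(r) = sum_k c_k * cos(k * pi * r / r_cut) on [0, r_cut]; the basis choice
    and the energy-only objective are simplifying assumptions made here.
    """
    k = np.arange(n_terms)
    B = np.cos(np.outer(np.pi * r / r_cut, k))   # design matrix of basis values
    coeff, *_ = np.linalg.lstsq(B, e_ref, rcond=None)
    return coeff, lambda rr: np.cos(np.outer(np.pi * rr / r_cut, k)) @ coeff

# Reference data: a Buckingham-like pair energy sampled on a radial grid
# (stand-in for first-principles values).
r = np.linspace(1.5, 8.0, 200)
e_ref = 1800.0 * np.exp(-r / 0.3) - 30.0 / r ** 6
coeff, V = fit_cosine_potential(r, e_ref, n_terms=20, r_cut=8.0)
print(np.sqrt(np.mean((V(r) - e_ref) ** 2)))   # RMS energy error of the fit
```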
Simulation of a long-term aquifer test conducted near the Rio Grande, Albuquerque, New Mexico
McAda, Douglas P.
2001-01-01
A long-term aquifer test was conducted near the Rio Grande in Albuquerque during January and February 1995 using 22 wells and piezometers at nine sites, with the City of Albuquerque Griegos 1 production well as the pumped well. Griegos 1 discharge averaged about 2,330 gallons per minute for 54.4 days. A three-dimensional finite-difference ground-water-flow model was used to estimate aquifer properties in the vicinity of the Griegos well field and the amount of infiltration induced into the aquifer system from the Rio Grande and riverside drains as a result of pumping during the test. The model was initially calibrated by trial-and-error adjustments of the aquifer properties. The model was recalibrated using a nonlinear least-squares regression technique. The aquifer system in the area includes the middle Tertiary to Quaternary Santa Fe Group and post-Santa Fe Group valley- and basin-fill deposits of the Albuquerque Basin. The Rio Grande and adjacent riverside drains are in hydraulic connection with the aquifer system. The hydraulic-conductivity values of the upper part of the Santa Fe Group resulting from the model calibrated by trial and error varied by zone in the model and ranged from 12 to 33 feet per day. The hydraulic conductivity of the inner-valley alluvium was 45 feet per day. The vertical to horizontal anisotropy ratio was 1:140. Specific storage was 4 x 10-6 per foot of aquifer thickness, and specific yield was 0.15 (dimensionless). The sum of squared errors between the observed and simulated drawdowns was 130 feet squared. Not all aquifer properties could be estimated using nonlinear regression because of model insensitivity to some aquifer properties at observation locations. Hydraulic conductivity of the inner-valley alluvium, middle part of the Santa Fe Group, and riverbed and riverside-drain bed and specific yield had low sensitivity values and therefore could not be estimated. Of the properties estimated, hydraulic conductivity of the upper part of the Santa Fe Group was estimated to be 12 feet per day, the vertical to horizontal anisotropy ratio was estimated to be 1:82, and specific storage was estimated to be 1.2 x 10-6 per foot of aquifer thickness. The overall sum of squared errors between the observed and simulated drawdowns was 87 feet squared, a significant improvement over the model calibrated by trial and error. At the end of aquifer-test pumping, induced infiltration from the Rio Grande and riverside drains was simulated to be 13 percent of the total amount of water pumped. The remainder was water removed from aquifer storage. After pumping stopped, induced infiltration continued to replenish aquifer storage. Simulations estimated that 5 years after pumping began (about 4.85 years after pumping stopped), 58 to 72 percent of the total amount of water pumped was replenished by induced infiltration from the Rio Grande surface-water system.
A class of optimum digital phase locked loops for the DSN advanced receiver
NASA Technical Reports Server (NTRS)
Hurd, W. J.; Kumar, R.
1985-01-01
A class of optimum digital filters for digital phase locked loop of the deep space network advanced receiver is discussed. The filter minimizes a weighted combination of the variance of the random component of the phase error and the sum square of the deterministic dynamic component of phase error at the output of the numerically controlled oscillator (NCO). By varying the weighting coefficient over a suitable range of values, a wide set of filters are obtained such that, for any specified value of the equivalent loop-noise bandwidth, there corresponds a unique filter in this class. This filter thus has the property of having the best transient response over all possible filters of the same bandwidth and type. The optimum filters are also evaluated in terms of their gain margin for stability and their steady-state error performance.
Nana, Roger; Hu, Xiaoping
2010-01-01
k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced as a goodness measure of k-space-based reconstruction in parallel imaging and demonstrated. Data consistency error (DCE) is calculated as the sum of squared difference between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction and the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.
Positioning performance analysis of the time sum of arrival algorithm with error features
NASA Astrophysics Data System (ADS)
Gong, Feng-xun; Ma, Yan-qiu
2018-03-01
The theoretical positioning accuracy of multilateration (MLAT) with the time difference of arrival (TDOA) algorithm is very high, but several problems arise in practical applications. Here we analyze the localization performance of the time sum of arrival (TSOA) algorithm in terms of the root mean square error (RMSE) and geometric dilution of precision (GDOP) in an additive white Gaussian noise (AWGN) environment. The TSOA localization model is constructed and used to map the distribution of the location ambiguity region for four base stations. The performance analysis then starts from the four-base-station case by calculating the variation of RMSE and GDOP. Subsequently, the location parameters (number of base stations, base-station layout, and so on) are varied to show how the performance of the TSOA location algorithm changes, revealing the TSOA localization characteristics. The trends of the RMSE and GDOP demonstrate the anti-noise performance and robustness of the TSOA localization algorithm, which can be used to reduce the blind zone and the false-location rate of MLAT systems.
Two-UAV Intersection Localization System Based on the Airborne Optoelectronic Platform
Bai, Guanbing; Liu, Jinghong; Song, Yueming; Zuo, Yujia
2017-01-01
To address the limitations of existing UAV (unmanned aerial vehicle) photoelectric localization methods for moving objects, this paper proposes an improved two-UAV intersection localization system based on airborne optoelectronic platforms, using the crossed-angle localization method of photoelectric theodolites for reference. The paper introduces the makeup and operating principle of the intersection localization system, creates auxiliary coordinate systems, transforms the LOS (line of sight, from the UAV to the target) vectors into homogeneous coordinates, and establishes a two-UAV intersection localization model. The influence of the positional relationship between the UAVs and the target on localization accuracy is studied in detail to obtain an ideal measuring position and the optimal localization position, where the optimal intersection angle is 72.6318°. The results show that, at the optimal position, the localization root mean square (RMS) error is 25.0235 m when the target is 5 km away from the UAV baselines. The influence of modified adaptive Kalman filtering on the localization results is then analyzed, and an appropriate filtering model is established that reduces the localization RMS error to 15.7983 m. Finally, an outfield experiment was carried out and yielded the following results: σB=1.63×10−4 (°), σL=1.35×10−4 (°), σH=15.8 (m), σsum=27.6 (m), where σB represents the longitude error, σL represents the latitude error, σH represents the altitude error, and σsum represents the error radius. PMID:28067814
Medium-range Performance of the Global NWP Model
NASA Astrophysics Data System (ADS)
Kim, J.; Jang, T.; Kim, J.; Kim, Y.
2017-12-01
The medium-range performance of the global numerical weather prediction (NWP) model of the Korea Meteorological Administration (KMA) is investigated, based on the prediction of the extratropical circulation. The mean square error is expressed as the sum of the spatial variance of the discrepancy between forecasts and observations and the square of the mean error (ME); it is therefore important to investigate the ME contribution in order to understand the model performance. The ME is obtained by subtracting an anomaly from the forecast difference with respect to the real climatology. It is found that the global model suffers from a severe systematic ME in medium-range forecasts, dominant throughout the troposphere in all months. Such ME can explain at most 25% of the root mean square error. We also compare the extratropical ME distribution with those from other NWP centers; the NWP models exhibit similar spatial ME structures. The spatial ME pattern is highly correlated with that of the anomaly, implying that the ME varies with season. For example, the correlation coefficient between ME and anomaly ranges from -0.51 to -0.85 by month. The pattern of the extratropical circulation also has a high correlation with the anomaly. The global model has trouble faithfully simulating extratropical cyclones and blockings in the medium-range forecast; in particular, it has a hard time simulating anomalous events. If an anomalous period is chosen for a test-bed experiment, a large error due to the anomaly must therefore be expected.
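The decomposition used above, mean square error equal to the spatial variance of the forecast-minus-observation discrepancy plus the squared mean error, can be illustrated with a minimal sketch; the random fields and array names below are placeholders, not KMA data.

```python
import numpy as np

def mse_decomposition(forecast, observed):
    """Split MSE into the spatial variance of the discrepancy and the squared mean error (ME)."""
    d = forecast - observed          # discrepancy field
    me = d.mean()                    # mean error (bias)
    variance = d.var()               # spatial variance of the discrepancy
    mse = np.mean(d ** 2)            # total mean square error
    # mse == variance + me**2 up to floating-point rounding
    return mse, variance, me

# toy example with random fields
rng = np.random.default_rng(0)
fcst = rng.normal(loc=1.0, scale=2.0, size=(90, 180))
obs = rng.normal(loc=0.0, scale=2.0, size=(90, 180))
mse, var, me = mse_decomposition(fcst, obs)
print(mse, var + me ** 2)            # the two values agree
```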
1975-02-28
This excerpt discusses the choice of an angular increment within the main diffraction peak, an effect that is entirely missed by an equivalent-sphere model; the error incurred by assumption (7); and, to minimize the sum of squares, the choice of Q so that the relevant expression is as negative as possible (if it is never negative for 0 ≤ Q ≤ 1, the minimum has already been reached).
Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira
2015-12-18
For this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: the sum of squared errors and the maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, the specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) during parameter estimation. The reduction obtained was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.
Anandakrishnan, Ramu; Onufriev, Alexey
2008-03-01
In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first sub-divide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity for the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between the error bound and the root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms in practical applications. An example of error analysis for such an application, the computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
NASA Astrophysics Data System (ADS)
Dondurur, Derman; Sarı, Coşkun
2004-07-01
A FORTRAN 77 computer code is presented that permits the inversion of Slingram electromagnetic anomalies to an optimal conductor model. A damped least-squares inversion algorithm is used to estimate the anomalous-body parameters, e.g. the depth, dip, and surface projection point of the target. Iteration progress is controlled by the maximum relative error, and iterations continue until a tolerance value is satisfied, while the modification of Marquardt's parameter is controlled by the sum of squared errors. To form the Jacobian matrix, the partial derivatives of the theoretical anomaly expression with respect to the parameters being optimized are calculated numerically using first-order forward finite differences. One theoretical and two field anomalies are used to test the accuracy and applicability of the inversion program. Inversion of the field data indicated that the depth and surface projection point of the conductor are estimated correctly; however, considerable discrepancies appeared in the estimated dip angles. It is therefore concluded that the most important factor behind the misfit between observed and calculated data is that the theory used for computing Slingram anomalies is valid only for thin conductors, and this assumption may have caused incorrect dip estimates in the case of wide conductors.
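A minimal sketch of a damped least-squares (Marquardt-style) iteration with a first-order forward-difference Jacobian, the general scheme described above, is given below. The exponential test model, the damping update rule, and all names are illustrative assumptions; this is not the program's Slingram forward theory.

```python
import numpy as np

def jacobian_fd(model, params, x, eps=1e-6):
    """First-order forward finite-difference Jacobian of model(x, params)."""
    base = model(x, params)
    J = np.zeros((base.size, params.size))
    for k in range(params.size):
        p = params.copy()
        p[k] += eps
        J[:, k] = (model(x, p) - base) / eps
    return J

def damped_least_squares(model, params, x, y, lam=1e-2, n_iter=50, tol=1e-8):
    """Marquardt-style iteration: damping is raised when the sum of squared errors grows."""
    sse = np.sum((y - model(x, params)) ** 2)
    for _ in range(n_iter):
        r = y - model(x, params)
        J = jacobian_fd(model, params, x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(params.size), J.T @ r)
        trial = params + step
        sse_trial = np.sum((y - model(x, trial)) ** 2)
        if sse_trial < sse:          # accept the step and relax the damping
            params, lam = trial, lam * 0.5
            if abs(sse - sse_trial) < tol:
                break
            sse = sse_trial
        else:                        # reject the step and increase the damping
            lam *= 10.0
    return params

# toy usage: fit an exponential decay
model = lambda x, p: p[0] * np.exp(-p[1] * x)
x = np.linspace(0.0, 5.0, 40)
y = model(x, np.array([2.0, 0.7])) + 0.01 * np.random.default_rng(1).normal(size=x.size)
print(damped_least_squares(model, np.array([1.0, 1.0]), x, y))
```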
Luo, Xiongbiao; Mori, Kensaku
2014-06-01
Endoscope 3-D motion tracking, which seeks to synchronize pre- and intra-operative images in endoscopic interventions, is usually performed as video-volume registration that optimizes the similarity between endoscopic video and pre-operative images. The tracking performance, in turn, depends significantly on whether a similarity measure can successfully characterize the difference between video sequences and volume-rendering images driven by pre-operative images. This paper proposes a discriminative structural similarity measure, which uses the degradation of structural information and takes image correlation or structure, luminance, and contrast into consideration, to boost video-volume registration. When applied to endoscope tracking, the proposed similarity measure was demonstrated to be more accurate and robust than several available similarity measures, e.g., local normalized cross correlation, normalized mutual information, modified mean square error, or normalized sum of squared differences. Based on clinical data evaluation, the tracking error was reduced significantly from at least 14.6 mm to 4.5 mm, and processing was accelerated to more than 30 frames per second using a graphics processing unit.
Using Redundancy To Reduce Errors in Magnetometer Readings
NASA Technical Reports Server (NTRS)
Kulikov, Igor; Zak, Michail
2004-01-01
A method of reducing errors in noisy magnetic-field measurements involves exploitation of redundancy in the readings of multiple magnetometers in a cluster. By "redundancy" is meant that the readings are not entirely independent of each other, because the relationships among the magnetic-field components that one seeks to measure are governed by the fundamental laws of electromagnetism as expressed by Maxwell's equations. Assuming that the magnetometers are located outside a magnetic material, that the magnetic field is steady or quasi-steady, and that there are no electric currents flowing in or near the magnetometers, the applicable Maxwell's equations are ∇ × B = 0 and ∇ · B = 0, where B is the magnetic-flux-density vector. By suitable algebraic manipulation, these equations can be shown to impose three independent constraints on the values of the components of B at the various magnetometer positions. In general, the problem of reducing the errors in noisy measurements is one of finding a set of corrected values that minimize an error function. In the present method, the error function is formulated as (1) the sum of squares of the differences between the corrected and noisy measurement values plus (2) a sum of three terms, each comprising the product of a Lagrange multiplier and one of the three constraints. The partial derivatives of the error function with respect to the corrected magnetic-field component values and the Lagrange multipliers are set equal to zero, leading to a set of equations that can be put into matrix-vector form. The matrix can be inverted to solve for a vector that comprises the corrected magnetic-field component values and the Lagrange multipliers.
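A minimal numerical sketch of the general formulation, a quadratic misfit plus Lagrange-multiplier terms solved as one matrix-vector system, is shown below. The single sum-to-zero constraint is a stand-in; it is not the actual set of three Maxwell-derived constraints for a particular magnetometer cluster geometry.

```python
import numpy as np

def constrained_correction(m_noisy, C, d=None):
    """
    Minimize ||m - m_noisy||^2 subject to C @ m = d using Lagrange multipliers.
    Setting the partial derivatives to zero gives the linear (KKT) system
    [[2I, C^T], [C, 0]] @ [m, lambda] = [2*m_noisy, d].
    """
    n, k = m_noisy.size, C.shape[0]
    if d is None:
        d = np.zeros(k)
    kkt = np.block([[2.0 * np.eye(n), C.T],
                    [C, np.zeros((k, k))]])
    rhs = np.concatenate([2.0 * m_noisy, d])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n], sol[n:]          # corrected values, Lagrange multipliers

# toy example: three noisy readings whose sum should be zero (one linear constraint)
m_noisy = np.array([0.31, -0.12, -0.10])
C = np.array([[1.0, 1.0, 1.0]])
m_corr, lagrange = constrained_correction(m_noisy, C)
print(m_corr, m_corr.sum())          # corrected values now sum to ~0
```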
A Solution to Weighted Sums of Squares as a Square
ERIC Educational Resources Information Center
Withers, Christopher S.; Nadarajah, Saralees
2012-01-01
For n = 1, 2, ..., we give a solution (x_1, ..., x_n, N) to the Diophantine integer equation [image omitted]. Our solution has N of the form n!, in contrast to other solutions in the literature that are extensions of Euler's solution for N, a sum of squares. More generally, for given n and given integer weights m_…
Propagation of angular errors in two-axis rotation systems
NASA Astrophysics Data System (ADS)
Torrington, Geoffrey K.
2003-10-01
Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
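A root-sum-of-squares budget of the kind described can be sketched as follows; the error sources, tolerances, and sensitivity weighting factors are hypothetical placeholders rather than values from the paper's tables.

```python
import math

# hypothetical 1-sigma tolerances (arcsec) and sensitivity weighting factors
error_sources = {
    "azimuth encoder":   (2.0, 1.00),
    "elevation encoder": (2.0, 1.00),
    "axis wobble":       (1.5, 0.71),
    "non-orthogonality": (1.0, 0.50),
}

def rss_budget(sources, k=1):
    """Root-sum-of-squares of weighted error sources at coverage factor k (k=1 -> 67%, k=2 -> 95%)."""
    return k * math.sqrt(sum((tol * w) ** 2 for tol, w in sources.values()))

print("k=1 budget: %.2f arcsec" % rss_budget(error_sources, k=1))
print("k=2 budget: %.2f arcsec" % rss_budget(error_sources, k=2))
```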
Ten years of preanalytical monitoring and control: Synthetic Balanced Score Card Indicator
López-Garrigós, Maite; Flores, Emilio; Santo-Quiles, Ana; Gutierrez, Mercedes; Lugo, Javier; Lillo, Rosa; Leiva-Salinas, Carlos
2015-01-01
Introduction: Preanalytical control and monitoring continue to be an important issue for clinical laboratory professionals. The aims of the study were to evaluate a monitoring system for preanalytical errors related to samples unsuitable for analysis, based on different indicators; to compare such indicators across different phlebotomy centres; and finally to evaluate a single synthetic preanalytical indicator that may be included in the balanced scorecard management system (BSC). Materials and methods: We collected individual and global preanalytical errors in haematology, coagulation, chemistry, and urine sample analyses. We also analyzed a synthetic indicator that represents the sum of all types of preanalytical errors, expressed as a sigma level. We studied the evolution of these indicators over time and compared indicator results using the comparison of proportions and the Chi-square test. Results: The number of errors decreased over the years (P < 0.001). This pattern was confirmed in primary care patients, inpatients, and outpatients. In blood samples, the fewest errors occurred in outpatients, followed by inpatients. Conclusion: We present a practical and effective methodology for monitoring preanalytical errors due to unsuitable samples. The synthetic indicator summarizes overall preanalytical sample errors and can be used as part of the BSC management system. PMID:25672466
Zhao, Ke; Ji, Yaoyao; Li, Yan; Li, Ting
2018-01-21
Near-infrared spectroscopy (NIRS) has become widely accepted as a valuable tool for noninvasively monitoring hemodynamics for clinical and diagnostic purposes. Baseline shift has attracted great attention in the field, but there has been little quantitative study on baseline removal. Here, we aimed to study the baseline characteristics of an in-house-built portable medical NIRS device over a long time (>3.5 h). We found that the measured baselines all formed perfect polynomial functions in phantom tests mimicking human bodies, as identified by recent NIRS studies. More importantly, our study shows that, among second- to sixth-order polynomials, the fourth-order polynomial gave distinguished performance with stable, low-computation-burden fitting calibration (R-square > 0.99 for all probes), as evaluated by the parameters R-square, sum of squares due to error, and residual. This study provides a straightforward, efficient, and quantitatively evaluated solution for online baseline removal for hemodynamic monitoring using NIRS devices.
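The polynomial baseline fit and its goodness-of-fit evaluation can be sketched as below, assuming a synthetic drift signal in place of a real phantom recording; the function and variable names are illustrative.

```python
import numpy as np

# synthetic long-duration baseline drift (hypothetical stand-in for a >3.5 h phantom recording)
t = np.linspace(0.0, 3.5, 5000)                       # time in hours
rng = np.random.default_rng(7)
signal = 0.02 * t ** 2 - 0.05 * t + rng.normal(0.0, 0.002, t.size)

def polynomial_baseline_fit(t, y, order=4):
    """Fit a polynomial baseline and report the residuals, SSE, and R-square."""
    coeffs = np.polyfit(t, y, order)
    baseline = np.polyval(coeffs, t)
    residuals = y - baseline
    sse = np.sum(residuals ** 2)                      # sum of squares due to error
    r_square = 1.0 - sse / np.sum((y - y.mean()) ** 2)
    return baseline, residuals, sse, r_square

baseline, resid, sse, r2 = polynomial_baseline_fit(t, signal, order=4)
corrected = signal - baseline                          # baseline-removed signal
print("SSE = %.4g, R-square = %.4f" % (sse, r2))
```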
Cooley, R.L.; Hill, M.C.
1992-01-01
Three methods of solving nonlinear least-squares problems were compared for robustness and efficiency using a series of hypothetical and field problems. A modified Gauss-Newton/full Newton hybrid method (MGN/FN) and an analogous method for which part of the Hessian matrix was replaced by a quasi-Newton approximation (MGN/QN) solved some of the problems with appreciably fewer iterations than required using only a modified Gauss-Newton (MGN) method. In these problems, model nonlinearity and a large variance for the observed data apparently caused MGN to converge more slowly than MGN/FN or MGN/QN after the sum of squared errors had almost stabilized. Other problems were solved as efficiently with MGN as with MGN/FN or MGN/QN. Because MGN/FN can require significantly more computer time per iteration and more computer storage for transient problems, it is less attractive for a general purpose algorithm than MGN/QN.
Documentation of a spreadsheet for time-series analysis and drawdown estimation
Halford, Keith J.
2006-01-01
Drawdowns during aquifer tests can be obscured by barometric pressure changes, earth tides, regional pumping, and recharge events in the water-level record. These stresses can create water-level fluctuations that should be removed from observed water levels prior to estimating drawdowns. Simple models have been developed for estimating unpumped water levels during aquifer tests that are referred to as synthetic water levels. These models sum multiple time series such as barometric pressure, tidal potential, and background water levels to simulate non-pumping water levels. The amplitude and phase of each time series are adjusted so that synthetic water levels match measured water levels during periods unaffected by an aquifer test. Differences between synthetic and measured water levels are minimized with a sum-of-squares objective function. Root-mean-square errors during fitting and prediction periods were compared multiple times at four geographically diverse sites. Prediction error equaled fitting error when fitting periods were greater than or equal to four times prediction periods. The proposed drawdown estimation approach has been implemented in a spreadsheet application. Measured time series are independent so that collection frequencies can differ and sampling times can be asynchronous. Time series can be viewed selectively and magnified easily. Fitting and prediction periods can be defined graphically or entered directly. Synthetic water levels for each observation well are created with earth tides, measured time series, moving averages of time series, and differences between measured and moving averages of time series. Selected series and fitting parameters for synthetic water levels are stored and drawdowns are estimated for prediction periods. Drawdowns can be viewed independently and adjusted visually if an anomaly skews initial drawdowns away from 0. The number of observations in a drawdown time series can be reduced by averaging across user-defined periods. Raw or reduced drawdown estimates can be copied from the spreadsheet application or written to tab-delimited ASCII files.
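A highly simplified sketch of the underlying idea, fitting a synthetic water level as a weighted sum of explanatory time series by least squares and taking the residual as the drawdown estimate, is shown below; the series are synthetic placeholders and the code is not the spreadsheet's actual implementation.

```python
import numpy as np

def fit_synthetic_water_level(series, observed):
    """
    Least-squares amplitudes for a synthetic water level built as a weighted sum
    of explanatory time series plus a constant offset.
    """
    X = np.column_stack(series + [np.ones_like(observed)])
    coeffs, _, _, _ = np.linalg.lstsq(X, observed, rcond=None)
    synthetic = X @ coeffs
    rmse = np.sqrt(np.mean((observed - synthetic) ** 2))
    return coeffs, synthetic, rmse

# synthetic example: barometric and tidal components plus noise
t = np.arange(0.0, 30.0, 1.0 / 24.0)                 # 30 days of hourly samples
baro = np.sin(2 * np.pi * t / 5.0)
tide = np.sin(2 * np.pi * t / 0.5175)                # roughly an M2 tidal period, in days
rng = np.random.default_rng(3)
observed = 0.3 * baro + 0.1 * tide + 0.02 * rng.normal(size=t.size)

coeffs, synthetic, rmse = fit_synthetic_water_level([baro, tide], observed)
drawdown_estimate = observed - synthetic              # residual left after removing fitted stresses
print(coeffs, rmse)
```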
Aerodynamic influence coefficient method using singularity splines.
NASA Technical Reports Server (NTRS)
Mercer, J. E.; Weber, J. A.; Lesferd, E. P.
1973-01-01
A new numerical formulation is presented, together with computed results. This formulation combines the adaptability to complex shapes offered by paneling schemes with the smoothness and accuracy of the loading-function methods. It employs a continuous distribution of singularity strength over a set of panels on a paneled wing. The basic distributions are independent, and each satisfies all of the continuity conditions required of the final solution. These distributions are overlapped both spanwise and chordwise (hence the term 'spline'). Boundary conditions are satisfied in a least-squares error sense over the surface, using a finite summing technique to approximate the integral.
Photometric theory for wide-angle phenomena
NASA Technical Reports Server (NTRS)
Usher, Peter D.
1990-01-01
An examination is made of the problem posed by wide-angle photographic photometry, in order to extract a photometric-morphological history of Comet P/Halley. Photometric solutions are presently achieved over wide angles through a generalization of an assumption-free moment-sum method. Standard stars in the field allow a complete solution to be obtained for extinction, sky brightness, and the characteristic curve. After formulating Newton's method for the solution of the general nonlinear least-squares problem, an implementation is undertaken for a canonical data set. Attention is given to the problem of random and systematic photometric errors.
Ultrasonic measurements of the reflection coefficient at a water/polyurethane foam interface.
Sagers, Jason D; Haberman, Michael R; Wilson, Preston S
2013-09-01
Measured ultrasonic reflection coefficients as a function of normal incidence angle are reported for several samples of polyurethane foam submerged in a water bath. Three reflection coefficient models are employed as needed in this analysis to approximate the measured data: (1) an infinite plane wave impinging on an elastic halfspace, (2) an infinite plane wave impinging on a single fluid layer overlying a fluid halfspace, and (3) a finite acoustic beam impinging on an elastic halfspace. The compressional wave speed in each sample is calculated by minimizing the sum of squared error (SSE) between the measured and modeled data.
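As an illustration of minimizing the SSE between measured and modeled reflection coefficients, the sketch below uses the simplest possible model, a normal-incidence plane wave on a fluid-like halfspace, with made-up sample properties; it is not the paper's elastic or layered models.

```python
import numpy as np

RHO_WATER, C_WATER = 1000.0, 1480.0        # density (kg/m^3) and sound speed (m/s) of water

def reflection_coefficient(rho2, c2, rho1=RHO_WATER, c1=C_WATER):
    """Normal-incidence pressure reflection coefficient between two fluid-like media."""
    z1, z2 = rho1 * c1, rho2 * c2
    return (z2 - z1) / (z2 + z1)

def estimate_wave_speed(r_measured, rho_sample, c_grid):
    """Grid search for the compressional speed minimizing the sum of squared errors (SSE)."""
    sse = np.array([np.sum((r_measured - reflection_coefficient(rho_sample, c)) ** 2)
                    for c in c_grid])
    return c_grid[np.argmin(sse)], sse.min()

# hypothetical repeated normal-incidence measurements on a foam sample
rng = np.random.default_rng(5)
true_c, rho_foam = 900.0, 400.0
r_meas = reflection_coefficient(rho_foam, true_c) + 0.01 * rng.normal(size=20)

c_grid = np.linspace(500.0, 1500.0, 1001)
c_best, sse_best = estimate_wave_speed(r_meas, rho_foam, c_grid)
print(c_best, sse_best)
```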
Influential Nonegligible Parameters under the Search Linear Model.
1986-04-25
lack of fit as SSLOF(i) (equation 12) and the sum of squares due to pure error as SSPE (equation 13). For i = 1, 2, ..., F(i) is defined in terms of SSLOF(i), SSPE, and SSE(i). Noting that the numerator on the RHS of this expression does not depend on i, we get the equivalence of (a) and (b). Again, SSE(i) = SSLOF(i) + SSPE, and SSPE does not depend on i; therefore (a) and (c) are equivalent. From (14), the equivalence of (c) and (d) is clear. From (3), (6
A GPU-based symmetric non-rigid image registration method in human lung.
Haghighi, Babak; D Ellingwood, Nathan; Yin, Youbing; Hoffman, Eric A; Lin, Ching-Long
2018-03-01
Quantitative computed tomography (QCT) of the lungs plays an increasing role in identifying sub-phenotypes of pathologies previously lumped into broad categories such as chronic obstructive pulmonary disease and asthma. Methods for image matching and linking multiple lung volumes have proven useful in linking structure to function and in the identification of regional longitudinal changes. Here, we seek to improve the accuracy of image matching via the use of a symmetric multi-level non-rigid registration employing an inverse consistent (IC) transformation whereby images are registered both in the forward and reverse directions. To develop the symmetric method, two similarity measures, the sum of squared intensity difference (SSD) and the sum of squared tissue volume difference (SSTVD), were used. The method is based on a novel generic mathematical framework to include forward and backward transformations, simultaneously, eliminating the need to compute the inverse transformation. Two implementations were used to assess the proposed method: a two-dimensional (2-D) implementation using synthetic examples with SSD, and a multi-core CPU and graphics processing unit (GPU) implementation with SSTVD for three-dimensional (3-D) human lung datasets (six normal adults studied at total lung capacity (TLC) and functional residual capacity (FRC)). Success was evaluated in terms of the IC transformation consistency serving to link TLC to FRC. 2-D registration on synthetic images, using both symmetric and non-symmetric SSD methods, and comparison of displacement fields showed that the symmetric method gave a symmetrical grid shape and reduced IC errors, with the mean values of IC errors decreased by 37%. Results for both symmetric and non-symmetric transformations of human datasets showed that the symmetric method gave better results for IC errors in all cases, with mean values of IC errors for the symmetric method lower than the non-symmetric methods using both SSD and SSTVD. The GPU version demonstrated an average of 43 times speedup and ~5.2 times speedup over the single-threaded and 12-threaded CPU versions, respectively. Run times with the GPU were as fast as 2 min. The symmetric method improved the inverse consistency, aiding the use of image registration in the QCT-based evaluation of the lung.
Estimators of The Magnitude-Squared Spectrum and Methods for Incorporating SNR Uncertainty
Lu, Yang; Loizou, Philipos C.
2011-01-01
Statistical estimators of the magnitude-squared spectrum are derived based on the assumption that the magnitude-squared spectrum of the noisy speech signal can be computed as the sum of the (clean) signal and noise magnitude-squared spectra. Maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators are derived based on a Gaussian statistical model. The gain function of the MAP estimator was found to be identical to the gain function used in the ideal binary mask (IdBM) that is widely used in computational auditory scene analysis (CASA). As such, it was binary and assumed the value of 1 if the local SNR exceeded 0 dB, and assumed the value of 0 otherwise. By modeling the local instantaneous SNR as an F-distributed random variable, soft masking methods were derived incorporating SNR uncertainty. The soft masking method, in particular, which weighted the noisy magnitude-squared spectrum by the a priori probability that the local SNR exceeds 0 dB, was shown to be identical to the Wiener gain function. Results indicated that the proposed estimators yielded significantly better speech quality than the conventional MMSE spectral power estimators, in terms of yielding lower residual noise and lower speech distortion. PMID:21886543
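The two gain functions mentioned, the binary mask that keeps a time-frequency bin when the local SNR exceeds 0 dB and the Wiener gain xi/(1 + xi) arising from the soft-masking weighting, can be sketched generically as follows; this is not the authors' full estimator.

```python
import numpy as np

def ideal_binary_mask_gain(local_snr_db):
    """Binary gain: 1 where the local SNR exceeds 0 dB, 0 otherwise."""
    return (np.asarray(local_snr_db) > 0.0).astype(float)

def wiener_gain(a_priori_snr):
    """Wiener gain function xi / (1 + xi) for a priori SNR xi on a linear scale."""
    xi = np.asarray(a_priori_snr, dtype=float)
    return xi / (1.0 + xi)

# apply both gains to a noisy magnitude-squared spectrum (toy values)
noisy_power = np.array([4.0, 0.5, 9.0, 1.2])
local_snr_db = np.array([3.0, -6.0, 10.0, -1.0])
xi = 10.0 ** (local_snr_db / 10.0)

print(ideal_binary_mask_gain(local_snr_db) * noisy_power)   # hard-masked spectrum
print(wiener_gain(xi) * noisy_power)                        # soft-masked spectrum
```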
Polynomial fuzzy observer designs: a sum-of-squares approach.
Tanaka, Kazuo; Ohtake, Hiroshi; Seo, Toshiaki; Tanaka, Motoyasu; Wang, Hua O
2012-10-01
This paper presents a sum-of-squares (SOS) approach to polynomial fuzzy observer designs for three classes of polynomial fuzzy systems. The proposed SOS-based framework provides a number of innovations and improvements over the existing linear matrix inequality (LMI)-based approaches to Takagi-Sugeno (T-S) fuzzy controller and observer designs. First, we briefly summarize previous results with respect to a polynomial fuzzy system that is a more general representation of the well-known T-S fuzzy system. Next, we propose polynomial fuzzy observers to estimate states in three classes of polynomial fuzzy systems and derive SOS conditions to design polynomial fuzzy controllers and observers. A remarkable feature of the SOS design conditions for the first two classes (Classes I and II) is that they realize the so-called separation principle, i.e., the polynomial fuzzy controller and observer for each class can be designed separately without losing the guarantee of stability of the overall control system, in addition to convergence of the state-estimation error (via the observer) to zero. Although the separation principle does not hold for the last class (Class III), we propose an algorithm to design a polynomial fuzzy controller and observer satisfying the stability of the overall control system, in addition to convergence of the state-estimation error (via the observer) to zero. All the design conditions in the proposed approach can be represented in terms of SOS and are symbolically and numerically solved via the recently developed SOSTOOLS and a semidefinite-program solver, respectively. To illustrate the validity and applicability of the proposed approach, three design examples are provided. The examples demonstrate the advantages of the SOS-based approach over the existing LMI approaches to T-S fuzzy observer designs.
Khan, Waseem S; Hamadneh, Nawaf N; Khan, Waqar A
2017-01-01
In this study, a multilayer perceptron neural network (MLPNN) was employed to predict the thermal conductivity of PVP electrospun nanocomposite fibers with multiwalled carbon nanotubes (MWCNTs) and nickel-zinc ferrites [(Ni0.6Zn0.4) Fe2O4]. This is the second attempt at applying an MLPNN with the prey-predator algorithm to the prediction of thermal conductivity of PVP electrospun nanocomposite fibers. The prey-predator algorithm was used to train the neural networks to find the best models, namely those with the minimum sum of squared errors between the experimental testing data and the corresponding model results. The minimum error was found to be 0.0028 for the MWCNTs model and 0.00199 for the Ni-Zn ferrites model. The predicted artificial neural network (ANN) responses were analyzed statistically using the z-test, the correlation coefficient, and the error functions for both inclusions. The predicted ANN responses for the PVP electrospun nanocomposite fibers were compared with the experimental data and found to be in good agreement.
Using Disks as Models for Proofs of Series
ERIC Educational Resources Information Center
Somchaipeng, Tongta; Kruatong, Tussatrin; Panijpan, Bhinyo
2012-01-01
Exploring and deriving proofs of closed-form expressions for series can be fun for students. However, for some students, a physical representation of such problems is more meaningful. Various approaches have been designed to help students visualize squares of sums and sums of squares; these approaches may be arithmetic-algebraic or combinatorial…
Milne, S C
1996-12-24
In this paper, we give two infinite families of explicit exact formulas that generalize Jacobi's (1829) 4 and 8 squares identities to 4n² or 4n(n + 1) squares, respectively, without using cusp forms. Our 24 squares identity leads to a different formula for Ramanujan's tau function tau(n), when n is odd. These results arise in the setting of Jacobi elliptic functions, Jacobi continued fractions, Hankel or Turánian determinants, Fourier series, Lambert series, inclusion/exclusion, the Laplace expansion formula for determinants, and Schur functions. We have also obtained many additional infinite families of identities in this same setting that are analogous to the eta-function identities in appendix I of Macdonald's work [Macdonald, I. G. (1972) Invent. Math. 15, 91-143]. A special case of our methods yields a proof of the two conjectured [Kac, V. G. and Wakimoto, M. (1994) in Progress in Mathematics, eds. Brylinski, J.-L., Brylinski, R., Guillemin, V. & Kac, V. (Birkhäuser Boston, Boston, MA), Vol. 123, pp. 415-456] identities involving representing a positive integer by sums of 4n² or 4n(n + 1) triangular numbers, respectively. Our 16 and 24 squares identities were originally obtained via multiple basic hypergeometric series, Gustafson's C_l nonterminating 6-phi-5 summation theorem, and Andrews' basic hypergeometric series proof of Jacobi's 4 and 8 squares identities. We have (elsewhere) applied symmetry and Schur function techniques to this original approach to prove the existence of similar infinite families of sums of squares identities for n² or n(n + 1) squares, respectively. Our sums of more than 8 squares identities are not the same as the formulas of Mathews (1895), Glaisher (1907), Ramanujan (1916), Mordell (1917, 1919), Hardy (1918, 1920), Kac and Wakimoto, and many others.
Measures of precision for dissimilarity-based multivariate analysis of ecological communities
Anderson, Marti J; Santana-Garcon, Julia
2015-01-01
Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity-based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity-based standard error (MultSE) as a useful quantity for assessing sample-size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a permanova model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided. PMID:25438826
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hensley, Alyssa J. R.; Ghale, Kushal; Rieg, Carolin
In recent years, the popularity of density functional theory with periodic boundary conditions (DFT) has surged for the design and optimization of functional materials. However, no single DFT exchange–correlation functional currently available gives accurate adsorption energies on transition metals both when bonding to the surface is dominated by strong covalent or ionic bonding and when it has strong contributions from van der Waals interactions (i.e., dispersion forces). Here we present a new, simple method for accurately predicting adsorption energies on transition-metal surfaces based on DFT calculations, using an adaptively weighted sum of energies from RPBE and optB86b-vdW (or optB88-vdW) density functionals. This method has been benchmarked against a set of 39 reliable experimental energies for adsorption reactions. Our results show that this method has a mean absolute error and root mean squared error relative to experiments of 13.4 and 19.3 kJ/mol, respectively, compared to 20.4 and 26.4 kJ/mol for the BEEF-vdW functional. For systems with large van der Waals contributions, this method decreases these errors to 11.6 and 17.5 kJ/mol. Furthermore, this method provides predictions of adsorption energies both for processes dominated by strong covalent or ionic bonding and for those dominated by dispersion forces that are more accurate than those of any current standard DFT functional alone.
Quantum Kronecker sum-product low-density parity-check codes with finite rate
NASA Astrophysics Data System (ADS)
Kovalev, Alexey A.; Pryadko, Leonid P.
2013-07-01
We introduce an ansatz for quantum codes which gives the hypergraph-product (generalized toric) codes by Tillich and Zémor and generalized bicycle codes by MacKay as limiting cases. The construction allows for both the lower and the upper bounds on the minimum distance; they scale as a square root of the block length. Many thus defined codes have a finite rate and limited-weight stabilizer generators, an analog of classical low-density parity-check (LDPC) codes. Compared to the hypergraph-product codes, hyperbicycle codes generally have a wider range of parameters; in particular, they can have a higher rate while preserving the estimated error threshold.
Fuzzy State Transition and Kalman Filter Applied in Short-Term Traffic Flow Forecasting
Deng, Ming-jun; Qu, Shi-ru
2015-01-01
Traffic flow is widely recognized as an important parameter for road traffic state forecasting. The fuzzy state transform and the Kalman filter (KF) have been applied in this field separately, but studies show that the former method performs well at forecasting the trend of traffic state variation while always involving several numerical errors, whereas the latter model is good at numerical forecasting but is deficient in expressing time hysteresis. This paper proposes an approach that combines the fuzzy state transform and the KF forecasting model. To exploit the advantages of the two models, a weighted combination model is proposed in which the combination weight is optimized dynamically by minimizing the sum of squared forecasting errors. Real detection data are used to test the efficiency. Results indicate that the method performs well for short-term traffic forecasting. PMID:26779258
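A minimal sketch of choosing the combination weight by minimizing the sum of squared forecasting errors over a history window is given below; the closed-form weight for combining two forecasts is a standard least-squares result used here for illustration and is not necessarily the paper's exact optimization.

```python
import numpy as np

def optimal_weight(y, f1, f2):
    """Weight w minimizing sum((y - (w*f1 + (1-w)*f2))**2) over a history window."""
    e1, e2 = y - f1, y - f2
    d = e1 - e2
    denom = np.sum(d ** 2)
    if denom == 0.0:
        return 0.5                       # the forecasts are identical; any weight works
    return float(np.sum(e2 * (e2 - e1)) / denom)

def combined_forecast(f1_next, f2_next, y_hist, f1_hist, f2_hist):
    """Combine the next two forecasts with a weight fitted on recent history."""
    w = optimal_weight(y_hist, f1_hist, f2_hist)
    return w * f1_next + (1.0 - w) * f2_next, w

# toy example with synthetic traffic-flow values (vehicles per interval)
rng = np.random.default_rng(11)
y = 100.0 + 10.0 * np.sin(np.arange(30) / 3.0)
f1 = y + rng.normal(0.0, 4.0, y.size)     # trend-following model, noisier
f2 = y + rng.normal(2.0, 2.0, y.size)     # numerical model, slightly biased
forecast, w = combined_forecast(f1[-1], f2[-1], y[:-1], f1[:-1], f2[:-1])
print(w, forecast, y[-1])
```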
Numerical Analysis of Modeling Based on Improved Elman Neural Network
Jie, Shao
2014-01-01
A model based on the improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with the memory effect. The hidden-layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared errors (SSE), varying with the number of hidden neurons and the iteration step, are studied to determine the number of hidden-layer neurons. Simulation results for the half-bridge class-D power amplifier (CDPA), with a two-tone signal and broadband signals as input, show that the proposed behavioral model can reconstruct the system of CDPAs accurately and depict the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance. PMID:25054172
Sum of top-hat transform based algorithm for vessel enhancement in MRA images
NASA Astrophysics Data System (ADS)
Ouazaa, Hibet-Allah; Jlassi, Hajer; Hamrouni, Kamel
2018-04-01
Magnetic resonance angiography (MRA) images are rich in information, but they suffer from poor contrast, uneven illumination, and noise. Enhancement is therefore required; however, significant information can be lost if improper techniques are applied. In this paper, we propose a new enhancement method. We first apply the CLAHE method to increase the contrast of the image, and then apply the sum of top-hat transforms, performed with a structuring element oriented at different angles, to increase the brightness of the vessels. The methodology is tested and evaluated on the publicly available BRAINIX database, using the mean square error (MSE), peak signal-to-noise ratio (PSNR), and signal-to-noise ratio (SNR) measures for the evaluation. The results demonstrate that the proposed method efficiently enhances image details and is comparable with state-of-the-art algorithms; hence, it could be broadly used in various applications.
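The oriented sum-of-top-hat step can be sketched with standard image-processing primitives as below; the line-shaped structuring elements, their length and angles, and the random stand-in image are illustrative assumptions, and the CLAHE step is omitted.

```python
import numpy as np
from scipy import ndimage

def line_footprint(length, angle_deg):
    """Binary line-shaped structuring element of a given (odd) length and orientation."""
    half = length // 2
    fp = np.zeros((length, length), dtype=bool)
    theta = np.deg2rad(angle_deg)
    for t in range(-half, half + 1):
        r = half + int(round(t * np.sin(theta)))
        c = half + int(round(t * np.cos(theta)))
        fp[r, c] = True
    return fp

def sum_of_tophats(image, length=9, angles=range(0, 180, 15)):
    """Sum of white top-hat transforms computed with line elements at several orientations."""
    image = image.astype(float)
    return sum(ndimage.white_tophat(image, footprint=line_footprint(length, a))
               for a in angles)

# toy usage on a random image standing in for an MRA slice
rng = np.random.default_rng(2)
slice_img = rng.random((128, 128))
enhanced = slice_img + sum_of_tophats(slice_img)      # brighten elongated bright structures
```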
ERIC Educational Resources Information Center
Wills, Herbert III
1989-01-01
Describes ways to make magic squares of 4 by 4 matrices. Presents two handouts: (1) Sets of 4 Numbers from 1 to 16 Whose Sum is 34; and (2) The Durer Square. Shows patterns which appeared in the magic squares, such as squares, chevrons, rhomboids, and trapezoids. (YP)
Testing the Concept of Quark-Hadron Duality with the ALEPH τ Decay Data
NASA Astrophysics Data System (ADS)
Magradze, B. A.
2010-12-01
We propose a modified procedure for extracting the numerical value of the strong coupling constant α_s from the τ lepton hadronic decay rate into non-strange particles in the vector channel. We employ the concept of quark-hadron duality, specifically introducing a boundary energy squared s_p > 0, the onset of the perturbative QCD continuum in Minkowski space (Bertlmann et al. in Nucl Phys B 250:61, 1985; de Rafael in An introduction to sum rules in QCD. In: Lectures at the Les Houches Summer School. arXiv:9802448 [hep-ph], 1997; Peris et al. in JHEP 9805:011, 1998). To approximate the hadronic spectral function in the region s > s_p, we use analytic perturbation theory (APT) up to the fifth order. A new feature of our procedure is that it enables us to extract from the data simultaneously the QCD scale parameter Λ_MS-bar and the boundary energy squared s_p. We carefully determine the experimental errors on these parameters, which come from the errors on the invariant-mass-squared distribution. For the MS-bar scheme coupling constant, we obtain α_s(m_τ²) = 0.3204 ± 0.0159_exp. We show that our numerical analysis is much more stable against higher-order corrections than the standard one. Additionally, we recalculate the "experimental" Adler function in the infrared region using the final ALEPH results. The uncertainty on this function is also determined.
Analysis of phase error effects in multishot diffusion-prepared turbo spin echo imaging
Cervantes, Barbara; Kooijman, Hendrik; Karampinos, Dimitrios C.
2017-01-01
Background To characterize the effect of phase errors on the magnitude and the phase of the diffusion-weighted (DW) signal acquired with diffusion-prepared turbo spin echo (dprep-TSE) sequences. Methods Motion and eddy currents were identified as the main sources of phase errors. An analytical expression for the effect of phase errors on the acquired signal was derived and verified using Bloch simulations, phantom, and in vivo experiments. Results Simulations and experiments showed that phase errors during the diffusion preparation cause both magnitude and phase modulation on the acquired data. When motion-induced phase error (MiPe) is accounted for (e.g., with motion-compensated diffusion encoding), the signal magnitude modulation due to the leftover eddy-current-induced phase error cannot be eliminated by the conventional phase cycling and sum-of-squares (SOS) method. By employing magnitude stabilizers, the phase-error-induced magnitude modulation, regardless of its cause, was removed but the phase modulation remained. The in vivo comparison between pulsed gradient and flow-compensated diffusion preparations showed that MiPe needed to be addressed in multi-shot dprep-TSE acquisitions employing magnitude stabilizers. Conclusions A comprehensive analysis of phase errors in dprep-TSE sequences showed that magnitude stabilizers are mandatory in removing the phase error induced magnitude modulation. Additionally, when multi-shot dprep-TSE is employed the inconsistent signal phase modulation across shots has to be resolved before shot-combination is performed. PMID:28516049
ERIC Educational Resources Information Center
Grima, Pere; Marco, Lluis
2008-01-01
This note presents two demonstrations of the known formula for the sum of squares of the first n natural numbers. One demonstration is based on geometrical considerations and the other one uses elementary integral calculus. Both demonstrations are very easy to understand, even for high school students, and may be good examples of how to explore…
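For reference, the closed-form expression the note refers to is the standard identity for the sum of the first n squares:

```latex
\sum_{k=1}^{n} k^{2} \;=\; \frac{n(n+1)(2n+1)}{6},
\qquad \text{e.g. } 1^2 + 2^2 + 3^2 + 4^2 = 30 = \frac{4 \cdot 5 \cdot 9}{6}.
```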
NASA Astrophysics Data System (ADS)
Damanik, Asan
2018-03-01
The neutrino mass sum-rule is a very important research subject from the theoretical side, because neutrino oscillation experiments only give us two squared-mass differences and three mixing angles. We review the neutrino mass sum-rules in the literature that have been reported by many authors and discuss their phenomenological implications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, Matthew Frederick; Owen, Kyle G.; Davidson, David F.
The purpose of this article is to explore the dependence of calculated postshock thermodynamic properties in shock tube experiments upon the vibrational state of the test gas and upon the uncertainties inherent to calculation inputs. This paper first offers a comparison between state variables calculated according to a Rankine–Hugoniot–equation-based algorithm, known as FROSH, and those derived from shock tube experiments on vibrationally nonequilibrated gases. It is shown that incorrect vibrational relaxation assumptions could lead to errors in temperature as large as 8% for 25% oxygen/argon mixtures at 3500 K. Following this demonstration, this article employs the algorithm to show the importance of correct vibrational equilibration assumptions, noting, for instance, that errors in temperature of up to about 2% at 3500 K may be generated for 10% nitrogen/argon mixtures if vibrational relaxation is not treated properly. Lastly, this article presents an extensive uncertainty analysis, showing that postshock temperatures can be calculated with root-of-sum-of-square errors of better than ±1% given sufficiently accurate experimentally measured input parameters.
Zhang, You; Ma, Jianhua; Iyengar, Puneeth; Zhong, Yuncheng; Wang, Jing
2017-01-01
Purpose Sequential same-patient CT images may involve deformation-induced and non-deformation-induced voxel intensity changes. An adaptive deformation recovery and intensity correction (ADRIC) technique was developed to improve the CT reconstruction accuracy, and to separate deformation from non-deformation-induced voxel intensity changes between sequential CT images. Materials and Methods ADRIC views the new CT volume as a deformation of a prior high-quality CT volume, but with additional non-deformation-induced voxel intensity changes. ADRIC first applies the 2D-3D deformation technique to recover the deformation field between the prior CT volume and the new, to-be-reconstructed CT volume. Using the deformation-recovered new CT volume, ADRIC further corrects the non-deformation-induced voxel intensity changes with an updated algebraic reconstruction technique (‘ART-dTV’). The resulting intensity-corrected new CT volume is subsequently fed back into the 2D-3D deformation process to further correct the residual deformation errors, which forms an iterative loop. By ADRIC, the deformation field and the non-deformation voxel intensity corrections are optimized separately and alternately to reconstruct the final CT. CT myocardial perfusion imaging scenarios were employed to evaluate the efficacy of ADRIC, using both simulated data of the extended-cardiac-torso (XCAT) digital phantom and experimentally acquired porcine data. The reconstruction accuracy of the ADRIC technique was compared to the technique using ART-dTV alone, and to the technique using 2D-3D deformation alone. The relative error metric and the universal quality index metric are calculated between the images for quantitative analysis. The relative error is defined as the square root of the sum of squared voxel intensity differences between the reconstructed volume and the ‘ground-truth’ volume, normalized by the square root of the sum of squared ‘ground-truth’ voxel intensities. In addition to the XCAT and porcine studies, a physical lung phantom measurement study was also conducted. Water-filled balloons with various shapes/volumes and concentrations of iodinated contrasts were put inside the phantom to simulate both deformations and non-deformation-induced intensity changes for ADRIC reconstruction. The ADRIC-solved deformations and intensity changes from limited-view projections were compared to those of the ‘gold-standard’ volumes reconstructed from fully-sampled projections. Results For the XCAT simulation study, the relative errors of the reconstructed CT volume by the 2D-3D deformation technique, the ART-dTV technique and the ADRIC technique were 14.64%, 19.21% and 11.90% respectively, by using 20 projections for reconstruction. Using 60 projections for reconstruction reduced the relative errors to 12.33%, 11.04% and 7.92% for the three techniques, respectively. For the porcine study, the corresponding results were 13.61%, 8.78%, 6.80% by using 20 projections; and 12.14%, 6.91% and 5.29% by using 60 projections. The ADRIC technique also demonstrated robustness to varying projection exposure levels. For the physical phantom study, the average DICE coefficient between the initial prior balloon volume and the new ‘gold-standard’ balloon volumes was 0.460. ADRIC reconstruction by 21 projections increased the average DICE coefficient to 0.954. Conclusion The ADRIC technique outperformed both the 2D-3D deformation technique and the ART-dTV technique in reconstruction accuracy. 
The alternately solved deformation field and non-deformation voxel intensity corrections can benefit multiple clinical applications, including tumor tracking, radiotherapy dose accumulation and treatment outcome analysis. PMID:28380247
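The relative error metric defined in the abstract can be written compactly as follows; the volumes are random arrays standing in for the reconstructed and 'ground-truth' CT data.

```python
import numpy as np

def relative_error(reconstructed, ground_truth):
    """Root of summed squared voxel differences, normalized by the root of summed squared ground-truth intensities."""
    diff = reconstructed.astype(float) - ground_truth.astype(float)
    return np.sqrt(np.sum(diff ** 2)) / np.sqrt(np.sum(ground_truth.astype(float) ** 2))

# toy volumes standing in for reconstructed and 'ground-truth' CT data
rng = np.random.default_rng(4)
truth = rng.random((64, 64, 32)) * 1000.0
recon = truth + rng.normal(0.0, 50.0, truth.shape)
print("relative error = %.2f%%" % (100.0 * relative_error(recon, truth)))
```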
Mathematical Construction of Magic Squares Utilizing Base-N Arithmetic
ERIC Educational Resources Information Center
O'Brien, Thomas D.
2006-01-01
Magic squares have been of interest as a source of recreation for over 4,500 years. A magic square consists of a square array of n² positive and distinct integers arranged so that the sum of any column, row, or main diagonal is the same. In particular, an array of consecutive integers from 1 to n² forming an n×n magic square is…
Oblinsky, Daniel G; Vanschouwen, Bryan M B; Gordon, Heather L; Rothstein, Stuart M
2009-12-14
Given the principal component analysis (PCA) of a molecular dynamics (MD) conformational trajectory for a model protein, we perform orthogonal Procrustean rotation to "best fit" the PCA squared-loading matrix to that of a target matrix computed for a related but different molecular system. The sum of squared deviations of the elements of the rotated matrix from those of the target, known as the error of fit (EOF), provides a quantitative measure of the dissimilarity between the two conformational samples. To estimate precision of the EOF, we perform bootstrap resampling of the molecular conformations within the trajectories, generating a distribution of EOF values for the system and target. The average EOF per variable is determined and visualized to ascertain where, locally, system and target sample properties differ. We illustrate this approach by analyzing MD trajectories for the wild-type and four selected mutants of the beta1 domain of protein G.
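The Procrustean rotation and error-of-fit (EOF) calculation can be sketched with a standard orthogonal Procrustes solver; the loading matrices below are random placeholders rather than PCA squared loadings from MD trajectories.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def procrustes_eof(system_loadings, target_loadings):
    """Rotate the system squared-loading matrix onto the target and return the error of fit (EOF)."""
    R, _ = orthogonal_procrustes(system_loadings, target_loadings)
    rotated = system_loadings @ R
    return np.sum((rotated - target_loadings) ** 2), rotated

# toy squared-loading matrices (rows: variables, columns: retained principal components)
rng = np.random.default_rng(8)
target = rng.random((60, 5))
system = target + 0.1 * rng.normal(size=target.shape)

eof, rotated = procrustes_eof(system, target)
eof_per_variable = np.sum((rotated - target) ** 2, axis=1)   # for visualizing where the samples differ locally
print(eof)
```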
Using absolute gravimeter data to determine vertical gravity gradients
Robertson, D.S.
2001-01-01
The position versus time data from a free-fall absolute gravimeter can be used to estimate the vertical gravity gradient in addition to the gravity value itself. Hipkin has reported success in estimating the vertical gradient value using a data set of unusually good quality. This paper explores techniques that may be applicable to a broader class of data that may be contaminated with "system response" errors of larger magnitude than were evident in the data used by Hipkin. This system response function is usually modelled as a sum of exponentially decaying sinusoidal components. The technique employed here involves combining the x0, v0 and g parameters from all the drops made during a site occupation into a single least-squares solution, and including the value of the vertical gradient and the coefficients of system response function in the same solution. The resulting non-linear equations must be solved iteratively and convergence presents some difficulties. Sparse matrix techniques are used to make the least-squares problem computationally tractable.
A Generalization of the Doubling Construction for Sums of Squares Identities
NASA Astrophysics Data System (ADS)
Zhang, Chi; Huang, Hua-Lin
2017-08-01
The doubling construction is a fast and important way to generate new solutions to the Hurwitz problem on sums of squares identities from any known ones. In this short note, we generalize the doubling construction and obtain from any given admissible triple [r, s, n] a series of new ones [r + ρ(2^{m-1}), 2^m s, 2^m n] for all positive integers m, where ρ is the Hurwitz-Radon function.
Protofit: A program for determining surface protonation constants from titration data
NASA Astrophysics Data System (ADS)
Turner, Benjamin F.; Fein, Jeremy B.
2006-11-01
Determining the surface protonation behavior of natural adsorbents is essential to understand how they interact with their environments. ProtoFit is a tool for analysis of acid-base titration data and optimization of surface protonation models. The program offers a number of useful features including: (1) enables visualization of adsorbent buffering behavior; (2) uses an optimization approach independent of starting titration conditions or initial surface charge; (3) does not require an initial surface charge to be defined or to be treated as an optimizable parameter; (4) includes an error analysis intrinsically as part of the computational methods; and (5) generates simulated titration curves for comparison with observation. ProtoFit will typically be run through ProtoFit-GUI, a graphical user interface providing user-friendly control of model optimization, simulation, and data visualization. ProtoFit calculates an adsorbent proton buffering value as a function of pH from raw titration data (including pH and volume of acid or base added). The data is reduced to a form where the protons required to change the pH of the solution are subtracted out, leaving protons exchanged between solution and surface per unit mass of adsorbent as a function of pH. The buffering intensity function Qads* is calculated as the instantaneous slope of this reduced titration curve. Parameters for a surface complexation model are obtained by minimizing the sum of squares between the modeled (i.e. simulated) buffering intensity curve and the experimental data. The variance in the slope estimate, intrinsically produced as part of the Qads* calculation, can be used to weight the sum of squares calculation between the measured buffering intensity and a simulated curve. Effects of analytical error on data visualization and model optimization are discussed. Examples are provided of using ProtoFit for data visualization, model optimization, and model evaluation.
Gu, Xinzhe; Wang, Zhenjie; Huang, Yangmin; Wei, Yingying; Zhang, Miaomiao; Tu, Kang
2015-01-01
This research aimed to develop a rapid and nondestructive method to model the growth and discrimination of spoilage fungi, like Botrytis cinerea, Rhizopus stolonifer and Colletotrichum acutatum, based on hyperspectral imaging system (HIS). A hyperspectral imaging system was used to measure the spectral response of fungi inoculated on potato dextrose agar plates and stored at 28°C and 85% RH. The fungi were analyzed every 12 h over two days during growth, and optimal simulation models were built based on HIS parameters. The results showed that the coefficients of determination (R2) of simulation models for testing datasets were 0.7223 to 0.9914, and the sum square error (SSE) and root mean square error (RMSE) were in a range of 2.03–53.40×10−4 and 0.011–0.756, respectively. The correlation coefficients between the HIS parameters and colony forming units of fungi were high from 0.887 to 0.957. In addition, fungi species was discriminated by partial least squares discrimination analysis (PLSDA), with the classification accuracy of 97.5% for the test dataset at 36 h. The application of this method in real food has been addressed through the analysis of Botrytis cinerea, Rhizopus stolonifer and Colletotrichum acutatum inoculated in peaches, demonstrating that the HIS technique was effective for simulation of fungal infection in real food. This paper supplied a new technique and useful information for further study into modeling the growth of fungi and detecting fruit spoilage caused by fungi based on HIS. PMID:26642054
Coherence analysis of a class of weighted networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; He, Jiaojiao; Zong, Yue; Ju, Tingting; Sun, Yu; Su, Weiyi
2018-04-01
This paper investigates consensus dynamics in a dynamical system with additive stochastic disturbances, characterized as network coherence by using the Laplacian spectrum. We introduce a class of weighted networks based on a complete graph and investigate the first- and second-order network coherence, quantified as the sum and the square sum of the reciprocals of all nonzero Laplacian eigenvalues. First, the recursive relationship between the Laplacian eigenvalues at two successive generations is deduced. Then, we compute the sum and the square sum of the reciprocals of all nonzero Laplacian eigenvalues. The obtained results show that the scalings of the first- and second-order coherence with network size obey four and five laws, respectively, depending on the range of the weight factor. Finally, it indicates that the scalings of our studied networks are smaller than those of other studied networks when 1/√d
NASA Technical Reports Server (NTRS)
Berk, A.; Temkin, A.
1985-01-01
A sum rule is derived for the auxiliary eigenvalues of an equation whose eigenspectrum pertains to projection operators which describe electron scattering from multielectron atoms and ions. The sum rule's right-hand side depends on an integral involving the target system eigenfunctions. The sum rule is checked for several approximations of the two-electron target. It is shown that target functions which have a unit eigenvalue in their auxiliary eigenspectrum do not give rise to well-defined projection operators except through a limiting process. For Hylleraas target approximations, the auxiliary equations are shown to contain an infinite spectrum. However, using a Rayleigh-Ritz variational principle, it is shown that a comparatively simple approximation can exhaust the sum rule to better than five significant figures. The auxiliary Hylleraas equation is greatly simplified by conversion to a square root equation containing the same eigenfunction spectrum and from which the required eigenvalues are trivially recovered by squaring.
Approximate N-Player Nonzero-Sum Game Solution for an Uncertain Continuous Nonlinear System.
Johnson, Marcus; Kamalapurkar, Rushikesh; Bhasin, Shubhendu; Dixon, Warren E
2015-08-01
An approximate online equilibrium solution is developed for an N -player nonzero-sum game subject to continuous-time nonlinear unknown dynamics and an infinite horizon quadratic cost. A novel actor-critic-identifier structure is used, wherein a robust dynamic neural network is used to asymptotically identify the uncertain system with additive disturbances, and a set of critic and actor NNs are used to approximate the value functions and equilibrium policies, respectively. The weight update laws for the actor neural networks (NNs) are generated using a gradient-descent method, and the critic NNs are generated by least square regression, which are both based on the modified Bellman error that is independent of the system dynamics. A Lyapunov-based stability analysis shows that uniformly ultimately bounded tracking is achieved, and a convergence analysis demonstrates that the approximate control policies converge to a neighborhood of the optimal solutions. The actor, critic, and identifier structures are implemented in real time continuously and simultaneously. Simulations on two and three player games illustrate the performance of the developed method.
Flash spectroscopy of purple membrane.
Xie, A H; Nagle, J F; Lozier, R H
1987-01-01
Flash spectroscopy data were obtained for purple membrane fragments at pH 5, 7, and 9 for seven temperatures from 5 degrees to 35 degrees C, at the magic angle for actinic versus measuring beam polarizations, at fifteen wavelengths from 380 to 700 nm, and for about five decades of time from 1 microsecond to completion of the photocycle. Signal-to-noise ratios are as high as 500. Systematic errors involving beam geometries, light scattering, absorption flattening, photoselection, temperature fluctuations, partial dark adaptation of the sample, unwanted actinic effects, and cooperativity were eliminated, compensated for, or are shown to be irrelevant for the conclusions. Using nonlinear least squares techniques, all data at one temperature and one pH were fitted to sums of exponential decays, which is the form required if the system obeys conventional first-order kinetics. The rate constants obtained have well behaved Arrhenius plots. Analysis of the residual errors of the fitting shows that seven exponentials are required to fit the data to the accuracy of the noise level. PMID:3580488
Texture metric that predicts target detection performance
NASA Astrophysics Data System (ADS)
Culpepper, Joanne B.
2015-12-01
Two texture metrics based on gray level co-occurrence error (GLCE) are used to predict probability of detection and mean search time. The two texture metrics are local clutter metrics and are based on the statistics of GLCE probability distributions. The degree of correlation between various clutter metrics and the target detection performance of the nine military vehicles in complex natural scenes found in the Search_2 dataset are presented. Comparison is also made between four other common clutter metrics found in the literature: root sum of squares, Doyle, statistical variance, and target structure similarity. The experimental results show that the GLCE energy metric is a better predictor of target detection performance when searching for targets in natural scenes than the other clutter metrics studied.
Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood
Bondell, Howard D.; Stefanski, Leonard A.
2013-01-01
Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove maximum attainable finite-sample replacement breakdown point, and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805
Zhang, Shengzhao; Li, Gang; Wang, Jiexi; Wang, Donggen; Han, Ying; Cao, Hui; Lin, Ling; Diao, Chunhong
2017-10-01
When an optical chopper is used to modulate the light source, the rotating speed of the wheel may vary with time and subsequently cause jitter of the modulation frequency. The amplitude calculated from the modulated signal would be distorted when the frequency fluctuations occur. To precisely calculate the amplitude of the modulated light flux, we proposed a method to estimate the range of the frequency fluctuation in the measurement of the spectrum and then extract the amplitude based on the sum of the power of the signal in the selected frequency range. Experiments were designed to test the feasibility of the proposed method, and the results showed a lower root mean square error than the conventional method.
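A hedged sketch of the idea in the abstract above (sum the spectral power over the band covering the chopper-frequency jitter, then convert back to an amplitude); the normalisation assumes a sinusoidal modulation and is not taken from the paper.

    import numpy as np

    def band_amplitude(signal, fs, f_lo, f_hi):
        # one-sided power spectrum; a sinusoid of amplitude A contributes ~A^2/2
        # summed over its (possibly leaked) bins inside [f_lo, f_hi]
        n = len(signal)
        spec = np.fft.rfft(signal - np.mean(signal))
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        power = 2.0 * np.abs(spec) ** 2 / n ** 2
        band = (freqs >= f_lo) & (freqs <= f_hi)
        return np.sqrt(2.0 * np.sum(power[band]))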
A note on the Drazin indices of square matrices.
Yu, Lijun; Bu, Tianyi; Zhou, Jiang
2014-01-01
For a square matrix A, the smallest nonnegative integer k such that rank(A^k) = rank(A^(k+1)) is called the Drazin index of A. In this paper, we give some results on the Drazin indices of sum and product of square matrices.
NASA Astrophysics Data System (ADS)
Wu, Kai; Shu, Hong; Nie, Lei; Jiao, Zhenhang
2018-01-01
Spatially correlated errors are typically ignored in data assimilation, thus degenerating the observation error covariance R to a diagonal matrix. We argue that a nondiagonal R carries more observation information making assimilation results more accurate. A method, denoted TC_Cov, was proposed for soil moisture data assimilation to estimate spatially correlated observation error covariance based on triple collocation (TC). Assimilation experiments were carried out to test the performance of TC_Cov. AMSR-E soil moisture was assimilated with a diagonal R matrix computed using the TC and assimilated using a nondiagonal R matrix, as estimated by proposed TC_Cov. The ensemble Kalman filter was considered as the assimilation method. Our assimilation results were validated against climate change initiative data and ground-based soil moisture measurements using the Pearson correlation coefficient and unbiased root mean square difference metrics. These experiments confirmed that deterioration of diagonal R assimilation results occurred when model simulation is more accurate than observation data. Furthermore, nondiagonal R achieved higher correlation coefficient and lower ubRMSD values over diagonal R in experiments and demonstrated the effectiveness of TC_Cov to estimate richly structuralized R in data assimilation. In sum, compared with diagonal R, nondiagonal R may relieve the detrimental effects of assimilation when simulated model results outperform observation data.
Four new topological indices based on the molecular path code.
Balaban, Alexandru T; Beteringhe, Adrian; Constantinescu, Titus; Filip, Petru A; Ivanciuc, Ovidiu
2007-01-01
The sequence of all paths p_i of lengths i = 1 to the maximum possible length in a hydrogen-depleted molecular graph (which sequence is also called the molecular path code) contains significant information on the molecular topology, and as such it is a reasonable choice to be selected as the basis of topological indices (TIs). Four new (or five partly new) TIs with progressively improved performance (judged by correctly reflecting branching, centricity, and cyclicity of graphs, ordering of alkanes, and low degeneracy) have been explored. (i) By summing the squares of all numbers in the sequence one obtains Σ_i p_i², and by dividing this sum by one plus the cyclomatic number μ, a Quadratic TI is obtained: Q = Σ_i p_i²/(μ+1). (ii) On summing the square roots of all numbers in the sequence one obtains Σ_i p_i^(1/2), and by dividing this sum by one plus the cyclomatic number, the TI denoted by S is obtained: S = Σ_i p_i^(1/2)/(μ+1). (iii) On dividing the terms in this sum by the corresponding topological distances, one obtains the Distance-reduced index D = Σ_i p_i^(1/2)/[i(μ+1)]. Two similar formulas define the next two indices, the first one with no square roots: (iv) the distance-Attenuated index A = Σ_i p_i/[i(μ+1)]; and (v) the last TI with two square roots, the Path-count index P = Σ_i p_i^(1/2)/[i^(1/2)(μ+1)]. These five TIs are compared for their degeneracy, ordering of alkanes, and performance in QSPR (for all alkanes with 3-12 carbon atoms and for all possible chemical cyclic or acyclic graphs with 4-6 carbon atoms) in correlations with six physical properties and one chemical property.
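The five indices follow directly from the path code; a short sketch (not the authors' software), with p[i-1] holding the number of paths of length i and mu the cyclomatic number:

    from math import sqrt

    def path_code_indices(p, mu=0):
        c = mu + 1
        Q = sum(x ** 2 for x in p) / c                                # quadratic index
        S = sum(sqrt(x) for x in p) / c                               # square-root index
        D = sum(sqrt(x) / (i * c) for i, x in enumerate(p, 1))        # distance-reduced
        A = sum(x / (i * c) for i, x in enumerate(p, 1))              # distance-attenuated
        P = sum(sqrt(x) / (sqrt(i) * c) for i, x in enumerate(p, 1))  # path-count index
        return Q, S, D, A, P

    # e.g. n-butane has path code p = [3, 2, 1] and mu = 0 (acyclic)
    print(path_code_indices([3, 2, 1]))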
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, Deukwoo; Little, Mark P.; Miller, Donald L.
Purpose: To determine more accurate regression formulas for estimating peak skin dose (PSD) from reference air kerma (RAK) or kerma-area product (KAP). Methods: After grouping of the data from 21 procedures into 13 clinically similar groups, assessments were made of optimal clustering using the Bayesian information criterion to obtain the optimal linear regressions of (log-transformed) PSD vs RAK, PSD vs KAP, and PSD vs RAK and KAP. Results: Three clusters of clinical groups were optimal in regression of PSD vs RAK, seven clusters of clinical groups were optimal in regression of PSD vs KAP, and six clusters of clinical groups were optimal in regression of PSD vs RAK and KAP. Prediction of PSD using both RAK and KAP is significantly better than prediction of PSD with either RAK or KAP alone. The regression of PSD vs RAK provided better predictions of PSD than the regression of PSD vs KAP. The partial-pooling (clustered) method yields smaller mean squared errors compared with the complete-pooling method. Conclusion: PSD distributions for interventional radiology procedures are log-normal. Estimates of PSD derived from RAK and KAP jointly are most accurate, followed closely by estimates derived from RAK alone. Estimates of PSD derived from KAP alone are the least accurate. Using a stochastic search approach, it is possible to cluster together certain dissimilar types of procedures to minimize the total error sum of squares.
Using Least Squares for Error Propagation
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2015-01-01
The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
Loaiza-Echeverri, A M; Bergmann, J A G; Toral, F L B; Osorio, J P; Carmo, A S; Mendonça, L F; Moustacas, V S; Henry, M
2013-03-15
The objective was to use various nonlinear models to describe scrotal circumference (SC) growth in Guzerat bulls on three farms in the state of Minas Gerais, Brazil. The nonlinear models were: Brody, Logistic, Gompertz, Richards, Von Bertalanffy, and Tanaka, where parameter A is the estimated testis size at maturity, B is the integration constant, k is a maturating index and, for the Richards and Tanaka models, m determines the inflection point. In Tanaka, A is an indefinite size of the testis, and B and k adjust the shape and inclination of the curve. A total of 7410 SC records were obtained every 3 months from 1034 bulls with ages varying between 2 and 69 months (<240 days of age = 159; 241-365 days = 451; 366-550 days = 1443; 551-730 days = 1705; and >731 days = 3652 SC measurements). Goodness of fit was evaluated by coefficients of determination (R(2)), error sum of squares, average prediction error (APE), and mean absolute deviation. The Richards model did not reach the convergence criterion. The R(2) were similar for all models (0.68-0.69). The error sum of squares was lowest for the Tanaka model. All models fit the SC data poorly in the early and late periods. Logistic was the model which best estimated SC in the early phase (based on APE and mean absolute deviation). The Tanaka and Logistic models had the lowest APE between 300 and 1600 days of age. The Logistic model was chosen for analysis of the environmental influence on parameters A and k. Based on absolute growth rate, SC increased from 0.019 cm/d, peaking at 0.025 cm/d between 318 and 435 days of age. Farm, year, and season of birth significantly affected size of adult SC and SC growth rate. An increase in SC adult size (parameter A) was accompanied by decreased SC growth rate (parameter k). In conclusion, SC growth in Guzerat bulls was characterized by an accelerated growth phase, followed by decreased growth; this was best represented by the Logistic model. The inflection point occurred at approximately 376 days of age (mean SC of 17.9 cm). We inferred that early selection of testicular size might result in smaller testes at maturity. Copyright © 2013 Elsevier Inc. All rights reserved.
Polar and singular value decomposition of 3×3 magic squares
NASA Astrophysics Data System (ADS)
Trenkler, Götz; Schmidt, Karsten; Trenkler, Dietrich
2013-07-01
In this note, we find polar as well as singular value decompositions of a 3×3 magic square, i.e. a 3×3 matrix M with real elements where each row, column and diagonal adds up to the magic sum s of the magic square.
Robust THP Transceiver Designs for Multiuser MIMO Downlink with Imperfect CSIT
NASA Astrophysics Data System (ADS)
Ubaidulla, P.; Chockalingam, A.
2009-12-01
We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.
NASA Technical Reports Server (NTRS)
Solomon, G.
1993-01-01
A (72,36;15) box code is constructed as a 9 x 8 matrix whose columns add to form an extended BCH-Hamming (8,4;4) code and whose rows sum to odd or even parity. The newly constructed code, due to its matrix form, is easily decodable for all seven-error and many eight-error patterns. The code comes from a slight modification in the parity (eighth) dimension of the Reed-Solomon (8,4;5) code over GF(512). Error correction uses the row sum parity information to detect errors, which then become erasures in a Reed-Solomon correction algorithm.
Li, Jizhou; Zhou, Yongjin; Zheng, Yong-Ping; Li, Guanglin
2015-08-01
Muscle force output is an essential index in rehabilitation assessment or physical exams, and could provide considerable insights for various applications such as load monitoring and muscle assessment in sports science or rehabilitation therapy. Besides direct measurement of force output using a dynamometer, electromyography has earlier been used in several studies to quantify muscle force as an indirect means. However, its spatial resolution is easily compromised as a summation of the action potentials from neighboring motor units of electrode site. To explore an alternative method to indirectly estimate the muscle force output, and with better muscle specificity, we started with an investigation on the relationship between architecture dynamics and force output of triceps surae. The muscular architecture dynamics is captured in ultrasonography sequences and estimated using a previously reported motion estimation method. Then an indicator named as the dorsoventrally averaged motion profile (DAMP) is employed. The performance of force output is represented by an instantaneous version of the rate of force development (RFD), namely I-RFD. From experimental results on ten normal subjects, there were significant correlations between the I-RFD and DAMP for triceps surae, both normalized between 0 and 1, with the sum of squares error at 0.0516±0.0224, R-square at 0.7929±0.0931 and root mean squared error at 0.0159±0.0033. The statistical significance results were less than 0.01. The present study suggested that muscle architecture dynamics extracted from ultrasonography during contraction is well correlated to the I-RFD and it can be a promising option for indirect estimation of muscle force output. Copyright © 2015 Elsevier B.V. All rights reserved.
On sufficient statistics of least-squares superposition of vector sets.
Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M
2015-06-01
The problem of superposition of two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant time) computation of superpositions (and sufficient statistics) of vector sets that are composed from its constituent vector sets under addition or deletion operation, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of the methods that commonly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
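A compact Python sketch (not the authors' C++ library) of the additive sufficient-statistics idea: each fragment stores its count, coordinate sums, squared norms, and the 3×3 cross-correlation matrix, two fragments combine in constant time, and the minimum sum-of-squares error is then recovered from an SVD in Kabsch style, which is an assumption about the exact algebra rather than a quote from the article.

    import numpy as np

    class SuperpositionStats:
        """Additive sufficient statistics for least-squares superposition of two
        corresponding (n x 3) vector sets; '+' combines fragments in constant time."""
        def __init__(self, X, Y):
            X, Y = np.asarray(X, float), np.asarray(Y, float)
            self.n = len(X)
            self.sx, self.sy = X.sum(axis=0), Y.sum(axis=0)
            self.sxx = float(np.sum(X * X))
            self.syy = float(np.sum(Y * Y))
            self.sxy = X.T @ Y                       # 3x3 cross-correlation

        def __add__(self, other):
            merged = object.__new__(SuperpositionStats)
            merged.n = self.n + other.n
            merged.sx, merged.sy = self.sx + other.sx, self.sy + other.sy
            merged.sxx, merged.syy = self.sxx + other.sxx, self.syy + other.syy
            merged.sxy = self.sxy + other.sxy
            return merged

        def min_sum_of_squares(self):
            # minimum residual over all rotations and translations (Kabsch-style)
            cx, cy = self.sx / self.n, self.sy / self.n
            C = self.sxy - self.n * np.outer(cx, cy)
            U, s, Vt = np.linalg.svd(C)
            sign = np.sign(np.linalg.det(U @ Vt))    # enforce a proper rotation
            e0 = (self.sxx - self.n * cx @ cx) + (self.syy - self.n * cy @ cy)
            return max(e0 - 2.0 * (s[0] + s[1] + sign * s[2]), 0.0)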
Generalized Skew Coefficients of Annual Peak Flows for Rural, Unregulated Streams in West Virginia
Atkins, John T.; Wiley, Jeffrey B.; Paybins, Katherine S.
2009-01-01
Generalized skew was determined from analysis of records from 147 streamflow-gaging stations in or near West Virginia. The analysis followed guidelines established by the Interagency Advisory Committee on Water Data described in Bulletin 17B, except that stations having 50 or more years of record were used instead of stations with the less restrictive recommendation of 25 or more years of record. The generalized-skew analysis included contouring, averaging, and regression of station skews. The best method was considered the one with the smallest mean square error (MSE). MSE is defined as the mean, over all peaks, of the squared difference between the logarithm (base 10) of an individual peak flow and the mean of all such logarithms. Contouring of station skews was the best method for determining generalized skew for West Virginia, with a MSE of about 0.2174. This MSE is an improvement over the MSE of about 0.3025 for the national map presented in Bulletin 17B.
Accurate motion parameter estimation for colonoscopy tracking using a regression method
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.
2010-03-01
Co-located optical and virtual colonoscopy images have the potential to provide important clinical information during routine colonoscopy procedures. In our earlier work, we presented an optical flow based algorithm to compute egomotion from live colonoscopy video, permitting navigation and visualization of the corresponding patient anatomy. In the original algorithm, motion parameters were estimated using the traditional Least Sum of squares(LS) procedure which can be unstable in the context of optical flow vectors with large errors. In the improved algorithm, we use the Least Median of Squares (LMS) method, a robust regression method for motion parameter estimation. Using the LMS method, we iteratively analyze and converge toward the main distribution of the flow vectors, while disregarding outliers. We show through three experiments the improvement in tracking results obtained using the LMS method, in comparison to the LS estimator. The first experiment demonstrates better spatial accuracy in positioning the virtual camera in the sigmoid colon. The second and third experiments demonstrate the robustness of this estimator, resulting in longer tracked sequences: from 300 to 1310 in the ascending colon, and 410 to 1316 in the transverse colon.
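A generic sketch of a Least Median of Squares estimator of the kind referred to above, not the authors' egomotion code: random minimal subsets are solved exactly, and the candidate whose squared residuals have the smallest median is kept, so outlying optical-flow constraints have little influence on the result.

    import numpy as np

    def lms_fit(A, b, n_trials=500, seed=None):
        # robustly solve A x ~ b by minimizing the median of squared residuals
        rng = np.random.default_rng(seed)
        m, p = A.shape
        best_x, best_med = None, np.inf
        for _ in range(n_trials):
            idx = rng.choice(m, size=p, replace=False)
            try:
                x = np.linalg.solve(A[idx], b[idx])   # exact fit to a minimal subset
            except np.linalg.LinAlgError:
                continue                              # degenerate subset, skip it
            med = np.median((A @ x - b) ** 2)
            if med < best_med:
                best_x, best_med = x, med
        return best_x, best_med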
On the Denesting of Nested Square Roots
ERIC Educational Resources Information Center
Gkioulekas, Eleftherios
2017-01-01
We present the basic theory of denesting nested square roots, from an elementary point of view, suitable for lower level coursework. Necessary and sufficient conditions are given for direct denesting, where the nested expression is rewritten as a sum of square roots of rational numbers, and for indirect denesting, where the nested expression is…
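A standard instance of direct denesting, stated here only to make the truncated description concrete (it is a classical identity, not quoted from the article): for rationals a, b, c ≥ 0 with a² − b²c = d² for some rational d ≥ 0,

\[
\sqrt{a + b\sqrt{c}} \;=\; \sqrt{\tfrac{a+d}{2}} + \sqrt{\tfrac{a-d}{2}},
\qquad\text{for example}\qquad
\sqrt{3 + 2\sqrt{2}} = 1 + \sqrt{2}.
\]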
Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures
2016-06-01
Keywords: inventory management improvement plan, mean of absolute scaled error, lead time adjusted squared error, forecast accuracy, benchmarking, naïve method... Abbreviations: ...Manager; JASA, Journal of the American Statistical Association; LASE, Lead-time Adjusted Squared Error; LCI, Life Cycle Indicator; MA, Moving Average; MAE...Mean Squared Error; NAVSUP, Naval Supply Systems Command; NDAA, National Defense Authorization Act; NIIN, National Individual Identification Number
Parameterization of cloud lidar backscattering profiles by means of asymmetrical Gaussians
NASA Astrophysics Data System (ADS)
del Guasta, Massimo; Morandi, Marco; Stefanutti, Leopoldo
1995-06-01
A fitting procedure for cloud lidar data processing is shown that is based on the computation of the first three moments of the vertical-backscattering (or -extinction) profile. Single-peak clouds or single cloud layers are approximated to asymmetrical Gaussians. The algorithm is particularly stable with respect to noise and processing errors, and it is much faster than the equivalent least-squares approach. Multilayer clouds can easily be treated as a sum of single asymmetrical Gaussian peaks. The method is suitable for cloud-shape parametrization in noisy lidar signatures (like those expected from satellite lidars). It also permits an improvement of cloud radiative-property computations that are based on huge lidar data sets for which storage and careful examination of single lidar profiles can't be carried out.
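One simple realization of the moments-based parametrization sketched above; the exact mapping from the first three moments to the peak parameters in the paper may differ, and this version uses the profile centroid and one-sided second moments as the asymmetric widths.

    import numpy as np

    def asymmetric_gaussian(z, beta):
        # z: range bins (assumed nearly uniform), beta: backscatter (or extinction) profile
        z = np.asarray(z, float)
        beta = np.clip(np.asarray(beta, float), 0.0, None)
        dz = float(np.mean(np.diff(z)))
        area = beta.sum() * dz                              # zeroth moment
        z0 = (z * beta).sum() / beta.sum()                  # first moment: peak position
        lo, hi = z <= z0, z >= z0
        sig_lo = np.sqrt(((z[lo] - z0) ** 2 * beta[lo]).sum() / beta[lo].sum())
        sig_hi = np.sqrt(((z[hi] - z0) ** 2 * beta[hi]).sum() / beta[hi].sum())
        amp = area / (np.sqrt(np.pi / 2.0) * (sig_lo + sig_hi))   # preserves the area
        sigma = np.where(z < z0, sig_lo, sig_hi)
        model = amp * np.exp(-(z - z0) ** 2 / (2.0 * sigma ** 2))
        return (amp, z0, sig_lo, sig_hi), model

Multilayer clouds can then be handled, as the abstract suggests, by applying this construction to each detected peak and summing the resulting asymmetric Gaussians.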
Flat-fielding of Solar Hα Observations Based on the Maximum Correntropy Criterion
NASA Astrophysics Data System (ADS)
Xu, Gao-Gui; Zheng, Sheng; Lin, Gang-Hua; Wang, Xiao-Fan
2016-08-01
The flat-field CCD calibration method of Kuhn et al. (KLL) is an efficient method for flat-fielding. However, since it depends on the minimum of the sum of squares error (SSE), its solution is sensitive to noise, especially non-Gaussian noise. In this paper, a new algorithm is proposed to determine the flat field. The idea is to change the criterion of gain estimate from SSE to the maximum correntropy. The result of a test on simulated data demonstrates that our method has a higher accuracy and a faster convergence than KLL’s and Chae’s. It has been found that the method effectively suppresses noise, especially in the case of typical non-Gaussian noise. And the computing time of our algorithm is the shortest.
Silva, Arnaldo F; Richter, Wagner E; Meneses, Helen G C; Bruns, Roy E
2014-11-14
Atomic charge transfer-counter polarization effects determine most of the infrared fundamental CH intensities of simple hydrocarbons, methane, ethylene, ethane, propyne, cyclopropane and allene. The quantum theory of atoms in molecules/charge-charge flux-dipole flux model predicted the values of 30 CH intensities ranging from 0 to 123 km mol(-1) with a root mean square (rms) error of only 4.2 km mol(-1) without including a specific equilibrium atomic charge term. Sums of the contributions from terms involving charge flux and/or dipole flux averaged 20.3 km mol(-1), about ten times larger than the average charge contribution of 2.0 km mol(-1). The only notable exceptions are the CH stretching and bending intensities of acetylene and two of the propyne vibrations for hydrogens bound to sp hybridized carbon atoms. Calculations were carried out at four quantum levels, MP2/6-311++G(3d,3p), MP2/cc-pVTZ, QCISD/6-311++G(3d,3p) and QCISD/cc-pVTZ. The results calculated at the QCISD level are the most accurate among the four with root mean square errors of 4.7 and 5.0 km mol(-1) for the 6-311++G(3d,3p) and cc-pVTZ basis sets. These values are close to the estimated aggregate experimental error of the hydrocarbon intensities, 4.0 km mol(-1). The atomic charge transfer-counter polarization effect is much larger than the charge effect for the results of all four quantum levels. Charge transfer-counter polarization effects are expected to also be important in vibrations of more polar molecules for which equilibrium charge contributions can be large.
Measurement System Characterization in the Presence of Measurement Errors
NASA Technical Reports Server (NTRS)
Commo, Sean A.
2012-01-01
In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
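The abstract does not give the modified-least-squares equations themselves; for context, a classical estimator that also works from a response-to-measurement error variance ratio is Deming regression, sketched here for a straight-line model. It is a related technique, not the method proposed in the paper.

    import numpy as np

    def deming_fit(x, y, var_ratio):
        # var_ratio = (variance of errors in y) / (variance of errors in x)
        x, y = np.asarray(x, float), np.asarray(y, float)
        xm, ym = x.mean(), y.mean()
        sxx = np.mean((x - xm) ** 2)
        syy = np.mean((y - ym) ** 2)
        sxy = np.mean((x - xm) * (y - ym))
        d = syy - var_ratio * sxx
        slope = (d + np.sqrt(d * d + 4.0 * var_ratio * sxy ** 2)) / (2.0 * sxy)
        return slope, ym - slope * xm                 # (slope, intercept)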
A study on the achievable data rate in massive MIMO system
NASA Astrophysics Data System (ADS)
Salh, Adeeb; Audah, Lukman; Shah, Nor Shahida M.; Hamzah, Shipun A.
2017-09-01
The achievable high data rates depend on the capability of massive multi-input multi-output (MIMO) in fifth-generation (5G) cellular networks, where massive MIMO systems can support very high energy and spectral efficiencies. A major challenge in mobile broadband networks is how to support the required throughput in future 5G systems, which are expected to provide high-speed internet for every user. The performance of the massive MIMO system is analyzed for linear minimum mean square error (MMSE), zero forcing (ZF) and maximum ratio transmission (MRT) processing as the number of antennas increases to infinity, by deriving closed-form approximations for the achievable data rate expressions. Meanwhile, interference at high signal-to-noise ratio (SNR) can be mitigated by using MMSE, ZF and MRT, which suppress the inter-cell interference signals between neighboring cells. The achievable sum rate for MMSE is improved by accounting for the distribution of users inside the cell, mitigating the inter-cell interference caused when other cells transmit the same signal. By contrast, with perfect channel state information (CSI), MMSE is better than ZF by approximately 20% in achievable sum rate.
A Fourier method for the analysis of exponential decay curves.
Provencher, S W
1976-01-01
A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "base line," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions. The Fourier method's usual serious practical limitation of needing high accuracy data over a very wide range is eliminated by the introduction of convergence parameters and a Gaussian taper window. A computer program is described for the analysis of discrete spectra, where the data involves only a sum of exponentials. The program is completely automatic in that the only necessary inputs are the raw data (not necessarily in equal intervals of time); no potentially biased initial guesses concerning either the number or the values of the components are needed. The outputs include the number of components, the amplitudes and time constants together with their estimated errors, and a spectral plot of the solution. The limiting resolving power of the method is studied by analyzing a wide range of simulated two-, three-, and four-component data. The results seem to indicate that the method is applicable over a considerably wider range of conditions than nonlinear least squares or the method of moments.
Zhu, Yuanheng; Zhao, Dongbin; Li, Xiangjun
2017-03-01
H ∞ control is a powerful method to solve the disturbance attenuation problems that occur in some control systems. The design of such controllers relies on solving the zero-sum game (ZSG). But in practical applications, the exact dynamics is mostly unknown. Identification of dynamics also produces errors that are detrimental to the control performance. To overcome this problem, an iterative adaptive dynamic programming algorithm is proposed in this paper to solve the continuous-time, unknown nonlinear ZSG with only online data. A model-free approach to the Hamilton-Jacobi-Isaacs equation is developed based on the policy iteration method. Control and disturbance policies and value are approximated by neural networks (NNs) under the critic-actor-disturber structure. The NN weights are solved by the least-squares method. According to the theoretical analysis, our algorithm is equivalent to a Gauss-Newton method solving an optimization problem, and it converges uniformly to the optimal solution. The online data can also be used repeatedly, which is highly efficient. Simulation results demonstrate its feasibility to solve the unknown nonlinear ZSG. When compared with other algorithms, it saves a significant amount of online measurement time.
1984-05-01
Control ignored any error of 1/10th degree or less. This was done by setting the error term E and the integral sum PREINT to zero. If the absolute value of... [assembly listing fragment: signs of two errors — jeq tdiff (if equal, jump); clr @preint (else zero the integral sum); tdiff: mov @diff,r1 (fetch absolute value of OAT−RAT); ci r1,25 (is...)] ...includes a heating coil and thermostatic control to maintain the air in this path at an elevated temperature, typically around 80 degrees Fahrenheit (80 F
On-orbit observations of single event upset in Harris HM-6508 1K RAMs, reissue A
NASA Astrophysics Data System (ADS)
Blake, J. B.; Mandel, R.
1987-02-01
The Harris HM-6508 1K x 1 RAMs are part of a subsystem of a satellite in a low, polar orbit. The memory module, used in the subsystem containing the RAMs, consists of three printed circuit cards, with each card containing eight 2K byte memory hybrids, for a total of 48K bytes. Each memory hybrid contains 16 HM-6508 RAM chips. On a regular basis all but 256 bytes of the 48K bytes are examined for bit errors. Two different techniques were used for detecting bit errors. The first technique, a memory check sum, was capable of automatically detecting all single bit and some double bit errors which occurred within a page of memory. A memory page consists of 256 bytes. Memory check sum tests are performed approximately every 90 minutes. To detect a multiple error or to determine the exact location of the bit error within the page the entire contents of the memory is dumped and compared to the load file. Memory dumps are normally performed once a month, or immediately after the check sum routine detects an error. Once the exact location of the error is found, the correct value is reloaded into memory. After the memory is reloaded, the contents of the memory location in question is verified in order to determine if the error was a soft error generated by an SEU or a hard error generated by a part failure or cosmic-ray induced latchup.
On the Partitioning of Squared Euclidean Distance and Its Applications in Cluster Analysis.
ERIC Educational Resources Information Center
Carter, Randy L.; And Others
1989-01-01
The partitioning of squared Euclidean--E(sup 2)--distance between two vectors in M-dimensional space into the sum of squared lengths of vectors in mutually orthogonal subspaces is discussed. Applications to specific cluster analysis problems are provided (i.e., to design Monte Carlo studies for performance comparisons of several clustering methods…
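The identity behind that partitioning, written out for context (elementary linear algebra, not quoted from the paper): if P_1, …, P_K are orthogonal projectors onto mutually orthogonal subspaces whose direct sum is R^M, then for any x, y in R^M

\[
\lVert x - y \rVert^{2} \;=\; \sum_{k=1}^{K} \lVert P_{k}x - P_{k}y \rVert^{2},
\qquad \text{since } \sum_{k=1}^{K} P_{k} = I \ \text{ and } \ P_{k}P_{l} = 0 \ (k \neq l).
\]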
Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis
NASA Technical Reports Server (NTRS)
Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl
2009-01-01
The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.
Fang, Fang; Ni, Bing-Jie; Yu, Han-Qing
2009-06-01
In this study, weighted non-linear least-squares analysis and accelerating genetic algorithm are integrated to estimate the kinetic parameters of substrate consumption and storage product formation of activated sludge. A storage product formation equation is developed and used to construct the objective function for the determination of its production kinetics. The weighted least-squares analysis is employed to calculate the differences in the storage product concentration between the model predictions and the experimental data as the sum of squared weighted errors. The kinetic parameters for the substrate consumption and the storage product formation are estimated to be the maximum heterotrophic growth rate of 0.121/h, the yield coefficient of 0.44 mg CODX/mg CODS (COD, chemical oxygen demand) and the substrate half saturation constant of 16.9 mg/L, respectively, by minimizing the objective function using a real-coding-based accelerating genetic algorithm. Also, the fraction of substrate electrons diverted to the storage product formation is estimated to be 0.43 mg CODSTO/mg CODS. The validity of our approach is confirmed by the results of independent tests and the kinetic parameter values reported in literature, suggesting that this approach could be useful to evaluate the product formation kinetics of mixed cultures like activated sludge. More importantly, as this integrated approach could estimate the kinetic parameters rapidly and accurately, it could be applied to other biological processes.
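A schematic of the estimation loop described above, not the authors' code: the objective is the sum of squared weighted differences between measured and simulated storage-product concentrations, and scipy's differential evolution stands in here for the real-coded accelerating genetic algorithm; storage_model is a hypothetical simulator of the kinetic equations.

    import numpy as np
    from scipy.optimize import differential_evolution

    def weighted_sse(params, t, observed, weights):
        # storage_model(params, t) is a hypothetical stand-in for the kinetic simulator
        predicted = storage_model(params, t)
        return np.sum(weights * (observed - predicted) ** 2)

    # rough bounds for mu_max (1/h), yield (mg COD/mg COD) and Ks (mg/L), for illustration only
    bounds = [(0.01, 1.0), (0.1, 0.9), (1.0, 100.0)]
    # result = differential_evolution(weighted_sse, bounds, args=(t, observed, weights))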
Inertial sensor-based smoother for gait analysis.
Suh, Young Soo
2014-12-17
An off-line smoother algorithm is proposed to estimate foot motion using an inertial sensor unit (three-axis gyroscopes and accelerometers) attached to a shoe. The smoother gives more accurate foot motion estimation than filter-based algorithms by using all of the sensor data instead of using the current sensor data. The algorithm consists of two parts. In the first part, a Kalman filter is used to obtain initial foot motion estimation. In the second part, the error in the initial estimation is compensated using a smoother, where the problem is formulated in the quadratic optimization problem. An efficient solution of the quadratic optimization problem is given using the sparse structure. Through experiments, it is shown that the proposed algorithm can estimate foot motion more accurately than a filter-based algorithm with reasonable computation time. In particular, there is significant improvement in the foot motion estimation when the foot is moving off the floor: the z-axis position error squared sum (total time: 3.47 s) when the foot is in the air is 0.0807 m2 (Kalman filter) and 0.0020 m2 (the proposed smoother).
Neural Modeling of Fuzzy Controllers for Maximum Power Point Tracking in Photovoltaic Energy Systems
NASA Astrophysics Data System (ADS)
Lopez-Guede, Jose Manuel; Ramos-Hernanz, Josean; Altın, Necmi; Ozdemir, Saban; Kurt, Erol; Azkune, Gorka
2018-06-01
One field in which electronic materials have an important role is energy generation, especially within the scope of photovoltaic energy. This paper deals with one of the most relevant enabling technologies within that scope, i.e., the algorithms for maximum power point tracking implemented in direct current to direct current converters, and their modeling through artificial neural networks (ANNs). More specifically, as a proof of concept, we have addressed the problem of modeling a fuzzy logic controller that has shown its performance in previous works, and more specifically the dimensionless duty cycle signal that controls a quadratic boost converter. We achieved a very accurate model, since the obtained mean squared error is 3.47 × 10⁻⁶, the maximum error is 16.32 × 10⁻³ and the regression coefficient R is 0.99992, all for the test dataset. This neural implementation has obvious advantages such as higher fault tolerance and a simpler implementation, dispensing with all the complex elements needed to run a fuzzy controller (fuzzifier, defuzzifier, inference engine and knowledge base) because, ultimately, ANNs are sums and products.
Multivariate Welch t-test on distances.
Alekseyenko, Alexander V
2016-12-01
Permutational non-Euclidean analysis of variance, PERMANOVA, is routinely used in exploratory analysis of multivariate datasets to draw conclusions about the significance of patterns visualized through dimension reduction. This method recognizes that the pairwise distance matrix between observations is sufficient to compute within and between group sums of squares necessary to form the (pseudo) F statistic. Moreover, not only Euclidean, but arbitrary distances can be used. This method, however, suffers from loss of power and type I error inflation in the presence of heteroscedasticity and sample size imbalances. We develop a solution in the form of a distance-based Welch t-test, TW2, for two-sample, potentially unbalanced and heteroscedastic data. We demonstrate empirically the desirable type I error and power characteristics of the new test. We compare the performance of PERMANOVA and TW2 in reanalysis of two existing microbiome datasets, where the methodology has originated. The source code for methods and analysis of this article is available at https://github.com/alekseyenko/Tw2. Further guidance on application of these methods can be obtained from the author: alekseye@musc.edu. © The Author 2016. Published by Oxford University Press.
Control techniques to improve Space Shuttle solid rocket booster separation
NASA Technical Reports Server (NTRS)
Tomlin, D. D.
1983-01-01
The present Space Shuttle's control system does not prevent the Orbiter's main engines from being in gimbal positions that are adverse to solid rocket booster separation. By eliminating the attitude error and attitude rate feedback just prior to solid rocket booster separation, the detrimental effects of the Orbiter's main engines can be reduced. In addition, if angular acceleration feedback is applied, the gimbal torques produced by the Orbiter's engines can reduce the detrimental effects of the aerodynamic torques. This paper develops these control techniques and compares the separation capability of the developed control systems. Currently, with the worst case initial conditions and each Shuttle system dispersion aligned in the worst direction (which is more conservative than will be experienced in flight), the solid rocket booster has an interference with the Shuttle's external tank of 30 in. Elimination of the attitude error and attitude rate feedback reduces that interference to 19 in. Substitution of angular acceleration feedback reduces the interference to 6 in. The two latter interferences can be eliminated by less conservative analysis techniques, that is, by using a root sum square of the system dispersions.
AKLSQF - LEAST SQUARES CURVE FITTING
NASA Technical Reports Server (NTRS)
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least square fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Sterling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fitting up to a 100 degree polynomial. All computations in the program are carried out under Double Precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
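A minimal modern sketch of the same workflow (raise the polynomial degree until the least-squares error meets the user's tolerance); it uses ordinary NumPy polynomial fitting rather than the program's orthogonal factorial polynomials and Stirling-number reduction, so it is an illustration, not a port.

    import numpy as np

    def fit_to_tolerance(x, y, tol, max_degree=100):
        # increase the degree until the sum of squared residuals drops below tol
        for degree in range(1, max_degree + 1):
            coeffs = np.polyfit(x, y, degree)
            err = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
            if err <= tol:
                break
        return coeffs, err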
1980-08-01
variable is denoted by Ȳ, the total sum of squares of deviations from that mean is defined by SSTO = Σ_{i=1}^{n} (Yi − Ȳ)² (2.6), and the regression sum of...squares by SSR = SSTO − SSE (2.7). A selection criterion is a rule according to which a certain model out of the 2^p possible models is labeled "best...discussed next. 1. The R² Criterion. The coefficient of determination is defined by R² = 1 − SSE/SSTO (2.8). It is clear that R² is the proportion of
The Weighted-Average Lagged Ensemble.
DelSole, T; Trenary, L; Tippett, M K
2017-11-01
A lagged ensemble is an ensemble of forecasts from the same model initialized at different times but verifying at the same time. The skill of a lagged ensemble mean can be improved by assigning weights to different forecasts in such a way as to maximize skill. If the forecasts are bias corrected, then an unbiased weighted lagged ensemble requires the weights to sum to one. Such a scheme is called a weighted-average lagged ensemble. In the limit of uncorrelated errors, the optimal weights are positive and decay monotonically with lead time, so that the least skillful forecasts have the least weight. In more realistic applications, the optimal weights do not always behave this way. This paper presents a series of analytic examples designed to illuminate conditions under which the weights of an optimal weighted-average lagged ensemble become negative or depend nonmonotonically on lead time. It is shown that negative weights are most likely to occur when the errors grow rapidly and are highly correlated across lead time. The weights are most likely to behave nonmonotonically when the mean square error is approximately constant over the range forecasts included in the lagged ensemble. An extreme example of the latter behavior is presented in which the optimal weights vanish everywhere except at the shortest and longest lead times.
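For reference, the textbook minimum-mean-square-error solution under the sum-to-one constraint, assuming the forecast error covariance matrix C across lead times has already been estimated from bias-corrected forecasts; this is the standard constrained least-squares result, not necessarily the exact estimator used in the paper.

    import numpy as np

    def lagged_ensemble_weights(C):
        # minimize w' C w subject to sum(w) = 1; negative weights are allowed
        ones = np.ones(C.shape[0])
        a = np.linalg.solve(C, ones)
        return a / (ones @ a)

When the errors are uncorrelated, C is diagonal and the weights reduce to inverse-variance weights that decay as the error grows with lead time, matching the limiting behaviour described above.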
A simplex method for the orbit determination of maneuvering satellites
NASA Astrophysics Data System (ADS)
Chen, JianRong; Li, JunFeng; Wang, XiJing; Zhu, Jun; Wang, DanNa
2018-02-01
A simplex method of orbit determination (SMOD) is presented to solve the problem of orbit determination for maneuvering satellites subject to small and continuous thrust. The objective function is established as the sum of the nth powers of the observation errors based on global positioning satellite (GPS) data. The convergence behavior of the proposed method is analyzed using a range of initial orbital parameter errors and n values to ensure the rapid and accurate convergence of the SMOD. For an uncontrolled satellite, the orbit obtained by the SMOD provides a position error compared with GPS data that is commensurate with that obtained by the least squares technique. For low Earth orbit satellite control, the precision of the acceleration produced by a small pulse thrust is less than 0.1% compared with the calibrated value. The orbit obtained by the SMOD is also compared with weak GPS data for a geostationary Earth orbit satellite over several days. The results show that the position accuracy is within 12.0 m. The working efficiency of the electric propulsion is about 67% compared with the designed value. The analyses provide guidance for subsequent satellite control. The method is suitable for orbit determination of maneuvering satellites subject to small and continuous thrust.
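The general shape of such an objective, the sum of the nth powers of the residuals minimized with a simplex (Nelder-Mead) search, can be sketched as below. This is not the SMOD implementation: the propagator here is a trivial linear stand-in rather than an orbital dynamics model, and all names and data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def predict_positions(params, times):
    # Stand-in dynamics: position = p0 + v * t (a real orbit propagator would go here).
    p0, v = params
    return p0 + v * times

def objective(params, times, observed, n=2):
    # Sum of the n-th powers of the observation residuals.
    residuals = observed - predict_positions(params, times)
    return np.sum(np.abs(residuals) ** n)

times = np.linspace(0.0, 10.0, 50)
observed = 7.0 + 1.2 * times + np.random.default_rng(1).normal(0.0, 0.1, times.size)
result = minimize(objective, x0=[0.0, 0.0], args=(times, observed, 2),
                  method="Nelder-Mead")   # simplex search
print(result.x)                           # close to the true parameters (7.0, 1.2)
```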
Least-Squares Analysis of Data with Uncertainty in "y" and "x": Algorithms in Excel and KaleidaGraph
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2018-01-01
For the least-squares analysis of data having multiple uncertain variables, the generally accepted best solution comes from minimizing the sum of weighted squared residuals over all uncertain variables, with, for example, weights in x_i taken as inversely proportional to the variance δ_xi^2…
On squares of representations of compact Lie algebras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeier, Robert, E-mail: robert.zeier@ch.tum.de; Zimborás, Zoltán, E-mail: zimboras@gmail.com
We study how tensor products of representations decompose when restricted from a compact Lie algebra to one of its subalgebras. In particular, we are interested in tensor squares which are tensor products of a representation with itself. We show in a classification-free manner that the sum of multiplicities and the sum of squares of multiplicities in the corresponding decomposition of a tensor square into irreducible representations have to strictly grow when restricted from a compact semisimple Lie algebra to a proper subalgebra. For this purpose, relevant details on tensor products of representations are compiled from the literature. Since the sum of squares of multiplicities is equal to the dimension of the commutant of the tensor-square representation, it can be determined by linear-algebra computations in a scenario where an a priori unknown Lie algebra is given by a set of generators which might not be a linear basis. Hence, our results offer a test to decide if a subalgebra of a compact semisimple Lie algebra is a proper one without calculating the relevant Lie closures, which can be naturally applied in the field of controlled quantum systems.
An analysis of the least-squares problem for the DSN systematic pointing error model
NASA Technical Reports Server (NTRS)
Alvarez, L. S.
1991-01-01
A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
Spacecraft attitude determination using a second-order nonlinear filter
NASA Technical Reports Server (NTRS)
Vathsal, S.
1987-01-01
The stringent attitude determination accuracy and faster slew maneuver requirements demanded by present-day spacecraft control systems motivate the development of recursive nonlinear filters for attitude estimation. This paper presents the second-order filter development for the estimation of attitude quaternion using three-axis gyro and star tracker measurement data. Performance comparisons have been made by computer simulation of system models and filter mechanization. It is shown that the second-order filter consistently performs better than the extended Kalman filter when the performance index of the root sum square estimation error of the quaternion vector is compared. The second-order filter identifies the gyro drift rates faster than the extended Kalman filter. The uniqueness of this algorithm is the online generation of the time-varying process and measurement noise covariance matrices, derived as a function of the process and measurement nonlinearity, respectively.
Kim, Stephanie; Eliot, Melissa; Koestler, Devin C; Houseman, Eugene A; Wetmur, James G; Wiencke, John K; Kelsey, Karl T
2016-09-01
We examined whether variation in blood-based epigenome-wide association studies could be more completely explained by augmenting existing reference DNA methylation libraries. We compared existing and enhanced libraries in predicting variability in three publicly available 450K methylation datasets that collected whole-blood samples. Models were fit separately to each CpG site and used to estimate the additional variability when adjustments for cell composition were made with each library. Calculation of the mean difference in the CpG-specific residual sums of squares error between models for an arthritis, aging and metabolic syndrome dataset indicated that an enhanced library explained significantly more variation across all three datasets (p < 10^-3). Pathologically important immune cell subtypes can explain important variability in epigenome-wide association studies done in blood.
The Approximation of Two-Mode Proximity Matrices by Sums of Order-Constrained Matrices.
ERIC Educational Resources Information Center
Hubert, Lawrence; Arabie, Phipps
1995-01-01
A least-squares strategy is proposed for representing a two-mode proximity matrix as an approximate sum of a small number of matrices that satisfy certain simple order constraints on their entries. The primary class of constraints considered defines Q-forms for particular conditions in a two-mode matrix. (SLD)
On the complexity of some quadratic Euclidean 2-clustering problems
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Pyatkin, A. V.
2016-03-01
Some problems of partitioning a finite set of points of Euclidean space into two clusters are considered. In these problems, the following criteria are minimized: (1) the sum over both clusters of the sums of squared pairwise distances between the elements of the cluster and (2) the sum of the (multiplied by the cardinalities of the clusters) sums of squared distances from the elements of the cluster to its geometric center, where the geometric center (or centroid) of a cluster is defined as the mean value of the elements in that cluster. Additionally, another problem close to (2) is considered, where the desired center of one of the clusters is given as input, while the center of the other cluster is unknown (is the variable to be optimized) as in problem (2). Two variants of the problems are analyzed, in which the cardinalities of the clusters are (1) parts of the input or (2) optimization variables. It is proved that all the considered problems are strongly NP-hard and that, in general, there is no fully polynomial-time approximation scheme for them (unless P = NP).
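Criterion (2) above, the cardinality-weighted within-cluster sum of squared distances to the centroid, is easy to state in code. The sketch below is only an illustration of the criterion for two given clusters (the data points are invented); it does not address the NP-hard partitioning problem itself.

```python
import numpy as np

def criterion_two(cluster_a, cluster_b):
    """Sum over both clusters of |cluster| * sum of squared distances to the centroid."""
    total = 0.0
    for cluster in (cluster_a, cluster_b):
        centroid = cluster.mean(axis=0)                    # geometric center
        sq_dists = np.sum((cluster - centroid) ** 2, axis=1)
        total += len(cluster) * np.sum(sq_dists)
    return total

a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([[5.0, 5.0], [6.0, 5.0]])
print(criterion_two(a, b))
```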
Zhuo, Lin; Tao, Hong; Wei, Hong; Chengzhen, Wu
2016-01-01
We tried to establish compatible carbon content models of individual trees for a Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) plantation from Fujian province in southeast China. In general, compatibility requires that the sum of components equal the whole tree, meaning that the sum of percentages calculated from component equations should equal 100%. Thus, we used multiple approaches to simulate carbon content in boles, branches, foliage leaves, roots and the whole individual trees. The approaches included (i) single optimal fitting (SOF), (ii) nonlinear adjustment in proportion (NAP) and (iii) nonlinear seemingly unrelated regression (NSUR). These approaches were used in combination with variables relating diameter at breast height (D) and tree height (H), such as D, D2H, DH and D&H (where D&H means two separate variables in a bivariate model). Power, exponential and polynomial functions were tested, and a new general function model was proposed in this study. Weighted least squares regression models were employed to eliminate heteroscedasticity. Model performances were evaluated by using mean residuals, residual variance, mean square error and the determination coefficient. The results indicated that models with two-dimensional variables (DH, D2H and D&H) were always superior to those with a single variable (D). The D&H variable combination was found to be the most useful predictor. Of all the approaches, SOF could establish a single optimal model separately, but there were deviations in estimating results due to existing incompatibilities, while NAP and NSUR could ensure prediction compatibility. Simultaneously, we found that the new general model had better accuracy than the others. In conclusion, we recommend that the new general model be used to estimate carbon content for Chinese fir and considered for other vegetation types as well. PMID:26982054
Prediction of Maximal Oxygen Uptake by Six-Minute Walk Test and Body Mass Index in Healthy Boys.
Jalili, Majid; Nazem, Farzad; Sazvar, Akbar; Ranjbar, Kamal
2018-05-14
To develop an equation to predict maximal oxygen uptake (VO2max) based on the 6-minute walk test (6MWT) and body composition in healthy boys. Direct VO2max, 6-minute walk distance, and anthropometric characteristics were measured in 349 healthy boys (12.49 ± 2.72 years). Multiple regression analysis was used to generate VO2max prediction equations. Cross-validation of the VO2max prediction equations was assessed with predicted residual sum of squares statistics. Pearson correlation was used to assess the correlation between measured and predicted VO2max. Objectively measured VO2max had a significant correlation with demographic and 6MWT characteristics (R = 0.11-0.723, P < .01). Multiple regression analysis revealed the following VO2max prediction equation: VO2max (mL/kg/min) = 12.701 + (0.06 × 6-minute walk distance [m]) − (0.732 × body mass index [kg/m²]) (R² = 0.79, standard error of the estimate [SEE] = 2.91 mL/kg/min, %SEE = 6.9%). There was strong correlation between measured and predicted VO2max (r = 0.875, P < .001). Cross-validation revealed minimal shrinkage (R²p = 0.78 and predicted residual sum of squares SEE = 2.99 mL/kg/min). This study provides a relatively accurate and convenient VO2max prediction equation based on the 6MWT and body mass index in healthy boys. This model can be used for evaluation of cardiorespiratory fitness of boys in different settings. Copyright © 2018 Elsevier Inc. All rights reserved.
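The published prediction equation can be expressed directly as a small function; the example values below are my own, and the assumed units are distance in metres, BMI in kg/m², and VO2max in mL/kg/min.

```python
def predict_vo2max(walk_distance_m: float, bmi: float) -> float:
    """VO2max prediction from 6-minute walk distance and body mass index."""
    return 12.701 + 0.06 * walk_distance_m - 0.732 * bmi

print(predict_vo2max(600.0, 20.0))  # e.g. 600 m walked at BMI 20 -> about 34.06 mL/kg/min
```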
Least-Squares Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Kantak, Anil V.
1990-01-01
Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler.
A new adaptive multiple modelling approach for non-linear and non-stationary systems
NASA Astrophysics Data System (ADS)
Chen, Hao; Gong, Yu; Hong, Xia
2016-07-01
This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window, and apply the sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is the best. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
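The per-sub-model weight adaptation relies on the standard recursive least squares update. Below is a generic single-step sketch; the forgetting factor, variable names, and the toy data stream are my assumptions, not the paper's notation.

```python
import numpy as np

def rls_update(w, P, x, y, lam=0.99):
    """One recursive least squares step for a linear model y ~= w.x."""
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = y - w @ x                    # a priori prediction error
    w = w + k * e                    # weight update
    P = (P - np.outer(k, Px)) / lam  # inverse-correlation matrix update
    return w, P

# Toy usage: track y = [1.0, -2.0] . x from streaming data.
rng = np.random.default_rng(0)
w, P = np.zeros(2), 100.0 * np.eye(2)
for _ in range(200):
    x = rng.normal(size=2)
    y = np.array([1.0, -2.0]) @ x + rng.normal(scale=0.01)
    w, P = rls_update(w, P, x, y)
print(w)   # close to [1.0, -2.0]
```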
Missing Value Imputation Approach for Mass Spectrometry-based Metabolomics Data.
Wei, Runmin; Wang, Jingye; Su, Mingming; Jia, Erik; Chen, Shaoqiu; Chen, Tianlu; Ni, Yan
2018-01-12
Missing values exist widely in mass-spectrometry (MS) based metabolomics data. Various methods have been applied for handling missing values, but the selection can significantly affect following data analyses. Typically, there are three types of missing values, missing not at random (MNAR), missing at random (MAR), and missing completely at random (MCAR). Our study comprehensively compared eight imputation methods (zero, half minimum (HM), mean, median, random forest (RF), singular value decomposition (SVD), k-nearest neighbors (kNN), and quantile regression imputation of left-censored data (QRILC)) for different types of missing values using four metabolomics datasets. Normalized root mean squared error (NRMSE) and NRMSE-based sum of ranks (SOR) were applied to evaluate imputation accuracy. Principal component analysis (PCA)/partial least squares (PLS)-Procrustes analysis were used to evaluate the overall sample distribution. Student's t-test followed by correlation analysis was conducted to evaluate the effects on univariate statistics. Our findings demonstrated that RF performed the best for MCAR/MAR and QRILC was the favored one for left-censored MNAR. Finally, we proposed a comprehensive strategy and developed a public-accessible web-tool for the application of missing value imputation in metabolomics ( https://metabolomics.cc.hawaii.edu/software/MetImp/ ).
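A sketch of the NRMSE accuracy measure used to score the imputation methods is given below. Normalizing the root mean squared error by the variance of the true values is one common convention; the paper's exact normalization and the toy numbers are assumptions.

```python
import numpy as np

def nrmse(true_vals, imputed_vals):
    """Normalized root mean squared error of imputed values against the truth."""
    mse = np.mean((true_vals - imputed_vals) ** 2)
    return np.sqrt(mse / np.var(true_vals))

truth = np.array([1.0, 2.0, 3.0, 4.0])
imputed = np.array([1.1, 1.8, 3.2, 4.1])
print(nrmse(truth, imputed))
```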
Li, Haichen; Yaron, David J
2016-11-08
A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing a sum of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
Goicoechea, H C; Olivieri, A C
2001-07-01
A newly developed multivariate method involving net analyte preprocessing (NAP) was tested using central composite calibration designs of progressively decreasing size regarding the multivariate simultaneous spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least-squares (PLS-1). Minimisation of the calibration predicted error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net analyte based methods.
NASA Astrophysics Data System (ADS)
Al-Omran, Abdulrasoul M.; Aly, Anwar A.; Al-Wabel, Mohammad I.; Al-Shayaa, Mohammad S.; Sallam, Abdulazeam S.; Nadeem, Mahmoud E.
2017-11-01
The analyses of 180 groundwater samples from Al-Kharj, Saudi Arabia, showed that most groundwaters are unsuitable for drinking uses due to high salinity; however, they can be used for irrigation with some restriction. The electrical conductivity of the studied groundwater ranged between 1.05 and 10.15 dS m-1 with an average of 3.0 dS m-1. Nitrate was also found in high concentration in some groundwater. Piper diagrams revealed that the majority of water samples are of a magnesium-calcium/sulfate-chloride water type. The Gibbs diagram revealed that the chemical weathering of rock-forming minerals and evaporation are influencing the groundwater chemistry. A kriging method was used for predicting the spatial distribution of salinity (EC, dS m-1) and NO3- (mg L-1) in Al-Kharj's groundwater using data from 180 different locations. After normalization of the data, a variogram was drawn, and the model with the lowest residual sum of squares was selected to fit the experimental variogram. Cross-validation and root mean square error were then used to select the best interpolation method. Kriging was found to be a suitable method for groundwater interpolation and management using either GS+ or ArcGIS.
Health Equity and the Fallacy of Treating Causes of Population Health as if They Sum to 100.
Krieger, Nancy
2017-04-01
Numerous examples exist in population health of work that erroneously forces the causes of health to sum to 100%. This is surprising. Clear refutations of this error extend back 80 years. Because public health analysis, action, and allocation of resources are ill served by faulty methods, I consider why this error persists. I first review several high-profile examples, including Doll and Peto's 1981 opus on the causes of cancer and its current interpretations; a 2015 high-publicity article in Science claiming that two thirds of cancer is attributable to chance; and the influential Web site "County Health Rankings & Roadmaps: Building a Culture of Health, County by County," whose model sums causes of health to equal 100%: physical environment (10%), social and economic factors (40%), clinical care (20%), and health behaviors (30%). Critical analysis of these works and earlier historical debates reveals that underlying the error of forcing causes of health to sum to 100% is the still dominant but deeply flawed view that causation can be parsed as nature versus nurture. Better approaches exist for tallying risk and monitoring efforts to reach health equity.
Error propagation of partial least squares for parameters optimization in NIR modeling.
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-05
A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameters optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. The error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was presented by both type I and type II error. For example, when variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrated how and to what extent the different modeling parameters affect error propagation of PLS for parameters optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a powerful process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, it could provide significant guidance for the selection of modeling parameters of other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2009-01-01
Given a system which can fail in 1 of n different ways, a fault detection and isolation (FDI) algorithm uses sensor data in order to determine which fault is the most likely to have occurred. The effectiveness of an FDI algorithm can be quantified by a confusion matrix, which indicates the probability that each fault is isolated given that each fault has occurred. Confusion matrices are often generated with simulation data, particularly for complex systems. In this paper we perform FDI using sums of squares of sensor residuals (SSRs). We assume that the sensor residuals are Gaussian, which gives the SSRs a chi-squared distribution. We then generate analytic lower and upper bounds on the confusion matrix elements. This allows for the generation of optimal sensor sets without numerical simulations. The confusion matrix bounds are verified with simulated aircraft engine data.
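The basic statistic is simple to illustrate: under the stated Gaussian assumption (here taken as zero-mean, unit-variance residuals for simplicity), the sum of squares of the residuals is chi-square distributed, so a detection threshold can be read off a chi-square quantile. This sketch is my own illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.stats import chi2

def ssr_statistic(residuals):
    """Sum of squares of sensor residuals."""
    return float(np.sum(residuals ** 2))

n_sensors = 8
threshold = chi2.ppf(0.99, df=n_sensors)      # 1% false-alarm threshold
residuals = np.random.default_rng(2).normal(size=n_sensors)
ssr = ssr_statistic(residuals)
print(ssr, threshold, ssr > threshold)        # flag a fault if the SSR exceeds the threshold
```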
Bakker, Marjan; Wicherts, Jelte M
2014-09-01
In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Variability in Objective Refraction for Persons with Down Syndrome.
Marsack, Jason D; Ravikumar, Ayeswarya; Benoit, Julia S; Anderson, Heather A
2017-05-01
Down syndrome (DS) is associated with ocular and cognitive sequelae, which both have the potential to influence clinical measures of refractive error. This study compares variability of autorefraction among subjects with and without DS. Grand Seiko autorefraction was performed on 139 subjects with DS (age: 8-55, mean: 25 ± 9 yrs) and 138 controls (age: 7-59, mean: 25 ± 10 yrs). Subjects with three refraction measures per eye (DS: 113, control: 136) were included for analysis. Each refraction was converted to power vector notation (M, J0, J45) and a difference in each component (ΔM, ΔJ0, ΔJ45) was calculated for each refraction pairing. From these quantities, the average dioptric strength (the square root of the sum of the squares of M, J0, and J45) and the average dioptric difference (the square root of the sum of the squares of ΔM, ΔJ0, and ΔJ45) were calculated. The DS group exhibited a greater median dioptric strength (1Q: 1.38D, M: 2.38D, 3Q: 3.41D) than control eyes (1Q: 0.47D, M: 0.96D, 3Q: 2.75D) (P < .001). Likewise, the DS group exhibited a greater median dioptric difference in refraction (1Q: 0.27D, M: 0.42D, 3Q: 0.78D) than control eyes (1Q: 0.11D, M: 0.15D, 3Q: 0.23D) (P < .001), with 97.1% of control eyes exhibiting a dioptric difference ≤0.50D, compared to 59.3% of DS eyes. No effect of dioptric strength on dioptric difference was detected (P = .3009), nor was a significant interaction between dioptric strength and group detected (P = .49). In the current study, comparing three autorefraction readings, the median total dioptric difference with autorefraction in DS was 2.8 times the levels observed in controls, indicating greater potential uncertainty in objective measures of refraction for this population. The analysis demonstrates that J45 is highly contributory to the observed variability.
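The power-vector arithmetic described in the abstract reduces to two square-root-of-sum-of-squares computations; the example refractions below are invented for illustration.

```python
import math

def dioptric_strength(m, j0, j45):
    """Dioptric strength of a refraction expressed as a power vector (M, J0, J45)."""
    return math.sqrt(m**2 + j0**2 + j45**2)

def dioptric_difference(r1, r2):
    """Dioptric difference between two refraction power vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))

print(dioptric_strength(-2.0, 0.5, 0.25))
print(dioptric_difference((-2.0, 0.5, 0.25), (-1.5, 0.25, 0.25)))
```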
Spatial and temporal variability of precipitation in Serbia for the period 1961-2010
NASA Astrophysics Data System (ADS)
Milovanović, Boško; Schuster, Phillip; Radovanović, Milan; Vakanjac, Vesna Ristić; Schneider, Christoph
2017-10-01
Monthly, seasonal and annual sums of precipitation in Serbia were analysed in this paper for the period 1961-2010. Latitude, longitude and altitude of 421 precipitation stations and terrain features in their close environment (slope and aspect of terrain within a radius of 10 km around the station) were used to develop a regression model on which spatial distribution of precipitation was calculated. The spatial distribution of annual, June (maximum values for almost all of the stations) and February (minimum values for almost all of the stations) precipitation is presented. Annual precipitation amounts ranged from 500-600 mm to over 1100 mm. June precipitation ranged from 60 to 140 mm and February precipitation from 30 to 100 mm. The validation results expressed as root mean square error (RMSE) for monthly sums ranged from 3.9 mm in October (7.5% of the average precipitation for this month) to 6.2 mm in April (10.4%). For seasonal sums, RMSE ranged from 10.4 mm during autumn (6.1% of the average precipitation for this season) to 20.5 mm during winter (13.4%). On the annual scale, RMSE was 68 mm (9.5% of the average amount of precipitation). We further analysed precipitation trends using Sen's estimation, while the Mann-Kendall test was used for testing the statistical significance of the trends. For most parts of Serbia, the mean annual precipitation trends fell between -5 and +5 mm/decade or between +5 and +15 mm/decade. June precipitation trends were mainly between -8 and +8 mm/decade. February precipitation trends generally ranged from -3 to +3 mm/decade.
Xu, Di; Chai, Meiyun; Dong, Zhujun; Rahman, Md Maksudur; Yu, Xi; Cai, Junmeng
2018-06-04
The kinetic compensation effect in the logistic distributed activation energy model (DAEM) for lignocellulosic biomass pyrolysis was investigated. The sum of square error (SSE) surface tool was used to analyze two theoretically simulated logistic DAEM processes for cellulose and xylan pyrolysis. The logistic DAEM coupled with the pattern search method for parameter estimation was used to analyze the experimental data of cellulose pyrolysis. The results showed that many parameter sets of the logistic DAEM could fit the data at different heating rates very well for both simulated and experimental processes, and a perfect linear relationship between the logarithm of the frequency factor and the mean value of the activation energy distribution was found. The parameters of the logistic DAEM can be estimated by coupling the optimization method and isoconversional kinetic methods. The results would be helpful for chemical kinetic analysis using DAEM. Copyright © 2018 Elsevier Ltd. All rights reserved.
Fu, Yulong; Ma, Jing; Tan, Liying; Yu, Siyuan; Lu, Gaoyuan
2018-04-10
In this paper, new expressions of the channel-correlation coefficient and its components (the large- and small-scale channel-correlation coefficients) for a plane wave are derived for a horizontal link in moderate-to-strong non-Kolmogorov turbulence using a generalized effective atmospheric spectrum which includes finite-turbulence inner and outer scales and high-wave-number "bump". The closed-form expression of the average bit error rate (BER) of the coherent free-space optical communication system is derived using the derived channel-correlation coefficients and an α-μ distribution to approximate the sum of the square root of arbitrarily correlated Gamma-Gamma random variables. Analytical results are provided to investigate the channel correlation and evaluate the average BER performance. The validity of the proposed approximation is illustrated by Monte Carlo simulations. This work will help with further investigation of the fading correlation in spatial diversity systems.
Interval Predictor Models for Data with Measurement Uncertainty
NASA Technical Reports Server (NTRS)
Lacerda, Marcio J.; Crespo, Luis G.
2017-01-01
An interval predictor model (IPM) is a computational model that predicts the range of an output variable given input-output data. This paper proposes strategies for constructing IPMs based on semidefinite programming and sum of squares (SOS). The models are optimal in the sense that they yield an interval valued function of minimal spread containing all the observations. Two different scenarios are considered. The first one is applicable to situations where the data is measured precisely whereas the second one is applicable to data subject to known biases and measurement error. In the latter case, the IPMs are designed to fully contain regions in the input-output space where the data is expected to fall. Moreover, we propose a strategy for reducing the computational cost associated with generating IPMs as well as means to simulate them. Numerical examples illustrate the usage and performance of the proposed formulations.
Satisfiability of logic programming based on radial basis function neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamadneh, Nawaf; Sathasivam, Saratha; Tilahun, Surafel Luleseged
2014-07-10
In this paper, we propose a new technique to test the satisfiability of propositional logic programming and the quantified Boolean formula problem in radial basis function neural networks. For this purpose, we built radial basis function neural networks to represent the propositional logic which has exactly three variables in each clause. We used the Prey-predator algorithm to calculate the output weights of the neural networks, while the K-means clustering algorithm is used to determine the hidden parameters (the centers and the widths). The mean of the sum squared error function is used to measure the activity of the two algorithms. We applied the developed technique with the recurrent radial basis function neural networks to represent the quantified Boolean formulas. The new technique can be applied to solve many applications such as electronic circuits and NP-complete problems.
Gain weighted eigenspace assignment
NASA Technical Reports Server (NTRS)
Davidson, John B.; Andrisani, Dominick, II
1994-01-01
This report presents the development of the gain weighted eigenspace assignment methodology. This provides a designer with a systematic methodology for trading off eigenvector placement versus gain magnitudes, while still maintaining desired closed-loop eigenvalue locations. This is accomplished by forming a cost function composed of a scalar measure of error between desired and achievable eigenvectors and a scalar measure of gain magnitude, determining analytical expressions for the gradients, and solving for the optimal solution by numerical iteration. For this development the scalar measure of gain magnitude is chosen to be a weighted sum of the squares of all the individual elements of the feedback gain matrix. An example is presented to demonstrate the method. In this example, solutions yielding achievable eigenvectors close to the desired eigenvectors are obtained with significant reductions in gain magnitude compared to a solution obtained using a previously developed eigenspace (eigenstructure) assignment method.
NASA Technical Reports Server (NTRS)
Liu, G.
1985-01-01
One of the major concerns in the design of an active control system is obtaining the information needed for effective feedback. This involves the combination of sensing and estimation. A sensor location index is defined as the weighted sum of the mean square estimation errors in which the sensor locations can be regarded as estimator design parameters. The design goal is to choose these locations to minimize the sensor location index. The choice of the number of sensors is a tradeoff between the estimation quality based upon the same performance index and the total costs of installing and maintaining extra sensors. An experimental study for choosing the sensor location was conducted on an aeroelastic system. The system modeling which includes the unsteady aerodynamics model developed by Stephen Rock was improved. Experimental results verify the trend of the theoretical predictions of the sensor location index for different sensor locations at various wind speeds.
Visco-acoustic wave-equation traveltime inversion and its sensitivity to attenuation errors
NASA Astrophysics Data System (ADS)
Yu, Han; Chen, Yuqing; Hanafy, Sherif M.; Huang, Jiangping
2018-04-01
A visco-acoustic wave-equation traveltime inversion method is presented that inverts for the shallow subsurface velocity distribution. Similar to the classical wave-equation traveltime inversion, this method finds the velocity model that minimizes the squared sum of the traveltime residuals. Even though wave-equation traveltime inversion can partly avoid the cycle-skipping problem, a good initial velocity model is required for the inversion to converge to a reasonable tomogram with different attenuation profiles. When the Q model is far from the real model, the final tomogram is very sensitive to the starting velocity model. Nevertheless, a minor or moderate perturbation of the Q model from the true one does not strongly affect the inversion if the low-wavenumber information of the initial velocity model is mostly correct. These claims are validated with numerical tests on both synthetic and field data sets.
NASA Technical Reports Server (NTRS)
Stankiewicz, N.
1982-01-01
The multiple channel input signal to a soft limiter amplifier such as a traveling wave tube is represented as a finite, linear sum of Gaussian functions in the frequency domain. Linear regression is used to fit the channel shapes to a least squares residual error. Distortions in the output signal, namely intermodulation products, are produced by the nonlinear gain characteristic of the amplifier and constitute the principal noise analyzed in this study. The signal to noise ratios are calculated for various input powers from saturation to 10 dB below saturation for two specific distributions of channels. A criterion for the truncation of the series expansion of the nonlinear transfer characteristic is given. It is found that the signal to noise ratios are very sensitive to the coefficients used in this expansion. Improper or incorrect truncation of the series leads to ambiguous results in the signal to noise ratios.
Zollanvari, Amin; Dougherty, Edward R
2014-06-01
The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
Discordance between net analyte signal theory and practical multivariate calibration.
Brown, Christopher D
2004-08-01
Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
Orthogonal Regression: A Teaching Perspective
ERIC Educational Resources Information Center
Carr, James R.
2012-01-01
A well-known approach to linear least squares regression is that which involves minimizing the sum of squared orthogonal projections of data points onto the best fit line. This form of regression is known as orthogonal regression, and the linear model that it yields is known as the major axis. A similar method, reduced major axis regression, is…
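A compact way to compute the major axis is through the leading eigenvector of the sample covariance matrix: the line through the centroid along that direction minimizes the sum of squared orthogonal distances. The sketch below is my own illustration with synthetic data, not code from the cited article.

```python
import numpy as np

def major_axis(x, y):
    """Orthogonal (major-axis) regression line: slope and intercept."""
    cov = np.cov(x, y)                            # 2x2 sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]    # leading principal direction
    slope = direction[1] / direction[0]
    intercept = y.mean() - slope * x.mean()       # line passes through the centroid
    return slope, intercept

rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = 1.5 * x + 0.5 + rng.normal(scale=0.3, size=200)
print(major_axis(x, y))   # close to slope 1.5, intercept 0.5
```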
Ruiz, Jonatan R; Ortega, Francisco B; Castro-Piñero, Jose
2014-11-30
We investigated the criterion-related validity and the reliability of the 1/4 mile run-walk test (MRWT) in children and adolescents. A total of 86 children (n=42 girls) completed a maximal graded treadmill test using a gas analyzer and the 1/4MRW test. We investigated the test-retest reliability of the 1/4MRWT in a different group of children and adolescents (n=995, n=418 girls). The 1/4MRWT time, sex, and BMI significantly contributed to predicting measured VO2peak (R2 = 0.32). There was no systematic bias in the cross-validation group (P>0.1). The root mean sum of squared errors (RMSE) and the percentage error were 6.9 ml/kg/min and 17.7%, respectively, and the accurate prediction (i.e. the percentage of estimations within ±4.5 ml/kg/min of VO2peak) was 48.8%. The reliability analysis showed that the mean inter-trial difference ranged from 0.6 seconds in children aged 6-11 years to 1.3 seconds in adolescents aged 12-17 years (all P. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Topological Interference Management for K-User Downlink Massive MIMO Relay Network Channel.
Selvaprabhu, Poongundran; Chinnadurai, Sunil; Li, Jun; Lee, Moon Ho
2017-08-17
In this paper, we study the emergence of topological interference alignment and the characterizing features of a multi-user broadcast interference relay channel. We propose an alternative transmission strategy named the relay space-time interference alignment (R-STIA) technique, in which a K-user multiple-input-multiple-output (MIMO) interference channel has massive antennas at the transmitter and relay. Severe interference from unknown transmitters affects the downlink relay network channel and degrades the system performance. An additional (unintended) receiver is introduced in the proposed R-STIA technique to overcome the above problem, since it has the ability to decode the desired signals for the intended receiver by considering cooperation between the receivers. The additional receiver also helps in recovering and reconstructing the interference signals with limited channel state information at the relay (CSIR). The Alamouti space-time transmission technique and minimum mean square error (MMSE) linear precoder are also used in the proposed scheme to detect the presence of interference signals. Numerical results show that the proposed R-STIA technique achieves a better performance in terms of the bit error rate (BER) and sum-rate compared to the existing broadcast channel schemes.
Spectral estimates of net radiation and soil heat flux
Daughtry, C.S.T.; Kustas, William P.; Moran, M.S.; Pinter, P. J.; Jackson, R. D.; Brown, P.W.; Nichols, W.D.; Gay, L.W.
1990-01-01
Conventional methods of measuring surface energy balance are point measurements and represent only a small area. Remote sensing offers a potential means of measuring outgoing fluxes over large areas at the spatial resolution of the sensor. The objective of this study was to estimate net radiation (Rn) and soil heat flux (G) using remotely sensed multispectral data acquired from an aircraft over large agricultural fields. Ground-based instruments measured Rn and G at nine locations along the flight lines. Incoming fluxes were also measured by ground-based instruments. Outgoing fluxes were estimated using remotely sensed data. Remote Rn, estimated as the algebraic sum of incoming and outgoing fluxes, slightly underestimated Rn measured by the ground-based net radiometers. The mean absolute errors for remote Rn minus measured Rn were less than 7%. Remote G, estimated as a function of a spectral vegetation index and remote Rn, slightly overestimated measured G; however, the mean absolute error for remote G was 13%. Some of the differences between measured and remote values of Rn and G are associated with differences in instrument designs and measurement techniques. The root mean square error for available energy (Rn - G) was 12%. Thus, methods using both ground-based and remotely sensed data can provide reliable estimates of the available energy which can be partitioned into sensible and latent heat under nonadvective conditions. © 1990.
SEU System Analysis: Not Just the Sum of All Parts
NASA Technical Reports Server (NTRS)
Berg, Melanie D.; Label, Kenneth
2014-01-01
Single event upset (SEU) analysis of complex systems is challenging. Currently, system SEU analysis is performed by component level partitioning and then either: the most dominant SEU cross-sections (SEUs) are used in system error rate calculations; or the partition SEUs are summed to eventually obtain a system error rate. In many cases, system error rates are overestimated because these methods generally overlook system level derating factors. The problem with overestimating is that it can cause overdesign and consequently negatively affect the following: cost, schedule, functionality, and validation/verification. The scope of this presentation is to discuss the risks involved with our current scheme of SEU analysis for complex systems; and to provide alternative methods for improvement.
Parallel magnetic resonance imaging using coils with localized sensitivities.
Goldfarb, James W; Holland, Agnes E
2004-09-01
The purpose of this study was to present clinical examples and illustrate the inefficiencies of a conventional reconstruction using a commercially available phased array coil with localized sensitivities. Five patients were imaged at 1.5 T using a cardiac-synchronized gadolinium-enhanced acquisition and a commercially available four-element phased array coil. Four unique sets of images were reconstructed from the acquired k-space data: (a) a sum-of-squares image using four elements of the coil; localized sum-of-squares images from the (b) anterior coils and (c) posterior coils; and (d) a local reconstruction. Images were analyzed for artifacts and usable field-of-view. Conventional image reconstruction produced images with fold-over artifacts in all cases spanning a portion of the image (mean 90 mm; range 36-126 mm). The local reconstruction removed fold-over artifacts and resulted in an effective increase in the field-of-view (mean 50%; range 20-70%). Commercially available phased array coils do not always have overlapping sensitivities. Fold-over artifacts can be removed using an alternate reconstruction method. When assessing the advantages of parallel imaging techniques, gains achieved using techniques such as SENSE and SMASH should be gauged against the acquisition time of the localized method rather than the conventional sum-of-squares method.
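The baseline sum-of-squares coil combination referred to above is a one-line operation on the stack of coil images. The sketch below is a generic illustration with random data, not the study's reconstruction pipeline; the array shapes are assumptions.

```python
import numpy as np

def sum_of_squares(coil_images):
    """Combine coil images of shape (n_coils, ny, nx), possibly complex,
    into a single magnitude image via the root sum of squares."""
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

rng = np.random.default_rng(4)
coils = rng.normal(size=(4, 64, 64)) + 1j * rng.normal(size=(4, 64, 64))
combined = sum_of_squares(coils)
print(combined.shape)   # (64, 64)
```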
A new enhanced index tracking model in portfolio optimization with sum weighted approach
NASA Astrophysics Data System (ADS)
Siew, Lam Weng; Jaaman, Saiful Hafizah; Hoe, Lam Weng
2017-04-01
Index tracking is a portfolio management approach which aims to construct the optimal portfolio to achieve a similar return to the benchmark index return at minimum tracking error without purchasing all the stocks that make up the index. Enhanced index tracking is an improved portfolio management approach which aims to generate higher portfolio return than the benchmark index return besides minimizing the tracking error. The objective of this paper is to propose a new enhanced index tracking model with a sum weighted approach to improve the existing index tracking model for tracking the benchmark Technology Index in Malaysia. The optimal portfolio composition and performance of both models are determined and compared in terms of portfolio mean return, tracking error and information ratio. The results of this study show that the optimal portfolio of the proposed model is able to generate higher mean return than the benchmark index at minimum tracking error. Besides that, the proposed model is able to outperform the existing model in tracking the benchmark index. The significance of this study is to propose a new enhanced index tracking model with a sum weighted approach which contributes a 67% improvement in the portfolio mean return as compared to the existing model.
NASA Astrophysics Data System (ADS)
Nema, Manish K.; Khare, Deepak; Chandniha, Surendra K.
2017-11-01
Estimation of evapotranspiration (ET) is an essential component of the hydrologic cycle and is requisite for efficient irrigation water management planning and hydro-meteorological studies at both the basin and catchment scales. There are about twenty well-established methods available for ET estimation, which depend upon various meteorological parameters and assumptions. Most of these methods are physically based and need a variety of input data. The FAO-56 Penman-Monteith method (PM) for estimating reference evapotranspiration (ET0) is recommended for irrigation scheduling worldwide, because PM generally yields the best results under various climatic conditions. This study investigates the abilities of artificial neural networks (ANN) to improve the accuracy of monthly evaporation estimation in the sub-humid climatic region of Dehradun. In the first part of the study, different ANN models, comprising various combinations of training function and number of neurons, were developed to estimate ET0, and these were compared with the Penman-Monteith (PM) ET0 as the ideal (observed) ET0. Various statistical approaches were considered to estimate the model performance, i.e. Coefficient of Correlation (r), Sum of Squared Errors, Root Mean Square Error, Nash-Sutcliffe Efficiency Index (NSE) and Mean Absolute Error. The ANN model with the Levenberg-Marquardt training algorithm, a single hidden layer and nine neurons was found to have the best predictive capability for the study station, with Coefficient of Correlation (r) and NSE values of 0.996 and 0.991 for the calibration period and 0.990 and 0.980 for the validation period, respectively. In the subsequent part of the study, the trend analysis of the ET0 time series revealed a rising trend in the month of March, and a falling trend during June to November, except August, with more than 90% significance level; the annual declining rate was found to be 1.49 mm per year.
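The goodness-of-fit statistics used to rank the ANN models can be computed as below; this is a generic sketch with invented observed/simulated series, not the study's evaluation code.

```python
import numpy as np

def fit_statistics(observed, simulated):
    """Correlation coefficient, RMSE and Nash-Sutcliffe efficiency (NSE)."""
    r = np.corrcoef(observed, simulated)[0, 1]
    rmse = np.sqrt(np.mean((observed - simulated) ** 2))
    nse = 1.0 - np.sum((observed - simulated) ** 2) / \
                np.sum((observed - observed.mean()) ** 2)
    return r, rmse, nse

obs = np.array([3.1, 4.2, 5.0, 4.4, 3.8])
sim = np.array([3.0, 4.0, 5.2, 4.5, 3.9])
print(fit_statistics(obs, sim))
```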
Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI
NASA Astrophysics Data System (ADS)
Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.
2017-12-01
Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least square (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments ‘plasma’ and ‘interstitial volume’ and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, a measured and a theoretical population based one, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numeric integration with the Runge-Kutta method or convolution. Furthermore two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved results in precision and robustness of determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter ve shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach in several curve types. The convolution excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated by usage of the differential equations. Fitting with the convolution approach is superior in computational time, with better stability and accuracy at the same time.
van Toorenburg, Marlies; Oostrom, Janneke K; Pollet, Thomas V
2015-03-01
Résumés are screened rapidly, with some reports stating that recruiters form their impressions within 10 seconds. Certain résumé characteristics can have a significant impact on the snap judgments these recruiters make. The main goal of the present study was to examine the effect of the e-mail address (formal vs. informal) used in a résumé on the hirability perceptions formed by professional recruiters (N=73). In addition, the effect of the e-mail address on hirability perceptions was compared to the effects of spelling errors and typeface. Participants assessed the cognitive ability, personality, and the hirability of six fictitious applicants for the job of an HR specialist. The hirability ratings for the résumés with informal e-mail addresses were significantly lower than the hirability ratings for résumés that featured a formal e-mail address. The effect of e-mail address was as strong as the effect of spelling errors and stronger than that of typeface. The effect of e-mail address on hirability was mediated by perceptions of conscientiousness and honesty-humility. This study among actual recruiters shows for the first time that the choice of the e-mail address used on a résumé might make a real difference.
NASA Astrophysics Data System (ADS)
Wang, Dong
2018-05-01
Thanks to the great efforts made by Antoni (2006), spectral kurtosis has been recognized as a milestone for characterizing non-stationary signals, especially bearing fault signals. The main idea of spectral kurtosis is to use the fourth standardized moment, namely kurtosis, as a function of spectral frequency so as to indicate how repetitive transients caused by a bearing defect vary with frequency. Moreover, spectral kurtosis is defined based on an analytic bearing fault signal constructed from either a complex filter or the Hilbert transform. In related work, Borghesani et al. (2014) mathematically revealed the relationship between the kurtosis of an analytic bearing fault signal and the square of the squared envelope spectrum of that signal, in order to explain spectral correlation for the quantification of bearing fault signals. More interestingly, it was discovered that the sum of peaks at cyclic frequencies in the square of the squared envelope spectrum corresponds to the raw fourth-order moment. Inspired by the aforementioned works, in this paper we mathematically show that: (1) spectral kurtosis can be decomposed into squared envelope and squared L2/L1 norm, so that spectral kurtosis can be explained as a spectral squared L2/L1 norm; (2) the spectral L2/L1 norm is formally defined for characterizing bearing fault signals and two geometrical explanations of it are given; (3) the spectral L2/L1 norm is proportional to the square root of the sum of peaks at cyclic frequencies in the square of the squared envelope spectrum; (4) some extensions of the spectral L2/L1 norm for characterizing bearing fault signals are pointed out.
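A minimal sketch of the squared envelope and an L2/L1-type ratio, assuming a real vibration signal in a NumPy array (this is an illustration of the general quantities, not the paper's exact band-wise estimator):

    import numpy as np
    from scipy.signal import hilbert

    def squared_envelope(x):
        """Squared envelope of a real signal via the analytic signal (Hilbert transform)."""
        return np.abs(hilbert(np.asarray(x, float))) ** 2

    def l2_l1_ratio(x):
        """L2/L1 norm ratio of the squared envelope; larger values indicate more
        impulsive (repetitive-transient) content, in the spirit of a kurtosis index."""
        se = squared_envelope(x)
        return np.linalg.norm(se, 2) / np.linalg.norm(se, 1)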
Chemical library subset selection algorithms: a unified derivation using spatial statistics.
Hamprecht, Fred A; Thiel, Walter; van Gunsteren, Wilfred F
2002-01-01
If similar compounds have similar activity, rational subset selection becomes superior to random selection in screening for pharmacological lead discovery programs. Traditional approaches to this experimental design problem fall into two classes: (i) a linear or quadratic response function is assumed, or (ii) some space-filling criterion is optimized. The assumptions underlying the first approach are clear but not always defendable; the second approach yields more intuitive designs but lacks a clear theoretical foundation. We model activity in a bioassay as the realization of a stochastic process and use the best linear unbiased estimator to construct spatial sampling designs that optimize the integrated mean square prediction error, the maximum mean square prediction error, or the entropy. We argue that our approach constitutes a unifying framework encompassing most proposed techniques as limiting cases and sheds light on their underlying assumptions. In particular, vector quantization is obtained, in dimensions up to eight, in the limiting case of very smooth response surfaces for the integrated mean square error criterion. Closest packing is obtained for very rough surfaces under the integrated mean square error and entropy criteria. We suggest using either the integrated mean square prediction error or the entropy as optimization criteria rather than approximations thereof, and propose a scheme for direct iterative minimization of the integrated mean square prediction error. Finally, we discuss how the quality of chemical descriptors manifests itself and clarify the assumptions underlying the selection of diverse or representative subsets.
Error model for the SAO 1969 standard earth.
NASA Technical Reports Server (NTRS)
Martin, C. F.; Roy, N. A.
1972-01-01
A method is developed for estimating an error model for geopotential coefficients using satellite tracking data. A single station's apparent timing error for each pass is attributed to geopotential errors. The root sum of the residuals for each station also depends on the geopotential errors, and these are used to select an error model. The model chosen is 1/4 of the difference between the SAO M1 and the APL 3.5 geopotential.
A shock-tube measurement of the SiO/E 1 Sigma + - X 1 Sigma +/ transition moment
NASA Technical Reports Server (NTRS)
Park, C.
1978-01-01
The sum of the squares of the electronic transition moments for the (E 1 Sigma +) - (X 1 Sigma +) band system of SiO has been determined from absorption measurements conducted in the reflected-shock region of a shock tube. The test gas was produced by shock-heating a mixture of SiCl4, N2O, and Ar, and the spectra were recorded photographically in the 150-230-nm wavelength range. The values of the sum of the squares were determined by comparing the measured absorption spectra with those produced by a line-by-line synthetic spectrum calculation. The value so deduced at an r-centroid value of 3.0 bohr was 0.86 + or - 0.10 atomic unit.
NASA Astrophysics Data System (ADS)
Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry
1998-08-01
All current concepts for the NGST are innovative designs which present unique systems-level challenges. The goals are to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project, in particular tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment, which allows sub-system performance specifications to be analyzed parametrically and includes optimizing metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and non-linear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly effective, real-time work to be done by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirement bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually. The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which directly link to the science requirements.
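For context, the conventional root-sum-of-squares roll-up that the paper argues is insufficient for designs of this complexity can be sketched as follows; the contributor names and values are hypothetical:

    import numpy as np

    def rss_budget(contributors):
        """Classic error-budget roll-up: total = sqrt(sum of squared 1-sigma contributors).
        Valid only when the contributors are independent and combine linearly."""
        return np.sqrt(np.sum(np.square(list(contributors.values()))))

    # Hypothetical wavefront-error contributors, in nm RMS
    budget = {"thermal_drift": 20.0, "pointing_jitter": 15.0, "figure_error": 30.0}
    print(rss_budget(budget))   # about 39 nm RMS

The integrated simulation environment described above replaces this static roll-up with coupled, parametric analysis of the interacting subsystems.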
2016-09-01
For an FMCW signal, it was demonstrated that the system is capable of estimating the angle of arrival (AOA) with a root-mean-square (RMS) error of 0.29°; for a P4 coded signal, the RMS error in estimating the AOA is 0.32° at 1° resolution.
A Geomagnetic Estimate of Mean Paleointensity
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
2004-01-01
To test a statistical hypothesis about Earth's magnetic field against paleomagnetism, the present field is used to estimate time averaged paleointensity. The estimate used the modern magnetic multipole spectrum R(n), which gives the mean square induction represented by spherical harmonics of degree n averaged over the sphere of radius a = 6371.2 km. The hypothesis asserts that the low degree multipole powers of the core-source field are distributed as chi-squared with 2n+1 degrees of freedom and expectation values {R(n)} = K[(n+1/2)/(n(n+1))](c/a)^(2n+4), where c is the 3480 km radius of the Earth's core. (This is compatible with a field that is usually mainly a geocentric axial dipole.) Amplitude K is estimated by fitting theoretical to observational spectra through degree 12. The resulting calibrated expectation spectrum is summed through degree 12 to estimate the expected square intensity F^2. The sum also estimates F^2 averaged over geologic time, in so far as the present magnetic spectrum is a fair sample of that generated in the past by core geodynamic processes. Additional information is included in the original extended abstract.
NASA Astrophysics Data System (ADS)
Wang, Dong; Ding, Hao; Singh, Vijay P.; Shang, Xiaosan; Liu, Dengfeng; Wang, Yuankun; Zeng, Xiankui; Wu, Jichun; Wang, Lachun; Zou, Xinqing
2015-05-01
For scientific and sustainable management of water resources, hydrologic and meteorologic data series often need to be extended. This paper proposes a hybrid approach, named WA-CM (wavelet analysis-cloud model), for data series extension. Wavelet analysis has time-frequency localization features, known as the "mathematics microscope," that can decompose and reconstruct hydrologic and meteorologic series by the wavelet transform. The cloud model is a mathematical representation of fuzziness and randomness and has strong robustness for uncertain data. The WA-CM approach first employs the wavelet transform to decompose the measured nonstationary series and then uses the cloud model to develop an extension model for each decomposition layer series. The final extension is obtained by summing the results of the extension of each layer. Two kinds of meteorologic and hydrologic data sets with different characteristics and different influence of human activity from six (three pairs of) representative stations are used to illustrate the WA-CM approach. The approach is also compared with four other methods: the conventional correlation extension method, the Kendall-Theil robust line method, artificial neural network methods (back propagation, multilayer perceptron, and radial basis function), and the single cloud model method. To evaluate the model performance completely and thoroughly, five measures are used: relative error, mean relative error, standard deviation of relative error, root mean square error, and the Theil inequality coefficient. Results show that the WA-CM approach is effective, feasible, and accurate and is found to be better than the other four methods compared. The theory employed and the approach developed here can be applied to the extension of data in other areas as well.
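A sketch of the wavelet decomposition/reconstruction step only, assuming the PyWavelets package and a Daubechies-4 wavelet; the per-layer cloud-model extension itself is not reproduced here:

    import numpy as np
    import pywt

    def decompose(series, wavelet="db4", level=3):
        """Decompose a measured series into approximation and detail coefficient layers."""
        return pywt.wavedec(np.asarray(series, float), wavelet, level=level)

    def reconstruct(coeffs, wavelet="db4"):
        """Re-sum the layers (after each has been extended) back into a single series."""
        return pywt.waverec(coeffs, wavelet)

    # In WA-CM each coefficient layer would be extended by a cloud model before summation;
    # here we only verify the round-trip reconstruction of an example series.
    x = np.sin(np.linspace(0, 20, 256)) + 0.1 * np.random.randn(256)
    assert np.allclose(reconstruct(decompose(x)), x, atol=1e-8)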
Analog of the Peter-Weyl expansion for Lorentz group
NASA Astrophysics Data System (ADS)
Perlov, Leonid
2015-11-01
The expansion of a square integrable function on SL(2, C) into the sum of the principal series matrix coefficients with specially selected representation parameters was recently used in Loop Quantum Gravity [C. Rovelli and F. Vidotto, Covariant Loop Quantum Gravity: An Elementary Introduction to Quantum Gravity and Spinfoam Theory (Cambridge University Press, Cambridge, 2014) and C. Rovelli, Classical Quantum Gravity 28(11), 114005 (2011)]. In this paper, we prove that the sum used originally in Loop Quantum Gravity, $\sum_{j=0}^{\infty}\sum_{|m|\le j}\sum_{|n|\le j} D^{(j,\tau j)}_{jm,\,jn}(g)$, where $j, m, n \in \mathbb{Z}$ and $\tau \in \mathbb{C}$, converges to a function on SL(2, C); however, the limit is not a square integrable function, and therefore such sums cannot be used for a Peter-Weyl-like expansion. We propose an alternative expansion and prove that, for each fixed $m$, $\sum_{j=m}^{\infty} D^{(j,\tau j)}_{jm,\,jm}(g)$ is convergent and that the limit is a square integrable function on SL(2, C). We then prove the analog of the Peter-Weyl expansion: any $\psi(g) \in L^2(SL(2,\mathbb{C}))$ can be decomposed into the sum $\psi(g) = \sum_{j=m}^{\infty} j^2(1+\tau^2)\, c_{jmm}\, D^{(j,\tau j)}_{jm,\,jm}(g)$, with the Fourier coefficients $c_{jmm} = \int_{SL(2,\mathbb{C})} \psi(g)\, \overline{D^{(j,\tau j)}_{jm,\,jm}(g)}\, dg$, where $g \in SL(2,\mathbb{C})$, $\tau \in \mathbb{C}$, $\tau \ne \pm i$, $j, m \in \mathbb{Z}$, and $m$ is fixed. We also prove convergence of the sums $\sum_{j=|p|}^{\infty}\sum_{|m|\le j}\sum_{|n|\le j} d^{\,j/2}_{pm}\, D^{(j,\tau j)}_{jm,\,jn}(g)$, where $d^{\,j/2}_{|p|m} = \tfrac{j+1}{2}\int_{SU(2)} \phi(u)\, \overline{D^{\,j/2}_{|p|m}(u)}\, du$ is the Fourier transform of $\phi(u)$ and $p, j, m, n \in \mathbb{Z}$, $\tau \in \mathbb{C}$, $u \in SU(2)$, $g \in SL(2,\mathbb{C})$, thus establishing a map between the square integrable functions on SU(2) and a space of functions on SL(2, C). Such maps were first used in Rovelli [Class. Quantum Grav. 28, 11 (2011)].
Determination of suitable drying curve model for bread moisture loss during baking
NASA Astrophysics Data System (ADS)
Soleimani Pour-Damanab, A. R.; Jafary, A.; Rafiee, S.
2013-03-01
This study presents mathematical modelling of bread moisture loss, or drying, during baking in a conventional bread baking process. In order to estimate and select the appropriate moisture loss curve equation, 11 different models, semi-theoretical and empirical, were applied to the experimental data and compared according to their correlation coefficients, chi-squared test and root mean square error, which were obtained by nonlinear regression analysis. Consequently, of all the drying models, the Page model was selected as the best one according to its correlation coefficient, chi-squared test and root mean square error values, and its simplicity. The mean absolute estimation errors of the proposed model, obtained by linear regression analysis for the natural and forced convection modes, were 2.43% and 4.74%, respectively.
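A minimal sketch of fitting the Page thin-layer drying model, MR = exp(-k*t^n), and computing the selection statistics, assuming hypothetical baking-time and moisture-ratio data:

    import numpy as np
    from scipy.optimize import curve_fit

    def page_model(t, k, n):
        """Page drying model: moisture ratio MR = exp(-k * t**n)."""
        return np.exp(-k * t ** n)

    # Hypothetical baking times (min) and measured moisture ratios
    t = np.array([0, 5, 10, 15, 20, 25, 30], float)
    mr = np.array([1.00, 0.82, 0.65, 0.52, 0.41, 0.33, 0.27])

    (k, n), _ = curve_fit(page_model, t, mr, p0=(0.05, 1.0), bounds=(0, np.inf))
    resid = mr - page_model(t, k, n)
    rmse = np.sqrt(np.mean(resid ** 2))
    chi2 = np.sum(resid ** 2) / (len(t) - 2)            # reduced chi-square, 2 fitted parameters
    r = np.corrcoef(mr, page_model(t, k, n))[0, 1]      # correlation coefficient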
ERIC Educational Resources Information Center
Stanley, Julian C.; Livingston, Samuel A.
Besides the ubiquitous Pearson product-moment r, there are a number of other measures of relationship that are attenuated by errors of measurement and for which the relationship between true measures can be estimated. Among these are the correlation ratio (eta squared), Kelley's unbiased correlation ratio (epsilon squared), Hays' omega squared,…
Improving Automated Endmember Identification for Linear Unmixing of HyspIRI Spectral Data.
NASA Astrophysics Data System (ADS)
Gader, P.
2016-12-01
The size of data sets produced by imaging spectrometers is increasing rapidly, and there is already a processing bottleneck. Part of the reason for this bottleneck is the need for expert input using interactive software tools. This process can be very time consuming and laborious but is currently crucial to ensuring the quality of the analysis. Automated algorithms can mitigate this problem. Although it is unlikely that processing systems can become completely automated, there is an urgent need to increase the level of automation. Spectral unmixing is a key component of processing HyspIRI data. Algorithms such as MESMA have been demonstrated to achieve good results but require careful, expert construction of endmember libraries. Unfortunately, many endmembers found by automated endmember-finding algorithms are deemed unsuitable by experts because they are not physically reasonable, even though such endmembers can achieve very low errors between the linear mixing model and the original data. Therefore, this error is not a reasonable way to resolve the problem of "non-physical" endmembers. There are many potential approaches for resolving these issues, including using Bayesian priors, but very little attention has been given to this problem. The study reported on here considers a modification of the Sparsity Promoting Iterated Constrained Endmember (SPICE) algorithm. SPICE finds endmembers and abundances and estimates the number of endmembers. The SPICE algorithm seeks to minimize a quadratic objective function with respect to endmembers E and fractions P. The modified SPICE algorithm, which we refer to as SPICED, is obtained by adding the term D to the objective function. The term D pressures the algorithm to minimize the sum of the squared differences between each endmember and a weighted sum of the data. By appropriately modifying this term, the endmembers are pushed towards a subset of the data, with the potential for becoming exactly equal to data points. The algorithm has been applied to spectral data and the differences between the endmembers produced by the two algorithms were recorded. The results so far are that the endmembers found by SPICED are approximately 25% closer to the data, with indistinguishable reconstruction error compared to those found using SPICE.
Semi-automatic aircraft control system
NASA Technical Reports Server (NTRS)
Gilson, Richard D. (Inventor)
1978-01-01
A flight control type system which provides a tactile readout to the hand of a pilot for directing elevator control during both approach to flare-out and departure maneuvers. For altitudes above flare-out, the system sums the instantaneous coefficient of lift signals of a lift transducer with a generated signal representing ideal coefficient of lift for approach to flare-out, i.e., a value of about 30% below stall. Error signals resulting from the summation are read out by the noted tactile device. Below flare altitude, an altitude responsive variation is summed with the signal representing ideal coefficient of lift to provide error signal readout.
NASA Astrophysics Data System (ADS)
Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong
2013-04-01
Overpressure is one important cause of the domino effect in accidents involving chemical process equipment. Models considering propagation probability and threshold values of the domino effect caused by overpressure have been proposed in a previous study. In order to prove the rationality and validity of the reported models, the two boundary values separating the three reported damage degrees were treated as random variables in the interval [0, 100%]. Based on the overpressure data for damage to the equipment and the damage state, and the calculation method reported in the references, the mean square errors of the four categories of overpressure damage probability models were calculated with random boundary values, and a relationship of mean square error versus the two boundary values was obtained. At its minimum, and compared with the result of the present work, the mean square error decreases by about 3%. The error is therefore within the acceptable range for engineering applications, and the reported models can be considered reasonable and valid.
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
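A minimal sketch of a weighted least squares estimate together with a residual-based empirical covariance, assuming a linear measurement model; the paper's exact reinterpretation of the WLS equations may differ:

    import numpy as np

    def wls_with_empirical_cov(H, y, W):
        """Weighted least squares estimate with theoretical and empirical covariances.
        H: design matrix, y: observation vector, W: observation weight matrix."""
        HtW = H.T @ W
        P_theory = np.linalg.inv(HtW @ H)       # theoretical state error covariance
        x_hat = P_theory @ (HtW @ y)            # WLS state estimate
        r = y - H @ x_hat                       # post-fit residuals
        # Scale the theoretical covariance by the weighted residual variance factor so
        # that unmodeled error sources inflate the reported uncertainty.
        dof = len(y) - len(x_hat)
        P_empirical = ((r @ W @ r) / dof) * P_theory
        return x_hat, P_theory, P_empirical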
Modeling radiation forces acting on TOPEX/Poseidon for precision orbit determination
NASA Technical Reports Server (NTRS)
Marshall, J. A.; Luthcke, S. B.; Antreasian, P. G.; Rosborough, G. W.
1992-01-01
Geodetic satellites such as GEOSAT, SPOT, ERS-1, and TOPEX/Poseidon require accurate orbital computations to support the scientific data they collect. Until recently, gravity field mismodeling was the major source of error in precise orbit definition. However, albedo and infrared re-radiation, and spacecraft thermal imbalances produce in combination no more than a 6-cm radial root-mean-square (RMS) error over a 10-day period. This requires the development of nonconservative force models that take the satellite's complex geometry, attitude, and surface properties into account. For TOPEX/Poseidon, a 'box-wing' satellite form was investigated that models the satellite as a combination of flat plates arranged in a box shape with a connected solar array. The nonconservative forces acting on each of the eight surfaces are computed independently, yielding vector accelerations which are summed to compute the total aggregate effect on the satellite center-of-mass. In order to test the validity of this concept, 'micro-models' based on finite element analysis of TOPEX/Poseidon were used to generate acceleration histories in a wide variety of orbit orientations. These profiles are then compared to the box-wing model. The results of these simulations and their implication on the ability to precisely model the TOPEX/Poseidon orbit are discussed.
Basha, Shaik; Jaiswar, Santlal; Jha, Bhavanath
2010-09-01
The biosorption equilibrium isotherms of Ni(II) onto the marine brown alga Lobophora variegata, chemically modified by CaCl(2), were studied and modeled. To predict the biosorption isotherms and to determine the characteristic parameters for process design, twenty-three one-, two-, three-, four- and five-parameter isotherm models were applied to the experimental data. The interaction among biosorbed molecules is attractive, and biosorption is carried out on energetically different sites and is an endothermic process. The five-parameter Fritz-Schluender model gives the most accurate fit, with high regression coefficient R2 (0.9911-0.9975) and F-ratio (118.03-179.96) and low standard error, SE (0.0902-0.1556), and residual sum of squares error, SSE (0.0012-0.1789), values for all experimental data in comparison to the other models. The biosorption isotherm models fitted the experimental data in the order: Fritz-Schluender (five-parameter) > Freundlich (two-parameter) > Langmuir (two-parameter) > Khan (three-parameter) > Fritz-Schluender (four-parameter). The thermodynamic parameters DeltaG(0), DeltaH(0) and DeltaS(0) have been determined, and they indicate that the sorption of Ni(II) onto L. variegata was spontaneous and endothermic in nature.
Analysis of tractable distortion metrics for EEG compression applications.
Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando
2012-07-01
Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio.
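A short sketch of the two distortion metrics discussed, assuming original and reconstructed EEG arrays in the same units (e.g. microvolts); note that some PRD variants omit the mean removal in the denominator:

    import numpy as np

    def prd(original, reconstructed):
        """Percentage root-mean-square difference: relative, dimensionless (%)."""
        original = np.asarray(original, float)
        reconstructed = np.asarray(reconstructed, float)
        num = np.sum((original - reconstructed) ** 2)
        den = np.sum((original - np.mean(original)) ** 2)
        return 100.0 * np.sqrt(num / den)

    def rmse(original, reconstructed):
        """Root-mean-square error: absolute distortion in the signal's own units,
        directly comparable against clinical noise guidelines."""
        original = np.asarray(original, float)
        reconstructed = np.asarray(reconstructed, float)
        return np.sqrt(np.mean((original - reconstructed) ** 2))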
Yock, Adam D; Kim, Gwe-Ya
2017-09-01
To present the k-means clustering algorithm as a tool to address treatment planning considerations characteristic of stereotactic radiosurgery using a single isocenter for multiple targets. For 30 patients treated with stereotactic radiosurgery for multiple brain metastases, the geometric centroid and radius of each metastasis were determined from the treatment planning system. In-house software used these data, together with weighted and unweighted versions of the k-means clustering algorithm, to group the targets to be treated with a single isocenter and to position each isocenter. The algorithm results were evaluated using the within-cluster sum of squares as well as a minimum target coverage metric that considered the effect of target size. Both versions of the algorithm were applied to an example patient to demonstrate the prospective determination of the appropriate number and location of isocenters. Both weighted and unweighted versions of the k-means algorithm were applied successfully to determine the number and position of isocenters. Comparing the two, both the within-cluster sum of squares metric and the minimum target coverage metric resulting from the unweighted version were less than those from the weighted version. The average magnitudes of the differences were small (-0.2 cm2 and 0.1% for the within-cluster sum of squares and minimum target coverage, respectively) but statistically significant (Wilcoxon signed-rank test, P < 0.01). The differences between the versions of the k-means clustering algorithm represented an advantage of the unweighted version for the within-cluster sum of squares metric, and an advantage of the weighted version for the minimum target coverage metric. While additional treatment planning considerations have a large influence on the final treatment plan quality, both versions of the k-means algorithm provide automatic, consistent, quantitative, and objective solutions to the tasks associated with SRS treatment planning using a single isocenter for multiple targets. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
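A sketch of grouping target centroids with weighted or unweighted k-means, assuming scikit-learn and using the target radius as the weight in the weighted case (the paper's weighting scheme may differ):

    import numpy as np
    from sklearn.cluster import KMeans

    def group_targets(centroids, radii, n_isocenters, weighted=True):
        """Cluster target centroids into isocenter groups and report the
        within-cluster sum of squares (WCSS) for the chosen weighting."""
        X = np.asarray(centroids, float)
        weights = np.asarray(radii, float) if weighted else None
        km = KMeans(n_clusters=n_isocenters, n_init=10, random_state=0)
        labels = km.fit_predict(X, sample_weight=weights)
        return labels, km.cluster_centers_, km.inertia_   # inertia_ is the (weighted) WCSS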
A new color vision test to differentiate congenital and acquired color vision defects.
Shin, Young Joo; Park, Kyu Hyung; Hwang, Jeong-Min; Wee, Won Ryang; Lee, Jin Hak
2007-07-01
To investigate the efficacy of a novel computer-controlled color test for the differentiation of congenital and acquired color vision deficiency. Observational cross-sectional study. Thirty-one patients with congenital color vision deficiency and 134 patients with acquired color vision deficiency with a Snellen visual acuity better than 20/30 underwent an ophthalmologic examination including the Ishihara color test, Hardy-Rand-Rittler test, Nagel anomaloscopy, and the Seohan computerized hue test between June, 2003, and January, 2004. To investigate the type of color vision defect, a graph of the Seohan computerized hue test was divided into 4 quadrants and error scores in each quadrant were summated. The ratio between the sums of error scores of quadrants I and III (Q1+Q3) and those of quadrants II and IV (Q2+Q4) was calculated. Error scores and ratio in quadrant analysis of the Seohan computerized hue test. The Seohan computerized hue test showed that the sum of Q2+Q4 was significantly higher than the sum of Q1+Q3 in congenital color vision deficiency (P<0.01, paired t test) and that the sum of Q2+Q4 was significantly lower than the sum of Q1+Q3 in acquired color vision deficiency (P<0.01, paired t test). In terms of discriminating congenital and acquired color vision deficiency, the ratio in quadrant analysis had 93.3% sensitivity and 98.5% specificity with a reference value of 1.5 by the Seohan computerized hue test (95% confidence interval). The quadrant analysis and ratio of (Q2+Q4)/(Q1+Q3) using the Seohan computerized hue test effectively differentiated congenital and acquired color vision deficiency.
Wilberg, Dale E.; Stolp, Bernard J.
2005-01-01
This report contains the results of an October 2001 seepage investigation conducted along a reach of the Escalante River in Utah extending from the U.S. Geological Survey streamflow-gaging station near Escalante to the mouth of Stevens Canyon. Discharge was measured at 16 individual sites along 15 consecutive reaches. Total reach length was about 86 miles. A reconnaissance-level sampling of water for tritium and chlorofluorocarbons was also done. In addition, hydrologic and water-quality data previously collected and published by the U.S. Geological Survey for the 2,020-square-mile Escalante River drainage basin were compiled and are presented in 12 tables. These data were collected from 64 surface-water sites and 28 springs from 1909 to 2002. None of the 15 consecutive reaches along the Escalante River had a measured loss or gain that exceeded the measurement error. All discharge measurements taken during the seepage investigation were assigned a qualitative rating of accuracy that ranged from 5 percent to greater than 8 percent of the actual flow. Summing the potential error for each measurement and dividing by the maximum of either the upstream discharge and any tributary inflow, or the downstream discharge, determined the normalized error for a reach. This was compared to the computed loss or gain, which also was normalized to the maximum discharge. A loss or gain for a specified reach is considered significant when the loss or gain (normalized percentage difference) is greater than the measurement error (normalized percentage error). The percentage difference and percentage error were normalized to allow comparison between reaches with different amounts of discharge. The plate that accompanies the report is 36" by 40" and can be printed in 16 tiles, 8.5 by 11 inches. An index for the tiles is located on the lower left-hand side of the plate. Using Adobe Acrobat, the plate can be viewed independent of the report; all Acrobat functions are available.
A Survey of Terrain Modeling Technologies and Techniques
2007-09-01
Test planning, rehearsal, and distributed test events for Future Combat Systems (FCS) require... Figures in the report compare elevation errors of the DSM (original data, blue circles) and the DTM (bare earth, processed by Intermap, red squares) along five lines of control points, including the distribution of errors for line No. 729.
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng
2006-12-01
An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and the squared-error variation, denoted by (e^2, Δe^2), into a forgetting factor λ. For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step size μ. This receiver is capable of providing both fast convergence/tracking capability and small steady-state misadjustment as compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform, respectively, other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB in bit-error-rate (BER) for multipath fading channels.
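A simplified sketch of an LMS filter whose step size is adapted from the instantaneous squared error; this is a crude stand-in for the fuzzy-inference control described above, not the authors' receiver:

    import numpy as np

    def vss_lms(x, d, n_taps=8, mu_min=0.001, mu_max=0.05):
        """Variable step-size LMS: a large squared error increases the step size (faster
        tracking), a small one decreases it (lower steady-state misadjustment).
        x: input samples, d: desired samples of the same length."""
        w = np.zeros(n_taps)
        e_hist = np.zeros(len(x))
        for n in range(n_taps, len(x)):
            u = x[n - n_taps:n][::-1]                      # regressor, most recent first
            e = d[n] - w @ u                               # a priori error
            mu = np.clip(mu_max * e * e / (1.0 + e * e), mu_min, mu_max)
            w += mu * e * u                                # LMS weight update
            e_hist[n] = e
        return w, e_hist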
Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera
NASA Astrophysics Data System (ADS)
Rahman, Samiur; Ullah, Sana; Ullah, Sehat
2018-01-01
Obstacle detection can improve the mobility as well as the safety of visually impaired people. In this paper, we present a system using a mobile camera for visually impaired people. The proposed algorithm works in indoor environments and uses a very simple technique based on a few pre-stored floor images. In the indoor environment, all unique floor types are considered and a single image is stored for each unique floor type. These floor images are treated as reference images. The algorithm acquires an input image frame, and then a region of interest is selected and scanned for obstacles using the pre-stored floor images. The algorithm compares the present frame and the next frame and computes the mean square error of the two frames. If the mean square error is less than a threshold value α, then there is no obstacle in the next frame. If the mean square error is greater than α, then there are two possibilities: either there is an obstacle or the floor type has changed. In order to check whether the floor has changed, the algorithm computes the mean square error between the next frame and all stored floor types. If the minimum of these mean square errors is less than the threshold value α, then the floor has changed; otherwise an obstacle exists. The proposed algorithm works in real time and 96% accuracy has been achieved.
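A minimal sketch of the decision logic described above, assuming grayscale frames and reference floor images already cropped to the same region of interest (the threshold α is application-specific):

    import numpy as np

    def mse(a, b):
        """Mean square error between two equally sized grayscale frames."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        return np.mean((a - b) ** 2)

    def check_obstacle(current_roi, next_roi, floor_refs, alpha):
        """Return 'clear', 'floor_changed' or 'obstacle' following the described logic.
        floor_refs: list of reference images, one per unique floor type."""
        if mse(current_roi, next_roi) < alpha:
            return "clear"                                  # next frame matches current floor
        if min(mse(next_roi, ref) for ref in floor_refs) < alpha:
            return "floor_changed"                          # next frame matches another stored floor
        return "obstacle"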
Texture functions in image analysis: A computationally efficient solution
NASA Technical Reports Server (NTRS)
Cox, S. C.; Rose, J. F.
1983-01-01
A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
Nonlinear, discrete flood event models, 1. Bayesian estimation of parameters
NASA Astrophysics Data System (ADS)
Bates, Bryson C.; Townley, Lloyd R.
1988-05-01
In this paper (Part 1), a Bayesian procedure for parameter estimation is applied to discrete flood event models. The essence of the procedure is the minimisation of a sum of squares function for models in which the computed peak discharge is nonlinear in terms of the parameters. This objective function depends on the observed and computed peak discharges for several storms on the catchment, information on the structure of the observation error, and prior information on parameter values. The posterior covariance matrix gives a measure of the precision of the estimated parameters. The procedure is demonstrated using rainfall and runoff data from seven Australian catchments. It is concluded that the procedure is a powerful alternative to conventional parameter estimation techniques in situations where a number of floods are available for parameter estimation. Parts 2 and 3 (Bates, this volume; Bates and Townley, this volume) will discuss the application of statistical nonlinearity measures and prediction uncertainty analysis to calibrated flood models.
Estimation of proportions in mixed pixels through their region characterization
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
A region of mixed pixels can be characterized through the probability density function of the proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of the probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
Discriminability limits in spatio-temporal stereo block matching.
Jain, Ankit K; Nguyen, Truong Q
2014-05-01
Disparity estimation is a fundamental task in stereo imaging and is a well-studied problem. Recently, methods have been adapted to the video domain where motion is used as a matching criterion to help disambiguate spatially similar candidates. In this paper, we analyze the validity of the underlying assumptions of spatio-temporal disparity estimation, and determine the extent to which motion aids the matching process. By analyzing the error signal for spatio-temporal block matching under the sum of squared differences criterion and treating motion as a stochastic process, we determine the probability of a false match as a function of image features, motion distribution, image noise, and number of frames in the spatio-temporal patch. This performance quantification provides insight into when spatio-temporal matching is most beneficial in terms of the scene and motion, and can be used as a guide to select parameters for stereo matching algorithms. We validate our results through simulation and experiments on stereo video.
Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT
NASA Astrophysics Data System (ADS)
Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee
2014-03-01
State-of-the-art high resolution research tomography (HRRT) provides high resolution PET images with full 3D human brain scanning. However, the short time frames used in dynamic studies cause many problems related to the low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct the HRRT image with a high signal-to-noise ratio that provides accurate information for dynamic data. The new algorithm was evaluated with simulated images, empirical phantoms, and real human brain data. In addition, the time-activity curve was used to compare the reconstruction performance of the PDS-OSEM and OP-OSEM algorithms on dynamic data. According to the simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and a smaller average sum of squared errors than OP-OSEM. The presented algorithm is useful for providing quality images under low count rates in dynamic studies with short scan times.
Bahaz, Mohamed; Benzid, Redha
2018-03-01
Electrocardiogram (ECG) signals are often contaminated with artefacts and noise, which can lead to incorrect diagnosis when they are visually inspected by cardiologists. In this paper, the well-known discrete Fourier series (DFS) is re-explored and an efficient DFS-based method is proposed to reduce the contribution of both baseline wander (BW) and powerline interference (PLI) noise in ECG records. In the first step, the exact number of low-frequency harmonics contributing to the BW is determined. Next, the baseline drift is estimated as the sum of all the associated Fourier sinusoidal components. Then, the baseline shift is discarded efficiently by subtracting its approximated version from the original biased ECG signal. Concerning the PLI, subtraction of the contributing harmonics, calculated in the same manner, efficiently reduces this type of noise. In addition to visual quality results, the proposed algorithm shows superior performance in terms of higher signal-to-noise ratio and smaller mean square error when compared to a DCT-based algorithm.
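A sketch of the baseline-wander step using low-frequency Fourier harmonics, assuming a fixed cutoff frequency; the paper instead determines the exact number of contributing harmonics:

    import numpy as np

    def remove_baseline_dfs(ecg, fs, bw_cutoff_hz=0.7):
        """Estimate the baseline wander as the sum of the Fourier harmonics below
        bw_cutoff_hz and subtract it from the record.
        ecg: 1-D signal, fs: sampling rate in Hz."""
        ecg = np.asarray(ecg, float)
        n = len(ecg)
        spec = np.fft.rfft(ecg)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        baseline_spec = np.where(freqs < bw_cutoff_hz, spec, 0.0)
        baseline = np.fft.irfft(baseline_spec, n)   # sum of the contributing harmonics
        return ecg - baseline, baseline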
Aerodynamic influence coefficient method using singularity splines
NASA Technical Reports Server (NTRS)
Mercer, J. E.; Weber, J. A.; Lesferd, E. P.
1974-01-01
A numerical lifting surface formulation, including computed results for planar wing cases, is presented. This formulation, referred to as the vortex spline scheme, combines the adaptability to complex shapes offered by paneling schemes with the smoothness and accuracy of loading function methods. The formulation employs a continuous distribution of singularity strength over a set of panels on a paneled wing. The basic distributions are independent, and each satisfies all the continuity conditions required of the final solution. These distributions are overlapped both spanwise and chordwise. Boundary conditions are satisfied in a least-squares error sense over the surface using a finite summing technique to approximate the integral. The current formulation uses the elementary horseshoe vortex as the basic singularity and is therefore restricted to linearized potential flow. As part of the study, a nonplanar development was considered, but the numerical evaluation of the lifting surface concept was restricted to planar configurations. Also, a second-order sideslip analysis based on an asymptotic expansion was investigated using the singularity spline formulation.
Bezerra, Rui M F; Fraga, Irene; Dias, Albino A
2013-01-01
Enzyme kinetic parameters are usually determined from initial rates; nevertheless, laboratory instruments only measure substrate or product concentration versus reaction time (progress curves). To overcome this problem we present a methodology which uses integrated models based on the Michaelis-Menten equation. The most severe practical limitation of progress curve analysis occurs when the enzyme shows a loss of activity under the chosen assay conditions. To avoid this problem it is possible to work with the same experimental points utilized for initial rate determination. This methodology is illustrated using integrated kinetic equations with the well-known reaction catalyzed by the alkaline phosphatase enzyme. In this work nonlinear regression was performed with the Solver supplement (Microsoft Office Excel), which is easy to work with and allows the convergence of the SSE (sum of squared errors) to be tracked graphically. The diagnosis of enzyme inhibition was performed according to the Akaike information criterion. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
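A sketch of progress-curve fitting with the closed-form integrated Michaelis-Menten equation, expressed through the Lambert W function and minimised by SSE with SciPy rather than the Excel Solver used in the paper; the substrate data and starting values are hypothetical:

    import numpy as np
    from scipy.special import lambertw
    from scipy.optimize import least_squares

    def substrate(t, vmax, km, s0):
        """Integrated Michaelis-Menten: S(t) = Km * W((S0/Km) * exp((S0 - Vmax*t)/Km))."""
        arg = (s0 / km) * np.exp((s0 - vmax * t) / km)
        return km * np.real(lambertw(arg))

    # Hypothetical progress-curve data (time in min, substrate in mM), S0 known
    t = np.linspace(0, 30, 7)
    s_obs = np.array([5.00, 3.95, 3.02, 2.21, 1.55, 1.04, 0.66])
    s0 = 5.0

    res = least_squares(lambda p: substrate(t, p[0], p[1], s0) - s_obs,
                        x0=(0.2, 1.0), bounds=(1e-6, np.inf))
    vmax_fit, km_fit = res.x
    sse = np.sum(res.fun ** 2)    # sum of squared errors at the optimum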
Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won
2011-01-01
To begin a zero accident campaign for industry, the first thing is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical change of the business environment after beginning the zero accident campaign through quantitative time series analysis methods. These methods include sum of squared errors (SSE), regression analysis method (RAM), exponential smoothing method (ESM), double exponential smoothing method (DESM), auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). The program is developed to estimate the accident rate, zero accident time and achievement probability of an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop a zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
Real estate value prediction using multivariate regression models
NASA Astrophysics Data System (ADS)
Manjula, R.; Jain, Shubham; Srivastava, Sharad; Rajiv Kher, Pranav
2017-11-01
The real estate market is one of the most competitive in terms of pricing, and prices tend to vary significantly based on a large number of factors; hence it is a prime field in which to apply the concepts of machine learning to optimize and predict prices with high accuracy. In this paper, we therefore present various important features to use when predicting housing prices with good accuracy. We describe regression models that use various features to achieve a lower residual sum of squares error. When using features in a regression model, some feature engineering is required for better prediction. Often a set of features (multiple regression) or polynomial regression (applying various powers of the features) is used to obtain a better model fit. Because these models are expected to be susceptible to overfitting, ridge regression is used to reduce it.
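A minimal sketch of polynomial features combined with ridge regularization, assuming scikit-learn and hypothetical housing data (area, bedrooms, age, and sale prices):

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler
    from sklearn.linear_model import Ridge

    # Hypothetical feature matrix and sale prices
    X = np.array([[1200, 3, 10], [1500, 4, 5], [900, 2, 20],
                  [2000, 4, 2], [1100, 3, 15]], float)
    y = np.array([250_000, 340_000, 180_000, 450_000, 230_000], float)

    # Polynomial expansion of the features plus an L2 (ridge) penalty to curb overfitting
    model = make_pipeline(PolynomialFeatures(degree=2), StandardScaler(), Ridge(alpha=1.0))
    model.fit(X, y)
    rss = np.sum((y - model.predict(X)) ** 2)   # residual sum of squares on the training data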
Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.
2015-01-01
The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3- compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3- and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error
NASA Astrophysics Data System (ADS)
Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi
2017-12-01
Prediction using a forecasting method is one of the most important activities for an organization. The selection of an appropriate forecasting method is also important, but the percentage error of a method matters even more if decision makers are to adopt the right culture. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the percentage of mistakes in the least squares method resulted in a percentage error of 9.77%, and it was decided that the least squares method is workable for time series and trend data.
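A minimal sketch of the two error measures, assuming aligned arrays of actual values and least-squares forecasts:

    import numpy as np

    def mad(actual, forecast):
        """Mean Absolute Deviation of the forecast errors, in the data's own units."""
        return np.mean(np.abs(np.asarray(actual, float) - np.asarray(forecast, float)))

    def mape(actual, forecast):
        """Mean Absolute Percentage Error (%); undefined where actual == 0."""
        actual = np.asarray(actual, float)
        forecast = np.asarray(forecast, float)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))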
A Geomagnetic Estimate of Mean Paleointensity
NASA Technical Reports Server (NTRS)
Voorhies, Coerte
2004-01-01
To test a statistical hypothesis about Earth's magnetic field against paleomagnetism, the present field is used to estimate time averaged paleointensity. The estimate uses the modern magnetic multipole spectrum R(n), which gives the mean square induction represented by spherical harmonics of degree n averaged over the sphere of radius a = 6371.2 km. The hypothesis asserts that the low degree multipole powers of the core-source field are distributed as chi-squared with 2n+1 degrees of freedom and expectation values {R(n)} = K[(n+1/2)/(n(n+1))](c/a)^(2n+4), where c is the 3480 km radius of Earth's core. (This is compatible with a field that is usually mainly a geocentric axial dipole.) Amplitude K is estimated by fitting theoretical to observational spectra through degree 12. The resulting calibrated expectation spectrum is summed through degree 12 to estimate the expected square intensity {F^2}. The sum also estimates {F^2} averaged over geologic time, in so far as the present magnetic spectrum is a fair sample of that generated in the past by core geodynamic processes.
NASA Technical Reports Server (NTRS)
Middleton, E. M.; Huemmrich, K. F.; Landis, D. R.; Black, T. A.; Barr, A. G.; McCaughey, J. H.
2016-01-01
This study evaluates a direct remote sensing approach from space for the determination of ecosystem photosynthetic light use efficiency (LUE), through measurement of vegetation reflectance changes expressed with the Photochemical Reflectance Index (PRI). The PRI is a normalized difference index based on spectral changes at a physiologically active wavelength (approximately 531 nanometers) as compared to a reference waveband, and is only available from a very few satellites. These include the two Moderate-Resolution Imaging Spectroradiometers (MODIS) on the Aqua and Terra satellites, each of which has a narrow (10-nanometer) ocean band centered at 531 nanometers. We examined several PRI variations computed with candidate reference bands, since MODIS lacks the traditional 570-nanometer reference band. The PRI computed using MODIS land band 1 (620-670 nanometers) gave the best performance for daily LUE estimation. Through rigorous statistical analyses over a large image collection (n equals 420), the success of relating in situ daily tower-derived LUE to MODIS observations for northern forests was strongly influenced by satellite viewing geometry. LUE was calculated from CO2 fluxes (moles per moles of carbon absorbed quanta) measured at instrumented Canadian Carbon Program flux towers in four Canadian forests: a mature fir site in British Columbia, mature aspen and black spruce sites in Saskatchewan, and a mixed deciduous/coniferous forest site in Ontario. All aspects of the viewing geometry had significant effects on the MODIS-PRI, including the view zenith angle (VZA), the view azimuth angle, and the displacement of the view azimuth relative to the solar principal plane, in addition to illumination-related variables. Nevertheless, we show that forward scatter sector views (VZA, 16 degrees-45 degrees) provided the strongest relationships to daily LUE, especially those collected in the early afternoon by Aqua (r squared = 0.83, RMSE (root mean square error) equals 0.003 moles per moles of carbon absorbed quanta). Nadir (VZA, 0 degrees plus or minus 15 degrees) and backscatter views (VZA, -16 degrees to -45 degrees) had lower performance in estimating LUE (nadir: r squared approximately equal to 0.62-0.67; backscatter: r squared approximately equal to 0.54-0.59) and similar estimation error (RMSE equals 0.004-0.005). When directional effects were not considered, only a moderately successful MODIS-PRI vs. LUE relationship (r squared equals 0.34, RMSE equals 0.007) was obtained in the full dataset (all views and sites, both satellites), but site-specific relationships were able to discriminate between coniferous and deciduous forests. Overall, MODIS-PRI values from Terra (late morning) were higher than those from Aqua (early afternoon), before/after the onset of diurnal stress responses expressed spectrally. Therefore, we identified ninety-two Terra-Aqua "same day" pairs, for which the sum of Terra morning and Aqua afternoon MODIS-PRI values (PRI(sub sum)), using all available directional observations, was linearly correlated with daily tower LUE (r squared equals 0.622, RMSE equals 0.013) and independent of site differences or meteorological information. Our study highlights the value of off-nadir directional reflectance observations, and the value of pairing morning and afternoon satellite observations to monitor stress responses that inhibit carbon uptake in Canadian forest ecosystems.
In addition, we show that MODIS-PRI values, when derived from either (i) forward views only or (ii) Terra/Aqua same-day (any view) combined observations, provided more accurate estimates of tower-measured daily LUE than those derived from either nadir or backscatter views or those calculated by the widely used semi-operational MODIS GPP model (MOD17), which is based on a theoretical maximum LUE and environmental data. Consequently, we demonstrate the importance of diurnal as well as off-nadir satellite observations for detecting vegetation physiological processes.
Combined proportional and additive residual error models in population pharmacokinetic modelling.
Proost, Johannes H
2017-11-15
In pharmacokinetic modelling, a combined proportional and additive residual error model is often preferred over a proportional or additive residual error model. Different approaches have been proposed, but a comparison between approaches is still lacking. The theoretical background of the methods is described. Method VAR assumes that the variance of the residual error is the sum of the statistically independent proportional and additive components; this method can be coded in three ways. Method SD assumes that the standard deviation of the residual error is the sum of the proportional and additive components. Using datasets from literature and simulations based on these datasets, the methods are compared using NONMEM. The different coding of methods VAR yield identical results. Using method SD, the values of the parameters describing residual error are lower than for method VAR, but the values of the structural parameters and their inter-individual variability are hardly affected by the choice of the method. Both methods are valid approaches in combined proportional and additive residual error modelling, and selection may be based on OFV. When the result of an analysis is used for simulation purposes, it is essential that the simulation tool uses the same method as used during analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
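A small sketch contrasting the two parameterizations of a combined residual error model, for a model prediction f, proportional component b and additive component a (variable names are illustrative):

    import numpy as np

    def resid_sd_var_method(f, a, b):
        """Method VAR: residual SD = sqrt((b*f)**2 + a**2); the variances of the
        statistically independent proportional and additive components add."""
        return np.sqrt((b * f) ** 2 + a ** 2)

    def resid_sd_sd_method(f, a, b):
        """Method SD: residual SD = b*f + a; the components add on the SD scale."""
        return b * f + a

    f = np.array([1.0, 10.0, 100.0])
    # For the same a and b, method SD yields a larger SD at every prediction, which is
    # consistent with the lower fitted residual-error parameters reported for method SD.
    print(resid_sd_var_method(f, a=0.5, b=0.1), resid_sd_sd_method(f, a=0.5, b=0.1))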
Linear error analysis of slope-area discharge determinations
Kirby, W.H.
1987-01-01
The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
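The first-order (Taylor-series) error propagation described above can be sketched generically as follows, for a discharge function q(x) of correlated inputs with covariance matrix C; the actual slope-area formula and covariances are not reproduced here:

    import numpy as np

    def propagated_variance(q, x0, cov, eps=1e-6):
        """First-order variance of q(x) at x0: var ≈ g^T C g, with g the gradient of q.
        The result is a weighted sum of the covariances of the observational errors."""
        x0 = np.asarray(x0, float)
        g = np.empty_like(x0)
        for i in range(len(x0)):                 # central-difference gradient
            dx = np.zeros_like(x0)
            dx[i] = eps * max(1.0, abs(x0[i]))
            g[i] = (q(x0 + dx) - q(x0 - dx)) / (2 * dx[i])
        return g @ np.asarray(cov, float) @ g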
Web-Based Model Visualization Tools to Aid in Model Optimization and Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Alder, J.; van Griensven, A.; Meixner, T.
2003-12-01
Individuals applying hydrologic models need quick, easy-to-use visualization tools that let them assess and understand model performance. We present here the Interactive Hydrologic Modeling (IHM) visualization toolbox. The IHM utilizes high-speed Internet access, the portability of the web and the increasing power of modern computers to provide an online toolbox for quick and easy model result visualization. This visualization interface allows for the interpretation and analysis of Monte-Carlo and batch model simulation results. Often a given project will generate several thousand or even hundreds of thousands of simulations. This large number of simulations creates a challenge for post-simulation analysis. IHM's goal is to solve this problem by loading all of the data into a database with a web interface that can dynamically generate graphs for the user according to their needs. IHM currently supports: a global sample statistics table (e.g. sum of squares error, sum of absolute differences, etc.), a top-ten simulations table and graphs, graphs of an individual simulation using time step data, objective-based dotty plots, threshold-based parameter cumulative density function graphs (as used in the regional sensitivity analysis of Spear and Hornberger), and 2D error surface graphs of the parameter space. IHM is suitable for anything from the simplest bucket model to the largest set of Monte-Carlo model simulations with a multi-dimensional parameter and model output space. By using a web interface, IHM offers the user complete flexibility in the sense that they can be anywhere in the world using any operating system. IHM can be a time-saving and money-saving alternative to spending time producing graphs or conducting analysis that may not be informative, or to being forced to purchase or use expensive, proprietary software. IHM is a simple, free method of interpreting and analyzing batch model results, and is suitable for novice to expert hydrologic modelers.
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2014 CFR
2014-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2013 CFR
2013-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2012 CFR
2012-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
47 CFR 87.145 - Acceptability of transmitters for licensing.
Code of Federal Regulations, 2011 CFR
2011-10-01
... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...
Optical Oversampled Analog-to-Digital Conversion
1992-06-29
hologram weights and interconnects in the digital image halftoning configuration. First, no temporal error diffusion occurs in the digital image... halftoning error diffusion architecture as demonstrated by Equation (6.1). Equation (6.2) ensures that the hologram weights sum to one so that the exact...optimum halftone image should be faster. Similarly, decreased convergence time suggests that an error diffusion filter with larger spatial dimensions
NASA Astrophysics Data System (ADS)
Zarychta, R.; Zarychta, A.
2013-12-01
Extraction of mineral resources, including rocks, usually causes significant changes to the landscape. The most interesting of these is the transformation of the relief, whose character and scale can be analysed by means of cartographic materials. Reconstruction of the relief of the period prior to exploitation is the starting point for such an investigation. It can be done on the basis of archival cartographic materials, which are difficult to obtain. However, overly varied morphological material for the area can lead to erroneous conclusions, which suggests interpreting three-dimensional models of the relief instead. Hence, the paper deals with reconstruction and visualisation of the relief (in the period before exploitation) of four sand fields of the old sand mine excavation "Siemonia". A geological map of Poland (Wojkowice sheet) has been used for the purpose. A geostatistical analysis has been performed on the map by means of the programmes Surfer 8 and ArcGIS 10.1. An estimation method called ordinary kriging has been applied; it is a B.L.U.E. (best linear unbiased estimator) in which the unbiasedness condition (the weights sum to 1) is fulfilled. The calculated error values (mean error, mean squared error and mean squared standardised error) obtained from the cross-validation procedure are, to a large extent, in agreement with the reference values given by numerous authors in the scientific literature. This confirms the proper "manual" fitting of the two mathematical spherical variogram models to the empirical variograms. The generated contour map of the investigated area (based on points estimated at the nodes of the interpolation grid), together with its three-dimensional digital model, represents the previous state of the investigated area more adequately (owing to clearer rendering of the relief) than the two other cartographic visualisations produced without geostatistical methods. The latter graphic presentation can therefore only be used to visualise the relief, without detailed geomorphological interpretation, owing to its inaccuracy. Detailed analyses should be based on a digital terrain model, accompanied by its contour map, obtained when the relief is reconstructed by means of geostatistical methods (especially ordinary kriging).
Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2015-01-01
This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2016-01-01
This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
Samsudin, Hayati; Auras, Rafael; Burgess, Gary; Dolan, Kirk; Soto-Valdez, Herlinda
2018-03-01
A two-step solution based on the boundary conditions of Crank's equations for mass transfer in a film was developed. Three driving factors, the diffusion (D), partition (Kp,f) and convective mass transfer (h) coefficients, govern the sorption and/or desorption kinetics of migrants from polymer films. These three parameters were simultaneously estimated. They provide in-depth insight into the physics of a migration process. The first step was used to find the combination of D, Kp,f and h that minimized the sum of squared errors (SSE) between the predicted and actual results. In step 2, an ordinary least squares (OLS) estimation was performed by using the proposed analytical solution containing D, Kp,f and h. Three selected migration studies of PLA/antioxidant-based films were used to demonstrate the use of this two-step solution. Additional parameter estimation approaches, such as sequential and bootstrap, were also performed to acquire better knowledge about the kinetics of migration. The proposed model successfully provided the initial guesses for D, Kp,f and h. The h value was determined without performing a specific experiment for it. By determining h together with D, under- or overestimation issues pertaining to a migration process can be avoided, since these two parameters are correlated. Copyright © 2017 Elsevier Ltd. All rights reserved.
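A minimal sketch of the SSE-minimization step described above, assuming a hypothetical predicted_migration() placeholder in place of the paper's analytical solution of Crank's equations (which is not reproduced in the abstract), and synthetic data:

```python
import numpy as np
from scipy.optimize import minimize

def predicted_migration(params, t):
    """Hypothetical placeholder for the analytical migration model that
    depends on D, Kp,f and h; the real solution follows Crank's equations."""
    D, Kpf, h = params
    # toy saturating curve standing in for the true migration kinetics
    return Kpf * (1.0 - np.exp(-(D * 1e9 + h) * t))

def sse(params, t, measured):
    """Sum of squared errors between predicted and measured migration."""
    return np.sum((predicted_migration(params, t) - measured) ** 2)

# illustrative data only (time in hours, migrated fraction)
t = np.array([0.5, 1, 2, 4, 8, 16], dtype=float)
measured = np.array([0.05, 0.09, 0.17, 0.28, 0.38, 0.42])

# step 1: search for the (D, Kp,f, h) combination minimizing the SSE
result = minimize(sse, x0=[1e-10, 0.5, 0.1], args=(t, measured),
                  method="Nelder-Mead")
print(result.x)  # these values would seed the subsequent OLS estimation (step 2)
```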
Newman, J; Egan, T; Harbourne, N; O'Riordan, D; Jacquier, J C; O'Sullivan, M
2014-08-01
Sensory evaluation can be problematic for ingredients with a bitter taste during the research and development phase of new food products. In this study, 19 dairy protein hydrolysates (DPHs) were analysed by an electronic tongue and characterised physicochemically; the data obtained from these methods were correlated with bitterness intensity as scored by a trained sensory panel, and each model was also assessed for its predictive capability. The physicochemical characteristics of the DPHs investigated were degree of hydrolysis (DH%), and data relating to peptide size and relative hydrophobicity from size exclusion chromatography (SEC) and reverse-phase (RP) HPLC. Partial least squares regression (PLS) was used to construct the prediction models. All PLS regressions had good correlations (0.78 to 0.93), the strongest being the combination of data obtained from SEC and RP-HPLC. However, the PLS model with the strongest predictive power was based on the e-tongue, which had the lowest root mean predicted residual error sum of squares (PRESS) in the study. The results show that the PLS models constructed with the e-tongue and with the combination of SEC and RP-HPLC have potential to be used for the prediction of bitterness, thus reducing the reliance on sensory analysis of DPHs in future food research. Copyright © 2014 Elsevier B.V. All rights reserved.
On the Limitations of Variational Bias Correction
NASA Technical Reports Server (NTRS)
Moradi, Isaac; Mccarty, Will; Gelaro, Ronald
2018-01-01
Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, forward operator error, and observation error, so all these errors are summed together and counted as observation error. We identify some sources of observation errors (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malinowski, Kathleen T.; Fischell Department of Bioengineering, University of Maryland, College Park, MD; McAvoy, Thomas J.
2012-04-01
Purpose: To investigate the effect of tumor site, measurement precision, tumor-surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor-surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased during the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor-surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3-3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.
GY SAMPLING THEORY AND GEOSTATISTICS: ALTERNATE MODELS OF VARIABILITY IN CONTINUOUS MEDIA
In the sampling theory developed by Pierre Gy, sample variability is modeled as the sum of a set of seven discrete error components. The variogram used in geostatisties provides an alternate model in which several of Gy's error components are combined in a continuous mode...
On some labelings of triangular snake and central graph of triangular snake graph
NASA Astrophysics Data System (ADS)
Agasthi, P.; Parvathi, N.
2018-04-01
A triangular snake Tn is obtained from a path u1, u2, …, un by joining ui and ui+1 to a new vertex wi for 1 ≤ i ≤ n-1. The central graph of a triangular snake, C(Tn), is obtained by subdividing each edge of Tn exactly once and joining all the non-adjacent vertices of Tn. In this paper, ways to construct square sum, square difference, root mean square, strongly multiplicative, even mean and odd mean labelings for the triangular snake and the central graph of the triangular snake are reported.
Three filters for visualization of phase objects with large variations of phase gradients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sagan, Arkadiusz; Antosiewicz, Tomasz J.; Szoplik, Tomasz
2009-02-20
We propose three amplitude filters for visualization of phase objects. They interact with the spectra of pure-phase objects in the frequency plane and are based on tangent and error functions as well as antisymmetric combination of square roots. The error function is a normalized form of the Gaussian function. The antisymmetric square-root filter is composed of two square-root filters to widen its spatial frequency spectral range. Their advantage over other known amplitude frequency-domain filters, such as linear or square-root graded ones, is that they allow high-contrast visualization of objects with large variations of phase gradients.
Four-Digit Numbers Which Are Squared Sums
ERIC Educational Resources Information Center
Coughlin, Heather; Jue, Brian
2009-01-01
There is a very natural way to divide a four-digit number into 2 two-digit numbers. Applying an algorithm to this pair of numbers, determine how often the original four-digit number reappears. (Contains 3 tables.)
Reconstruction of loads in the fibrosa and ventricularis of porcine aortic valves.
Vesely, I
1996-01-01
The main structural components of aortic valve cusps, the fibrosa and ventricularis, are preloaded by virtue of their attachment to each other. The fibrosa is under compression and the ventricularis is under tension. Once separated from each other, these internal stresses are relieved, and the fibrosa elongates and the ventricularis shrinks. It then becomes impossible to determine what fraction of the load is carried by the two layers at a given strain using the standard superposition of tension vs strain curves. To enable the superposition approach, we needed to adjust the tension/strain curves of the fibrosa and ventricularis, and duplicate the preload that exists in these layers. We therefore iteratively shifted these curves and compared their arithmetic sum to the tension curve for the whole intact cusp, using a sum-of-squares error function. In the radial direction, the best fits occurred when the fibrosa and ventricularis were shifted to the right and left by amounts corresponding to a true strain of epsilon = 0.26 and 0.10, respectively. In the circumferential direction, the best fit was achieved for shifts of epsilon = -0.11 and 0.010 for the fibrosa and ventricularis, respectively. This 26% compressive strain of the radial fibrosa compares well with direct observations. The reconstructed tension curves indicate that the ventricularis carries much of the radial loads, whereas circumferentially the two layers share loads equally up to 25% strain, beyond which the fibrosa takes over.
Spiral tracing on a touchscreen is influenced by age, hand, implement, and friction.
Heintz, Brittany D; Keenan, Kevin G
2018-01-01
Dexterity impairments are well documented in older adults, though it is unclear how these influence touchscreen manipulation. This study examined age-related differences while tracing on high- and low-friction touchscreens using the finger or stylus. 26 young and 24 older adults completed an Archimedes spiral tracing task on a touchscreen mounted on a force sensor. Root mean square error was calculated to quantify performance. Root mean square error increased by 29.9% for older vs. young adults using the fingertip, but was similar to young adults when using the stylus. Although other variables (e.g., touchscreen usage, sensation, and reaction time) differed between age groups, these variables were not related to increased error in older adults while using their fingertip. Root mean square error also increased on the low-friction surface for all subjects. These findings suggest that utilizing a stylus and increasing surface friction may improve touchscreen use in older adults.
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
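The decomposition described in the abstract can be written compactly as follows (notation assumed, not taken from the paper):

```latex
\mathrm{MSEP}_{\mathrm{uncertain}}(X)
  \;=\; \underbrace{\bigl(\mathbb{E}[\hat{y}] - y\bigr)^{2}}_{\text{squared bias (estimated from hindcasts)}}
  \;+\; \underbrace{\operatorname{Var}_{\text{structure, inputs, parameters}}\!\bigl(\hat{y}\bigr)}_{\text{model variance (estimated from a simulation experiment)}}
```

The random effects ANOVA mentioned above then apportions the variance term among the structure, input and parameter sources.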
Zhu, Yuanheng; Zhao, Dongbin; Yang, Xiong; Zhang, Qichao
2018-02-01
Sum of squares (SOS) polynomials have provided a computationally tractable way to deal with inequality constraints appearing in many control problems. It can also act as an approximator in the framework of adaptive dynamic programming. In this paper, an approximate solution to the optimal control of polynomial nonlinear systems is proposed. Under a given attenuation coefficient, the Hamilton-Jacobi-Isaacs equation is relaxed to an optimization problem with a set of inequalities. After applying the policy iteration technique and constraining inequalities to SOS, the optimization problem is divided into a sequence of feasible semidefinite programming problems. With the converged solution, the attenuation coefficient is further minimized to a lower value. After iterations, approximate solutions to the smallest -gain and the associated optimal controller are obtained. Four examples are employed to verify the effectiveness of the proposed algorithm.
A theoretical study of the electronic transition moment for the C2 Swan band system
NASA Technical Reports Server (NTRS)
Arnold, J. O.; Langhoff, S. R.
1978-01-01
Large-scale self-consistent-field plus configuration-interaction calculations have been performed for the a 3Pi_u and d 3Pi_g states of C2. The theoretical potential curves are in good agreement with those found by a Klein-Dunham analysis of measured molecular constants in terms of shape and excitation energy. The sum of the squares of the theoretical transition moments between the states at 2.44 bohr is 4.12 a.u., which agrees with the results of shock tube measurements. The variation in the sum of the squares of the theoretical moments with internuclear separation agrees with the values of Danylewych and Nicholls (1974). Based on the data for C2 and other molecules, it is suggested that CI calculations using near Hartree-Fock quality Slater basis sets produce highly reliable transition moments.
NASA Astrophysics Data System (ADS)
Imani Masouleh, Mehdi; Limebeer, David J. N.
2018-07-01
In this study we will estimate the region of attraction (RoA) of the lateral dynamics of a nonlinear single-track vehicle model. The tyre forces are approximated using rational functions that are shown to capture the nonlinearities of tyre curves significantly better than polynomial functions. An existing sum-of-squares (SOS) programming algorithm for estimating regions of attraction is extended to accommodate the use of rational vector fields. This algorithm is then used to find an estimate of the RoA of the vehicle lateral dynamics. The influence of vehicle parameters and driving conditions on the stability region are studied. It is shown that SOS programming techniques can be used to approximate the stability region without resorting to numerical integration. The RoA estimate from the SOS algorithm is compared to the existing results in the literature. The proposed method is shown to obtain significantly better RoA estimates.
NASA Astrophysics Data System (ADS)
Gassara, H.; El Hajjaji, A.; Chaabane, M.
2017-07-01
This paper investigates the problem of observer-based control for two classes of polynomial fuzzy systems with time-varying delay. The first class concerns a special case where the polynomial matrices do not depend on the estimated state variables. The second one is the general case where the polynomial matrices could depend on unmeasurable system states that will be estimated. For the last case, two design procedures are proposed. The first one gives the polynomial fuzzy controller and observer gains in two steps. In the second procedure, the designed gains are obtained using a single-step approach to overcome the drawback of a two-step procedure. The obtained conditions are presented in terms of sum of squares (SOS) which can be solved via the SOSTOOLS and a semi-definite program solver. Illustrative examples show the validity and applicability of the proposed results.
Hilbert's 17th Problem and the Quantumness of States
NASA Astrophysics Data System (ADS)
Korbicz, J. K.; Cirac, J. I.; Wehr, Jan; Lewenstein, M.
2005-04-01
A state of a quantum system can be regarded as classical (quantum) with respect to measurements of a set of canonical observables if and only if there exists (does not exist) a well defined, positive phase-space distribution, the so called Glauber-Sudarshan P representation. We derive a family of classicality criteria that requires that the averages of positive functions calculated using P representation must be positive. For polynomial functions, these criteria are related to Hilbert’s 17th problem, and have physical meaning of generalized squeezing conditions; alternatively, they may be interpreted as nonclassicality witnesses. We show that every generic nonclassical state can be detected by a polynomial that is a sum-of-squares of other polynomials. We introduce a very natural hierarchy of states regarding their degree of quantumness, which we relate to the minimal degree of a sum-of-squares polynomial that detects them.
Hypothesis Testing Using Factor Score Regression
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2015-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
Some Results on Mean Square Error for Factor Score Prediction
ERIC Educational Resources Information Center
Krijnen, Wim P.
2006-01-01
For the confirmatory factor model a series of inequalities is given with respect to the mean square error (MSE) of three main factor score predictors. The eigenvalues of these MSE matrices are a monotonic function of the eigenvalues of the matrix Γ_ρ = Θ^(1/2) Λ_ρ′ Ψ_ρ^…
Weighted linear regression using D2H and D2 as the independent variables
Hans T. Schreuder; Michael S. Williams
1998-01-01
Several error structures for weighted regression equations used for predicting volume were examined for 2 large data sets of felled and standing loblolly pine trees (Pinus taeda L.). The generally accepted model with variance of error proportional to the value of the covariate squared (D²H = diameter squared times height, or D...
ERIC Educational Resources Information Center
Savalei, Victoria
2012-01-01
The fit index root mean square error of approximation (RMSEA) is extremely popular in structural equation modeling. However, its behavior under different scenarios remains poorly understood. The present study generates continuous curves where possible to capture the full relationship between RMSEA and various "incidental parameters," such as…
A method of bias correction for maximal reliability with dichotomous measures.
Penev, Spiridon; Raykov, Tenko
2010-02-01
This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.
NASA Technical Reports Server (NTRS)
Rutledge, Charles K.
1988-01-01
The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The appropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-15
In order to improve the performance of the hard decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce the complexity of decoding, a sum-of-the-magnitude hard decision decoding algorithm based on loop update detection is proposed. This also helps ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and a loop update detection algorithm is introduced. The bits corresponding to the erroneous code word are flipped multiple times, searched in order of most likely error probability, to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
Eta Squared, Partial Eta Squared, and Misreporting of Effect Size in Communication Research.
ERIC Educational Resources Information Center
Levine, Timothy R.; Hullett, Craig R.
2002-01-01
Alerts communication researchers to potential errors stemming from the use of SPSS (Statistical Package for the Social Sciences) to obtain estimates of eta squared in analysis of variance (ANOVA). Strives to clarify issues concerning the development and appropriate use of eta squared and partial eta squared in ANOVA. Discusses the reporting of…
Freye, Chris E; Fitz, Brian D; Billingsley, Matthew C; Synovec, Robert E
2016-06-01
The chemical composition and several physical properties of RP-1 fuels were studied using comprehensive two-dimensional (2D) gas chromatography (GC×GC) coupled with flame ionization detection (FID). A "reversed column" GC×GC configuration was implemented with an RTX-wax column as the first dimension (1D) and an RTX-1 column as the second dimension (2D). Modulation was achieved using a high temperature diaphragm valve mounted directly in the oven. Using leave-one-out cross-validation (LOOCV), the summed GC×GC-FID signal of three compound-class selective 2D regions (alkanes, cycloalkanes, and aromatics) was regressed against previously measured ASTM-derived values for these compound classes, yielding root mean square errors of cross validation (RMSECV) of 0.855, 0.734, and 0.530 mass%, respectively. For comparison, using partial least squares (PLS) analysis with LOOCV, the GC×GC-FID signal of the entire 2D separations was regressed against the same ASTM values, yielding a linear trend for the three compound classes (alkanes, cycloalkanes, and aromatics), with RMSECV values of 1.52, 2.76, and 0.945 mass%, respectively. Additionally, a more detailed PLS analysis was undertaken of the compound classes (n-alkanes, iso-alkanes, mono-, di-, and tri-cycloalkanes, and aromatics) and of physical properties previously determined by ASTM methods (such as net heat of combustion, hydrogen content, density, kinematic viscosity, sustained boiling temperature and vapor rise temperature). Results from these PLS studies using the relatively simple-to-use and inexpensive GC×GC-FID instrumental platform are compared to previously reported results using the GC×GC-TOFMS instrumental platform. Copyright © 2016 Elsevier B.V. All rights reserved.
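For reference, the RMSECV criterion under leave-one-out cross-validation is conventionally defined as below (a standard definition, assumed consistent with the usage above):

```latex
\mathrm{RMSECV} \;=\; \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - \hat{y}_{(-i)}\bigr)^{2}},
```

where ŷ(-i) is the value predicted for sample i by a model calibrated with that sample left out.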
Chen, Baisheng; Wu, Huanan; Li, Sam Fong Yau
2014-03-01
To overcome the challenging task of selecting an appropriate pathlength for wastewater chemical oxygen demand (COD) monitoring with high accuracy by UV-vis spectroscopy in the wastewater treatment process, a variable pathlength approach combined with partial least squares regression (PLSR) was developed in this study. Two new strategies were proposed to extract relevant information from the UV-vis spectral data of variable pathlength measurements. The first strategy was data fusion, with two fusion levels: low-level data fusion (LLDF) and mid-level data fusion (MLDF). Predictive accuracy was found to improve, indicated by lower root-mean-square errors of prediction (RMSEP) compared with those obtained for single pathlength measurements. Both fusion levels were found to deliver very robust PLSR models, with residual predictive deviations (RPD) greater than 3 (i.e. 3.22 and 3.29, respectively). The second strategy involved calculating the slopes of absorbance against pathlength at each wavelength to generate slope-derived spectra. Without the requirement to select an optimal pathlength, the predictive accuracy (RMSEP) was improved by 20-43% as compared to single pathlength spectroscopy. Compared to the nine-factor models from the fusion strategy, the PLSR model from slope-derived spectroscopy was found to be more parsimonious, with only five factors, and more robust, with a residual predictive deviation (RPD) of 3.72. It also offered excellent correlation of predicted and measured COD values, with R² of 0.936. In sum, variable pathlength spectroscopy with the two proposed data analysis strategies proved to be successful in enhancing the prediction performance of COD in wastewater and showed high potential to be applied in on-line water quality monitoring. Copyright © 2013 Elsevier B.V. All rights reserved.
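A minimal sketch of the slope-derived-spectrum idea (illustrative pathlengths, random absorbance values and array shapes are assumptions; the paper's actual data handling is not reproduced here):

```python
import numpy as np

# absorbance measured at several pathlengths (rows) across wavelengths (columns);
# values here are synthetic and only illustrate the shapes involved
pathlengths = np.array([2.0, 5.0, 10.0, 20.0])                  # mm, assumed
absorbance = np.random.rand(4, 256) * pathlengths[:, None] * 0.01

# fit a straight line of absorbance vs. pathlength at every wavelength at once
# (np.polyfit handles a 2-D y, one column per wavelength)
slopes, intercepts = np.polyfit(pathlengths, absorbance, deg=1)

slope_spectrum = slopes      # the slope-derived spectrum for this sample
print(slope_spectrum.shape)  # (256,), used as the predictor block in PLSR
```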
NASA Astrophysics Data System (ADS)
Li, Sui-xian; Chen, Haiyang; Sun, Min; Cheng, Zaijun
2009-11-01
Aimed at improving the calculation accuracy of the energy deposition of electrons traveling in solids, a method we call the optimal subdivision number searching algorithm is proposed. When treating the energy deposition of electrons traveling in solids, large calculation errors are found; we recognized that these result from the dividing and summing used when calculating the integral. Based on the results of earlier research, we propose a further subdividing and summing method. For β particles with energies spanning the entire spectrum, the energy data are set to integer multiples of 1 keV and the subdivision number is varied from 1 to 30, yielding collections of energy deposition calculation errors. Searching for the minimum error in these collections gives the corresponding energy and subdivision number pairs, as well as the optimal subdivision number. The method is carried out for four solid materials, Al, Si, Ni and Au, to calculate energy deposition. The results show that the calculation error is reduced by an order of magnitude with the improved algorithm.
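A loose sketch of the search step, assuming a hypothetical deposition_integrand() and a midpoint-rule quadrature purely for illustration (the paper's actual integrand and summation scheme are not given in the abstract):

```python
import numpy as np

def deposition_integrand(x):
    """Hypothetical stand-in for the energy-deposition integrand."""
    return np.exp(-x) * np.sqrt(x + 1e-12)

def composite_sum(n_sub, a=0.0, b=1.0):
    """Divide [a, b] into n_sub intervals and sum midpoint contributions."""
    edges = np.linspace(a, b, n_sub + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    return np.sum(deposition_integrand(mids) * np.diff(edges))

reference = composite_sum(10000)   # high-resolution reference value

# search subdivision numbers 1..30 for the smallest calculation error
errors = {n: abs(composite_sum(n) - reference) for n in range(1, 31)}
optimal_n = min(errors, key=errors.get)
print(optimal_n, errors[optimal_n])
```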
Ewald method for polytropic potentials in arbitrary dimensionality
NASA Astrophysics Data System (ADS)
Osychenko, O. N.; Astrakharchik, G. E.; Boronat, J.
2012-02-01
The Ewald summation technique is generalized to power-law 1/|r|^k potentials in three-, two- and one-dimensional geometries, with explicit formulae for all the components of the sums. The cases of short-range, long-range and 'marginal' interactions are treated separately. The jellium model, as a particular case of a charge-neutral system, is discussed and the explicit forms of the Ewald sums for such a system are presented. A generalized form of the Ewald sums for a non-cubic (non-square) simulation cell in three- (two-) dimensional geometry is obtained and its possible field of application is discussed. A procedure for the optimization of the involved parameters in actual simulations is developed and an example of its application is presented.
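The splitting that such a generalization typically starts from can be written with incomplete gamma functions (notation assumed; the paper's explicit component formulae are not reproduced here):

```latex
\frac{1}{r^{k}}
  \;=\; \underbrace{\frac{\Gamma\!\bigl(\tfrac{k}{2},\,\alpha^{2}r^{2}\bigr)}{\Gamma\!\bigl(\tfrac{k}{2}\bigr)\,r^{k}}}_{\text{short-range part (real-space sum)}}
  \;+\; \underbrace{\frac{\gamma\!\bigl(\tfrac{k}{2},\,\alpha^{2}r^{2}\bigr)}{\Gamma\!\bigl(\tfrac{k}{2}\bigr)\,r^{k}}}_{\text{long-range part (reciprocal-space sum)}},
```

where Γ(s, x) and γ(s, x) are the upper and lower incomplete gamma functions and α is the splitting parameter whose optimization is mentioned above.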
Xanthium strumarium L. seed hull as a zero cost alternative for Rhodamine B dye removal.
Khamparia, Shraddha; Jaspal, Dipika Kaur
2017-07-15
Treatment of polluted water has been considered one of the most important aspects of environmental science. The present study explores the decolorization potential of a low cost natural adsorbent, Xanthium strumarium L. seed hull, for the adsorption of a toxic xanthene dye, Rhodamine B (RHB). Characterization of the adsorbent by Energy Dispersive Spectroscopy (EDS) revealed a high carbon content. Appreciable decolorization took place, which was confirmed by Fourier Transform Infrared Spectroscopy (FTIR) analysis through observed shifts in peaks. Isothermal studies indicated multilayer adsorption following the Freundlich isotherm. The rate of adsorption was described by second order kinetics, indicating a chemical phenomenon during the process, with film diffusion dominant as the rate governing step. Moreover, the paper aims at correlating the chemical arena to the mathematical aspect, providing in-depth information on the studied treatment process. For proper assessment and validation of the observed data, the experimental data were statistically treated by applying different error functions, namely the Chi-square test (χ2), the sum of absolute errors (EABS) and the normalized standard deviation (NSD). The practical applicability of the low cost adsorbent was further evaluated by continuous column mode studies, with 72.2% dye recovery. Xanthium strumarium L. proved to be an environment friendly, low cost natural adsorbent for decolorizing RHB from aquatic systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
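A small sketch of the three error functions named above, using definitions that are common in the adsorption literature (the paper's exact formulations are assumed, not quoted) and illustrative uptake values:

```python
import numpy as np

def chi_square(q_exp, q_calc):
    """Chi-square statistic: squared deviations scaled by the calculated value."""
    return np.sum((q_exp - q_calc) ** 2 / q_calc)

def eabs(q_exp, q_calc):
    """Sum of absolute errors (EABS)."""
    return np.sum(np.abs(q_exp - q_calc))

def nsd(q_exp, q_calc):
    """Normalized standard deviation (NSD), in percent (definition assumed)."""
    n = len(q_exp)
    return 100.0 * np.sqrt(np.sum(((q_exp - q_calc) / q_exp) ** 2) / (n - 1))

# illustrative equilibrium uptake data (mg/g): measured vs. isotherm-predicted
q_exp = np.array([12.1, 18.4, 22.9, 26.3, 28.0])
q_calc = np.array([11.7, 18.9, 23.5, 25.6, 28.4])
print(chi_square(q_exp, q_calc), eabs(q_exp, q_calc), nsd(q_exp, q_calc))
```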
Yang, Eunjoo; Park, Hyun Woo; Choi, Yeon Hwa; Kim, Jusim; Munkhdalai, Lkhagvadorj; Musa, Ibrahim; Ryu, Keun Ho
2018-05-11
Early detection of infectious disease outbreaks is one of the important and significant issues in syndromic surveillance systems. It helps to provide a rapid epidemiological response and reduce morbidity and mortality. In order to upgrade the current system at the Korea Centers for Disease Control and Prevention (KCDC), a comparative study of state-of-the-art techniques is required. We compared four different temporal outbreak detection algorithms: the CUmulative SUM (CUSUM), the Early Aberration Reporting System (EARS), the autoregressive integrated moving average (ARIMA), and the Holt-Winters algorithm. The comparison was performed based not only on 42 different time series generated taking into account trends, seasonality, and randomly occurring outbreaks, but also on real-world daily and weekly data related to diarrhea infection. The algorithms were evaluated using different metrics, namely sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), F1 score, symmetric mean absolute percent error (sMAPE), root-mean-square error (RMSE), and mean absolute deviation (MAD). Although the comparison results showed better performance for the EARS C3 method with respect to the other algorithms regardless of the characteristics of the underlying time series data, Holt-Winters showed better performance when the baseline frequency and the dispersion parameter values were both less than 1.5 and 2, respectively.
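As background for the first of the compared detectors, here is a minimal one-sided CUSUM sketch on synthetic daily counts (the in-control parameters, thresholds and data are assumptions, not the KCDC configuration):

```python
import numpy as np

def cusum_alarms(counts, mu0, sigma, k=0.5, h=4.0):
    """Basic one-sided CUSUM on standardized daily counts.

    mu0/sigma: in-control mean and standard deviation (assumed known here);
    k: reference value (allowance); h: decision threshold, in sigma units.
    Returns the indices of days on which an alarm is raised."""
    s, alarms = 0.0, []
    for t, x in enumerate(counts):
        z = (x - mu0) / sigma
        s = max(0.0, s + z - k)      # accumulate positive deviations only
        if s > h:
            alarms.append(t)
            s = 0.0                  # reset after signalling
    return alarms

# illustrative daily counts with an injected outbreak around day 20
rng = np.random.default_rng(0)
counts = rng.poisson(10, 30).astype(float)
counts[20:24] += 12
print(cusum_alarms(counts, mu0=10.0, sigma=np.sqrt(10.0)))
```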
DWI filtering using joint information for DTI and HARDI.
Tristán-Vega, Antonio; Aja-Fernández, Santiago
2010-04-01
The filtering of the Diffusion Weighted Images (DWI) prior to the estimation of the diffusion tensor or other fiber Orientation Distribution Functions (ODF) has been proved to be of paramount importance in the recent literature. More precisely, it has been evidenced that the estimation of the diffusion tensor without a previous filtering stage induces errors which cannot be recovered by further regularization of the tensor field. A number of approaches have been intended to overcome this problem, most of them based on the restoration of each DWI gradient image separately. In this paper we propose a methodology to take advantage of the joint information in the DWI volumes, i.e., the sum of the information given by all DWI channels plus the correlations between them. This way, all the gradient images are filtered together exploiting the first and second order information they share. We adapt this methodology to two filters, namely the Linear Minimum Mean Squared Error (LMMSE) and the Unbiased Non-Local Means (UNLM). These new filters are tested over a wide variety of synthetic and real data showing the convenience of the new approach, especially for High Angular Resolution Diffusion Imaging (HARDI). Among the techniques presented, the joint LMMSE is proved a very attractive approach, since it shows an accuracy similar to UNLM (or even better in some situations) with a much lighter computational load. Copyright 2009 Elsevier B.V. All rights reserved.
Liao, Congyu; Chen, Ying; Cao, Xiaozhi; Chen, Song; He, Hongjian; Mani, Merry; Jacob, Mathews; Magnotta, Vincent; Zhong, Jianhui
2017-03-01
To propose a novel reconstruction method using parallel imaging with a low rank constraint to accelerate high resolution multishot spiral diffusion imaging. The undersampled high resolution diffusion data were reconstructed based on a low rank (LR) constraint using similarities between the data of different interleaves from a multishot spiral acquisition. Self-navigated phase compensation using the low resolution phase data in the center of k-space was applied to correct shot-to-shot phase variations induced by motion artifacts. The low rank reconstruction was combined with sensitivity encoding (SENSE) for further acceleration. The efficiency of the proposed joint reconstruction framework, dubbed LR-SENSE, was evaluated through error quantifications and compared with the ℓ1 regularized compressed sensing method and the conventional iterative SENSE method using the same datasets. It was shown that for a given acceleration factor, the proposed LR-SENSE method had the smallest normalized sum-of-squares errors among all the compared methods in all diffusion weighted images and DTI-derived index maps, when evaluated with different acceleration factors (R = 2, 3, 4) and for all the acquired diffusion directions. Robust high resolution diffusion weighted images can be efficiently reconstructed from highly undersampled multishot spiral data with the proposed LR-SENSE method. Magn Reson Med 77:1359-1366, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
A wearable respiratory monitoring device--the between-days variability of calibration.
Heyde, C; Mahler, H; Roecker, K; Gollhofer, A
2015-01-01
The between-days variability in the gain factors ascertained for calibration of a wearable respiratory inductance plethysmograph (RIP), and the validity thereof for repeated use during exercise, were examined. Consecutive 5-min periods of standing still, slow running at 8 km·h⁻¹, fast running at 14 km·h⁻¹ (male) or 12 km·h⁻¹ (female) and recovery were repeated by 10 healthy subjects on 5 days. Breath-by-breath data were recorded simultaneously by flow meter and RIP. Gain factors were determined individually for each trial (CALIND) via least squares regression. Reliability and variability of the gain factors were quantified by intraclass correlation coefficients (ICC) and limits of agreement, respectively. Within a predefined error range of ±20%, the proportion of RIP-derived tidal volumes after CALIND was compared to the corresponding proportion when gain factors of the first trial were applied to the following 4 trials (CALFIRST). ICC ranged between 0.96 and 0.98. The variability in gain factors (up to ±24.06%) was compensated by their sum. The proportion of breaths within the predefined error range did not differ between CALIND and CALFIRST (P>0.32). The between-days variability of gain factors for a wearable RIP device does not impair the reliability of the derived tidal volumes. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Technical Reports Server (NTRS)
Amling, G. E.; Holms, A. G.
1973-01-01
A computer program is described that performs a statistical multiple-decision procedure called chain pooling. It uses a number of mean squares assigned to error variance that is conditioned on the relative magnitudes of the mean squares. The model selection is done according to user-specified levels of type 1 or type 2 error probabilities.
Lim, Jun-Seok; Pang, Hee-Suk
2016-01-01
In this paper an [Formula: see text]-regularized recursive total least squares (RTLS) algorithm is considered for the sparse system identification. Although recursive least squares (RLS) has been successfully applied in sparse system identification, the estimation performance in RLS based algorithms becomes worse, when both input and output are contaminated by noise (the error-in-variables problem). We proposed an algorithm to handle the error-in-variables problem. The proposed [Formula: see text]-RTLS algorithm is an RLS like iteration using the [Formula: see text] regularization. The proposed algorithm not only gives excellent performance but also reduces the required complexity through the effective inversion matrix handling. Simulations demonstrate the superiority of the proposed [Formula: see text]-regularized RTLS for the sparse system identification setting.
An algorithm for propagating the square-root covariance matrix in triangular form
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Choe, C. Y.
1976-01-01
A method for propagating the square root of the state error covariance matrix in lower triangular form is described. The algorithm can be combined with any triangular square-root measurement update algorithm to obtain a triangular square-root sequential estimation algorithm. The triangular square-root algorithm compares favorably with the conventional sequential estimation algorithm with regard to computation time.
Fitting a function to time-dependent ensemble averaged data.
Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias
2018-05-03
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.
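A linear-model illustration of the idea behind this approach, a sketch only: weights are used for the fit, but the parameter errors are computed from the full data covariance via the sandwich formula (the paper's closed-form expression covers general fit functions; the example data and covariance below are assumptions).

```python
import numpy as np

def wls_with_cov_errors(X, y, W, C):
    """Weighted least squares fit whose parameter errors account for the
    full data covariance C (sandwich formula); W is the weight matrix."""
    A = np.linalg.inv(X.T @ W @ X)
    beta = A @ X.T @ W @ y
    cov_beta = A @ X.T @ W @ C @ W @ X @ A     # temporal correlations enter via C
    return beta, np.sqrt(np.diag(cov_beta))

# example: fitting <x^2(t)> = 2*D*t to correlated ensemble-averaged MSD data
t = np.linspace(0.1, 1.0, 10)
X = t[:, None]                                 # single parameter: 2*D
C = 0.01 * np.exp(-np.abs(t[:, None] - t[None, :]) / 0.3)   # assumed covariance
y = 2.0 * 0.5 * t + np.random.default_rng(1).multivariate_normal(np.zeros(10), C)
W = np.diag(1.0 / np.diag(C))                  # diagonal weights ignore correlation
beta, err = wls_with_cov_errors(X, y, W, C)
print(beta, err)                               # estimate of 2*D and its error bar
```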
Error floor behavior study of LDPC codes for concatenated codes design
NASA Astrophysics Data System (ADS)
Chen, Weigang; Yin, Liuguo; Lu, Jianhua
2007-11-01
Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small using quantized sum-product (SP) algorithm. Therefore, LDPC code may serve as the inner code in a concatenated coding system with a high code rate outer code and thus an ultra low error floor can be achieved. This conclusion is also verified by the experimental results.
Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan
2016-01-01
Comprehensive two-dimensional gas chromatography and flame ionization detection combined with unfolded-partial least squares is proposed as a simple, fast and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline corrected using a two-dimensional asymmetric least squares algorithm. The number of significant partial least squares components to build the model is determined using the minimum value of root-mean square error of leave-one out cross validation, which was 4. In this regard, blends of gasoline with kerosene, white spirit and paint thinner as frequently used adulterants are used to make calibration samples. Appropriate statistical parameters of regression coefficient of 0.996-0.998, root-mean square error of prediction of 0.005-0.010 and relative error of prediction of 1.54-3.82% for the calibration set show the reliability of the developed method. In addition, the developed method is externally validated with three samples in validation set (with a relative error of prediction below 10.0%). Finally, to test the applicability of the proposed strategy for the analysis of real samples, five real gasoline samples collected from gas stations are used for this purpose and the gasoline proportions were in range of 70-85%. Also, the relative standard deviations were below 8.5% for different samples in the prediction set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1974-01-01
Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.
Least-squares model-based halftoning
NASA Astrophysics Data System (ADS)
Pappas, Thrasyvoulos N.; Neuhoff, David L.
1992-08-01
A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction, by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be solved with the Viterbi algorithm. Unfortunately, no closed form solution can be found in two dimensions. The two-dimensional least squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high quality documents using high fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach permits the halftoner to be tuned to the individual printer, whose characteristics may vary considerably from those of other printers, for example, write-black vs. write-white laser printers.
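The objective described in the first sentences can be stated compactly as follows (notation assumed, not taken from the paper):

```latex
\hat{b} \;=\; \arg\min_{b \,\in\, \{0,1\}^{N}}
\bigl\lVert\, V\!\bigl(P(b)\bigr) \;-\; V(x) \,\bigr\rVert_{2}^{2},
```

where b is the binary halftone, P(·) the printer model, V(·) the visual (eye) model and x the original gray-scale image; the one-dimensional version of this minimization is what the Viterbi algorithm solves row by row.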
McGregor, Heather R.; Pun, Henry C. H.; Buckingham, Gavin; Gribble, Paul L.
2016-01-01
The human sensorimotor system is routinely capable of making accurate predictions about an object's weight, which allows for energetically efficient lifts and prevents objects from being dropped. Often, however, poor predictions arise when the weight of an object can vary and sensory cues about object weight are sparse (e.g., picking up an opaque water bottle). The question arises, what strategies does the sensorimotor system use to make weight predictions when one is dealing with an object whose weight may vary? For example, does the sensorimotor system use a strategy that minimizes prediction error (minimal squared error) or one that selects the weight that is most likely to be correct (maximum a posteriori)? In this study we dissociated the predictions of these two strategies by having participants lift an object whose weight varied according to a skewed probability distribution. We found, using a small range of weight uncertainty, that four indexes of sensorimotor prediction (grip force rate, grip force, load force rate, and load force) were consistent with a feedforward strategy that minimizes the square of prediction errors. These findings match research in the visuomotor system, suggesting parallels in underlying processes. We interpret our findings within a Bayesian framework and discuss the potential benefits of using a minimal squared error strategy. NEW & NOTEWORTHY Using a novel experimental model of object lifting, we tested whether the sensorimotor system models the weight of objects by minimizing lifting errors or by selecting the statistically most likely weight. We found that the sensorimotor system minimizes the square of prediction errors for object lifting. This parallels the results of studies that investigated visually guided reaching, suggesting an overlap in the underlying mechanisms between tasks that involve different sensory systems. PMID:27760821
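A tiny numerical illustration of the two strategies contrasted above, using an assumed skewed discrete weight distribution (values chosen only to make the point): the maximum a posteriori strategy picks the mode, whereas the minimal-squared-error strategy picks the mean, which minimizes the expected squared prediction error.

```python
import numpy as np

# skewed discrete distribution over possible object weights (grams); assumed values
weights = np.array([200.0, 300.0, 600.0])
probs = np.array([0.6, 0.3, 0.1])                       # mode at 200 g

map_prediction = weights[np.argmax(probs)]              # most probable weight
mse_prediction = np.sum(probs * weights)                # distribution mean

expected_sq_err = lambda w: np.sum(probs * (weights - w) ** 2)
print(map_prediction, expected_sq_err(map_prediction))  # 200.0, larger error
print(mse_prediction, expected_sq_err(mse_prediction))  # 270.0, minimal error
```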
NASA Astrophysics Data System (ADS)
Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.
2017-12-01
In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses in terms of mean and mean-square performance for the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve performance superior to the existing DRLS algorithm with fixed forgetting factor when applied to scenarios of distributed parameter and spectrum estimation. Besides, the simulation results also demonstrate a good match with our proposed analytical expressions.
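A single-node sketch of recursive least squares with a variable forgetting factor, to illustrate the general mechanism; the heuristic VFF rule below (shrinking the forgetting factor when the a posteriori error grows relative to its running average) and all constants are assumptions, not the update derived in the paper, and the diffusion (network combination) step is omitted.

```python
import numpy as np

def vff_rls(X, d, lam_min=0.95, lam_max=0.9995, delta=1e2, alpha=0.1):
    """Single-node RLS with a heuristic variable forgetting factor."""
    n, p = X.shape
    w = np.zeros(p)
    P = delta * np.eye(p)
    lam = lam_max
    avg_e2 = 1e-6
    for k in range(n):
        x = X[k]
        # a priori error, Kalman-style gain, and weight/covariance update
        e_pri = d[k] - x @ w
        g = P @ x / (lam + x @ P @ x)
        w = w + g * e_pri
        P = (P - np.outer(g, x @ P)) / lam
        # the a posteriori error drives the forgetting factor (illustrative rule)
        e_post = d[k] - x @ w
        avg_e2 = (1 - alpha) * avg_e2 + alpha * e_post ** 2
        lam = np.clip(lam_max - (lam_max - lam_min) * e_post ** 2 / (avg_e2 + e_post ** 2 + 1e-12),
                      lam_min, lam_max)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))
w_true = np.array([1.0, -0.5, 0.25, 2.0])
d = X @ w_true + 0.01 * rng.normal(size=2000)
print(np.round(vff_rls(X, d), 3))
```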
Psychometric Evaluation of the Brachial Assessment Tool Part 1: Reproducibility.
Hill, Bridget; Williams, Gavin; Olver, John; Ferris, Scott; Bialocerkowski, Andrea
2018-04-01
To evaluate reproducibility (reliability and agreement) of the Brachial Assessment Tool (BrAT), a new patient-reported outcome measure for adults with traumatic brachial plexus injury (BPI). Prospective repeated-measures design. Outpatient clinics. Adults with confirmed traumatic BPI (N=43; age range, 19-82y). People with BPI completed the 31-item 4-response BrAT twice, 2 weeks apart. Results for the 3 subscales and summed score were compared at time 1 and time 2 to determine reliability, including systematic differences using paired t tests, test-retest reliability using intraclass correlation coefficient model 1,1 (ICC 1,1), and internal consistency using Cronbach α. Agreement parameters included standard error of measurement, minimal detectable change, and limits of agreement. BrAT. Test-retest reliability was excellent (ICC 1,1 = .90-.97). Internal consistency was high (Cronbach α = .90-.98). Measurement error was relatively low (standard error of measurement range, 3.1-8.8). A change of >4 for subscale 1, >6 for subscale 2, >4 for subscale 3, and >10 for the summed score is indicative of change over and above measurement error. Limits of agreement ranged from ±4.4 (subscale 3) to 11.61 (summed score). These findings support the use of the BrAT as a reproducible patient-reported outcome measure for adults with traumatic BPI, with evidence of appropriate reliability and agreement for both individual and group comparisons. Further psychometric testing is required to establish the construct validity and responsiveness of the BrAT. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena
2015-04-15
It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. Therefore, the RVA can complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulting prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower than, or of the same order as, the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Pan, Tianshu; Yin, Yue
2012-01-01
In the discussion of mean square difference (MSD) and standard error of measurement (SEM), Barchard (2012) concluded that the MSD between 2 sets of test scores is greater than 2(SEM)^2 and SEM underestimates the score difference between 2 tests when the 2 tests are not parallel. This conclusion has limitations for 2 reasons. First,…
ERIC Educational Resources Information Center
Li, Libo; Bentler, Peter M.
2011-01-01
MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of…
[Locally weighted least squares estimation of DPOAE evoked by continuously sweeping primaries].
Han, Xiaoli; Fu, Xinxing; Cui, Jie; Xiao, Ling
2013-12-01
Distortion product otoacoustic emission (DPOAE) signals can be used for diagnosis of hearing loss, so they have important clinical value. Continuously sweeping primaries to measure DPOAE provides an efficient tool for recording DPOAE data rapidly when DPOAE is measured over a large frequency range. In this paper, locally weighted least squares estimation (LWLSE) of the 2f1-f2 DPOAE is presented based on the least-squares-fit (LSF) algorithm, in which the DPOAE is evoked by continuously sweeping tones. In our study, we used a weighted error function as the loss function and local weighting matrices to obtain a smaller estimation variance. Firstly, an ordinary least squares estimate of the DPOAE parameters was obtained. Then the error vectors were grouped and a different local weighting matrix was calculated for each group. Finally, the parameters of the DPOAE signal were estimated according to the least squares estimation principle using the local weighting matrices. The simulation results showed that the estimation variance and fluctuation errors were reduced, so the method estimates DPOAE and stimuli more accurately and stably, which facilitates extraction of clearer DPOAE fine structure.
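A minimal sketch of one weighted least-squares step of this kind of estimator, assuming a fixed-frequency sinusoid model and inverse-variance weights; the sweeping-tone model, the grouping of error vectors, and the local weighting matrices of the paper are not reproduced here.

```python
import numpy as np

def wls_sinusoid(t, y, freq, weights):
    """Weighted least-squares estimate of the amplitude/phase of a sinusoid at a
    known frequency; a simplified stand-in for one LSF block of the method."""
    A = np.column_stack([np.cos(2 * np.pi * freq * t),
                         np.sin(2 * np.pi * freq * t)])
    W = np.diag(weights)
    coef, *_ = np.linalg.lstsq(np.sqrt(W) @ A, np.sqrt(W) @ y, rcond=None)
    amp = np.hypot(coef[0], coef[1])
    phase = np.arctan2(-coef[1], coef[0])
    return amp, phase

fs, f0 = 8000.0, 1234.0
t = np.arange(0, 0.1, 1 / fs)
rng = np.random.default_rng(2)
y = 0.8 * np.cos(2 * np.pi * f0 * t + 0.4) + 0.1 * rng.normal(size=t.size)
w = np.full(t.size, 1.0 / 0.1 ** 2)      # inverse-variance weights (assumed)
print(wls_sinusoid(t, y, f0, w))         # approximately (0.8, 0.4)
```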
Medina, K.D.; Tasker, Gary D.
1987-01-01
This report documents the results of an analysis of the surface-water data network in Kansas for its effectiveness in providing regional streamflow information. The network was analyzed using generalized least squares regression. The correlation and time-sampling error of the streamflow characteristic are considered in the generalized least squares method. Unregulated medium-, low-, and high-flow characteristics were selected to be representative of the regional information that can be obtained from streamflow-gaging-station records for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and (or) adding new stations. The analysis used streamflow records for all currently operated stations that were not affected by regulation and for discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The State was divided into three network areas, western, northeastern, and southeastern Kansas, and analysis was made for the three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean-square error for each cost level could be obtained by adding new stations and discontinuing some current network stations. Large reductions in sampling mean-square error for low-flow information could be achieved in all three network areas, the reduction in western Kansas being the most dramatic. The addition of new stations would be most beneficial for mean-flow information in western Kansas. The reduction of sampling mean-square error for high-flow information would benefit most from the addition of new stations in western Kansas. Southeastern Kansas showed the smallest error reduction in high-flow information. A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas.
Round-off error in long-term orbital integrations using multistep methods
NASA Technical Reports Server (NTRS)
Quinlan, Gerald D.
1994-01-01
Techniques for reducing roundoff error are compared by testing them on high-order Stormer and symmetric multistep methods. The best technique for most applications is to write the equation in summed, function-evaluation form and to store the coefficients as rational numbers. A larger error reduction can be achieved by writing the equation in backward-difference form and performing some of the additions in extended precision, but this entails a larger central processing unit (CPU) cost.
Wheel speed management control system for spacecraft
NASA Technical Reports Server (NTRS)
Goodzeit, Neil E. (Inventor); Linder, David M. (Inventor)
1991-01-01
A spacecraft attitude control system uses at least four reaction wheels. In order to minimize reaction wheel speed and therefore power, a wheel speed management system is provided. The management system monitors the wheel speeds and generates a wheel speed error vector. The error vector is integrated, and the error vector and its integral are combined to form a correction vector. The correction vector is summed with the attitude control torque command signals for driving the reaction wheels.
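A small sketch of the control law described above: the wheel-speed error vector and its integral are combined into a correction vector that is then summed with the attitude-control torque commands. The gains and time step are illustrative assumptions, not flight values.

```python
import numpy as np

def wheel_speed_correction(speed_error, integral, dt, kp=0.02, ki=0.002):
    """Form a correction torque vector from the wheel-speed error vector and
    its integral, as described in the abstract."""
    integral = integral + speed_error * dt
    correction = kp * speed_error + ki * integral
    return correction, integral

# four reaction wheels; error = measured speed minus target distribution (rad/s)
err = np.array([12.0, -8.0, 3.5, -7.5])
integ = np.zeros(4)
corr, integ = wheel_speed_correction(err, integ, dt=0.1)
print(corr)   # to be summed with the attitude-control torque commands
```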
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Motkova, A. V.
2018-01-01
A strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters is considered. The solution criterion is the minimum of the sum (over both clusters) of weighted sums of squared distances from the elements of each cluster to its geometric center. The weights of the sums are equal to the cardinalities of the desired clusters. The center of one cluster is given as input, while the center of the other is unknown and is determined as the point of space equal to the mean of the cluster elements. A version of the problem is analyzed in which the cardinalities of the clusters are given as input. A polynomial-time 2-approximation algorithm for solving the problem is constructed.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-01
In order to improve the performance of the non-binary low-density parity check (LDPC) hard decision decoding algorithm and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This also helps to ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitudes is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bits corresponding to the erroneous code word are flipped multiple times, searched in order of most likely error probability, to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963
NASA Astrophysics Data System (ADS)
Bonin, J. A.; Chambers, D. P.
2015-09-01
Mass change over Greenland can be caused by either changes in the glacial dynamic mass balance (DMB) or the surface mass balance (SMB). The GRACE satellite gravity mission cannot directly separate the two physical causes because it measures the sum of the entire mass column with limited spatial resolution. We demonstrate one theoretical way to indirectly separate cumulative SMB from DMB with GRACE, using a least squares inversion technique with knowledge of the location of the glaciers. However, we find that the limited 60 × 60 spherical harmonic representation of current GRACE data does not provide sufficient resolution to adequately accomplish the task. We determine that at a maximum degree/order of 90 × 90 or above, a noise-free gravity measurement could theoretically separate the SMB from DMB signals. However, current GRACE satellite errors are too large at present to separate the signals. A noise reduction of a factor of 10 at a resolution of 90 × 90 would provide the accuracy needed for the interannual cumulative SMB and DMB to be accurately separated.
NASA Astrophysics Data System (ADS)
Bonin, J. A.; Chambers, D. P.
2015-02-01
Mass change over Greenland can be caused by either changes in the glacial mass balance (GMB) or the precipitation-based surface mass balance (SMB). The GRACE satellite gravity mission cannot directly separate the two physical causes because it measures the sum of the entire mass column with limited spatial resolution. We demonstrate one theoretical way to indirectly separate SMB from GMB with GRACE, using a least squares inversion technique with knowledge of the location of the glacier. However, we find that the limited 60 × 60 spherical harmonic representation of current GRACE data does not provide sufficient resolution to adequately accomplish the task. We determine that at a maximum degree/order of 90 × 90 or above, a noise-free gravity measurement could theoretically separate the SMB from GMB signals. However, current GRACE satellite errors are too large at present to separate the signals. A noise reduction of a factor of 9 at a resolution of 90 × 90 would provide the accuracy needed for the interannual SMB and GMB to be accurately separated.
Optimizing the Determination of Roughness Parameters for Model Urban Canopies
NASA Astrophysics Data System (ADS)
Huq, Pablo; Rahman, Auvi
2018-05-01
We present an objective optimization procedure to determine the roughness parameters for very rough boundary-layer flow over model urban canopies. For neutral stratification the mean velocity profile above a model urban canopy is described by the logarithmic law together with the set of roughness parameters of displacement height d, roughness length z_0, and friction velocity u_*. Traditionally, values of these roughness parameters are obtained by fitting the logarithmic law through (all) the data points comprising the velocity profile. The new procedure generates unique velocity profiles from subsets or combinations of the data points of the original velocity profile, after which all possible profiles are examined. Each of the generated profiles is fitted to the logarithmic law for a sequence of values of d, with the representative value of d obtained from the minima of the summed least-squares errors for all the generated profiles. The representative values of z_0 and u_* are identified by the peak in the bivariate histogram of z_0 and u_*. The methodology has been verified against laboratory datasets of flow above model urban canopies.
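A single-profile sketch of the fitting step under stated assumptions: for each trial displacement height d the logarithmic law u = (u_*/kappa) ln((z - d)/z_0) is fitted by linear least squares in ln(z - d), and the d with the smallest sum of squared errors is kept; the paper's enumeration of sub-profiles and the bivariate histogram step are omitted.

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def fit_log_law(z, u, d_candidates):
    """Scan trial displacement heights d; for each, fit the log law by linear
    least squares and record the SSE; return the best (sse, d, u_star, z0)."""
    best = None
    for d in d_candidates:
        zd = z - d
        if np.any(zd <= 0):
            continue
        x = np.log(zd)
        slope, intercept = np.polyfit(x, u, 1)      # u = slope*ln(z - d) + intercept
        sse = np.sum((u - (slope * x + intercept)) ** 2)
        u_star = KAPPA * slope
        z0 = np.exp(-intercept / slope)
        if best is None or sse < best[0]:
            best = (sse, d, u_star, z0)
    return best

z = np.array([0.15, 0.2, 0.3, 0.4, 0.6, 0.8, 1.0])   # heights (m), illustrative
u_star_true, d_true, z0_true = 0.3, 0.05, 0.01
u = (u_star_true / KAPPA) * np.log((z - d_true) / z0_true)
print(fit_log_law(z, u, np.linspace(0.0, 0.1, 21)))   # recovers d ~ 0.05
```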
NASA Technical Reports Server (NTRS)
Hancock, P. A.; Robinson, M. A.
1989-01-01
The present experiment examined the influence of several task-related factors on tracking performance and concomitant workload. The manipulated factors included tracking order, the presence or absence of knowledge of performance, and the control device. Summed root mean square error (rmse) and perceived workload were measured at the termination of each trial. Perceived workload was measured using the NASA Task Load Index (TLX) and the Subjective Workload Assessment Technique (SWAT). Results indicated a large and expected effect for track order on both performance and the perception of load. In general, trackball input was more accurate and judged to impose lower load than input using a mouse. The presence or absence of knowledge of performance had little effect on either performance or workload. There were a number of interactions between factors shown in performance that were mirrored by perceived workload scores. Results from each workload scale were equivalent in terms of sensitivity to task manipulations. The pattern of results affirms the utility of these workload measures in assessing the imposed load of multiple task-related variables.
Atmospheric gradients from very long baseline interferometry observations
NASA Technical Reports Server (NTRS)
Macmillan, D. S.
1995-01-01
Azimuthal asymmetries in the atmospheric refractive index can lead to errors in estimated vertical and horizontal station coordinates. Daily average gradient effects can be as large as 50 mm of delay at a 7 deg elevation. To model gradients, the constrained estimation of gradient parameters was added to the standard VLBI solution procedure. Here the analysis of two sets of data is summarized: the set of all geodetic VLBI experiments from 1990-1993 and a series of 12 state-of-the-art R&D experiments run on consecutive days in January 1994. In both cases, when the gradient parameters are estimated, the overall fit of the geodetic solution is improved at greater than the 99% confidence level. Repeatabilities of baseline lengths ranging up to 11,000 km are improved by 1 to 8 mm in a root-sum-square sense. This varies from about 20% to 40% of the total baseline length scatter without gradient modeling for the 1990-1993 series and 40% to 50% for the January series. Gradients estimated independently for each day as a piecewise linear function are mostly continuous from day to day within their formal uncertainties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behnia, Pouran
2007-06-15
The metallogeny of Central Iran is characterized mainly by the presence of several iron, apatite, and uranium deposits of Proterozoic age. Radial Basis Function Link Networks (RBFLN) were used as a data-driven method for GIS-based predictive mapping of Proterozoic mineralization in this area. To generate the input data for RBFLN, evidential maps comprising stratigraphic, structural, geophysical, and geochemical data were used. Fifty-eight deposits and 58 'nondeposits' were used to train the network. The operations for the application of neural networks employed in this study involve both multiclass and binary representation of evidential maps. Running RBFLN on different input data showed that an increase in the number of evidential maps and classes leads to a larger classification sum of squared errors (SSE). As a whole, an increase in the number of iterations resulted in the improvement of training SSE. The results of applying RBFLN showed that a successful classification depends on the existence of spatially well distributed deposits and nondeposits throughout the study area.
Vaknin, David; Bu, Wei; Travesset, Alex
2008-07-28
We show that the structure factor S(q) of water can be obtained from x-ray synchrotron experiments at grazing angle of incidence (in reflection mode) by using a liquid surface diffractometer. The corrections used to obtain S(q) self-consistently are described. Applying these corrections to scans at different incident beam angles (above the critical angle) collapses the measured intensities into a single master curve, without fitting parameters, which within a scale factor yields S(q). Performing the measurements below the critical angle for total reflectivity yields the structure factor of the topmost layers of the water/vapor interface. Our results indicate water restructuring at the vapor/water interface. We also introduce a new approach to extract g(r), the pair distribution function (PDF), by expressing the PDF as a linear sum of error functions whose parameters are refined by applying a nonlinear least-squares fit method. This approach enables a straightforward determination of the inherent uncertainties in the PDF. Implications of our results for previously measured and theoretical predictions of the PDF are also discussed.
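A minimal sketch of the PDF-fitting idea, assuming a synthetic g(r) built from two error-function steps; the number of terms, their parameterization, and the synthetic data are illustrative, and the parameter covariance returned by the fit stands in for the uncertainty determination mentioned above.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def g_model(r, *params):
    """g(r) modeled as a sum of error-function steps added to the large-r limit of 1;
    each term has amplitude a, center r0, and width w (illustrative parameterization)."""
    total = np.ones_like(r)
    for i in range(0, len(params), 3):
        a, r0, w = params[i:i + 3]
        total = total + a * erf((r - r0) / w)
    return total

r = np.linspace(1.5, 8.0, 300)
rng = np.random.default_rng(3)
# synthetic "data": two smoothed steps plus noise
g_data = g_model(r, 0.4, 2.8, 0.3, -0.2, 4.5, 0.5) + 0.01 * rng.normal(size=r.size)

p0 = [0.3, 2.7, 0.4, -0.1, 4.4, 0.6]                  # initial guesses
p_opt, p_cov = curve_fit(g_model, r, g_data, p0=p0)   # nonlinear least-squares refinement
print(np.round(p_opt, 3))
print("1-sigma uncertainties:", np.round(np.sqrt(np.diag(p_cov)), 3))
```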
NASA Astrophysics Data System (ADS)
Sammouda, Rachid; Niki, Noboru; Nishitani, Hiroshi; Nakamura, S.; Mori, Shinichiro
1997-04-01
The paper presents a method for automatic segmentation of sputum cells in color images, to develop an efficient algorithm for lung cancer diagnosis based on a Hopfield neural network. We formulate the segmentation problem as the minimization of an energy function constructed from two terms: a cost term defined as a sum of squared errors, and a temporary noise term added to the network as an excitation to escape certain local minima and thereby end closer to the global minimum. To increase the accuracy in segmenting the regions of interest, a preclassification technique is used to extract the sputum cell regions within the color image and remove those of the debris cells. The former is then given, together with the raw image, to the input of the Hopfield neural network to make a crisp segmentation by assigning each pixel a label such as background, cytoplasm, or nucleus. The proposed technique has yielded correct segmentation of the complex scenes of sputum prepared by the ordinary manual staining method in most of the tested images selected from our database containing thousands of sputum color images.
3D High Resolution Mesh Deformation Based on Multi Library Wavelet Neural Network Architecture
NASA Astrophysics Data System (ADS)
Dhibi, Naziha; Elkefi, Akram; Bellil, Wajdi; Amar, Chokri Ben
2016-12-01
This paper deals with the features of a novel technique for large Laplacian boundary deformations using estimated rotations. The proposed method is based on a Multi Library Wavelet Neural Network structure founded on several mother wavelet families (MLWNN). The objective is to align features of the mesh and minimize distortion with a fixed feature that minimizes the sum of the distances between all corresponding vertices. The new mesh deformation method works in the domain of a Region of Interest (ROI). Our approach computes the deformed ROI, then updates and optimizes it to align the mesh features based on the MLWNN and a spherical parameterization configuration. This structure has the advantage of constructing the network from several mother wavelets to solve high-dimensional problems using the mother wavelet that models the signal best. Simulation tests confirmed the robustness and speed considerations when developing deformation methodologies. The mean-square error and the ratio of deformation are low compared to other works from the state of the art. Our approach minimizes distortions with fixed features to obtain a well-reconstructed object.
Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions
NASA Astrophysics Data System (ADS)
McCullough, Christopher; Bettadpur, Srinivas
2015-04-01
In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
NASA Astrophysics Data System (ADS)
Grigorie, Teodor Lucian; Corcau, Ileana Jenica; Tudosie, Alexandru Nicolae
2017-06-01
The paper presents a way to obtain an intelligent miniaturized three-axial accelerometric sensor, based on the on-line estimation and compensation of the sensor errors generated by environmental temperature variation. Taking into account that this error is a strongly nonlinear, complex function of the environmental temperature and of the acceleration exciting the sensor, its correction cannot be done off-line and it requires the presence of an additional temperature sensor. The proposed identification methodology for the error model is based on the least squares method, which processes off-line the numerical values obtained from experimental testing of the accelerometer for different values of acceleration applied to its axes of sensitivity and for different values of operating temperature. A final analysis of the error level after compensation highlights the best variant for the matrix in the error model. The paper presents the results of the experimental testing of the accelerometer on all three sensitivity axes, the identification of the error models on each axis by using the least squares method, and the validation of the obtained models with experimental values. For all three detection channels, a reduction of almost two orders of magnitude in the maximum absolute acceleration error due to environmental temperature variation was obtained.
NASA Astrophysics Data System (ADS)
Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong
2018-06-01
An accurate algorithm combining Gram-Schmidt orthonormalization and least-squares ellipse fitting technology is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. Performing Gram-Schmidt orthonormalization on the pre-processed interferograms, the phase shift error is corrected and a general ellipse form is derived. Then the background intensity error and the corrected error can be compensated by the least-squares ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm can cope with two or three interferograms with environmental disturbance, low fringe number or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
Intrinsic Raman spectroscopy for quantitative biological spectroscopy Part II
Bechtel, Kate L.; Shih, Wei-Chuan; Feld, Michael S.
2009-01-01
We demonstrate the effectiveness of intrinsic Raman spectroscopy (IRS) at reducing errors caused by absorption and scattering. Physical tissue models, solutions of varying absorption and scattering coefficients with known concentrations of Raman scatterers, are studied. We show significant improvement in prediction error by implementing IRS to predict concentrations of Raman scatterers using both ordinary least squares regression (OLS) and partial least squares regression (PLS). In particular, we show that IRS provides a robust calibration model that does not increase in error when applied to samples with optical properties outside the range of calibration. PMID:18711512
Huang, Yu; Griffin, Michael J
2014-01-01
This study investigated the prediction of the discomfort caused by simultaneous noise and vibration from the discomfort caused by noise and the discomfort caused by vibration when they are presented separately. A total of 24 subjects used absolute magnitude estimation to report their discomfort caused by seven levels of noise (70-88 dBA SEL), 7 magnitudes of vibration (0.146-2.318 m s^-1.75) and all 49 possible combinations of these noise and vibration stimuli. Vibration did not significantly influence judgements of noise discomfort, but noise reduced vibration discomfort by an amount that increased with increasing noise level, consistent with a 'masking effect' of noise on judgements of vibration discomfort. A multiple linear regression model or a root-sums-of-squares model predicted the discomfort caused by combined noise and vibration, but the root-sums-of-squares model is more convenient and provided a more accurate prediction of the discomfort produced by combined noise and vibration.
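A one-line sketch of the root-sums-of-squares prediction referred to above: combined discomfort is predicted from the separate magnitude estimates for noise alone and vibration alone. The input values are hypothetical.

```python
import numpy as np

def combined_discomfort_rss(psi_noise, psi_vibration):
    """Root-sums-of-squares prediction of the discomfort of combined noise and
    vibration from the discomfort magnitudes reported for each stimulus alone."""
    return np.sqrt(psi_noise ** 2 + psi_vibration ** 2)

print(combined_discomfort_rss(60.0, 45.0))   # absolute magnitude estimates (hypothetical)
```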
Tóth, Gergely; Bodai, Zsolt; Héberger, Károly
2013-10-01
Coefficient of determination (R^2) and its leave-one-out cross-validated analogue (denoted by Q^2 or R_cv^2) are the most frequently published values used to characterize the predictive performance of models. In this article we use R^2 and Q^2 in a reversed aspect to determine uncommon points, i.e. influential points, in any data set. The term (1 - Q^2)/(1 - R^2) corresponds to the ratio of the predictive residual sum of squares to the residual sum of squares. The ratio correlates with the number of influential points in experimental and random data sets. We propose an (approximate) F test on the (1 - Q^2)/(1 - R^2) term to quickly pre-estimate the presence of influential points in the training sets of models. The test is founded upon the routinely calculated Q^2 and R^2 values and warns the model builders to verify the training set, to perform influence analysis or even to change to robust modeling.
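A minimal sketch of the diagnostic ratio for a straight-line model: since (1 - Q^2)/(1 - R^2) = PRESS/RSS, it is computed here from an ordinary fit and a leave-one-out loop; the approximate F test proposed by the authors is not implemented.

```python
import numpy as np

def r2_q2_ratio(x, y):
    """Return R^2, leave-one-out Q^2 and the ratio (1 - Q^2)/(1 - R^2) = PRESS/RSS
    for a simple straight-line model."""
    n = len(y)
    b1, b0 = np.polyfit(x, y, 1)                  # full-sample fit
    rss = np.sum((y - (b0 + b1 * x)) ** 2)
    sst = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - rss / sst
    press = 0.0                                   # predictive residual sum of squares
    for i in range(n):
        mask = np.arange(n) != i
        c1, c0 = np.polyfit(x[mask], y[mask], 1)
        press += (y[i] - (c0 + c1 * x[i])) ** 2
    q2 = 1.0 - press / sst
    return r2, q2, (1.0 - q2) / (1.0 - r2)

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(scale=0.5, size=x.size)
y[5] += 5.0                                       # one influential/outlying point
print(r2_q2_ratio(x, y))
```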
Sum-rule corrections: a route to error cancellations in correlation matrix renormalisation theory
NASA Astrophysics Data System (ADS)
Liu, C.; Liu, J.; Yao, Y. X.; Wang, C. Z.; Ho, K. M.
2017-03-01
We recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate ground state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat the electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way to minimise the error originated from the approximations in the theory. This conference proceeding reports our recent progress on this key issue, namely, we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.
ERIC Educational Resources Information Center
Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C.
2018-01-01
Path models with observed composites based on multiple items (e.g., mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without errors. In this study, we reviewed and evaluated two alternative methods within the structural…
The algorithm study for using the back propagation neural network in CT image segmentation
NASA Astrophysics Data System (ADS)
Zhang, Peng; Liu, Jie; Chen, Chen; Li, Ying Qi
2017-01-01
Back propagation neural network (BP neural network) is a type of multi-layer feed-forward network in which signals propagate forward while errors propagate backward. Since a BP network can learn and store the mapping between a large number of input and output nodes without requiring explicit mathematical equations to describe the mapping relationship, it is widely used. BP iteratively computes the weight coefficients and thresholds of the network based on training samples and back propagation of errors, which minimizes the error sum of squares of the network. Since the boundary of computed tomography (CT) heart images is usually discontinuous and there are large changes in the volume and boundary of heart images, conventional segmentation methods such as region growing and the watershed algorithm cannot achieve satisfactory results. Meanwhile, there are large differences between diastolic and systolic images, and conventional methods cannot accurately classify the two cases. In this paper, we introduce BP to handle the segmentation of heart images. We segmented a large number of CT images manually to obtain samples, and the BP network was trained on these samples. To acquire an appropriate BP network for the segmentation of heart images, we normalized the heart images and extracted the gray-level information of the heart. Then the boundary of the images was input into the network to compare the differences between the theoretical output and the actual output, and the errors were fed back into the BP network to modify the weight coefficients of the layers. Through extensive training, the BP network tends to become stable and the weight coefficients of the layers can be determined, capturing the relationship between the CT images and the heart boundary.
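A minimal sketch of backpropagation training that minimizes a sum-of-squared-errors cost, on a toy two-class problem standing in for boundary/non-boundary pixels; the network size, learning rate, and data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(5)

# toy two-class problem standing in for "boundary vs. non-boundary" samples
X = rng.normal(size=(200, 2))
t = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one hidden layer; weights and thresholds (biases) learned by backpropagation
W1, b1 = 0.5 * rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = 0.5 * rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5

for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    y = sigmoid(h @ W2 + b2)
    err = y - t                              # gradient of 0.5 * sum of squared errors
    d2 = err * y * (1 - y)                   # backward pass through output layer
    d1 = (d2 @ W2.T) * h * (1 - h)           # backward pass through hidden layer
    W2 -= lr * h.T @ d2 / len(X); b2 -= lr * d2.mean(axis=0)
    W1 -= lr * X.T @ d1 / len(X); b1 -= lr * d1.mean(axis=0)

sse = np.sum((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - t) ** 2)
print(f"final sum of squared errors: {sse:.3f}")
```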
Evaluation of acidity estimation methods for mine drainage, Pennsylvania, USA.
Park, Daeryong; Park, Byungtae; Mendinsky, Justin J; Paksuchon, Benjaphon; Suhataikul, Ratda; Dempsey, Brian A; Cho, Yunchul
2015-01-01
Eighteen sites impacted by abandoned mine drainage (AMD) in Pennsylvania were sampled and measured for pH, acidity, alkalinity, metal ions, and sulfate. This study compared the accuracy of four acidity calculation methods with measured hot peroxide acidity and identified the most accurate calculation method for each site as a function of pH and sulfate concentration. Method E1 was the sum of proton and acidity based on total metal concentrations; method E2 added alkalinity; method E3 also accounted for aluminum speciation and temperature effects; and method E4 accounted for sulfate speciation. To evaluate errors between measured and predicted acidity, the Nash-Sutcliffe efficiency (NSE), the coefficient of determination (R^2), and the root mean square error to standard deviation ratio (RSR) methods were applied. The error evaluation results show that E1, E2, E3, and E4 were most accurate at 0, 9, 4, and 5 of the sites, respectively. Sites where E2 was most accurate had pH greater than 4.0 and less than 400 mg/L of sulfate. Sites where E3 was most accurate had pH greater than 4.0 and sulfate greater than 400 mg/L with two exceptions. Sites where E4 was most accurate had pH less than 4.0 and more than 400 mg/L sulfate with one exception. The results indicate that acidity in AMD-affected streams can be accurately predicted by using pH, alkalinity, sulfate, Fe(II), Mn(II), and Al(III) concentrations in one or more of the identified equations, and that the appropriate equation for prediction can be selected based on pH and sulfate concentration.
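A small helper computing the three evaluation statistics named above (NSE, R^2, and RSR) for a set of measured versus estimated acidities; the numbers below are hypothetical, not the study's data.

```python
import numpy as np

def error_metrics(observed, predicted):
    """Nash-Sutcliffe efficiency (NSE), coefficient of determination (R^2), and
    RMSE-to-standard-deviation ratio (RSR) for comparing acidity estimates with
    measured values."""
    obs, pred = np.asarray(observed, float), np.asarray(predicted, float)
    resid = obs - pred
    nse = 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2)
    r2 = np.corrcoef(obs, pred)[0, 1] ** 2
    rsr = np.sqrt(np.mean(resid ** 2)) / obs.std(ddof=1)
    return nse, r2, rsr

measured = [120.0, 85.0, 40.0, 10.0, 250.0, 60.0]     # hypothetical acidities, mg/L as CaCO3
estimated = [110.0, 90.0, 35.0, 18.0, 240.0, 70.0]
print([round(v, 3) for v in error_metrics(measured, estimated)])
```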
An information geometric approach to least squares minimization
NASA Astrophysics Data System (ADS)
Transtrum, Mark; Machta, Benjamin; Sethna, James
2009-03-01
Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
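For context, a minimal example of the standard sum-of-squares minimizer discussed above, using SciPy's Levenberg-Marquardt driver on a two-exponential model; the geodesic-motion improvement proposed by the authors is not implemented here.

```python
import numpy as np
from scipy.optimize import least_squares

# Sum-of-squares fit of a two-exponential model, a typical multi-parameter
# problem of the kind discussed in the abstract.
def residuals(p, t, y):
    a1, k1, a2, k2 = p
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) - y

rng = np.random.default_rng(6)
t = np.linspace(0, 5, 100)
y = 1.0 * np.exp(-0.5 * t) + 0.5 * np.exp(-3.0 * t) + 0.01 * rng.normal(size=t.size)

fit = least_squares(residuals, x0=[1.0, 1.0, 1.0, 1.0], args=(t, y), method="lm")
print(np.round(fit.x, 3), "cost =", round(fit.cost, 5))
```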
Error Analyses of the North Alabama Lightning Mapping Array (LMA)
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA-MSFC and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
Lee, Dong-Chang; Olson, John V; Szuberla, Curt A L
2013-07-01
This work reports on a performance study of two numerical detectors that are particularly useful for infrasound arrays operating under windy conditions. The sum of squares of variance ratios (SSVR1), proposed for detecting signals with frequencies ranging from 1 to 10 Hz, is computed by taking the ratio of the squared sum of the eigenvalues to the square of the largest eigenvalue of the covariance matrix of the power spectrum. For signals with lower frequencies between 0.015 and 0.1 Hz, SSVR2 is developed to reduce the detector's sensitivity to noise. The detectors' performances are graphically compared against the current method, the mean of cross correlation maxima (MCCM), using receiver operating characteristic curves and three types of atmospheric infrasound corrupted by Gaussian and pink noise. The MCCM and SSVR2 detectors were also used to detect microbaroms from 24 h-long infrasound data. It was found that the two detectors outperform the MCCM detector in both sensitivity and computational efficiency. For mine blasts corrupted by pink noise (signal-to-noise ratio = -7 dB), the MCCM and SSVR1 detectors yield 62% and 88% true positives when accepting 20% false positives. For an eight-sensor array, the speed gain is approximately eleven-fold for a 50 s long signal.
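A small sketch of the statistic as it is described in the abstract (the squared sum of the covariance-matrix eigenvalues divided by the square of the largest eigenvalue); the covariance matrix below is built from hypothetical data rather than from per-sensor power spectra.

```python
import numpy as np

def ssvr(cov):
    """Detection statistic sketched from the abstract: the squared sum of the
    covariance-matrix eigenvalues divided by the square of the largest one."""
    lam = np.linalg.eigvalsh(cov)
    return np.sum(lam) ** 2 / np.max(lam) ** 2

rng = np.random.default_rng(7)
A = rng.normal(size=(8, 5))                 # 8 hypothetical snapshots, 5 sensors
print(round(ssvr(np.cov(A, rowvar=False)), 3))
```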
Synthesis and optimization of four bar mechanism with six design parameters
NASA Astrophysics Data System (ADS)
Jaiswal, Ankur; Jawale, H. P.
2018-04-01
Function generation is the synthesis of a mechanism for a specific task; it becomes complex, especially for synthesis above five precision points of the coupler, and thus entails large structural error. The methodology for arriving at a better-precision solution is to use an optimization technique. The work presented herein considers methods of optimization of the structural error in a closed kinematic chain with a single degree of freedom, for generating functions like log(x), e^x, tan(x), and sin(x) with five precision points. The equation of the Freudenstein-Chebyshev method is used to develop the five-point synthesis of the mechanism. An extended formulation is proposed and results are obtained to verify existing results in the literature. Optimization of the structural error is carried out using a least-squares approach. A comparative structural-error analysis is presented for the error optimized through the least-squares method and the extended Freudenstein-Chebyshev method.
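A minimal sketch of fitting Freudenstein's equation to five prescribed precision points by linear least squares; the sign convention of the equation and the sample input/output angles are assumptions, and the structural-error optimization of the paper is not reproduced.

```python
import numpy as np

def freudenstein_ls(phi, psi):
    """Solve Freudenstein's equation K1*cos(phi) - K2*cos(psi) + K3 = cos(phi - psi)
    for K1, K2, K3 in the least-squares sense when more than three precision
    points are prescribed (sign convention assumed; textbooks differ)."""
    A = np.column_stack([np.cos(phi), -np.cos(psi), np.ones_like(phi)])
    b = np.cos(phi - psi)
    K, residuals, *_ = np.linalg.lstsq(A, b, rcond=None)
    return K, residuals

# five hypothetical precision points for a desired input-output relation
phi = np.radians([40.0, 55.0, 70.0, 85.0, 100.0])   # crank angles
psi = np.radians([60.0, 68.0, 74.0, 79.0, 83.0])    # rocker angles
K, res = freudenstein_ls(phi, psi)
print("K =", np.round(K, 4), "residual SSE =", res)
```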
Theoretical and experimental studies of error in square-law detector circuits
NASA Technical Reports Server (NTRS)
Stanley, W. D.; Hearn, C. P.; Williams, J. B.
1984-01-01
Square-law detector circuits were investigated to determine errors relative to the ideal input/output characteristic function. The nonlinear circuit response is analyzed by a power series expansion containing terms through the fourth degree, from which the significant deviation from square law can be predicted. Both fixed bias current and flexible bias current configurations are considered. The latter case corresponds to the situation where the mean current can change with the application of a signal. Experimental investigations of the circuit arrangements are described. Agreement between the analytical models and the experimental results is established. Factors which contribute to differences under certain conditions are outlined.
Monte Carlo errors with less errors
NASA Astrophysics Data System (ADS)
Wolff, Ulli; Alpha Collaboration
2004-01-01
We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable to benchmark efficiencies of simulation algorithms with regard to specific observables of interest. A Matlab code is offered for download that implements the method. It can also combine independent runs (replica), allowing their consistency to be judged.
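A simplified sketch of the idea, assuming a fixed summation window rather than the automatic window selection of the paper: the normalized autocorrelation function is summed to give an integrated autocorrelation time, which inflates the naive error of the mean. The AR(1) test series is illustrative.

```python
import numpy as np

def mc_error(x, w=50):
    """Estimate the statistical error of the mean of a correlated Monte Carlo
    time series by summing the normalized autocorrelation function up to a
    fixed window w. Returns (error_of_mean, tau_int)."""
    x = np.asarray(x, float)
    n = x.size
    dx = x - x.mean()
    var = np.mean(dx ** 2)
    rho = np.array([np.mean(dx[:n - t] * dx[t:]) for t in range(1, w + 1)]) / var
    tau_int = 0.5 + np.sum(rho)
    return np.sqrt(2.0 * tau_int * var / n), tau_int

# AR(1) chain with known autocorrelation as a test signal
rng = np.random.default_rng(8)
phi, n = 0.9, 100_000
x = np.empty(n); x[0] = 0.0
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()
err, tau = mc_error(x)
print(f"tau_int ~ {tau:.1f} (exact {(1 + phi) / (2 * (1 - phi)):.1f}), error of mean ~ {err:.4f}")
```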
NASA Astrophysics Data System (ADS)
Zhou, Y.; Zhang, X.; Xiao, W.
2018-04-01
As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. Firstly, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; then the characteristics of the model are analyzed and converted into nine parameters. The geomagnetic data are processed by a Hilbert transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. The sifter algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method does not need additional equipment and devices, can continuously update the calibration parameters, and compensates the geomagnetic sensor error better than the two-step estimation method.
Peelle's pertinent puzzle using the Monte Carlo technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawano, Toshihiko; Talou, Patrick; Burr, Thomas
2009-01-01
We try to understand the long-standing problem of the Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form in order to assess the impact of the distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and if the error is proportional to the measured values. The least squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
The sumLINK statistic for genetic linkage analysis in the presence of heterogeneity.
Christensen, G B; Knight, S; Camp, N J
2009-11-01
We present the "sumLINK" statistic--the sum of multipoint LOD scores for the subset of pedigrees with nominally significant linkage evidence at a given locus--as an alternative to common methods to identify susceptibility loci in the presence of heterogeneity. We also suggest the "sumLOD" statistic (the sum of positive multipoint LOD scores) as a companion to the sumLINK. sumLINK analysis identifies genetic regions of extreme consistency across pedigrees without regard to negative evidence from unlinked or uninformative pedigrees. Significance is determined by an innovative permutation procedure based on genome shuffling that randomizes linkage information across pedigrees. This procedure for generating the empirical null distribution may be useful for other linkage-based statistics as well. Using 500 genome-wide analyses of simulated null data, we show that the genome shuffling procedure results in the correct type 1 error rates for both the sumLINK and sumLOD. The power of the statistics was tested using 100 sets of simulated genome-wide data from the alternative hypothesis from GAW13. Finally, we illustrate the statistics in an analysis of 190 aggressive prostate cancer pedigrees from the International Consortium for Prostate Cancer Genetics, where we identified a new susceptibility locus. We propose that the sumLINK and sumLOD are ideal for collaborative projects and meta-analyses, as they do not require any sharing of identifiable data between contributing institutions. Further, loci identified with the sumLINK have good potential for gene localization via statistical recombinant mapping, as, by definition, several linked pedigrees contribute to each peak.
Quantifying Proxy Influence in the Last Millennium Reanalysis
NASA Astrophysics Data System (ADS)
Hakim, G. J.; Anderson, D. N.; Emile-Geay, J.; Noone, D.; Tardif, R.
2017-12-01
We examine the influence of proxies in the climate field reconstruction known as the Last Millennium Reanalysis (Hakim et al. 2016; JGR-A). This data assimilation framework uses the CCSM4 Last Millennium simulation as an agnostic prior, proxies from the PAGES 2k Consortium (2017; Sci. Data), and an offline ensemble square-root filter for assimilation. Proxies are forward modeled using an observation model ("proxy system model") that maps from the prior space to the proxy space. We assess proxy impact using the method of Cardinali et al. (2004; QJRMS), where influence is measured in observation space; that is, at the location of observations. Influence is determined by three components: the prior at the location, the proxy at the location, and remote proxies as mediated by the spatial covariance information in the prior. Consequently, on a per-proxy basis, influence is higher for spatially isolated proxies having small error, and influence is lower for spatially dense proxies having large error. Results show that proxy influence depends strongly on the observation model. Assuming the proxies depend linearly on annual mean temperature yields the largest per-proxy influence for coral d18O and coral Sr/Ca records, and smallest influence for tree-ring width. On a global basis (summing over all proxies of a given type), tree-ring width and coral d18O have the largest influence. A seasonal model for the proxies yields very different results. In this case we model the proxies linearly on objectively determined seasonal temperature, except for tree proxies, which are fit to a bivariate model on seasonal temperature and precipitation. In this experiment, on a per-proxy basis, tree-ring density has by far the greatest influence. Total proxy influence is dominated by tree-ring width followed by tree-ring density. Compared to the results for the annual-mean observation model, the experiment where proxies are measured seasonally has more than double the total influence (sum over all proxies); this experiment also has higher verification scores when measured against other 20th century temperature reconstructions. These results underscore the importance of improving proxy system models, since they increase the amount of information available for data-assimilation-based reconstructions.
Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy
Cohen, E. A. K.; Ober, R. J.
2014-01-01
We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise, a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variables problem and linear least squares is inappropriate; the correct method is generalized least squares. To allow for point-dependent errors, the equivalence of a generalized maximum likelihood and heteroscedastic generalized least squares model is established, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE), believed to be useful especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions for the TRE and LRE are themselves Gaussian and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show the asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573
Cooperative optimization and their application in LDPC codes
NASA Astrophysics Data System (ADS)
Chen, Ke; Rong, Jian; Zhong, Xiaochun
2008-10-01
Cooperative optimization is a new way of finding global optima of complicated functions of many variables. The proposed algorithm is a class of message passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For (6561, 4096) LDPC codes, the proposed algorithm achieves 2.0 dB gains over the sum-product algorithm at a BER of 4×10^-7. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it achieves a much lower error floor than the sum-product algorithm once Eb/No is higher than 1.8 dB.
Multiple symbol partially coherent detection of MPSK
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
1992-01-01
It is shown that by using the known (or estimated) value of the carrier tracking loop signal-to-noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that corresponding to the commonly used ideal coherent decision rule. Using a maximum-likelihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric which is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability performance approaches the algebraic sum of the error floor and the performance of ideal coherent detection, i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical size observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.
Type I Error Inflation in DIF Identification with Mantel-Haenszel: An Explanation and a Solution
ERIC Educational Resources Information Center
Magis, David; De Boeck, Paul
2014-01-01
It is known that sum score-based methods for the identification of differential item functioning (DIF), such as the Mantel-Haenszel (MH) approach, can be affected by Type I error inflation in the absence of any DIF effect. This may happen when the items differ in discrimination and when there is item impact. On the other hand, outlier DIF methods…
Cost-Benefit Analysis of Computer Resources for Machine Learning
Champion, Richard A.
2007-01-01
Machine learning describes pattern-recognition algorithms - in this case, probabilistic neural networks (PNNs). These can be computationally intensive, in part because of the nonlinear optimizer, a numerical process that calibrates the PNN by minimizing a sum of squared errors. This report suggests efficiencies that are expressed as cost and benefit. The cost is computer time needed to calibrate the PNN, and the benefit is goodness-of-fit, how well the PNN learns the pattern in the data. There may be a point of diminishing returns where a further expenditure of computer resources does not produce additional benefits. Sampling is suggested as a cost-reduction strategy. One consideration is how many points to select for calibration and another is the geometric distribution of the points. The data points may be nonuniformly distributed across space, so that sampling at some locations provides additional benefit while sampling at other locations does not. A stratified sampling strategy can be designed to select more points in regions where they reduce the calibration error and fewer points in regions where they do not. Goodness-of-fit tests ensure that the sampling does not introduce bias. This approach is illustrated by statistical experiments for computing correlations between measures of roadless area and population density for the San Francisco Bay Area. The alternative to training efficiencies is to rely on high-performance computer systems. These may require specialized programming and algorithms that are optimized for parallel performance.
Factor Analysis for Clustered Observations.
ERIC Educational Resources Information Center
Longford, N. T.; Muthen, B. O.
1992-01-01
A two-level model for factor analysis is defined, and formulas for a scoring algorithm for this model are derived. A simple noniterative method based on decomposition of total sums of the squares and cross-products is discussed and illustrated with simulated data and data from the Second International Mathematics Study. (SLD)
Programmable Logic Application Notes
NASA Technical Reports Server (NTRS)
Katz, Richard; Day, John H. (Technical Monitor)
2001-01-01
This report will be provided each quarter as a source for reliability, radiation results, NASA capabilities, and other information on programmable logic devices and related applications. This quarter will continue a series of notes concentrating on analysis techniques with this issue's section discussing the use of Root-Sum-Square calculations for digital delays.
Clustering "N" Objects into "K" Groups under Optimal Scaling of Variables.
ERIC Educational Resources Information Center
van Buuren, Stef; Heiser, Willem J.
1989-01-01
A method based on homogeneity analysis (multiple correspondence analysis or multiple scaling) is proposed to reduce many categorical variables to one variable with "k" categories. The method is a generalization of the sum of squared distances cluster analysis problem to the case of mixed measurement level variables. (SLD)
Quadratic Expressions by Means of "Summing All the Matchsticks"
ERIC Educational Resources Information Center
Gierdien, M. Faaiz
2012-01-01
This note presents demonstrations of quadratic expressions that come about when particular problems are posed with respect to matchsticks that form regular triangles, squares, pentagons and so on. Usually when such "matchstick" problems are used as ways to foster algebraic thinking, the expressions for the number of matchstick quantities are…
A Canonical Ensemble Correlation Prediction Model for Seasonal Precipitation Anomaly
NASA Technical Reports Server (NTRS)
Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Guilong
2001-01-01
This report describes an optimal ensemble forecasting model for seasonal precipitation and its error estimation. Each individual forecast is based on the canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is made also using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. This new CCA model includes the following features: (1) the use of area-factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States precipitation field. The predictor is the sea surface temperature.
Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej
2015-01-01
The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis performed by judges of artistic gymnastics in assessing a gymnast's movement technique. The study participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: a real-time observation method and a frame-by-frame video analysis method. They also determined flexion angles of knee and hip joints using the computer program. In the case of the real-time observation method, the judges gave a total of 5.8 error points with an arithmetic mean of 0.16 points for the flexion of the knee joints. In the high-speed video analysis method, the total amounted to 8.6 error points and the mean value amounted to 0.24 error points. For the excessive flexion of hip joints, the sum of the error values was 2.2 error points and the arithmetic mean was 0.06 error points during real-time observation. The sum obtained using frame-by-frame analysis method equaled 10.8 and the mean equaled 0.30 error points. Error values obtained through the frame-by-frame video analysis of movement technique were higher than those obtained through the real-time observation method. The judges were able to indicate the number of the frame in which the maximal joint flexion occurred with good accuracy. Using the real-time observation method as well as the high-speed video analysis performed without determining the exact angle for assessing movement technique were found to be insufficient tools for improving the quality of judging.
Optimum nonparametric estimation of population density based on ordered distances
Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.
1982-01-01
The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.
Sum-rule corrections: A route to error cancellations in correlation matrix renormalisation theory
Liu, C.; Liu, J.; Yao, Y. X.; ...
2017-01-16
Here, we recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate ground state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat the electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way to minimise the error originated from the approximations in the theory. This conference proceeding reports our recent progress on this key issue, namely, we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.
NASA Astrophysics Data System (ADS)
Schmitz, Arne; Schinnenburg, Marc; Gross, James; Aguiar, Ana
For any communication system, the Signal-to-Interference-plus-Noise Ratio (SINR) of the link is a fundamental metric. Recall (cf. Chapter 9) that the SINR is defined as the ratio between the received power of the signal of interest and the sum of all "disturbing" power sources (i.e., interference and noise). From information theory it is known that a higher SINR increases the maximum possible error-free transmission rate (the Shannon capacity [417]) of any communication system, and vice versa. Likewise, the higher the SINR, the lower the bit error rate in practical systems. While one aspect of the SINR is the sum of all disturbing power sources, another is the received power, which depends on the transmitted power, the antennas used, possibly on signal processing techniques, and ultimately on the channel gain between transmitter and receiver.
de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie
2011-12-14
We present a direct comparison of phase sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics
Ambiguity resolution for satellite Doppler positioning systems
NASA Technical Reports Server (NTRS)
Argentiero, P. D.; Marini, J. W.
1977-01-01
A test for ambiguity resolution was derived which was the most powerful in the sense that it maximized the probability of a correct decision. When systematic error sources were properly included in the least squares reduction process to yield an optimal solution, the test reduced to choosing the solution which provided the smaller valuation of the least squares loss function. When systematic error sources were ignored in the least squares reduction, the most powerful test was a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudo-inverse of a reduced rank square matrix. A formula is presented for computing the power of the most powerful test. A numerical example is included in which the power of the test is computed for a situation which may occur during an actual satellite aided search and rescue mission.
Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model
NASA Astrophysics Data System (ADS)
Yu, Lean; Wang, Shouyang; Lai, K. K.
Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling and more weights should be given to those classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weights are given to the least squares classification errors of important classes than to the least squares classification errors of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.
Error analysis on squareness of multi-sensor integrated CMM for the multistep registration method
NASA Astrophysics Data System (ADS)
Zhao, Yan; Wang, Yiwen; Ye, Xiuling; Wang, Zhong; Fu, Luhua
2018-01-01
The multistep registration (MSR) method in [1] registers two different classes of sensors deployed on the z-arm of a CMM (coordinate measuring machine): a video camera and a tactile probe sensor. In general, it is difficult to obtain a very precise registration result with a single common standard; instead, this method measures two different standards, fixed on a steel plate with a constant distance between them. Although many factors have been considered, such as the measuring ability of the sensors, the uncertainty of the machine and the number of data pairs, there has been no exact analysis of the squareness between the x-axis and the y-axis in the xy plane. For this reason, an error analysis on the squareness of the multi-sensor integrated CMM for the multistep registration method is made to examine the validity of the MSR method. Synthetic experiments on the xy-plane squareness for the simplified MSR with an inclination rotation are simulated, which lead to a regular result. Experiments have been carried out with the multi-standard device also designed in [1]; meanwhile, inspections on the xy plane with the help of a laser interferometer have been carried out. The final results conform to the simulations, and the squareness errors of the MSR method are also similar to the results of the interferometer. In other words, the MSR method can also be used to verify the squareness of a CMM.
Random errors in interferometry with the least-squares method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Qi
2011-01-20
This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noises are present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source and between random error and the amplitude of the interference fringe.
Least Squares Metric, Unidimensional Scaling of Multivariate Linear Models.
ERIC Educational Resources Information Center
Poole, Keith T.
1990-01-01
A general approach to least-squares unidimensional scaling is presented. Ordering information contained in the parameters is used to transform the standard squared error loss function into a discrete rather than continuous form. Monte Carlo tests with 38,094 ratings of 261 senators, and 1,258 representatives demonstrate the procedure's…
ERIC Educational Resources Information Center
Wilson, Celia M.
2010-01-01
Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability.…
Medina, K.D.; Tasker, Gary D.
1985-01-01
The surface water data network in Kansas was analyzed using generalized least squares regression for its effectiveness in providing regional streamflow information. The correlation and time-sampling error of the streamflow characteristic are considered in the generalized least squares method. Unregulated medium-flow, low-flow and high-flow characteristics were selected to be representative of the regional information that can be obtained from streamflow gaging station records, for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and/or adding new stations. The analysis used streamflow records for all currently operated stations that were not affected by regulation and for discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The state was divided into three network areas, western, northeastern, and southeastern Kansas, and analysis was made for three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean square error for each cost level could be obtained by adding new stations and discontinuing some of the present network stations. Large reductions in sampling mean square error for low-flow information could be accomplished in all three network areas, with western Kansas having the most dramatic reduction. The addition of new stations would be most beneficial for medium-flow information in western Kansas, and to lesser degrees in the other two areas. The reduction of sampling mean square error for high-flow information would benefit most from the addition of new stations in western Kansas, with the effect diminishing in the other two areas; southeastern Kansas showed the smallest error reduction in high-flow information. A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas. (Author's abstract)
A theorem regarding roots of the zero-order Bessel function of the first kind
NASA Technical Reports Server (NTRS)
Lin, X.-A.; Agrawal, O. P.
1993-01-01
This paper investigates a problem on the steady-state, conduction-convection heat transfer process in cylindrical porous heat exchangers. The governing partial differential equations for the system are obtained using the energy conservation law. Solution of these equations and the concept of enthalpy lead to a new approach to prove a theorem that the sum of inverse squares of all the positive roots of the zero-order Bessel function of the first kind equals one-fourth. As a corollary, it is shown that the sum of one over the pth power (p greater than or equal to 2) of the roots converges to some constant.
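The theorem can be checked numerically. The sketch below (assuming SciPy is available) sums 1/j^2 over the first N positive roots j of the zero-order Bessel function J0 and shows the partial sums approaching 1/4.

import numpy as np
from scipy.special import jn_zeros

# Partial sums of 1/j^2 over the first N positive roots j of J_0;
# the theorem states that the infinite sum equals 1/4.
for n in (10, 100, 1000):
    roots = jn_zeros(0, n)
    print(f"N = {n:4d}   partial sum = {np.sum(1.0 / roots**2):.6f}")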
Simple Forest Canopy Thermal Exitance Model
NASA Technical Reports Server (NTRS)
Smith J. A.; Goltz, S. M.
1999-01-01
We describe a model to calculate brightness temperature and surface energy balance for a forest canopy system. The model is an extension of an earlier vegetation only model by inclusion of a simple soil layer. The root mean square error in brightness temperature for a dense forest canopy was 2.5 C. Surface energy balance predictions were also in good agreement. The corresponding root mean square errors for net radiation, latent, and sensible heat were 38.9, 30.7, and 41.4 W/sq m respectively.
A shock-tube determination of the SiO /A 1 Pi - X 1 Sigma +/ transition moment
NASA Technical Reports Server (NTRS)
Park, C.; Arnold, J. O.
1978-01-01
The sum of the squares of the electronic transition moments for the A 1 Pi - X 1 Sigma + band system of SiO has been determined from absorption measurements conducted in the reflected-shock region of a shock tube. The test gas was produced by shock-heating a mixture of N2O, SiCl4, and Ar, and the spectra were recorded photographically in the 260-290-nm wavelength range. The values of the sum as a function of internuclear distance between 2.8 and 3.3 Bohr were determined by comparing the measured absorption spectrum with that produced by a line-by-line synthetic-spectrum calculation which accounted for instrumental broadening. The value of the sum so deduced at an internuclear distance of 3.0 Bohr was 1.0 ± 0.3 atomic units.
Validation of in vivo 2D displacements from spiral cine DENSE at 3T.
Wehner, Gregory J; Suever, Jonathan D; Haggerty, Christopher M; Jing, Linyuan; Powell, David K; Hamlet, Sean M; Grabau, Jonathan D; Mojsejenko, Walter Dimitri; Zhong, Xiaodong; Epstein, Frederick H; Fornwalt, Brandon K
2015-01-30
Displacement Encoding with Stimulated Echoes (DENSE) encodes displacement into the phase of the magnetic resonance signal. Due to the stimulated echo, the signal is inherently low and fades through the cardiac cycle. To compensate, a spiral acquisition has been used at 1.5T. This spiral sequence has not been validated at 3T, where the increased signal would be valuable, but field inhomogeneities may result in measurement errors. We hypothesized that spiral cine DENSE is valid at 3T and tested this hypothesis by measuring displacement errors at both 1.5T and 3T in vivo. Two-dimensional spiral cine DENSE and tagged imaging of the left ventricle were performed on ten healthy subjects at 3T and six healthy subjects at 1.5T. Intersection points were identified on tagged images near end-systole. Displacements from the DENSE images were used to project those points back to their origins. The deviation from a perfect grid was used as a measure of accuracy and quantified as root-mean-squared error. This measure was compared between 3T and 1.5T with the Wilcoxon rank sum test. Inter-observer variability of strains and torsion quantified by DENSE and agreement between DENSE and harmonic phase (HARP) were assessed by Bland-Altman analyses. The signal to noise ratio (SNR) at each cardiac phase was compared between 3T and 1.5T with the Wilcoxon rank sum test. The displacement accuracy of spiral cine DENSE was not different between 3T and 1.5T (1.2 ± 0.3 mm and 1.2 ± 0.4 mm, respectively). Both values were lower than the DENSE pixel spacing of 2.8 mm. There were no substantial differences in inter-observer variability of DENSE or agreement of DENSE and HARP between 3T and 1.5T. Relative to 1.5T, the SNR at 3T was greater by a factor of 1.4 ± 0.3. The spiral cine DENSE acquisition that has been used at 1.5T to measure cardiac displacements can be applied at 3T with equivalent accuracy. The inter-observer variability and agreement of DENSE-derived peak strains and torsion with HARP is also comparable at both field strengths. Future studies with spiral cine DENSE may take advantage of the additional SNR at 3T.
Smooth empirical Bayes estimation of observation error variances in linear systems
NASA Technical Reports Server (NTRS)
Martz, H. F., Jr.; Lian, M. W.
1972-01-01
A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.
Ramanujan sums for signal processing of low-frequency noise.
Planat, Michel; Rosu, Haret; Perrine, Serge
2002-11-01
An aperiodic (low-frequency) spectrum may originate from the error term in the mean value of an arithmetical function such as Möbius function or Mangoldt function, which are coding sequences for prime numbers. In the discrete Fourier transform the analyzing wave is periodic and not well suited to represent the low-frequency regime. In place we introduce a different signal processing tool based on the Ramanujan sums c(q)(n), well adapted to the analysis of arithmetical sequences with many resonances p/q. The sums are quasiperiodic versus the time n and aperiodic versus the order q of the resonance. Different results arise from the use of this Ramanujan-Fourier transform in the context of arithmetical and experimental signals.
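A minimal Python sketch of the Ramanujan sums c_q(n) used in this transform is given below; it evaluates the defining sum over residues a coprime to q and prints a small table (the ranges of q and n are chosen purely for illustration).

from math import gcd
import cmath

def ramanujan_sum(q, n):
    # c_q(n): sum of exp(2*pi*i*a*n/q) over 1 <= a <= q with gcd(a, q) = 1.
    # The value is always an integer, so round off the tiny imaginary residue.
    total = sum(cmath.exp(2j * cmath.pi * a * n / q)
                for a in range(1, q + 1) if gcd(a, q) == 1)
    return round(total.real)

# Small table of c_q(n) for q = 1..5 and n = 1..8.
for q in range(1, 6):
    print(q, [ramanujan_sum(q, n) for n in range(1, 9)])

The output exhibits the quasiperiodicity in n described above (for example, c_2(n) alternates between -1 and 1).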
Exemplar-Based Clustering via Simulated Annealing
ERIC Educational Resources Information Center
Brusco, Michael J.; Kohn, Hans-Friedrich
2009-01-01
Several authors have touted the p-median model as a plausible alternative to within-cluster sums of squares (i.e., K-means) partitioning. Purported advantages of the p-median model include the provision of "exemplars" as cluster centers, robustness with respect to outliers, and the accommodation of a diverse range of similarity data. We developed…
On Arithmetic-Geometric-Mean Polynomials
ERIC Educational Resources Information Center
Griffiths, Martin; MacHale, Des
2017-01-01
We study here an aspect of an infinite set "P" of multivariate polynomials, the elements of which are associated with the arithmetic-geometric-mean inequality. In particular, we show in this article that there exist infinite subsets of "P" for which every element may be expressed as a finite sum of squares of real…
Market-Based Resource Allocation in a Wirelessly Integrated Naval Engineering Plant
2009-12-01
conflicts, and the fourth term summing lower diagonal conflicts. Each combination of squares q,j and qu returns 1 if there is a queen conflict and 0 if...S. J., Hill, J., Szewczyk, R. and Woo, A. (2002). " MICA - The Commercialization of Microsensor Motes," Sensor Magazine, Advanstar Communications Inc
On the null distribution of Bayes factors in linear regression
USDA-ARS?s Scientific Manuscript database
We show that under the null, the 2 log (Bayes factor) is asymptotically distributed as a weighted sum of chi-squared random variables with a shifted mean. This claim holds for Bayesian multi-linear regression with a family of conjugate priors, namely, the normal-inverse-gamma prior, the g-prior, and...
Bernard R. Parresol
1993-01-01
In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Nagchaudhuri, Abhijit
1998-01-01
This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrumental pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
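The following Python sketch shows the basic LMS weight update for identifying an unknown transfer function with a transversal filter, in the spirit of the on-orbit identification step described above. The plant coefficients, step size, and filter length are illustrative assumptions, not the spacecraft model.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown FIR "plant" the adaptive transversal filter should learn.
true_weights = np.array([0.4, -0.25, 0.1])
n_taps, mu, n_samples = 3, 0.05, 5000

x = rng.standard_normal(n_samples)            # excitation signal
d = np.convolve(x, true_weights)[:n_samples]  # desired (plant) output

w = np.zeros(n_taps)                          # adaptive filter weights
for k in range(n_taps, n_samples):
    u = x[k - n_taps + 1:k + 1][::-1]         # most recent inputs, newest first
    e = d[k] - w @ u                          # instantaneous error
    w = w + 2.0 * mu * e * u                  # LMS weight update

print("estimated weights:", np.round(w, 3))

After convergence the estimated weights approximate the plant coefficients; the Filtered-X variant adds a filtering of the reference signal through this identified model before the same update.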
Self-duality in higher dimensions
NASA Astrophysics Data System (ADS)
Bilge, A. H.; Dereli, T.; Kocak, S.
2017-01-01
Let ω be a 2-form on a 2n-dimensional manifold. In previous work, we called ω "strong self-dual" if the eigenvalues of its matrix with respect to an orthonormal frame are equal in absolute value. In a series of papers, we showed that strong self-duality agrees with previous definitions; in particular, if ω is strong self-dual, then, in 2n dimensions, ω^n is proportional to its Hodge dual *ω and, in 4n dimensions, ω^n is Hodge self-dual. We also obtained a local expression of the Bonan 4-form on 8-manifolds with Spin(7) holonomy as the sum of the squares of any orthonormal basis of a maximal linear subspace of strong self-dual 2-forms. In the present work we generalize the notion of strong self-duality to odd-dimensional manifolds and express the dual of the fundamental 3-form on 7-manifolds with G2 holonomy as a sum of the squares of an orthonormal basis of a maximal linear subspace of strong self-dual 2-forms.
Sum-of-Squares-Based Region of Attraction Analysis for Gain-Scheduled Three-Loop Autopilot
NASA Astrophysics Data System (ADS)
Seo, Min-Won; Kwon, Hyuck-Hoon; Choi, Han-Lim
2018-04-01
A conventional method of designing a missile autopilot is to linearize the original nonlinear dynamics at several trim points, then to determine linear controllers for each linearized model, and finally implement gain-scheduling technique. The validation of such a controller is often based on linear system analysis for the linear closed-loop system at the trim conditions. Although this type of gain-scheduled linear autopilot works well in practice, validation based solely on linear analysis may not be sufficient to fully characterize the closed-loop system especially when the aerodynamic coefficients exhibit substantial nonlinearity with respect to the flight condition. The purpose of this paper is to present a methodology for analyzing the stability of a gain-scheduled controller in a setting close to the original nonlinear setting. The method is based on sum-of-squares (SOS) optimization that can be used to characterize the region of attraction of a polynomial system by solving convex optimization problems. The applicability of the proposed SOS-based methodology is verified on a short-period autopilot of a skid-to-turn missile.
Sensitivity analysis of a ground-water-flow model
Torak, Lynn J.; ,
1991-01-01
A sensitivity analysis was performed on 18 hydrological factors affecting steady-state groundwater flow in the Upper Floridan aquifer near Albany, southwestern Georgia. Computations were based on a calibrated, two-dimensional, finite-element digital model of the stream-aquifer system and the corresponding data inputs. Flow-system sensitivity was analyzed by computing water-level residuals obtained from simulations involving individual changes to each hydrological factor. Hydrological factors to which computed water levels were most sensitive were those that produced the largest change in the sum-of-squares of residuals for the smallest change in factor value. Plots of the sum-of-squares of residuals against multiplier or additive values that effect change in the hydrological factors are used to evaluate the influence of each factor on the simulated flow system. The shapes of these 'sensitivity curves' indicate the importance of each hydrological factor to the flow system. Because the sensitivity analysis can be performed during the preliminary phase of a water-resource investigation, it can be used to identify the types of hydrological data required to accurately characterize the flow system prior to collecting additional data or making management decisions.
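The sensitivity-curve idea can be sketched in a few lines of Python: scale one hydrological factor by a multiplier, recompute the residuals between observed and simulated water levels, and record the sum of squared residuals at each multiplier value. The toy model and numbers below are assumptions for illustration, not the Albany flow model.

import numpy as np

# Observed water levels and a toy stand-in for the calibrated model, whose
# computed heads respond to scaling one hydrological factor by a multiplier.
observed = np.array([10.2, 11.1, 9.8, 10.5])

def simulated_heads(multiplier):
    base = np.array([10.0, 11.0, 10.0, 10.4])
    return base + 0.6 * np.log(multiplier)    # hypothetical response

# One point on the "sensitivity curve" per multiplier value.
for m in (0.5, 0.8, 1.0, 1.25, 2.0):
    residuals = observed - simulated_heads(m)
    print(f"multiplier {m:4.2f}   sum of squared residuals {np.sum(residuals**2):.4f}")

A factor is influential when the sum of squares changes steeply over this range of multipliers, which is what the shape of the sensitivity curves indicates.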
Linear monogamy of entanglement in three-qubit systems
NASA Astrophysics Data System (ADS)
Liu, Feng; Gao, Fei; Wen, Qiao-Yan
2015-11-01
For any three-qubit quantum systems ABC, Oliveira et al. numerically found that both the concurrence and the entanglement of formation (EoF) obey the linear monogamy relations in pure states. They also conjectured that the linear monogamy relations can be saturated when the focus qubit A is maximally entangled with the joint qubits BC. In this work, we prove analytically that both the concurrence and EoF obey linear monogamy relations in an arbitrary three-qubit state. Furthermore, we verify that all three-qubit pure states are maximally entangled in the bipartition A|BC when they saturate the linear monogamy relations. We also study the distribution of the concurrence and EoF. More specifically, when the amount of entanglement between A and B equals to that of A and C, we show that the sum of EoF itself saturates the linear monogamy relation, while the sum of the squared EoF is minimum. Different from EoF, the concurrence and the squared concurrence both saturate the linear monogamy relations when the entanglement between A and B equals to that of A and C.
Guaranteed cost control of polynomial fuzzy systems via a sum of squares approach.
Tanaka, Kazuo; Ohtake, Hiroshi; Wang, Hua O
2009-04-01
This paper presents the guaranteed cost control of polynomial fuzzy systems via a sum of squares (SOS) approach. First, we present a polynomial fuzzy model and controller that are more general representations of the well-known Takagi-Sugeno (T-S) fuzzy model and controller, respectively. Second, we derive a guaranteed cost control design condition based on polynomial Lyapunov functions. Hence, the design approach discussed in this paper is more general than the existing LMI approaches (to T-S fuzzy control system designs) based on quadratic Lyapunov functions. The design condition realizes a guaranteed cost control by minimizing the upper bound of a given performance function. In addition, the design condition in the proposed approach can be represented in terms of SOS and is numerically (partially symbolically) solved via the recently developed SOSTOOLS. To illustrate the validity of the design approach, two design examples are provided. The first example deals with a complicated nonlinear system. The second example presents micro helicopter control. Both examples show that our approach provides more extensive design results than the existing LMI approach.
Robustness Analysis and Optimally Robust Control Design via Sum-of-Squares
NASA Technical Reports Server (NTRS)
Dorobantu, Andrei; Crespo, Luis G.; Seiler, Peter J.
2012-01-01
A control analysis and design framework is proposed for systems subject to parametric uncertainty. The underlying strategies are based on sum-of-squares (SOS) polynomial analysis and nonlinear optimization to design an optimally robust controller. The approach determines a maximum uncertainty range for which the closed-loop system satisfies a set of stability and performance requirements. These requirements, defined as inequality constraints on several metrics, are restricted to polynomial functions of the uncertainty. To quantify robustness, SOS analysis is used to prove that the closed-loop system complies with the requirements for a given uncertainty range. The maximum uncertainty range, calculated by assessing a sequence of increasingly larger ranges, serves as a robustness metric for the closed-loop system. To optimize the control design, nonlinear optimization is used to enlarge the maximum uncertainty range by tuning the controller gains. Hence, the resulting controller is optimally robust to parametric uncertainty. This approach balances the robustness margins corresponding to each requirement in order to maximize the aggregate system robustness. The proposed framework is applied to a simple linear short-period aircraft model with uncertain aerodynamic coefficients.
Asteroid orbit fitting with radar and angular observations
NASA Astrophysics Data System (ADS)
Baturin, A. P.
2013-12-01
The asteroid orbit fitting problem using their radar and angular observations has been considered. The problem was solved in a standard way by minimizing the weighted sum of squares of residuals. In the orbit fitting both kinds of radar observations have been used: observations of time delays and of Doppler frequency shifts. The weight for angular observations has been set the same for all of them and has been determined as the inverse mean-square residual obtained in the orbit fitting using just angular observations. The weights of radar observations have been set as the inverse squared errors of these observations, published together with them in the Minor Planet Center electronic circulars (MPECs). For the orbit fitting five asteroids have been taken from these circulars, chosen to fulfill the requirement that more than six radar observations of each be available. The asteroids are 1950 DA, 1999 RQ36, 2002 NY40, 2004 DC and 2005 EU2. Several orbit fittings for these asteroids have been done: with just angular observations; with just radar observations; with both angular and radar observations. The obtained results are quite acceptable because in the last case the mean-square angular residuals are approximately equal to those obtained in the fitting with just angular observations. As to the radar mean-square residuals, the time delay residuals for three asteroids do not exceed 1 μs, for two others ~10 μs, and the Doppler shift residuals for three asteroids do not exceed 1 Hz, for two others ~10 Hz. The motion equations included perturbations from 9 planets and the Moon using their ephemerides DE422. The numerical integration has been performed with the Everhart 27-order method with variable step. All calculations have been executed to a 34-digit decimal precision (i.e. using 128-bit floating-point numbers). Further, the sizes of confidence ellipsoids of improved orbit parameters have been compared. It has been accepted that an indicator of ellipsoid size is the geometric mean of its six semi-axes. A comparison of sizes has shown that confidence ellipsoids obtained in orbit fitting with both angular and radar observations are several times smaller than ellipsoids obtained with just angular observations.
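A minimal weighted least-squares sketch in Python, in the spirit of the weighting scheme described above (tighter weights for the more precise "radar-like" observations); the design matrix, true parameters, and error levels are illustrative assumptions, not an orbit model.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linearized problem: observations y = A @ p + noise, where the
# last rows play the role of precise "radar-like" data (small sigma).
A = rng.standard_normal((20, 3))
true_p = np.array([1.0, -0.5, 2.0])
sigma = np.concatenate([np.full(15, 0.5), np.full(5, 0.05)])
y = A @ true_p + rng.normal(0.0, sigma)

# Weights are inverse squared errors; minimize the weighted sum of squared residuals.
W = np.diag(1.0 / sigma**2)
p_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
print("estimated parameters:", np.round(p_hat, 3))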
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
Super-linear Precision in Simple Neural Population Codes
NASA Astrophysics Data System (ADS)
Schwab, David; Fiete, Ila
2015-03-01
A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher Information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arises because it gives, through the Cramer-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well-known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly when considering the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
Mapping soil particle-size fractions: A comparison of compositional kriging and log-ratio kriging
NASA Astrophysics Data System (ADS)
Wang, Zong; Shi, Wenjiao
2017-03-01
Soil particle-size fractions (psf), as basic physical variables, frequently need to be accurately predicted for regional hydrological, ecological, geological, agricultural and environmental studies. Several methods have been proposed to interpolate the spatial distributions of soil psf, but the relative performance of compositional kriging and different log-ratio kriging methods is still unclear. Four log-ratio transformations, including the additive log-ratio (alr), centered log-ratio (clr), isometric log-ratio (ilr), and symmetry log-ratio (slr), combined with ordinary kriging (log-ratio kriging: alr_OK, clr_OK, ilr_OK and slr_OK), were compared with compositional kriging (CK) for the spatial prediction of soil psf in Tianlaochi of the Heihe River Basin, China. Root mean squared error (RMSE), Aitchison's distance (AD), standardized residual sum of squares (STRESS) and the right ratio of the predicted soil texture types (RR) were chosen to evaluate the accuracy of the different interpolators. The results showed that CK had better accuracy than the four log-ratio kriging methods. The RMSE (sand, 9.27%; silt, 7.67%; clay, 4.17%), AD (0.45) and STRESS (0.60) of CK were the lowest, and its RR (58.65%) was the highest among the five interpolators. The clr_OK achieved relatively better performance than the other log-ratio kriging methods. In addition, CK presented a reasonable and smooth transition in mapping soil psf according to the environmental factors. The study gives insights for mapping soil psf accurately by comparing different methods for compositional data interpolation. Further research on methods combined with ancillary variables is needed to improve the interpolation performance.
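As an illustration of one of the transforms compared above, the Python sketch below applies the centered log-ratio (clr) transform to a hypothetical sand-silt-clay composition and maps it back; the kriging step itself is not shown, and the composition values are assumptions.

import numpy as np

def clr(composition):
    # Centered log-ratio transform of a composition (parts summing to 1).
    x = np.asarray(composition, dtype=float)
    g = np.exp(np.mean(np.log(x)))   # geometric mean
    return np.log(x / g)

def clr_inverse(z):
    # Map clr coordinates back to a composition summing to 1.
    e = np.exp(z)
    return e / e.sum()

sand_silt_clay = [0.55, 0.30, 0.15]  # hypothetical soil psf
z = clr(sand_silt_clay)
print("clr coordinates :", np.round(z, 3))
print("back-transformed:", np.round(clr_inverse(z), 3))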
Behavior of inactivation kinetics of Escherichia coli by dense phase carbon dioxide.
Liao, Hongmei; Zhang, Yan; Hu, Xiaosong; Liao, Xiaojun; Wu, Jihong
2008-08-15
Inactivation of Escherichia coli in cloudy apple juice by dense phase carbon dioxide (DPCD) was investigated. The pressures were 10, 20 and 30 MPa, and the temperatures were 32, 37 and 42 degrees C. The inactivation kinetic behavior of E. coli conformed to a sigmoid curve with a shoulder and a tail, which was closely related to temperature and pressure. With the increase of temperature or pressure, the shoulder became unclear or even disappeared. The experimental data were well fitted to a model proposed by Xiong et al. [Xiong, R., Xie, G., Edmondson, A.E., Sheard, M.A., 1999. A mathematical model for bacterial inactivation. International Journal of Food Microbiology 46, 45-55]; the kinetic parameters t(lag) (the lag time length), f (the initial proportion of the less resistant population), k(1) (the inactivation rate constant of the less resistant fraction), k(2) (the inactivation rate constant of the resistant fraction), and t(4-D) (the time required for a 4-log-cycle reduction of bacteria under a given condition) were obtained from this model. The t(lag) declined from 4.032 to 0.890 min and t(4-D) from 54.955 to 18.840 min; k(1) was 1.74-4.4 times k(2). Moreover, the model was validated with additional experimental data; the accuracy factor (Af), bias factor (Bf), root mean square error (RMSE), sum of squares (SS), and correlation coefficient (R(2)) were used to evaluate model performance, indicating that the model provides a good fit to the experimental data.
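The model-performance measures named above can be computed as in the Python sketch below. The Af and Bf formulas follow the usual log10-ratio definitions from predictive microbiology, and the observed/predicted values are hypothetical, not the study's data.

import numpy as np

def fit_metrics(observed, predicted):
    # Goodness-of-fit measures; Af and Bf use the usual log10-ratio definitions.
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    resid = obs - pred
    ss = np.sum(resid**2)                           # sum of squares
    rmse = np.sqrt(np.mean(resid**2))               # root mean square error
    r2 = 1.0 - ss / np.sum((obs - obs.mean())**2)   # coefficient of determination
    log_ratio = np.log10(pred / obs)
    bf = 10 ** np.mean(log_ratio)                   # bias factor
    af = 10 ** np.mean(np.abs(log_ratio))           # accuracy factor
    return dict(SS=ss, RMSE=rmse, R2=r2, Bf=bf, Af=af)

# Hypothetical log10 survivor counts: observed versus model-predicted.
obs = [7.0, 6.1, 5.0, 3.9, 3.1]
pred = [6.9, 6.0, 5.2, 4.0, 3.0]
print(fit_metrics(obs, pred))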
Identification of pilot dynamics from in-flight tracking data
NASA Technical Reports Server (NTRS)
Hess, R. A.; Mnich, M. A.
1985-01-01
Data from a representative flight task involving an F-14 'pursuer' aircraft tracking a T-38 'target' aircraft in a 3G wind-up turn and in level flight are processed using a least squares identification technique in an attempt to identify pilot/vehicle dynamics. Comparative identification results are provided by a Fourier coefficient method which requires a carefully designed and implemented input consisting of a sum of sinusoids. The least-squares results compare favorably with those obtained by the Fourier technique. An example of crossover frequency regression is discussed in the light of the conditions of one of the flight configurations.
NASA Astrophysics Data System (ADS)
Gidey, Amanuel
2018-06-01
Determining the suitability and vulnerability of groundwater quality for irrigation use is a key early warning and a first step toward careful management of groundwater resources to diminish the impacts on irrigation. This study was conducted to determine the overall suitability of groundwater quality for irrigation use and to generate spatial distribution maps of water quality in the Elala catchment, Northern Ethiopia. Thirty-nine groundwater samples were collected to analyze and map the water quality variables. Atomic absorption spectrophotometry, ultraviolet spectrophotometry, titration and calculation methods were used for laboratory groundwater quality analysis. ArcGIS, geospatial analysis tools, semivariogram model types and interpolation methods were used to generate geospatial distribution maps. Twelve and eight water quality variables were used to produce the weighted overlay and irrigation water quality index models, respectively. Root-mean-square error, mean square error, absolute square error, mean error, root-mean-square standardized error, and measured versus predicted values were used for cross-validation. The overall weighted overlay model result showed that 146 km2 of the catchment is highly suitable, 135 km2 moderately suitable and 60 km2 unsuitable for irrigation use. The irrigation water quality index result indicates 10.26% with no restriction, 23.08% with low restriction, 20.51% with moderate restriction, 15.38% with high restriction and 30.76% with severe restriction for irrigation use. GIS and the irrigation water quality index are useful methods for irrigation water resources management to achieve full-yield irrigation production, improve food security and sustain it for a long period, while avoiding increasing environmental problems for future generations.
Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.
2015-09-28
Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all percentage of annual exceedance probabilities and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use for stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.
Ockhuijsen, Henrietta D L; van Smeden, Maarten; van den Hoogen, Agnes; Boivin, Jacky
2017-06-01
To examine construct and criterion validity of the Dutch SCREENIVF among women and men undergoing a fertility treatment. A prospective longitudinal study nested in a randomized controlled trial. University hospital. Couples, 468 women and 383 men, undergoing an IVF/intracytoplasmic sperm injection (ICSI) treatment in a fertility clinic, completed the SCREENIVF. Construct and criterion validity of the SCREENIVF. The comparative fit index and root mean square error of approximation for women and men show a good fit of the factor model. Across time, the sensitivity for the Hospital Anxiety and Depression Scale subscale in women ranged from 61%-98%, specificity 53%-65%, predictive value of a positive test (PVP) 13%-56%, predictive value of a negative test (PVN) 70%-99%. The sensitivity scores for men ranged from 38%-100%, specificity 71%-75%, PVP 9%-27%, PVN 92%-100%. A prediction model revealed that for women 68.7% of the variance in the Hospital Anxiety and Depression Scale at time 1, 42.5% at time 2 and 38.9% at time 3 was explained by the predictors, the sum score scales of the SCREENIVF. For men, 58.1% of the variance in the Hospital Anxiety and Depression Scale at time 1, 46.5% at time 2 and 37.3% at time 3 was explained by the predictors, the sum score scales of the SCREENIVF. The SCREENIVF has good construct validity but the concurrent validity is better than the predictive validity. SCREENIVF will be most effectively used in fertility clinics at the start of treatment and should not be used as a predictive tool. Copyright © 2017 American Society for Reproductive Medicine. All rights reserved.
Stochastic goal-oriented error estimation with memory
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Marotzke, Jochem; Korn, Peter
2017-11-01
We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
[NIR Assignment of Magnolol by 2D-COS Technology and Model Application in Huoxiangzhengqi Oral Liquid].
Pei, Yan-ling; Wu, Zhi-sheng; Shi, Xin-yuan; Pan, Xiao-ning; Peng, Yan-fang; Qiao, Yan-jiang
2015-08-01
Near infrared (NIR) spectroscopy assignment of Magnolol was performed using deuterated chloroform solvent and two-dimensional correlation spectroscopy (2D-COS) technology. According to the synchronous spectra of deuterated chloroform solvent and Magnolol, 1365~1455, 1600~1720, 2000~2181 and 2275~2465 nm were the characteristic absorption regions of Magnolol. Connected with the structure of Magnolol, 1440 nm was the stretching vibration of the phenolic O-H group, 1679 nm was the stretching vibration of aryl and of methyl connected with aryl, 2117, 2304, 2339 and 2370 nm were combinations of the stretching, bending and deformation vibrations of aryl C-H, and 2445 nm was the bending vibration of methyl linked with the aryl group; these bands are attributed to the characteristics of Magnolol. Huoxiangzhengqi Oral Liquid was adopted to study Magnolol; the characteristic band from spectral assignment and the bands selected by interval Partial Least Squares (iPLS) and Synergy interval Partial Least Squares (SiPLS) were used to establish Partial Least Squares (PLS) quantitative models. The coefficients of determination Rcal(2) and Rpre(2) were greater than 0.99, and the Root Mean Square Error of Calibration (RMSEC), Root Mean Square Error of Cross Validation (RMSECV) and Root Mean Square Error of Prediction (RMSEP) were very small. It indicated that the characteristic band obtained by spectral assignment gives the same results as the chemometrics band selection in the PLS model. It provided a reference for NIR spectral assignment of chemical compositions in Chinese Materia Medica, and the band filters of NIR were interpreted.
Froud, Robert; Abel, Gary
2014-01-01
Background Receiver Operator Characteristic (ROC) curves are being used to identify Minimally Important Change (MIC) thresholds on scales that measure a change in health status. In quasi-continuous patient reported outcome measures, such as those that measure changes in chronic diseases with variable clinical trajectories, sensitivity and specificity are often valued equally. Notwithstanding methodologists agreeing that these should be valued equally, different approaches have been taken to estimating MIC thresholds using ROC curves. Aims and objectives We aimed to compare the different approaches used with a new approach, exploring the extent to which the methods choose different thresholds, and considering the effect of differences on conclusions in responder analyses. Methods Using graphical methods, hypothetical data, and data from a large randomised controlled trial of manual therapy for low back pain, we compared two existing approaches with a new approach that is based on the addition of the sums of squares of 1-sensitivity and 1-specificity. Results There can be divergence in the thresholds chosen by different estimators. The cut-point selected by different estimators is dependent on the relationship between the cut-points in ROC space and the different contours described by the estimators. In particular, asymmetry and the number of possible cut-points affects threshold selection. Conclusion Choice of MIC estimator is important. Different methods for choosing cut-points can lead to materially different MIC thresholds and thus affect results of responder analyses and trial conclusions. An estimator based on the smallest sum of squares of 1-sensitivity and 1-specificity is preferable when sensitivity and specificity are valued equally. Unlike other methods currently in use, the cut-point chosen by the sum of squares method always and efficiently chooses the cut-point closest to the top-left corner of ROC space, regardless of the shape of the ROC curve. PMID:25474472
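A minimal Python sketch of the sum-of-squares estimator described above: for each candidate cut-point it computes sensitivity and specificity against an external improvement anchor and keeps the cut-point minimizing (1-sensitivity)^2 + (1-specificity)^2. The change scores and anchor values below are hypothetical.

import numpy as np

def mic_threshold_sum_of_squares(change_scores, improved):
    # Keep the cut-point minimizing (1 - sensitivity)^2 + (1 - specificity)^2,
    # i.e. the candidate threshold closest to the top-left corner of ROC space.
    change_scores = np.asarray(change_scores, dtype=float)
    improved = np.asarray(improved, dtype=bool)
    best_cut, best_d2 = None, np.inf
    for cut in np.unique(change_scores):
        test_pos = change_scores >= cut
        sens = np.mean(test_pos[improved])
        spec = np.mean(~test_pos[~improved])
        d2 = (1.0 - sens) ** 2 + (1.0 - spec) ** 2
        if d2 < best_d2:
            best_cut, best_d2 = cut, d2
    return best_cut

# Hypothetical change scores and an external "improved / not improved" anchor.
scores = [0, 1, 2, 2, 3, 4, 5, 5, 6, 8]
anchor = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print("MIC threshold:", mic_threshold_sum_of_squares(scores, anchor))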
ERIC Educational Resources Information Center
Monahan, Patrick O.; Ankenmann, Robert D.
2010-01-01
When the matching score is either less than perfectly reliable or not a sufficient statistic for determining latent proficiency in data conforming to item response theory (IRT) models, Type I error (TIE) inflation may occur for the Mantel-Haenszel (MH) procedure or any differential item functioning (DIF) procedure that matches on summed-item…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidtlein, CR; Beattie, B; Humm, J
2014-06-15
Purpose: To investigate the performance of a new penalized-likelihood PET image reconstruction algorithm using the l1-norm total-variation (TV) sum of the 1st- through 4th-order gradients as the penalty. Simulated and brain patient data sets were analyzed. Methods: This work represents an extension of the preconditioned alternating projection algorithm (PAPA) for emission-computed tomography. In this new generalized algorithm (GPAPA), the penalty term is expanded to allow multiple components, in this case the sum of the 1st- to 4th-order gradients, to reduce artificial piece-wise constant regions ("staircase" artifacts typical for TV) seen in PAPA images penalized with only the 1st-order gradient. Simulated data were used to test for "staircase" artifacts and to optimize the penalty hyper-parameter in the root-mean-squared error (RMSE) sense. Patient FDG brain scans were acquired on a GE D690 PET/CT (370 MBq at 1-hour post-injection for 10 minutes) in time-of-flight mode and in all cases were reconstructed using resolution recovery projectors. GPAPA images were compared to PAPA and RMSE-optimally filtered OSEM (fully converged) in simulations and to clinical OSEM reconstructions (3 iterations, 32 subsets) with 2.6 mm XY Gaussian and standard 3-point axial smoothing post-filters. Results: The results from the simulated data show a significant reduction in the "staircase" artifact for GPAPA compared to PAPA and lower RMSE (up to 35%) compared to optimally filtered OSEM. A simple power-law relationship between the RMSE-optimal hyper-parameters and the noise equivalent counts (NEC) per voxel is revealed. Qualitatively, the patient images appear much sharper and with less noise than standard clinical images. The convergence rate is similar to OSEM. Conclusions: GPAPA reconstructions using the l1-norm total-variation sum of the 1st- through 4th-order gradients as the penalty show great promise for the improvement of image quality over that currently achieved with clinical OSEM reconstructions.
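A simple stand-in for the penalty term described above (not the authors' GPAPA implementation) can be written in a few lines of Python: sum the l1 norms of the 1st- through 4th-order finite differences of an image along each axis.

import numpy as np

def higher_order_tv_penalty(img, orders=(1, 2, 3, 4)):
    # Sum of l1 norms of the 1st- through 4th-order finite differences along
    # each image axis; a simple stand-in for the penalty described above.
    total = 0.0
    for order in orders:
        for axis in (0, 1):
            total += np.abs(np.diff(img, n=order, axis=axis)).sum()
    return total

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0   # piece-wise constant test image
print("penalty value:", higher_order_tv_penalty(img))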
NASA Technical Reports Server (NTRS)
Cai, Zhiqiang; Manteuffel, Thomas A.; McCormick, Stephen F.
1996-01-01
In this paper, we study the least-squares method for the generalized Stokes equations (including linear elasticity) based on the velocity-vorticity-pressure formulation in d = 2 or 3 dimensions. The least-squares functional is defined in terms of the sum of the L^2- and H^(-1)-norms of the residual equations, which is weighted appropriately by the Reynolds number. Our approach for establishing ellipticity of the functional does not use ADN theory, but is founded more on basic principles. We also analyze the case where the H^(-1)-norm in the functional is replaced by a discrete functional to make the computation feasible. We show that the resulting algebraic equations can be uniformly preconditioned by well-known techniques.
Convex lattice polygons of fixed area with perimeter-dependent weights.
Rajesh, R; Dhar, Deepak
2005-01-01
We study fully convex polygons with a given area and variable perimeter length on square and hexagonal lattices. We attach a weight t^m to a convex polygon of perimeter m and show that the sum of weights of all polygons with a fixed area s varies as s^(-theta_conv) e^(K(t) sqrt(s)) for large s and t less than a critical threshold t_c, where K(t) is a t-dependent constant and theta_conv is a critical exponent which does not change with t. Using heuristic arguments, we find that theta_conv is 1/4 for the square lattice, but -1/4 for the hexagonal lattice. The reason for this unexpected nonuniversality of theta_conv is traced to the existence of sharp corners in the asymptotic shape of these polygons.
Measurement of human pilot dynamic characteristics in flight simulation
NASA Technical Reports Server (NTRS)
Reedy, James T.
1987-01-01
Fast Fourier Transform (FFT) and Least Square Error (LSE) estimation techniques were applied to the problem of identifying pilot-vehicle dynamic characteristics in flight simulation. A brief investigation of the effects of noise, input bandwidth and system delay upon the FFT and LSE techniques was undertaken using synthetic data. Data from a piloted simulation conducted at NASA Ames Research Center was then analyzed. The simulation was performed in the NASA Ames Research Center Variable Stability CH-47B helicopter operating in fixed-base simulator mode. The piloting task consisted of maintaining the simulated vehicle over a moving hover pad whose motion was described by a random-appearing sum of sinusoids. The two test subjects used a head-down, color cathode ray tube (CRT) display for guidance and control information. Test configurations differed in the number of axes being controlled by the pilot (longitudinal only versus longitudinal and lateral), and in the presence or absence of an important display indicator called an 'acceleration ball'. A number of different pilot-vehicle transfer functions were measured, and where appropriate, qualitatively compared with theoretical pilot-vehicle models. Some indirect evidence suggesting pursuit behavior on the part of the test subjects is discussed.
Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty
NASA Technical Reports Server (NTRS)
Mather, Janice L.; Taylor, Shawn C.
2015-01-01
In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.
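A minimal sketch of the root-sum-square combination step described above is given below. The individual uncertainty components and their magnitudes are hypothetical placeholders, not values from the report; the only point illustrated is that independent standard uncertainties combine as the square root of the sum of their squares.

```python
import math

def total_uncertainty_rss(components):
    """Combine independent standard uncertainty components by root-sum-square."""
    return math.sqrt(sum(u ** 2 for u in components))

# Hypothetical standard uncertainties for a leak detector (same units, e.g. scc/s):
components = {
    "resolution": 1.0e-9,
    "repeatability": 2.5e-9,
    "hysteresis": 1.5e-9,
    "drift": 2.0e-9,
    "calibration_standard": 3.0e-9,
}
u_total = total_uncertainty_rss(components.values())
print(f"combined standard uncertainty = {u_total:.2e} scc/s")
```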
Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki
2015-01-01
Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
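As a minimal illustration of the SSE-minimization step described above, the sketch below fits a hypothetical two-parameter exponential-decay model to synthetic data with a gradient-based local optimizer (scipy's L-BFGS-B). It is not one of the four biological case studies; as the abstract notes, a stochastic or multi-start scheme would be needed to avoid local minima.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical model: exponential decay y = a * exp(-k * t).
def model(params, t):
    a, k = params
    return a * np.exp(-k * t)

def sse(params, t, y_obs):
    """Sum of squared errors between model prediction and data."""
    residuals = model(params, t) - y_obs
    return np.sum(residuals ** 2)

# Synthetic 'experimental' data (illustration only).
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)
y_obs = model((2.0, 0.4), t) + rng.normal(0, 0.05, t.size)

# Gradient-based local search from an initial guess; a stochastic or
# multi-start scheme would be needed to escape local minima.
result = minimize(sse, x0=(1.0, 1.0), args=(t, y_obs), method="L-BFGS-B")
print("estimated parameters:", result.x, "SSE:", result.fun)
```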
Phuah, Eng-Tong; Lee, Yee-Ying; Tang, Teck-Kim
2018-01-01
Diacylglycerol (DAG) and monoacylglycerol (MAG) are two naturally occurring minor components found in most edible fats and oils. These compounds have gained increasing market demand owing to their unique physicochemical properties. Enzymatic glycerolysis in a solvent-free system might be a promising approach for producing DAG- and MAG-enriched oil. Understanding of the glycerolysis mechanism is therefore of great importance for process simulation and optimization. In this study, a commercial immobilized lipase (Lipozyme TL IM) was used to catalyze the glycerolysis reaction. The kinetics of the enzymatic glycerolysis reaction between triacylglycerol (TAG) and glycerol (G) were modeled using rate equations with an unsteady-state assumption. Ternary complex, ping-pong bi-bi and complex ping-pong bi-bi models were proposed and compared in this study. The reaction rate constants were determined using non-linear regression, and the sum of squared errors (SSE) was minimized. The present work revealed satisfactory agreement between the experimental data and the results generated by the complex ping-pong bi-bi model, compared with the other models. The proposed kinetic model would facilitate understanding of enzymatic glycerolysis for DAG and MAG production and the design optimization of a pilot-scale reactor. PMID:29401481
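The kinetic rate equations of the ternary-complex and ping-pong bi-bi models are not reproduced here; the sketch below only illustrates the general fitting step the abstract describes, using a generic Michaelis-Menten-type rate law as a stand-in and scipy's curve_fit to estimate rate constants by minimizing the SSE. All data values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder rate law standing in for the glycerolysis kinetic models
# (the actual ternary-complex / ping-pong bi-bi equations are not reproduced here).
def rate(substrate, v_max, k_m):
    return v_max * substrate / (k_m + substrate)

# Hypothetical initial-rate data (substrate concentration vs observed rate).
s = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
v = np.array([0.18, 0.30, 0.45, 0.58, 0.67, 0.72])

# curve_fit estimates the rate constants by minimising the sum of squared errors.
popt, pcov = curve_fit(rate, s, v, p0=(1.0, 1.0))
sse = np.sum((rate(s, *popt) - v) ** 2)
print("v_max, K_m =", popt, " SSE =", sse)
```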
NASA Technical Reports Server (NTRS)
Lanyi, Gabor E.; Roth, Titus
1988-01-01
Total ionospheric electron contents (TEC) were measured by global positioning system (GPS) dual-frequency receivers developed by the Jet Propulsion Laboratory. The measurements included P-code (precise ranging code) and carrier phase data for six GPS satellites during multiple five-hour observing sessions. A set of these GPS TEC measurements were mapped from the GPS lines of sight to the line of sight of a Faraday beacon satellite by statistically fitting the TEC data to a simple model of the ionosphere. The mapped GPS TEC values were compared with the Faraday rotation measurements. Because GPS transmitter offsets are different for each satellite and because some GPS receiver offsets were uncalibrated, the sums of the satellite and receiver offsets were estimated simultaneously with the TEC in a least squares procedure. The accuracy of this estimation procedure is evaluated, indicating that the error of the GPS-determined line-of-sight TEC can be at or below 1 x 10^16 el/cm^2. Consequently, the current level of accuracy is comparable to the Faraday rotation technique; however, GPS provides superior sky coverage.
Solar radiation and precipitable water modeling for Turkey using artificial neural networks
NASA Astrophysics Data System (ADS)
Şenkal, Ozan
2015-08-01
An artificial neural network (ANN) method was applied for modeling and prediction of mean precipitable water and solar radiation for a given location and date (month), given altitude, temperature, pressure and humidity in Turkey (26-45ºE and 36-42ºN) during the period 2000-2002. Resilient Propagation (RP) learning algorithms and a logistic sigmoid transfer function were used in the network. To train the network, meteorological measurements taken by the Turkish State Meteorological Service (TSMS) and Wyoming University for the period from 2000 to 2002 at five stations distributed across Turkey were used as training data. Data from 2000 and 2001 were used for training, while the year 2002 was used for testing and validating the model. The RP algorithm was first used to determine the precipitable water and subsequently to compute the solar radiation at these stations. The Root Mean Square Error (RMSE) between the estimated and measured monthly mean daily sums of precipitable water and solar radiation was found to be 0.0062 g/cm2 and 0.0603 MJ/m2 (training cities), and 0.5652 g/cm2 and 3.2810 MJ/m2 (testing cities), respectively.
Channel Acquisition for Massive MIMO-OFDM With Adjustable Phase Shift Pilots
NASA Astrophysics Data System (ADS)
You, Li; Gao, Xiqi; Swindlehurst, A. Lee; Zhong, Wen
2016-03-01
We propose adjustable phase shift pilots (APSPs) for channel acquisition in wideband massive multiple-input multiple-output (MIMO) systems employing orthogonal frequency division multiplexing (OFDM) to reduce the pilot overhead. Based on a physically motivated channel model, we first establish a relationship between channel space-frequency correlations and the channel power angle-delay spectrum in the massive antenna array regime, which reveals the channel sparsity in massive MIMO-OFDM. With this channel model, we then investigate channel acquisition, including channel estimation and channel prediction, for massive MIMO-OFDM with APSPs. We show that channel acquisition performance in terms of sum mean square error can be minimized if the user terminals' channel power distributions in the angle-delay domain can be made non-overlapping with proper phase shift scheduling. A simplified pilot phase shift scheduling algorithm is developed based on this optimal channel acquisition condition. The performance of APSPs is investigated for both one symbol and multiple symbol data models. Simulations demonstrate that the proposed APSP approach can provide substantial performance gains in terms of achievable spectral efficiency over the conventional phase shift orthogonal pilot approach in typical mobility scenarios.
Moving-window dynamic optimization: design of stimulation profiles for walking.
Dosen, Strahinja; Popović, Dejan B
2009-05-01
The overall goal of the research is to improve control for electrical stimulation-based assistance of walking in hemiplegic individuals. We present a simulation for generating an offline input (sensors)-output (intensity of muscle stimulation) representation of walking that serves in synthesizing a rule base for control of electrical stimulation for restoration of walking. The simulation uses a new algorithm termed moving-window dynamic optimization (MWDO). The optimization criterion was to minimize the sum of the squares of tracking errors from desired trajectories with a penalty function on the total muscle efforts. The MWDO was developed in the MATLAB environment and tested using target trajectories characteristic of slow-to-normal walking recorded in a healthy individual and a model with parameters characterizing the potential hemiplegic user. The outputs of the simulation are piecewise constant intensities of electrical stimulation and the trajectories generated when the calculated stimulation is applied to the model. We demonstrated the importance of this simulation by showing the outputs for healthy and hemiplegic individuals, using the same target trajectories. Results of the simulation show that the MWDO is an efficient tool for analyzing achievable trajectories and for determining the stimulation profiles that need to be delivered for good tracking.
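A minimal sketch of an MWDO-style optimization criterion, as described above, is given below: the squared tracking error summed over a window plus a penalty on total muscle effort. The array shapes, the quadratic form of the effort penalty, and the weight are assumptions for illustration only.

```python
import numpy as np

def mwdo_cost(joint_traj, target_traj, stim_intensities, effort_weight=0.01):
    """Criterion of the form described in the abstract: squared tracking error
    summed over the window plus a penalty on total muscle effort.
    All arrays and the effort weight are hypothetical placeholders."""
    tracking_error = np.sum((np.asarray(joint_traj) - np.asarray(target_traj)) ** 2)
    effort_penalty = effort_weight * np.sum(np.asarray(stim_intensities) ** 2)
    return tracking_error + effort_penalty

# Toy example: a two-joint trajectory over a short moving window.
target = np.array([[0.10, 0.20], [0.15, 0.25], [0.20, 0.30]])
simulated = target + 0.01
stim = np.array([[20.0, 35.0], [22.0, 33.0], [25.0, 30.0]])  # mA, piecewise constant
print(mwdo_cost(simulated, target, stim))
```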
Experimental cross-correlation nitrogen Q-branch CARS thermometry in a spark ignition engine
NASA Astrophysics Data System (ADS)
Lockett, R. D.; Ball, D.; Robertson, G. N.
2013-07-01
A purely experimental technique was employed to derive temperatures from nitrogen Q-branch Coherent Anti-Stokes Raman Scattering (CARS) spectra, obtained in a high pressure, high temperature environment (spark ignition Otto engine). This was in order to obviate any errors arising from deficiencies in the spectral scaling laws which are commonly used to represent nitrogen Q-branch CARS spectra at high pressure. The spectra obtained in the engine were compared with spectra obtained in a calibrated high pressure, high temperature cell, using direct cross-correlation in place of the minimisation of sums of squares of residuals. The technique is demonstrated through the measurement of air temperature as a function of crankshaft angle inside the cylinder of a motored single-cylinder Ricardo E6 research engine, followed by the measurement of fuel-air mixture temperatures obtained during the compression stroke in a knocking Ricardo E6 engine. A standard CARS programme (SANDIA's CARSFIT) was employed to calibrate the altered non-resonant background contribution to the CARS spectra that was caused by the alteration to the mole fraction of nitrogen in the unburned fuel-air mixture. The compression temperature profiles were extrapolated in order to predict the auto-ignition temperatures.
Kumar, Anup; Guria, Chandan; Chitres, G; Chakraborty, Arunangshu; Pathak, A K
2016-10-01
A comprehensive mathematical model involving NPK-10:26:26 fertilizer, NaCl, NaHCO3, light and temperature operating variables for Dunaliella tertiolecta cultivation is formulated to predict microalgae-biomass and lipid productivity. The proposed model includes Monod/Andrews kinetics for the absorption of essential nutrients into algae-biomass and the Droop model involving internal nutrient cell quota for microalgae growth, assuming that algae-biomass is composed of sugar, functional-pool and neutral-lipid. Biokinetic model parameters are determined by minimizing the residual sum of squared errors between experimental and computed microalgae-biomass and lipid productivity using a genetic algorithm. The developed model is validated against experiments on Dunaliella tertiolecta cultivation using an air-agitated sintered-disk chromatographic glass-bubble column, and the effects of operating variables on microalgae-biomass and lipid productivity are investigated. Finally, a parametric sensitivity analysis is carried out to assess the sensitivity of the obtained results to the model parameters across the input parameter space. The proposed model may be helpful in scale-up studies and in the implementation of model-based control strategies in large-scale algal cultivation. Copyright © 2016 Elsevier Ltd. All rights reserved.
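The Monod/Andrews and Droop kinetics of the model are not reproduced here. The sketch below only illustrates the fitting strategy the abstract describes, minimizing a residual SSE with an evolutionary optimizer; scipy's differential_evolution stands in for the genetic algorithm, and the logistic growth curve and data are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Placeholder growth model (logistic), standing in for the Monod/Andrews/Droop
# kinetics of the paper, which are not reproduced here.
def biomass(t, mu_max, x0, x_max):
    return x_max / (1.0 + (x_max / x0 - 1.0) * np.exp(-mu_max * t))

t_obs = np.array([0, 1, 2, 3, 4, 5, 6, 7], dtype=float)             # days
x_obs = np.array([0.05, 0.09, 0.16, 0.27, 0.42, 0.58, 0.70, 0.78])  # g/L (hypothetical)

def residual_sse(params):
    mu_max, x0, x_max = params
    return np.sum((biomass(t_obs, mu_max, x0, x_max) - x_obs) ** 2)

bounds = [(0.01, 2.0), (0.01, 0.2), (0.5, 2.0)]  # mu_max, x0, x_max
result = differential_evolution(residual_sse, bounds, seed=0)
print("fitted parameters:", result.x, "SSE:", result.fun)
```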
An empirical model for estimating solar radiation in the Algerian Sahara
NASA Astrophysics Data System (ADS)
Benatiallah, Djelloul; Benatiallah, Ali; Bouchouicha, Kada; Hamouda, Messaoud; Nasri, Bahous
2018-05-01
The present work aims to apply the empirical model R.sun to evaluate the solar radiation fluxes on a horizontal plane under clear-sky conditions for the city of Adrar, Algeria (27°18 N, 0°11 W), and to compare them with the measurements taken at the site. The expected results of this comparison are of importance for the investment study of solar systems (solar power plants for electricity production, CSP) and also for the design and performance analysis of any system using solar energy. The statistical indicators used to evaluate the accuracy of the model were the mean bias error (MBE), root mean square error (RMSE) and coefficient of determination. The results show that for global radiation, the daily correlation coefficient is 0.9984. The mean absolute percentage error is 9.44 %. The daily mean bias error is -7.94 %. The daily root mean square error is 12.31 %.
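A minimal sketch of the accuracy indicators reported above (mean bias error, root mean square error, mean absolute percentage error, and correlation coefficient), expressed relative to the measured mean as in the abstract, is given below; the radiation values are hypothetical.

```python
import numpy as np

def model_accuracy(measured, estimated):
    """Common accuracy indicators, expressed relative to the measured mean (%)."""
    measured = np.asarray(measured, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    diff = estimated - measured
    mean_meas = measured.mean()
    mbe = 100.0 * diff.mean() / mean_meas                   # mean bias error, %
    rmse = 100.0 * np.sqrt(np.mean(diff ** 2)) / mean_meas  # root mean square error, %
    mape = 100.0 * np.mean(np.abs(diff / measured))         # mean absolute percentage error, %
    r = np.corrcoef(measured, estimated)[0, 1]              # correlation coefficient
    return {"MBE%": mbe, "RMSE%": rmse, "MAPE%": mape, "r": r}

# Hypothetical daily global radiation values (MJ/m^2).
measured = np.array([18.2, 20.1, 22.5, 24.0, 25.3, 26.1])
estimated = np.array([17.5, 19.8, 23.0, 23.1, 24.6, 26.8])
print(model_accuracy(measured, estimated))
```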
Response Surface Modeling Using Multivariate Orthogonal Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; DeLoach, Richard
2001-01-01
A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one factor at a time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
NASA Astrophysics Data System (ADS)
Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying
2010-04-01
In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. In practice, however, a slight tilt of the reference wave often occurs, and this tilt introduces unexpected errors in the reconstructed object wavefront. Usually the iterative least-squares method, which is time consuming, is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple, effective algorithm is suggested to detect and then correct this kind of error. In this method, only simple mathematical operations are used, avoiding the least-squares equations needed in most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames for both smooth and diffusing objects, and its excellent performance has been verified by computer simulations. The numerical simulations show that the wavefront reconstruction errors can be reduced by 2 orders of magnitude.
Validation of Core Temperature Estimation Algorithm
2016-01-29
plot of observed versus estimated core temperature with the line of identity (dashed) and the least squares regression line (solid) and line equation...estimated PSI with the line of identity (dashed) and the least squares regression line (solid) and line equation in the top left corner. (b) Bland...for comparison. The root mean squared error (RMSE) was also computed, as given by Equation 2.
Eisinga, Rob; Heskes, Tom; Pelzer, Ben; Te Grotenhuis, Manfred
2017-01-25
The Friedman rank sum test is a widely-used nonparametric method in computational biology. In addition to examining the overall null hypothesis of no significant difference among any of the rank sums, it is typically of interest to conduct pairwise comparison tests. Current approaches to such tests rely on large-sample approximations, due to the numerical complexity of computing the exact distribution. These approximate methods lead to inaccurate estimates in the tail of the distribution, which is most relevant for p-value calculation. We propose an efficient, combinatorial exact approach for calculating the probability mass distribution of the rank sum difference statistic for pairwise comparison of Friedman rank sums, and compare exact results with recommended asymptotic approximations. Whereas the chi-squared approximation performs inferiorly to exact computation overall, others, particularly the normal, perform well, except for the extreme tail. Hence exact calculation offers an improvement when small p-values occur following multiple testing correction. Exact inference also enhances the identification of significant differences whenever the observed values are close to the approximate critical value. We illustrate the proposed method in the context of biological machine learning, where Friedman rank sum difference tests are commonly used for the comparison of classifiers over multiple datasets. We provide a computationally fast method to determine the exact p-value of the absolute rank sum difference of a pair of Friedman rank sums, making asymptotic tests obsolete. Calculation of exact p-values is easy to implement in statistical software and the implementation in R is provided in one of the Additional files and is also available at http://www.ru.nl/publish/pages/726696/friedmanrsd.zip .
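The exact combinatorial computation proposed in the paper is not reproduced here. The sketch below only illustrates the conventional large-sample comparison it is benchmarked against: rank sums over blocks and a two-sided normal-approximation p-value for a pairwise rank sum difference, using the standard result (assumed here) that Var(R_i - R_j) = n k (k+1) / 6 under the null hypothesis.

```python
import numpy as np
from scipy.stats import norm, rankdata

def friedman_rank_sums(data):
    """Rank sums per treatment for an (n blocks x k treatments) data matrix."""
    ranks = np.apply_along_axis(rankdata, 1, np.asarray(data, dtype=float))
    return ranks.sum(axis=0)

def pairwise_normal_pvalue(rank_sums, n_blocks, i, j):
    """Two-sided normal-approximation p-value for the rank sum difference
    between treatments i and j, assuming Var(R_i - R_j) = n*k*(k+1)/6 under H0."""
    k = len(rank_sums)
    se = np.sqrt(n_blocks * k * (k + 1) / 6.0)
    z = (rank_sums[i] - rank_sums[j]) / se
    return 2.0 * norm.sf(abs(z))

# Hypothetical error rates of 4 classifiers over 8 datasets (rows = datasets).
rng = np.random.default_rng(2)
data = rng.random((8, 4)) + np.array([0.00, 0.05, 0.10, 0.20])
r = friedman_rank_sums(data)
print("rank sums:", r)
print("classifier 0 vs 3, approx p =", pairwise_normal_pvalue(r, data.shape[0], 0, 3))
```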
Systematic Error Modeling and Bias Estimation
Zhang, Feihu; Knoll, Alois
2016-01-01
This paper analyzes the statistical properties of the systematic error in terms of range and bearing during the transformation process. Furthermore, we rely on a weighted nonlinear least squares method to calculate the biases based on the proposed models. The results show the high performance of the proposed approach for error modeling and bias estimation. PMID:27213386
Modeling Seasonality in Carbon Dioxide Emissions From Fossil Fuel Consumption
NASA Astrophysics Data System (ADS)
Gregg, J. S.; Andres, R. J.
2004-05-01
Using United States data, a method is developed to estimate the monthly consumption of solid, liquid and gaseous fossil fuels using monthly sales data to estimate the relative monthly proportions of the total annual national fossil fuel use. These proportions are then used to estimate the total monthly carbon dioxide emissions for each state. From these data, the goal is to develop mathematical models that describe the seasonal flux in consumption for each type of fuel, as well as the total emissions for the nation. The time series models have two components. First, the general long-term yearly trend is determined with regression models for the annual totals. After removing the general trend, two alternatives are considered for modeling the seasonality. The first alternative uses the mean of the monthly proportions to predict the seasonal distribution. Because the seasonal patterns are fairly consistent in the United States, this is an effective modeling technique. Such regularity, however, may not be present with data from other nations. Therefore, as a second alternative, an ordinary least squares autoregressive model is used. This model is chosen for its ability to accurately describe dependent data and for its predictive capacity. It also has a meaningful interpretation, as each coefficient in the model quantifies the dependency for each corresponding time lag. Most importantly, it is dynamic, and able to adapt to anomalies and changing patterns. The order of the autoregressive model is chosen by the Akaike Information Criterion (AIC), which minimizes the predicted variance for all models of increasing complexity. To model the monthly fuel consumption, the annual trend is combined with the seasonal model. The models for each fuel type are then summed together to predict the total carbon dioxide emissions. The prediction error is estimated with the root mean square error (RMSE) from the actual estimated emission values. Overall, the models perform very well, with relative RMSE less than 10% for all fuel types, and under 5% for the national total emissions. Development of successful models is important to better understand and predict global environmental impacts from fossil fuel consumption.
Photogrammetric Method and Software for Stream Planform Identification
NASA Astrophysics Data System (ADS)
Stonedahl, S. H.; Stonedahl, F.; Lohberg, M. M.; Lusk, K.; Miller, D.
2013-12-01
Accurately characterizing the planform of a stream is important for many purposes, including recording measurement and sampling locations, monitoring change due to erosion or volumetric discharge, and spatial modeling of stream processes. While expensive surveying equipment or high resolution aerial photography can be used to obtain planform data, our research focused on developing a close-range photogrammetric method (and accompanying free/open-source software) to serve as a cost-effective alternative. This method involves securing and floating a wooden square frame on the stream surface at several locations, taking photographs from numerous angles at each location, and then post-processing and merging data from these photos using the corners of the square for reference points, unit scale, and perspective correction. For our test field site we chose a ~35m reach along Black Hawk Creek in Sunderbruch Park (Davenport, IA), a small, slow-moving stream with overhanging trees. To quantify error we measured 88 distances between 30 marked control points along the reach. We calculated error by comparing these 'ground truth' distances to the corresponding distances extracted from our photogrammetric method. We placed the square at three locations along our reach and photographed it from multiple angles. The square corners, visible control points, and visible stream outline were hand-marked in these photos using the GIMP (open-source image editor). We wrote an open-source GUI in Java (hosted on GitHub), which allows the user to load marked-up photos, designate square corners and label control points. The GUI also extracts the marked pixel coordinates from the images. We also wrote several scripts (currently in MATLAB) that correct the pixel coordinates for radial distortion using Brown's lens distortion model, correct for perspective by forcing the four square corner pixels to form a parallelogram in 3-space, and rotate the points in order to correctly orient all photos of the same square location. Planform data from multiple photos (and multiple square locations) are combined using weighting functions that mitigate the error stemming from the markup-process, imperfect camera calibration, etc. We have used our (beta) software to mark and process over 100 photos, yielding an average error of only 1.5% relative to our 88 measured lengths. Next we plan to translate the MATLAB scripts into Python and release their source code, at which point only free software, consumer-grade digital cameras, and inexpensive building materials will be needed for others to replicate this method at new field sites. Three sample photographs of the square with the created planform and control points
Cursor Control Device Test Battery
NASA Technical Reports Server (NTRS)
Holden, Kritina; Sandor, Aniko; Pace, John; Thompson, Shelby
2013-01-01
The test battery was developed to provide a standard procedure for cursor control device evaluation. The software was built in Visual Basic and consists of nine tasks and a main menu that integrates the set-up of the tasks. The tasks can be used individually, or in a series defined in the main menu. Task 1, the Unidirectional Pointing Task, tests the speed and accuracy of clicking on targets. Two rectangles with an adjustable width and adjustable center- to-center distance are presented. The task is to click back and forth between the two rectangles. Clicks outside of the rectangles are recorded as errors. Task 2, Multidirectional Pointing Task, measures speed and accuracy of clicking on targets approached from different angles. Twenty-five numbered squares of adjustable width are arranged around an adjustable diameter circle. The task is to point and click on the numbered squares (placed on opposite sides of the circle) in consecutive order. Clicks outside of the squares are recorded as errors. Task 3, Unidirectional (horizontal) Dragging Task, is similar to dragging a file into a folder on a computer desktop. Task 3 requires dragging a square of adjustable width from one rectangle and dropping it into another. The width of each rectangle is adjustable, as well as the distance between the two rectangles. Dropping the square outside of the rectangles is recorded as an error. Task 4, Unidirectional Path Following, is similar to Task 3. The task is to drag a square through a tunnel consisting of two lines. The size of the square and the width of the tunnel are adjustable. If the square touches any of the lines, it is counted as an error and the task is restarted. Task 5, Text Selection, involves clicking on a Start button, and then moving directly to the underlined portion of the displayed text and highlighting it. The pointing distance to the text is adjustable, as well as the to-be-selected font size and the underlined character length. If the selection does not include all of the underlined characters, or includes non-underlined characters, it is recorded as an error. Task 6, Multi-size and Multi-distance Pointing, presents the participant with 24 consecutively numbered buttons of different sizes (63 to 163 pixels), and at different distances (60 to 80 pixels) from the Start button. The task is to click on the Start button, and then move directly to, and click on, each numbered target button in consecutive order. Clicks outside of the target area are errors. Task 7, Standard Interface Elements Task, involves interacting with standard interface elements as instructed in written procedures, including: drop-down menus, sliders, text boxes, radio buttons, and check boxes. Task completion time is recorded. In Task 8, a circular track is presented with a disc in it at the top. Track width and disc size are adjustable. The task is to move the disc with circular motion within the path without touching the boundaries of the track. Time and errors are recorded. Task 9 is a discrete task that allows evaluation of discrete cursor control devices that tab from target to target, such as a castle switch. The task is to follow a predefined path and to click on the yellow targets along the path.
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
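A minimal Monte Carlo sketch of the comparison described above is given below: the posterior-mean (Bayes) estimator of the Poisson intensity under a gamma prior versus the maximum likelihood estimator, with empirical mean-squared errors. The true intensity, prior hyper-parameters, and sample sizes are hypothetical choices, not values from the paper.

```python
import numpy as np

# Monte Carlo comparison of the Bayes estimator (gamma prior) with the
# maximum likelihood estimator for the Poisson intensity.
rng = np.random.default_rng(3)

lam_true = 2.5           # true Poisson intensity (hypothetical)
n = 10                   # observations per simulated experiment
alpha, beta = 2.0, 1.0   # gamma(shape, rate) prior hyper-parameters (hypothetical)
n_sim = 20000

samples = rng.poisson(lam_true, size=(n_sim, n))
sums = samples.sum(axis=1)

mle = sums / n                        # maximum likelihood / minimum variance unbiased estimator
bayes = (alpha + sums) / (beta + n)   # posterior mean under the gamma prior

mse_mle = np.mean((mle - lam_true) ** 2)
mse_bayes = np.mean((bayes - lam_true) ** 2)
print(f"MSE(MLE) = {mse_mle:.4f}, MSE(Bayes) = {mse_bayes:.4f}")
```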
Insights into the Earth System mass variability from CSR-RL05 GRACE gravity fields
NASA Astrophysics Data System (ADS)
Bettadpur, S.
2012-04-01
The next-generation Release-05 GRACE gravity field data products are the result of extensive effort applied to the improvements to the GRACE Level-1 (tracking) data products, and to improvements in the background gravity models and processing methodology. As a result, the squared-error upper-bound in RL05 fields is half or less than the squared-error upper-bound in RL04 fields. The CSR-RL05 field release consists of unconstrained gravity fields as well as a regularized gravity field time-series that can be used for several applications without any post-processing error reduction. This paper will describe the background and the nature of these improvements in the data products, and provide an error characterization. We will describe the insights these new series offer in measuring the mass flux due to diverse Hydrologic, Oceanographic and Cryospheric processes.
Retrieval of the aerosol optical thickness from UV global irradiance measurements
NASA Astrophysics Data System (ADS)
Costa, M. J.; Salgueiro, V.; Bortoli, D.; Obregón, M. A.; Antón, M.; Silva, A. M.
2015-12-01
The UV irradiance has been measured at Évora for several years; a CIMEL sunphotometer integrated in AERONET is also installed there. In the present work, measurements of UVA (315-400 nm) irradiances taken with Kipp&Zonen radiometers, as well as satellite data of ozone total column values, are used in combination with radiative transfer calculations to estimate the aerosol optical thickness (AOT) in the UV. The retrieved UV AOT in Évora is compared with the AERONET AOT (at 340 and 380 nm) and fairly good agreement is found, with a root mean square error of 0.05 (normalized root mean square error of 8.3%) and a mean absolute error of 0.04 (mean percentage error of 2.9%). The methodology is then used to estimate the UV AOT in Sines, an industrialized site on the Atlantic western coast, where the UV irradiance has been monitored since 2013 but no aerosol information is available.
Uncertainty based pressure reconstruction from velocity measurement with generalized least squares
NASA Astrophysics Data System (ADS)
Zhang, Jiacheng; Scalo, Carlo; Vlachos, Pavlos
2017-11-01
A method for generalized least squares reconstruction of the instantaneous pressure field from velocity measurements and velocity uncertainty is introduced and applied to both planar and volumetric flow data. Pressure gradients are computed on a staggered grid from the flow acceleration. The variance-covariance matrix of the pressure gradients is evaluated from the velocity uncertainty by approximating the pressure gradient error as a linear combination of velocity errors. An overdetermined system of linear equations which relates the pressure and the computed pressure gradients is formulated and then solved using generalized least squares with the variance-covariance matrix of the pressure gradients. By comparing the reconstructed pressure field against other methods, such as solving the pressure Poisson equation, omni-directional integration, and ordinary least squares reconstruction, the generalized least squares method is found to be more robust to noise in the velocity measurement. The improvement in the pressure result becomes more pronounced when the velocity measurement becomes less accurate and more heteroscedastic. The uncertainty of the reconstructed pressure field is also quantified and compared across the different methods.
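The staggered-grid formulation and the construction of the pressure-gradient covariance are not reproduced here. The sketch below only illustrates the core generalized least squares solve, x = (A^T W A)^(-1) A^T W b with W the inverse of the error variance-covariance matrix, on a toy one-dimensional gradient-integration problem with hypothetical heteroscedastic uncertainties.

```python
import numpy as np

def generalized_least_squares(A, b, cov_b):
    """Solve the overdetermined system A x ~= b, weighting by the inverse of
    the error variance-covariance matrix of b (generalized least squares)."""
    W = np.linalg.inv(cov_b)   # weight matrix
    lhs = A.T @ W @ A
    rhs = A.T @ W @ b
    return np.linalg.solve(lhs, rhs)

# Toy illustration: recover a 1-D 'pressure' profile from noisy gradient data.
rng = np.random.default_rng(4)
n = 6
p_true = np.cumsum(rng.normal(0, 1, n))   # hypothetical pressure at n nodes
D = np.zeros((n - 1, n))                  # finite-difference gradient operator
for i in range(n - 1):
    D[i, i], D[i, i + 1] = -1.0, 1.0
A = np.vstack([D, np.ones((1, n)) / n])   # append a mean constraint to fix the datum
sigma = rng.uniform(0.05, 0.3, n - 1)     # heteroscedastic gradient uncertainties
b = np.concatenate([D @ p_true + rng.normal(0, sigma), [p_true.mean()]])
cov_b = np.diag(np.concatenate([sigma ** 2, [1e-6]]))
p_hat = generalized_least_squares(A, b, cov_b)
print("max abs reconstruction error:", np.max(np.abs(p_hat - p_true)))
```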
On Pell, Pell-Lucas, and balancing numbers.
Karadeniz Gözeri, Gül
2018-01-01
In this paper, we derive some identities on Pell, Pell-Lucas, and balancing numbers and the relationships between them. We also deduce some formulas on the sums, divisibility properties, perfect squares, Pythagorean triples involving these numbers. Moreover, we obtain the set of positive integer solutions of some specific Pell equations in terms of the integer sequences mentioned in the text.
Optimal Partitioning of a Data Set Based on the "p"-Median Model
ERIC Educational Resources Information Center
Brusco, Michael J.; Kohn, Hans-Friedrich
2008-01-01
Although the "K"-means algorithm for minimizing the within-cluster sums of squared deviations from cluster centroids is perhaps the most common method for applied cluster analyses, a variety of other criteria are available. The "p"-median model is an especially well-studied clustering problem that requires the selection of "p" objects to serve as…
Monkeys Match and Tally Quantities across Senses
ERIC Educational Resources Information Center
Jordan, Kerry E.; MacLean, Evan L.; Brannon, Elizabeth M.
2008-01-01
We report here that monkeys can actively match the number of sounds they hear to the number of shapes they see and present the first evidence that monkeys sum over sounds and sights. In Experiment 1, two monkeys were trained to choose a simultaneous array of 1-9 squares that numerically matched a sample sequence of shapes or sounds. Monkeys…
An Empirical State Error Covariance Matrix for Batch State Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless as to the source of the uncertainty. Also, in its most straight forward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problems, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two observer, triangulation problem with range only measurements. 
Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
NASA Astrophysics Data System (ADS)
Endelt, B.
2017-09-01
Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually over time. Thus, an in-process feedback control scheme might not be necessary to stabilize the process, and an alternative approach is to apply an iterative learning algorithm that can learn from previously produced parts, i.e. a self-learning system which gradually reduces error based on historical process information. What is proposed in the paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-squares error between the current flange geometry and a reference geometry using a non-linear least-squares algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet’08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
Online measurement of urea concentration in spent dialysate during hemodialysis.
Olesberg, Jonathon T; Arnold, Mark A; Flanigan, Michael J
2004-01-01
We describe online optical measurements of urea in the effluent dialysate line during regular hemodialysis treatment of several patients. Monitoring urea removal can provide valuable information about dialysis efficiency. Spectral measurements were performed with a Fourier-transform infrared spectrometer equipped with a flow-through cell. Spectra were recorded across the 5000-4000 cm(-1) (2.0-2.5 microm) wavelength range at 1-min intervals. Savitzky-Golay filtering was used to remove baseline variations attributable to the temperature dependence of the water absorption spectrum. Urea concentrations were extracted from the filtered spectra by use of partial least-squares regression and the net analyte signal of urea. Urea concentrations predicted by partial least-squares regression matched concentrations obtained from standard chemical assays with a root mean square error of 0.30 mmol/L (0.84 mg/dL urea nitrogen) over an observed concentration range of 0-11 mmol/L. The root mean square error obtained with the net analyte signal of urea was 0.43 mmol/L with a calibration based only on a set of pure-component spectra. The error decreased to 0.23 mmol/L when a slope and offset correction were used. Urea concentrations can be continuously monitored during hemodialysis by near-infrared spectroscopy. Calibrations based on the net analyte signal of urea are particularly appealing because they do not require a training step, as do statistical multivariate calibration procedures such as partial least-squares regression.
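A minimal sketch of the partial least-squares calibration and RMSE evaluation described above is given below, using scikit-learn's PLSRegression on synthetic two-band "spectra"; the band shapes, concentration ranges, noise level, and number of latent variables are hypothetical, not taken from the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic 'spectra': a urea-like absorption band plus an interferent and noise.
rng = np.random.default_rng(5)
wavenumbers = np.linspace(4000, 5000, 200)
urea_band = np.exp(-0.5 * ((wavenumbers - 4550) / 60.0) ** 2)
interferent = np.exp(-0.5 * ((wavenumbers - 4300) / 120.0) ** 2)

def make_spectra(n):
    c_urea = rng.uniform(0, 11, n)   # mmol/L, matching the reported concentration range
    c_int = rng.uniform(0, 5, n)
    X = (np.outer(c_urea, urea_band) + np.outer(c_int, interferent)
         + rng.normal(0, 0.02, (n, wavenumbers.size)))
    return X, c_urea

X_train, y_train = make_spectra(80)
X_test, y_test = make_spectra(40)

pls = PLSRegression(n_components=4)
pls.fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()
rmse = np.sqrt(np.mean((y_pred - y_test) ** 2))
print(f"RMSE of prediction: {rmse:.3f} mmol/L")
```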
Spainhour, John Christian G; Janech, Michael G; Schwacke, John H; Velez, Juan Carlos Q; Ramakrishnan, Viswanathan
2014-01-01
Matrix assisted laser desorption/ionization time-of-flight (MALDI-TOF) coupled with stable isotope standards (SIS) has been used to quantify native peptides. This peptide quantification by MALDI-TOF approach has difficulties quantifying samples containing peptides with ion currents in overlapping spectra. In these overlapping spectra the currents sum together, which modify the peak heights and make normal SIS estimation problematic. An approach using Gaussian mixtures based on known physical constants to model the isotopic cluster of a known compound is proposed here. The characteristics of this approach are examined for single and overlapping compounds. The approach is compared to two commonly used SIS quantification methods for single compound, namely Peak Intensity method and Riemann sum area under the curve (AUC) method. For studying the characteristics of the Gaussian mixture method, Angiotensin II, Angiotensin-2-10, and Angiotenisn-1-9 and their associated SIS peptides were used. The findings suggest, Gaussian mixture method has similar characteristics as the two methods compared for estimating the quantity of isolated isotopic clusters for single compounds. All three methods were tested using MALDI-TOF mass spectra collected for peptides of the renin-angiotensin system. The Gaussian mixture method accurately estimated the native to labeled ratio of several isolated angiotensin peptides (5.2% error in ratio estimation) with similar estimation errors to those calculated using peak intensity and Riemann sum AUC methods (5.9% and 7.7%, respectively). For overlapping angiotensin peptides, (where the other two methods are not applicable) the estimation error of the Gaussian mixture was 6.8%, which is within the acceptable range. In summary, for single compounds the Gaussian mixture method is equivalent or marginally superior compared to the existing methods of peptide quantification and is capable of quantifying overlapping (convolved) peptides within the acceptable margin of error.
Criterion Predictability: Identifying Differences Between [r-squares
ERIC Educational Resources Information Center
Malgady, Robert G.
1976-01-01
An analysis of variance procedure for testing differences in r-squared, the coefficient of determination, across independent samples is proposed and briefly discussed. The principal advantage of the procedure is to minimize Type I error for follow-up tests of pairwise differences. (Author/JKS)
Kalvāns, Andis; Bitāne, Māra; Kalvāne, Gunta
2015-02-01
A historical phenological record and meteorological data for the period 1960-2009 are used to analyse the ability of seven phenological models to predict leaf unfolding and the beginning of flowering for two tree species, silver birch (Betula pendula) and bird cherry (Padus racemosa), in Latvia. Model stability is estimated by performing multiple model fitting runs using half of the data for model training and the other half for evaluation. Correlation coefficient, mean absolute error and mean squared error are used to evaluate model performance. UniChill (a model using a sigmoidal relationship between development rate and temperature and taking into account the necessity for dormancy release) and DDcos (a simple degree-day model considering the diurnal temperature fluctuations) are found to be the best models for describing the considered spring phases. A strong collinearity between base temperature and required heat sum is found for several model fitting runs of the simple degree-day based models. Large variation of the model parameters between different model fitting runs in the case of the more complex models indicates similar collinearity and over-parameterization of these models. It is suggested that model performance can be improved by incorporating the resolved daily temperature fluctuations of the DDcos model into the framework of the more complex models (e.g. UniChill). The average base temperature, as found by the DDcos model, for B. pendula leaf unfolding is 5.6 °C and for the start of flowering 6.7 °C; for P. racemosa, the respective base temperatures are 3.2 °C and 3.4 °C.
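A minimal sketch of the degree-day logic underlying DDcos-type models is given below: daily forcing above a base temperature is accumulated, and the phase is predicted on the day the required heat sum is reached. The 5.6 °C base temperature matches the value reported for B. pendula leaf unfolding, but the required heat sum and the temperature series are illustrative assumptions.

```python
import numpy as np

def degree_day_onset(daily_mean_temp, base_temp, required_heat_sum):
    """Day index (0-based from the start of the series) on which the
    accumulated heat sum above `base_temp` first reaches `required_heat_sum`.
    Returns None if the requirement is never met."""
    forcing = np.maximum(np.asarray(daily_mean_temp, dtype=float) - base_temp, 0.0)
    cumulative = np.cumsum(forcing)
    reached = np.nonzero(cumulative >= required_heat_sum)[0]
    return int(reached[0]) if reached.size else None

# Hypothetical spring temperature series (daily means, deg C, from 1 March).
rng = np.random.default_rng(6)
temps = np.linspace(-2, 15, 92) + rng.normal(0, 2, 92)

# Base temperature of 5.6 deg C as reported for B. pendula leaf unfolding;
# the required heat sum of 90 degree-days is an illustrative assumption.
print("predicted leaf unfolding on day", degree_day_onset(temps, 5.6, 90.0))
```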
Buonaccorsi, Giovanni A; Roberts, Caleb; Cheung, Sue; Watson, Yvonne; O'Connor, James P B; Davies, Karen; Jackson, Alan; Jayson, Gordon C; Parker, Geoff J M
2006-09-01
The quantitative analysis of dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) data is subject to model fitting errors caused by motion during the time-series data acquisition. However, the time-varying features that occur as a result of contrast enhancement can confound motion correction techniques based on conventional registration similarity measures. We have therefore developed a heuristic, locally controlled tracer kinetic model-driven registration procedure, in which the model accounts for contrast enhancement, and applied it to the registration of abdominal DCE-MRI data at high temporal resolution. Using severely motion-corrupted data sets that had been excluded from analysis in a clinical trial of an antiangiogenic agent, we compared the results obtained when using different models to drive the tracer kinetic model-driven registration with those obtained when using a conventional registration against the time series mean image volume. Using tracer kinetic model-driven registration, it was possible to improve model fitting by reducing the sum of squared errors but the improvement was only realized when using a model that adequately described the features of the time series data. The registration against the time series mean significantly distorted the time series data, as did tracer kinetic model-driven registration using a simpler model of contrast enhancement. When an appropriate model is used, tracer kinetic model-driven registration influences motion-corrupted model fit parameter estimates and provides significant improvements in localization in three-dimensional parameter maps. This has positive implications for the use of quantitative DCE-MRI for example in clinical trials of antiangiogenic or antivascular agents.
Xiang, Yongqing; Yakushin, Sergei B; Cohen, Bernard; Raphan, Theodore
2006-12-01
A neural network model was developed to explain the gravity-dependent properties of gain adaptation of the angular vestibuloocular reflex (aVOR). Gain changes are maximal at the head orientation where the gain is adapted and decrease as the head is tilted away from that position and can be described by the sum of gravity-independent and gravity-dependent components. The adaptation process was modeled by modifying the weights and bias values of a three-dimensional physiologically based neural network of canal-otolith-convergent neurons that drive the aVOR. Model parameters were trained using experimental vertical aVOR gain values. The learning rule aimed to reduce the error between eye velocities obtained from experimental gain values and model output in the position of adaptation. Although the model was trained only at specific head positions, the model predicted the experimental data at all head positions in three dimensions. Altering the relative learning rates of the weights and bias improved the model-data fits. Model predictions in three dimensions compared favorably with those of a double-sinusoid function, which is a fit that minimized the mean square error at every head position and served as the standard by which we compared the model predictions. The model supports the hypothesis that gravity-dependent adaptation of the aVOR is realized in three dimensions by a direct otolith input to canal-otolith neurons, whose canal sensitivities are adapted by the visual-vestibular mismatch. The adaptation is tuned by how the weights from otolith input to the canal-otolith-convergent neurons are adapted for a given head orientation.
Diaphragm motion quantification in megavoltage cone-beam CT projection images.
Chen, Mingqing; Siochi, R Alfredo
2010-05-01
To quantify diaphragm motion in megavoltage (MV) cone-beam computed tomography (CBCT) projections. User identified ipsilateral hemidiaphragm apex (IHDA) positions in two full exhale and inhale frames were used to create bounding rectangles in all other frames of a CBCT scan. The bounding rectangle was enlarged to create a region of interest (ROI). ROI pixels were associated with a cost function: The product of image gradients and a gradient direction matching function for an ideal hemidiaphragm determined from 40 training sets. A dynamic Hough transform (DHT) models a hemidiaphragm as a contour made of two parabola segments with a common vertex (the IHDA). The images within the ROIs are transformed into Hough space where a contour's Hough value is the sum of the cost function over all contour pixels. Dynamic programming finds the optimal trajectory of the common vertex in Hough space subject to motion constraints between frames, and an active contour model further refines the result. Interpolated ray tracing converts the positions to room coordinates. Root-mean-square (RMS) distances between these positions and those resulting from an expert's identification of the IHDA were determined for 21 Siemens MV CBCT scans. Computation time on a 2.66 GHz CPU was 30 s. The average craniocaudal RMS error was 1.38 +/- 0.67 mm. While much larger errors occurred in a few near-sagittal frames on one patient's scans, adjustments to algorithm constraints corrected them. The DHT based algorithm can compute IHDA trajectories immediately prior to radiation therapy on a daily basis using localization MVCBCT projection data. This has potential for calibrating external motion surrogates against diaphragm motion.
K, Anbarasi; K, Kasim Mohamed; Vijayaraghavan, Phagalvarthy; Kandaswamy, Deivanayagam
2016-12-01
To design and implement flipped clinical training for undergraduate dental students in removable complete denture treatment and to predict its effectiveness by comparing the assessment results of students trained by flipped and traditional methods. Flipped training was designed by shifting the learning from the clinics to the learning center (phase I) and by preserving the practice in the clinics (phase II). In phase I, a student-faculty interactive session was arranged to recap prior knowledge. This was followed by a display of an audio-synchronized video demonstration of the procedure in a repeatable way and a subsequent display of possible errors that may occur in treatment, with guidelines to overcome such errors. In phase II, a live demonstration of the procedure was given. Students were asked to treat three patients under an instructor's supervision. The summative assessment was conducted by applying the same checklist criterion and rubric scoring used for the traditional method. Assessment results of three batches of students trained by the flipped method (study group) and three traditionally trained previous batches (control group) were taken for comparison by chi-square test. Comparing the numbers of traditionally trained students who prepared acceptable dentures (scores 2 and 3) and unacceptable dentures (score 1) with those of the flipped-trained students revealed that the number of students who demonstrated competency by preparing acceptable dentures was higher for flipped training (χ2=30.996, p<0.001). The results reveal the superiority of flipped training in enhancing students' competency, and it is hence recommended for training in various clinical procedures.
Speedup computation of HD-sEMG signals using a motor unit-specific electrical source model.
Carriou, Vincent; Boudaoud, Sofiane; Laforet, Jeremy
2018-01-23
Nowadays, bio-reliable modeling of muscle contraction is becoming more accurate and complex. This increasing complexity induces a significant increase in computation time which prevents the possibility of using this model in certain applications and studies. Accordingly, the aim of this work is to significantly reduce the computation time of high-density surface electromyogram (HD-sEMG) generation. This will be done through a new model of motor unit (MU)-specific electrical source based on the fibers composing the MU. In order to assess the efficiency of this approach, we computed the normalized root mean square error (NRMSE) between several simulations on single generated MU action potential (MUAP) using the usual fiber electrical sources and the MU-specific electrical source. This NRMSE was computed for five different simulation sets wherein hundreds of MUAPs are generated and summed into HD-sEMG signals. The obtained results display less than 2% error on the generated signals compared to the same signals generated with fiber electrical sources. Moreover, the computation time of the HD-sEMG signal generation model is reduced to about 90% compared to the fiber electrical source model. Using this model with MU electrical sources, we can simulate HD-sEMG signals of a physiological muscle (hundreds of MU) in less than an hour on a classical workstation. Graphical Abstract Overview of the simulation of HD-sEMG signals using the fiber scale and the MU scale. Upscaling the electrical source to the MU scale reduces the computation time by 90% inducing only small deviation of the same simulated HD-sEMG signals.
Performance of the Generalized S-X[squared] Item Fit Index for the Graded Response Model
ERIC Educational Resources Information Center
Kang, Taehoon; Chen, Troy T.
2011-01-01
The utility of Orlando and Thissen's ("2000", "2003") S-X[squared] fit index was extended to the model-fit analysis of the graded response model (GRM). The performance of a modified S-X[squared] in assessing item-fit of the GRM was investigated in light of empirical Type I error rates and power with a simulation study having…
A longitudinal study of low back pain and daily vibration exposure in professional drivers.
Bovenzi, Massimo
2010-01-01
The aim of this study was to investigate the relation between low back pain (LBP) outcomes and measures of daily exposure to whole-body vibration (WBV) in professional drivers. In a study population of 202 male drivers, who were not affected with LBP at the initial survey, LBP in terms of duration, intensity, and disability was investigated over a two-year follow-up period. Vibration measurements were made on representative samples of machines and vehicles. The following measures of daily WBV exposure were obtained: (i) 8-h energy-equivalent frequency-weighted acceleration (highest axis), A(8)(max) in ms(-2) r.m.s.; (ii) A(8)(sum) (root-sum-of-squares) in ms(-2) r.m.s.; (iii) Vibration Dose Value (highest axis), VDV(max) in ms(-1.75); (iv) VDV(sum) (root-sum-of-quads) in ms(-1.75). The cumulative incidence of LBP over the follow-up period was 38.6%. The incidence of high pain intensity and severe disability was 16.8 and 14.4%, respectively. After adjustment for several confounders, VDV(max) or VDV(sum) gave better predictions of LBP outcomes over time than A(8)(max) or A(8)(sum), respectively. Poor predictions were obtained with A(8)(max), which is the currently preferred measure of daily WBV exposure in European countries. In multivariate data analysis, physical work load was a significant predictor of LBP outcomes over the follow-up period. Perceived psychosocial work environment was not associated with LBP.
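A minimal sketch of the two axis-combined exposure measures named above, assuming the standard combinations implied by "root-sum-of-squares" and "root-sum-of-quads" over the three frequency-weighted axes (axis multiplying factors and measurement details are omitted):

```python
import numpy as np

def a8_sum(a8_x, a8_y, a8_z):
    """Root-sum-of-squares combination of per-axis A(8) values (m/s^2 r.m.s.)."""
    return np.sqrt(a8_x**2 + a8_y**2 + a8_z**2)

def vdv_sum(vdv_x, vdv_y, vdv_z):
    """Root-sum-of-quads (fourth root of the sum of fourth powers) of per-axis VDVs (m/s^1.75)."""
    return (vdv_x**4 + vdv_y**4 + vdv_z**4) ** 0.25

# Hypothetical per-axis values for one driver-machine combination.
print(a8_sum(0.45, 0.38, 0.62))   # A(8)(sum)
print(vdv_sum(8.1, 7.3, 11.0))    # VDV(sum)
```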
NASA Astrophysics Data System (ADS)
Yehia, Ali M.; Mohamed, Heba M.
2016-01-01
Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS), and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated, and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA), and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration, and standard error of prediction. The four multivariate calibration methods could be used directly without any preliminary separation step and were successfully applied to pharmaceutical formulation analysis, showing no interference from excipients.
NASA Technical Reports Server (NTRS)
Rice, R. F.
1976-01-01
The root-mean-square error performance measure is used to compare the relative performance of several widely known source coding algorithms with the RM2 image data compression system. The results demonstrate that RM2 has a uniformly significant performance advantage.
Teng, C-C; Chai, H; Lai, D-M; Wang, S-F
2007-02-01
Previous research has shown that there is no significant relationship between the degree of structural degeneration of the cervical spine and neck pain. We therefore sought to investigate the potential role of sensory dysfunction in chronic neck pain. Cervicocephalic kinesthetic sensibility, expressed by how accurately an individual can reposition the head, was studied in three groups of individuals, a control group of 20 asymptomatic young adults and two groups of middle-aged adults (20 subjects in each group) with or without a history of mild neck pain. An ultrasound-based three-dimensional coordinate measuring system was used to measure the position of the head and to test the accuracy of repositioning. Constant error (indicating that the subject overshot or undershot the intended position) and root mean square errors (representing total errors of accuracy and variability) were measured during repositioning of the head to the neutral head position (Head-to-NHP) and repositioning of the head to the target (Head-to-Target) in three cardinal planes (sagittal, transverse, and frontal). Analysis of covariance (ANCOVA) was used to test the group effect, with age used as a covariate. The constant errors during repositioning from a flexed position and from an extended position to the NHP were significantly greater in the middle-aged subjects than in the control group (beta=0.30 and beta=0.60, respectively; P<0.05 for both). In addition, the root mean square errors during repositioning from a flexed or extended position to the NHP were greater in the middle-aged subjects than in the control group (beta=0.27 and beta=0.49, respectively; P<0.05 for both). The root mean square errors also increased during Head-to-Target in left rotation (beta=0.24;P<0.05), but there was no difference in the constant errors or root mean square errors during Head-to-NHP repositioning from other target positions (P>0.05). The results indicate that, after controlling for age as a covariate, there was no group effect. Thus, age appears to have a profound effect on an individual's ability to accurately reposition the head toward the neutral position in the sagittal plane and repositioning the head toward left rotation. A history of mild chronic neck pain alone had no significant effect on cervicocephalic kinesthetic sensibility.
What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2013-01-01
This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…
Quantitative Modelling of Trace Elements in Hard Coal.
Smoliński, Adam; Howaniec, Natalia
2016-01-01
The significance of coal in the world economy has remained unquestionable for decades, and coal is expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models constructed. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools for coal quality assessment.
NASA Astrophysics Data System (ADS)
Schaffrin, Burkhard; Felus, Yaron A.
2008-06-01
The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model ( Y- E Y = ( X- E X ) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E Y and E X . Two special cases of the MTLS approach include the standard multivariate least-squares approach where only the observation matrix, Y, is perturbed by random errors and, on the other hand, the data least-squares approach where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
NASA Astrophysics Data System (ADS)
Liu, Xing-fa; Cen, Ming
2007-12-01
The neural network system error correction method is more precise than the least-squares and spherical harmonics function system error correction methods. The accuracy of the neural network method depends mainly on the architecture of the network. Analysis and simulation show that both the BP neural network and the RBF neural network system error correction methods achieve high correction accuracy; for small training sample sets, the RBF network method is preferable to the BP network method when training speed and network scale are taken into account.
The Plastic Zone and Residual Stress near a Notch and a Fatigue Crack in HSLA Steel.
1981-12-16
...the first entry) the agreement with theory is poor. Fine et al. (21) have noted that agreement is good if the stress for zero hysteresis in incremental... [equations (5)-(7), including an expression for Deff obtained by algebraic manipulation of Eqn. (5), are garbled in the source scan]. Actually, the square root of the sum of the squares of a for the reference and broadened profiles was employed. Such automation...
Bär, David; Debus, Heiko; Brzenczek, Sina; Fischer, Wolfgang; Imming, Peter
2018-03-20
Near-infrared spectroscopy is frequently used by the pharmaceutical industry to monitor and optimize several production processes. In combination with chemometrics, a mathematical-statistical technique, the following advantages of near-infrared spectroscopy can be exploited: it is a fast, non-destructive, non-invasive, and economical analytical method. One of the most advanced and popular chemometric techniques is the partial least squares algorithm, owing to its suitability for routine use and the quality of its results. Together with the required reference analytics, it enables the analysis of various parameters of interest, for example, moisture content, particle size, and many others. Parameters such as the correlation coefficient, root mean square error of prediction, root mean square error of calibration, and root mean square error of validation have been used to evaluate the applicability and robustness of the analytical methods developed. This study deals with investigating a Naproxen Sodium granulation process using near-infrared spectroscopy and with the development of water content and particle-size methods. For the water content method, a maximum water content of about 21% in the granulation process should be considered, which must be confirmed by the loss on drying. Further influences to be considered are the constantly changing product temperature, rising to about 54 °C, the formation of hydrated states of Naproxen Sodium when using a maximum of about 21% water content, and the large quantity of about 87% Naproxen Sodium in the formulation. A combination of these influences was taken into account in developing the near-infrared spectroscopy method for the water content of Naproxen Sodium granules. The root mean square error was 0.25% for the calibration dataset and 0.30% for the validation dataset, obtained after several stages of optimization using multiplicative scatter correction and the first derivative. Using laser diffraction, the granules were analyzed for particle size, yielding the cumulative sieve fractions >63 μm and >100 μm. The following influences should be considered for application in routine production: constant changes in water content up to 21% and a product temperature up to 54 °C. For the near-infrared spectroscopy method for particle sizes >63 μm, the different stages of optimization resulted in a root mean square error of 2.54% for the calibration data set and 3.53% for the validation set, using the Kubelka-Munk conversion and the first derivative. For the near-infrared spectroscopy method for particle sizes >100 μm, the root mean square error was 3.47% for the calibration data set and 4.51% for the validation set, using the same pre-treatments. The robustness and suitability of this methodology have already been demonstrated by its recent successful implementation in a routine granulate production process. Copyright © 2018 Elsevier B.V. All rights reserved.
The Influence of Dimensionality on Estimation in the Partial Credit Model.
ERIC Educational Resources Information Center
De Ayala, R. J.
1995-01-01
The effect of multidimensionality on partial credit model parameter estimation was studied with noncompensatory and compensatory data. Analysis results, consisting of root mean square error bias, Pearson product-moment correlations, standardized root mean squared differences, standardized differences between means, and descriptive statistics…
Standardized unfold mapping: a technique to permit left atrial regional data display and analysis.
Williams, Steven E; Tobon-Gomez, Catalina; Zuluaga, Maria A; Chubb, Henry; Butakoff, Constantine; Karim, Rashed; Ahmed, Elena; Camara, Oscar; Rhode, Kawal S
2017-10-01
Left atrial arrhythmia substrate assessment can involve multiple imaging and electrical modalities, but visual analysis of data on 3D surfaces is time-consuming and suffers from limited reproducibility. Unfold maps (e.g., the left ventricular bull's eye plot) allow 2D visualization, facilitate multimodal data representation, and provide a common reference space for inter-subject comparison. The aim of this work is to develop a method for automatic representation of multimodal information on a left atrial standardized unfold map (LA-SUM). The LA-SUM technique was developed and validated using 18 electroanatomic mapping (EAM) LA geometries before being applied to ten cardiac magnetic resonance/EAM paired geometries. The LA-SUM was defined as an unfold template of an average LA mesh, and registration of clinical data to this mesh facilitated creation of new LA-SUMs by surface parameterization. The LA-SUM represents 24 LA regions on a flattened surface. Intra-observer variability of LA-SUMs for both EAM and CMR datasets was minimal; root-mean square difference of 0.008 ± 0.010 and 0.007 ± 0.005 ms (local activation time maps), 0.068 ± 0.063 gs (force-time integral maps), and 0.031 ± 0.026 (CMR LGE signal intensity maps). Following validation, LA-SUMs were used for automatic quantification of post-ablation scar formation using CMR imaging, demonstrating a weak but significant relationship between ablation force-time integral and scar coverage (R 2 = 0.18, P < 0.0001). The proposed LA-SUM displays an integrated unfold map for multimodal information. The method is applicable to any LA surface, including those derived from imaging and EAM systems. The LA-SUM would facilitate standardization of future research studies involving segmental analysis of the LA.
Voss, Frank D.; Curran, Christopher A.; Mastin, Mark C.
2008-01-01
A mechanistic water-temperature model was constructed by the U.S. Geological Survey for use by the Bureau of Reclamation for studying the effect of potential water management decisions on water temperature in the Yakima River between Roza and Prosser, Washington. Flow and water temperature data for model input were obtained from the Bureau of Reclamation Hydromet database and from measurements collected by the U.S. Geological Survey during field trips in autumn 2005. Shading data for the model were collected by the U.S. Geological Survey in autumn 2006. The model was calibrated with data collected from April 1 through October 31, 2005, and tested with data collected from April 1 through October 31, 2006. Sensitivity analysis results showed that for the parameters tested, daily maximum water temperature was most sensitive to changes in air temperature and solar radiation. Root mean squared error for the five sites used for model calibration ranged from 1.3 to 1.9 degrees Celsius (°C) and mean error ranged from −1.3 to 1.6°C. The root mean squared error for the five sites used for testing simulation ranged from 1.6 to 2.2°C and mean error ranged from 0.1 to 1.3°C. The accuracy of the stream temperatures estimated by the model is limited by four errors (model error, data error, parameter error, and user error).
Novel search algorithms for a mid-infrared spectral library of cotton contaminants.
Loudermilk, J Brian; Himmelsbach, David S; Barton, Franklin E; de Haseth, James A
2008-06-01
During harvest, a variety of plant based contaminants are collected along with cotton lint. The USDA previously created a mid-infrared, attenuated total reflection (ATR), Fourier transform infrared (FT-IR) spectral library of cotton contaminants for contaminant identification as the contaminants have negative impacts on yarn quality. This library has shown impressive identification rates for extremely similar cellulose based contaminants in cases where the library was representative of the samples searched. When spectra of contaminant samples from crops grown in different geographic locations, seasons, and conditions and measured with a different spectrometer and accessories were searched, identification rates for standard search algorithms decreased significantly. Six standard algorithms were examined: dot product, correlation, sum of absolute values of differences, sum of the square root of the absolute values of differences, sum of absolute values of differences of derivatives, and sum of squared differences of derivatives. Four categories of contaminants derived from cotton plants were considered: leaf, stem, seed coat, and hull. Experiments revealed that the performance of the standard search algorithms depended upon the category of sample being searched and that different algorithms provided complementary information about sample identity. These results indicated that choosing a single standard algorithm to search the library was not possible. Three voting scheme algorithms based on result frequency, result rank, category frequency, or a combination of these factors for the results returned by the standard algorithms were developed and tested for their capability to overcome the unpredictability of the standard algorithms' performances. The group voting scheme search was based on the number of spectra from each category of samples represented in the library returned in the top ten results of the standard algorithms. This group algorithm was able to identify correctly as many test spectra as the best standard algorithm without relying on human choice to select a standard algorithm to perform the searches.
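For concreteness, the six standard similarity/difference measures listed above can be sketched as follows for a pair of equally sampled spectra; the first-difference "derivative" and the absence of any normalization are simplifying assumptions, not the USDA library's exact implementation.

```python
import numpy as np

def search_metrics(library_spectrum, query_spectrum):
    """Six standard spectral search measures for equally sampled spectra (simplified sketch)."""
    a = np.asarray(library_spectrum, dtype=float)
    b = np.asarray(query_spectrum, dtype=float)
    da, db = np.diff(a), np.diff(b)            # simple first-difference "derivative"
    return {
        "dot_product": float(np.dot(a, b)),
        "correlation": float(np.corrcoef(a, b)[0, 1]),
        "sum_abs_diff": float(np.sum(np.abs(a - b))),
        "sum_sqrt_abs_diff": float(np.sum(np.sqrt(np.abs(a - b)))),
        "sum_abs_diff_of_derivatives": float(np.sum(np.abs(da - db))),
        "sum_sq_diff_of_derivatives": float(np.sum((da - db) ** 2)),
    }
```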
Fiyadh, Seef Saadi; AlSaadi, Mohammed Abdulhakim; AlOmar, Mohamed Khalid; Fayaed, Sabah Saadi; Hama, Ako R; Bee, Sharifah; El-Shafie, Ahmed
2017-11-01
The main challenge in the lead removal simulation is the non-linearity of the relationships between the process parameters. The conventional modelling technique usually deals with this problem by a linear method. An alternative modelling technique is an artificial neural network (ANN) system, which is selected to reflect the non-linearity in the interaction among the variables in the function. Herein, synthesized deep eutectic solvents were used as a functionalizing agent with carbon nanotubes as adsorbents of Pb 2+ . Different parameters were used in the adsorption study, including pH (2.7 to 7), adsorbent dosage (5 to 20 mg), contact time (3 to 900 min) and Pb 2+ initial concentration (3 to 60 mg/l). The system was fed and trained with 158 experimental runs carried out at laboratory scale. Two ANN types were designed in this work, the feed-forward back-propagation and layer recurrent networks; both methods are compared based on their predictive proficiency in terms of the mean square error (MSE), root mean square error, relative root mean square error, mean absolute percentage error and determination coefficient (R 2 ) based on the testing dataset. The ANN model of lead removal was subjected to accuracy determination and the results showed an R 2 of 0.9956 with an MSE of 1.66 × 10 -4 . The maximum relative error is 14.93% for the feed-forward back-propagation neural network model.
Improving depth maps of plants by using a set of five cameras
NASA Astrophysics Data System (ADS)
Kaczmarek, Adam L.
2015-03-01
Obtaining high-quality depth maps and disparity maps with the use of a stereo camera is a challenging task for some kinds of objects. The quality of these maps can be improved by taking advantage of a larger number of cameras. The research on the usage of a set of five cameras to obtain disparity maps is presented. The set consists of a central camera and four side cameras. An algorithm for making disparity maps called multiple similar areas (MSA) is introduced. The algorithm was specially designed for the set of five cameras. Experiments were performed with the MSA algorithm and the stereo matching algorithm based on the sum of sum of squared differences (sum of SSD, SSSD) measure. Moreover, the following measures were included in the experiments: sum of absolute differences (SAD), zero-mean SAD (ZSAD), zero-mean SSD (ZSSD), locally scaled SAD (LSAD), locally scaled SSD (LSSD), normalized cross correlation (NCC), and zero-mean NCC (ZNCC). Algorithms presented were applied to images of plants. Making depth maps of plants is difficult because parts of leaves are similar to each other. The potential usability of the described algorithms is especially high in agricultural applications such as robotic fruit harvesting.
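The matching costs named above are standard block-matching measures; a minimal sketch of several of them for two equally sized image patches is given below (window selection and the disparity search itself are omitted, so this illustrates only the cost functions).

```python
import numpy as np

# p and q are float arrays holding two image patches of identical shape.
def sad(p, q):   return np.sum(np.abs(p - q))                 # sum of absolute differences
def ssd(p, q):   return np.sum((p - q) ** 2)                  # sum of squared differences
def zsad(p, q):  return sad(p - p.mean(), q - q.mean())       # zero-mean SAD
def zssd(p, q):  return ssd(p - p.mean(), q - q.mean())       # zero-mean SSD
def ncc(p, q):   return np.sum(p * q) / np.sqrt(np.sum(p**2) * np.sum(q**2))
def zncc(p, q):  return ncc(p - p.mean(), q - q.mean())       # zero-mean NCC

# SSSD simply accumulates SSD costs over several stereo pairs
# (e.g., the central camera matched against each side camera).
def sssd(patch_pairs):
    return sum(ssd(p, q) for p, q in patch_pairs)
```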
Monopulse azimuth measurement in the ATC Radar Beacon System
DOT National Transportation Integrated Search
1971-12-01
A review is made of the application of sum-difference beam : techniques to the ATC Radar Beacon System. A detailed error analysis : is presented for the case of a monopulse azimuth measurement based : on the existing beacon antenna with a modified fe...
Rapid Detection of Volatile Oil in Mentha haplocalyx by Near-Infrared Spectroscopy and Chemometrics.
Yan, Hui; Guo, Cheng; Shao, Yang; Ouyang, Zhen
2017-01-01
Near-infrared spectroscopy combined with partial least squares regression (PLSR) and support vector machine (SVM) was applied for the rapid determination of the volatile oil content in Mentha haplocalyx. The effects of data pre-processing methods on the accuracy of the PLSR calibration models were investigated. The performance of the final model was evaluated according to the correlation coefficient (R) and root mean square error of prediction (RMSEP). For the PLSR model, the best preprocessing method combination was first-order derivative, standard normal variate transformation (SNV), and mean centering, which gave calibration and prediction correlation coefficients of 0.8805 and 0.8719, an RMSEC of 0.091, and an RMSEP of 0.097, respectively. The wave number variables linked to volatile oil lie between 5500 and 4000 cm-1, as shown by analyzing the loading weights and variable importance in projection (VIP) scores. For the SVM model, six LVs (fewer than the seven LVs in the PLSR model) were adopted, and the result was better than that of the PLSR model. The calibration and prediction correlation coefficients were 0.9232 and 0.9202, respectively, with RMSEC and RMSEP of 0.084 and 0.082, respectively, which indicated that the predicted values were accurate and reliable. This work demonstrated that near-infrared reflectance spectroscopy with chemometrics can be used to rapidly determine the main volatile oil content in M. haplocalyx. The quality of a medicine is directly linked to its clinical efficacy; thus, it is important to control the quality of Mentha haplocalyx. Near-infrared spectroscopy combined with partial least squares regression (PLSR) and support vector machine (SVM) was applied for the rapid determination of the volatile oil content in Mentha haplocalyx. For the SVM model, six LVs (fewer than the seven LVs in the PLSR model) were adopted, and the result was better than that of the PLSR model. This demonstrated that near-infrared reflectance spectroscopy with chemometrics can be used to rapidly determine the main volatile oil content in Mentha haplocalyx. Abbreviations used: 1st der: first-order derivative; 2nd der: second-order derivative; LOO: leave-one-out; LVs: latent variables; MC: mean centering; NIR: near-infrared; NIRS: near-infrared spectroscopy; PCR: principal component regression; PLSR: partial least squares regression; RBF: radial basis function; RMSECV: root mean square error of cross-validation; RMSEC: root mean square error of calibration; RMSEP: root mean square error of prediction; SNV: standard normal variate transformation; SVM: support vector machine; VIP: variable importance in projection.
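A minimal sketch of a PLSR calibration of this general kind with scikit-learn, reporting RMSEC and RMSEP, is given below; the spectra, oil contents, number of latent variables, and omission of the preprocessing chain (derivative, SNV, mean centering) are all placeholder assumptions, not the study's data or settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# X: NIR spectra (n_samples x n_wavenumbers); y: volatile oil content. Placeholder synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=60)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=7)   # number of latent variables (LVs)
pls.fit(X_cal, y_cal)

rmsec = np.sqrt(np.mean((y_cal - pls.predict(X_cal).ravel()) ** 2))
rmsep = np.sqrt(np.mean((y_val - pls.predict(X_val).ravel()) ** 2))
print(f"RMSEC = {rmsec:.3f}, RMSEP = {rmsep:.3f}")
```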
Fast and accurate fitting and filtering of noisy exponentials in Legendre space.
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the sum of squared differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on average, more precise than least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters.
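As an illustration of the general idea (not the authors' specific estimator), noisy data can be projected onto a low-order Legendre basis and further analysis or filtering carried out on the resulting coefficients; this sketch uses NumPy's Legendre module with a synthetic exponential.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Synthetic noisy exponential on [0, 1], mapped onto the Legendre domain [-1, 1].
t = np.linspace(0.0, 1.0, 2000)
y = 2.0 * np.exp(-t / 0.2) + 0.05 * np.random.randn(t.size)
x = 2.0 * t - 1.0                     # rescale the time axis to [-1, 1]

deg = 12                              # low-dimensional Legendre representation
coeffs = L.legfit(x, y, deg)          # least-squares projection onto Legendre polynomials

# Filtering/denoising: reconstruct the signal from the low-order coefficients only.
y_smooth = L.legval(x, coeffs)
print("residual RMS:", np.sqrt(np.mean((y - y_smooth) ** 2)))
```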
Muscle synergies may improve optimization prediction of knee contact forces during walking.
Walter, Jonathan P; Kinney, Allison L; Banks, Scott A; D'Lima, Darryl D; Besier, Thor F; Lloyd, David G; Fregly, Benjamin J
2014-02-01
The ability to predict patient-specific joint contact and muscle forces accurately could improve the treatment of walking-related disorders. Muscle synergy analysis, which decomposes a large number of muscle electromyographic (EMG) signals into a small number of synergy control signals, could reduce the dimensionality and thus redundancy of the muscle and contact force prediction process. This study investigated whether use of subject-specific synergy controls can improve optimization prediction of knee contact forces during walking. To generate the predictions, we performed mixed dynamic muscle force optimizations (i.e., inverse skeletal dynamics with forward muscle activation and contraction dynamics) using data collected from a subject implanted with a force-measuring knee replacement. Twelve optimization problems (three cases with four subcases each) that minimized the sum of squares of muscle excitations were formulated to investigate how synergy controls affect knee contact force predictions. The three cases were: (1) Calibrate+Match where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously matched, (2) Precalibrate+Predict where experimental knee contact forces were predicted using precalibrated muscle model parameters values from the first case, and (3) Calibrate+Predict where muscle model parameter values were calibrated and experimental knee contact forces were simultaneously predicted, all while matching inverse dynamic loads at the hip, knee, and ankle. The four subcases used either 44 independent controls or five synergy controls with and without EMG shape tracking. For the Calibrate+Match case, all four subcases closely reproduced the measured medial and lateral knee contact forces (R2 ≥ 0.94, root-mean-square (RMS) error < 66 N), indicating sufficient model fidelity for contact force prediction. For the Precalibrate+Predict and Calibrate+Predict cases, synergy controls yielded better contact force predictions (0.61 < R2 < 0.90, 83 N < RMS error < 161 N) than did independent controls (-0.15 < R2 < 0.79, 124 N < RMS error < 343 N) for corresponding subcases. For independent controls, contact force predictions improved when precalibrated model parameter values or EMG shape tracking was used. For synergy controls, contact force predictions were relatively insensitive to how model parameter values were calibrated, while EMG shape tracking made lateral (but not medial) contact force predictions worse. For the subject and optimization cost function analyzed in this study, use of subject-specific synergy controls improved the accuracy of knee contact force predictions, especially for lateral contact force when EMG shape tracking was omitted, and reduced prediction sensitivity to uncertainties in muscle model parameter values.
New-Sum: A Novel Online ABFT Scheme For General Iterative Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Dingwen; Song, Shuaiwen; Krishnamoorthy, Sriram
Emerging high-performance computing platforms, with large component counts and lower power margins, are anticipated to be more susceptible to soft errors in both logic circuits and memory subsystems. We present an online algorithm-based fault tolerance (ABFT) approach to efficiently detect and recover from soft errors for general iterative methods. We design a novel checksum-based encoding scheme for matrix-vector multiplication that is resilient to both arithmetic and memory errors. Our design decouples the checksum updating process from the actual computation, and allows adaptive checksum overhead control. Building on this new encoding mechanism, we propose two online ABFT designs that can effectively recover from errors when combined with a checkpoint/rollback scheme.
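To illustrate the general checksum idea behind ABFT for matrix-vector multiplication (a generic textbook sketch, not the paper's New-Sum encoding): if a checksum row e^T A is carried along with A, the sum of the entries of y = A x can be checked against (e^T A) x to detect a corrupted entry.

```python
import numpy as np

# Generic checksum-based ABFT sketch for y = A @ x (not the paper's New-Sum scheme).
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
x = rng.normal(size=5)

checksum_row = A.sum(axis=0)        # e^T A, maintained alongside A
y = A @ x

# Inject a soft error into one entry of y.
y_faulty = y.copy()
y_faulty[2] += 0.5

def fault_detected(y_vec, tol=1e-8):
    """Flag a fault if sum(y) disagrees with the checksum prediction (e^T A) x."""
    return abs(y_vec.sum() - checksum_row @ x) > tol

print(fault_detected(y))         # False: consistent result
print(fault_detected(y_faulty))  # True: soft error detected
```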
John R. Brooks; Harry V., Jr. Wiant
2007-01-01
Five economically important Appalachian hardwood species were selected from five ecoregions in West Virginia. A nonlinear extra sum of squares procedure was employed to test whether the height-diameter relationships, based on measurements from the 2000 inventory from West Virginia, were significantly different at the ecoregion level. For all species examined, the null...
Proof without Words: Squares Modulo 3
ERIC Educational Resources Information Center
Nelsen, Roger B.
2013-01-01
Using the fact that the sum of the first n odd numbers is n[superscript 2], we show visually that n[superscript 2] is the same as 0 (mod 3) when n is the same as 0 (mod 3), and n[superscript 2] is the same as 1 (mod 3) when n is the same as plus or minus 1 (mod 3).
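The visual argument can be summarized algebraically as follows (a standard one-line derivation, not taken from the article itself):

```latex
% Squares modulo 3: write n = 3k + r with r \in \{0, \pm 1\}.
n^2 = (3k + r)^2 = 9k^2 + 6kr + r^2 \equiv r^2 \pmod{3},
\qquad\text{so}\qquad
n \equiv 0 \pmod{3} \;\Rightarrow\; n^2 \equiv 0 \pmod{3},
\qquad
n \equiv \pm 1 \pmod{3} \;\Rightarrow\; n^2 \equiv 1 \pmod{3}.
```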
Teaching Graphical Simulations of Fourier Series Expansion of Some Periodic Waves Using Spreadsheets
ERIC Educational Resources Information Center
Singh, Iqbal; Kaur, Bikramjeet
2018-01-01
The present article demonstrates a way of programming using an Excel spreadsheet to teach Fourier series expansion in school/colleges without the knowledge of any typical programming language. By using this, a student learns to approximate partial sum of the n terms of Fourier series for some periodic signals such as square wave, saw tooth wave,…
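The spreadsheet exercise amounts to evaluating the partial sum of a Fourier series; for a unit square wave the odd-harmonic sine series is standard, and a minimal non-spreadsheet sketch is given below (the amplitude and period conventions are the usual textbook ones, not necessarily those of the article).

```python
import numpy as np

def square_wave_partial_sum(t, n_terms):
    """Partial sum of the Fourier sine series of a unit square wave of period 2*pi:
    f(t) ~ (4/pi) * sum over odd k of sin(k*t)/k."""
    s = np.zeros_like(t)
    for k in range(1, 2 * n_terms, 2):          # odd harmonics 1, 3, 5, ...
        s += np.sin(k * t) / k
    return 4.0 / np.pi * s

t = np.linspace(0, 4 * np.pi, 1000)
approx = square_wave_partial_sum(t, n_terms=10)  # first 10 odd harmonics
```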
Interactive Visual Least Absolutes Method: Comparison with the Least Squares and the Median Methods
ERIC Educational Resources Information Center
Kim, Myung-Hoon; Kim, Michelle S.
2016-01-01
A visual regression analysis using the least absolutes method (LAB) was developed, utilizing an interactive approach of visually minimizing the sum of the absolute deviations (SAB) using a bar graph in Excel; the results agree very well with those obtained from nonvisual LAB using a numerical Solver in Excel. These LAB results were compared with…
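The sum of absolute deviations that the visual method minimizes can also be minimized numerically; a minimal sketch (using SciPy's Nelder-Mead search rather than the article's interactive Excel approach, with made-up data) is:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up calibration data for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.7])

def sab(params):
    """Sum of absolute deviations (SAB) of the data from the line y = m*x + b."""
    m, b = params
    return np.sum(np.abs(y - (m * x + b)))

result = minimize(sab, x0=[1.0, 0.0], method="Nelder-Mead")
m_lab, b_lab = result.x
print(f"LAB fit: slope = {m_lab:.3f}, intercept = {b_lab:.3f}")
```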
2015-12-01
issues. A weighted mean can be used in place of the grand mean and the STATA software automatically handles the assignment of the sums of squares. Thus...between groups (i.e., sphericity) using the multivariate test of means provided in STATA 12.1. This test checks whether or not population variances and
Understanding the Degrees of Freedom of Sample Variance by Using Microsoft Excel
ERIC Educational Resources Information Center
Ding, Jian-Hua; Jin, Xian-Wen; Shuai, Ling-Ying
2017-01-01
In this article, the degrees of freedom of the sample variance are simulated by using the Visual Basic for Applications of Microsoft Excel 2010. The simulation file dynamically displays why the sample variance should be calculated by dividing the sum of squared deviations by n-1 rather than n, which is helpful for students to grasp the meaning of…
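The point of the simulation, that dividing the sum of squared deviations by n-1 gives an unbiased variance estimate while dividing by n does not, can be reproduced outside a spreadsheet; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
true_var = 4.0
n, reps = 10, 200_000

# Many samples of size n from a population with known variance.
samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
ss = np.sum((samples - samples.mean(axis=1, keepdims=True)) ** 2, axis=1)

print("mean of SS/(n-1):", ss.mean() / (n - 1))  # close to 4.0 (unbiased)
print("mean of SS/n    :", ss.mean() / n)        # systematically below 4.0
```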
Teaching Tip: When a Matrix and Its Inverse Are Stochastic
ERIC Educational Resources Information Center
Ding, J.; Rhee, N. H.
2013-01-01
A stochastic matrix is a square matrix with nonnegative entries and row sums 1. The simplest example is a permutation matrix, whose rows permute the rows of an identity matrix. A permutation matrix and its inverse are both stochastic. We prove the converse, that is, if a matrix and its inverse are both stochastic, then it is a permutation matrix.
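A quick numerical illustration of the statement (not the article's proof): a permutation matrix and its inverse are both stochastic, whereas a generic stochastic matrix typically has an inverse with negative entries.

```python
import numpy as np

P = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])          # permutation matrix
S = np.array([[0.50, 0.50],
              [0.25, 0.75]])          # stochastic, but not a permutation

def is_stochastic(M):
    """Nonnegative entries and row sums equal to 1."""
    return bool(np.all(M >= 0) and np.allclose(M.sum(axis=1), 1.0))

print(is_stochastic(P), is_stochastic(np.linalg.inv(P)))  # True True
print(is_stochastic(S), is_stochastic(np.linalg.inv(S)))  # True False (inverse has negative entries)
```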
Genotype * environment interaction: a case study for Douglas-fir in western Oregon.
Robert K. Campbell
1992-01-01
Unrecognized genotype x environment interactions (g,e) can bias genetic-gain predictions and models for predicting growth dynamics or species perturbations by global climate change. This study tested six sets of families in 10 plantation sites in a 78-thousand-hectare breeding zone. Plantation differences accounted for 71 percent of sums of squares (15-year heights),...
Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Bentler, Peter M.
2000-01-01
Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
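As a toy analogue of the iteratively reweighted least squares idea (for a simple regression rather than a mean/covariance structure model), each case can be down-weighted according to its current residual; a minimal sketch with Huber-type weights:

```python
import numpy as np

def irls_line(x, y, c=1.345, n_iter=20):
    """Robust straight-line fit by iteratively reweighted least squares with Huber weights.
    A toy illustration of IRLS, not the structural equation modeling estimator of the article."""
    X = np.column_stack([np.ones_like(x), x])
    w = np.ones_like(y)
    beta = np.zeros(2)
    for _ in range(n_iter):
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # weighted least-squares step
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12          # robust scale estimate
        u = np.abs(r / s)
        w = np.where(u <= c, 1.0, c / u)                   # Huber weights
    return beta

x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + np.random.normal(0, 0.2, 50)
y[::10] += 5.0                                             # a few gross outliers
print(irls_line(x, y))                                     # close to [2.0, 0.5]
```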
Latin-square three-dimensional gage master
Jones, L.
1981-05-12
A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.
Ghosh, Payel; Chandler, Adam G; Altinmakas, Emre; Rong, John; Ng, Chaan S
2016-01-01
The aim of this study was to investigate the feasibility of shuttle-mode computed tomography (CT) technology for body perfusion applications by quantitatively assessing and correcting motion artifacts. Noncontrast shuttle-mode CT scans (10 phases, 2 nonoverlapping bed locations) were acquired from 4 patients on a GE 750HD CT scanner. Shuttling effects were quantified using Euclidean distances (between-phase and between-bed locations) of corresponding fiducial points on the shuttle and reference phase scans (prior to shuttle mode). Motion correction with nonrigid registration was evaluated using sum-of-squares differences and distances between centers of segmented volumes of interest on shuttle and references images. Fiducial point analysis showed an average shuttling motion of 0.85 ± 1.05 mm (between-bed) and 1.18 ± 1.46 mm (between-phase), respectively. The volume-of-interest analysis of the nonrigid registration results showed improved sum-of-squares differences from 2950 to 597, between-bed distance from 1.64 to 1.20 mm, and between-phase distance from 2.64 to 1.33 mm, respectively, averaged over all cases. Shuttling effects introduced during shuttle-mode CT acquisitions can be computationally corrected for body perfusion applications.
Optimum SNR data compression in hardware using an Eigencoil array.
King, Scott B; Varosi, Steve M; Duensing, G Randy
2010-05-01
With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based methods of compression after reception lessen computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array produces signal-to-noise ratio (SNR) of an optimal reconstruction using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction" or MRI data compression is demonstrated, with optimal SNR using only four channels, and with a three-channel Eigencoil, superior sum-of-squares SNR was achieved over the standard eight-channel array. A three-channel Eigencoil portion of a product neurovascular array confirms in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method advantageously allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems. (c) 2010 Wiley-Liss, Inc.
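The standard sum-of-squares combination referred to above forms each voxel magnitude as the square root of the summed squared coil magnitudes; a minimal generic sketch (software combination of per-coil images, not the Eigencoil hardware combiner):

```python
import numpy as np

def sum_of_squares_combine(coil_images):
    """Combine complex per-coil images of shape (n_coils, ny, nx) into one magnitude image."""
    coil_images = np.asarray(coil_images)
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

# Synthetic example: 8 coils, 64x64 image.
coils = np.random.randn(8, 64, 64) + 1j * np.random.randn(8, 64, 64)
combined = sum_of_squares_combine(coils)
print(combined.shape)   # (64, 64)
```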
Sum-of-squares-based fuzzy controller design using quantum-inspired evolutionary algorithm
NASA Astrophysics Data System (ADS)
Yu, Gwo-Ruey; Huang, Yu-Chia; Cheng, Chih-Yung
2016-07-01
In the field of fuzzy control, control gains are obtained by solving stabilisation conditions in linear-matrix-inequality-based Takagi-Sugeno fuzzy control method and sum-of-squares-based polynomial fuzzy control method. However, the optimal performance requirements are not considered under those stabilisation conditions. In order to handle specific performance problems, this paper proposes a novel design procedure with regard to polynomial fuzzy controllers using quantum-inspired evolutionary algorithms. The first contribution of this paper is a combination of polynomial fuzzy control and quantum-inspired evolutionary algorithms to undertake an optimal performance controller design. The second contribution is the proposed stability condition derived from the polynomial Lyapunov function. The proposed design approach is dissimilar to the traditional approach, in which control gains are obtained by solving the stabilisation conditions. The first step of the controller design uses the quantum-inspired evolutionary algorithms to determine the control gains with the best performance. Then, the stability of the closed-loop system is analysed under the proposed stability conditions. To illustrate effectiveness and validity, the problem of balancing and the up-swing of an inverted pendulum on a cart is used.
Effect of nonideal square-law detection on static calibration in noise-injection radiometers
NASA Technical Reports Server (NTRS)
Hearn, C. P.
1984-01-01
The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.
VizieR Online Data Catalog: delta Cep VEGA/CHARA observing log (Nardetto+, 2016)
NASA Astrophysics Data System (ADS)
Nardetto, N.; Merand, A.; Mourard, D.; Storm, J.; Gieren, W.; Fouque, P.; Gallenne, A.; Graczyk, D.; Kervella, P.; Neilson, H.; Pietrzynski, G.; Pilecki, B.; Breitfelder, J.; Berio, P.; Challouf, M.; Clausse, J.-M.; Ligi, R.; Mathias, P.; Meilland, A.; Perraut, K.; Poretti, E.; Rainer, M.; Spang, A.; Stee, P.; Tallon-Bosc, I.; Ten Brummelaar, T.
2016-07-01
The columns give, respectively, the date, the RJD, the hour angle (HA), the minimum and maximum wavelengths over which the squared visibility is calculated, the projected baseline length Bp and its orientation PA, and the signal-to-noise ratio on the fringe peak; the last column provides the calibrated squared visibility V2 together with the statistical error on V2 and the systematic error on V2 (see text for details). The data are available on the Jean-Marie Mariotti Center OiDB service (available at http://oidb.jmmc.fr). (1 data file).
A network application for modeling a centrifugal compressor performance map
NASA Astrophysics Data System (ADS)
Nikiforov, A.; Popova, D.; Soldatova, K.
2017-08-01
The approximation of aerodynamic performance of a centrifugal compressor stage and vaneless diffuser by neural networks is presented. Advantages, difficulties and specific features of the method are described. An example of a neural network and its structure is shown. The performances in terms of efficiency, pressure ratio and work coefficient of 39 model stages within the range of flow coefficient from 0.01 to 0.08 were modeled with mean squared error 1.5 %. In addition, the loss and friction coefficients of vaneless diffusers of relative widths 0.014-0.10 are modeled with mean squared error 2.45 %.
Effect of the image resolution on the statistical descriptors of heterogeneous media.
Ledesma-Alonso, René; Barbosa, Romeli; Ortegón, Jaime
2018-02-01
The characterization and reconstruction of heterogeneous materials, such as porous media and electrode materials, involve the application of image processing methods to data acquired by scanning electron microscopy or other microscopy techniques. Among them, binarization and decimation are critical in order to compute the correlation functions that characterize the microstructure of the above-mentioned materials. In this study, we present a theoretical analysis of the effects of the image-size reduction, due to the progressive and sequential decimation of the original image. Three different decimation procedures (random, bilinear, and bicubic) were implemented and their consequences on the discrete correlation functions (two-point, line-path, and pore-size distribution) and the coarseness (derived from the local volume fraction) are reported and analyzed. The chosen statistical descriptors (correlation functions and coarseness) are typically employed to characterize and reconstruct heterogeneous materials. A normalization for each of the correlation functions has been performed. When the loss of statistical information has not been significant for a decimated image, its normalized correlation function is forecast by the trend of the original image (reference function). In contrast, when the decimated image does not hold statistical evidence of the original one, the normalized correlation function diverts from the reference function. Moreover, the equally weighted sum of the average of the squared difference, between the discrete correlation functions of the decimated images and the reference functions, leads to a definition of an overall error. During the first stages of the gradual decimation, the error remains relatively small and independent of the decimation procedure. Above a threshold defined by the correlation length of the reference function, the error becomes a function of the number of decimation steps. At this stage, some statistical information is lost and the error becomes dependent on the decimation procedure. These results may help us to restrict the amount of information that one can afford to lose during a decimation process, in order to reduce the computational and memory cost, when one aims to diminish the time consumed by a characterization or reconstruction technique, yet maintaining the statistical quality of the digitized sample.
Tissue resistivity estimation in the presence of positional and geometrical uncertainties.
Baysal, U; Eyüboğlu, B M
2000-08-01
Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with the results of conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3% compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.
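The "optimum inverse matrix" idea can be illustrated with the textbook linear minimum mean squared error estimator for a linear measurement model y = A x + n; this is a generic sketch under assumed prior and noise covariances, not the paper's specific set of constraints.

```python
import numpy as np

rng = np.random.default_rng(3)
n_meas, n_param = 20, 5

A = rng.normal(size=(n_meas, n_param))          # linearized sensitivity matrix (assumed)
R_x = np.eye(n_param) * 0.5                     # prior covariance of the unknown resistivities (assumed)
R_n = np.eye(n_meas) * 0.01                     # measurement-noise covariance (assumed)

# Optimum (linear MMSE) inverse matrix mapping measurements to parameter estimates.
B = R_x @ A.T @ np.linalg.inv(A @ R_x @ A.T + R_n)

x_true = rng.multivariate_normal(np.zeros(n_param), R_x)
y = A @ x_true + rng.multivariate_normal(np.zeros(n_meas), R_n)
x_hat = B @ y
print(np.round(x_true, 3))
print(np.round(x_hat, 3))
```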
Comparison of structural and least-squares lines for estimating geologic relations
Williams, G.P.; Troutman, B.M.
1990-01-01
Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. Number of data points, slope and intercept of the true relation, and variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as error in X becomes small relative to error in Y. Regarding the second goal-predicting the dependent variable-OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
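A minimal numerical illustration of the two fitting goals, comparing the OLS slope with a generic errors-in-variables (Deming-type) slope; the error-variance ratio is assumed known here, which is the classic structural setting rather than necessarily the paper's exact SA estimator.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x_true = rng.uniform(0, 10, n)
y_true = 1.0 + 2.0 * x_true                 # "true" physical law: slope 2, intercept 1

lam = 1.0                                   # assumed ratio of error variances var(eY)/var(eX)
x = x_true + rng.normal(0, 1.0, n)          # error in X
y = y_true + rng.normal(0, 1.0, n)          # error in Y

sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
sxy = np.cov(x, y, ddof=1)[0, 1]

b_ols = sxy / sxx                                                      # OLS slope, biased toward zero
b_sa = ((syy - lam * sxx) + np.sqrt((syy - lam * sxx) ** 2
                                    + 4 * lam * sxy ** 2)) / (2 * sxy) # structural (Deming) slope

print(f"OLS slope = {b_ols:.2f}, structural slope = {b_sa:.2f}")       # structural slope is near 2
```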
2003-09-01
Deitel, H.M., Deitel, P.J., Nieto, T.R., Lin, T.M., Sadhu, P., XML: How to Program, Prentice Hall, 2001. ...communications will result in a total track following error equal to the sum of the errors for the two vehicles... Refer to (Hunter 2000), (Deitel 2001), or similar references for additional information regarding the XML standard.
A new approach for the estimation of phytoplankton cell counts associated with algal blooms.
Nazeer, Majid; Wong, Man Sing; Nichol, Janet Elizabeth
2017-07-15
This study proposes a method for estimating phytoplankton cell counts associated with an algal bloom, using satellite images coincident with in situ and meteorological parameters. Satellite images from Landsat Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), Operational Land Imager (OLI) and HJ-1 A/B Charge Couple Device (CCD) sensors were integrated with the meteorological observations to provide an estimate of phytoplankton cell counts. All images were atmospherically corrected using the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) atmospheric correction method with a possible error of 1.2%, 2.6%, 1.4% and 2.3% for blue (450-520 nm), green (520-600 nm), red (630-690 nm) and near-infrared (NIR, 760-900 nm) wavelengths, respectively. Results showed that the developed Artificial Neural Network (ANN) model yields a correlation coefficient (R) of 0.95 with the in situ validation data, with Sum of Squared Error (SSE) of 0.34 cells/ml, Mean Relative Error (MRE) of 0.154 cells/ml and a bias of -504.87. The integration of the meteorological parameters with remote sensing observations provided a promising estimation of the algal scum as compared to previous studies. The applicability of the ANN model was tested over Hong Kong as well as over Lake Kasumigaura, Japan and Lake Okeechobee, Florida USA, where algal blooms were also reported. Further, a 40-year (1975-2014) red tide occurrence map was developed and revealed that the eastern and southern waters of Hong Kong are more vulnerable to red tides. Over the 40 years, 66% of red tide incidents were associated with the Dinoflagellates group, while the remainder were associated with the Diatom group (14%) and several other minor groups (20%). The developed technology can be applied to other similar environments in an efficient and cost-saving manner. Copyright © 2017 Elsevier B.V. All rights reserved.
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at the 95% confidence limits, and that the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. It is also observed that the predicted series is close to the original series, which provides a very good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
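A few of the listed validation measures can be computed directly from the observed and predicted series; the sketch below covers RMSE, MAE, MAPE and the maximum absolute error, leaving the information criteria and the Ljung-Box test to a statistics package.

```python
import numpy as np

def validation_measures(observed, predicted):
    """Basic error measures for a fitted time-series model (a subset of the
    measures listed above)."""
    o = np.asarray(observed, dtype=float)
    p = np.asarray(predicted, dtype=float)
    err = o - p
    return {
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAE": np.mean(np.abs(err)),
        "MAPE": 100.0 * np.mean(np.abs(err / o)),
        "MaxAE": np.max(np.abs(err)),
    }
```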
Estimating errors in least-squares fitting
NASA Technical Reports Server (NTRS)
Richter, P. H.
1995-01-01
While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
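For the ordinary polynomial case, the standard error of the fitted function follows from propagating the coefficient covariance through the design matrix. The sketch below is the textbook linear-theory result, offered as a hedged illustration rather than a reproduction of the paper's derivations.

```python
import numpy as np

def polyfit_with_fit_error(x, y, deg, x_eval):
    """Ordinary polynomial least squares plus the standard error of the
    fitted function evaluated at x_eval."""
    X = np.vander(x, deg + 1)                     # design matrix
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(x) - (deg + 1)
    s2 = np.sum((y - X @ coef) ** 2) / dof        # residual variance estimate
    cov = s2 * np.linalg.inv(X.T @ X)             # covariance of the coefficients
    Xe = np.vander(x_eval, deg + 1)
    fit = Xe @ coef
    se_fit = np.sqrt(np.einsum('ij,jk,ik->i', Xe, cov, Xe))  # se of the fitted curve
    return fit, se_fit
```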
Lee, Sheila; McMullen, D.; Brown, G. L.; Stokes, A. R.
1965-01-01
1. A theoretical analysis of the errors in multicomponent spectrophotometric analysis of nucleoside mixtures, by a least-squares procedure, has been made to obtain an expression for the error coefficient, relating the error in calculated concentration to the error in extinction measurements. 2. The error coefficients, which depend only on the `library' of spectra used to fit the experimental curves, have been computed for a number of `libraries' containing the following nucleosides found in s-RNA: adenosine, guanosine, cytidine, uridine, 5-ribosyluracil, 7-methylguanosine, 6-dimethylaminopurine riboside, 6-methylaminopurine riboside and thymine riboside. 3. The error coefficients have been used to determine the best conditions for maximum accuracy in the determination of the compositions of nucleoside mixtures. 4. Experimental determinations of the compositions of nucleoside mixtures have been made and the errors found to be consistent with those predicted by the theoretical analysis. 5. It has been demonstrated that, with certain precautions, the multicomponent spectrophotometric method described is suitable as a basis for automatic nucleotide-composition analysis of oligonucleotides containing nine nucleotides. Used in conjunction with continuous chromatography and flow chemical techniques, this method can be applied to the study of the sequence of s-RNA. PMID:14346087
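Under the usual assumption of independent, equal-variance errors in the extinction readings, the error coefficients of a least-squares multicomponent analysis depend only on the library matrix, as the abstract notes. A minimal sketch of that propagation result (not the authors' exact expressions):

```python
import numpy as np

def error_coefficients(library):
    """Error coefficients for multicomponent least-squares analysis.

    library: (n_wavelengths x n_components) matrix of component spectra.
    With independent, equal-variance extinction errors, the standard error of
    each fitted concentration is sigma_E * sqrt(diag((A^T A)^-1)), so the
    bracketed factor depends only on the chosen library of spectra."""
    A = np.asarray(library, dtype=float)
    return np.sqrt(np.diag(np.linalg.inv(A.T @ A)))

def fit_concentrations(library, extinction):
    """Least-squares concentrations from a measured extinction curve."""
    return np.linalg.lstsq(library, extinction, rcond=None)[0]
```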
From least squares to multilevel modeling: A graphical introduction to Bayesian inference
NASA Astrophysics Data System (ADS)
Loredo, Thomas J.
2016-01-01
This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.
Yehia, Ali M; Mohamed, Heba M
2016-01-05
Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly without any preliminary separation step and were successfully applied to pharmaceutical formulation analysis, showing no interference from excipients. Copyright © 2015 Elsevier B.V. All rights reserved.
CARS Spectral Fitting with Multiple Resonant Species using Sparse Libraries
NASA Technical Reports Server (NTRS)
Cutler, Andrew D.; Magnotti, Gaetano
2010-01-01
The dual-pump CARS technique is often used in the study of turbulent flames. Fast and accurate algorithms are needed for fitting dual-pump CARS spectra for temperature and multiple chemical species. This paper describes the development of such an algorithm. The algorithm employs sparse libraries, whose size grows much more slowly with the number of species than a conventional library. The method was demonstrated by fitting synthetic "experimental" spectra containing four resonant species (N2, O2, H2 and CO2), both with and without noise, and by fitting experimental spectra from a H2-air flame produced by a Hencken burner. In both studies, weighted least-squares fitting of the signal, as opposed to unweighted least-squares fitting of the signal or of its square root, was shown to produce the least random error and to minimize bias error in the fitted parameters.
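A hedged sketch of a weighted least-squares spectral fit of the kind favoured by the study; the theoretical spectrum generator, starting parameters and per-channel noise estimates are placeholders, not the authors' library-based implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_spectrum(theory, params0, measured, noise_sigma):
    """Weighted least-squares fit of a measured spectrum.

    theory(params) must return the model spectrum on the same grid as
    `measured`; weighting each residual by 1/noise_sigma implements the
    weighted fit of the signal (as opposed to an unweighted fit)."""
    def residuals(p):
        return (theory(p) - measured) / noise_sigma
    return least_squares(residuals, params0).x
```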
NASA Astrophysics Data System (ADS)
Ying, Yibin; Liu, Yande; Tao, Yang
2005-09-01
This research evaluated the feasibility of using Fourier-transform near-infrared (FT-NIR) spectroscopy to quantify the soluble-solids content (SSC) and the available acidity (VA) in intact apples. Partial least-squares calibration models obtained with several preprocessing techniques (smoothing, derivative, etc.) over several wave-number ranges were compared. The best models gave a high coefficient of determination (r) of 0.940 for the SSC and a moderate r of 0.801 for the VA, root-mean-square errors of prediction of 0.272% and 0.053%, and root-mean-square errors of calibration of 0.261% and 0.046%, respectively. The results indicate that FT-NIR spectroscopy yields good predictions of the SSC and also show the feasibility of using it to predict the VA of apples.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING... number of degrees of freedom, ν, as follows, noting that the εi are the errors (e.g., differences... measured continuously from the raw exhaust of an engine, its flow-weighted mean concentration is the sum of...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING... number of degrees of freedom, ν, as follows, noting that the εi are the errors (e.g., differences... measured continuously from the raw exhaust of an engine, its flow-weighted mean concentration is the sum of...
Huang, Xinchuan; Schwenke, David W; Lee, Timothy J
2011-01-28
In this work, we build upon our previous work on the theoretical spectroscopy of ammonia, NH(3). Compared to our 2008 study, we include more physics in our rovibrational calculations and more experimental data in the refinement procedure, and these enable us to produce a potential energy surface (PES) of unprecedented accuracy. We call this the HSL-2 PES. The additional physics we include is a second-order correction for the breakdown of the Born-Oppenheimer approximation, and we find it to be critical for improved results. By including experimental data for higher rotational levels in the refinement procedure, we were able to greatly reduce our systematic errors for the rotational dependence of our predictions. These additions together lead to a significantly improved total angular momentum (J) dependence in our computed rovibrational energies. The root-mean-square error between our predictions using the HSL-2 PES and the reliable energy levels from the HITRAN database for J = 0-6 and J = 7∕8 for (14)NH(3) is only 0.015 cm(-1) and 0.020∕0.023 cm(-1), respectively. The root-mean-square errors for the characteristic inversion splittings are approximately 1∕3 smaller than those for energy levels. The root-mean-square error for the 6002 J = 0-8 transition energies is 0.020 cm(-1). Overall, for J = 0-8, the spectroscopic data computed with HSL-2 is roughly an order of magnitude more accurate relative to our previous best ammonia PES (denoted HSL-1). These impressive numbers are eclipsed only by the root-mean-square error between our predictions for purely rotational transition energies of (15)NH(3) and the highly accurate Cologne database (CDMS): 0.00034 cm(-1) (10 MHz), in other words, 2 orders of magnitude smaller. In addition, we identify a deficiency in the (15)NH(3) energy levels determined from a model of the experimental data.
Lin, Chung-Ying; Broström, Anders; Nilsen, Per; Griffiths, Mark D; Pakpour, Amir H
2017-12-01
Background and aims: The Bergen Social Media Addiction Scale (BSMAS) is a six-item self-report scale that serves as a brief and effective psychometric instrument for assessing at-risk social media addiction on the Internet. However, its psychometric properties in Persian have never been examined, and no studies have applied Rasch analysis for its psychometric testing. This study aimed to verify the construct validity of the Persian BSMAS using confirmatory factor analysis (CFA) and Rasch models among 2,676 Iranian adolescents. Methods: In addition to construct validity, measurement invariance in CFA and differential item functioning (DIF) in Rasch analysis across gender were tested for the Persian BSMAS. Results: Both CFA [comparative fit index (CFI) = 0.993; Tucker-Lewis index (TLI) = 0.989; root mean square error of approximation (RMSEA) = 0.057; standardized root mean square residual (SRMR) = 0.039] and Rasch analysis (infit MnSq = 0.88-1.28; outfit MnSq = 0.86-1.22) confirmed the unidimensionality of the BSMAS. Moreover, measurement invariance was supported in multigroup CFA, including metric invariance (ΔCFI = -0.001; ΔSRMR = 0.003; ΔRMSEA = -0.005) and scalar invariance (ΔCFI = -0.002; ΔSRMR = 0.005; ΔRMSEA = 0.001), across gender. No item displayed DIF (DIF contrast = -0.48 to 0.24) in the Rasch analysis across gender. Conclusions: Given that the Persian BSMAS is unidimensional, it is concluded that the instrument can be used to assess how addicted an adolescent is to social media on the Internet. Moreover, users of the instrument may comfortably compare sum scores of the BSMAS across gender.
NASA Astrophysics Data System (ADS)
Salawu, Emmanuel Oluwatobi; Hesse, Evelyn; Stopford, Chris; Davey, Neil; Sun, Yi
2017-11-01
Better understanding and characterization of cloud particles, whose properties and distributions affect climate and weather, are essential for the understanding of present climate and climate change. Since imaging cloud probes have limitations of optical resolution, especially for small particles (with diameter < 25 μm), instruments like the Small Ice Detector (SID) probes, which capture high-resolution spatial light scattering patterns from individual particles down to 1 μm in size, have been developed. In this work, we have proposed a method using Machine Learning techniques to estimate simulated particles' orientation-averaged projected sizes (PAD) and aspect ratio from their 2D scattering patterns. The two-dimensional light scattering patterns (2DLSP) of hexagonal prisms are computed using the Ray Tracing with Diffraction on Facets (RTDF) model. The 2DLSP cover the same angular range as the SID probes. We generated 2DLSP for 162 hexagonal prisms at 133 orientations for each. In a first step, the 2DLSP were transformed into rotation-invariant Zernike moments (ZMs), which are particularly suitable for analyses of pattern symmetry. Then we used ZMs, summed intensities, and root mean square contrast as inputs to the advanced Machine Learning methods. We created one random forests classifier for predicting prism orientation, 133 orientation-specific (OS) support vector classification models for predicting the prism aspect-ratios, 133 OS support vector regression models for estimating prism sizes, and another 133 OS Support Vector Regression (SVR) models for estimating the size PADs. We have achieved a high accuracy of 0.99 in predicting prism aspect ratios, and a low value of normalized mean square error of 0.004 for estimating the particle's size and size PADs.
Ketcha, M D; de Silva, T; Han, R; Uneri, A; Goerres, J; Jacobson, M; Vogt, S; Kleinszig, G; Siewerdsen, J H
2017-02-11
In image-guided procedures, image acquisition is often performed primarily for the task of geometrically registering information from another image dataset, rather than for detection or visualization of a particular feature. While the ability to detect a particular feature in an image has been studied extensively with respect to image quality characteristics (noise, resolution) and remains an active area of research, comparatively little has been done to relate such image quality characteristics to registration performance. To establish such a framework, we derived Cramer-Rao lower bounds (CRLB) for registration accuracy, revealing the underlying dependencies on image variance and gradient strength. The CRLB was analyzed as a function of image quality factors (in particular, dose) for various similarity metrics and compared to registration accuracy using CT images of an anthropomorphic head phantom at various simulated dose levels. Performance was evaluated in terms of the root mean square error (RMSE) of the registration parameters. Analysis of the CRLB shows two primary dependencies: (1) noise variance (related to dose); and (2) the sum of squared image gradients (related to spatial resolution and image content). Comparison of the measured RMSE to the CRLB showed that, for the best registration method, the RMSE achieved the CRLB to within an efficiency factor of 0.21, and optimal estimators followed the predicted inverse proportionality between registration performance and radiation dose. Analysis of the CRLB for image registration is an important step toward understanding and evaluating an intraoperative imaging system with respect to a registration task. While the CRLB is optimistic in absolute performance, it reveals a basis for relating the performance of registration estimators as a function of noise content and may be used to guide acquisition parameter selection (e.g., dose) for purposes of intraoperative registration.
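The two dependencies can be made concrete with a crude one-parameter example: for estimating a 1D shift between two noisy copies of an image, the CRLB scales as the noise variance divided by the sum of squared image gradients. The sketch below is this scalar special case, not the paper's multi-parameter derivation; the factor of two accounting for noise in both images is an assumption of the sketch.

```python
import numpy as np

def crlb_translation(image, noise_var):
    """Rough Cramer-Rao lower bound on the variance of a 1D translation
    estimate between two noisy copies of `image` (scalar illustration only)."""
    gx = np.gradient(image.astype(float), axis=1)   # image gradient along x
    # variance bound ~ noise variance / sum of squared gradients
    return 2.0 * noise_var / np.sum(gx ** 2)
```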
Fatemi, Mohammad Hossein; Ghorbanzad'e, Mehdi
2009-11-01
Quantitative structure-property relationship models for the prediction of the nematic transition temperature (T (N)) were developed by using multilinear regression analysis and a feedforward artificial neural network (ANN). A collection of 42 thermotropic liquid crystals was chosen as the data set. The data set was divided into three sets: for training, and an internal and external test set. Training and internal test sets were used for ANN model development, and the external test set was used for evaluation of the predictive power of the model. In order to build the models, a set of six descriptors were selected by the best multilinear regression procedure of the CODESSA program. These descriptors were: atomic charge weighted partial negatively charged surface area, relative negative charged surface area, polarity parameter/square distance, minimum most negative atomic partial charge, molecular volume, and the A component of moment of inertia, which encode geometrical and electronic characteristics of molecules. These descriptors were used as inputs to ANN. The optimized ANN model had 6:6:1 topology. The standard errors in the calculation of T (N) for the training, internal, and external test sets using the ANN model were 1.012, 4.910, and 4.070, respectively. To further evaluate the ANN model, a crossvalidation test was performed, which produced the statistic Q (2) = 0.9796 and standard deviation of 2.67 based on predicted residual sum of square. Also, the diversity test was performed to ensure the model's stability and prove its predictive capability. The obtained results reveal the suitability of ANN for the prediction of T (N) for liquid crystals using molecular structural descriptors.
NASA Astrophysics Data System (ADS)
He, Anhua; Singh, Ramesh P.; Sun, Zhaohua; Ye, Qing; Zhao, Gang
2016-07-01
Earth tides, atmospheric pressure, precipitation and earthquakes all affect water well levels; earthquakes in particular have a strong impact, and anomalous co-seismic changes in groundwater levels have been observed. In this paper, we have used four different models, simple linear regression (SLR), multiple linear regression (MLR), principal component analysis (PCA) and partial least squares (PLS), to compute the atmospheric pressure and earth tidal effects on the water level. Furthermore, we have used the Akaike information criterion (AIC) to study the performance of the various models. Based on the lowest AIC and sum of squares for error values, the best estimate of the effects of atmospheric pressure and earth tide on the water level is obtained with the MLR model. However, the MLR model does not account for multicollinearity among the inputs; as a result, the atmospheric pressure and earth tidal response coefficients fail to reflect the mechanisms associated with the groundwater level fluctuations. Among the models that address the serious multicollinearity of the inputs, the PLS model shows the minimum AIC value, and its atmospheric pressure and earth tidal response coefficients agree closely with the observations. The atmospheric pressure and earth tidal response coefficients are found to be sensitive to the stress-strain state using the observed data for the period 1 April-8 June 2008 of the Chuan 03# well. The transient enhancement of the porosity of the rock mass around the Chuan 03# well associated with the Wenchuan earthquake (Mw = 7.9, 12 May 2008), which returned to its original pre-seismic level after 13 days, indicates that the sharp co-seismic rise of the water well level could have been induced by static stress change rather than by the development of new fractures.
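For reference, the AIC used to rank such regression models can be computed from the residual sum of squares under a Gaussian error assumption; the candidate predictions in the commented example are placeholders, not quantities from the paper.

```python
import numpy as np

def aic_ls(y, y_hat, n_params):
    """AIC for a least-squares model with Gaussian errors (up to a constant):
    n * ln(SSE / n) + 2 * k."""
    resid = np.asarray(y, dtype=float) - np.asarray(y_hat, dtype=float)
    n = resid.size
    sse = np.sum(resid ** 2)
    return n * np.log(sse / n) + 2 * n_params

# Example: pick the barometric/tidal response model with the lowest AIC
# (y_slr, y_mlr, y_pls and the parameter counts are hypothetical placeholders).
# best = min([("SLR", aic_ls(y, y_slr, 2)),
#             ("MLR", aic_ls(y, y_mlr, 5)),
#             ("PLS", aic_ls(y, y_pls, 4))], key=lambda t: t[1])
```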
NASA Astrophysics Data System (ADS)
Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad
2018-02-01
The cross-validation technique is a popular method to assess and improve the quality of prediction by least-squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, which can be applied whether the noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that, after detecting and removing outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51% and 59%, respectively. In addition, the RMS of the LSC prediction error at the data points and the RMS of the estimated noise of the observations are decreased by 39% and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is reduced by only about 4%, which is a consequence of the sparse distribution of data points in this case study. The influence of gross errors on the LSC prediction results is also investigated by lower cutoff CVEs; it is shown that after elimination of outliers, the RMS of this type of error is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, the LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for the groups with fewer noisy data points.
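The flavour of a direct CVE formula can be seen in the ordinary least-squares analogue, where the leave-one-out errors follow from a single fit via the hat matrix; the LSC version derived in the paper is not reproduced here.

```python
import numpy as np

def loo_cv_errors(X, y):
    """Leave-one-out cross-validation errors of an ordinary least-squares fit,
    computed directly from one fit: e_cv = e / (1 - h_ii)."""
    H = X @ np.linalg.solve(X.T @ X, X.T)   # hat (projection) matrix
    e = y - H @ y                            # ordinary residuals
    return e / (1.0 - np.diag(H))
```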
Ambiguity resolution for satellite Doppler positioning systems
NASA Technical Reports Server (NTRS)
Argentiero, P.; Marini, J.
1979-01-01
The implementation of satellite-based Doppler positioning systems frequently requires the recovery of transmitter position from a single pass of Doppler data. The least-squares approach to the problem yields conjugate solutions on either side of the satellite subtrack. It is important to develop a procedure for choosing the proper solution which is correct in a high percentage of cases. A test for ambiguity resolution is derived which is the most powerful in the sense that it maximizes the probability of a correct decision. When systematic error sources are properly included in the least-squares reduction process to yield an optimal solution, the test reduces to choosing the solution which gives the smaller value of the least-squares loss function. When systematic error sources are ignored in the least-squares reduction, the most powerful test is a quadratic form comparison, with the weighting matrix of the quadratic form obtained by computing the pseudoinverse of a reduced-rank square matrix. A formula for computing the power of the most powerful test is provided. Numerical examples are included in which the power of the test is computed for situations that are relevant to the design of a satellite-aided search and rescue system.
NASA Technical Reports Server (NTRS)
Giver, Lawrence P.; Benner, D. C.; Tomasko, M. G.; Fink, U.; Kerola, D.
1990-01-01
Transmission measurements made on near-infrared laboratory methane spectra have previously been fit using a Malkmus band model. The laboratory spectra were obtained in three groups at temperatures averaging 112, 188, and 295 K; band-model fitting was done separately for each temperature group. These band-model parameters cannot be used directly in scattering-atmosphere model computations, so an exponential-sum model is being developed which includes pressure and temperature fitting parameters. The goal is to obtain model parameters by least-squares fits at 10/cm intervals from 3800 to 9100/cm. These results will be useful in the interpretation of current planetary spectra and also of NIMS spectra of Jupiter anticipated from the Galileo mission.
Basalekou, M.; Pappas, C.; Kotseridis, Y.; Tarantilis, P. A.; Kontaxakis, E.
2017-01-01
Color, phenolic content, and chemical age values of red wines made from Cretan grape varieties (Kotsifali, Mandilari) were evaluated over nine months of maturation in different containers for two vintages. The wines differed greatly on their anthocyanin profiles. Mid-IR spectra were also recorded with the use of a Fourier Transform Infrared Spectrophotometer in ZnSe disk mode. Analysis of Variance was used to explore the parameter's dependency on time. Determination models were developed for the chemical age indexes using Partial Least Squares (PLS) (TQ Analyst software) considering the spectral region 1830–1500 cm−1. The correlation coefficients (r) for chemical age index i were 0.86 for Kotsifali (Root Mean Square Error of Calibration (RMSEC) = 0.067, Root Mean Square Error of Prediction (RMSEP) = 0,115, and Root Mean Square Error of Validation (RMSECV) = 0.164) and 0.90 for Mandilari (RMSEC = 0.050, RMSEP = 0.040, and RMSECV = 0.089). For chemical age index ii the correlation coefficients (r) were 0.86 and 0.97 for Kotsifali (RMSEC 0.044, RMSEP = 0.087, and RMSECV = 0.214) and Mandilari (RMSEC = 0.024, RMSEP = 0.033, and RMSECV = 0.078), respectively. The proposed method is simpler, less time consuming, and more economical and does not require chemical reagents. PMID:29225994
Application of near-infrared spectroscopy for the rapid quality assessment of Radix Paeoniae Rubra
NASA Astrophysics Data System (ADS)
Zhan, Hao; Fang, Jing; Tang, Liying; Yang, Hongjun; Li, Hua; Wang, Zhuju; Yang, Bin; Wu, Hongwei; Fu, Meihong
2017-08-01
Near-infrared (NIR) spectroscopy with multivariate analysis was used to quantify gallic acid, catechin, albiflorin, and paeoniflorin in Radix Paeoniae Rubra, and the feasibility of classifying samples originating from different areas was investigated. A new high-performance liquid chromatography method was developed and validated to analyze gallic acid, catechin, albiflorin, and paeoniflorin in Radix Paeoniae Rubra as the reference. Partial least squares (PLS), principal component regression (PCR), and stepwise multivariate linear regression (SMLR) were performed to calibrate the regression model. Different data pretreatments such as derivatives (1st and 2nd), multiplicative scatter correction, standard normal variate, Savitzky-Golay filter, and Norris derivative filter were applied to remove the systematic errors. The performance of the model was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of prediction (RMSEP), root mean square error of cross-validation (RMSECV), and correlation coefficient (r). The results show that, compared to PCR and SMLR, PLS had lower RMSEC, RMSECV, and RMSEP and a higher r for all four analytes. PLS coupled with proper pretreatments showed good performance in both fitting and prediction. Furthermore, the originating areas of the Radix Paeoniae Rubra samples were partly distinguished by principal component analysis. This study shows that NIR with PLS is a reliable, inexpensive, and rapid tool for the quality assessment of Radix Paeoniae Rubra.
Koláčková, Pavla; Růžičková, Gabriela; Gregor, Tomáš; Šišperová, Eliška
2015-08-30
Calibration models for the Fourier transform near-infrared (FT-NIR) instrument were developed for quick and non-destructive determination of oil and fatty acids in whole achenes of milk thistle. Samples with a range of oil and fatty acid levels were collected and their transmittance spectra were obtained with the FT-NIR instrument. Based on these spectra and on data obtained by means of the reference methods, Soxhlet extraction and gas chromatography (GC), calibration models were created by means of partial least squares (PLS) regression analysis. The precision and accuracy of the calibration models were verified via cross-validation of validation samples whose spectra were not part of the calibration model, and also according to the root mean square error of prediction (RMSEP), root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV) and the validation coefficient of determination (R(2)). R(2) values for whole seeds were 0.96, 0.96, 0.83 and 0.67, and the RMSEP values were 0.76, 1.68, 1.24 and 0.54 for oil, linoleic (C18:2), oleic (C18:1) and palmitic (C16:0) acids, respectively. The calibration models are appropriate for the non-destructive determination of oil and fatty acid levels in whole seeds of milk thistle. © 2014 Society of Chemical Industry.
da Silva, Fabiana E B; Flores, Érico M M; Parisotto, Graciele; Müller, Edson I; Ferrão, Marco F
2016-03-01
An alternative method for the quantification of sulphamethoxazole (SMZ) and trimethoprim (TMP) using diffuse reflectance infrared Fourier-transform spectroscopy (DRIFTS) and partial least squares (PLS) regression was developed. Interval Partial Least Squares (iPLS) and Synergy Interval Partial Least Squares (siPLS) were applied to select a spectral range that provided the lowest prediction error in comparison to the full-spectrum model. Fifteen commercial tablet formulations and forty-nine synthetic samples were used. The concentration ranges considered were 400 to 900 mg g-1 SMZ and 80 to 240 mg g-1 TMP. Spectral data were recorded between 600 and 4000 cm-1 with a 4 cm-1 resolution by DRIFTS. The proposed procedure was compared to high performance liquid chromatography (HPLC). The results obtained for the root mean square error of prediction (RMSEP) during validation of the models for samples of sulphamethoxazole (SMZ) and trimethoprim (TMP) using siPLS demonstrate that this approach is a valid technique for use in the quantitative analysis of pharmaceutical formulations. The selected-interval algorithm allowed regression models to be built with smaller errors than the full-spectrum PLS model. An RMSEP of 13.03 mg g-1 for SMZ and 4.88 mg g-1 for TMP was obtained after selection of the best spectral regions by siPLS.
NASA Technical Reports Server (NTRS)
Choe, C. Y.; Tapley, B. D.
1975-01-01
A method proposed by Potter of applying the Kalman-Bucy filter to the problem of estimating the state of a dynamic system is described, in which the square root of the state error covariance matrix is used to process the observations. A new technique which propagates the covariance square root matrix in lower triangular form is given for the discrete observation case. The technique is faster than previously proposed algorithms and is well-adapted for use with the Carlson square root measurement algorithm.
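A generic square-root propagation step is sketched below using a QR factorization; Potter's measurement update and the specific fast lower-triangular propagation described in the paper (and the Carlson update) are not reproduced, and F and Q_sqrt are placeholders for the state transition matrix and a square root of the process noise covariance.

```python
import numpy as np

def sqrt_time_update(S, F, Q_sqrt):
    """Propagate the lower-triangular square root S of the error covariance
    (P = S S^T) through x_{k+1} = F x_k + w, with cov(w) = Q_sqrt Q_sqrt^T."""
    M = np.hstack((F @ S, Q_sqrt))          # P_pred = M M^T
    _, R = np.linalg.qr(M.T)                # M^T = Q R  =>  R^T R = P_pred
    S_pred = R.T                            # lower-triangular factor of P_pred
    # fix signs so the diagonal is non-negative (QR is unique only up to signs)
    signs = np.sign(np.diag(S_pred))
    signs[signs == 0] = 1.0
    return S_pred * signs
```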
A Comparison of Normal and Elliptical Estimation Methods in Structural Equation Models.
ERIC Educational Resources Information Center
Schumacker, Randall E.; Cheevatanarak, Suchittra
Monte Carlo simulation compared chi-square statistics, parameter estimates, and root mean square error of approximation values using normal and elliptical estimation methods. Three research conditions were imposed on the simulated data: sample size, population contamination percent, and kurtosis. A Bentler-Weeks structural model established the…
Phase modulation for reduced vibration sensitivity in laser-cooled clocks in space
NASA Technical Reports Server (NTRS)
Klipstein, W.; Dick, G.; Jefferts, S.; Walls, F.
2001-01-01
The standard interrogation technique in atomic beam clocks is square-wave frequency modulation (SWFM), which suffers a first-order sensitivity to vibrations, since changes in the transit time of the atoms translate into perceived frequency errors. Square-wave phase modulation (SWPM) interrogation eliminates sensitivity to this noise.
An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models
ERIC Educational Resources Information Center
Prindle, John J.; McArdle, John J.
2012-01-01
This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…
Synthesis of hover autopilots for rotary-wing VTOL aircraft
NASA Technical Reports Server (NTRS)
Hall, W. E.; Bryson, A. E., Jr.
1972-01-01
The practical situation is considered where imperfect information on only a few rotor and fuselage state variables is available. Filters are designed to estimate all the state variables from noisy measurements of fuselage pitch/roll angles and from noisy measurements of both fuselage and rotor pitch/roll angles. The mean square response of the vehicle to a very gusty, random wind is computed using various filter/controllers and is found to be quite satisfactory although, of course, not so good as when one has perfect information (idealized case). The second part of the report considers precision hover over a point on the ground. A vehicle model without rotor dynamics is used and feedback signals in position and integral of position error are added. The mean square response of the vehicle to a very gusty, random wind is computed, assuming perfect information feedback, and is found to be excellent. The integral error feedback gives zero position error for a steady wind, and smaller position error for a random wind.
NASA Astrophysics Data System (ADS)
Reis, D. S.; Stedinger, J. R.; Martins, E. S.
2005-10-01
This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
Robustness study of the pseudo open-loop controller for multiconjugate adaptive optics.
Piatrou, Piotr; Gilles, Luc
2005-02-20
Robustness of the recently proposed "pseudo open-loop control" algorithm against various system errors has been investigated for the representative example of the Gemini-South 8-m telescope multiconjugate adaptive-optics system. The existing model to represent the adaptive-optics system with pseudo open-loop control has been modified to account for misalignments, noise and calibration errors in deformable mirrors, and wave-front sensors. Comparison with the conventional least-squares control model has been done. We show with the aid of both transfer-function pole-placement analysis and Monte Carlo simulations that POLC remains remarkably stable and robust against very large levels of system errors and outperforms in this respect least-squares control. Approximate stability margins as well as performance metrics such as Strehl ratios and rms wave-front residuals averaged over a 1-arc min field of view have been computed for different types and levels of system errors to quantify the expected performance degradation.
Registration of pencil beam proton radiography data with X-ray CT.
Deffet, Sylvain; Macq, Benoît; Righetto, Roberto; Vander Stappen, François; Farace, Paolo
2017-10-01
Proton radiography seems to be a promising tool for assessing the quality of the stopping power computation in proton therapy. However, range error maps obtained on the basis of proton radiographs are very sensitive to small misalignment between the planning CT and the proton radiography acquisitions. In order to be able to mitigate misalignment in postprocessing, the authors implemented a fast method for registration between pencil proton radiography data obtained with a multilayer ionization chamber (MLIC) and an X-ray CT acquired on a head phantom. The registration was performed by optimizing a cost function which performs a comparison between the acquired data and simulated integral depth-dose curves. Two methodologies were considered, one based on dual orthogonal projections and the other one on a single projection. For each methodology, the robustness of the registration algorithm with respect to three confounding factors (measurement noise, CT calibration errors, and spot spacing) was investigated by testing the accuracy of the method through simulations based on a CT scan of a head phantom. The present registration method showed robust convergence towards the optimal solution. For the level of measurement noise and the uncertainty in the stopping power computation expected in proton radiography using a MLIC, the accuracy appeared to be better than 0.3° for angles and 0.3 mm for translations by use of the appropriate cost function. The spot spacing analysis showed that a spacing larger than the 5 mm used by other authors for the investigation of a MLIC for proton radiography led to results with absolute accuracy better than 0.3° for angles and 1 mm for translations when orthogonal proton radiographs were fed into the algorithm. In the case of a single projection, 6 mm was the largest spot spacing presenting an acceptable registration accuracy. For registration of proton radiography data with X-ray CT, the use of a direct ray-tracing algorithm to compute sums of squared differences and corrections of range errors showed very good accuracy and robustness with respect to three confounding factors: measurement noise, calibration error, and spot spacing. It is therefore a suitable algorithm to use in the in vivo range verification framework, allowing the proton range uncertainty due to setup errors to be separated in postprocessing from the other sources of uncertainty. © 2017 American Association of Physicists in Medicine.
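The registration cost can be sketched as a sum of squared differences between the measured integral depth-dose curves and curves simulated through the CT for a candidate rigid transform; simulate_idd, the parameter vector and the optimizer choice below are placeholders, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def registration_cost(params, measured_idd, simulate_idd):
    """Sum of squared differences between measured integral depth-dose curves
    and curves simulated from the CT after applying a rigid transform.

    params: (rx, ry, rz, tx, ty, tz); simulate_idd is a user-supplied function
    (e.g. ray tracing through the CT) returning curves on the same grid."""
    simulated = simulate_idd(params)
    return np.sum((measured_idd - simulated) ** 2)

# Example usage (hypothetical inputs):
# result = minimize(registration_cost, x0=np.zeros(6),
#                   args=(measured_idd, simulate_idd), method="Powell")
```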
A Comprehensive Study of Gridding Methods for GPS Horizontal Velocity Fields
NASA Astrophysics Data System (ADS)
Wu, Yanqiang; Jiang, Zaisen; Liu, Xiaoxia; Wei, Wenxin; Zhu, Shuang; Zhang, Long; Zou, Zhenyu; Xiong, Xiaohui; Wang, Qixin; Du, Jiliang
2017-03-01
Four gridding methods for GPS velocities are compared in terms of their precision, applicability and robustness by analyzing simulated data with uncertainties from 0.0 to ±3.0 mm/a. When the input data are 1° × 1° grid sampled and the uncertainty of the additional error is greater than ±1.0 mm/a, the gridding results show that the least-squares collocation method is highly robust while the robustness of the Kriging method is low. In contrast, the spherical harmonics and the multi-surface function are moderately robust, and the regional singular values for the multi-surface function method and the edge effects for the spherical harmonics method become more significant with increasing uncertainty of the input data. When the input data (with additional errors of ±2.0 mm/a) are decimated by 50% from the 1° × 1° grid data and then erased in three 6° × 12° regions, the gridding results in these three regions indicate that the least-squares collocation and the spherical harmonics methods have good performances, while the multi-surface function and the Kriging methods may lead to singular values. The gridding techniques are also applied to GPS horizontal velocities with an average error of ±0.8 mm/a over the Chinese mainland and the surrounding areas, and the results show that the least-squares collocation method has the best performance, followed by the Kriging and multi-surface function methods. Furthermore, the edge effects of the spherical harmonics method are significantly affected by the sparseness and geometric distribution of the input data. In general, the least-squares collocation method is superior in terms of its robustness, edge effect, error distribution and stability, while the other methods have several positive features.
Analysis of S-box in Image Encryption Using Root Mean Square Error Method
NASA Astrophysics Data System (ADS)
Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan
2012-07-01
The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving and many variants appear in the literature, including the advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence for distinguishing an S-box for application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to the existing algebraic and statistical analyses already used for image encryption applications, we propose an application of the root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of root mean square error analysis in statistics has proven effective in determining the difference between original data and processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of the S-boxes.
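The root mean square error analysis itself reduces to a direct comparison between the plain image and its encrypted version; a minimal sketch:

```python
import numpy as np

def rmse(original, encrypted):
    """Root mean square error between the plain image and its encrypted
    version; a larger RMSE indicates a stronger scrambling of pixel values."""
    a = np.asarray(original, dtype=float)
    b = np.asarray(encrypted, dtype=float)
    return np.sqrt(np.mean((a - b) ** 2))
```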
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yunlong; Wang, Aiping; Guo, Lei
This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error is minimized. The performance of the error-entropy minimization criterion is compared with that of mean-square-error minimization in the simulation results.
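A common way to realize this criterion is to estimate Renyi's quadratic entropy of the errors from the Parzen-window density estimate with a Gaussian kernel. The sketch below uses that standard information-potential form with an assumed kernel width; it illustrates the quantity being minimized, not the paper's exact recursive controller design.

```python
import numpy as np

def error_entropy(errors, sigma=0.5):
    """Renyi quadratic entropy of the tracking errors, estimated with a
    Parzen window and Gaussian kernel of (assumed) width sigma."""
    e = np.asarray(errors, dtype=float).ravel()
    d = e[:, None] - e[None, :]                       # pairwise error differences
    s2 = 2.0 * sigma ** 2                             # effective kernel width sigma*sqrt(2)
    V = np.mean(np.exp(-d ** 2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2))  # information potential
    return -np.log(V)                                 # smaller entropy = more concentrated errors
```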
Error-Based Design Space Windowing
NASA Technical Reports Server (NTRS)
Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman
2002-01-01
Windowing of the design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) selects a region of interest by setting a requirement on the response level and checks it using global RS predictions over the design space. This approach, however, is vulnerable because RS modeling errors may lead to zooming in on the wrong region. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.