Sample records for squared error criterion

  1. Analysis of tractable distortion metrics for EEG compression applications.

    PubMed

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando

    2012-07-01

    Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, a global and relative indicator of the quality of the reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that using the root-mean-square error as the target parameter in EEG compression allows both clinicians and scientists to infer whether the coding error is clinically acceptable or not, at no cost to the compression ratio.
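
    As a quick illustration of the two measures the abstract contrasts, here is a minimal Python sketch (ours, not the paper's; the synthetic signal is a stand-in for an EEG segment) computing the relative PRD and the absolute RMSE:

    import numpy as np

    def prd(x, x_rec):
        # Percentage root-mean-square difference: relative, dimensionless (%)
        return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

    def rmse(x, x_rec):
        # Root-mean-square error: absolute, in the signal's own units (e.g., microvolts)
        return np.sqrt(np.mean((x - x_rec) ** 2))

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1000)                  # stand-in EEG segment
    x_rec = x + 0.05 * rng.standard_normal(1000)   # reconstruction with coding noise
    print(prd(x, x_rec), rmse(x, x_rec))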

  2. Chemical library subset selection algorithms: a unified derivation using spatial statistics.

    PubMed

    Hamprecht, Fred A; Thiel, Walter; van Gunsteren, Wilfred F

    2002-01-01

    If similar compounds have similar activity, rational subset selection becomes superior to random selection in screening for pharmacological lead discovery programs. Traditional approaches to this experimental design problem fall into two classes: (i) a linear or quadratic response function is assumed; (ii) some space-filling criterion is optimized. The assumptions underlying the first approach are clear but not always defensible; the second approach yields more intuitive designs but lacks a clear theoretical foundation. We model activity in a bioassay as the realization of a stochastic process and use the best linear unbiased estimator to construct spatial sampling designs that optimize the integrated mean square prediction error, the maximum mean square prediction error, or the entropy. We argue that our approach constitutes a unifying framework encompassing most proposed techniques as limiting cases and sheds light on their underlying assumptions. In particular, vector quantization is obtained, in dimensions up to eight, in the limiting case of very smooth response surfaces for the integrated mean square error criterion. Closest packing is obtained for very rough surfaces under the integrated mean square error and entropy criteria. We suggest using either the integrated mean square prediction error or the entropy as optimization criteria rather than approximations thereof, and propose a scheme for direct iterative minimization of the integrated mean square prediction error. Finally, we discuss how the quality of chemical descriptors manifests itself and clarify the assumptions underlying the selection of diverse or representative subsets.

  3. Validating Clusters with the Lower Bound for Sum-of-Squares Error

    ERIC Educational Resources Information Center

    Steinley, Douglas

    2007-01-01

    Given that a minor condition holds (e.g., the number of variables is greater than the number of clusters), a nontrivial lower bound for the sum-of-squares error criterion in K-means clustering is derived. By calculating the lower bound for several different situations, a method is developed to determine the adequacy of a cluster solution based on…
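
    For reference, the sum-of-squares error criterion that the bound concerns is the within-cluster sum of squared distances to the centroids; a minimal sketch of the criterion itself (the lower bound derived in the article is not reproduced here):

    import numpy as np

    def kmeans_sse(X, labels):
        # Sum-of-squares error: squared distances of points to their cluster centroid
        sse = 0.0
        for k in np.unique(labels):
            cluster = X[labels == k]
            centroid = cluster.mean(axis=0)
            sse += np.sum((cluster - centroid) ** 2)
        return sse

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 5))
    labels = rng.integers(0, 3, size=100)          # arbitrary 3-cluster assignment
    print(kmeans_sse(X, labels))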

  4. Criterion Predictability: Identifying Differences Between r-squares

    ERIC Educational Resources Information Center

    Malgady, Robert G.

    1976-01-01

    An analysis of variance procedure for testing differences in r-squared, the coefficient of determination, across independent samples is proposed and briefly discussed. The principal advantage of the procedure is to minimize Type I error for follow-up tests of pairwise differences. (Author/JKS)

  5. Cascade control of superheated steam temperature with neuro-PID controller.

    PubMed

    Zhang, Jianhua; Zhang, Fenfang; Ren, Mifeng; Hou, Guolian; Fang, Fang

    2012-11-01

    In this paper, an improved cascade control methodology for superheated processes is developed, in which the primary PID controller is implemented by neural networks trained by minimizing an error-entropy criterion. The entropy of the tracking error can be estimated recursively by utilizing a receding horizon window technique. The measurable disturbances in superheated processes are input to the neuro-PID controller in addition to the sequences of tracking error in the outer loop control system; hence, feedback control is combined with feedforward control in the proposed neuro-PID controller. The convergence condition of the neural networks is analyzed. The implementation procedures of the proposed cascade control approach are summarized. Compared with a neuro-PID controller that minimizes the squared error criterion, the proposed neuro-PID controller minimizing the error-entropy criterion may decrease fluctuations of the superheated steam temperature. A simulation example shows the advantages of the proposed method. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  6. RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond what is possible with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) based on the MMSE criterion. With MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using a one-tap recursive least squares (RLS) algorithm for DS-CDMA with FDE, where the forgetting factor is adapted to the changing channel condition by the least mean square (LMS) algorithm. We evaluate the BER performance using RLS-CE with an adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.
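
    A generic one-tap RLS estimator with a fixed forgetting factor lam conveys the flavor of the scheme (a textbook form sketched under our own assumptions; the paper's LMS adaptation of the forgetting factor is omitted):

    import numpy as np

    def rls_one_tap(pilots, received, lam=0.99, delta=1e-2):
        # Track a complex channel gain h recursively with forgetting factor lam
        h, P = 0.0 + 0.0j, 1.0 / delta
        for x, d in zip(pilots, received):
            g = P * np.conj(x) / (lam + P * abs(x) ** 2)    # gain
            h = h + g * (d - h * x)                         # update with a-priori error
            P = (P - g * x * P) / lam                       # inverse-correlation update
        return h

    rng = np.random.default_rng(2)
    h_true = 0.8 - 0.3j
    pilots = rng.choice([1.0, -1.0], size=200) + 0j         # pilot chips
    noise = 0.05 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
    print(rls_one_tap(pilots, h_true * pilots + noise))     # approaches h_true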

  7. Criterion for estimation of stress-deformed state of SD-materials

    NASA Astrophysics Data System (ADS)

    Orekhov, Andrey V.

    2018-05-01

    A criterion is proposed that determines the moment when the growth pattern of a monotonic numerical sequence changes from linear to parabolic. The criterion is based on comparing the squared errors of the linear and the incomplete quadratic approximations. The approximating functions are constructed locally, only at points located near a possible change in the nature of the sequence's growth.

  8. AKLSQF - LEAST SQUARES CURVE FITTING

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user either to specify the tolerable least squares error of the fit or to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least-squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fit with up to a 100th-degree polynomial. All computations in the program are carried out in double precision for real numbers and in long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC XT/AT or compatible using Microsoft's QuickBasic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
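
    The degree-escalation loop described above is easy to sketch with NumPy (np.polyfit is a stand-in for the orthogonal factorial polynomials and Stirling-number reduction the program actually uses):

    import numpy as np

    def fit_to_tolerance(x, y, tol, max_degree=100):
        # Increase the polynomial degree until the least squares fit error meets tol
        for degree in range(1, max_degree + 1):
            coeffs = np.polyfit(x, y, degree)
            err = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
            if err <= tol:
                break
        return coeffs, degree, err

    x = np.linspace(0.0, 1.0, 50)          # uniformly spaced data, as AKLSQF assumes
    y = np.sin(2.0 * np.pi * x)
    coeffs, degree, err = fit_to_tolerance(x, y, tol=1e-3)
    print(degree, err)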

  9. Maximum correntropy square-root cubature Kalman filter with application to SINS/GPS integrated systems.

    PubMed

    Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng

    2018-05-31

    For a nonlinear system, the cubature Kalman filter (CKF) and its square-root version are useful methods for solving state estimation problems, and both can obtain good performance in Gaussian noises. However, their performance often degrades significantly in the face of non-Gaussian noises, particularly when the measurements are contaminated by heavy-tailed impulsive noises. By utilizing the maximum correntropy criterion (MCC) to improve robustness instead of the traditional minimum mean square error (MMSE) criterion, a new square-root nonlinear filter is proposed in this study, named the maximum correntropy square-root cubature Kalman filter (MCSCKF). The new filter not only retains the advantage of the square-root cubature Kalman filter (SCKF), but also exhibits robust performance against heavy-tailed non-Gaussian noises. A judgment condition that avoids numerical problems is also given. The results of two illustrative examples, especially the SINS/GPS integrated systems, demonstrate the desirable performance of the proposed filter. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
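
    The contrast between the two criteria is already visible at the level of the per-sample loss; a minimal sketch (the kernel bandwidth sigma is a free choice of ours, not a value from the paper):

    import numpy as np

    def mse_loss(e):
        return np.mean(e ** 2)

    def correntropy(e, sigma=1.0):
        # Mean Gaussian kernel of the errors; MCC maximizes this, so large
        # heavy-tailed errors are effectively down-weighted
        return np.mean(np.exp(-e ** 2 / (2 * sigma ** 2)))

    rng = np.random.default_rng(3)
    e = 0.1 * rng.standard_normal(1000)
    e[::100] += 20.0                       # impulsive, heavy-tailed outliers
    print(mse_loss(e))                     # dominated by the outliers
    print(correntropy(e))                  # barely affected by them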

  10. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    NASA Technical Reports Server (NTRS)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters, and inputs; and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs, and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
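
    A toy simulation (ours, not the authors' crop-model setup) illustrating the two criteria: MSEP_fixed evaluates one fixed model against the data, while MSEP_uncertain(X), averaged over an ensemble of model variants, splits exactly into a squared-bias term plus a model-variance term:

    import numpy as np

    rng = np.random.default_rng(4)
    obs = rng.normal(10.0, 1.0, size=200)                    # observations
    ensemble = obs + rng.normal(0.5, 1.5, size=(30, 200))    # 30 model variants, bias 0.5

    msep_fixed = np.mean((ensemble[0] - obs) ** 2)           # fixed structure/inputs/parameters
    msep_uncertain = np.mean((ensemble - obs) ** 2)          # averaged over the ensemble
    bias_sq = np.mean((ensemble.mean(axis=0) - obs) ** 2)    # squared-bias term
    model_var = np.mean(ensemble.var(axis=0))                # model-variance term
    print(msep_fixed, msep_uncertain, bias_sq + model_var)   # last two agree exactly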

  11. Joint Transmitter and Receiver Power Allocation under Minimax MSE Criterion with Perfect and Imperfect CSI for MC-CDMA Transmissions

    NASA Astrophysics Data System (ADS)

    Kotchasarn, Chirawat; Saengudomlert, Poompat

    We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. For comparison, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.

  12. An Error-Entropy Minimization Algorithm for Tracking Control of Nonlinear Stochastic Systems with Non-Gaussian Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yunlong; Wang, Aiping; Guo, Lei

    This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
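
    The Parzen-window entropy estimate that minimum-error-entropy schemes typically build on can be written down directly: Renyi's quadratic entropy of the errors is computed from the "information potential", a double sum of Gaussian kernels over error pairs. A minimal sketch (sigma is a free bandwidth choice; the paper's recursive controller design is not reproduced):

    import numpy as np

    def information_potential(errors, sigma=1.0):
        # V = (1/N^2) sum_i sum_j G_{sigma*sqrt(2)}(e_i - e_j)
        diff = errors[:, None] - errors[None, :]
        kernel = np.exp(-diff ** 2 / (4 * sigma ** 2)) / np.sqrt(4 * np.pi * sigma ** 2)
        return np.mean(kernel)

    def quadratic_renyi_entropy(errors, sigma=1.0):
        # H2 = -log V; minimizing H2 is equivalent to maximizing V
        return -np.log(information_potential(errors, sigma))

    e = np.random.default_rng(5).standard_normal(500)        # stand-in tracking errors
    print(quadratic_renyi_entropy(e))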

  13. Error-Based Design Space Windowing

    NASA Technical Reports Server (NTRS)

    Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman

    2002-01-01

    Windowing of the design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) uses a region of interest defined by setting a requirement on the response level and checking it against global RS predictions over the design space. This approach, however, is vulnerable because RS modeling errors may lead to zooming in on the wrong region. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.

  14. Robust signal recovery using the prolate spherical wave functions and maximum correntropy criterion

    NASA Astrophysics Data System (ADS)

    Zou, Cuiming; Kou, Kit Ian

    2018-05-01

    Signal recovery is one of the most important problems in signal processing. This paper proposes a novel signal recovery method based on prolate spherical wave functions (PSWFs). PSWFs are a family of special functions that have been shown to perform well in signal recovery. However, existing PSWF-based recovery methods use the mean square error (MSE) criterion, which depends on the assumption that the noise distribution is Gaussian. For non-Gaussian noises, such as impulsive noise or outliers, the MSE criterion is sensitive and may lead to large reconstruction errors. Unlike existing PSWF-based recovery methods, our proposed method employs the maximum correntropy criterion (MCC), which is independent of the noise distribution. The proposed method can reduce the impact of large and non-Gaussian noises. Experimental results on synthetic signals with various types of noise show that the proposed MCC-based signal recovery method is more robust against various noises than other existing methods.

  15. A survey of quality measures for gray-scale image compression

    NASA Technical Reports Server (NTRS)

    Eskicioglu, Ahmet M.; Fisher, Paul S.

    1993-01-01

    Although a variety of techniques are available today for gray-scale image compression, a complete evaluation of these techniques cannot be made as there is no single reliable objective criterion for measuring the error in compressed images. The traditional subjective criteria are burdensome, and usually inaccurate or inconsistent. On the other hand, being the most common objective criterion, the mean square error (MSE) does not have a good correlation with the viewer's response. It is now understood that in order to have a reliable quality measure, a representative model of the complex human visual system is required. In this paper, we survey and give a classification of the criteria for the evaluation of monochrome image quality.

  16. Data consistency criterion for selecting parameters for k-space-based reconstruction in parallel imaging.

    PubMed

    Nana, Roger; Hu, Xiaoping

    2010-01-01

    k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced as a goodness measure of k-space-based reconstruction in parallel imaging and demonstrated. Data consistency error (DCE) is calculated as the sum of squared difference between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction and the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.

  17. On a stronger-than-best property for best prediction

    NASA Astrophysics Data System (ADS)

    Teunissen, P. J. G.

    2008-03-01

    The minimum mean squared error (MMSE) criterion is a popular criterion for devising best predictors. In the case of linear predictors, it has the advantage that no further distributional assumptions need to be made, other than about the first- and second-order moments. In the spatial and Earth sciences, it is the best linear unbiased predictor (BLUP) that is used most often. Despite the fact that in this case only the first- and second-order moments need to be known, one often still makes statements about the complete distribution, in particular when statistical testing is involved. For such cases, one can do better than the BLUP, as shown in Teunissen (J Geod. doi: 10.1007/s00190-007-0140-6, 2006), and thus devise predictors that have a smaller MMSE than the BLUP. Hence, these predictors are to be preferred over the BLUP, if one really values the MMSE criterion. In the present contribution, we show, however, that the BLUP has an optimality property other than the MMSE property, provided that the distribution is Gaussian. It is shown that in the Gaussian case, the prediction error of the BLUP has the highest possible probability, among all linear unbiased predictors, of being bounded in the weighted squared norm sense. This is a stronger property than the often-advertised MMSE property of the BLUP.

  18. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.

  19. Choosing the Number of Clusters in K-Means Clustering

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    Steinley (2007) provided a lower bound for the sum-of-squares error criterion function used in K-means clustering. In this article, on the basis of the lower bound, the authors propose a method to distinguish between 1 cluster (i.e., a single distribution) versus more than 1 cluster. Additionally, conditional on indicating there are multiple…

  20. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future water quality parameter values have been estimated. It is observed that the predictive model is useful at the 95% confidence limits, and that the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, the predicted series is close to the original series, indicating a very close fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
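
    A minimal statsmodels sketch of this workflow on a synthetic stand-in series (the paper's model orders and Yamuna River data are not reproduced; plain BIC is printed in place of the normalized variant named in the abstract):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(6)
    y = np.cumsum(rng.standard_normal(120)) + 7.0   # stand-in monthly series (e.g., pH)

    res = ARIMA(y, order=(1, 1, 1)).fit()
    pred = res.predict(start=1, end=len(y) - 1)     # one-step-ahead in-sample predictions
    rmse = np.sqrt(np.mean((y[1:] - pred) ** 2))
    print(res.aic, res.bic, rmse)                   # information criteria and RMSE
    print(res.get_forecast(steps=12).conf_int(alpha=0.05))  # 95% confidence limits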

  1. A simple method for processing data with least square method

    NASA Astrophysics Data System (ADS)

    Wang, Chunyan; Qi, Liqun; Chen, Yongxiang; Pang, Guangning

    2017-08-01

    The least squares method is widely used in data processing and error estimation. It has become an essential technique for parameter estimation, data processing, regression analysis and experimental data fitting, and a standard tool for statistical inference. In measurement data analysis, fitting usually rests on the least squares principle, i.e., matrices are used to solve for the final estimate and to improve its accuracy. In this paper, a new solution method is presented that is based on algebraic computation and is relatively straightforward and easy to understand. The practicability of this method is illustrated by a concrete example.

  2. Methods of evaluating the effects of coding on SAR data

    NASA Technical Reports Server (NTRS)

    Dutkiewicz, Melanie; Cumming, Ian

    1993-01-01

    It is recognized that mean square error (MSE) is not a sufficient criterion for determining the acceptability of an image reconstructed from data that has been compressed and decompressed using an encoding algorithm. In the case of Synthetic Aperture Radar (SAR) data, it is also deemed insufficient to display the reconstructed image (and perhaps the error image) alongside the original and make a (subjective) judgment as to the quality of the reconstructed data. In this paper we suggest a number of additional evaluation criteria which we feel should be included as evaluation metrics in SAR data encoding experiments. These criteria have been specifically chosen to provide a means of ensuring that the important information in the SAR data is preserved. The paper also presents the results of an investigation into the effects of coding on SAR data fidelity when the coding is applied in (1) the signal data domain, and (2) the image domain. An analysis of the results highlights the shortcomings of the MSE criterion and shows which of the suggested additional criteria are most important.

  3. Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.

    PubMed

    Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia

    2017-06-01

    Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis with Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error on using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra and methane/toluene gas mixture spectra measured using FT-IR spectrometry and CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of SWLS is presented to tackle the bias error from other components. The SWLS without modification presents the lowest SEP in all cases, but not the lowest bias and RSS. The modification of SWLS reduced the bias and showed a lower RSS than CLS, especially for small components.

  4. Closed-loop carrier phase synchronization techniques motivated by likelihood functions

    NASA Technical Reports Server (NTRS)

    Tsou, H.; Hinedi, S.; Simon, M.

    1994-01-01

    This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed-loop bandwidth.

  5. Analysis of uncertainties and convergence of the statistical quantities in turbulent wall-bounded flows by means of a physically based criterion

    NASA Astrophysics Data System (ADS)

    Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu

    2018-04-01

    The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, showing a new physically based quantification of these errors, to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a recently proposed criterion by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor, and using the balance of the mean force equation. It also presents how the residual error evolves in time for a DNS of a plane channel flow, and the influence of the Reynolds number on its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data of similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.

  6. Seasonality and Trend Forecasting of Tuberculosis Prevalence Data in Eastern Cape, South Africa, Using a Hybrid Model.

    PubMed

    Azeez, Adeboye; Obaromi, Davies; Odeyemi, Akinwumi; Ndege, James; Muntabayi, Ruffin

    2016-07-26

    Tuberculosis (TB) is a deadly infectious disease caused by Mycobacterium tuberculosis. Tuberculosis, a chronic and highly infectious disease, is prevalent in almost every part of the globe. More than 95% of TB mortality occurs in low/middle income countries. In 2014, approximately 10 million people were diagnosed with active TB and two million died from the disease. In this study, our aim is to compare the predictive powers of the seasonal autoregressive integrated moving average (SARIMA) model and a hybrid SARIMA-neural network auto-regression (SARIMA-NNAR) model for TB incidence, and to analyse its seasonality in South Africa. TB incidence case data from January 2010 to December 2015 were extracted from the Eastern Cape Health facility report of the electronic Tuberculosis Register (ERT.Net). A SARIMA model and a combined SARIMA and neural network auto-regression (SARIMA-NNAR) model were used in analysing and predicting the TB data from 2010 to 2015. Simulation performance parameters of mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean percent error (MPE), mean absolute scaled error (MASE) and mean absolute percentage error (MAPE) were applied to assess which model gave the better prediction. Though practically both models could predict TB incidence, the combined model displayed better performance. For the combined model, the Akaike information criterion (AIC), second-order AIC (AICc) and Bayesian information criterion (BIC) are 288.56, 308.31 and 299.09 respectively, lower than the SARIMA model's corresponding values of 329.02, 327.20 and 341.99. The SARIMA-NNAR model forecast a slightly stronger increasing seasonal trend in TB incidence than the single model. The combined model thus gave a better TB incidence forecast, with a lower AICc. The model also indicates the need for resolute intervention to reduce infectious disease transmission with co-infection with HIV and other concomitant diseases, and also at festival peak periods.

  7. Assessing Fit and Dimensionality in Least Squares Metric Multidimensional Scaling Using Akaike's Information Criterion

    ERIC Educational Resources Information Center

    Ding, Cody S.; Davison, Mark L.

    2010-01-01

    Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…

  8. Validation of the Hong Kong Liver Cancer Staging System in Determining Prognosis of the North American Patients Following Intra-arterial Therapy.

    PubMed

    Sohn, Jae Ho; Duran, Rafael; Zhao, Yan; Fleckenstein, Florian; Chapiro, Julius; Sahu, Sonia; Schernthaner, Rüdiger E; Qian, Tianchen; Lee, Howard; Zhao, Li; Hamilton, James; Frangakis, Constantine; Lin, MingDe; Salem, Riad; Geschwind, Jean-Francois

    2017-05-01

    There is debate over the best way to stage hepatocellular carcinoma (HCC). We attempted to validate the prognostic and clinical utility of the recently developed Hong Kong Liver Cancer (HKLC) staging system, a hepatitis B-based model, and compared data with that from the Barcelona Clinic Liver Cancer (BCLC) staging system in a North American population that underwent intra-arterial therapy (IAT). We performed a retrospective analysis of data from 1009 patients with HCC who underwent IAT from 2000 through 2014. Most patients had hepatitis C or unresectable tumors; all patients underwent IAT, with or without resection, transplantation, and/or systemic chemotherapy. We calculated HCC stage for each patient using 5-stage HKLC (HKLC-5) and 9-stage HKLC (HKLC-9) system classifications, and the BCLC system. Survival information was collected up until the end of 2014 at which point living or unconfirmed patients were censored. We compared performance of the BCLC, HKLC-5, and HKLC-9 systems in predicting patient outcomes using Kaplan-Meier estimates, calibration plots, C statistic, Akaike information criterion, and the likelihood ratio test. Median overall survival time, calculated from first IAT until date of death or censorship, for the entire cohort (all stages) was 9.8 months. The BCLC and HKLC staging systems predicted patient survival times with significance (P < .001). HKLC-5 and HKLC-9 each demonstrated good calibration. The HKLC-5 system outperformed the BCLC system in predicting patient survival times (HKLC C = 0.71, Akaike information criterion = 6242; BCLC C = 0.64, Akaike information criterion = 6320), reducing error in predicting survival time (HKLC reduced error by 14%, BCLC reduced error by 12%), and homogeneity (HKLC chi-square = 201, P < .001; BCLC chi-square = 119, P < .001) and monotonicity (HKLC linear trend chi-square = 193, P < .001; BCLC linear trend chi-square = 111, P < .001). Small proportions of patients with HCC of stages IV or V, according to the HKLC system, survived for 6 months and 4 months, respectively. In a retrospective analysis of patients who underwent IAT for unresectable HCC, we found the HKLC-5 staging system to have the best combination of performances in survival separation, calibration, and discrimination; it consistently outperformed the BCLC system in predicting survival times of patients. The HKLC system identified patients with HCC of stages IV and V who are unlikely to benefit from IAT. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.

  9. Decomposition of the Mean Squared Error and NSE Performance Criteria: Implications for Improving Hydrological Modelling

    NASA Technical Reports Server (NTRS)

    Gupta, Hoshin V.; Kling, Harald; Yilmaz, Koray K.; Martinez-Baquero, Guillermo F.

    2009-01-01

    The mean squared error (MSE) and the related normalization, the Nash-Sutcliffe efficiency (NSE), are the two criteria most widely used for calibration and evaluation of hydrological models with observed data. Here, we present a diagnostically interesting decomposition of NSE (and hence MSE), which facilitates analysis of the relative importance of its different components in the context of hydrological modelling, and show how model calibration problems can arise due to interactions among these components. The analysis is illustrated by calibrating a simple conceptual precipitation-runoff model to daily data for a number of Austrian basins having a broad range of hydro-meteorological characteristics. Evaluation of the results clearly demonstrates the problems that can be associated with any calibration based on the NSE (or MSE) criterion. While we propose and test an alternative criterion that can help to reduce model calibration problems, the primary purpose of this study is not to present an improved measure of model performance. Instead, we seek to show that there are systematic problems inherent with any optimization based on formulations related to the MSE. The analysis and results have implications for the manner in which we calibrate and evaluate environmental models; we discuss these and suggest possible ways forward that may move us towards an improved and diagnostically meaningful approach to model performance evaluation and identification.
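
    The decomposition discussed here can be stated compactly: with r the linear correlation, alpha the ratio of simulated to observed standard deviation, and beta_n the bias normalized by the observed standard deviation, NSE = 2*alpha*r - alpha**2 - beta_n**2. A direct numerical check on synthetic data (not the Austrian basin data):

    import numpy as np

    def nse_and_decomposition(obs, sim):
        r = np.corrcoef(obs, sim)[0, 1]
        alpha = sim.std() / obs.std()
        beta_n = (sim.mean() - obs.mean()) / obs.std()
        nse = 1.0 - np.mean((sim - obs) ** 2) / obs.var()
        return nse, 2 * alpha * r - alpha ** 2 - beta_n ** 2   # identical values

    rng = np.random.default_rng(7)
    obs = rng.gamma(2.0, 2.0, size=365)                        # stand-in daily runoff
    sim = 0.9 * obs + 0.3 + 0.5 * rng.standard_normal(365)
    print(nse_and_decomposition(obs, sim))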

  10. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    PubMed

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

    Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC(0-t) (i.e., 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC(0-t) of saroglitazar. Only models with regression coefficients (R^2) > 0.90 were screened for further evaluation. The best R^2 model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlation between predicted and observed AUC(0-t) of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time point models achieved R^2 > 0.90. Among the various 3-concentration-time point models, only 4 equations passed the predefined criterion of R^2 > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R^2 = 0.9323) and 0.75, 2, and 8 hours (R^2 = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (predefined criterion) and correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC(0-t) prediction of saroglitazar. The same models, when applied to the AUC(0-t) prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error <30%, and correlation (r) was at least 0.9339 in the same pool of healthy subjects. A 3-concentration-time point limited sampling model predicts the exposure of saroglitazar (i.e., AUC(0-t)) within the predefined acceptable bias and imprecision limits. The same model was also used to predict AUC(0-∞). The same limited sampling model was found to predict the exposure of saroglitazar sulfoxide within the predefined criteria. This model can find utility during late-phase clinical development of saroglitazar in the patient population. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.

  11. Performance evaluation of the multiple-image optical compression and encryption method by increasing the number of target images

    NASA Astrophysics Data System (ADS)

    Aldossari, M.; Alfalou, A.; Brosseau, C.

    2017-08-01

    In an earlier study [Opt. Express 22, 22349-22368 (2014)], a compression and encryption method that simultaneously compresses and encrypts closely resembling images was proposed and validated. This multiple-image optical compression and encryption (MIOCE) method is based on a special fusion of the different target images' spectra in the spectral domain. To assess the capacity of the MIOCE method, we evaluate the influence of the number of target images. This analysis allows us to determine the performance limitations of the method. To achieve this goal, we use a criterion based on the root-mean-square (RMS) [Opt. Lett. 35, 1914-1916 (2010)] and the compression ratio to determine the spectral plane area. The different spectral areas are then merged in a single spectrum plane. By choosing specific areas, we can compress together 38 images instead of 26 with the classical MIOCE method. The quality of the reconstructed image is evaluated by making use of the mean-square-error (MSE) criterion.

  12. Controls of channel morphology and sediment concentration on flow resistance in a large sand-bed river: A case study of the lower Yellow River

    NASA Astrophysics Data System (ADS)

    Ma, Yuanxu; Huang, He Qing

    2016-07-01

    Accurate estimation of flow resistance is crucial for flood routing, flow discharge and velocity estimation, and engineering design. Various empirical and semiempirical flow resistance models have been developed during the past century; however, a universal flow resistance model for varying types of rivers has remained elusive to date. In this study, hydrometric data sets from six stations in the lower Yellow River during 1958-1959 are used to calibrate three empirical flow resistance models (Eqs. (5)-(7)) and evaluate their predictability. A group of statistical measures is used to evaluate the goodness of fit of these models, including root mean square error (RMSE), coefficient of determination (CD), the Nash coefficient (NA), mean relative error (MRE), mean symmetry error (MSE), percentage of data with a relative error ≤ 50% and ≤ 25% (P50, P25), and percentage of data with overestimated error (POE). Three model selection criteria are also employed to assess model predictability: the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and a modified model selection criterion (MSC). The results show that mean flow depth (d) and water surface slope (S) can only explain a small proportion of the variance in flow resistance. When channel width (w) and suspended sediment concentration (SSC) are involved, the new model (7) achieves better performance than the previous ones. The MRE of model (7) is generally < 20%, which is apparently better than that reported in previous studies. This model is validated using data sets from the corresponding stations during 1965-1966, and the results show larger uncertainties than in calibration. This probably results from a temporal shift of the dominant controls caused by channel change under a varying flow regime. With advances in Earth observation techniques, information about channel width, mean flow depth, and suspended sediment concentration can be effectively extracted from multisource satellite images. We expect that the empirical methods developed in this study can be used as an effective surrogate for estimating flow resistance in large sand-bed rivers like the lower Yellow River.

  13. Flat-fielding of Solar Hα Observations Based on the Maximum Correntropy Criterion

    NASA Astrophysics Data System (ADS)

    Xu, Gao-Gui; Zheng, Sheng; Lin, Gang-Hua; Wang, Xiao-Fan

    2016-08-01

    The flat-field CCD calibration method of Kuhn et al. (KLL) is an efficient method for flat-fielding. However, since it depends on minimizing the sum of squares error (SSE), its solution is sensitive to noise, especially non-Gaussian noise. In this paper, a new algorithm is proposed to determine the flat field. The idea is to change the criterion for the gain estimate from the SSE to maximum correntropy. The result of a test on simulated data demonstrates that our method has higher accuracy and faster convergence than KLL's and Chae's. The method is found to effectively suppress noise, especially typical non-Gaussian noise, and the computing time of our algorithm is the shortest.

  14. The Problems of Multiple Feedback Estimation.

    ERIC Educational Resources Information Center

    Bulcock, Jeffrey W.

    The use of two-stage least squares (2SLS) for the estimation of feedback linkages is inappropriate for nonorthogonal data sets because 2SLS is extremely sensitive to multicollinearity. It is argued that what is needed is use of a different estimating criterion than the least squares criterion. Theoretically the variance normalization criterion has…

  15. An improved partial least-squares regression method for Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Momenpour Tehran Monfared, Ali; Anis, Hanan

    2017-10-01

    It is known that the performance of partial least-squares (PLS) regression analysis can be improved using the backward variable selection method (BVSPLS). In this paper, we further improve BVSPLS based on a novel selection mechanism. The proposed method sorts the weighted regression coefficients and then evaluates the importance of each variable in the sorted list using the root mean square error of prediction (RMSEP) criterion in each iteration step. Our improved BVSPLS (IBVSPLS) method has been applied to leukemia and heparin data sets and led to an improvement in the limit of detection of Raman biosensing ranging from 10% to 43% compared to PLS. Our IBVSPLS was also compared to the jack-knifing (simpler) and genetic algorithm (more complex) methods. Our method was consistently better than the jack-knifing method and showed similar or better performance compared to the genetic algorithm.

  16. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.
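
    The one-tap MMSE-FDE weights referred to in this and the related DS-CDMA records can be sketched per frequency bin as w(k) = H*(k) / (|H(k)|^2 + 1/snr); a minimal illustration assuming perfect channel knowledge (spreading and despreading details are omitted):

    import numpy as np

    def mmse_fde(received_freq, H, snr):
        # One-tap MMSE equalization in the frequency domain
        w = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
        return received_freq * w

    rng = np.random.default_rng(8)
    H = (rng.standard_normal(64) + 1j * rng.standard_normal(64)) / np.sqrt(2.0)
    s = rng.choice([1.0, -1.0], size=64) + 0j       # frequency-domain symbols (stand-in)
    r = H * s + 0.1 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
    print(np.mean(np.abs(mmse_fde(r, H, snr=100.0) - s) ** 2))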

  17. Stabilized high-accuracy correction of ocular aberrations with liquid crystal on silicon spatial light modulator in adaptive optics retinal imaging system.

    PubMed

    Huang, Hongxin; Inoue, Takashi; Tanaka, Hiroshi

    2011-08-01

    We studied the long-term optical performance of an adaptive optics scanning laser ophthalmoscope that uses a liquid crystal on silicon spatial light modulator to correct ocular aberrations. The system achieved good compensation of aberrations while acquiring images of fine retinal structures, except during sudden eye movements. The residual wavefront aberrations collected over several minutes in several situations were statistically analyzed. The mean values of the root-mean-square residual wavefront errors were 23-30 nm, and for around 91-94% of the effective time the errors were below the Maréchal criterion for diffraction-limited imaging. The ability to axially shift the imaging plane to different retinal depths was also demonstrated.

  18. A methodology based on reduced complexity algorithm for system applications using microprocessors

    NASA Technical Reports Server (NTRS)

    Yan, T. Y.; Yao, K.

    1988-01-01

    The paper considers a methodology for the analysis and design of a linear system based on the minimum mean-square error criterion, incorporating a tapped delay line (TDL) in which all the full-precision multiplications are constrained to be powers of two. A linear equalizer for the dispersive, additive-noise channel is presented. This microprocessor implementation with optimized power-of-two TDL coefficients achieves system performance comparable to optimum linear equalization with full-precision multiplications for an input data rate of 300 baud.

  19. A weighted least squares estimation of the polynomial regression model on paddy production in the area of Kedah and Perlis

    NASA Astrophysics Data System (ADS)

    Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd

    2017-08-01

    The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation. In other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its minimum-variance property in the class of linear unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and would give highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate the paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that the factors affecting paddy production are the mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
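
    The weighted least squares estimator relied on here has the closed form beta = (X'WX)^(-1) X'Wy, with W a diagonal matrix of weights (commonly the inverse error variances); a generic sketch, not the study's actual paddy model:

    import numpy as np

    def wls(X, y, weights):
        # Solve (X'WX) beta = X'Wy without forming an explicit inverse
        Xw = X * weights[:, None]
        return np.linalg.solve(Xw.T @ X, Xw.T @ y)

    rng = np.random.default_rng(9)
    X = np.column_stack([np.ones(50), rng.uniform(0.0, 10.0, 50)])
    sigma = 0.2 + 0.1 * X[:, 1]                    # non-constant error variance
    y = X @ np.array([1.0, 2.0]) + sigma * rng.standard_normal(50)
    print(wls(X, y, weights=1.0 / sigma ** 2))     # close to [1.0, 2.0]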

  20. Validation of test-day models for genetic evaluation of dairy goats in Norway.

    PubMed

    Andonov, S; Ødegård, J; Boman, I A; Svendsen, M; Holme, I J; Adnøy, T; Vukovic, V; Klemetsdal, G

    2007-10-01

    Test-day data for daily milk yield and fat, protein, and lactose content were sampled from the years 1988 to 2003 in 17 flocks belonging to 2 genetically well-tied buck circles. In total, records from 2,111 to 2,215 goats for content traits and 2,371 goats for daily milk yield were included in the analysis, averaging 2.6 and 4.8 observations per goat for the 2 groups of traits, respectively. The data were analyzed by using 4 test-day models with different modeling of fixed effects. Model [0] (the reference model) contained a fixed effect of year-season of kidding with regression on Ali-Schaeffer polynomials nested within the year-season classes, and a random effect of flock test-day. In model [1], the lactation curve effect from model [0] was replaced by a fixed effect of days in milk (in 3-d periods), the same for all year-seasons of kidding. Models [2] and [3] were obtained from model [1] by removing the fixed year-season of kidding effect and considering the flock test-day effect as either fixed or random, respectively. The models were compared by using 2 criteria: mean-squared error of prediction and a test of bias affecting the genetic trend. The first criterion indicated a preference for model [3], whereas the second criterion preferred model [1]. Mean-squared error of prediction is based on model fit, whereas the second criterion tests the ability of the model to produce unbiased genetic evaluation (i.e., its capability of separating environmental and genetic time trends). Thus, a fixed structure with year (year, year-season, or possibly flock-year) was indicated to appropriately separate time trends. Heritability estimates for daily milk yield and milk content were 0.26 and 0.24 to 0.27, respectively.

  1. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, based on the concept of selective update, in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking, via the mean square error (MSE), whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter by using a state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. On the other hand, as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
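
    For context, the core affine projection update in its standard regularized form (the paper's input-vector-selection and state-decision procedures are not reproduced) is:

    import numpy as np

    def apa_update(w, X, d, mu=0.5, delta=1e-4):
        # X: (L, K) matrix whose K columns are the selected input vectors
        # d: (K,) desired outputs; w: (L,) current filter coefficients
        e = d - X.T @ w                              # a-priori errors
        G = X.T @ X + delta * np.eye(X.shape[1])     # regularized Gram matrix
        return w + mu * X @ np.linalg.solve(G, e)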

  2. MIMO equalization with adaptive step size for few-mode fiber transmission systems.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J

    2014-01-13

    Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.

  3. Robust estimation of thermodynamic parameters (ΔH, ΔS and ΔCp) for prediction of retention time in gas chromatography - Part II (Application).

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-12-18

    For this work, an analysis of parameter estimation for the retention factor in a GC model was performed, considering two different criteria: the sum of squared errors and the maximum absolute error; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, the specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) in the parameter estimation. The reduction was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Statistical variability comparison in MODIS and AERONET derived aerosol optical depth over Indo-Gangetic Plains using time series modeling.

    PubMed

    Soni, Kirti; Parmar, Kulwinder Singh; Kapoor, Sangeeta; Kumar, Nishant

    2016-05-15

    Many studies of Aerosol Optical Depth (AOD) have been based on data derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), but the accuracy of satellite data relative to ground data derived from the AErosol RObotic NETwork (AERONET) has always been questionable. To address this, a comparative study of comprehensive ground-based and satellite data for the period 2001-2012 is modeled. A time series model is used for accurate prediction of AOD, and statistical variability is compared to assess the performance of the model in both cases. Root mean square error (RMSE), mean absolute percentage error (MAPE), stationary R-squared, R-squared, maximum absolute percentage error, normalized Bayesian information criterion (NBIC) and Ljung-Box statistics are used to check the applicability and validity of the developed ARIMA models, revealing significant precision in model performance. It was found that AOD can be predicted by statistical modeling using time series of past MODIS and AERONET data as input. Moreover, the results show that MODIS values can be estimated from AERONET values by adding 0.251627 ± 0.133589, and vice versa by subtracting. From the forecast of AODs for the next four years (2013-2017) using the developed ARIMA model, it is concluded that the forecasted ground AOD shows an increasing trend. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Environmental fate model for ultra-low-volume insecticide applications used for adult mosquito management

    USGS Publications Warehouse

    Schleier, Jerome J.; Peterson, Robert K.D.; Irvine, Kathryn M.; Marshall, Lucy M.; Weaver, David K.; Preftakes, Collin J.

    2012-01-01

    One of the more effective ways of managing high densities of adult mosquitoes that vector human and animal pathogens is ultra-low-volume (ULV) aerosol application of insecticides. The U.S. Environmental Protection Agency performs its human and ecological risk assessments using models and exposure assumptions that are not validated for ULV insecticide applications. Currently, there is no validated model that can accurately predict the deposition of insecticides applied using ULV technology for adult mosquito management. In addition, little is known about the deposition and drift of small droplets, such as those used in ULV applications, under the conditions encountered during such applications. The objective of this study was to perform field studies to measure environmental concentrations of insecticides and to develop a validated model to predict the deposition of ULV insecticides. The final regression model was selected by minimizing the Bayesian Information Criterion, and its prediction performance was evaluated using k-fold cross validation. The coefficients for formulation density and for the density-CMD interaction were the largest in the model. The results showed that as the density of the formulation decreases, deposition increases. The interaction of density and CMD showed that higher-density formulations and larger droplets resulted in greater deposition. These results are supported by the aerosol physics literature. The k-fold cross validation demonstrated that the mean square error of the selected regression model is not biased, and the mean square error and mean square prediction error indicated good predictive ability.
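
    A generic version of this workflow, BIC-driven subset selection followed by k-fold cross-validation of the chosen regression, might look like the sketch below. The exhaustive subset search and variable names are assumptions; scikit-learn supplies the regression and cross-validation plumbing.

    ```python
    import numpy as np
    from itertools import combinations
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold, cross_val_score

    def bic_linear(X, y):
        """BIC of an ordinary least-squares fit (Gaussian likelihood form)."""
        n, p = X.shape
        resid = y - LinearRegression().fit(X, y).predict(X)
        sse = np.sum(resid ** 2)
        return n * np.log(sse / n) + (p + 1) * np.log(n)

    def select_and_validate(X, y, names, k=5):
        """Pick the predictor subset minimizing BIC, then report k-fold CV MSE."""
        bic, subset = min((bic_linear(X[:, list(s)], y), s)
                          for r in range(1, X.shape[1] + 1)
                          for s in combinations(range(X.shape[1]), r))
        cv_mse = -cross_val_score(LinearRegression(), X[:, list(subset)], y,
                                  scoring="neg_mean_squared_error",
                                  cv=KFold(k, shuffle=True, random_state=0)).mean()
        return [names[i] for i in subset], bic, cv_mse
    ```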

  6. Optimized System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Longman, Richard W.

    1999-01-01

    In system identification, one usually cares most about finding a model whose outputs are as close as possible to the true system outputs when the same input is applied to both. However, most system identification algorithms do not minimize this output error. Often they minimize model equation error instead, as in typical least-squares fits using a finite-difference model, and it is seen here that this distinction is significant. Here, we develop a set of system identification algorithms that minimize output error for multi-input/multi-output and multi-input/single-output systems. This is done with sequential quadratic programming iterations on the nonlinear least-squares problems, with an eigendecomposition to handle indefinite second partials. This optimization minimizes a nonlinear function of many variables, and hence can converge to local minima. To handle this problem, we start the iterations from the OKID (Observer/Kalman Identification) algorithm result. Not only has OKID proved very effective in practice, it minimizes an output error of an observer which has the property that as the data set gets large, it converges to minimizing the criterion of interest here. Hence, it is a particularly good starting point for the nonlinear iterations here. Examples show that the methods developed here eliminate the bias that is often observed using any system identification methods of either over-estimating or under-estimating the damping of vibration modes in lightly damped structures.

  7. New equations improve NIR prediction of body fat among high school wrestlers.

    PubMed

    Oppliger, R A; Clark, R R; Nielsen, D H

    2000-09-01

    Methodologic study to derive prediction equations for percent body fat (%BF). To develop valid regression equations using NIR to assess body composition among high school wrestlers. Clinicians need a portable, fast, and simple field method for assessing body composition among wrestlers. Near-infrared photospectrometry (NIR) meets these criteria, but its efficacy has been challenged. Subjects were 150 high school wrestlers from 2 Midwestern states with mean ± SD age of 16.3 ± 1.1 yrs, weight of 69.5 ± 11.7 kg, and height of 174.4 ± 7.0 cm. Relative body fatness (%BF) determined from hydrostatic weighing was the criterion measure, and NIR optical density (OD) measurements at multiple sites, plus height, weight, and body mass index (BMI), were the predictor variables. Four equations were developed with multiple R²s that varied from .530 to .693; root mean squared errors varied from 2.8% BF to 3.4% BF, and prediction errors varied from 2.9% BF to 3.1% BF. The best equation used OD measurements at the biceps, triceps, and thigh sites, BMI, and age. The root mean squared error and prediction error for all 4 equations were equal to or smaller than those of a skinfold equation commonly used with wrestlers. The results substantiate the validity of NIR for predicting %BF among high school wrestlers. Cross-validation of these equations is warranted.

  8. A stopping criterion for the iterative solution of partial differential equations

    NASA Astrophysics Data System (ADS)

    Rao, Kaustubh; Malan, Paul; Perot, J. Blair

    2018-01-01

    A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating it reverts to a less accurate but more robust, low-cost singular value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
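
    A minimal version of the underlying idea, inferring the remaining error from the geometric decay of successive solution changes, is sketched below. It assumes clean linear convergence and omits the paper's fallback to a singular-value estimate for noisy or stagnating iterations.

    ```python
    import numpy as np

    def estimated_error(deltas):
        """Estimate the remaining solution error from prior solution changes.

        Assumes roughly linear convergence: if ||x_k - x_{k-1}|| shrinks by a
        factor rho each iteration, the distance to the limit is approximately
        ||x_k - x_{k-1}|| * rho / (1 - rho).
        """
        d_prev = np.linalg.norm(deltas[-2])
        d_curr = np.linalg.norm(deltas[-1])
        rho = d_curr / max(d_prev, 1e-300)
        if rho >= 1.0:                 # not converging; estimate unusable
            return np.inf
        return d_curr * rho / (1.0 - rho)

    # Usage: stop iterating when estimated_error(...) drops below the tolerance.
    ```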

  9. Forecasting typhoid fever incidence in the Cordillera administrative region in the Philippines using seasonal ARIMA models

    NASA Astrophysics Data System (ADS)

    Cawiding, Olive R.; Natividad, Gina May R.; Bato, Crisostomo V.; Addawe, Rizavel C.

    2017-11-01

    The prevalence of typhoid fever in developing countries such as the Philippines calls for accurate forecasting of the disease, which would be of great assistance in strategic disease prevention. This paper presents the development of models that predict the behavior of typhoid fever incidence based on the monthly incidence in the provinces of the Cordillera Administrative Region from 2010 to 2015, using univariate time series analysis. The data were obtained from the Cordillera Office of the Department of Health (DOH-CAR). Seasonal autoregressive integrated moving average (SARIMA) models were used to incorporate the seasonality of the data. A comparison of the obtained models revealed that the SARIMA(1,1,7)(0,0,1)12 model with a fixed coefficient at the seventh lag produces the smallest root mean square error (RMSE), mean absolute error (MAE), Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC). The model suggested that for the year 2016 the number of cases would increase from July to September and drop in December. This was then validated using the data collected from January 2016 to December 2016.
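
    With statsmodels, the reported SARIMA structure can be fitted along the following lines. This is a sketch that omits the paper's fixed seventh-lag coefficient; `cases` is an assumed monthly incidence series.

    ```python
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    def fit_sarima(cases: pd.Series):
        """Fit a SARIMA(1,1,7)(0,0,1)_12 model and forecast one year ahead."""
        model = SARIMAX(cases, order=(1, 1, 7), seasonal_order=(0, 0, 1, 12))
        result = model.fit(disp=False)
        print(result.aic, result.bic)      # compare candidate models
        return result.forecast(steps=12)   # 12-month-ahead forecast
    ```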

  10. Prediction of fracture initiation in square cup drawing of DP980 using an anisotropic ductile fracture criterion

    NASA Astrophysics Data System (ADS)

    Park, N.; Huh, H.; Yoon, J. W.

    2017-09-01

    This paper deals with the prediction of fracture initiation in square cup drawing of DP980 steel sheet with a thickness of 1.2 mm. To account for the influence of material anisotropy on fracture initiation, an uncoupled anisotropic ductile fracture criterion is developed based on the Lou-Huh ductile fracture criterion. Tensile tests are carried out at loading directions of 0°, 45°, and 90° to the rolling direction of the sheet using various specimen geometries, including pure shear, dog-bone, and flat grooved specimens, so as to calibrate the parameters of the proposed fracture criterion. The equivalent plastic strain distribution on the specimen surface is computed using the Digital Image Correlation (DIC) method until a surface crack initiates. The proposed fracture criterion is implemented into the commercial finite element code ABAQUS/Explicit through a Vectorized User-defined MATerial (VUMAT) subroutine that features the non-associated flow rule. Simulation results of the square cup drawing test clearly show that the proposed criterion is capable of predicting fracture initiation with sufficient accuracy while accounting for the material anisotropy.

  11. Statistically Self-Consistent and Accurate Errors for SuperDARN Data

    NASA Astrophysics Data System (ADS)

    Reimer, A. S.; Hussey, G. C.; McWilliams, K. A.

    2018-01-01

    The Super Dual Auroral Radar Network (SuperDARN) fitted data products (e.g., spectral width and velocity) are produced using weighted least squares fitting. We present a new First-Principles Fitting Methodology (FPFM) that utilizes the first-principles approach of Reimer et al. (2016) to estimate the variance of the real and imaginary components of the mean autocorrelation function (ACF) lags. ACFs fitted by the FPFM do not rely on ad hoc or empirical criteria. Currently, the weighting used to fit the ACF lags is derived from ad hoc estimates of the ACF lag variance, and an overcautious lag filtering criterion sometimes discards data that contain useful information. In low signal-to-noise (SNR) and/or low signal-to-clutter regimes, the ad hoc variance and empirical criterion lead to underestimated errors for the fitted parameters because the relative contributions of signal, noise, and clutter to the ACF variance are not taken into consideration. The FPFM variance expressions include the contributions of signal, noise, and clutter. The clutter is estimated using the maximal power-based self-clutter estimator derived by Reimer and Hussey (2015). The FPFM was successfully implemented and tested using synthetic ACFs generated with the radar data simulator of Ribeiro, Ponomarenko, et al. (2013). The fitted parameters and fitted-parameter errors produced by the FPFM are compared with those of the current SuperDARN fitting software, FITACF. By self-consistent statistical analysis, the FPFM produces reliable quantitative measures of the errors of the fitted parameters. For an SNR in excess of 3 dB and velocity error below 100 m/s, the FPFM produces 52% more data points than FITACF.

  12. Digital transceiver design for two-way AF-MIMO relay systems with imperfect CSI

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Chang; Chou, Yu-Fei; Chen, Kui-He

    2013-09-01

    In this paper, combined optimization of the terminal precoders/equalizers and the single-relay precoder is proposed for an amplify-and-forward (AF) multiple-input multiple-output (MIMO) two-way single-relay system with correlated channel uncertainties. Both the terminal transceivers and the relay precoding matrix are designed based on the minimum mean square error (MMSE) criterion when the terminals are unable to completely cancel self-interference due to imperfect correlated channel state information (CSI). This robust joint optimization problem of beamforming and precoding matrices under power constraints is neither concave nor convex, so a nonlinear matrix-form conjugate gradient (MCG) algorithm is applied to explore locally optimal solutions. Simulation results show that the robust transceiver design effectively overcomes the bit-error-rate (BER) loss caused by correlated channel uncertainties and residual self-interference.

  13. Pilot-Assisted Channel Estimation for Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Shima, Tomoyuki; Tomeba, Hiromichi; Adachi, Fumiyuki

    Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of time-domain spreading and orthogonal frequency division multiplexing (OFDM). In orthogonal MC DS-CDMA, the frequency diversity gain can be obtained by applying frequency-domain equalization (FDE) based on minimum mean square error (MMSE) criterion to a block of OFDM symbols and can improve the bit error rate (BER) performance in a severe frequency-selective fading channel. FDE requires an accurate estimate of the channel gain. The channel gain can be estimated by removing the pilot modulation in the frequency domain. In this paper, we propose a pilot-assisted channel estimation suitable for orthogonal MC DS-CDMA with FDE and evaluate, by computer simulation, the BER performance in a frequency-selective Rayleigh fading channel.
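
    The MMSE-FDE step referred to above is, in its textbook one-tap form, a per-subcarrier Wiener weight. The sketch below shows that standard form only, leaving out the spreading/despreading and the pilot-assisted channel estimation specific to orthogonal MC DS-CDMA.

    ```python
    import numpy as np

    def mmse_fde(received_block, channel_freq, noise_var, signal_power=1.0):
        """One-tap MMSE frequency-domain equalization of one received block.

        Standard textbook form: W_k = conj(H_k) / (|H_k|^2 + noise_var / E_s),
        where `channel_freq` holds the per-subcarrier channel gains (e.g.,
        from a pilot-assisted estimate).
        """
        R = np.fft.fft(received_block)
        W = np.conj(channel_freq) / (np.abs(channel_freq) ** 2
                                     + noise_var / signal_power)
        return np.fft.ifft(W * R)
    ```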

  14. A prototype upper-atmospheric data assimilation scheme based on optimal interpolation: 2. Numerical experiments

    NASA Astrophysics Data System (ADS)

    Akmaev, R. A.

    1999-04-01

    In Part 1 of this work (Akmaev, 1999), an overview is presented of the theory of optimal interpolation (OI) (Gandin, 1963) and related techniques of data assimilation based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995). The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and obtain estimates of the true state, optimal in some sense, from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing that suppresses the noise, i.e., the spectral components that are presumably not present in the true signal. Usually the criterion of optimality is minimum variance of the expected errors, and the whole approach may be considered constrained least squares, or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating such additional information are potentially superior to techniques that have no access to it, such as conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995).
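
    The linear optimal estimation update at the heart of OI can be written in a few lines. The sketch below is the standard minimum-variance analysis formula, not anything specific to the upper-atmospheric scheme of this paper.

    ```python
    import numpy as np

    def optimal_interpolation(xb, B, y, H, R):
        """Minimum-variance (optimal interpolation) analysis update.

        xb : background (a priori) state estimate
        B  : background error covariance
        y  : observations; H : observation operator; R : observation error cov.
        Returns xa = xb + K (y - H xb) with optimal gain
        K = B H^T (H B H^T + R)^(-1).
        """
        S = H @ B @ H.T + R
        K = B @ H.T @ np.linalg.inv(S)
        return xb + K @ (y - H @ xb)
    ```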

  15. Volatilization of organic compounds from streams

    USGS Publications Warehouse

    Rathburn, R.E.; Tai, D.Y.

    1982-01-01

    Mass-transfer coefficients for the volatilization of ethylene and propane were correlated with the hydraulic and geometric properties of seven streams, and predictive equations were developed. The equations were evaluated using a normalized root-mean-square error as the criterion of comparison. The two best equations were a two-variable equation containing the energy dissipated per unit mass per unit time and the average depth of flow and a three-variable equation containing the average velocity, the average depth of flow, and the slope of the stream. Procedures for adjusting the ethylene and propane coefficients for other organic compounds were evaluated. These procedures are based on molecular diffusivity, molecular diameter, or molecular weight. Because of limited data, none of these procedures have been extensively verified. Therefore, until additional data become available, it is suggested that the mass-transfer coefficient be assumed to be inversely proportional to the square root of the molecular weight.
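
    The molecular-weight adjustment suggested at the end of the abstract translates directly into code. The reference coefficient and molecular weights in the example below are placeholders, not values from the study.

    ```python
    def adjust_mass_transfer(k_ref, mw_ref, mw_target):
        """Adjust a measured mass-transfer coefficient to another compound,
        assuming (as the paper suggests, pending better data) that the
        coefficient is inversely proportional to the square root of the
        molecular weight."""
        return k_ref * (mw_ref / mw_target) ** 0.5

    # Example: scale a propane coefficient (MW 44.1) to benzene (MW 78.1).
    k_benzene = adjust_mass_transfer(k_ref=0.30, mw_ref=44.1, mw_target=78.1)
    ```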

  16. Smoothness of In vivo Spectral Baseline Determined by Mean Squared Error

    PubMed Central

    Zhang, Yan; Shen, Jun

    2013-01-01

    Purpose: A nonparametric smooth line is usually added to the spectral model to account for background signals in in vivo magnetic resonance spectroscopy (MRS). The assumed smoothness of the baseline significantly influences quantitative spectral fitting. In this paper, a method is proposed to minimize baseline influences on estimated spectral parameters. Methods: The nonparametric baseline function with a given smoothness was treated as a function of the spectral parameters, and its uncertainty was measured by the root-mean-squared error (RMSE). The proposed method was demonstrated with a simulated spectrum and with in vivo spectra of both short echo time (TE) and averaged echo times. The estimated in vivo baselines were compared with metabolite-nulled spectra and with LCModel-estimated baselines. The accuracies of the estimated baseline and metabolite concentrations were further verified by cross-validation. Results: An optimal smoothness condition was found that led to the minimal baseline RMSE; under this condition, the best fit was balanced against minimal baseline influence on metabolite concentration estimates. Conclusion: The baseline RMSE can be used to indicate estimated baseline uncertainty and serves as a criterion for determining the baseline smoothness of in vivo MRS. PMID:24259436

  17. Criterion for evaluating the predictive ability of nonlinear regression models without cross-validation.

    PubMed

    Kaneko, Hiromasa; Funatsu, Kimito

    2013-09-23

    We propose predictive performance criteria for nonlinear regression models without cross-validation. The proposed criteria are the determination coefficient and the root-mean-square error for the midpoints between k-nearest-neighbor data points. These criteria can be used to evaluate predictive ability after the regression models are updated, whereas cross-validation cannot be performed in such a situation. The proposed method is effective and helpful in handling big data when cross-validation cannot be applied. By analyzing data from numerical simulations and quantitative structural relationships, we confirm that the proposed criteria enable the predictive ability of the nonlinear regression models to be appropriately quantified.

  18. A negentropy minimization approach to adaptive equalization for digital communication systems.

    PubMed

    Choi, Sooyong; Lee, Te-Won

    2004-07-01

    In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy, obtained using nonpolynomial expansions of the estimation error, as a new performance criterion to improve on a linear equalizer based on minimizing the mean squared error (MMSE). Negentropy includes higher-order statistical information, and its minimization provides improved convergence, performance, and accuracy compared to traditional criteria such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and another one, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the non-MMSE solution has characteristics similar to those of the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.

  19. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  20. Discriminability limits in spatio-temporal stereo block matching.

    PubMed

    Jain, Ankit K; Nguyen, Truong Q

    2014-05-01

    Disparity estimation is a fundamental task in stereo imaging and is a well-studied problem. Recently, methods have been adapted to the video domain where motion is used as a matching criterion to help disambiguate spatially similar candidates. In this paper, we analyze the validity of the underlying assumptions of spatio-temporal disparity estimation, and determine the extent to which motion aids the matching process. By analyzing the error signal for spatio-temporal block matching under the sum of squared differences criterion and treating motion as a stochastic process, we determine the probability of a false match as a function of image features, motion distribution, image noise, and number of frames in the spatio-temporal patch. This performance quantification provides insight into when spatio-temporal matching is most beneficial in terms of the scene and motion, and can be used as a guide to select parameters for stereo matching algorithms. We validate our results through simulation and experiments on stereo video.
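
    A bare-bones version of the matching rule being analyzed, the sum of squared differences over a spatio-temporal patch minimized across candidate disparities, is sketched below; the array layout and parameter values are assumptions.

    ```python
    import numpy as np

    def best_disparity(left, right, row, col, block=8, max_disp=64):
        """Pick the disparity minimizing the sum of squared differences (SSD).

        `left` and `right` are (frames, height, width) arrays, so the block
        is spatio-temporal: squared differences are accumulated over all
        frames of the patch as well as its spatial extent. The caller must
        ensure the reference block lies inside the image.
        """
        ref = left[:, row:row + block, col:col + block]
        ssd = [np.sum((ref - right[:, row:row + block,
                                   col - d:col - d + block]) ** 2)
               for d in range(min(max_disp, col) + 1)]
        return int(np.argmin(ssd))
    ```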

  1. A novel SURE-based criterion for parametric PSF estimation.

    PubMed

    Xue, Feng; Blu, Thierry

    2015-02-01

    We propose an unbiased estimate of a filtered version of the mean squared error, the blur-SURE (Stein's unbiased risk estimate), as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs involving a scaling factor that controls the blur size; a typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which in turn give a restoration quality very similar to that obtained with the exact PSF when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. The highly competitive results outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.

  2. Quantitative comparison of the absorption spectra of the gas mixtures in analogy to the criterion of Pearson

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Kuzmin, D. A.; Sandykova, E. A.; Shapovalov, A. V.

    2015-11-01

    An approach to reducing the space of absorption spectra, based on an original criterion for profile analysis of the spectra, is proposed. The criterion goes back to Pearson's well-known chi-square statistic and allows the differences between spectral curves to be quantified.
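
    A plausible concrete reading of such a criterion is a Pearson-style chi-square distance between normalized spectra; the symmetric variant below is an assumed stand-in for the paper's exact expression.

    ```python
    import numpy as np

    def chi_square_distance(spectrum_a, spectrum_b):
        """Pearson-style chi-square distance between two absorption spectra.

        The spectra are normalized to unit area first; the symmetric form
        sum((a - b)^2 / (a + b)) is one common variant of Pearson's
        statistic (an assumption, not the paper's exact criterion).
        """
        a = spectrum_a / np.sum(spectrum_a)
        b = spectrum_b / np.sum(spectrum_b)
        mask = (a + b) > 0             # avoid division by zero
        return np.sum((a[mask] - b[mask]) ** 2 / (a[mask] + b[mask]))
    ```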

  3. Joint Frequency-Domain Equalization and Despreading for Multi-Code DS-CDMA Using Cyclic Delay Transmit Diversity

    NASA Astrophysics Data System (ADS)

    Yamamoto, Tetsuya; Takeda, Kazuki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide a better bit error rate (BER) performance than rake combining. To further improve the BER performance, cyclic delay transmit diversity (CDTD) can be used. CDTD simultaneously transmits the same signal from different antennas after adding different cyclic delays to increase the number of equivalent propagation paths. Although a joint use of CDTD and MMSE-FDE for direct sequence code division multiple access (DS-CDMA) achieves larger frequency diversity gain, the BER performance improvement is limited by the residual inter-chip interference (ICI) after FDE. In this paper, we propose joint FDE and despreading for DS-CDMA using CDTD. Equalization and despreading are simultaneously performed in the frequency-domain to suppress the residual ICI after FDE. A theoretical conditional BER analysis is presented for the given channel condition. The BER analysis is confirmed by computer simulation.

  4. Robust pupil center detection using a curvature algorithm

    NASA Technical Reports Server (NTRS)

    Zhu, D.; Moore, S. T.; Raphan, T.; Wall, C. C. (Principal Investigator)

    1999-01-01

    Determining the pupil center is fundamental for calculating eye orientation in video-based systems. Existing techniques are error prone and not robust because eyelids, eyelashes, corneal reflections or shadows in many instances occlude the pupil. We have developed a new algorithm which utilizes curvature characteristics of the pupil boundary to eliminate these artifacts. Pupil center is computed based solely on points related to the pupil boundary. For each boundary point, a curvature value is computed. Occlusion of the boundary induces characteristic peaks in the curvature function. Curvature values for normal pupil sizes were determined and a threshold was found which together with heuristics discriminated normal from abnormal curvature. Remaining boundary points were fit with an ellipse using a least squares error criterion. The center of the ellipse is an estimate of the pupil center. This technique is robust and accurately estimates pupil center with less than 40% of the pupil boundary points visible.
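
    To illustrate the final least-squares step, the sketch below fits a circle rather than the paper's ellipse to the retained boundary points; the linearized circle fit keeps the example short while exercising the same least-squares error criterion.

    ```python
    import numpy as np

    def circle_center_least_squares(xs, ys):
        """Least-squares circle fit to retained pupil-boundary points.

        Writing the circle as x^2 + y^2 = 2*a*x + 2*b*y + c gives a linear
        system in (a, b, c); (a, b) is the estimated pupil center. `xs` and
        `ys` are NumPy arrays of boundary coordinates.
        """
        A = np.column_stack([2 * xs, 2 * ys, np.ones_like(xs)])
        rhs = xs ** 2 + ys ** 2
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return a, b
    ```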

  5. An Empirical Model Building Criterion Based on Prediction with Applications in Parametric Cost Estimation.

    DTIC Science & Technology

    1980-08-01

    If the mean of the response variable is denoted by Ȳ, the total sum of squares of deviations from that mean is defined by SSTO = Σ_{i=1}^{n} (Y_i − Ȳ)² (2.6), and the regression sum of squares by SSR = SSTO − SSE (2.7). A selection criterion is a rule according to which a certain model out of the 2^p possible models is labeled "best" ... discussed next. 1. The R² Criterion. The coefficient of determination is defined by R² = 1 − SSE/SSTO (2.8). It is clear that R² is the proportion of ...

  6. The evaluation of different forest structural indices to predict the stand aboveground biomass of even-aged Scotch pine (Pinus sylvestris L.) forests in Kunduz, Northern Turkey.

    PubMed

    Ercanli, İlker; Kahriman, Aydın

    2015-03-01

    We assessed the effect of stand structural diversity, including the Shannon, improved Shannon, Simpson, McIntosh, Margalef, and Berger-Parker indices, on stand aboveground biomass (AGB) and developed statistical prediction models for stand AGB values incorporating stand structural diversity indices and several stand attributes. The AGB prediction model including only stand attributes accounted for 85% of the total variance in AGB (R²), with an Akaike's information criterion (AIC) of 807.2407, Bayesian information criterion (BIC) of 809.5397, Schwarz Bayesian criterion (SBC) of 818.0426, and root mean square error (RMSE) of 38.529 Mg. After inclusion of stand structural diversity in the model structure, considerable improvement in statistical accuracy was observed: the model accounted for 97.5% of the total variance in AGB, with an AIC of 614.1819, BIC of 617.1242, SBC of 633.0853, and RMSE of 15.8153 Mg. The predictive fitting results indicate that some indices describing stand structural diversity can be employed as significant independent variables to predict the AGB production of Scotch pine stands. Further, including the stand diversity indices in the AGB prediction model together with the stand attributes provided important predictive contributions in estimating the total variance in AGB.
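
    For reference, fit statistics like those quoted above can be computed from a fitted model's residuals as in the sketch below; the Gaussian least-squares forms of AIC and BIC are used, and different packages add different constants, so absolute values are only comparable within one convention.

    ```python
    import numpy as np

    def fit_statistics(y, y_hat, n_params):
        """Gaussian least-squares AIC, BIC and RMSE for a fitted model.

        Uses AIC = n*ln(SSE/n) + 2k and BIC = n*ln(SSE/n) + k*ln(n).
        """
        n = len(y)
        sse = float(np.sum((np.asarray(y) - np.asarray(y_hat)) ** 2))
        aic = n * np.log(sse / n) + 2 * n_params
        bic = n * np.log(sse / n) + n_params * np.log(n)
        return aic, bic, np.sqrt(sse / n)
    ```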

  7. Natural learning in NLDA networks.

    PubMed

    González, Ana; Dorronsoro, José R

    2007-07-01

    Nonlinear Discriminant Analysis (NLDA) networks combine a standard Multilayer Perceptron (MLP) transfer function with the minimization of a Fisher analysis criterion. In this work we define natural-like gradients for NLDA network training. Instead of a more principled approach, which would require the definition of an appropriate Riemannian structure on the NLDA weight space, we follow a simpler procedure based on the observation that the gradient of the NLDA criterion function J can be written as the expectation ∇J(W) = E[Z(X, W)] of a certain random vector Z; we then define I = E[Z(X, W) Z(X, W)ᵀ] as the Fisher information matrix in this case. This definition of I formally coincides with that of the information matrix for the MLP or other square error functions; the NLDA criterion J, however, does not have this structure. Although very simple, the proposed approach shows much faster convergence than standard gradient descent, even when its higher per-iteration cost is taken into account. While the faster convergence of natural MLP batch training can also be explained in terms of its relationship with the Gauss-Newton minimization method, this is not the case for NLDA training, as we show analytically and numerically that the Hessian and information matrices differ.

  8. LS Channel Estimation and Signal Separation for UHF RFID Tag Collision Recovery on the Physical Layer.

    PubMed

    Duan, Hanjun; Wu, Haifeng; Zeng, Yu; Chen, Yuebin

    2016-03-26

    In a passive ultra-high frequency (UHF) radio-frequency identification (RFID) system, tag collision is generally resolved on the medium access control (MAC) layer. However, some collided tag signals can be recovered on the physical (PHY) layer, thereby enhancing the identification efficiency of the RFID system. For recovery on the PHY layer, channel estimation is a critical issue: good channel estimation helps recover the collided signals. Existing channel estimates work well for two collided tags, but when the number of collided tags exceeds two, they suffer larger estimation errors. In this paper, we propose a novel channel estimate for the UHF RFID system. It adopts an orthogonal matrix based on the preamble information known to the reader and applies a minimum-mean-square-error (MMSE) criterion to estimate the channels. From the estimated channel, the collided signals can be accurately separated and recovered. Numerical results show that the proposed estimate has lower estimation errors and higher separation efficiency than the existing estimates.

  9. Road traffic accidents prediction modelling: An analysis of Anambra State, Nigeria.

    PubMed

    Ihueze, Chukwutoo C; Onwurah, Uchendu O

    2018-03-01

    One of the major problems in the world today is the rate of road traffic crashes and deaths on our roads, and the majority of these deaths occur in low- and middle-income countries, including Nigeria. This study analyzed road traffic crashes in Anambra State, Nigeria, with the intention of developing accurate predictive models for forecasting crash frequency in the state using autoregressive integrated moving average (ARIMA) and autoregressive integrated moving average with explanatory variables (ARIMAX) modelling techniques. The results showed that the ARIMAX model outperformed the ARIMA(1,1,1) model when their performances were compared using the Bayesian information criterion, mean absolute percentage error, and root mean square error (lower is better), and the coefficient of determination (R-squared; higher is better) as accuracy measures. The findings reveal that incorporating human, vehicle, and environment-related factors in time series analysis of crash data produces a more robust predictive model than using aggregated crash counts alone. This study contributes to the body of knowledge on road traffic safety and provides an approach to forecasting that accounts for human, vehicle, and environmental factors. The recommendations made in this study, if applied, will help reduce the number of road traffic crashes in Nigeria. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. 10 CFR 140.84 - Criterion I-Substantial discharge of radioactive material or substantial radiation levels offsite.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Commission finds that: (1) Surface contamination of at least a total of any 100 square meters of offsite... square meter 0.35 microcuries per square meter. Alpha emission from isotopes other than transuranic isotopes 35 microcuries per square meter 3.5 microcuries per square meter. Beta or gamma emission 40...

  11. 10 CFR 140.84 - Criterion I-Substantial discharge of radioactive material or substantial radiation levels offsite.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Commission finds that: (1) Surface contamination of at least a total of any 100 square meters of offsite... square meter 0.35 microcuries per square meter. Alpha emission from isotopes other than transuranic isotopes 35 microcuries per square meter 3.5 microcuries per square meter. Beta or gamma emission 40...

  12. Modeling and control of non-square MIMO system using relay feedback.

    PubMed

    Kalpana, D; Thyagarajan, T; Gokulraj, N

    2015-11-01

    This paper proposes a systematic approach for the modeling and control of non-square MIMO systems in the time domain using relay feedback. Conventionally, modeling, selection of the control configuration, and controller design of non-square MIMO systems are performed using input/output information of the direct loops, while the outputs of the undesired responses, which carry valuable information on the interaction among the loops, are not considered. In this paper, the undesired responses obtained from the relay feedback test are also taken into consideration to extract information about the interaction between the loops. The studies are performed on an Air Path Scheme of Turbocharged Diesel Engine (APSTDE) model, which is a typical non-square MIMO system with 3 input and 2 output variables. From the relay test response, generalized analytical expressions are derived and used to estimate the unknown system parameters and to evaluate interaction measures. The interaction is analyzed using the Block Relative Gain (BRG) method. The identified model is later used to design an appropriate controller for closed-loop studies. Closed-loop simulations were performed for both servo and regulatory operations, and the Integral of Squared Error (ISE) performance criterion was employed to quantitatively evaluate the performance of the proposed scheme. The usefulness of the proposed method is demonstrated on a lab-scale Two-Tank Cylindrical Interacting System (TTCIS) configured as a non-square system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  13. 10 CFR 840.4 - Criterion I-Substantial discharge of radioactive material or substantial radiation levels offsite.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    .... (b) DOE finds that— (1) Surface contamination of at least a total of any 100 square meters of offsite... microcuries per square meter 0.35 microcuries per square meter. Alpha emission from isotopes other than transuranic isotopes 35 microcuries per square meter 3.5 microcuries per square meter. Beta or gamma emission...

  14. Linear mixed-effects models to describe individual tree crown width for China-fir in Fujian Province, southeast China.

    PubMed

    Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu

    2015-01-01

    A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. Ordinary least squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models, based on the multiple linear model, that take into account the random effects of plots. The best random-effects combinations for the LME models were determined by Akaike's information criterion, the Bayesian information criterion, and the −2 log-likelihood. Heteroscedasticity was reduced with three residual variance functions: the power function, the exponential function, and the constant-plus-power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). The LME model was then compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R²). For individual tree crown width, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.

  15. A comparison of image restoration approaches applied to three-dimensional confocal and wide-field fluorescence microscopy.

    PubMed

    Verveer, P. J; Gemkow, M. J; Jovin, T. M

    1999-01-01

    We have compared different image restoration approaches for fluorescence microscopy. The most widely used algorithms were classified with a Bayesian theory according to the assumed noise model and the type of regularization imposed. We considered both Gaussian and Poisson models for the noise in combination with Tikhonov regularization, entropy regularization, Good's roughness and without regularization (maximum likelihood estimation). Simulations of fluorescence confocal imaging were used to examine the different noise models and regularization approaches using the mean squared error criterion. The assumption of a Gaussian noise model yielded only slightly higher errors than the Poisson model. Good's roughness was the best choice for the regularization. Furthermore, we compared simulated confocal and wide-field data. In general, restored confocal data are superior to restored wide-field data, but given sufficient higher signal level for the wide-field data the restoration result may rival confocal data in quality. Finally, a visual comparison of experimental confocal and wide-field data is presented.

  16. Measurement of the inertial properties of the Helios F-1 spacecraft

    NASA Technical Reports Server (NTRS)

    Gayman, W. H.

    1975-01-01

    A gravity pendulum method of measuring lateral moments of inertia of large structures with an error of less than 1% is outlined. The method is based on the fact that, for a physical pendulum on a knife-edge support, the period of oscillation is minimal when the distance from the axis of rotation to the system center of gravity equals the system centroidal radius of gyration. The method is applied to the results of a test procedure in which the Helios F-1 spacecraft was placed in a roll fixture with crossed flexure pivots as elastic constraints, and system oscillation measurements were made with each of a set of added moment-of-inertia increments. Equations of motion are derived with allowance for the effect of the finite pivot radius, and an error analysis is carried out to find the criterion for maximum accuracy in determining the square of the centroidal radius of gyration. The test procedure allows all measurements to be made with the specimen in the upright position.

  17. Wireless visual sensor network resource allocation using cross-layer optimization

    NASA Astrophysics Data System (ADS)

    Bentley, Elizabeth S.; Matyjas, John D.; Medley, Michael J.; Kondi, Lisimachos P.

    2009-01-01

    In this paper, we propose an approach to manage network resources for a Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network where nodes monitor scenes with varying levels of motion. It uses cross-layer optimization across the physical layer, the link layer and the application layer. Our technique simultaneously assigns a source coding rate, a channel coding rate, and a power level to all nodes in the network based on one of two criteria that maximize the quality of video of the entire network as a whole, subject to a constraint on the total chip rate. One criterion results in the minimal average end-to-end distortion amongst all nodes, while the other criterion minimizes the maximum distortion of the network. Our approach allows one to determine the capacity of the visual sensor network based on the number of nodes and the quality of video that must be transmitted. For bandwidth-limited applications, one can also determine the minimum bandwidth needed to accommodate a number of nodes with a specific target chip rate. Video captured by a sensor node camera is encoded and decoded using the H.264 video codec by a centralized control unit at the network layer. To reduce the computational complexity of the solution, Universal Rate-Distortion Characteristics (URDCs) are obtained experimentally to relate bit error probabilities to the distortion of corrupted video. Bit error rates are found first by using Viterbi's upper bounds on the bit error probability and second, by simulating nodes transmitting data spread by Total Square Correlation (TSC) codes over a Rayleigh-faded DS-CDMA channel and receiving that data using Auxiliary Vector (AV) filtering.

  18. Analysing the accuracy of machine learning techniques to develop an integrated influent time series model: case study of a sewage treatment plant, Malaysia.

    PubMed

    Ansari, Mozafar; Othman, Faridah; Abunama, Taher; El-Shafie, Ahmed

    2018-04-01

    The function of a sewage treatment plant is to treat the sewage to acceptable standards before it is discharged into the receiving waters. To design and operate such plants, it is necessary to measure and predict the influent flow rate. In this research, the influent flow rate of a sewage treatment plant (STP) was modelled and predicted by autoregressive integrated moving average (ARIMA), nonlinear autoregressive network (NAR) and support vector machine (SVM) regression time series algorithms. To evaluate the models' accuracy, the root mean square error (RMSE) and coefficient of determination (R²) were calculated as initial assessment measures, while the relative error (RE), peak flow criterion (PFC) and low flow criterion (LFC) were calculated as final evaluation measures to demonstrate the detailed accuracy of the selected models. An integrated model was developed based on the individual models' prediction ability for low, average and peak flow. The initial assessment showed that the ARIMA model was the least accurate and the NAR model the most accurate. The RE results also show that the SVM model's frequency of errors above 10% or below −10% was greater than the NAR model's. The influent was also forecasted up to 44 weeks ahead by both models, and the graphical results indicate that the NAR model made better predictions than the SVM model. The final evaluation demonstrated that SVM made better predictions at peak flow, while NAR fit well for the low and average inflow ranges. The integrated model therefore combines the NAR model for low and average influent with the SVM model for peak inflow.

  19. Computation of forces from deformed visco-elastic biological tissues

    NASA Astrophysics Data System (ADS)

    Muñoz, José J.; Amat, David; Conte, Vito

    2018-04-01

    We present a least-squares based inverse analysis of visco-elastic biological tissues. The proposed method computes the set of contractile forces (dipoles) at the cell boundaries that induce the observed and quantified deformations. We show that computing these forces requires regularisation of the problem functional for some of the load configurations studied here. The functional measures the error of the dynamic problem, discretised in time with a second-order implicit time-stepping scheme and in space with standard finite elements. We analyse the uniqueness of the inverse problem and estimate the regularisation parameter by means of an L-curve criterion. We apply the methodology to a simple toy problem and to an in vivo set of morphogenetic deformations of the Drosophila embryo.

  20. Development and Initial Validation of the Multicultural Personality Inventory (MPI).

    PubMed

    Ponterotto, Joseph G; Fietzer, Alexander W; Fingerhut, Esther C; Woerner, Scott; Stack, Lauren; Magaldi-Dopman, Danielle; Rust, Jonathan; Nakao, Gen; Tsai, Yu-Ting; Black, Natasha; Alba, Renaldo; Desai, Miraj; Frazier, Chantel; LaRue, Alyse; Liao, Pei-Wen

    2014-01-01

    Two studies summarize the development and initial validation of the Multicultural Personality Inventory (MPI). In Study 1, the 115-item prototype MPI was administered to 415 university students, where exploratory factor analysis resulted in a 70-item, 7-factor model. In Study 2, the 70-item MPI and theoretically related companion instruments were administered to a multisite sample of 576 university students. Confirmatory factor analysis found the 7-factor structure to be a relatively good fit to the data (Comparative Fit Index = .954; root mean square error of approximation = .057), and MPI factors predicted variance in criterion variables above and beyond the variance accounted for by broad personality traits (i.e., the Big Five). Study limitations and directions for further validation research are specified.

  1. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

    PubMed

    van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B

    2016-11-24

    Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for the substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small-sample bias, coverage of confidence intervals, and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and by a modified estimation procedure known as Firth's correction are compared. The results show that, besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches to identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak; given our findings, there is an urgent need for new research to provide guidance on sample size considerations for binary logistic regression analysis.

  2. Procrustes Matching by Congruence Coefficients

    ERIC Educational Resources Information Center

    Korth, Bruce; Tucker, L. R.

    1976-01-01

    Matching by Procrustes methods involves the transformation of one matrix to match with another. A special least squares criterion, the congruence coefficient, has advantages as a criterion for some factor analytic interpretations. A Procrustes method maximizing the congruence coefficient is given. (Author/JKS)
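
    The congruence coefficient itself is one line; the sketch below also shows it applied after a classical least-squares (orthogonal) Procrustes rotation for comparison. Note that the paper's method maximizes congruence directly, rather than the least-squares fit used here.

    ```python
    import numpy as np

    def congruence_coefficient(x, y):
        """Tucker's congruence coefficient between two loading vectors:
        phi = sum(x*y) / sqrt(sum(x^2) * sum(y^2)). Unlike the correlation,
        it is computed about zero rather than about the means."""
        return np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2))

    def procrustes_congruence(A, B):
        """Column-wise congruence after the least-squares orthogonal
        Procrustes rotation of B toward A (minimizing ||A - B T||_F)."""
        U, _, Vt = np.linalg.svd(A.T @ B)
        B_rot = B @ (U @ Vt).T             # B times the optimal rotation T
        return [congruence_coefficient(A[:, j], B_rot[:, j])
                for j in range(A.shape[1])]
    ```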

  3. Using weighted power mean for equivalent square estimation.

    PubMed

    Zhou, Sumin; Wu, Qiuwen; Li, Xiaobo; Ma, Rongtao; Zheng, Dandan; Wang, Shuo; Zhang, Mutian; Li, Sicong; Lei, Yu; Fan, Qiyong; Hyun, Megan; Diener, Tyler; Enke, Charles

    2017-11-01

    Equivalent Square (ES) enables the calculation of many radiation quantities for rectangular treatment fields based only on measurements from square fields. While it is widely applied in radiotherapy, its accuracy, especially for extremely elongated fields, still leaves room for improvement. In this study, we introduce a novel explicit ES formula based on the Weighted Power Mean (WPM) function and compare its performance with the Sterling formula and Vadash/Bjärngard's formula. The proposed WPM formula is ES_WPM(a, b) = [w·a^α + (1 − w)·b^α]^(1/α) for a rectangular photon field with sides a and b. The formula's performance was evaluated by three methods: the standard deviation of the model fitting residual error, the maximum relative model prediction error, and the model's Akaike Information Criterion (AIC). Testing datasets included the ES table from the British Journal of Radiology (BJR), photon output factors (S_cp) from the Varian TrueBeam Representative Beam Data (Med Phys. 2012;39:6981-7018), and published S_cp data for the Varian TrueBeam Edge (J Appl Clin Med Phys. 2015;16:125-148). For the BJR dataset, the best-fit parameter value α = −1.25 achieved a 20% reduction in the standard deviation of the ES estimation residual error compared with the two established formulae. For the two Varian datasets, employing the WPM formula reduced the maximum relative error from 3.5% (Sterling) or 2% (Vadash/Bjärngard) to 0.7% for open field sizes ranging from 3 cm to 40 cm, and the reduction was even more prominent for 1 cm field sizes on the Edge (J Appl Clin Med Phys. 2015;16:125-148). The AIC value of the WPM formula was consistently lower than those of the traditional formulae on photon output factors, most prominently on very elongated small fields. The WPM formula outperformed the traditional formulae on all three testing datasets. With the increasing utilization of very elongated, small rectangular fields in modern radiotherapy, improved photon output factor estimation is expected by adopting the WPM formula in treatment planning and secondary MU checks. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
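
    The WPM formula is easy to apply. The sketch below uses the best-fit α = −1.25 reported for the BJR table and assumes the symmetric weight w = 0.5 (so that ES = a when a = b), alongside the classical Sterling 4·Area/Perimeter formula for comparison.

    ```python
    def es_wpm(a, b, alpha=-1.25, w=0.5):
        """Weighted-power-mean equivalent square for an a x b field:
        ES = (w*a**alpha + (1 - w)*b**alpha)**(1/alpha).
        alpha = -1.25 is the paper's best fit to the BJR table; w = 0.5
        is an assumption made here for symmetry."""
        return (w * a ** alpha + (1 - w) * b ** alpha) ** (1 / alpha)

    def es_sterling(a, b):
        """Classical Sterling approximation, 4*Area/Perimeter."""
        return 2 * a * b / (a + b)

    print(es_wpm(5, 40), es_sterling(5, 40))   # elongated-field comparison
    ```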

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwon, Deukwoo; Little, Mark P.; Miller, Donald L.

    Purpose: To determine more accurate regression formulas for estimating peak skin dose (PSD) from reference air kerma (RAK) or kerma-area product (KAP). Methods: After grouping the data from 21 procedures into 13 clinically similar groups, assessments were made of optimal clustering using the Bayesian information criterion to obtain the optimal linear regressions of (log-transformed) PSD vs RAK, PSD vs KAP, and PSD vs RAK and KAP. Results: Three clusters of clinical groups were optimal in the regression of PSD vs RAK, seven clusters were optimal in the regression of PSD vs KAP, and six clusters were optimal in the regression of PSD vs RAK and KAP. Prediction of PSD using both RAK and KAP is significantly better than prediction of PSD with either RAK or KAP alone. The regression of PSD vs RAK provided better predictions of PSD than the regression of PSD vs KAP. The partial-pooling (clustered) method yields smaller mean squared errors compared with the complete-pooling method. Conclusion: PSD distributions for interventional radiology procedures are log-normal. Estimates of PSD derived from RAK and KAP jointly are most accurate, followed closely by estimates derived from RAK alone. Estimates of PSD derived from KAP alone are the least accurate. Using a stochastic search approach, it is possible to cluster together certain dissimilar types of procedures to minimize the total error sum of squares.

  5. De-noising of 3D multiple-coil MR images using modified LMMSE estimator.

    PubMed

    Yaghoobi, Nima; Hasanzadeh, Reza P R

    2018-06-20

    De-noising is a crucial topic in Magnetic Resonance Imaging (MRI); it aims at suppressing noise while losing as little Magnetic Resonance (MR) image information and detail as possible. Nowadays, multiple-coil MRI systems are preferred to single-coil ones because they accelerate the imaging process. Because the noise models in single-coil and multiple-coil MRI systems differ, de-noising methods adapted to single-coil systems do not work appropriately with multiple-coil ones. The noise in single-coil MRI systems is Rician, while in multiple-coil systems (if no subsampling occurs in k-space or the GRAPPA reconstruction process is applied in the coils) it follows a noncentral chi (nc-χ) distribution. In this paper, a new filtering method based on the Linear Minimum Mean Square Error (LMMSE) estimator is proposed for multiple-coil MR images corrupted by nc-χ noise. In the presented method, the Bayesian Mean Square Error (BMSE) criterion is used, and proved, for the nc-χ noise model to obtain an optimum similarity selection of voxels, and a nonlocal voxel selection methodology is proposed for the nc-χ distribution. The results illustrate robust and accurate performance compared to the related state-of-the-art methods, on both ideal nc-χ images and GRAPPA-reconstructed ones. Copyright © 2018. Published by Elsevier Inc.

  6. Precoded spatial multiplexing MIMO system with spatial component interleaver.

    PubMed

    Gao, Xiang; Wu, Zhanji

    In this paper, the performance of a precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with a spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with a spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of the coded bits is derived. Based on the PEP analysis, an optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited-feedback precoded scheme with a linear zero-forcing (ZF) receiver, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed in order to minimize a bound on the average probability of a symbol vector error. Based on the average mutual information (AMI) maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with a spatial component interleaver achieves significant performance advantages over the conventional spatial multiplexing MIMO system.

  7. A Comparison of Heuristic Procedures for Minimum within-Cluster Sums of Squares Partitioning

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Steinley, Douglas

    2007-01-01

    Perhaps the most common criterion for partitioning a data set is the minimization of the within-cluster sums of squared deviation from cluster centroids. Although optimal solution procedures for within-cluster sums of squares (WCSS) partitioning are computationally feasible for small data sets, heuristic procedures are required for most practical…
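
    The criterion itself is simple to state in code; a minimal Python definition (ours, for illustration) of the within-cluster sum of squares for a labeled partition:

      import numpy as np

      def wcss(X, labels):
          # Within-cluster sum of squared deviations from cluster centroids.
          total = 0.0
          for k in np.unique(labels):
              pts = X[labels == k]
              total += float(np.sum((pts - pts.mean(axis=0)) ** 2))
          return total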

  8. A new validation technique for estimations of body segment inertia tensors: Principal axes of inertia do matter.

    PubMed

    Rossi, Marcel M; Alderson, Jacqueline; El-Sallam, Amar; Dowling, James; Reinbolt, Jeffrey; Donnelly, Cyril J

    2016-12-08

    The aims of this study were to: (i) establish a new criterion method to validate inertia tensor estimates by setting the experimental angular velocity data of an airborne object as ground truth against simulations run with the estimated tensors, and (ii) test the sensitivity of the simulations to changes in the inertia tensor components. A rigid steel cylinder was covered with reflective kinematic markers and projected through a calibrated motion capture volume. Simulations of the airborne motion were run with two models, using inertia tensors estimated with a geometric formula or the compound pendulum technique. The deviation angles between experimental (ground truth) and simulated angular velocity vectors and the root mean squared deviation angle were computed for every simulation. Monte Carlo analyses were performed to assess the sensitivity of simulations to changes in magnitude of principal moments of inertia within ±10% and to changes in orientation of principal axes of inertia within ±10° (of the geometric-based inertia tensor). Root mean squared deviation angles ranged between 2.9° and 4.3° for the inertia tensor estimated geometrically, and between 11.7° and 15.2° for the compound pendulum values. Errors up to 10% in magnitude of principal moments of inertia yielded root mean squared deviation angles ranging between 3.2° and 6.6°, and between 5.5° and 7.9° when lumped with errors of 10° in principal axes of inertia orientation. The proposed technique can effectively validate inertia tensors from novel estimation methods of body segment inertial parameters. Principal axes of inertia orientation should not be neglected when modelling human/animal mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.
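
    The simulation side of such a validation reduces to integrating Euler's equations for torque-free rotation with the candidate inertia tensor and comparing against the measured angular velocity. A hedged Python sketch (our simplification; air drag and marker filtering are ignored):

      import numpy as np
      from scipy.integrate import solve_ivp

      def simulate_omega(I, omega0, t):
          # Torque-free Euler equations in the body frame:
          # I * domega/dt = -(omega x (I @ omega))
          Iinv = np.linalg.inv(I)
          rhs = lambda _, w: Iinv @ (-np.cross(w, I @ w))
          return solve_ivp(rhs, (t[0], t[-1]), omega0, t_eval=t, rtol=1e-9).y.T

      def rms_deviation_angle(w_sim, w_meas):
          # Angle between simulated and measured angular velocity vectors.
          c = np.sum(w_sim * w_meas, axis=1) / (
              np.linalg.norm(w_sim, axis=1) * np.linalg.norm(w_meas, axis=1))
          ang = np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
          return float(np.sqrt(np.mean(ang ** 2)))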

  9. A Generic Nonlinear Aerodynamic Model for Aircraft

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2014-01-01

    A generic model of the aerodynamic coefficients was developed using wind tunnel databases for eight different aircraft and multivariate orthogonal functions. For each database and each coefficient, models were determined using polynomials expanded about the state and control variables, and an orthogonalization procedure. A predicted squared-error criterion was used to automatically select the model terms. Modeling terms picked in at least half of the analyses, which totalled 45 terms, were retained to form the generic nonlinear aerodynamic (GNA) model. Least squares was then used to estimate the model parameters and associated uncertainty that best fit the GNA model to each database. Nonlinear flight simulations were used to demonstrate that the GNA model produces accurate trim solutions, local behavior (modal frequencies and damping ratios), and global dynamic behavior (91% accurate state histories and 80% accurate aerodynamic coefficient histories) under large-amplitude excitation. This compact aerodynamics model can be used to decrease on-board memory storage requirements, quickly change conceptual aircraft models, provide smooth analytical functions for control and optimization applications, and facilitate real-time parametric system identification.
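
    The predicted squared-error idea can be sketched as training MSE plus a complexity penalty, minimized by greedy term selection (a Barron-type PSE; the exact penalty constant and the orthogonalization used in the paper are not reproduced here):

      import numpy as np

      def pse(X, y, sigma2_max):
          # Predicted squared error: training MSE plus an overfit penalty
          # that grows with the number of model terms p.
          n, p = X.shape
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          return float(np.mean((y - X @ beta) ** 2)) + sigma2_max * p / n

      def forward_select(candidates, y, sigma2_max):
          # Greedily add the regressor column that lowers PSE the most;
          # stop when no remaining candidate improves the criterion.
          chosen, best = [], np.inf
          while True:
              trials = [(pse(candidates[:, chosen + [j]], y, sigma2_max), j)
                        for j in range(candidates.shape[1]) if j not in chosen]
              if not trials:
                  return chosen
              score, j = min(trials)
              if score >= best:
                  return chosen
              chosen.append(j)
              best = score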

  10. The analytical design of spectral measurements for multispectral remote sensor systems

    NASA Technical Reports Server (NTRS)

    Wiersma, D. J.; Landgrebe, D. A. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. In order to choose a design which will be optimal for the largest class of remote sensing problems, a method was developed which attempted to represent the spectral response function from a scene as accurately as possible. The performance of the overall recognition system was studied relative to the accuracy of the spectral representation. The spectral representation was only one of a set of five interrelated parameter categories which also included the spatial representation parameter, the signal to noise ratio, ancillary data, and information classes. The spectral response functions observed from a stratum were modeled as a stochastic process with a Gaussian probability measure. The criterion for spectral representation was defined by the minimum expected mean-square error.

  11. Effect of ultrasound pre-treatment on the drying kinetics of brown seaweed Ascophyllum nodosum.

    PubMed

    Kadam, Shekhar U; Tiwari, Brijesh K; O'Donnell, Colm P

    2015-03-01

    The effect of ultrasound pre-treatment on the drying kinetics of the brown seaweed Ascophyllum nodosum under hot-air convective drying was investigated. Pre-treatments were carried out at ultrasound intensity levels ranging from 7.00 to 75.78 W cm⁻² for 10 min using an ultrasonic probe system. It was observed that ultrasound pre-treatments reduced the drying time required. The shortest drying times were obtained from samples pre-treated at 75.78 W cm⁻². The fit quality of 6 thin-layer drying models was also evaluated using the coefficient of determination (R²), root mean square error (RMSE), AIC (Akaike information criterion) and BIC (Bayesian information criterion). Drying kinetics were modelled using the Newton, Henderson and Pabis, Page, Wang and Singh, Midilli et al. and Weibull models. The Newton, Wang and Singh, and Midilli et al. models showed the best fit to the experimental drying data. The color of ultrasound pre-treated dried seaweed samples was lighter compared to control samples. It was concluded that ultrasound pre-treatment can be effectively used to reduce the energy cost and drying time for drying of A. nodosum. Copyright © 2014 Elsevier B.V. All rights reserved.
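
    The model-comparison workflow described here is straightforward to reproduce; a Python sketch fitting the Page model and computing RMSE, AIC, and BIC from the residual sum of squares (the data arrays are caller-supplied placeholders, and the AIC/BIC forms assume Gaussian residuals):

      import numpy as np
      from scipy.optimize import curve_fit

      def page(t, k, n):
          # Page thin-layer drying model: moisture ratio MR = exp(-k * t**n)
          return np.exp(-k * t ** n)

      def fit_scores(t, mr):
          popt, _ = curve_fit(page, t, mr, p0=(0.01, 1.0))
          rss = float(np.sum((mr - page(t, *popt)) ** 2))
          n, p = len(t), len(popt)
          rmse = np.sqrt(rss / n)
          aic = n * np.log(rss / n) + 2 * p
          bic = n * np.log(rss / n) + p * np.log(n)
          return popt, rmse, aic, bic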

  12. Using Least Squares for Error Propagation

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2015-01-01

    The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
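
    In NumPy this is a short, standard pattern: request the covariance matrix from the fit and read the parameter SEs off its diagonal; the propagated error of a derived quantity follows from the same matrix (the data values below are made up for illustration):

      import numpy as np

      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
      y = np.array([1.1, 4.2, 8.8, 16.3, 24.9])
      coef, cov = np.polyfit(x, y, 2, cov=True)
      param_se = np.sqrt(np.diag(cov))     # SEs of the fitted parameters

      # Propagated SE of the fitted curve value y(x0) = a*x0^2 + b*x0 + c:
      x0 = 2.5
      J = np.array([x0**2, x0, 1.0])       # gradient of y(x0) w.r.t. (a, b, c)
      se_y0 = float(np.sqrt(J @ cov @ J))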

  13. A multiloop generalization of the circle criterion for stability margin analysis

    NASA Technical Reports Server (NTRS)

    Safonov, M. G.; Athans, M.

    1979-01-01

    In order to provide a theoretical tool suited for characterizing the stability margins of multiloop feedback systems, multiloop input-output stability results generalizing the circle stability criterion are considered. Generalized conic sectors with 'centers' and 'radii' determined by linear dynamical operators are employed to specify the stability margins as a frequency dependent convex set of modeling errors (including nonlinearities, gain variations and phase variations) which the system must be able to tolerate in each feedback loop without instability. The resulting stability criterion gives sufficient conditions for closed loop stability in the presence of frequency dependent modeling errors, even when the modeling errors occur simultaneously in all loops. The stability conditions yield an easily interpreted scalar measure of the amount by which a multiloop system exceeds, or falls short of, its stability margin specifications.

  14. Effect of pH Test-Strip Characteristics on Accuracy of Readings.

    PubMed

    Metheny, Norma A; Gunn, Emily M; Rubbelke, Cynthia S; Quillen, Terrilynn Fox; Ezekiel, Uthayashanker R; Meert, Kathleen L

    2017-06-01

    Little is known about the characteristics of colorimetric pH test strips that are most likely to be associated with accurate interpretations in clinical situations. To compare the accuracy of 4 pH test strips with varying characteristics (i.e., multiple vs single colorimetric squares per calibration, and differing calibration units [1.0 vs 0.5]), a convenience sample of 100 upper-level nursing students with normal color vision was recruited to evaluate the accuracy of the test strips. Six buffer solutions (pH range, 3.0 to 6.0) were used during the testing procedure. Each of the 100 participants performed 20 pH tests in random order, providing a total of 2000 readings. The sensitivity and specificity of each test strip were computed. In addition, the degree to which the test strips under- or overestimated the pH values was analyzed using descriptive statistics. Our criterion for correct readings was an exact match with the pH buffer solution being evaluated. Although none of the test strips evaluated in our study was 100% accurate at all of the measured pH values, those with multiple squares per pH calibration were clearly superior overall to those with a single test square. Test strips with multiple squares per calibration were associated with greater overall accuracy than test strips with a single square per calibration. However, because variable degrees of error were observed in all of the test strips, use of a pH meter is recommended when precise readings are crucial. ©2017 American Association of Critical-Care Nurses.

  15. A technique for the determination of center of gravity and rolling resistance for tilt-seat wheelchairs.

    PubMed

    Lemaire, E D; Lamontagne, M; Barclay, H W; John, T; Martel, G

    1991-01-01

    A balance platform setup was defined for use in the determination of the center of gravity in the sagittal plane for a wheelchair and patient. Using the center of gravity information, measurements from the wheelchair and patient (weight, tire coefficients of friction), and various assumptions (constant speed, level-concrete surface, patient-wheelchair system is a rigid body), a method for estimating the rolling resistance for a wheelchair was outlined. The center of gravity and rolling resistance techniques were validated against criterion values (center of gravity error = 1 percent, rolling resistance root mean square error = 0.33 N, rolling resistance Pearson correlation coefficient = 0.995). Consistent results were also obtained from a test dummy and five subjects. Once the center of gravity is known, it is possible to evaluate the stability of a wheelchair (in terms of tipping over) and the interaction between the level of stability and rolling resistance. These quantitative measures are expected to be of use in the setup of wheelchairs with a variable seat angle and variable wheelbase length or when making comparisons between different wheelchairs.

  16. Global 21 cm Signal Extraction from Foreground and Instrumental Effects. I. Pattern Recognition Framework for Separation Using Training Sets

    NASA Astrophysics Data System (ADS)

    Tauscher, Keith; Rapetti, David; Burns, Jack O.; Switzer, Eric

    2018-02-01

    The sky-averaged (global) highly redshifted 21 cm spectrum from neutral hydrogen is expected to appear in the VHF range of ∼20–200 MHz and its spectral shape and strength are determined by the heating properties of the first stars and black holes, by the nature and duration of reionization, and by the presence or absence of exotic physics. Measurements of the global signal would therefore provide us with a wealth of astrophysical and cosmological knowledge. However, the signal has not yet been detected because it must be seen through strong foregrounds weighted by a large beam, instrumental calibration errors, and ionospheric, ground, and radio-frequency-interference effects, which we collectively refer to as “systematics.” Here, we present a signal extraction method for global signal experiments which uses Singular Value Decomposition of “training sets” to produce systematics basis functions specifically suited to each observation. Instead of requiring precise absolute knowledge of the systematics, our method effectively requires precise knowledge of how the systematics can vary. After calculating eigenmodes for the signal and systematics, we perform a weighted least squares fit of the corresponding coefficients and select the number of modes to include by minimizing an information criterion. We compare the performance of the signal extraction when minimizing various information criteria and find that minimizing the Deviance Information Criterion most consistently yields unbiased fits. The methods used here are built into our widely applicable, publicly available Python package, pylinex, which analytically calculates constraints on signals and systematics from given data, errors, and training sets.
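
    A compressed Python sketch of the pipeline (ours, not pylinex itself; a BIC-like penalized chi-squared stands in for the Deviance Information Criterion): build systematics modes by SVD of the training set, fit the first k modes by weighted least squares, and keep the k minimizing the criterion.

      import numpy as np

      def svd_modes(training_set):
          # Columns of training_set are simulated systematics realizations;
          # columns of U are the derived basis functions.
          U, _, _ = np.linalg.svd(training_set, full_matrices=False)
          return U

      def select_n_modes(U, y, weights, kmax):
          # Weighted least-squares fit with k modes; penalized chi-squared
          # stands in for the information criterion minimized in the paper.
          n, scores = len(y), []
          for k in range(1, kmax + 1):
              A = U[:, :k]
              AtW = A.T * weights                  # A^T diag(w)
              coef = np.linalg.solve(AtW @ A, AtW @ y)
              chi2 = float(np.sum(weights * (y - A @ coef) ** 2))
              scores.append(chi2 + k * np.log(n))
          return int(np.argmin(scores)) + 1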

  17. SU-E-T-769: T-Test Based Prior Error Estimate and Stopping Criterion for Monte Carlo Dose Calculation in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Schuemann, J

    2015-06-15

    Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy. Prior error estimates and stopping criteria are not well established for MC. This work aims to fill this gap. Methods: Due to the statistical nature of MC, our approach is based on a one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test based error estimate to develop a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.). Second, a sample under consideration in the t-test is the mean value of dose deposition to the voxel by a sufficiently large number of source particles. Then, according to the central limit theorem, the sample, as the mean value of i.i.d. variables, is normally distributed with expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the same as the true deposited dose) and the on-the-fly calculated mean sample dose from MC is larger than a given error threshold; in addition, users have the freedom to specify the confidence probability and region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: The t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
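
    In confidence-interval form, such a stopping rule amounts to halting once the t-based half-width of the voxel-dose estimate falls below the error threshold. A hedged sketch for one voxel (our simplification of the hypothesis-test formulation; batch means play the role of the i.i.d. samples):

      import numpy as np
      from scipy import stats

      def should_stop(batch_means, threshold, confidence=0.95):
          # batch_means: i.i.d. per-batch mean doses for one voxel.
          n = len(batch_means)
          s = np.std(batch_means, ddof=1)
          half_width = stats.t.ppf(0.5 + confidence / 2, n - 1) * s / np.sqrt(n)
          return half_width < threshold   # stop simulating when True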

  18. A comparison of the fractal and JPEG algorithms

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Shahshahani, M.

    1991-01-01

    A proprietary fractal image compression algorithm and the Joint Photographic Experts Group (JPEG) industry standard algorithm for image compression are compared. In every case, the JPEG algorithm was superior to the fractal method at a given compression ratio according to a root mean square criterion and a peak signal to noise criterion.
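
    Both figures of merit are standard; for reference, a minimal Python version of the two criteria used in the comparison:

      import numpy as np

      def rmse_psnr(original, reconstructed, peak=255.0):
          # Root-mean-square error and peak signal-to-noise ratio (dB).
          rmse = float(np.sqrt(np.mean((original - reconstructed) ** 2)))
          return rmse, 20.0 * np.log10(peak / rmse)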

  19. Low target prevalence is a stubborn source of errors in visual search tasks

    PubMed Central

    Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour

    2009-01-01

    In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1–2%) than at high prevalence (50%). Unfortunately, low prevalence is characteristic of important search tasks like airport security and medical screening where miss errors are dangerous. A series of experiments show this prevalence effect is very robust. In signal detection terms, the prevalence effect can be explained as a criterion shift and not a change in sensitivity. Several efforts to induce observers to adopt a better criterion fail. However, a regime of brief retraining periods with high prevalence and full feedback allows observers to hold a good criterion during periods of low prevalence with no feedback. PMID:17999575
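
    The signal-detection decomposition referred to here separates sensitivity (d′) from the decision criterion (c); both follow from hit and false-alarm rates (a textbook computation, not the authors' code):

      from scipy.stats import norm

      def dprime_and_criterion(hit_rate, fa_rate):
          # d' = z(H) - z(F); criterion c = -(z(H) + z(F)) / 2.
          # A prevalence-induced criterion shift moves c while d' stays put.
          zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
          return zh - zf, -0.5 * (zh + zf)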

  20. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix of measurement errors, C_E, for estimating the negative log-likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix of total errors (including model errors and measurement errors), C_Ek, to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown C_Ek from the residuals during model calibration. The inferred C_Ek was then used in the evaluation of the model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using C_Ek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using C_Ek obtained from the iterative two-stage method also improved the predictive performance of the individual models and of model averaging in both the synthetic and experimental studies.
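
    A toy version of the covariance inference step, assuming (as we do here purely for illustration) that the temporal correlation of the total errors is adequately captured by an AR(1) structure estimated from calibration residuals:

      import numpy as np

      def infer_total_error_cov(residuals):
          # AR(1) stand-in for C_Ek: lag-1 autocorrelation and variance
          # from the residuals define cov[i, j] = var * rho**|i - j|.
          r = residuals - residuals.mean()
          rho = float(np.corrcoef(r[:-1], r[1:])[0, 1])
          var = float(np.var(r, ddof=1))
          idx = np.arange(len(r))
          return var * rho ** np.abs(np.subtract.outer(idx, idx))

      # Two-stage loop (schematic): calibrate with the current covariance,
      # re-infer it from the new residuals, and iterate to convergence.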

  1. Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures

    DTIC Science & Technology

    2016-06-01

    inventory management improvement plan, mean of absolute scaled error, lead time adjusted squared error, forecast accuracy, benchmarking, naïve method ... Manager; JASA, Journal of the American Statistical Association; LASE, Lead-time Adjusted Squared Error; LCI, Life Cycle Indicator; MA, Moving Average; MAE ... Mean Squared Error; NAVSUP, Naval Supply Systems Command; NDAA, National Defense Authorization Act; NIIN, National Individual Identification Number

  2. Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Takane, Yoshio

    2004-01-01

    We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method…

  3. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
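
    The variance ratio plays the same role in classical errors-in-variables fitting; as a point of comparison (this is Deming regression, not the proposed modified-least-squares method itself), the straight-line fit with known λ = response error variance / measurement error variance is:

      import numpy as np

      def deming_fit(x, y, lam):
          # Errors-in-variables straight-line fit with known variance ratio lam.
          xbar, ybar = x.mean(), y.mean()
          sxx = np.sum((x - xbar) ** 2)
          syy = np.sum((y - ybar) ** 2)
          sxy = np.sum((x - xbar) * (y - ybar))
          slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
                   + 4 * lam * sxy ** 2)) / (2 * sxy)
          return slope, ybar - slope * xbar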

  4. Stratospheric Assimilation of Chemical Tracer Observations Using a Kalman Filter. Pt. 2; Chi-Square Validated Results and Analysis of Variance and Correlation Dynamics

    NASA Technical Reports Server (NTRS)

    Menard, Richard; Chang, Lang-Ping

    1998-01-01

    A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Spectrometer (CLAES) and the Halogen Occultation Experiment (HALOE) instruments on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ² criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousands of kilometers away from the HALOE observation locations was well captured by the Kalman filter due to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlations rather than on evolving the error variance.

  5. Use of scan overlap redundancy to enhance multispectral aircraft scanner data

    NASA Technical Reports Server (NTRS)

    Lindenlaub, J. C.; Keat, J.

    1973-01-01

    Two criteria were suggested for optimizing the resolution error versus signal-to-noise-ratio tradeoff. The first criterion uses equal weighting coefficients and chooses n, the number of lines averaged, so as to make the average resolution error equal to the noise error. The second criterion adjusts both the number and relative sizes of the weighting coefficients so as to minimize the total error (resolution error plus noise error). The optimum set of coefficients depends upon the geometry of the resolution element, the number of redundant scan lines, the scan line increment, and the original signal-to-noise ratio of the channel. Programs were developed to find the optimum number and relative weights of the averaging coefficients. A working definition of signal-to-noise ratio was given and used to try line averaging on a typical set of data. Line averaging was evaluated only with respect to its effect on classification accuracy.

  6. A fast-initializing digital equalizer with on-line tracking for data communications

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Barksdale, W. J.

    1974-01-01

    A theory is developed for a digital equalizer for use in reducing intersymbol interference (ISI) on high-speed data communications channels. The equalizer is initialized with a single isolated transmitter pulse, provided the signal-to-noise ratio (SNR) is not unusually low, then switches to a decision-directed, on-line mode of operation that allows tracking of channel variations. Conditions for optimal tap-gain settings are obtained first for a transversal equalizer structure by using a mean squared error (MSE) criterion, a first-order gradient algorithm to determine the adjustable equalizer tap-gains, and a sequence of isolated initializing pulses. Since the rate of tap-gain convergence depends on the eigenvalues of a channel output correlation matrix, convergence can be improved by applying a linear transformation to the channel output to obtain a new correlation matrix.
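
    The first-order gradient (LMS-type) tap adaptation described here has a compact form; a Python sketch for a transversal equalizer trained against a reference sequence (decision-directed operation would replace d with sliced decisions):

      import numpy as np

      def lms_equalizer(x, d, n_taps, mu):
          # x: channel output; d: reference (training) symbols; mu: step size.
          w = np.zeros(n_taps)
          y = np.zeros(len(d))
          for k in range(n_taps, len(x)):
              u = x[k - n_taps:k][::-1]   # most recent samples first
              y[k] = w @ u
              e = d[k] - y[k]             # error under the MSE criterion
              w += mu * e * u             # first-order gradient step
          return w, y

    Convergence speed is governed by the eigenvalue spread of the input correlation matrix, which is why the pre-transformation mentioned above helps.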

  7. Kinetics of Methane Production from Swine Manure and Buffalo Manure.

    PubMed

    Sun, Chen; Cao, Weixing; Liu, Ronghou

    2015-10-01

    The degradation kinetics of swine and buffalo manure for methane production was investigated. Six kinetic models were employed to describe the corresponding experimental data. These models were evaluated by two statistical measures, the root mean square prediction error (RMSPE) and Akaike's information criterion (AIC). The results showed that the logistic and Fitzhugh models could predict the experimental data very well for the digestion of swine and buffalo manure, respectively. The predicted methane yield potentials for swine and buffalo manure were 487.9 and 340.4 mL CH4/g volatile solid (VS), respectively, which were close to the experimental values, when the digestion temperature was 36 ± 1 °C in the biochemical methane potential assays. Besides, the rate constants revealed that swine manure had a much faster methane production rate than buffalo manure.

  8. Two Enhancements of the Logarithmic Least-Squares Method for Analyzing Subjective Comparisons

    DTIC Science & Technology

    1989-03-25

    error term. For this model, the total sum of squares (SSTO), defined as SSTO = Σᵢ₌₁ⁿ (yᵢ − ȳ)², can be partitioned into error and regression sums ... of the regression line around the mean value. Mathematically, for the model given by equation A.4, SSTO = SSE + SSR (A.6), where SSTO is the total ... sum of squares (i.e., the variance of the yᵢ's), SSE is the error sum of squares, and SSR is the regression sum of squares. SSTO, SSE, and SSR are given

  9. Validation of the Chinese Version of the Quality of Nursing Work Life Scale

    PubMed Central

    Fu, Xia; Xu, Jiajia; Song, Li; Li, Hua; Wang, Jing; Wu, Xiaohua; Hu, Yani; Wei, Lijun; Gao, Lingling; Wang, Qiyi; Lin, Zhanyi; Huang, Huigen

    2015-01-01

    Quality of Nursing Work Life (QNWL) serves as a predictor of a nurse's intent to leave and hospital nurse turnover. However, QNWL measurement tools that have been validated for use in China are lacking. The present study evaluated the construct validity of the QNWL scale in China. A cross-sectional study was conducted using convenience sampling from June 2012 to January 2013 at five hospitals in Guangzhou, which employ 1938 nurses. The participants were asked to complete the QNWL scale and the World Health Organization Quality of Life abbreviated version (WHOQOL-BREF). A total of 1922 nurses provided the final data used for analyses. Sixty-five nurses from the first investigated division were re-measured two weeks later to assess the test-retest reliability of the scale. The internal consistency reliability of the QNWL scale was assessed using Cronbach's α. Test-retest reliability was assessed using the intra-class correlation coefficient (ICC). Criterion-related validity was assessed using the correlation of the total scores of the QNWL and the WHOQOL-BREF. Construct validity was assessed with the following indices: χ² statistic and degrees of freedom; root mean square error of approximation (RMSEA); the Akaike information criterion (AIC); the consistent Akaike information criterion (CAIC); the goodness-of-fit index (GFI); the adjusted goodness-of-fit index; and the comparative fit index (CFI). The findings demonstrated high internal consistency (Cronbach's α = 0.912) and test-retest reliability (ICC = 0.74) for the QNWL scale. The chi-square test (χ² = 13879.60, df = 813, P = 0.0001) was significant. The RMSEA value was 0.091, and AIC = 1806.00, CAIC = 7730.69, CFI = 0.93, and GFI = 0.74. The correlation coefficient between the QNWL total scores and the WHOQOL-BREF total scores was 0.605 (p<0.01). The QNWL scale was reliable and valid in Chinese-speaking nurses and could be used as a clinical and research instrument for measuring work-related factors among nurses in China. PMID:25950838

  10. Selecting the optimum number of partial least squares components for the calibration of attenuated total reflectance-mid-infrared spectra of undesigned kerosene samples.

    PubMed

    Gómez-Carracedo, M P; Andrade, J M; Rutledge, D N; Faber, N M

    2007-03-07

    Selecting the correct dimensionality is critical for obtaining partial least squares (PLS) regression models with good predictive ability. Although calibration and validation sets are best established using experimental designs, industrial laboratories cannot afford such an approach. Typically, samples are collected in a (formally) undesigned way, spread over time, and their measurements are included in routine measurement processes. This makes it hard to evaluate PLS model dimensionality. In this paper, classical criteria (leave-one-out cross-validation and adjusted Wold's criterion) are compared to recently proposed alternatives (smoothed PLS-PoLiSh and a randomization test) to seek out the optimum dimensionality of PLS models. Kerosene (jet fuel) samples were measured by attenuated total reflectance-mid-IR spectrometry and their spectra were used to predict eight important properties determined using reference methods that are time-consuming and prone to analytical errors. The alternative methods were shown to give reliable dimensionality predictions when compared to external validation. By contrast, the simpler methods seemed to be largely affected by the largest changes in the modeling capabilities of the first components.
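
    A common concrete form of the classical criterion is leave-one-out PRESS as a function of dimensionality; a scikit-learn sketch (standard library calls; spectral preprocessing and wavelength selection are omitted):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      def press_by_components(X, y, max_components):
          # Leave-one-out PRESS for PLS models of increasing dimensionality;
          # the first clear minimum (or knee) suggests the model size.
          press = []
          for a in range(1, max_components + 1):
              yhat = cross_val_predict(PLSRegression(n_components=a), X, y,
                                       cv=len(y))
              press.append(float(np.sum((y - yhat.ravel()) ** 2)))
          return np.array(press)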

  11. Sustained prediction ability of net analyte preprocessing methods using reduced calibration sets. Theoretical and experimental study involving the spectrophotometric analysis of multicomponent mixtures.

    PubMed

    Goicoechea, H C; Olivieri, A C

    2001-07-01

    A newly developed multivariate method involving net analyte preprocessing (NAP) was tested using central composite calibration designs of progressively decreasing size for the multivariate simultaneous spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least-squares (PLS-1). Minimisation of the calibration predicted error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples, and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net analyte based methods.

  12. [The psychometric properties of the Turkish version of Myocardial Infarction Dimensional Assessment Scale (MIDAS)].

    PubMed

    Yılmaz, Emel; Eser, Erhan; Şekuri, Cevad; Kültürsay, Hakan

    2011-08-01

    The purpose of this study was to describe the psychometric properties of the Myocardial Infarction Dimensional Assessment Scale (MIDAS). This is a methodological cultural adaptation study. The MIDAS consists of 35 items covering seven domains: physical activity, insecurity, emotional reaction, dependency, diet, concerns over medication, and side effects, which are rated on a five-point Likert scale from 1 (never) to 5 (always). The highest score of the MIDAS is 100. Quality of life (QOL) decreases as the score of the scale increases. Overall, 185 myocardial infarction (MI) patients were enrolled in this study. Cronbach's alpha was used for the reliability analysis. Criterion validity, structural validity, and sensitivity analysis approaches were used for the validity analysis. The New York Heart Association (NYHA) and Canadian Cardiovascular Society Functional Classifications (CCSFC) were used for testing the criterion validity, and the SF-36 for construct validity testing of the Turkish version of the MIDAS. The range of Cronbach's alpha values is 0.79-0.90 for the seven domains of the scale. No problematic items were observed for the entire scale. Medication-related domains of the MIDAS showed considerable floor effects (35.7%-22.7%). Confirmatory factor analysis indicators [Comparative Fit Index (CFI) = 0.95 and Root Mean Square Error of Approximation (RMSEA) = 0.075] supported the construct validity of the MIDAS. Convergent validity of the MIDAS was confirmed by correlation with the SF-36 scale where appropriate. Criterion validity results were also satisfactory when comparing different stages of the NYHA and the CCSFC (p<0.05). Overall, the results revealed that the Turkish version of the MIDAS is a reliable and valid instrument.

  13. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.

  14. A forecasting model for dengue incidence in the District of Gampaha, Sri Lanka.

    PubMed

    Withanage, Gayan P; Viswakula, Sameera D; Nilmini Silva Gunawardena, Y I; Hapugoda, Menaka D

    2018-04-24

    Dengue is one of the major health problems in Sri Lanka, causing an enormous social and economic burden to the country. An accurate early warning system can enhance the efficiency of preventive measures. The aim of the study was to develop and validate a simple, accurate forecasting model for the District of Gampaha, Sri Lanka. Three time-series regression models were developed using monthly rainfall, rainy days, temperature, humidity, wind speed and retrospective dengue incidences over the period January 2012 to November 2015 for the District of Gampaha, Sri Lanka. Various lag times were analyzed to identify optimum forecasting periods, including interactions of multiple lags. The models were validated using epidemiological data from December 2015 to November 2017. Prepared models were compared based on Akaike's information criterion, Bayesian information criterion and residual analysis. The selected model forecasted correctly with mean absolute errors of 0.07 and 0.22, and root mean squared errors of 0.09 and 0.28, for the training and validation periods, respectively. There were no dengue epidemics observed in the district during the training period and nine outbreaks occurred during the forecasting period. The proposed model captured five outbreaks and correctly rejected 14 within the testing period of 24 months. The Peirce skill score of the model was 0.49, with a receiver operating characteristic of 86% and 92% sensitivity. The developed weather-based forecasting model allows warnings of impending dengue outbreaks and epidemics one month in advance with high accuracy. Depending upon climatic factors, the previous month's dengue cases had a significant effect on the dengue incidences of the current month. The simple, precise and understandable forecasting model developed could be used to manage limited public health resources effectively for patient management, vector surveillance and intervention programmes in the district.

  15. Open-ocean boundary conditions from interior data: Local and remote forcing of Massachusetts Bay

    USGS Publications Warehouse

    Bogden, P.S.; Malanotte-Rizzoli, P.; Signell, R.

    1996-01-01

    Massachusetts and Cape Cod Bays form a semienclosed coastal basin that opens onto the much larger Gulf of Maine. Subtidal circulation in the bay is driven by local winds and remotely driven flows from the gulf. The local-wind forced flow is estimated with a regional shallow water model driven by wind measurements. The model uses a gravity wave radiation condition along the open-ocean boundary. Results compare reasonably well with observed currents near the coast. In some offshore regions however, modeled flows are an order of magnitude less energetic than the data. Strong flows are observed even during periods of weak local wind forcing. Poor model-data comparisons are attributable, at least in part, to open-ocean boundary conditions that neglect the effects of remote forcing. Velocity measurements from within Massachusetts Bay are used to estimate the remotely forced component of the flow. The data are combined with shallow water dynamics in an inverse-model formulation that follows the theory of Bennett and McIntosh [1982], who considered tides. We extend their analysis to consider the subtidal response to transient forcing. The inverse model adjusts the a priori open-ocean boundary condition, thereby minimizing a combined measure of model-data misfit and boundary condition adjustment. A "consistency criterion" determines the optimal trade-off between the two. The criterion is based on a measure of plausibility for the inverse solution. The "consistent" inverse solution reproduces 56% of the average squared variation in the data. The local-wind-driven flow alone accounts for half of the model skill. The other half is attributable to remotely forced flows from the Gulf of Maine. The unexplained 44% comes from measurement errors and model errors that are not accounted for in the analysis. 

  16. Color filter array design based on a human visual model

    NASA Astrophysics Data System (ADS)

    Parmar, Manu; Reeves, Stanley J.

    2004-05-01

    To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.

  17. Disorder-induced amorphization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lam, N.Q.; Okamoto, P.R.; Li, Mo

    1997-03-01

    Many crystalline materials undergo a crystalline-to-amorphous (c-a) phase transition when subjected to energetic particle irradiation at low temperatures. By focusing on the mean-square static atomic displacement as a generic measure of chemical and topological disorder, we are led quite naturally to a generalized version of the Lindemann melting criterion as a conceptual framework for a unified thermodynamic approach to solid-state amorphizing transformations. In its simplest form, the generalized Lindemann criterion assumes that the sum of the static and dynamic mean-square atomic displacements is constant along the polymorphous melting curve, so that c-a transformations can be understood simply as melting of a critically-disordered crystal at temperatures below the glass transition temperature, where the supercooled liquid can persist indefinitely in a configurationally-frozen state. Evidence in support of the generalized Lindemann melting criterion for amorphization is provided by a large variety of experimental observations and by molecular dynamics simulations of heat-induced melting and of defect-induced amorphization of intermetallic compounds.

  18. A Method for Estimating Zero-Flow Pressure and Intracranial Pressure

    PubMed Central

    Caren, Marzban; Paul, Raymond Illian; David, Morison; Anne, Moore; Michel, Kliot; Marek, Czosnyka; Pierre, Mourad

    2012-01-01

    Background It has been hypothesized that critical closing pressure of cerebral circulation, or zero-flow pressure (ZFP), can estimate intracranial pressure (ICP). One ZFP estimation method employs extrapolation of arterial blood pressure versus blood-flow velocity. The aim of this study is to improve ICP predictions. Methods Two revisions are considered: 1) The linear model employed for extrapolation is extended to a nonlinear equation, and 2) the parameters of the model are estimated by an alternative criterion (not least-squares). The method is applied to data on transcranial Doppler measurements of blood-flow velocity, arterial blood pressure, and ICP, from 104 patients suffering from closed traumatic brain injury, sampled across the United States and England. Results The revisions lead to qualitative (e.g., precluding negative ICP) and quantitative improvements in ICP prediction. In going from the original to the revised method, the ±2 standard deviation of error is reduced from 33 to 24 mm Hg; the root-mean-squared error (RMSE) is reduced from 11 to 8.2 mm Hg. The distribution of RMSE is tighter as well; for the revised method the 25th and 75th percentiles are 4.1 and 13.7 mm Hg, respectively, as compared to 5.1 and 18.8 mm Hg for the original method. Conclusions Proposed alterations to a procedure for estimating ZFP lead to more accurate and more precise estimates of ICP, thereby offering improved means of estimating it noninvasively. The quality of the estimates is inadequate for many applications, but further work is proposed which may lead to clinically useful results. PMID:22824923

  19. Protein backbone and sidechain torsion angles predicted from NMR chemical shifts using artificial neural networks

    PubMed Central

    Shen, Yang; Bax, Ad

    2013-01-01

    A new program, TALOS-N, is introduced for predicting protein backbone torsion angles from NMR chemical shifts. The program relies far more extensively on the use of trained artificial neural networks than its predecessor, TALOS+. Validation on an independent set of proteins indicates that backbone torsion angles can be predicted for a larger, ≥ 90% fraction of the residues, with an error rate smaller than ca 3.5%, using an acceptance criterion that is nearly two-fold tighter than that used previously, and a root mean square difference between predicted and crystallographically observed (φ,ψ) torsion angles of ca 12°. TALOS-N also reports sidechain χ1 rotameric states for about 50% of the residues, and a consistency with reference structures of 89%. The program includes a neural network trained to identify secondary structure from residue sequence and chemical shifts. PMID:23728592

  20. Active impulsive noise control using maximum correntropy with adaptive kernel size

    NASA Astrophysics Data System (ADS)

    Lu, Lu; Zhao, Haiquan

    2017-03-01

    The active noise control (ANC) method, based on the principle of superposition, is an attractive way to attenuate noise signals. However, impulsive noise in ANC systems degrades the performance of the controller. In this paper, a filtered-x recursive maximum correntropy (FxRMC) algorithm is proposed based on the maximum correntropy criterion (MCC) to reduce the effect of outliers. The proposed FxRMC algorithm does not require any a priori information about the noise characteristics and outperforms the filtered-x least mean square (FxLMS) algorithm for impulsive noise. Meanwhile, in order to adjust the kernel size of the FxRMC algorithm online, a recursive approach is proposed that takes into account the past estimates of error signals over a sliding window. Simulation and experimental results in the context of active impulsive noise control demonstrate that the proposed algorithms achieve much better performance than existing algorithms in various noise environments.
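
    The robustness mechanism of correntropy-based adaptation is easy to see in a stochastic-gradient form (a plain MCC adaptive filter sketch with fixed kernel size; not the paper's recursive FxRMC with secondary-path filtering or its sliding-window kernel update):

      import numpy as np

      def mcc_lms(x, d, n_taps, mu, sigma):
          # The Gaussian kernel exp(-e^2 / (2 sigma^2)) scales each update,
          # so impulsive (large-error) samples barely move the weights.
          w = np.zeros(n_taps)
          for k in range(n_taps, len(x)):
              u = x[k - n_taps:k][::-1]
              e = d[k] - w @ u
              w += mu * np.exp(-e ** 2 / (2 * sigma ** 2)) * e * u
          return w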

  1. A VLSI chip set for real time vector quantization of image sequences

    NASA Technical Reports Server (NTRS)

    Baker, Richard L.

    1989-01-01

    The architecture and implementation of a VLSI chip set that vector quantizes (VQ) image sequences in real time is described. The chip set forms a programmable Single-Instruction, Multiple-Data (SIMD) machine which can implement various vector quantization encoding structures. Its VQ codebook may contain an unlimited number of codevectors, N, having dimension up to K = 64. Under a weighted least-squared-error criterion, the engine locates, at video rates, the best codevector in full-search or large tree-search VQ codebooks. The ability to manipulate tree-structured codebooks, coupled with parallelism and pipelining, permits searches in as few as O(log N) cycles. A full codebook search results in O(N) performance, compared to O(KN) for a Single-Instruction, Single-Data (SISD) machine. With this VLSI chip set, an entire video coder can be built on a single board that permits real-time experimentation with very large codebooks.
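
    The encoding kernel that the chip set parallelizes is the weighted nearest-codevector search; in software form (an illustration of the criterion, not the hardware microcode):

      import numpy as np

      def encode_block(block, codebook, weights):
          # Full search: index minimizing the weighted squared error
          # sum(w * (x - c)^2); O(N) comparisons of K-dimensional vectors.
          dists = np.sum(weights * (codebook - block) ** 2, axis=1)
          return int(np.argmin(dists))

    A tree-structured codebook replaces the argmin over all N codevectors with node-by-node decisions, which is what enables the O(log N)-cycle searches mentioned above.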

  2. Computer modeling of multiple-channel input signals and intermodulation losses caused by nonlinear traveling wave tube amplifiers

    NASA Technical Reports Server (NTRS)

    Stankiewicz, N.

    1982-01-01

    The multiple-channel input signal to a soft-limiter amplifier, such as a traveling wave tube, is represented as a finite, linear sum of Gaussian functions in the frequency domain. Linear regression is used to fit the channel shapes to a least-squares residual error. Distortions in the output signal, namely intermodulation products, are produced by the nonlinear gain characteristic of the amplifier and constitute the principal noise analyzed in this study. The signal-to-noise ratios are calculated for various input powers from saturation to 10 dB below saturation for two specific distributions of channels. A criterion for the truncation of the series expansion of the nonlinear transfer characteristic is given. It is found that the signal-to-noise ratios are very sensitive to the coefficients used in this expansion. Improper or incorrect truncation of the series leads to ambiguous results in the signal-to-noise ratios.

  3. Robust Inversion and Data Compression in Control Allocation

    NASA Technical Reports Server (NTRS)

    Hodel, A. Scottedward

    2000-01-01

    We present an off-line computational method for control allocation design. The control allocation function δ = F(ẑ)τ + δ₀(ẑ), mapping commanded body-frame torques to actuator commands, is implicitly specified by the trim condition δ₀(ẑ) and by a robust pseudo-inverse problem ‖I − G(z)F(ẑ)‖ < ε(z), where G(z) is a system Jacobian evaluated at operating point z, ẑ is an estimate of z, and ε(z) < 1 is a specified error tolerance. The allocation function F(ẑ) = Σᵢ ψᵢ(ẑ) Fᵢ is computed using a heuristic technique for selecting wavelet basis functions ψᵢ and a constrained least-squares criterion for selecting the allocation matrices Fᵢ. The method is applied to entry trajectory control allocation for a reusable launch vehicle (X-33).

  4. Response Surface Analysis of Experiments with Random Blocks

    DTIC Science & Technology

    1988-09-01

    partitioned into a lack-of-fit sum of squares, SSLOF, and a pure error sum of squares, SSPE. The latter is obtained by pooling the pure error sums of squares ... from the blocks. Tests concerning the polynomial effects can then proceed using SSPE as the error term in the denominators of the F test statistics. ... the center point in each of the three blocks is equal to SSPE = 2.0127 with 5 degrees of freedom. Hence, the lack-of-fit sum of squares is SSLOF

  5. Least-Squares Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1990-01-01

    Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler.

  6. Model Estimation Using Ridge Regression with the Variance Normalization Criterion. Interim Report No. 2. The Education and Inequality in Canada Project.

    ERIC Educational Resources Information Center

    Lee, Wan-Fung; Bulcock, Jeffrey Wilson

    The purposes of this study are: (1) to demonstrate the superiority of simple ridge regression over ordinary least squares regression through theoretical argument and empirical example; (2) to modify ridge regression through use of the variance normalization criterion; and (3) to demonstrate the superiority of simple ridge regression based on the…

  7. SU-G-BRC-17: Using Generalized Mean for Equivalent Square Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, S; Fan, Q; Lei, Y

    Purpose: Equivalent Square (ES) is a widely used concept in radiotherapy. It enables us to determine many important quantities for a rectangular treatment field, without measurement, based on the corresponding values from its ES field. In this study, we propose a Generalized Mean (GM) type ES formula and compare it with other established formulae using benchmark datasets. Methods: Our GM approach is expressed as ES = (w·fx^α + (1−w)·fy^α)^(1/α), where fx and fy are the field sizes, α is a power index, and w is a weighting factor. When α = −1 it reduces to the well-known Sterling-type ES formulae. In our study, α and w are determined through least-squares fitting. The Akaike Information Criterion (AIC) was used to benchmark the performance of each formula. The BJR (Supplement 17) ES field table for X-ray PDDs and the open-field output factor tables in the Varian TrueBeam representative dataset were used for validation. Results: Switching from α = −1 to α = −1.25, a 20% reduction in the standard deviation of the residual error in ES estimation was achieved for the BJR dataset. The maximum relative residual error was reduced from ∼3% (in the Sterling formula) or ∼2% (in the Vadash/Bjarngard formula) down to ∼1% in the GM formula for open fields of all energies and at rectangular field sizes from 3 cm to 40 cm in the Varian dataset. The improvement of the GM over the Sterling-type ES formulae is particularly noticeable for very elongated rectangular fields with short width. AIC analysis confirmed the superior performance of the GM formula after taking into account the expanded parameter space. Conclusion: The GM significantly outperforms Sterling-type formulae at slightly increased computational cost. The GM calculation may nullify the requirement of data measurement for many rectangular fields and hence shorten the Linac commissioning process. Improved dose calculation accuracy is also expected by adopting the GM formula into treatment planning and secondary MU check systems.
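
    The GM formula itself is one line; a sketch with the Sterling special case made explicit (the default α below echoes the abstract's BJR result, and fitting α and w to a benchmark table would use any least-squares routine):

      import numpy as np

      def es_generalized_mean(fx, fy, alpha=-1.25, w=0.5):
          # ES = (w*fx**alpha + (1-w)*fy**alpha)**(1/alpha).
          # alpha = -1, w = 0.5 recovers the Sterling form 2*fx*fy/(fx+fy).
          return (w * fx ** alpha + (1 - w) * fy ** alpha) ** (1 / alpha)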

  8. Validity and reliability of the 1/4 mile run-walk test in physically active children and adolescents.

    PubMed

    Ruiz, Jonatan R; Ortega, Francisco B; Castro-Piñero, Jose

    2014-11-30

    We investigated the criterion-related validity and the reliability of the 1/4 mile run-walk test (1/4MRWT) in children and adolescents. A total of 86 children (n=42 girls) completed a maximal graded treadmill test using a gas analyzer and the 1/4MRWT. We investigated the test-retest reliability of the 1/4MRWT in a different group of children and adolescents (n=995, n=418 girls). The 1/4MRWT time, sex, and BMI significantly contributed to the prediction of measured VO2peak (R²=0.32). There was no systematic bias in the cross-validation group (P>0.1). The root mean squared error (RMSE) and the percentage error were 6.9 ml/kg/min and 17.7%, respectively, and the accurate prediction rate (i.e., the percentage of estimations within ±4.5 ml/kg/min of VO2peak) was 48.8%. The reliability analysis showed that the mean inter-trial difference ranged from 0.6 seconds in children aged 6-11 years to 1.3 seconds in adolescents aged 12-17 years (all P …). Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  9. A hybrid intelligent controller for a twin rotor MIMO system and its hardware implementation.

    PubMed

    Juang, Jih-Gau; Liu, Wen-Kai; Lin, Ren-Wei

    2011-10-01

    This paper presents a fuzzy PID control scheme with a real-valued genetic algorithm (RGA) applied to a setpoint control problem. The objective is to control a twin rotor MIMO system (TRMS) so that it moves quickly and accurately to the desired attitudes, both the pitch angle and the azimuth angle, in a cross-coupled condition. A fuzzy compensator is applied to the PID controller. The proposed control structure includes four PID controllers with independent inputs in 2-DOF. In order to reduce the total error and control energy, all parameters of the controller are obtained by an RGA with the system performance index as a fitness function. The system performance index uses the integral of time multiplied by squared error (ITSE) criterion to build a suitable fitness function for the RGA. A new method enabling the RGA to solve for more than 10 parameters in the control scheme is investigated. For real-time control, a Xilinx Spartan II SP200 FPGA (Field Programmable Gate Array) is employed to construct a hardware-in-the-loop system by writing VHDL for this FPGA. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
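
    The ITSE ingredient of such a fitness function is simple to compute from a simulated step response (trapezoidal approximation; our sketch, not the authors' implementation):

      import numpy as np

      def itse(t, e):
          # Integral of time multiplied by squared error over the record,
          # approximated by the trapezoidal rule on f(t) = t * e(t)**2.
          f = t * e ** 2
          return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)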

  10. Performance Comparison between CDTD and STTD for DS-CDMA/MMSE-FDE with Frequency-Domain ICI Cancellation

    NASA Astrophysics Data System (ADS)

    Takeda, Kazuaki; Kojima, Yohei; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide a better bit error rate (BER) performance than rake combining. However, residual inter-chip interference (ICI) remains after MMSE-FDE and degrades the BER performance. Recently, we showed that frequency-domain ICI cancellation can bring the BER performance close to the theoretical lower bound. To further improve the BER performance, transmit antenna diversity is effective. Cyclic delay transmit diversity (CDTD) can increase the number of equivalent paths and hence achieve a large frequency diversity gain. Space-time transmit diversity (STTD) can obtain antenna diversity gain from space-time coding and achieve a better BER performance than CDTD. The objective of this paper is to show that the BER performance degradation of CDTD is mainly due to the residual ICI, and that the introduction of ICI cancellation gives almost the same BER performance as STTD. This provides the important result that CDTD can then offer a higher throughput than STTD. This is confirmed by computer simulation: the results show that CDTD achieves higher throughput than STTD when ICI cancellation is introduced.
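
    A minimal sketch of single-carrier MMSE-FDE as described above: per-frequency weights of the familiar form H*/(|H|² + 1/SNR) applied to one BPSK block over a synthetic multipath channel (the ICI cancellation step itself is not shown):

        import numpy as np

        rng = np.random.default_rng(0)
        Nc = 64                                # FFT block size
        h = np.array([0.8, 0.5, 0.3])          # hypothetical chip-spaced path gains
        H = np.fft.fft(h, Nc)                  # channel frequency response
        snr = 10.0                             # symbol-energy-to-noise ratio (linear)

        x = rng.choice([-1.0, 1.0], Nc)        # BPSK chip block (cyclic prefix assumed)
        noise = rng.normal(0, np.sqrt(0.5/snr), Nc) + 1j*rng.normal(0, np.sqrt(0.5/snr), Nc)
        y = np.fft.ifft(np.fft.fft(x) * H) + noise   # circular-convolution channel model

        W = np.conj(H) / (np.abs(H)**2 + 1.0/snr)    # MMSE-FDE weights
        x_hat = np.fft.ifft(np.fft.fft(y) * W).real
        print(np.mean(np.sign(x_hat) == x))          # fraction of chips recovered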

  11. Effect of temperature on microbial growth rate-mathematical analysis: the Arrhenius and Eyring-Polanyi connections.

    PubMed

    Huang, Lihan; Hwang, Andy; Phillips, John

    2011-10-01

    The objective of this work is to develop a mathematical model for evaluating the effect of temperature on the rate of microbial growth. The new mathematical model is derived by combining and modifying the Arrhenius equation and the Eyring-Polanyi transition theory. The new model, suitable for both the suboptimal and the entire growth temperature ranges, was validated using a collection of 23 selected temperature-growth rate curves belonging to 5 groups of microorganisms, including Pseudomonas spp., Listeria monocytogenes, Salmonella spp., Clostridium perfringens, and Escherichia coli, from the published literature. The curve fitting is accomplished by nonlinear regression using the Levenberg-Marquardt algorithm. The resulting estimated growth rate (μ) values are highly correlated with the data collected from the literature (R² = 0.985, slope = 1.0, intercept = 0.0). The bias factor (B_f) of the new model is very close to 1.0, while the accuracy factor (A_f) ranges from 1.0 to 1.22 for most data sets. The new model compares favorably with the Ratkowsky square root model and the Eyring equation. Even with more parameters, the Akaike information criterion, Bayesian information criterion, and mean square errors of the new model are not statistically different from those of the square root model and the Eyring equation, suggesting that the model can be used to describe the inherent relationship between temperature and microbial growth rates. The results of this work show that the new growth rate model is suitable for describing the effect of temperature on microbial growth rate. Practical Application: Temperature is one of the most significant factors affecting the growth of microorganisms in foods. This study attempts to develop and validate a mathematical model to describe the temperature dependence of microbial growth rate. The findings show that the new model is accurate and can be used to describe the effect of temperature on microbial growth rate in foods. Journal of Food Science © 2011 Institute of Food Technologists® No claim to original US government works.
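
    The abstract does not give the new model's closed form, so the sketch below instead fits the comparison model it names, the Ratkowsky square root model sqrt(μ) = b(T − Tmin), to hypothetical suboptimal-range data with the Levenberg-Marquardt algorithm mentioned above:

        import numpy as np
        from scipy.optimize import curve_fit

        def ratkowsky(T, b, Tmin):
            return (b * (T - Tmin)) ** 2   # growth rate mu, from sqrt(mu) = b*(T - Tmin)

        T = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0])    # temperature, deg C
        mu = np.array([0.05, 0.17, 0.36, 0.62, 0.95, 1.35])   # hypothetical rates, 1/h

        popt, _ = curve_fit(ratkowsky, T, mu, p0=[0.03, 5.0], method="lm")
        print("b =", popt[0], "Tmin =", popt[1])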

  12. Kernel Wiener filter and its application to pattern recognition.

    PubMed

    Yoshino, Hirokazu; Dong, Chen; Washizawa, Yoshikazu; Yamashita, Yukihiko

    2010-11-01

    The Wiener filter (WF) is widely used for inverse problems. From an observed signal, it provides the best estimated signal, among linear operators, with respect to the squared error averaged over the original and the observed signals. The kernel WF (KWF), extended directly from the WF, has the problem that additive noise has to be handled through samples. Since the computational complexity of kernel methods depends on the number of samples, a huge computational cost is incurred in this case. By using a first-order approximation of the kernel functions, we realize a KWF that can handle such noise not through samples but as a random variable. We also propose an error estimation method for kernel filters using these approximations. To show the advantages of the proposed methods, we conducted experiments on image denoising and error estimation. We also apply the KWF to classification, since the KWF can provide an approximated result of the maximum a posteriori classifier, which provides the best recognition accuracy. The noise term in the criterion can be used for classification in the presence of noise, or as a new regularization that suppresses changes in the input space, whereas the ordinary regularization for kernel methods suppresses changes in the feature space. To demonstrate these advantages, we conducted experiments on binary and multiclass classification and on classification in the presence of noise.

  13. Validity of an ultra-wideband local positioning system to measure locomotion in indoor sports.

    PubMed

    Serpiello, F R; Hopkins, W G; Barnes, S; Tavrou, J; Duthie, G M; Aughey, R J; Ball, K

    2018-08-01

    The validity of an Ultra-wideband (UWB) positioning system was investigated during linear and change-of-direction (COD) running drills. Six recreationally-active men performed ten repetitions of four activities (walking, jogging, maximal acceleration, and 45º COD) on an indoor court. Activities were repeated twice, in the centre of the court and on the side. Participants wore a receiver tag (Clearsky T6, Catapult Sports) and two reflective markers placed on the tag to allow for comparisons with the criterion system (Vicon). Distance, mean and peak velocity, acceleration, and deceleration were assessed. Validity was assessed via percentage least-square means difference (Clearsky-Vicon) with 90% confidence interval and magnitude-based inference; typical error was expressed as within-subject standard deviation. The mean differences for distance, mean/peak speed, and mean/peak accelerations in the linear drills were in the range of 0.2-12%, with typical errors between 1.2 and 9.3%. Mean and peak deceleration had larger differences and errors between systems. In the COD drill, moderate-to-large differences were detected for the activity performed in the centre of the court, increasing to large/very large on the side. When filtered and smoothed following a similar process, the UWB-based positioning system had acceptable validity, compared to Vicon, to assess movements representative of indoor sports.

  14. SU-F-J-25: Position Monitoring for Intracranial SRS Using BrainLAB ExacTrac Snap Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, S; McCaw, T; Huq, M

    2016-06-15

    Purpose: To determine the accuracy of position monitoring with BrainLAB ExacTrac snap verification following couch rotations during intracranial SRS. Methods: A CT scan of an anthropomorphic head phantom was acquired using 1.25mm slices. The isocenter was positioned near the centroid of the frontal lobe. The head phantom was initially aligned on the treatment couch using cone-beam CT, then repositioned using ExacTrac x-ray verification with residual errors less than 0.2mm and 0.2°. Snap verification was performed over the full range of couch angles in 15° increments with known positioning offsets of 0–3mm applied to the phantom along each axis. At each couch angle, the smallest tolerance was determined for which no positioning deviation was detected. Results: For couch angles 30°–60° from the central position, where the longitudinal axis of the phantom is approximately aligned with the beam axis of one x-ray tube, snap verification consistently detected positioning errors exceeding the maximum 8mm tolerance. Defining localization error as the difference between the known offset and the minimum tolerance for which no deviation was detected, the RMS error is mostly less than 1mm outside of couch angles 30°–60° from the central couch position. Given separate measurements of patient position from the two imagers, whether to proceed with treatment can be determined by requiring a reading within tolerance from just one (OR criterion) or both (AND criterion) imagers. Using a positioning tolerance of 1.5mm, snap verification has sensitivity and specificity of 94% and 75%, respectively, with the AND criterion, and 67% and 93%, respectively, with the OR criterion. If readings exceeding the maximum tolerance are excluded, the sensitivity and specificity are 88% and 86%, respectively, with the AND criterion. Conclusion: With a positioning tolerance of 1.5mm, ExacTrac snap verification can be used during intracranial SRS with sensitivity and specificity between 85% and 90%.
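
    A minimal sketch of the AND/OR decision rules above, framed as error detection: a case is flagged if either imager reads out of tolerance (AND criterion for proceeding) or only if both do (OR criterion). Readings and ground-truth labels are made up:

        import numpy as np

        tol = 1.5                                           # mm positioning tolerance
        imager_a = np.array([0.8, 1.9, 1.2, 2.4, 0.6])      # hypothetical residuals (mm)
        imager_b = np.array([1.1, 1.4, 2.0, 2.6, 0.9])
        error_present = np.array([False, True, False, True, False])  # known offsets

        flag_and = ~((imager_a <= tol) & (imager_b <= tol))  # AND: both must pass
        flag_or = ~((imager_a <= tol) | (imager_b <= tol))   # OR: one pass suffices

        def sens_spec(flag, truth):
            sens = np.mean(flag[truth])      # flagged when an error is present
            spec = np.mean(~flag[~truth])    # not flagged when position is good
            return sens, spec

        print("AND:", sens_spec(flag_and, error_present))   # more sensitive, less specific
        print("OR: ", sens_spec(flag_or, error_present))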

  15. SU-E-T-20: A Correlation Study of 2D and 3D Gamma Passing Rates for Prostate IMRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, D; Sun Yat-sen University Cancer Center, Guangzhou, Guangdong; Wang, B

    2015-06-15

    Purpose: To investigate the correlation between the two-dimensional gamma passing rate (2D %GP) and the three-dimensional gamma passing rate (3D %GP) in prostate IMRT quality assurance. Methods: Eleven prostate IMRT plans were randomly selected from the clinical database and were used to obtain dose distributions in the phantom and patient. Three types of delivery errors (MLC bank sag errors, central MLC errors and monitor unit errors) were intentionally introduced to modify the clinical plans through an in-house Matlab program, resulting in 187 modified plans. The 2D %GP and 3D %GP were analyzed using different dose-difference and distance-to-agreement criteria (1%/1mm, 2%/2mm and 3%/3mm) and a 20% dose threshold. The 2D %GP and 3D %GP were then compared not only for the whole region, but also for the PTVs and critical structures, using Pearson's correlation coefficient (γ). Results: For the different delivery errors, the average comparison of 2D %GP and 3D %GP led to different conclusions. For the whole dose distribution, 2D %GP and 3D %GP had strong correlations (Pearson's γ value >0.8) at the 1%/1mm and 2%/2mm criteria, but not at the 3%/3mm criterion. Compared with the whole region, the correlations of 2D %GP and 3D %GP for the PTV were better (the γ values for the 1%/1mm, 2%/2mm and 3%/3mm criteria were 0.959, 0.931 and 0.855, respectively). However, for the rectum, there was no correlation between 2D %GP and 3D %GP. Conclusion: For prostate IMRT, the correlation between 2D %GP and 3D %GP for the PTV is better than that for normal structures. The lower dose-difference and DTA criteria show less difference between 2D %GP and 3D %GP. Other factors such as dosimeter characteristics and TPS algorithm bias may also influence the correlation between 2D %GP and 3D %GP.
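
    For reference, the gamma index underlying the %GP values above is gamma(r) = min over r' of sqrt(Δr²/DTA² + ΔD²/ΔD_tol²), with a point passing when gamma ≤ 1. A minimal 1D sketch on synthetic profiles (the study itself uses 2D and 3D dose distributions):

        import numpy as np

        def gamma_pass_rate_1d(x, d_eval, d_ref, dta_mm=2.0, dd_pct=2.0, thresh=0.2):
            dd = dd_pct / 100.0 * d_ref.max()       # global dose-difference criterion
            mask = d_eval >= thresh * d_ref.max()   # 20% dose threshold, as above
            gam = np.empty(x.size)
            for i in range(x.size):
                dist = (x[i] - x) / dta_mm
                dose = (d_eval[i] - d_ref) / dd
                gam[i] = np.sqrt(dist**2 + dose**2).min()   # search over reference points
            return 100.0 * np.mean(gam[mask] <= 1.0)        # passing rate (%)

        x = np.linspace(-30.0, 30.0, 241)                   # mm
        ref = np.exp(-(x / 15.0) ** 4)                      # hypothetical measured profile
        ev = np.exp(-((x - 0.5) / 15.0) ** 4)               # hypothetical calculated profile
        print(gamma_pass_rate_1d(x, ev, ref))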

  16. Error propagation of partial least squares for parameters optimization in NIR modeling.

    PubMed

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-05

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was characterized by both type I and type II error. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%, respectively. The results demonstrated how, and to what extent, the different modeling parameters affect the error propagation of PLS for parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials completed a workable process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, it could provide significant guidance for the selection of modeling parameters for other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.

  17. Error propagation of partial least squares for parameters optimization in NIR modeling

    NASA Astrophysics Data System (ADS)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-01

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open-source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was characterized by both type I and type II error. For example, when the variable importance in projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%, respectively. The results demonstrated how, and to what extent, the different modeling parameters affect the error propagation of PLS for parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials completed a workable process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, it could provide significant guidance for the selection of modeling parameters for other multivariate calibration models.

  18. SU-E-T-325: The New Evaluation Method of the VMAT Plan Delivery Using Varian DynaLog Files and Modulation Complexity Score (MCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tateoka, K; Graduate School of Medicine, Sapporo Medical University, Sapporo, JP; Fujimomo, K

    2014-06-01

    Purpose: The aim of this study is to evaluate the use of Varian DynaLog files to verify VMAT plan delivery, together with the modulation complexity score (MCS) of VMAT. Methods: Delivery accuracy of machine performance was quantified by multileaf collimator (MLC) position errors, gantry angle errors and fluence delivery accuracy for volumetric modulated arc therapy (VMAT). The relationship between machine performance and plan complexity was also investigated using the modulation complexity score (MCS). Planned and actual MLC positions, gantry angles and delivered fractions of monitor units were extracted from Varian DynaLog files; these factors were taken from the MLC control file of the record-and-verify system. Planned and delivered beam data were compared to determine leaf position errors and gantry angle errors. Analysis was also performed on planned and actual fluence maps reconstructed from the DynaLog files. This analysis was performed for all treatment fractions of 5 prostate VMAT plans. The analysis of the DynaLog files was carried out by in-house programming in Visual C++. Results: The root mean square leaf position and gantry angle errors were about 0.12 and 0.15, respectively. The gamma passing rate between planned and actual fluence maps at the 3%/3 mm criterion was about 99.21%. The leaf position errors were not directly related to plan complexity as determined by the MCS, whereas the gantry angle errors were directly related to plan complexity as determined by the MCS. Conclusion: This study shows that Varian DynaLog files can be used to diagnose VMAT delivery errors that are not detectable with phantom-based quality assurance. Furthermore, the MCS of a VMAT plan can be used to evaluate delivery accuracy for patients receiving VMAT. Machine performance was found to be directly related to plan complexity, but this is not the dominant determinant of delivery accuracy.

  19. On Correlations, Distances and Error Rates.

    ERIC Educational Resources Information Center

    Dorans, Neil J.

    The nature of the criterion (dependent) variable may play a useful role in structuring a list of classification/prediction problems. Such criteria are continuous, binary (dichotomous), or multichotomous in nature. In this paper, discussion is limited to the continuous, normally distributed criterion scenarios. For both cases, it is assumed that the…

  20. A Novel Hybrid Error Criterion-Based Active Control Method for on-Line Milling Vibration Suppression with Piezoelectric Actuators and Sensors

    PubMed Central

    Zhang, Xingwu; Wang, Chenxi; Gao, Robert X.; Yan, Ruqiang; Chen, Xuefeng; Wang, Shibin

    2016-01-01

    Milling vibration is one of the most serious factors affecting machining quality and precision. In this paper, a novel hybrid error criterion-based frequency-domain LMS active control method is constructed and used for vibration suppression in milling processes with piezoelectric actuators and sensors, in which only one Fast Fourier Transform (FFT) is used and no Inverse Fast Fourier Transform (IFFT) is involved. The correction formulas are derived by a steepest-descent procedure and the control parameters are analyzed and optimized. Then, a novel hybrid error criterion is constructed to improve the adaptability, reliability and anti-interference ability of the control algorithm. Finally, based on piezoelectric actuators and acceleration sensors, a simulation of a spindle and a milling process experiment are presented to verify the proposed method. In addition, a protection program is added to the control flow to enhance the reliability of the control method in applications. The simulation and experiment results indicate that the proposed method is an effective and reliable way to achieve on-line vibration suppression, and that machining quality can be noticeably improved. PMID:26751448
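
    The hybrid criterion itself is not spelled out in the abstract, so the following is only a generic sketch of the underlying idea of adaptive tonal vibration cancellation: a complex weight at the dominant milling frequency is updated by steepest descent on the squared error. Signals and parameters are synthetic:

        import numpy as np

        rng = np.random.default_rng(1)
        fs, f0, n = 2000.0, 120.0, 4000        # sample rate (Hz), tonal frequency, samples
        t = np.arange(n) / fs
        d = np.sin(2*np.pi*f0*t + 0.7) + 0.05*rng.normal(size=n)  # measured vibration

        ref = np.exp(1j * 2*np.pi*f0 * t)      # complex reference at f0
        w, mu = 0.0 + 0.0j, 0.02               # adaptive weight and step size
        e = np.empty(n)
        for k in range(n):
            y = (w * ref[k]).real              # secondary (actuator) output
            e[k] = d[k] - y                    # residual vibration
            w += mu * e[k] * np.conj(ref[k])   # LMS steepest-descent update

        print(np.std(d[-500:]), np.std(e[-500:]))  # residual shrinks after convergence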

  1. [Analysis of the impact of job characteristics and organizational support for workplace violence].

    PubMed

    Li, M L; Chen, P; Zeng, F H; Cui, Q L; Zeng, J; Zhao, X S; Li, Z N

    2017-12-20

    Objective: To analyze the effect of job characteristics and organizational support on workplace violence, explore the influence path and the theoretical model, and provide a theoretical basis for reducing workplace violence. Methods: Stratified random sampling was used to select 813 medical staff, conductors and bus drivers in Chongqing, who were surveyed with a self-made questionnaire covering job characteristics, organizational attitude toward workplace violence, exposure to workplace violence, fear of violence, etc., from February to October 2014. Amos 21.0 was used for path analysis and to establish a theoretical model of workplace violence. Results: The odds ratios of job characteristics and organizational attitude for workplace violence were 6.033 and 0.669, respectively, and the path coefficients were 0.41 and −0.14, respectively (P<0.05). The fit indexes of the model were: chi-square (χ²) = 67.835, ratio of chi-square to degrees of freedom (χ²/df) = 5.112, goodness-of-fit index (GFI) = 0.970, adjusted goodness-of-fit index (AGFI) = 0.945, normed fit index (NFI) = 0.923, root mean square error of approximation (RMSEA) = 0.071, fit criterion (Fmin) = 0.092, so the model fit the data well. Conclusion: Job characteristics are a risk factor for workplace violence while organizational attitude is a protective factor, so changing job characteristics and increasing the organization's willingness to deal with workplace violence are conducive to reducing workplace violence and increasing loyalty to the unit.

  2. A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2002-01-01

    The mixed-mode bending test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. The test uses a unidirectional composite specimen with an artificial delamination subjected to bending loads to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data can be created using the standard that are significantly in error. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion prescribing conditions under which the nonlinear error will remain below 5%.

  3. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    PubMed

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model, accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that ignore such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate, completely model-free error estimation is impossible. This paper provides analytic, asymptotically exact, finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions for the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.

  5. Optimal sensor placement for spatial lattice structure based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Gao, Wei-cheng; Sun, Yi; Xu, Min-jian

    2008-10-01

    Optimal sensor placement technique plays a key role in structural health monitoring of spatial lattice structures. This paper considers the problem of locating sensors on a spatial lattice structure with the aim of maximizing the data information, so that the structural dynamic behavior can be fully characterized. Based on the criterion of optimal sensor placement for modal tests, an improved genetic algorithm is introduced to find the optimal placement of sensors. The modal strain energy (MSE) and the modal assurance criterion (MAC) were taken as the fitness functions, respectively, so that three placement designs were produced. A decimal two-dimensional array coding method, instead of binary coding, is proposed to code the solution. A forced mutation operator is introduced when identical genes appear during the crossover procedure. A computational simulation of a 12-bay plane truss model was implemented to demonstrate the feasibility of the three optimal algorithms above. The optimal sensor placements obtained using the improved genetic algorithm are compared with those obtained by the existing genetic algorithm using binary coding. Furthermore, a comparison criterion based on the mean square error between the finite element method (FEM) mode shapes and the Guyan expansion mode shapes identified by the data-driven stochastic subspace identification (SSI-DATA) method is employed to demonstrate the advantage of the different fitness functions. The results showed that the innovations in the genetic algorithm proposed in this paper can enlarge the gene storage and improve the convergence of the algorithm. More importantly, all three optimal sensor placement methods provide reliable results and accurately identify the vibration characteristics of the 12-bay plane truss model.

  6. Using the Criterion-Predictor Factor Model to Compute the Probability of Detecting Prediction Bias with Ordinary Least Squares Regression

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew

    2012-01-01

    The study of prediction bias is important and the last five decades include research studies that examined whether test scores differentially predict academic or employment performance. Previous studies used ordinary least squares (OLS) to assess whether groups differ in intercepts and slopes. This study shows that OLS yields inaccurate inferences…

  7. Discordance between net analyte signal theory and practical multivariate calibration.

    PubMed

    Brown, Christopher D

    2004-08-01

    Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.

  8. A Review of the Proposed K_Isi Offset-Secant Method for Size-Independent Linear-Elastic Toughness Evaluation

    NASA Technical Reports Server (NTRS)

    James, Mark; Wells, Doug; Allen, Phillip; Wallin, Kim

    2017-01-01

    The proposed size-independent linear-elastic fracture toughness, K_Isi, for potential inclusion in ASTM E399 targets a consistent 0.5 millimeters of crack extension for all specimen sizes through an offset secant that is a function of the specimen ligament length. The K_Isi method also includes an increase in allowable deformation, and the removal of the P_max/P_Q criterion. A finite element study of the K_Isi test method confirms the viability of the increased deformation limit, but has also revealed a few areas of concern. Findings: 1. The deformation limit b_o ≥ 1.1(K_I/σ_ys)² maintains a K-dominant crack tip field with limited plastic contribution to the fracture energy; 2. The three-dimensional effects on compliance and the shape of the force versus CMOD (crack-mouth opening displacement) trace are significant compared to a plane strain assumption; 3. The non-linearity in the force versus CMOD trace at deformations higher than the current limit of 2.5(K_I/σ_ys)² is sufficient to introduce error or even "false calls" regarding crack extension when using a constant offset secant line; this issue is more significant for specimens with W (width) ≥ 2 inches; 4. A non-linear plasticity correction factor in the offset secant may improve the viability of the method at deformations between 2.5(K_I/σ_ys)² and 1.1(K_I/σ_ys)².

  9. The impact of model prediction error in designing geodetic networks for crustal deformation applications

    NASA Astrophysics Data System (ADS)

    Murray, J. R.

    2017-12-01

    Earth surface displacements measured at Global Navigation Satellite System (GNSS) sites record crustal deformation due, for example, to slip on faults underground. A primary objective in designing geodetic networks to study crustal deformation is to maximize the ability to recover parameters of interest like fault slip. Given Green's functions (GFs) relating observed displacement to motion on buried dislocations representing a fault, one can use various methods to estimate spatially variable slip. However, assumptions embodied in the GFs, e.g., use of a simplified elastic structure, introduce spatially correlated model prediction errors (MPE) not reflected in measurement uncertainties (Duputel et al., 2014). In theory, selection algorithms should incorporate inter-site correlations to identify measurement locations that give unique information. I assess the impact of MPE on site selection by expanding existing methods (Klein et al., 2017; Reeves and Zhe, 1999) to incorporate this effect. Reeves and Zhe's algorithm sequentially adds or removes a predetermined number of data according to a criterion that minimizes the sum of squared errors (SSE) on parameter estimates. Adapting this method to GNSS network design, Klein et al. select new sites that maximize model resolution, using trade-off curves to determine when additional resolution gain is small. Their analysis uses uncorrelated data errors and GFs for a uniform elastic half space. I compare results using GFs for spatially variable strike slip on a discretized dislocation in a uniform elastic half space, a layered elastic half space, and a layered half space with inclusion of MPE. I define an objective criterion to terminate the algorithm once the next site removal would increase SSE more than the expected incremental SSE increase if all sites had equal impact. Using a grid of candidate sites with 8 km spacing, I find the relative value of the selected sites (defined by the percent increase in SSE that further removal of each site would cause) is more uniform when MPE is included. However, the number and distribution of selected sites depends primarily on site location relative to the fault. For this test case, inclusion of MPE has minimal practical impact; I will investigate whether these findings hold for more densely spaced candidate grids and dipping faults.

  10. Error Consistency in Acquired Apraxia of Speech with Aphasia: Effects of the Analysis Unit

    ERIC Educational Resources Information Center

    Haley, Katarina L.; Cunningham, Kevin T.; Eaton, Catherine Torrington; Jacks, Adam

    2018-01-01

    Purpose: Diagnostic recommendations for acquired apraxia of speech (AOS) have been contradictory concerning whether speech sound errors are consistent or variable. Studies have reported divergent findings that, on face value, could argue either for or against error consistency as a diagnostic criterion. The purpose of this study was to explain…

  11. Developing a tool to measure satisfaction among health professionals in sub-Saharan Africa

    PubMed Central

    2013-01-01

    Background: In sub-Saharan Africa, lack of motivation and job dissatisfaction have been cited as causes of poor healthcare quality and outcomes. Measuring health workers' satisfaction in a way adapted to sub-Saharan African working conditions and cultures is a challenge. The objective of this study was to develop a valid and reliable instrument to measure satisfaction among health professionals in the sub-Saharan African context. Methods: A survey was conducted in Senegal and Mali in 2011 among 962 care providers (doctors, midwives, nurses and technicians) practicing in 46 hospitals (capital, regional and district). The participation rate was very high: 97% (937/962). After exploratory factor analysis (EFA), construct validity was assessed through confirmatory factor analysis (CFA). The discriminant validity of the subscales was evaluated by comparing the average variance extracted (AVE) for each construct with the squared interconstruct correlation (SIC); finally, for criterion validity, each subscale was tested against two hypotheses. Two dimensions of reliability were assessed: internal consistency, with Cronbach's alpha for the subscales, and stability over time, using a test-retest process. Results: Eight dimensions of satisfaction encompassing 24 items were identified and validated using a process that combined psychometric analyses and expert opinions: continuing education, salary and benefits, management style, tasks, work environment, workload, moral satisfaction and job stability. All eight dimensions demonstrated significant discriminant validity. The final model showed good performance, with a root mean square error of approximation (RMSEA) of 0.0508 (90% CI: 0.0448 to 0.0569) and a comparative fit index (CFI) of 0.9415. The concurrent criterion validity of the eight dimensions was good. Reliability was assessed based on internal consistency, which was good for all dimensions but one (moral satisfaction < 0.70). Test-retest showed satisfactory temporal stability (intraclass coefficient range: 0.60 to 0.91). Conclusions: Job satisfaction is a complex construct; this study provides a multidimensional instrument whose content, construct and criterion validities were verified to ensure its suitability for the sub-Saharan African context. When using these subscales in further studies, the variability of the reliability of the subscales should be taken into account when calculating sample sizes. The instrument will be useful in evaluative studies to help guide interventions aimed at improving both the quality of care and its effectiveness. PMID:23826720

  12. Signal-Preserving Erratic Noise Attenuation via Iterative Robust Sparsity-Promoting Filter

    DOE PAGES

    Zhao, Qiang; Du, Qizhen; Gong, Xufei; ...

    2018-04-06

    Sparse-domain thresholding filters are highly effective in removing Gaussian random noise under a Gaussian distribution assumption. Erratic noise, which designates non-Gaussian noise consisting of large isolated events with known or unknown distribution, also needs to be taken into account explicitly. However, conventional sparse-domain thresholding filters based on the least-squares (LS) criterion are severely sensitive to data with high-amplitude, non-Gaussian noise, i.e., erratic noise, which makes the suppression of this type of noise extremely challenging. In this paper, we present a robust sparsity-promoting denoising model in which the LS criterion is replaced by the Huber criterion to weaken the effects of erratic noise. Random and erratic noise are distinguished by a data-adaptive parameter: random noise is described by its mean square, while erratic noise is downweighted through a damped weight. Unlike conventional sparse-domain thresholding filters, defining the misfit between noisy data and recovered signal via the Huber criterion results in a nonlinear optimization problem. With the help of theoretical pseudoseismic data, an iterative robust sparsity-promoting filter is proposed to transform the nonlinear optimization problem into a linear LS problem through an iterative procedure. The main advantage of this transformation is that the nonlinear denoising filter can be solved by conventional LS solvers. Finally, tests with several data sets demonstrate that the proposed denoising filter can successfully attenuate erratic noise without damaging the useful signal, compared with conventional denoising approaches based on the LS criterion.
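
    A minimal sketch of the Huber idea above in its simplest setting: an ordinary least-squares problem is robustified by iteratively reweighted least squares, in which residuals beyond a threshold receive damped weights. The sparse transform and thresholding machinery of the paper is omitted:

        import numpy as np

        def huber_irls(A, y, delta=1.0, iters=20):
            x = np.linalg.lstsq(A, y, rcond=None)[0]        # plain LS starting point
            for _ in range(iters):
                r = y - A @ x
                # Huber weights: 1 inside the threshold, damped outside.
                w = np.where(np.abs(r) <= delta, 1.0, delta / np.maximum(np.abs(r), 1e-12))
                x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
            return x

        rng = np.random.default_rng(2)
        A = rng.normal(size=(200, 5))
        x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
        y = A @ x_true + 0.1 * rng.normal(size=200)
        y[::25] += 20.0                                     # large isolated "erratic" events

        print(np.abs(huber_irls(A, y) - x_true).max())                      # robust estimate
        print(np.abs(np.linalg.lstsq(A, y, rcond=None)[0] - x_true).max())  # plain LS, biased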

  13. Signal-Preserving Erratic Noise Attenuation via Iterative Robust Sparsity-Promoting Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Qiang; Du, Qizhen; Gong, Xufei

    Sparse-domain thresholding filters are highly effective in removing Gaussian random noise under a Gaussian distribution assumption. Erratic noise, which designates non-Gaussian noise consisting of large isolated events with known or unknown distribution, also needs to be taken into account explicitly. However, conventional sparse-domain thresholding filters based on the least-squares (LS) criterion are severely sensitive to data with high-amplitude, non-Gaussian noise, i.e., erratic noise, which makes the suppression of this type of noise extremely challenging. In this paper, we present a robust sparsity-promoting denoising model in which the LS criterion is replaced by the Huber criterion to weaken the effects of erratic noise. Random and erratic noise are distinguished by a data-adaptive parameter: random noise is described by its mean square, while erratic noise is downweighted through a damped weight. Unlike conventional sparse-domain thresholding filters, defining the misfit between noisy data and recovered signal via the Huber criterion results in a nonlinear optimization problem. With the help of theoretical pseudoseismic data, an iterative robust sparsity-promoting filter is proposed to transform the nonlinear optimization problem into a linear LS problem through an iterative procedure. The main advantage of this transformation is that the nonlinear denoising filter can be solved by conventional LS solvers. Finally, tests with several data sets demonstrate that the proposed denoising filter can successfully attenuate erratic noise without damaging the useful signal, compared with conventional denoising approaches based on the LS criterion.

  14. Microwave Photonic Architecture for Direction Finding of LPI Emitters: Post-Processing for Angle of Arrival Estimation

    DTIC Science & Technology

    2016-09-01

    For an FMCW signal, it was demonstrated that the system is capable of estimating the AOA with a root-mean-square (RMS) error of 0.29° at 1° resolution. For a P4 coded signal, the RMS error in estimating the AOA is 0.32° at 1° resolution.

  15. Automated Intelligibility Assessment of Pathological Speech Using Phonological Features

    NASA Astrophysics Data System (ADS)

    Middag, Catherine; Martens, Jean-Pierre; Van Nuffelen, Gwen; De Bodt, Marc

    2009-12-01

    It is commonly acknowledged that word or phoneme intelligibility is an important criterion in the assessment of the communication efficiency of a pathological speaker. People have therefore put a lot of effort into the design of perceptual intelligibility rating tests. These tests usually have the drawback that they employ unnatural speech material (e.g., nonsense words) and that they cannot fully exclude errors due to listener bias. Therefore, there is a growing interest in applying objective automatic speech recognition technology to automate the intelligibility assessment. Current research is headed towards the design of automated methods which can be shown to produce ratings that correspond well with those emerging from a well-designed and well-performed perceptual test. In this paper, a novel methodology that builds on previous work (Middag et al., 2008) is presented. It utilizes phonological features, automatic speech alignment based on acoustic models trained on normal speech, context-dependent speaker feature extraction, and intelligibility prediction based on a small model that can be trained on pathological speech samples. The experimental evaluation of the new system reveals that the root mean squared error of the discrepancies between perceived and computed intelligibilities can be as low as 8 on a scale of 0 to 100.

  16. Assessing agreement between malaria slide density readings.

    PubMed

    Alexander, Neal; Schellenberg, David; Ngasala, Billy; Petzold, Max; Drakeley, Chris; Sutherland, Colin

    2010-01-04

    Several criteria have been used to assess agreement between replicate slide readings of malaria parasite density. Such criteria may be based on percent difference, absolute difference, or a combination. Neither the rationale for choosing between these types of criteria, nor that for choosing the magnitude of difference which defines acceptable agreement, is clear. The current paper seeks a procedure which avoids the disadvantages of these current options and whose parameter values are more clearly justified. Variation of parasite density within a slide is expected, even when it has been prepared from a homogeneous sample. This places lower limits on sensitivity and observer agreement, quantified by the Poisson distribution. This means that, if a criterion of fixed percent difference is used for satisfactory agreement, the number of discrepant readings is over-estimated at low parasite densities. With a criterion of fixed absolute difference, the same happens at high parasite densities. For an ideal slide, following the Poisson distribution, a criterion based on a constant difference in square-root counts would apply at all densities. This can be back-transformed to a difference in absolute counts, which, as expected, gives a wider range of acceptable agreement at higher average densities. In an example dataset from Tanzania, observed differences in square-root counts correspond to 95% limits of agreement of -2,800 and +2,500 parasites/µl at an average density of 2,000 parasites/µl, and -6,200 and +5,700 parasites/µl at 10,000 parasites/µl. However, there were more outliers beyond those ranges at higher densities, meaning that actual coverage of these ranges was not a constant 95%, but decreased with density. In a second study, a trial of microscopist training, the corresponding ranges of agreement are wider and asymmetrical: -8,600 to +5,200/µl, and -19,200 to +11,700/µl, respectively. By comparison, the optimal limits of agreement, corresponding to Poisson variation, are ±780 and ±1,800 parasites/µl, respectively. The focus of this approach on the volume of blood read leads to other conclusions. For example, no matter how large a volume of blood is read, some densities are too low to be reliably detected, which in turn means that disagreements on slide positivity may simply result from within-slide variation rather than reading errors. The proposed method defines limits of acceptable agreement in a way which allows for the natural increase in variability with parasite density. This includes defining the levels of between-reader variability which are consistent with random variation: disagreements within these limits should not trigger additional readings. This approach merits investigation in other settings, in order to determine both the extent of its applicability and appropriate numerical values for limits of agreement.
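
    A minimal sketch of the square-root-scale criterion described above: agreement is judged by the difference in square-root counts against a fixed limit, which back-transforms to an absolute-count range that widens with density. The ±2 limit on the square-root scale is purely illustrative, not the paper's fitted value:

        import numpy as np

        def within_limits(count1, count2, sqrt_limit=2.0):
            # Constant criterion on the square-root (variance-stabilized) scale.
            return abs(np.sqrt(count1) - np.sqrt(count2)) <= sqrt_limit

        def absolute_limits(mean_count, sqrt_limit=2.0):
            # Back-transformed acceptable differences around a given density.
            s = np.sqrt(mean_count)
            return (s - sqrt_limit)**2 - mean_count, (s + sqrt_limit)**2 - mean_count

        print(within_limits(2000, 2150))
        print(absolute_limits(2000.0))    # narrower range at lower density
        print(absolute_limits(10000.0))   # wider range at higher density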

  17. Adaptive Digital Signature Design and Short-Data-Record Adaptive Filtering

    DTIC Science & Technology

    2008-04-01

    [Abstract not available; the retrieved excerpt contains only report front matter: an abbreviation list (BPSK, CA-CFAR, CDMA, CFAR, ...) and reference fragments, including P. Cotae, "Spreading sequence design for multiple cell synchronous DS-CDMA systems under total weighted squared correlation criterion," EURASIP Journal, pp. 415-428, Mar. 2002.]

  18. Determination of suitable drying curve model for bread moisture loss during baking

    NASA Astrophysics Data System (ADS)

    Soleimani Pour-Damanab, A. R.; Jafary, A.; Rafiee, S.

    2013-03-01

    This study presents mathematical modelling of bread moisture loss, or drying, during a conventional bread baking process. In order to estimate and select the appropriate moisture loss curve equation, 11 different semi-theoretical and empirical models were applied to the experimental data and compared according to their correlation coefficients, chi-squared test and root mean square error, computed by nonlinear regression analysis. Of all the drying models, the Page model was selected as the best one, according to its correlation coefficient, chi-squared and root mean square error values and its simplicity. The mean absolute estimation error of the proposed model by linear regression analysis for natural and forced convection modes was 2.43 and 4.74%, respectively.
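
    A minimal sketch of fitting the selected Page model, MR(t) = exp(−k·tⁿ), to hypothetical moisture-ratio data, with the comparison statistics computed from the residuals:

        import numpy as np
        from scipy.optimize import curve_fit

        def page(t, k, n):
            return np.exp(-k * t**n)

        t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])     # baking time, min
        mr = np.array([1.0, 0.82, 0.64, 0.50, 0.38, 0.29, 0.22])   # hypothetical moisture ratio

        popt, _ = curve_fit(page, t, mr, p0=[0.05, 1.0], bounds=([1e-6, 0.1], [1.0, 3.0]))
        resid = mr - page(t, *popt)
        rmse = np.sqrt(np.mean(resid**2))
        chi2 = np.sum(resid**2) / (len(t) - len(popt))   # reduced chi-squared form
        r = np.corrcoef(mr, page(t, *popt))[0, 1]        # correlation coefficient
        print(popt, rmse, chi2, r)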

  19. Stress regularity in quasi-static perfect plasticity with a pressure dependent yield criterion

    NASA Astrophysics Data System (ADS)

    Babadjian, Jean-François; Mora, Maria Giovanna

    2018-04-01

    This work is devoted to establishing a regularity result for the stress tensor in quasi-static, planar, isotropic, linearly elastic-perfectly plastic materials obeying a Drucker-Prager or Mohr-Coulomb yield criterion. Under suitable assumptions on the data, it is proved that the stress tensor has a spatial gradient that is locally square integrable. As a corollary, the usual measure-theoretical flow rule is expressed in a strong form using the quasi-continuous representative of the stress.

  20. Cellular traction force recovery: An optimal filtering approach in two-dimensional Fourier space.

    PubMed

    Huang, Jianyong; Qin, Lei; Peng, Xiaoling; Zhu, Tao; Xiong, Chunyang; Zhang, Youyi; Fang, Jing

    2009-08-21

    Quantitative estimation of cellular traction has significant physiological and clinical implications. As an inverse problem, traction force recovery is inherently susceptible to noise in the measured displacement data. In the traditional procedure of Fourier transform traction cytometry (FTTC), noise amplification accompanies the force reconstruction, and small tractions cannot be recovered from displacement fields with a low signal-to-noise ratio (SNR). To improve the FTTC process, we develop an optimal filtering scheme to suppress the noise in the force reconstruction procedure. In the framework of Wiener filtering theory, four filtering parameters are introduced in two-dimensional Fourier space and their analytical expressions are derived in terms of the minimum-mean-squared-error (MMSE) optimization criterion. The optimal filtering approach is validated with simulations and with experimental data on the adhesion of single cardiac myocytes to an elastic substrate. The results indicate that the proposed method can greatly enhance the SNR of the recovered forces and reveal tiny tractions in the cell-substrate interaction.
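
    A minimal 1D sketch of the Wiener-filtering idea above: direct inversion of the displacement-to-force operator in Fourier space amplifies noise, while a Wiener/MMSE factor damps the ill-conditioned frequencies. The transfer function, assumed signal-to-noise ratio, and signals are all illustrative, not the cell-substrate Green's function or the paper's four-parameter scheme:

        import numpy as np

        rng = np.random.default_rng(3)
        n = 256
        k = np.fft.fftfreq(n)
        H = 1.0 / (1.0 + (8.0 * np.abs(k))**2)         # hypothetical smoothing operator

        f_true = np.zeros(n)
        f_true[100:110] = 1.0                          # localized "traction" signal
        u = np.fft.ifft(np.fft.fft(f_true) * H).real   # displacement field
        u += 0.01 * rng.normal(size=n)                 # measurement noise

        U = np.fft.fft(u)
        snr = 100.0                                    # assumed per-frequency S/N
        W = np.conj(H) / (np.abs(H)**2 + 1.0 / snr)    # Wiener/MMSE deconvolution factor
        f_wiener = np.fft.ifft(U * W).real
        f_naive = np.fft.ifft(U / H).real              # direct inversion

        print(np.std(f_naive - f_true), np.std(f_wiener - f_true))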

  1. Estimation of proportions in mixed pixels through their region characterization

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    A region of mixed pixels can be characterized through the probability density function of the proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed-pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of the probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.

  2. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    PubMed

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned under the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
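
    A minimal sketch of a linear MMSE decoder of the kind described above: with a measurement matrix Phi and a signal covariance model Cx, the reconstruction matrix P = Cx·Phiᵀ·(Phi·Cx·Phiᵀ + σ²I)⁻¹ minimizes the mean square error among linear decoders. The covariance model and sizes are illustrative; the paper learns its projection matrix from training data rather than from an assumed covariance:

        import numpy as np

        rng = np.random.default_rng(4)
        n, m, sigma = 64, 16, 0.01                   # block pixels, measurements, noise level
        Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random block measurement matrix

        # Hypothetical stationary covariance with inter-pixel correlation rho:
        rho = 0.95
        idx = np.arange(n)
        Cx = rho ** np.abs(np.subtract.outer(idx, idx))

        P = Cx @ Phi.T @ np.linalg.inv(Phi @ Cx @ Phi.T + sigma**2 * np.eye(m))

        x = np.linalg.cholesky(Cx) @ rng.normal(size=n)   # sample one correlated block
        y = Phi @ x + sigma * rng.normal(size=m)          # compressive measurement
        x_hat = P @ y                                     # real-time linear decoding
        print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))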

  3. Optimal Bandwidth for Multitaper Spectrum Estimation

    DOE PAGES

    Haley, Charlotte L.; Anitescu, Mihai

    2017-07-04

    A systematic method for bandwidth parameter selection is desired for Thomson multitaper spectrum estimation. We give a method for determining the optimal bandwidth based on a mean squared error (MSE) criterion. When the true spectrum has a second-order Taylor series expansion, one can express the quadratic local bias as a function of the curvature of the spectrum, which can be estimated by using a simple spline approximation. This is combined with a variance estimate, obtained by jackknifing over individual spectrum estimates, to produce an estimated MSE of the log spectrum estimate for each choice of time-bandwidth product. The bandwidth that minimizes the estimated MSE then gives the desired spectrum estimate. Additionally, the bandwidth obtained using our method is also optimal for cepstrum estimates. We give an example of a damped oscillatory (Lorentzian) process in which the approximate optimal bandwidth can be written as a function of the damping parameter. Furthermore, the true optimal bandwidth agrees well with that given by minimizing the estimated MSE in these examples.

  4. Potential Seasonal Terrestrial Water Storage Monitoring from GPS Vertical Displacements: A Case Study in the Lower Three-Rivers Headwater Region, China.

    PubMed

    Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang

    2016-09-19

    This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC), with careful pre- and post-processing, to estimate the seasonal crustal deformation in response to hydrological loading in the lower three-rivers headwater region of southwest China, followed by inferring the annual equivalent water height (EWH) changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2-3.9 cm and 4.8-5.2 cm, respectively. Within the research area, the EWH changes in the Lancang basin are larger than in the other regions, with a maximum of 21.8-24.7 cm and a minimum of 3.1-6.9 cm.

  5. Analysis of positron lifetime spectra in polymers

    NASA Technical Reports Server (NTRS)

    Singh, Jag J.; Mall, Gerald H.; Sprinkle, Danny R.

    1988-01-01

    A new procedure for analyzing multicomponent positron lifetime spectra in polymers was developed. It requires initial estimates of the lifetimes and intensities of the various components, which are readily obtainable by a standard spectrum stripping process. These initial estimates, after convolution with the timing system resolution function, are then used as the inputs for a nonlinear least squares analysis that computes estimates conforming to a global error minimization criterion. The convolution integral uses the full experimental resolution function, in contrast to previous studies where analytical approximations of it were utilized. These concepts were incorporated into a generalized Computer Program for Analyzing Positron Lifetime Spectra (PAPLS) in polymers. Its validity was tested using several artificially generated data sets. These data sets were also analyzed using the widely used POSITRONFIT program. In almost all cases, the PAPLS program gives a closer fit to the input values. The new procedure was applied to the analysis of several lifetime spectra measured in metal-ion-containing Epon-828 samples. The results are described.

  6. A regularization approach to continuous learning with an application to financial derivatives pricing.

    PubMed

    Ormoneit, D

    1999-12-01

    We consider the training of neural networks in cases where the nonlinear relationship of interest gradually changes over time. One possibility for dealing with this problem is regularization, where a variation penalty is added to the usual mean squared error criterion. To learn the regularized network weights we suggest the Iterative Extended Kalman Filter (IEKF) as a learning rule, which may be derived from a Bayesian perspective on the regularization problem. A primary application of our algorithm is in financial derivatives pricing, where neural networks may be used to model the dependency of the derivatives' price on one or several underlying assets. After giving a brief introduction to the problem of derivatives pricing, we present experiments with German stock index options data showing that a regularized neural network trained with the IEKF outperforms several benchmark models and alternative learning procedures. In particular, the performance may be greatly improved by using a newly designed neural network architecture that accounts for no-arbitrage pricing restrictions.

  7. Geostatistical modeling of riparian forest microclimate and its implications for sampling

    USGS Publications Warehouse

    Eskelson, B.N.I.; Anderson, P.D.; Hagar, J.C.; Temesgen, H.

    2011-01-01

    Predictive models of microclimate under various site conditions in forested headwater stream - riparian areas are poorly developed, and sampling designs for characterizing underlying riparian microclimate gradients are sparse. We used riparian microclimate data collected at eight headwater streams in the Oregon Coast Range to compare ordinary kriging (OK), universal kriging (UK), and kriging with external drift (KED) for point prediction of mean maximum air temperature (Tair). Several topographic and forest structure characteristics were considered as site-specific parameters. Height above stream and distance to stream were the most important covariates in the KED models, which outperformed OK and UK in terms of root mean square error. Sample patterns were optimized based on the kriging variance and the weighted means of shortest distance criterion using the simulated annealing algorithm. The optimized sample patterns outperformed systematic sample patterns in terms of mean kriging variance mainly for small sample sizes. These findings suggest methods for increasing efficiency of microclimate monitoring in riparian areas.

  8. Interferometric superlocalization of two incoherent optical point sources.

    PubMed

    Nair, Ranjith; Tsang, Mankei

    2016-02-22

    A novel interferometric method - SLIVER (Super Localization by Image inVERsion interferometry) - is proposed for estimating the separation of two incoherent point sources with a mean squared error that does not deteriorate as the sources are brought closer. The essential component of the interferometer is an image inversion device that inverts the field in the transverse plane about the optical axis, assumed to pass through the centroid of the sources. The performance of the device is analyzed using the Cramér-Rao bound applied to the statistics of spatially-unresolved photon counting using photon number-resolving and on-off detectors. The analysis is supported by Monte-Carlo simulations of the maximum likelihood estimator for the source separation, demonstrating the superlocalization effect for separations well below that set by the Rayleigh criterion. Simulations indicating the robustness of SLIVER to mismatch between the optical axis and the centroid are also presented. The results are valid for any imaging system with a circularly symmetric point-spread function.

  9. Utilization of integrated Michaelis-Menten equations for enzyme inhibition diagnosis and determination of kinetic constants using Solver supplement of Microsoft Office Excel.

    PubMed

    Bezerra, Rui M F; Fraga, Irene; Dias, Albino A

    2013-01-01

    Enzyme kinetic parameters are usually determined from initial rates; nevertheless, laboratory instruments only measure substrate or product concentration versus reaction time (progress curves). To overcome this problem we present a methodology which uses integrated models based on the Michaelis-Menten equation. The most severe practical limitation of progress curve analysis occurs when the enzyme shows a loss of activity under the chosen assay conditions. To avoid this problem it is possible to work with the same experimental points utilized for initial rate determination. This methodology is illustrated using integrated kinetic equations with the well-known reaction catalyzed by alkaline phosphatase. In this work nonlinear regression was performed with the Solver supplement (Microsoft Office Excel), which is easy to work with and allows the convergence of the SSE (sum of squared errors) to be tracked graphically. The diagnosis of enzyme inhibition was performed according to the Akaike information criterion. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
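
    The same workflow can be reproduced outside Excel. The sketch below solves the integrated Michaelis-Menten equation Vmax·t = S0 − S + Km·ln(S0/S) for S(t) and minimizes the SSE over (Vmax, Km) numerically; all concentrations, time points, and starting values are made-up illustration data, and scipy's Nelder-Mead simplex stands in for the Solver supplement.

```python
import numpy as np
from scipy.optimize import brentq, minimize

S0 = 5.0                                   # initial substrate concentration
t_obs = np.linspace(0.5, 10.0, 12)         # sampling times of the progress curve

def S_of_t(t, Vmax, Km):
    """Solve the integrated Michaelis-Menten equation
    Vmax*t = S0 - S + Km*ln(S0/S) for the substrate concentration S."""
    f = lambda S: (S0 - S) + Km * np.log(S0 / S) - Vmax * t
    return brentq(f, 1e-9, S0)

# Synthetic progress-curve data (true Vmax = 1.0, Km = 2.0) with small noise.
rng = np.random.default_rng(2)
S_obs = np.array([S_of_t(t, 1.0, 2.0) for t in t_obs]) + 0.02 * rng.normal(size=t_obs.size)

def sse(params):
    """Sum of squared errors between observed and modeled concentrations."""
    Vmax, Km = params
    if Vmax <= 0.0 or Km <= 0.0:           # keep the search in the feasible region
        return 1e12
    try:
        pred = [S_of_t(t, Vmax, Km) for t in t_obs]
    except ValueError:                     # no root bracketed: infeasible params
        return 1e12
    return sum((s - p) ** 2 for s, p in zip(S_obs, pred))

res = minimize(sse, x0=[0.5, 1.0], method="Nelder-Mead")
print("estimated Vmax, Km:", res.x, "SSE:", res.fun)
```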

  10. MWR3C physical retrievals of precipitable water vapor and cloud liquid water path

    DOE Data Explorer

    Cadeddu, Maria

    2016-10-12

    The data set contains physical retrievals of PWV and cloud LWP retrieved from MWR3C measurements during the MAGIC campaign. Additional data used in the retrieval process include radiosonde and ceilometer measurements. The retrieval is based on an optimal estimation technique that starts from a first guess and iteratively repeats the forward model calculations until a predefined convergence criterion is satisfied. The first guess is a vector of [PWV, LWP] taken from the neural network retrieval fields in the netCDF file. When convergence is achieved, the 'a posteriori' covariance is computed and its square root is written to the file as the retrieval 1-sigma uncertainty. The closest radiosonde profile is used for the radiative transfer calculations, and ceilometer data are used to constrain the cloud base height. The RMS error between measured and computed brightness temperatures is evaluated at the last iteration as a consistency check and is written in the last column of the output file.

  11. A robust nonlinear filter for image restoration.

    PubMed

    Koivunen, V

    1995-01-01

    A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, the robustness is considered in a broad sense since the outliers may also be due to inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details.
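
    The least trimmed squares criterion is simple to state: sum only the h smallest squared residuals, so observations beyond the trimming fraction carry no weight. The sketch below applies it to a one-parameter location estimate with an assumed 75% trimming proportion; the paper's filters apply the same criterion to regression models inside a processing window.

```python
import numpy as np

def lts_criterion(mu, x, h):
    """Least trimmed squares: sum of the h smallest squared residuals about mu."""
    r2 = np.sort((x - mu) ** 2)
    return r2[:h].sum()

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(10.0, 1.0, 90),    # nominal population
                    rng.normal(40.0, 1.0, 10)])   # impulsive outliers
h = int(0.75 * x.size)                            # trimming proportion (assumed)

# The criterion is non-smooth, so a transparent grid scan is used as the
# one-dimensional minimizer here.
grid = np.linspace(x.min(), x.max(), 2001)
mu_lts = grid[np.argmin([lts_criterion(m, x, h) for m in grid])]
print("LTS location:", mu_lts, "vs. plain mean:", x.mean())
```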

  12. Implication of adaptive smoothness constraint and Helmert variance component estimation in seismic slip inversion

    NASA Astrophysics Data System (ADS)

    Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi

    2017-10-01

    When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments are conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicated that the ASC was superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter compared with other methods, such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
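
    For orientation, a minimal sketch of the classic Laplacian-smoothed inversion that the ASC builds on: minimize ||Gm − d||² + α²||Rm||² with a second-difference matrix R, solved via the normal equations. The Green's functions, slip profile, and regularization parameters below are toy assumptions; neither the ASC itself nor the HVCE-based selection of the regularization parameter is reproduced.

```python
import numpy as np

def laplacian(n):
    """Second-difference (Laplacian) smoothing matrix R for n model cells."""
    R = np.zeros((n - 2, n))
    for i in range(n - 2):
        R[i, i:i + 3] = [1.0, -2.0, 1.0]
    return R

def regularized_inversion(G, d, alpha):
    """Solve min ||G m - d||^2 + alpha^2 ||R m||^2 via the normal equations."""
    R = laplacian(G.shape[1])
    return np.linalg.solve(G.T @ G + alpha**2 * (R.T @ R), G.T @ d)

rng = np.random.default_rng(4)
n = 30
m_true = np.exp(-0.5 * ((np.arange(n) - 15) / 4.0) ** 2)  # smooth slip profile
G = rng.normal(size=(50, n))                              # toy Green's functions
d = G @ m_true + 0.05 * rng.normal(size=50)

for alpha in (0.1, 1.0, 10.0):                            # regularization strength
    m_hat = regularized_inversion(G, d, alpha)
    print(f"alpha={alpha}: recovery error {np.linalg.norm(m_hat - m_true):.3f}")
```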

  13. Waveform Optimization for Target Estimation by Cognitive Radar with Multiple Antennas.

    PubMed

    Yao, Yu; Zhao, Junhui; Wu, Lenan

    2018-05-29

    A new scheme based on Kalman filtering to optimize the waveforms of an adaptive multi-antenna radar system for target impulse response (TIR) estimation is presented. This work aims to improve the performance of TIR estimation by making use of the temporal correlation between successive received signals, minimizing the mean square error (MSE) of the TIR estimate. The waveform design approach is based upon constant learning of the target features at the receiver. In the multiple-antenna scenario, a dynamic feedback control loop is established to monitor, in real time, changes in the target features extracted from received signals, and the transmitter adapts its transmitted waveform to suit the time-varying environment. Finally, the simulation results show that, compared with the waveform design method based on the MAP criterion, the proposed waveform design algorithm improves the performance of TIR estimation for extended targets over multiple iterations, and has a relatively lower level of complexity.

  14. The Short-Term and Long-Term Effects of AWE Feedback on ESL Students' Development of Grammatical Accuracy

    ERIC Educational Resources Information Center

    Li, Zhi; Feng, Hui-Hsien; Saricaoglu, Aysel

    2017-01-01

    This classroom-based study employs a mixed-methods approach to exploring both short-term and long-term effects of Criterion feedback on ESL students' development of grammatical accuracy. The results of multilevel growth modeling indicate that Criterion feedback helps students in both intermediate-high and advanced-low levels reduce errors in eight…

  15. Correcting Four Similar Correlational Measures for Attenuation Due to Errors of Measurement in the Dependent Variable: Eta, Epsilon, Omega, and Intraclass r.

    ERIC Educational Resources Information Center

    Stanley, Julian C.; Livingston, Samuel A.

    Besides the ubiquitous Pearson product-moment r, there are a number of other measures of relationship that are attenuated by errors of measurement and for which the relationship between true measures can be estimated. Among these are the correlation ratio (eta squared), Kelley's unbiased correlation ratio (epsilon squared), Hays' omega squared,…

  16. Study on the Rationality and Validity of Probit Models of Domino Effect to Chemical Process Equipment caused by Overpressure

    NASA Astrophysics Data System (ADS)

    Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong

    2013-04-01

    Overpressure is an important cause of domino effects in accidents at chemical process plants. Models for the propagation probability and threshold values of the domino effect caused by overpressure have been proposed in previous studies. In order to test the rationality and validity of the models reported in the references, the two boundary values separating the three reported damage degrees were treated as random variables on the interval [0, 100%]. Based on the reported overpressure data for equipment damage and damage states, and the calculation method given in the references, the mean square errors of the four categories of overpressure damage probability models were calculated with random boundary values, yielding a relationship between mean square error and the two boundary values. At its minimum, the mean square error decreases by only about 3% compared with the result obtained with the reported boundary values. This error is within the acceptable range for engineering applications, so the reported models can be considered reasonable and valid.

  17. Entanglement-enhanced Neyman-Pearson target detection using quantum illumination

    NASA Astrophysics Data System (ADS)

    Zhuang, Quntao; Zhang, Zheshen; Shapiro, Jeffrey H.

    2017-08-01

    Quantum illumination (QI) provides entanglement-based target detection---in an entanglement-breaking environment---whose performance is significantly better than that of optimum classical-illumination target detection. QI's performance advantage was established in a Bayesian setting with the target presumed equally likely to be absent or present and error probability employed as the performance metric. Radar theory, however, eschews that Bayesian approach, preferring the Neyman-Pearson performance criterion to avoid the difficulties of accurately assigning prior probabilities to target absence and presence and appropriate costs to false-alarm and miss errors. We have recently reported an architecture---based on sum-frequency generation (SFG) and feedforward (FF) processing---for minimum error-probability QI target detection with arbitrary prior probabilities for target absence and presence. In this paper, we use our results for FF-SFG reception to determine the receiver operating characteristic---detection probability versus false-alarm probability---for optimum QI target detection under the Neyman-Pearson criterion.

  18. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.

  19. Secret Sharing of a Quantum State.

    PubMed

    Lu, He; Zhang, Zhen; Chen, Luo-Kan; Li, Zheng-Da; Liu, Chang; Li, Li; Liu, Nai-Le; Ma, Xiongfeng; Chen, Yu-Ao; Pan, Jian-Wei

    2016-07-15

    Secret sharing of a quantum state, or quantum secret sharing, in which a dealer wants to share a certain amount of quantum information with a few players, has wide applications in quantum information. The critical criterion in a threshold secret sharing scheme is confidentiality: with less than the designated number of players, no information can be recovered. Furthermore, in a quantum scenario, one additional critical criterion exists: the capability of sharing entangled and unknown quantum information. Here, by employing a six-photon entangled state, we demonstrate a quantum threshold scheme, where the shared quantum secrecy can be efficiently reconstructed with a state fidelity as high as 93%. By observing that any one or two parties cannot recover the secrecy, we show that our scheme meets the confidentiality criterion. Meanwhile, we also demonstrate that entangled quantum information can be shared and recovered via our setting, which shows that our implemented scheme is fully quantum. Moreover, our experimental setup can be treated as a decoding circuit of the five-qubit quantum error-correcting code with two erasure errors.

  20. Four-Dimensional Golden Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fenimore, Edward E.

    2015-02-25

    The Golden search technique is a method to search a multiple-dimension space to find the minimum. It basically subdivides the possible ranges of parameters until it brackets, to within an arbitrarily small distance, the minimum. It has the advantages that (1) the function to be minimized can be non-linear, (2) it does not require derivatives of the function, (3) the convergence criterion does not depend on the magnitude of the function. Thus, if the function is a goodness of fit parameter such as chi-square, the convergence does not depend on the noise being correctly estimated or the function correctly following the chi-square statistic. And, (4) the convergence criterion does not depend on the shape of the function. Thus, long shallow surfaces can be searched without the problem of premature convergence. As with many methods, the Golden search technique can be confused by surfaces with multiple minima.

  1. A Survey of Terrain Modeling Technologies and Techniques

    DTIC Science & Technology

    2007-09-01

    Test planning, rehearsal, and distributed test events for Future Combat Systems (FCS) require... [Figure captions: distributions of errors (vertical distance) for all five lines of control points, including line No. 729; blue circles are DSM errors (original data), red squares are DTM errors (bare earth, processed by Intermap).]

  2. Accuracy of Area at Risk Quantification by Cardiac Magnetic Resonance According to the Myocardial Infarction Territory.

    PubMed

    Fernández-Friera, Leticia; García-Ruiz, José Manuel; García-Álvarez, Ana; Fernández-Jiménez, Rodrigo; Sánchez-González, Javier; Rossello, Xavier; Gómez-Talavera, Sandra; López-Martín, Gonzalo J; Pizarro, Gonzalo; Fuster, Valentín; Ibáñez, Borja

    2017-05-01

    Area at risk (AAR) quantification is important to evaluate the efficacy of cardioprotective therapies. However, postinfarction AAR assessment could be influenced by the infarcted coronary territory. Our aim was to determine the accuracy of T2-weighted short tau triple-inversion recovery (T2W-STIR) cardiac magnetic resonance (CMR) imaging for accurate AAR quantification in anterior, lateral, and inferior myocardial infarctions. Acute reperfused myocardial infarction was experimentally induced in 12 pigs, with 40-minute occlusion of the left anterior descending (n = 4), left circumflex (n = 4), and right coronary arteries (n = 4). Perfusion CMR was performed during selective intracoronary gadolinium injection at the coronary occlusion site (in vivo criterion standard) and, additionally, a 7-day CMR, including T2W-STIR sequences, was performed. Finally, all animals were sacrificed and underwent postmortem Evans blue staining (classic criterion standard). The concordance between the CMR-based criterion standard and T2W-STIR to quantify AAR was high for anterior and inferior infarctions (r = 0.73; P = .001; mean error = 0.50%; limits = -12.68% to 13.68% and r = 0.87; P = .001; mean error = -1.5%; limits = -8.0% to 5.8%, respectively). Conversely, the correlation for the circumflex territories was poor (r = 0.21, P = .37), showing a higher mean error and wider limits of agreement. A strong correlation between pathology and the CMR-based criterion standard was observed (r = 0.84, P < .001; mean error = 0.91%; limits = -7.55% to 9.37%). T2W-STIR CMR sequences are accurate to determine the AAR for anterior and inferior infarctions; however, their accuracy for lateral infarctions is poor. These findings may have important implications for the design and interpretation of clinical trials evaluating the effectiveness of cardioprotective therapies. Copyright © 2016 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  3. Space-Time Joint Interference Cancellation Using Fuzzy-Inference-Based Adaptive Filtering Techniques in Frequency-Selective Multipath Channels

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng

    2006-12-01

    An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and the squared-error variation into a forgetting factor. For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step size. This receiver provides both fast convergence/tracking capability and small steady-state misadjustment compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB, respectively, in bit-error-rate (BER) for multipath fading channels.

  4. Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera

    NASA Astrophysics Data System (ADS)

    Rahman, Samiur; Ullah, Sana; Ullah, Sehat

    2018-01-01

    Obstacle detection can improve the mobility as well as the safety of visually impaired people. In this paper, we present a system using a mobile camera for visually impaired people. The proposed algorithm works in indoor environments and uses a very simple technique based on a few pre-stored floor images. In an indoor environment all unique floor types are considered and a single image is stored for each unique floor type. These floor images are used as reference images. The algorithm acquires an input image frame, selects a region of interest, and scans it for obstacles using the pre-stored floor images. The algorithm compares the present frame and the next frame by computing the mean square error of the two frames. If the mean square error is less than a threshold value α, there is no obstacle in the next frame. If the mean square error is greater than α, there are two possibilities: either there is an obstacle or the floor type has changed. To check whether the floor has changed, the algorithm computes the mean square error between the next frame and all stored floor types. If the minimum of these mean square errors is less than the threshold value α, the floor has changed; otherwise there is an obstacle. The proposed algorithm works in real time and an accuracy of 96% has been achieved.
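
    The decision logic reduces to a pair of thresholded MSE comparisons. The sketch below implements it for grayscale frames; the threshold value, frame sizes, and reference floors are assumptions for illustration.

```python
import numpy as np

ALPHA = 100.0   # MSE threshold; an assumed value, tuned per camera in practice

def mse(a, b):
    """Mean square error between two equal-size grayscale frames."""
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def classify(current, nxt, floor_refs, alpha=ALPHA):
    """Apply the thresholded-MSE decision logic described in the abstract."""
    if mse(current, nxt) <= alpha:
        return "clear"                        # next frame still matches the floor
    if min(mse(nxt, ref) for ref in floor_refs) <= alpha:
        return "floor changed"                # next frame matches a stored floor
    return "obstacle"

# Illustrative frames: two reference floors and a synthetic obstruction.
rng = np.random.default_rng(5)
tile = rng.integers(100, 110, (48, 64))
carpet = rng.integers(180, 190, (48, 64))
print(classify(tile, tile + 1, [tile, carpet]))    # -> clear
print(classify(tile, carpet, [tile, carpet]))      # -> floor changed
print(classify(tile, tile + 60, [tile, carpet]))   # -> obstacle
```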

  5. Examining recognition criterion rigidity during testing using a biased feedback technique: Evidence for adaptive criterion learning

    PubMed Central

    Han, Sanghoon; Dobbins, Ian G.

    2009-01-01

    Recognition models often assume that subjects use specific evidence values (decision criteria) to adaptively parse continuous memory evidence into response categories (e.g., “old” or “new”). Although explicit pre-test instructions influence criterion placement, these criteria appear extremely resistant to change once testing begins. We tested criterion sensitivity to local feedback using a novel, biased feedback technique designed to tacitly encourage certain errors by indicating they were correct choices. Experiment 1 demonstrated that fully correct feedback had little effect on criterion placement, whereas biased feedback during Experiments 2 and 3 yielded prominent, durable, and adaptive criterion shifts, with observers reporting they were unaware of the manipulation in Experiment 3. These data suggest recognition criteria can be easily modified during testing through a form of feedback learning that operates independent of stimulus characteristics and observer awareness of the nature of the manipulation. This mechanism may be fundamentally different than criterion shifts following explicit instructions and warnings, or shifts linked to manipulations of stimulus characteristics combined with feedback highlighting those manipulations. PMID:18604954

  6. Video-task acquisition in rhesus monkeys (Macaca mulatta) and chimpanzees (Pan troglodytes): a comparative analysis

    NASA Technical Reports Server (NTRS)

    Hopkins, W. D.; Washburn, D. A.; Hyatt, C. W.; Rumbaugh, D. M. (Principal Investigator)

    1996-01-01

    This study describes video-task acquisition in two nonhuman primate species. The subjects were seven rhesus monkeys (Macaca mulatta) and seven chimpanzees (Pan troglodytes). All subjects were trained to manipulate a joystick which controlled a cursor displayed on a computer monitor. Two criterion levels were used: one based on conceptual knowledge of the task and one based on motor performance. Chimpanzees and rhesus monkeys attained criterion in a comparable number of trials using a conceptually based criterion. However, using a criterion based on motor performance, chimpanzees reached criterion significantly faster than rhesus monkeys. Analysis of error patterns and latency indicated that the rhesus monkeys had a larger asymmetry in response bias and were significantly slower in responding than the chimpanzees. The results are discussed in terms of the relation between object manipulation skills and video-task acquisition.

  7. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    NASA Astrophysics Data System (ADS)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities for an organization. The selection of an appropriate forecasting method is important, but the percentage error of a method is more important in order for decision makers to adopt the right approach. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least squares method resulted in a percentage error of 9.77%, and it was decided that the least squares method works for time series and trend data.
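
    For reference, both error measures are one-liners against a least-squares trend forecast, as in this sketch (the paper's data and its 9.77% figure are not reproduced; the series below is illustrative).

```python
import numpy as np

def mad(actual, forecast):
    """Mean Absolute Deviation of the forecast errors."""
    return np.mean(np.abs(actual - forecast))

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent (requires nonzero actuals)."""
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Least-squares (linear trend) forecast of a short illustrative series.
t = np.arange(1.0, 9.0)
y = np.array([12.0, 14.0, 15.0, 15.5, 17.0, 19.0, 20.5, 22.0])
slope, intercept = np.polyfit(t, y, 1)        # least-squares trend line
y_hat = intercept + slope * t
print(f"MAD = {mad(y, y_hat):.3f}, MAPE = {mape(y, y_hat):.2f}%")
```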

  8. Numerical stability of the error diffusion concept

    NASA Astrophysics Data System (ADS)

    Weissbach, Severin; Wyrowski, Frank

    1992-10-01

    The error diffusion algorithm is an easily implementable means to handle nonlinearities in signal processing, e.g., in picture binarization and in the coding of diffractive elements. The numerical stability of the algorithm depends on the choice of the diffusion weights. A criterion for the stability of the algorithm is presented and evaluated for some examples.

  9. Adaptive algorithm of selecting optimal variant of errors detection system for digital means of automation facility of oil and gas complex

    NASA Astrophysics Data System (ADS)

    Poluyan, A. Y.; Fugarov, D. D.; Purchina, O. A.; Nesterchuk, V. V.; Smirnova, O. V.; Petrenkova, S. B.

    2018-05-01

    To date, the problems associated with the detection of errors in digital equipment (DE) systems for the automation of explosive facilities of the oil and gas complex are extremely relevant. This problem is especially acute for facilities where a loss of DE accuracy will inevitably lead to man-made disasters and substantial material damage; at such facilities, diagnostics of DE operation accuracy is one of the main elements of the industrial safety management system. In this work, the problem of selecting the optimal variant of the error-detection system by a validation criterion is solved. Known methods for solving such problems have exponential complexity. Thus, to reduce the time for solving the problem, the validation criterion is implemented as an adaptive bionic algorithm. Bionic algorithms (BA) have proven effective in solving optimization problems; their advantages include adaptability, learning ability, parallelism, and the ability to build hybrid systems based on combining them [1].

  10. Semivariogram modeling by weighted least squares

    USGS Publications Warehouse

    Jian, X.; Olea, R.A.; Yu, Y.-S.

    1996-01-01

    Permissible semivariogram models are fundamental for geostatistical estimation and simulation of attributes having a continuous spatiotemporal variation. The usual practice is to fit these models manually to experimental semivariograms. Fitting by weighted least squares produces results comparable to manual fitting in less time, systematically, and provides an Akaike information criterion for the proper comparison of alternative models. We illustrate the application of a computer program with examples showing the fitting of simple and nested models. Copyright © 1996 Elsevier Science Ltd.
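
    A sketch of such a fit, assuming a spherical semivariogram model, Cressie-style weights proportional to N(h)/γ(h)², and the common least-squares form of the Akaike information criterion; the program described in the paper may differ in model families and weighting.

```python
import numpy as np
from scipy.optimize import least_squares

def spherical(h, nugget, sill, a):
    """Spherical semivariogram model with range parameter a."""
    return np.where(h < a,
                    nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3),
                    nugget + sill)

# Experimental semivariogram: lag distances, estimates, and pair counts
# (illustrative values).
h = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
gamma = np.array([0.30, 0.52, 0.70, 0.84, 0.92, 0.97, 1.00, 1.01])
npairs = np.array([400, 380, 350, 320, 290, 260, 230, 200], dtype=float)

def wres(p):
    # Cressie-style weighting: more pairs and smaller model values count more.
    model = spherical(h, *p)
    return np.sqrt(npairs) * (gamma - model) / model

fit = least_squares(wres, x0=[0.1, 1.0, 5.0], bounds=(1e-6, np.inf))
rss = np.sum(fit.fun ** 2)
k = 3                                          # fitted parameters
aic = h.size * np.log(rss / h.size) + 2 * k    # one common AIC form for LS fits
print("nugget, sill, range:", fit.x, " AIC:", aic)
```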

  11. A new statistic to express the uncertainty of kriging predictions for purposes of survey planning.

    NASA Astrophysics Data System (ADS)

    Lark, R. M.; Lapworth, D. J.

    2014-05-01

    It is well-known that one advantage of kriging for spatial prediction is that, given the random effects model, the prediction error variance can be computed a priori for alternative sampling designs. This allows one to compare sampling schemes, in particular sampling at different densities, and so to decide on one which meets requirements in terms of the uncertainty of the resulting predictions. However, the planning of sampling schemes must account not only for statistical considerations, but also for logistics and cost. This requires effective communication between statisticians, soil scientists and data users/sponsors such as managers, regulators or civil servants. In our experience the latter parties are not necessarily able to interpret the prediction error variance as a measure of uncertainty for decision making. In some contexts (particularly the solution of very specific problems at large cartographic scales, e.g. site remediation and precision farming) it is possible to translate the uncertainty of predictions into a loss function directly comparable with the cost incurred in increasing precision. Often, however, sampling must be planned for more generic purposes (e.g. baseline or exploratory geochemical surveys). In this latter context the prediction error variance may be of limited value to a non-statistician who has to make a decision on sample intensity and associated cost. We propose an alternative criterion for these circumstances to aid communication between statisticians and data users about the uncertainty of geostatistical surveys based on different sampling intensities. The criterion is the consistency of estimates made from two non-coincident instantiations of a proposed sample design. We consider square sample grids; one instantiation is offset from the second by half the grid spacing along the rows and along the columns. If a sample grid is coarse relative to the important scales of variation in the target property, then the consistency of predictions from the two instantiations is expected to be small, and it can be increased by reducing the grid spacing. The measure of consistency is the correlation between estimates from the two instantiations of the sample grid, averaged over a grid cell. We call this the offset correlation; it can be calculated from the variogram. We propose that this measure is easier to grasp intuitively than the prediction error variance, and it has the advantage of an upper bound (1.0), which will aid its interpretation. This quality measure is illustrated for some hypothetical examples, considering both ordinary kriging and factorial kriging of the variable of interest. It is also illustrated using data on metal concentrations in the soil of north-east England.

  12. Comparative Time Series Analysis of Aerosol Optical Depth over Sites in United States and China Using ARIMA Modeling

    NASA Astrophysics Data System (ADS)

    Li, X.; Zhang, C.; Li, W.

    2017-12-01

    Long-term spatiotemporal analysis and modeling of aerosol optical depth (AOD) distribution is of paramount importance to study radiative forcing, climate change, and human health. This study is focused on the trends and variations of AOD over six stations located in the United States and China during 2003 to 2015, using satellite-retrieved Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 6 retrievals and ground measurements derived from the Aerosol Robotic NETwork (AERONET). An autoregressive integrated moving average (ARIMA) model is applied to simulate and predict AOD values. The R2, adjusted R2, Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Bayesian Information Criterion (BIC) are used as indices to select the best fitted model. Results show that there is a persistent decreasing trend in AOD for both MODIS data and AERONET data over three stations. Monthly and seasonal AOD variations reveal consistent aerosol patterns over stations along mid-latitudes. Regional differences impacted by climatology and land cover types are observed for the selected stations. Statistical validation of time series models indicates that the non-seasonal ARIMA model performs better for AERONET AOD data than for MODIS AOD data over most stations, suggesting the method works better for data with higher quality. By contrast, the seasonal ARIMA model reproduces the seasonal variations of MODIS AOD data much more precisely. Overall, the reasonably predicted results indicate the applicability and feasibility of the stochastic ARIMA modeling technique to forecast future and missing AOD values.
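
    A compact version of the order-selection step, using statsmodels with BIC as the selection index. The synthetic monthly series, the small order grid, and the blanket exception handling are assumptions for illustration; a seasonal ARIMA would add a seasonal_order argument.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly AOD-like series with a weak trend and an annual cycle
# (illustrative stand-in for MODIS/AERONET monthly means, 2003-2015).
rng = np.random.default_rng(6)
months = 156
t = np.arange(months)
aod = pd.Series(0.4 - 0.0005 * t
                + 0.08 * np.sin(2 * np.pi * t / 12)
                + 0.03 * rng.normal(size=months))

# Grid-search small non-seasonal (p, d, q) orders; keep the lowest BIC.
best_order, best_bic = None, np.inf
for p in range(3):
    for d in range(2):
        for q in range(3):
            try:
                res = ARIMA(aod, order=(p, d, q)).fit()
            except Exception:
                continue                   # skip orders that fail to converge
            if res.bic < best_bic:
                best_order, best_bic = (p, d, q), res.bic
print("best order by BIC:", best_order, best_bic)
```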

  13. Life stress as a determinant of emotional well-being: development and validation of a Spanish-Language Checklist of Stressful Life Events

    PubMed Central

    Morote Rios, Roxanna; Hjemdal, Odin; Martinez Uribe, Patricia; Corveleyn, Jozef

    2014-01-01

    Objectives: To develop a screening instrument for investigating the prevalence and impact of stressful life events in Spanish-speaking Peruvian adults. Background: Researchers have demonstrated the causal connection between life stress and psychosocial and physical complaints. The need for contextually relevant and updated instruments has been also addressed. Methods: A sequential exploratory design combined qualitative and quantitative information from two studies: first, the content validity of 20 severe stressors (N = 46); then, a criterion-related validity process with affective symptoms as criteria (Hopkins Symptom Checklist (HSCL-25), N = 844). Results: 93% of the participants reported one to eight life events (X = 3.93, Mdn = 3, SD = 7.77). Events increase significantly until 60 years of age (Mdn = 6). Adults born in inland regions (Mdn = 4) or with secondary or technical education (Mdn = 5) reported significantly more stressors than participants born in Lima or with higher education. There are no differences by gender. Four-step hierarchical models showed that life stress is the best unique predictor (β) of HSCL anxiety, depression and general distress (p < .001). Age and gender are significant for the three criteria (p < .01, p < .001); lower education and unemployment are significant unique predictors of general distress and depression (p < .01; p < .05). Previously, the two-factor structure of the HSCL-25 was verified (Satorra–Bentler chi-square, root-mean-square error of approximation = 0.059; standardized root-mean-square residual = 0.055). Conclusion: The Spanish-Language Checklist of Stressful Life Events is a valid instrument to identify adults with significant levels of life stress and possible risk for mental and physical health (clinical utility). PMID:25750790

  14. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  15. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  16. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  17. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  18. Moderation analysis using a two-level regression model.

    PubMed

    Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott

    2014-10-01

    Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.

  19. Limited sampling strategy models for estimating the AUC of gliclazide in Chinese healthy volunteers.

    PubMed

    Huang, Ji-Han; Wang, Kun; Huang, Xiao-Hui; He, Ying-Chun; Li, Lu-Jin; Sheng, Yu-Cheng; Yang, Juan; Zheng, Qing-Shan

    2013-06-01

    The aim of this work is to reduce the sampling required for estimating the area under the gliclazide plasma concentration versus time curve within 60 h (AUC0-60t). Limited sampling strategy (LSS) models were established and validated by multiple regression using 4 or fewer gliclazide concentration values. Absolute prediction error (APE), root mean square error (RMSE) and visual prediction check were used as criteria. The results of jack-knife validation showed that 10 (25.0%) of the 40 LSS models based on the regression analysis were not within an APE of 15% using one concentration-time point. 90.2, 91.5 and 92.4% of the 40 LSS models were capable of prediction using 2, 3 and 4 points, respectively. Limited sampling strategies were developed and validated for estimating AUC0-60t of gliclazide. This study indicates that the implementation of an 80 mg dosage regimen enabled accurate predictions of AUC0-60t by the LSS model, and shows that 12, 6, 4 and 2 h after administration are the key sampling times. The combination of (12, 2 h), (12, 8, 2 h) or (12, 8, 4, 2 h) can be chosen as sampling hours for predicting AUC0-60t in practical application, according to requirements.

  1. Quality assessment of MEG-to-MRI coregistrations

    NASA Astrophysics Data System (ADS)

    Sonntag, Hermann; Haueisen, Jens; Maess, Burkhard

    2018-04-01

    For high precision in source reconstruction of magnetoencephalography (MEG) or electroencephalography data, high accuracy of the coregistration of sources and sensors is mandatory. Usually, the source space is derived from magnetic resonance imaging (MRI). In most cases, however, no quality assessment is reported for sensor-to-MRI coregistrations. When one is given, typically the root mean square (RMS) of point residuals is provided. It has been shown, however, that the RMS of residuals does not correlate with coregistration errors. We suggest using the target registration error (TRE) as the criterion for the quality of sensor-to-MRI coregistrations. TRE measures the effect of uncertainty in coregistrations at all points of interest. In total, 5544 data sets with sensor-to-head and 128 head-to-MRI coregistrations, from a single MEG laboratory, were analyzed. An adaptive Metropolis algorithm was used to estimate the optimal coregistration and to sample the coregistration parameters (rotation and translation). We found an average TRE between 1.3 and 2.3 mm at the head surface. Further, we observed a mean absolute difference in coregistration parameters between the Metropolis and iterative closest point algorithms of (1.9 ± 1.5)° and (1.1 ± 0.9) mm. A paired-sample t-test indicated a significant improvement in goal function minimization by using the Metropolis algorithm. The sampled parameters allowed computation of TRE on the entire grid of the MRI volume. Hence, we recommend the Metropolis algorithm for head-to-MRI coregistrations.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kato, Kentaro

    An optimal quantum measurement is considered for the so-called quasi-Bell states under the quantum minimax criterion. It is shown that the minimax-optimal POVM for the quasi-Bell states is given by its square-root measurement and is applicable to the teleportation of a superposition of two coherent states.

  3. Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malinowski, Kathleen T.; Fischell Department of Bioengineering, University of Maryland, College Park, MD; McAvoy, Thomas J.

    2012-04-01

    Purpose: To investigate the effect of tumor site, measurement precision, tumor-surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor-surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased during the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor-surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3-3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.

  4. Overcoming Species Boundaries in Peptide Identification with Bayesian Information Criterion-driven Error-tolerant Peptide Search (BICEPS)*

    PubMed Central

    Renard, Bernhard Y.; Xu, Buote; Kirchner, Marc; Zickmann, Franziska; Winter, Dominic; Korten, Simone; Brattig, Norbert W.; Tzur, Amit; Hamprecht, Fred A.; Steen, Hanno

    2012-01-01

    Currently, the reliable identification of peptides and proteins is only feasible when thoroughly annotated sequence databases are available. Although sequencing capacities continue to grow, many organisms remain without reliable, fully annotated reference genomes required for proteomic analyses. Standard database search algorithms fail to identify peptides that are not exactly contained in a protein database. De novo searches are generally hindered by their restricted reliability, and current error-tolerant search strategies are limited by global, heuristic tradeoffs between database and spectral information. We propose a Bayesian information criterion-driven error-tolerant peptide search (BICEPS) and offer an open source implementation based on this statistical criterion to automatically balance the information of each single spectrum and the database, while limiting the run time. We show that BICEPS performs as well as current database search algorithms when such algorithms are applied to sequenced organisms, whereas BICEPS only uses a remotely related organism database. For instance, we use a chicken instead of a human database corresponding to an evolutionary distance of more than 300 million years (International Chicken Genome Sequencing Consortium (2004) Sequence and comparative analysis of the chicken genome provide unique perspectives on vertebrate evolution. Nature 432, 695–716). We demonstrate the successful application to cross-species proteomics with a 33% increase in the number of identified proteins for a filarial nematode sample of Litomosoides sigmodontis. PMID:22493179

  5. A Very Efficient Transfer Function Bounding Technique on Bit Error Rate for Viterbi Decoded, Rate 1/N Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Lee, P. J.

    1984-01-01

    For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage since all unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K ≤ 18 and rate 1/3 with K ≤ 14. When more than one reported code with the same parameters is known, we select the code that minimizes the required signal-to-noise ratio for a desired bit error rate of 0.000001. This criterion for determining the goodness of a code had previously been found to be more useful than the maximum free distance criterion and was used in the code search procedures of very short constraint length codes. This very efficient technique can also be used for searches of longer constraint length codes.

  6. Three filters for visualization of phase objects with large variations of phase gradients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sagan, Arkadiusz; Antosiewicz, Tomasz J.; Szoplik, Tomasz

    2009-02-20

    We propose three amplitude filters for visualization of phase objects. They interact with the spectra of pure-phase objects in the frequency plane and are based on tangent and error functions as well as antisymmetric combination of square roots. The error function is a normalized form of the Gaussian function. The antisymmetric square-root filter is composed of two square-root filters to widen its spatial frequency spectral range. Their advantage over other known amplitude frequency-domain filters, such as linear or square-root graded ones, is that they allow high-contrast visualization of objects with large variations of phase gradients.

  7. Spiral tracing on a touchscreen is influenced by age, hand, implement, and friction.

    PubMed

    Heintz, Brittany D; Keenan, Kevin G

    2018-01-01

    Dexterity impairments are well documented in older adults, though it is unclear how these influence touchscreen manipulation. This study examined age-related differences while tracing on high- and low-friction touchscreens using the finger or stylus. 26 young and 24 older adults completed an Archimedes spiral tracing task on a touchscreen mounted on a force sensor. Root mean square error was calculated to quantify performance. Root mean square error increased by 29.9% for older vs. young adults using the fingertip, but was similar to young adults when using the stylus. Although other variables (e.g., touchscreen usage, sensation, and reaction time) differed between age groups, these variables were not related to increased error in older adults while using their fingertip. Root mean square error also increased on the low-friction surface for all subjects. These findings suggest that utilizing a stylus and increasing surface friction may improve touchscreen use in older adults.

  8. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  9. Some Results on Mean Square Error for Factor Score Prediction

    ERIC Educational Resources Information Center

    Krijnen, Wim P.

    2006-01-01

    For the confirmatory factor model a series of inequalities is given with respect to the mean square error (MSE) of three main factor score predictors. The eigenvalues of these MSE matrices are a monotonic function of the eigenvalues of the matrix Γ_ρ = Θ^{1/2} Λ_ρ′ Ψ_ρ^…

  10. Weighted linear regression using D2H and D2 as the independent variables

    Treesearch

    Hans T. Schreuder; Michael S. Williams

    1998-01-01

    Several error structures for weighted regression equations used for predicting volume were examined for 2 large data sets of felled and standing loblolly pine trees (Pinus taeda L.). The generally accepted model with variance of error proportional to the value of the covariate squared ( D2H = diameter squared times height or D...

  11. The Relationship between Root Mean Square Error of Approximation and Model Misspecification in Confirmatory Factor Analysis Models

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2012-01-01

    The fit index root mean square error of approximation (RMSEA) is extremely popular in structural equation modeling. However, its behavior under different scenarios remains poorly understood. The present study generates continuous curves where possible to capture the full relationship between RMSEA and various "incidental parameters," such as…

  12. A method of bias correction for maximal reliability with dichotomous measures.

    PubMed

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  13. Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values

    DTIC Science & Technology

    2016-12-01

    MMSE (minimum mean-square error) target sample estimation using non-orthogonal basis... orthogonal, they can still be used in a minimum mean-square error (MMSE) estimator that models the object echo as a weighted sum of the multi-aspect basis... problem. Minimum mean-square error (MMSE) estimation is applied to target imaging with synthetic aperture sonar.

  14. On the appropriateness of applying chi-square distribution based confidence intervals to spectral estimates of helicopter flyover data

    NASA Technical Reports Server (NTRS)

    Rutledge, Charles K.

    1988-01-01

    The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The appropriateness at the tonally associated frequencies was indicated by the presence of bias error and noncomformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
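
    For a spectral estimate Ŝ with ν equivalent degrees of freedom, the chi-square based confidence interval discussed above takes the standard form [ν·Ŝ/χ²_{ν,1−α/2}, ν·Ŝ/χ²_{ν,α/2}], as in this sketch (the degrees-of-freedom value is an assumed example):

```python
from scipy.stats import chi2

def psd_confidence_interval(s_hat, dof, alpha=0.05):
    """Chi-square based (1 - alpha) confidence interval for a spectral
    estimate s_hat with dof equivalent degrees of freedom."""
    lower = dof * s_hat / chi2.ppf(1.0 - alpha / 2.0, dof)
    upper = dof * s_hat / chi2.ppf(alpha / 2.0, dof)
    return lower, upper

# Example: a Welch average of 32 independent segments has about 64 dof.
print(psd_confidence_interval(s_hat=1.0, dof=64))
```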

  15. Adaptive slab laser beam quality improvement using a weighted least-squares reconstruction algorithm.

    PubMed

    Chen, Shanqiu; Dong, LiZhi; Chen, XiaoJun; Tan, Yi; Liu, Wenjin; Wang, Shuai; Yang, Ping; Xu, Bing; Ye, YuTang

    2016-04-10

    Adaptive optics is an important technology for improving beam quality in solid-state slab lasers. However, there are uncorrectable aberrations in partial areas of the beam. The criterion of the conventional least-squares reconstruction method is insensitive to zones with small aberrations, which hinders those zones from being corrected further. In this paper, a weighted least-squares reconstruction method is proposed to improve the relative sensitivity of zones with small aberrations and to further improve beam quality. Relatively small weights are applied to the zones with large residual aberrations. Comparisons of results show that peak intensity in the far field improved from 1242 analog-digital units (ADU) to 2248 ADU, and beam quality β improved from 2.5 to 2.0. This indicates that the weighted least-squares method has better performance than the least-squares reconstruction method when there are large zonal uncorrectable aberrations in the slab laser system.
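
    For a linear slope-to-phase measurement model, the weighted reconstruction described here reduces to a weighted normal-equations solve. A minimal sketch; the influence matrix A, measurement vector b, and weight vector w are illustrative placeholders:

      import numpy as np

      def weighted_least_squares(A, b, w):
          # Solve min_x || W^(1/2) (A x - b) ||^2 with diagonal weights w;
          # zones with large residual aberrations receive small weights.
          W = np.diag(w)
          return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)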

  16. Eta Squared, Partial Eta Squared, and Misreporting of Effect Size in Communication Research.

    ERIC Educational Resources Information Center

    Levine, Timothy R.; Hullett, Craig R.

    2002-01-01

    Alerts communication researchers to potential errors stemming from the use of SPSS (Statistical Package for the Social Sciences) to obtain estimates of eta squared in analysis of variance (ANOVA). Strives to clarify issues concerning the development and appropriate use of eta squared and partial eta squared in ANOVA. Discusses the reporting of…
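
    The distinction the article draws can be stated compactly: eta squared and partial eta squared share a numerator but differ in denominator, which is why the partial form reported by SPSS can exceed eta squared in multifactor designs. A small sketch with illustrative names:

      def eta_squared(ss_effect, ss_total):
          # Proportion of total variance attributable to the effect.
          return ss_effect / ss_total

      def partial_eta_squared(ss_effect, ss_error):
          # Other effects are removed from the denominator, so this is
          # >= eta squared whenever the design has additional factors.
          return ss_effect / (ss_effect + ss_error)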

  17. In-vivo digital wavefront sensing using swept source OCT

    PubMed Central

    Kumar, Abhishek; Wurster, Lara M.; Salas, Matthias; Ginner, Laurin; Drexler, Wolfgang; Leitgeb, Rainer A.

    2017-01-01

    Sub-aperture based digital adaptive optics is demonstrated in a fiber-based point-scanning optical coherence tomography system using a 1060 nm swept source laser. To detect optical aberrations in vivo, a small lateral field of view of ~150×150 μm² is scanned on the sample at a high volume rate of 17 Hz (~1.3 kHz B-scan rate) to avoid any significant lateral and axial motion of the sample, and is used as a “guide star” for the sub-aperture based DAO. The proof of principle is demonstrated using a micro-beads phantom sample, wherein a significant root mean square wavefront error (RMS WFE) of 1.48 waves (> 1 μm) is detected. In-vivo aberration measurement with an RMS WFE of 0.33 waves, ~5 times higher than the Maréchal criterion of 1/14 waves for diffraction-limited performance, is shown for a human retinal OCT. An attempt has been made to validate the experimental results against a conventional Shack-Hartmann wavefront sensor within reasonable limitations. PMID:28717573

  18. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on mean square error (MSE) measure between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to existing GMM-based model and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.

  19. Prediction of acoustic feature parameters using myoelectric signals.

    PubMed

    Lee, Ki-Seung

    2010-07-01

    It is well-known that a clear relationship exists between human voices and myoelectric signals (MESs) from the area of the speaker's mouth. In this study, we utilized this information to implement a speech synthesis scheme in which MES alone was used to predict the parameters characterizing the vocal-tract transfer function of specific speech signals. Several feature parameters derived from MES were investigated to find the optimal feature for maximization of the mutual information between the acoustic and the MES features. After the optimal feature was determined, an estimation rule for the acoustic parameters was proposed, based on a minimum mean square error (MMSE) criterion. In a preliminary study, 60 isolated words were used for both objective and subjective evaluations. The results showed that the average Euclidean distance between the original and predicted acoustic parameters was reduced by about 30% compared with the average Euclidean distance of the original parameters. The intelligibility of the synthesized speech signals using the predicted features was also evaluated. A word-level identification ratio of 65.5% and a syllable-level identification ratio of 73% were obtained through a listening test.
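
    The MMSE criterion used here has a convenient closed form when the acoustic and MES features are modeled as jointly Gaussian: the estimate is the conditional mean. A generic linear-MMSE sketch, not the paper's exact estimator (all symbol names are illustrative):

      import numpy as np

      def lmmse_estimate(y, mu_x, mu_y, S_xy, S_yy):
          # Linear MMSE estimate of x given observation y:
          #   x_hat = mu_x + S_xy S_yy^{-1} (y - mu_y),
          # which minimizes E[||x - x_hat||^2] among linear estimators.
          gain = S_xy @ np.linalg.inv(S_yy)
          return mu_x + gain @ (y - mu_y)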

  20. Speckle noise reduction technique for Lidar echo signal based on self-adaptive pulse-matching independent component analysis

    NASA Astrophysics Data System (ADS)

    Xu, Fan; Wang, Jiaxing; Zhu, Daiyin; Tu, Qi

    2018-04-01

    Speckle noise has always been a particularly tricky problem in improving the ranging capability and accuracy of Lidar systems, especially in harsh environments. Currently, effective speckle de-noising techniques are extremely scarce and should be further developed. In this study, a speckle noise reduction technique is proposed based on independent component analysis (ICA). Since the shape of the laser pulse itself normally changes little, the authors employed the laser source as a reference pulse and executed the ICA decomposition to find the optimal matching position. To make the algorithm self-adaptive, a local mean square error (MSE) was defined as the criterion for assessing the iteration results. The experimental results demonstrate that the self-adaptive pulse-matching ICA (PM-ICA) method can effectively decrease the speckle noise and recover the useful Lidar echo signal component with high quality. In particular, the proposed method achieves a 4 dB greater improvement in signal-to-noise ratio (SNR) than a traditional homomorphic wavelet method.

  1. Potential Seasonal Terrestrial Water Storage Monitoring from GPS Vertical Displacements: A Case Study in the Lower Three-Rivers Headwater Region, China

    PubMed Central

    Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang

    2016-01-01

    This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC), with careful pre- and post-processing, to estimate the seasonal crustal deformation in response to the hydrological loading in the lower three-rivers headwater region of southwest China, followed by inferring the annual equivalent water height (EWH) changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2–3.9 cm and 4.8–5.2 cm, respectively. In the research areas, the EWH changes in the Lancang basin are larger than in the other regions, with a maximum of 21.8–24.7 cm and a minimum of 3.1–6.9 cm. PMID:27657064

  2. Improved cutback method measuring beat-length for high-birefringence optical fiber by fitting data of photoelectric signal

    NASA Astrophysics Data System (ADS)

    Shi, Zhi-Dong; Lin, Jian-Qiang; Bao, Huan-Huan; Liu, Shu; Xiang, Xue-Nong

    2008-03-01

    A photoelectric measurement system for measuring the beat length of birefringent fiber was set up, including a rotating-wave-plate polarimeter using a single photodiode. Two improved cutback methods, suitable for measuring beat lengths in the millimeter range in high-birefringence fiber, are proposed through a data-processing technique. The cut length need not be restricted to less than one centimeter, so an automatic cleaving machine can be used freely, avoiding the low efficiency and poor success rate of a manually operated cleaving blade. The first method fits the parameters of a saw-tooth function of the trial beat length by the criterion of minimum squared deviations, with no special limitation on the cut length. The second method applies linear fitting within divided length ranges; the only restriction is that the increment between successive cut lengths be less than one beat length. Experiments with both methods were performed on a section of holey high-birefringence fiber. The detection error of the beat length is discussed and the advantages of the two methods are compared.

  3. Algorithmic Classification of Five Characteristic Types of Paraphasias.

    PubMed

    Fergadiotis, Gerasimos; Gorman, Kyle; Bedrick, Steven

    2016-12-01

    This study was intended to evaluate a series of algorithms developed to perform automatic classification of paraphasic errors (formal, semantic, mixed, neologistic, and unrelated errors). We analyzed 7,111 paraphasias from the Moss Aphasia Psycholinguistics Project Database (Mirman et al., 2010) and evaluated the classification accuracy of 3 automated tools. First, we used frequency norms from the SUBTLEXus database (Brysbaert & New, 2009) to differentiate nonword errors and real-word productions. Then we implemented a phonological-similarity algorithm to identify phonologically related real-word errors. Last, we assessed the performance of a semantic-similarity criterion that was based on word2vec (Mikolov, Yih, & Zweig, 2013). Overall, the algorithmic classification replicated human scoring for the major categories of paraphasias studied with high accuracy. The tool that was based on the SUBTLEXus frequency norms was more than 97% accurate in making lexicality judgments. The phonological-similarity criterion was approximately 91% accurate, and the overall classification accuracy of the semantic classifier ranged from 86% to 90%. Overall, the results highlight the potential of tools from the field of natural language processing for the development of highly reliable, cost-effective diagnostic tools suitable for collecting high-quality measurement data for research and clinical purposes.

  4. Chinese cross-cultural adaptation and validation of the Foot Function Index as tool to measure patients with foot and ankle functional limitations.

    PubMed

    González-Sánchez, Manuel; Ruiz-Muñoz, Maria; Li, Guang Zhi; Cuesta-Vargas, Antonio I

    2018-08-01

    To perform a cross-cultural adaptation and validation of the Foot Function Index (FFI) questionnaire in order to develop the Chinese version. Three hundred and six patients with foot and ankle neuromusculoskeletal diseases participated in this observational study. Construct validity, internal consistency and criterion validity were calculated for the FFI Chinese version after the translation and transcultural adaptation process. Internal consistency ranged from 0.996 to 0.998. Test-retest analysis ranged from 0.985 to 0.994; minimal detectable change 90: 2.270; standard error of measurement: 0.973. Load distribution of the three factors had an eigenvalue greater than 1. The chi-square value was 9738.14 (p < 0.001). Correlations between Factor 1 and the other two factors were significant: r = -0.634 (Factor 2) and r = -0.191 (Factor 3). The Foot Function Index (Taiwan Version), Short-Form 12 (Version 2) and EuroQol-5D were used for criterion validity. Factors 1 and 2 showed significant correlations with 15/16 and 14/16 scales and subscales, respectively. The psychometric characteristics of the Foot Function Index Chinese version were good to excellent. Chinese researchers and clinicians may use this tool for foot and ankle assessment and monitoring. Implications for rehabilitation: A cross-cultural adaptation of the FFI has been carried out from the original version to Chinese. Consistent results and satisfactory psychometric properties of the Foot Function Index Chinese version have been reported. Chinese-speaking researchers and clinicians could use the FFI-Ch as a tool to assess patients with foot disease.

  5. RFI in hybrid loops - Simulation and experimental results.

    NASA Technical Reports Server (NTRS)

    Ziemer, R. E.; Nelson, D. R.; Raghavan, H. R.

    1972-01-01

    A digital simulation of an imperfect second-order hybrid phase-locked loop (HPLL) operating in radio frequency interference (RFI) is described. Its performance is characterized in terms of phase error variance and phase error probability density function (PDF). Monte-Carlo simulation is used to show that the HPLL can be superior to the conventional phase-locked loops in RFI backgrounds when minimum phase error variance is the goodness criterion. Similar experimentally obtained data are given in support of the simulation data.

  6. Minimal Polynomial Method for Estimating Parameters of Signals Received by an Antenna Array

    NASA Astrophysics Data System (ADS)

    Ermolaev, V. T.; Flaksman, A. G.; Elokhin, A. V.; Kuptsov, V. V.

    2018-01-01

    The effectiveness of the projection minimal polynomial method for solving the problem of determining the number of sources of signals acting on an antenna array (AA) with an arbitrary configuration and their angular directions has been studied. The method proposes estimating the degree of the minimal polynomial of the correlation matrix (CM) of the input process in the AA on the basis of a statistically validated root-mean-square criterion. Special attention is paid to the case of the ultrashort sample of the input process when the number of samples is considerably smaller than the number of AA elements, which is important for multielement AAs. It is shown that the proposed method is more effective in this case than methods based on the AIC (Akaike's Information Criterion) or minimum description length (MDL) criterion.

  7. A passivity criterion for sampled-data bilateral teleoperation systems.

    PubMed

    Jazayeri, Ali; Tavakoli, Mahdi

    2013-01-01

    A teleoperation system consists of a teleoperator, a human operator, and a remote environment. Conditions involving system and controller parameters that ensure teleoperator passivity can serve as control design guidelines to attain maximum teleoperation transparency while maintaining system stability. In this paper, sufficient conditions for teleoperator passivity are derived for when position error-based controllers are implemented in discrete-time. This new analysis is necessary because discretization causes energy leaks and does not necessarily preserve the passivity of the system. The proposed criterion for sampled-data teleoperator passivity imposes lower bounds on the damping of the teleoperator's robots, an upper bound on the sampling time, and bounds on the control gains. The criterion is verified through simulations and experiments.

  8. Mesh refinement in finite element analysis by minimization of the stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1989-01-01

    Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the generated mesh is appropriate for the problem at hand. Since the accuracy of the finite element results is mesh dependent, mesh selection forms a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges to a value whose error is below a predetermined tolerance. A posteriori methods use error indicators, developed from interpolation and approximation theory, for mesh refinement. Others use criteria such as strain energy density variation and stress contours to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. Alternatively, the a priori methods available until now use geometrical parameters, such as element aspect ratio, and are therefore not adaptive by nature. Here, an adaptive a priori method is developed. The criterion is that minimizing the trace of the stiffness matrix with respect to the nodal coordinates minimizes the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to the development of computer algorithms. When the procedure is used in conjunction with a posteriori methods of grid refinement, fewer refinement iterations and fewer degrees of freedom are required for convergence than when the procedure is not used. The resulting mesh is shown to have a uniform distribution of stiffness among the nodes and elements, which leads to uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.

  9. POOLMS: A computer program for fitting and model selection for two level factorial replication-free experiments

    NASA Technical Reports Server (NTRS)

    Amling, G. E.; Holms, A. G.

    1973-01-01

    A computer program is described that performs a statistical multiple-decision procedure called chain pooling. The number of mean squares assigned to the error variance is conditioned on the relative magnitudes of the mean squares. Model selection is done according to user-specified levels of type 1 or type 2 error probabilities.

  10. The Development of an Instrument for Measuring Healing

    PubMed Central

    Meza, James Peter; Fahoome, Gail F.

    2008-01-01

    PURPOSE Our lack of ability to measure healing attributes impairs our ability to research the topic. The specific aim of this project is to describe the psychological and social construct of healing and to create a valid and reliable measurement scale for attributes of healing. METHODS A content expert conducted a domain analysis examining the existing literature of midrange theories of healing. Theme saturation of content sampling was ensured by brainstorming more than 220 potential items. Selection of items was sequential: pile sorting and data reduction, with factor analysis of a mailed 54-item questionnaire. Criterion validity (convergent and divergent) and temporal reliability were established using a second mailing of the development version of the instrument. Construct validity was judged with structural equation modeling for goodness of fit. RESULTS Cronbach’s α of the original questionnaire was .869 and the final scale was .862. The test-retest reliability was .849. Eigenvalues for the 2 factors were 8 and 4, respectively. Divergent and convergent validity using the Spann-Fischer Codependency Scale and SF-36 mental health and emotional subscales were consistent with predictions. The root mean square error of approximation was 0.066 and Bentler’s Comparative Fit Index was 0.871. Root mean square residual was 0.102. CONCLUSIONS We developed a valid and reliable measurement scale for attributes of healing, which we named the Self-Integration Scale v 2.1. By creating a new variable, new areas of research in humanistic health care are possible. PMID:18626036

  11. MERGANSER: an empirical model to predict fish and loon mercury in New England lakes.

    PubMed

    Shanley, James B; Moore, Richard; Smith, Richard A; Miller, Eric K; Simcox, Alison; Kamman, Neil; Nacci, Diane; Robinson, Keith; Johnston, John M; Hughes, Melissa M; Johnston, Craig; Evers, David; Williams, Kate; Graham, John; King, Susannah

    2012-04-17

    MERGANSER (MERcury Geo-spatial AssessmeNtS for the New England Region) is an empirical least-squares multiple regression model using mercury (Hg) deposition and readily obtainable lake and watershed features to predict fish (fillet) and common loon (blood) Hg in New England lakes. We modeled lakes larger than 8 ha (4404 lakes), using 3470 fish (12 species) and 253 loon Hg concentrations from 420 lakes. MERGANSER predictor variables included Hg deposition, watershed alkalinity, percent wetlands, percent forest canopy, percent agriculture, drainage area, population density, mean annual air temperature, and watershed slope. The model returns fish or loon Hg for user-entered species and fish length. MERGANSER explained 63% of the variance in fish and loon Hg concentrations. MERGANSER predicted that 32-cm smallmouth bass had a median Hg concentration of 0.53 μg g(-1) (root-mean-square error 0.27 μg g(-1)) and exceeded EPA's recommended fish Hg criterion of 0.3 μg g(-1) in 90% of New England lakes. Common loon had a median Hg concentration of 1.07 μg g(-1) and was in the moderate or higher risk category of >1 μg g(-1) Hg in 58% of New England lakes. MERGANSER can be applied to target fish advisories to specific unmonitored lakes, and for scenario evaluation, such as the effect of changes in Hg deposition, land use, or warmer climate on fish and loon mercury.

  12. A suggestion for computing objective function in model calibration

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang

    2014-01-01

    A parameter-optimization process (model calibration) is usually required for numerical model applications, which involves the use of an objective function to determine the model cost (model-data errors). The sum of square errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, the ‘square error’ calculation is more sensitive to extreme or high values. We therefore propose that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies—a hydrological model calibration and a biogeochemical model calibration—to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that the ‘absolute error’ criteria (SAR and SARD) are superior to the ‘square error’ criteria (SSR and SSRD) for calculating the objective function in model calibration, with SAR behaving best (least error and highest efficiency). This study suggests that SSR may be overused in real applications, and that SAR is a reasonable choice in common optimization implementations that do not emphasize either high or low values (e.g., modeling to support resources management).
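
    The four candidate objective functions are simple to state in code; the sketch below assumes observation vectors with no zeros for the relative-deviation variants:

      import numpy as np

      def ssr(obs, sim):   # sum of squared errors
          return np.sum((sim - obs) ** 2)

      def sar(obs, sim):   # sum of absolute errors
          return np.sum(np.abs(sim - obs))

      def ssrd(obs, sim):  # sum of squared relative deviations
          return np.sum(((sim - obs) / obs) ** 2)

      def sard(obs, sim):  # sum of absolute relative deviations
          return np.sum(np.abs((sim - obs) / obs))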

  13. [Formula: see text]-regularized recursive total least squares based sparse system identification for the error-in-variables.

    PubMed

    Lim, Jun-Seok; Pang, Hee-Suk

    2016-01-01

    In this paper an [Formula: see text]-regularized recursive total least squares (RTLS) algorithm is considered for sparse system identification. Although recursive least squares (RLS) has been successfully applied in sparse system identification, the estimation performance of RLS-based algorithms becomes worse when both input and output are contaminated by noise (the error-in-variables problem). We propose an algorithm to handle the error-in-variables problem. The proposed [Formula: see text]-RTLS algorithm is an RLS-like iteration using [Formula: see text] regularization. The proposed algorithm not only gives excellent performance but also reduces the required complexity through effective handling of the inversion matrix. Simulations demonstrate the superiority of the proposed [Formula: see text]-regularized RTLS in the sparse system identification setting.

  14. An algorithm for propagating the square-root covariance matrix in triangular form

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Choe, C. Y.

    1976-01-01

    A method for propagating the square root of the state error covariance matrix in lower triangular form is described. The algorithm can be combined with any triangular square-root measurement update algorithm to obtain a triangular square-root sequential estimation algorithm. The triangular square-root algorithm compares favorably with the conventional sequential estimation algorithm with regard to computation time.
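
    One standard way to propagate a triangular square-root factor is through a QR factorization of stacked factors; the sketch below illustrates that general idea and is not necessarily the specific algorithm of the paper:

      import numpy as np

      def propagate_sqrt_cov(S, F, Q_sqrt):
          # Given P = S S^T (S lower triangular), state transition F, and a
          # square root of the process noise Q, propagate to
          #   P+ = F P F^T + Q = M M^T with M = [F S, Q_sqrt].
          # QR of M^T gives M^T = Q R, so P+ = R^T R and S+ = R^T is
          # lower triangular (row signs may differ; P+ is unaffected).
          M = np.hstack([F @ S, Q_sqrt])
          R = np.linalg.qr(M.T, mode='r')
          return R.T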

  15. Fitting a function to time-dependent ensemble averaged data.

    PubMed

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlations), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
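
    For a linear model, the paper's idea — fit with ordinary weighted least squares but propagate the full covariance matrix when estimating parameter errors — takes a sandwich form. A sketch under that linear-model assumption (the published WLS-ICE method handles arbitrary fit functions):

      import numpy as np

      def wls_with_correlated_errors(A, y, w, C):
          # Fit theta = (A^T W A)^{-1} A^T W y with diagonal weights w, then
          # propagate the full data covariance C (temporal correlations
          # included) through the same linear operator.
          W = np.diag(w)
          G = np.linalg.inv(A.T @ W @ A) @ A.T @ W
          theta = G @ y
          cov_theta = G @ C @ G.T   # rigorous errors despite a simple fit
          return theta, cov_theta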

  16. Quality assessment of gasoline using comprehensive two-dimensional gas chromatography combined with unfolded partial least squares: A reliable approach for the detection of gasoline adulteration.

    PubMed

    Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan

    2016-01-01

    Comprehensive two-dimensional gas chromatography and flame ionization detection combined with unfolded partial least squares is proposed as a simple, fast and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline corrected using a two-dimensional asymmetric least squares algorithm. The number of significant partial least squares components to build the model is determined using the minimum value of the root-mean-square error of leave-one-out cross-validation; the optimal number was 4. In this regard, blends of gasoline with kerosene, white spirit and paint thinner, as frequently used adulterants, are used to make calibration samples. Appropriate statistical parameters of regression coefficient of 0.996-0.998, root-mean-square error of prediction of 0.005-0.010 and relative error of prediction of 1.54-3.82% for the calibration set show the reliability of the developed method. In addition, the developed method is externally validated with three samples in a validation set (with a relative error of prediction below 10.0%). Finally, to test the applicability of the proposed strategy for the analysis of real samples, five real gasoline samples collected from gas stations are used; the gasoline proportions were in the range of 70-85%. Also, the relative standard deviations were below 8.5% for different samples in the prediction set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Derivation of formulas for root-mean-square errors in location, orientation, and shape in triangulation solution of an elongated object in space

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1974-01-01

    Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.

  18. Curve Fitting via the Criterion of Least Squares. Applications of Algebra and Elementary Calculus to Curve Fitting. [and] Linear Programming in Two Dimensions: I. Applications of High School Algebra to Operations Research. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Units 321, 453.

    ERIC Educational Resources Information Center

    Alexander, John W., Jr.; Rosenberg, Nancy S.

    This document consists of two modules. The first of these views applications of algebra and elementary calculus to curve fitting. The user is provided with information on how to: 1) construct scatter diagrams; 2) choose an appropriate function to fit specific data; 3) understand the underlying theory of least squares; 4) use a computer program to…

  19. Least-squares model-based halftoning

    NASA Astrophysics Data System (ADS)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be implemented with the Viterbi algorithm. Unfortunately, no closed form solution can be found in two dimensions. The two-dimensional least squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high quality documents using high fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach permits the halftoner to be tuned to the individual printer, whose characteristics may vary considerably from those of other printers, for example, write-black vs. write-white laser printers.

  1. Does the sensorimotor system minimize prediction error or select the most likely prediction during object lifting?

    PubMed Central

    McGregor, Heather R.; Pun, Henry C. H.; Buckingham, Gavin; Gribble, Paul L.

    2016-01-01

    The human sensorimotor system is routinely capable of making accurate predictions about an object's weight, which allows for energetically efficient lifts and prevents objects from being dropped. Often, however, poor predictions arise when the weight of an object can vary and sensory cues about object weight are sparse (e.g., picking up an opaque water bottle). The question arises, what strategies does the sensorimotor system use to make weight predictions when one is dealing with an object whose weight may vary? For example, does the sensorimotor system use a strategy that minimizes prediction error (minimal squared error) or one that selects the weight that is most likely to be correct (maximum a posteriori)? In this study we dissociated the predictions of these two strategies by having participants lift an object whose weight varied according to a skewed probability distribution. We found, using a small range of weight uncertainty, that four indexes of sensorimotor prediction (grip force rate, grip force, load force rate, and load force) were consistent with a feedforward strategy that minimizes the square of prediction errors. These findings match research in the visuomotor system, suggesting parallels in underlying processes. We interpret our findings within a Bayesian framework and discuss the potential benefits of using a minimal squared error strategy. NEW & NOTEWORTHY Using a novel experimental model of object lifting, we tested whether the sensorimotor system models the weight of objects by minimizing lifting errors or by selecting the statistically most likely weight. We found that the sensorimotor system minimizes the square of prediction errors for object lifting. This parallels the results of studies that investigated visually guided reaching, suggesting an overlap in the underlying mechanisms between tasks that involve different sensory systems. PMID:27760821
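
    The two strategies dissociated here make different point predictions for a skewed weight distribution: the minimal-squared-error prediction is the distribution mean, whereas the maximum a posteriori prediction is its mode. A toy illustration with hypothetical numbers:

      import numpy as np

      rng = np.random.default_rng(0)
      # Skewed "object weight" distribution (hypothetical values, in grams).
      weights = rng.gamma(shape=2.0, scale=100.0, size=100_000) + 200.0

      mse_prediction = weights.mean()        # minimizes E[(w - p)^2]
      hist, edges = np.histogram(weights, bins=200)
      map_prediction = edges[hist.argmax()]  # approximate mode (MAP choice)

      print(mse_prediction, map_prediction)  # the mean sits above the mode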

  2. Variable forgetting factor mechanisms for diffusion recursive least squares algorithm in sensor networks

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.

    2017-12-01

    In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses of the mean and mean square performance of the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve superior performance to the existing DRLS algorithm with fixed forgetting factor when applied to scenarios of distributed parameter and spectrum estimation. The simulation results also demonstrate a good match with our analytical expressions.
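
    The mechanism can be pictured as ordinary RLS recursions in which the forgetting factor is recomputed each step from the error signal. The adjustment rule below is purely illustrative; the paper derives its own low-complexity mechanisms for the diffusion setting:

      import numpy as np

      def vff_rls(X, d, lam_min=0.95, lam_max=0.999, gamma=10.0, delta=100.0):
          # X: (samples, taps) regressors; d: desired signal.
          n = X.shape[1]
          w, P = np.zeros(n), delta * np.eye(n)
          for x, dk in zip(X, d):
              e = dk - w @ x                         # a priori error
              # Large errors shrink lambda (fast tracking); small errors
              # grow it toward lam_max (low misadjustment).
              lam = lam_min + (lam_max - lam_min) * np.exp(-gamma * e * e)
              k = P @ x / (lam + x @ P @ x)          # gain vector
              w = w + k * e
              P = (P - np.outer(k, x @ P)) / lam     # inverse-correlation update
          return w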

  3. Minimizing false positive error with multiple performance validity tests: response to Bilder, Sugar, and Hellemann (2014 this issue).

    PubMed

    Larrabee, Glenn J

    2014-01-01

    Bilder, Sugar, and Hellemann (2014 this issue) contend that empirical support is lacking for use of multiple performance validity tests (PVTs) in evaluation of the individual case, differing from the conclusions of Davis and Millis (2014), and Larrabee (2014), who found no substantial increase in false positive rates using a criterion of failure of ≥ 2 PVTs and/or Symptom Validity Tests (SVTs) out of multiple tests administered. Reconsideration of data presented in Larrabee (2014) supports a criterion of ≥ 2 out of up to 7 PVTs/SVTs, as keeping false positive rates close to and in most cases below 10% in cases with bona fide neurologic, psychiatric, and developmental disorders. Strategies to minimize risk of false positive error are discussed, including (1) adjusting individual PVT cutoffs or criterion for number of PVTs failed, for examinees who have clinical histories placing them at risk for false positive identification (e.g., severe TBI, schizophrenia), (2) using the history of the individual case to rule out conditions known to result in false positive errors, (3) using normal performance in domains mimicked by PVTs to show that sufficient native ability exists for valid performance on the PVT(s) that have been failed, and (4) recognizing that as the number of PVTs/SVTs failed increases, the likelihood of valid clinical presentation decreases, with a corresponding increase in the likelihood of invalid test performance and symptom report.

  4. The test-retest reliability and criterion validity of a high-intensity, netball-specific circuit test: The Net-Test.

    PubMed

    Mungovan, Sean F; Peralta, Paula J; Gass, Gregory C; Scanlan, Aaron T

    2018-04-12

    To examine the test-retest reliability and criterion validity of a high-intensity, netball-specific fitness test. Repeated measures, within-subject design. Eighteen female netball players competing in an international competition completed a trial of the Net-Test, which consists of 14 timed netball-specific movements. Players also completed a series of netball-relevant criterion fitness tests. Ten players completed an additional Net-Test trial one week later to assess test-retest reliability using intraclass correlation coefficient (ICC), typical error of measurement (TEM), and coefficient of variation (CV). The typical error of estimate expressed as CV and Pearson correlations were calculated between each criterion test and Net-Test performance to assess criterion validity. Five movements during the Net-Test displayed moderate ICC (0.84-0.90) and two movements displayed high ICC (0.91-0.93). Seven movements and heart rate taken during the Net-Test held low CV (<5%) with values ranging from 1.7 to 9.5% across measures. Total time (41.63±2.05s) during the Net-Test possessed low CV and significant (p<0.05) correlations with 10m sprint time (1.98±0.12s; CV=4.4%, r=0.72), 20m sprint time (3.38±0.19s; CV=3.9%, r=0.79), 505 Change-of-Direction time (2.47±0.08s; CV=2.0%, r=0.80); and maximum oxygen uptake (46.59±2.58 mL·kg⁻¹·min⁻¹; CV=4.5%, r=-0.66). The Net-Test possesses acceptable reliability for the assessment of netball fitness. Further, the high criterion validity for the Net-Test suggests a range of important netball-specific fitness elements are assessed in combination. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  5. Application of Rapid Visco Analyser (RVA) viscograms and chemometrics for maize hardness characterisation.

    PubMed

    Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena

    2015-04-15

    It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. Therefore, the RVA can complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulting prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower than or of the same order as the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. [Establishing and applying of autoregressive integrated moving average model to predict the incidence rate of dysentery in Shanghai].

    PubMed

    Li, Jian; Wu, Huan-Yu; Li, Yan-Ting; Jin, Hui-Ming; Gu, Bao-Ke; Yuan, Zheng-An

    2010-01-01

    To explore the feasibility of establishing and applying an autoregressive integrated moving average (ARIMA) model to predict the incidence rate of dysentery in Shanghai, so as to provide a theoretical basis for the prevention and control of dysentery. An ARIMA model was established based on the monthly incidence rate of dysentery in Shanghai from 1990 to 2007. The parameters of the model were estimated through the unconditional least squares method, the structure was determined according to the criterion of residual uncorrelatedness, and the goodness-of-fit was assessed through the Akaike information criterion (AIC) and Schwarz Bayesian criterion (SBC). The constructed optimal model was applied to predict the incidence rate of dysentery in Shanghai in 2008, and the validity of the model was evaluated by comparing the predicted incidence rate with the actual one. The incidence rate of dysentery in 2010 was then predicted by the ARIMA model based on the incidence rates from January 1990 to June 2009. The model ARIMA(1,1,1)(0,1,2)_12 fitted the incidence rate well, with the autoregressive coefficient (AR1 = 0.443), the moving average coefficient (MA1 = 0.806) and the seasonal moving average coefficients (SMA1 = 0.543, SMA2 = 0.321) all statistically significant (P < 0.01). AIC and SBC were 2.878 and 16.131, respectively, and the prediction error was white noise. The fitted model was (1 - 0.443B)(1 - B)(1 - B^12)Z_t = (1 - 0.806B)(1 - 0.543B^12)(1 - 0.321B^24)μ_t. The predicted incidence rate in 2008 was consistent with the actual one, with a relative error of 6.78%. The predicted incidence rate of dysentery in 2010, based on the incidence rates from January 1990 to June 2009, was 9.390 per 100 thousand. The ARIMA model can be used to fit the changes in the incidence rate of dysentery and to forecast the future incidence rate in Shanghai. It is a high-precision predictive model for short-term forecasting.
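
    For readers who want to reproduce this class of model, the reported ARIMA(1,1,1)(0,1,2)_12 structure can be fit with statsmodels; the series below is a synthetic placeholder standing in for the Shanghai incidence data:

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      # Placeholder monthly series with an annual cycle (not the real data).
      idx = pd.date_range('1990-01', periods=216, freq='MS')
      rng = np.random.default_rng(1)
      rate = pd.Series(10 + 3 * np.sin(2 * np.pi * idx.month / 12)
                       + rng.normal(0, 0.5, len(idx)), index=idx)

      model = SARIMAX(rate, order=(1, 1, 1), seasonal_order=(0, 1, 2, 12))
      result = model.fit(disp=False)
      print(result.aic, result.bic)      # goodness-of-fit criteria (AIC/SBC)
      print(result.forecast(steps=12))   # one-year-ahead prediction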

  7. The Relationship between Mean Square Differences and Standard Error of Measurement: Comment on Barchard (2012)

    ERIC Educational Resources Information Center

    Pan, Tianshu; Yin, Yue

    2012-01-01

    In the discussion of mean square difference (MSD) and standard error of measurement (SEM), Barchard (2012) concluded that the MSD between 2 sets of test scores is greater than 2(SEM)² and SEM underestimates the score difference between 2 tests when the 2 tests are not parallel. This conclusion has limitations for 2 reasons. First,…

  8. Quantified Choice of Root-Mean-Square Errors of Approximation for Evaluation and Power Analysis of Small Differences between Structural Equation Models

    ERIC Educational Resources Information Center

    Li, Libo; Bentler, Peter M.

    2011-01-01

    MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of…

  9. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    PubMed

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and its comparison with the existing methods including the Akaike information criterion (AIC), Bayesian information criterion (BIC) or Extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
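
    The selection loop itself is straightforward. MCP is not implemented in scikit-learn, so the sketch below substitutes an L1-penalized logistic model purely to illustrate choosing the tuning parameter by cross-validated AUC:

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=200, n_features=500,
                                 n_informative=10, random_state=0)

      best_c, best_auc = None, -np.inf
      for c in np.logspace(-2, 1, 10):   # candidate tuning parameters
          clf = LogisticRegression(penalty='l1', solver='liblinear', C=c)
          auc = cross_val_score(clf, X, y, scoring='roc_auc', cv=5).mean()
          if auc > best_auc:
              best_c, best_auc = c, auc
      print(best_c, best_auc)            # model maximizing the CV-AUC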

  10. [Locally weighted least squares estimation of DPOAE evoked by continuously sweeping primaries].

    PubMed

    Han, Xiaoli; Fu, Xinxing; Cui, Jie; Xiao, Ling

    2013-12-01

    Distortion product otoacoustic emission (DPOAE) signals can be used in the diagnosis of hearing loss, so they have important clinical value. Using continuously sweeping primaries to measure DPOAE provides an efficient tool for recording DPOAE data rapidly when DPOAE is measured over a large frequency range. In this paper, locally weighted least squares estimation (LWLSE) of the 2f1-f2 DPOAE is presented, based on the least-squares-fit (LSF) algorithm, in which the DPOAE is evoked by continuously sweeping tones. In our study, we used a weighted error function as the loss function, with locally defined weighting matrices, to obtain a smaller estimation variance. First, an ordinary least squares estimate of the DPOAE parameters was obtained. Then the error vectors were grouped and a different local weighting matrix was calculated for each group. Finally, the parameters of the DPOAE signal were estimated on the least squares principle using the local weighting matrices. The simulation results showed that the estimation variance and fluctuation errors were reduced, so the method estimates the DPOAE and stimuli more accurately and stably, which facilitates extraction of a clearer DPOAE fine structure.

  11. Analysis of surface-water data network in Kansas for effectiveness in providing regional streamflow information; with a section on theory and application of generalized least squares

    USGS Publications Warehouse

    Medina, K.D.; Tasker, Gary D.

    1987-01-01

    This report documents the results of an analysis of the surface-water data network in Kansas for its effectiveness in providing regional streamflow information. The network was analyzed using generalized least squares regression. The correlation and time-sampling error of the streamflow characteristic are considered in the generalized least squares method. Unregulated medium-, low-, and high-flow characteristics were selected to be representative of the regional information that can be obtained from streamflow-gaging-station records for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and (or) adding new stations. The analysis used streamflow records for all currently operated stations that were not affected by regulation and for discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The State was divided into three network areas, western, northeastern, and southeastern Kansas, and analysis was made for the three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean-square error for each cost level could be obtained by adding new stations and discontinuing some current network stations. Large reductions in sampling mean-square error for low-flow information could be achieved in all three network areas, the reduction in western Kansas being the most dramatic. The addition of new stations would be most beneficial for mean-flow information in western Kansas. The reduction of sampling mean-square error for high-flow information would benefit most from the addition of new stations in western Kansas. Southeastern Kansas showed the smallest error reduction in high-flow information. A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas.

  12. An Application of Interactive Computer Graphics to the Study of Inferential Statistics and the General Linear Model

    DTIC Science & Technology

    1991-09-01

    The Regression Sum of Squares (SSR) and Error Sum of Squares (SSE) are displayed as percentages of the Total Sum of Squares (SSTO), so the student can compare the SSR to the SSE as the response vector changes. In addition to the plot showing the projection of Y onto the estimation and error spaces, the actual values of SSR, SSE, and SSTO are provided.

  13. Use of Multi-class Empirical Orthogonal Function for Identification of Hydrogeological Parameters and Spatiotemporal Pattern of Multiple Recharges in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.

    2017-12-01

    This study develops an innovative calibration method for regional groundwater modeling using multi-class empirical orthogonal functions (EOFs). The developed method is an iterative approach. Prior to carrying out the iterative procedures, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial guess of the temporal and spatial patterns of the multiple recharges. The initial guess of the hydrogeological parameters is assigned according to in-situ pumping experiments. The recharges include net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to run the numerical model (i.e., MODFLOW) with the initial-guess or adjusted values of the recharges and parameters. The second step is to determine the best EOF combination of the error storage hydrographs for computing the correction vectors; the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs. The error storage hydrographs are the differences between the storage hydrographs computed from observed and simulated groundwater level fluctuations. The third step is to adjust the values of the recharges and parameters and repeat the iterative procedure until the stopping criterion is reached. The established methodology was applied to the groundwater system of the Ming-Chu Basin, Taiwan, for the study period of January 1 to December 2, 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters decreased the RMSE of the simulated storage hydrographs dramatically within three calibration iterations, indicating that the iterative approach using EOF techniques can capture the groundwater flow tendency and detect the correction vectors of the simulated error sources. Hence, the established EOF-based methodology can effectively and accurately identify the multiple recharges and hydrogeological parameters.
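
    EOF decomposition of the storage hydrographs is conventionally computed from an SVD of the anomaly matrix; a generic sketch (not the study's code, names illustrative):

      import numpy as np

      def eof_decompose(field, n_modes=5):
          # field: (time, space) data matrix, e.g., storage hydrographs
          # at the observation wells.
          anomalies = field - field.mean(axis=0)
          U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
          pcs = U[:, :n_modes] * s[:n_modes]   # EOF expansion coefficients
          eofs = Vt[:n_modes]                  # spatial EOF patterns
          var_frac = s[:n_modes] ** 2 / np.sum(s ** 2)
          return pcs, eofs, var_frac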

  14. Nuclear norm-based 2-DPCA for extracting features from images.

    PubMed

    Zhang, Fanlong; Yang, Jian; Qian, Jianjun; Xu, Yong

    2015-10-01

    The 2-D principal component analysis (2-DPCA) is a widely used method for image feature extraction. However, it can be equivalently implemented via image-row-based principal component analysis. This paper presents a structured 2-D method called nuclear norm-based 2-DPCA (N-2-DPCA), which uses a nuclear norm-based reconstruction error criterion. The nuclear norm is a matrix norm, which can provide a structured 2-D characterization for the reconstruction error image. The reconstruction error criterion is minimized by converting the nuclear norm-based optimization problem into a series of F-norm-based optimization problems. In addition, N-2-DPCA is extended to a bilateral projection-based N-2-DPCA (N-B2-DPCA). The virtue of N-B2-DPCA over N-2-DPCA is that an image can be represented with fewer coefficients. N-2-DPCA and N-B2-DPCA are applied to face recognition and reconstruction and evaluated using the Extended Yale B, CMU PIE, FRGC, and AR databases. Experimental results demonstrate the effectiveness of the proposed methods.

  15. Inference regarding multiple structural changes in linear models with endogenous regressors☆

    PubMed Central

    Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia

    2012-01-01

    This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021

  16. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    NASA Astrophysics Data System (ADS)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.

  17. Identification and compensation of the temperature influences in a miniature three-axial accelerometer based on the least squares method

    NASA Astrophysics Data System (ADS)

    Grigorie, Teodor Lucian; Corcau, Ileana Jenica; Tudosie, Alexandru Nicolae

    2017-06-01

    The paper presents a way to obtain an intelligent miniaturized three-axial accelerometric sensor, based on the on-line estimation and compensation of the sensor errors generated by environmental temperature variation. Taking into account that this error is a strongly nonlinear function of both the environmental temperature and the acceleration exciting the sensor, its correction cannot be done off-line and requires the presence of an additional temperature sensor. The proposed identification methodology for the error model is based on the least squares method, which processes off-line the numerical values obtained from experimental testing of the accelerometer for different values of acceleration applied along its axes of sensitivity and for different operating temperatures. A final analysis of the error level after compensation highlights the best variant for the matrix in the error model. The paper presents the results of the experimental testing of the accelerometer on all three sensitivity axes, the identification of the error models on each axis using the least squares method, and the validation of the obtained models against experimental values. For all three detection channels, a reduction of almost two orders of magnitude in the maximum absolute acceleration error due to environmental temperature variation was obtained.
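
    A minimal sketch of the general identification step, under stated assumptions: the temperature-and-acceleration error surface is represented by an example polynomial basis (the basis terms and data below are placeholders, not the paper's model), and the coefficients follow from one linear least squares solve.

        import numpy as np

        def design_matrix(T, a):
            # example basis: 1, T, T^2, a, a*T, a*T^2
            return np.column_stack([np.ones_like(T), T, T**2, a, a*T, a*T**2])

        rng = np.random.default_rng(2)
        T = rng.uniform(-20, 60, 500)       # operating temperature (deg C)
        a = rng.uniform(-2, 2, 500)         # applied acceleration (g)
        err = 0.01*T - 2e-4*T**2 + 5e-3*a*T + rng.normal(0, 1e-3, 500)

        C = design_matrix(T, a)
        coef, *_ = np.linalg.lstsq(C, err, rcond=None)   # identify error model
        residual = err - C @ coef                        # error after compensation
        print(np.abs(err).max(), np.abs(residual).max())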

  18. Accurate phase extraction algorithm based on Gram–Schmidt orthonormalization and least square ellipse fitting method

    NASA Astrophysics Data System (ADS)

    Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong

    2018-06-01

    An accurate algorithm combining Gram-Schmidt orthonormalization and least-squares ellipse fitting is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. By performing Gram-Schmidt orthonormalization on the pre-processed interferograms, the phase-shift error is corrected and a general ellipse form is derived. The background intensity error and the residual error can then be compensated by the least-squares ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm can cope with two or three interferograms with environmental disturbance, low fringe number or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.

  19. Accuracy of clinical observations of push-off during gait after stroke.

    PubMed

    McGinley, Jennifer L; Morris, Meg E; Greenwood, Ken M; Goldie, Patricia A; Olney, Sandra J

    2006-06-01

    To determine the accuracy (criterion-related validity) of real-time clinical observations of push-off in gait after stroke. Criterion-related validity study of gait observations. Rehabilitation hospital in Australia. Eleven participants with stroke and 8 treating physical therapists. Not applicable. Pearson product-moment correlation between physical therapists' observations of push-off during gait and criterion measures of peak ankle power generation from a 3-dimensional motion analysis system. A high correlation was obtained between the observational ratings and the measurements of peak ankle power generation (Pearson r = .98). The standard error of estimation of ankle power generation was 0.32 W/kg. Physical therapists can make accurate real-time clinical observations of push-off during gait following stroke.

  20. Use of nonlinear models for describing scrotal circumference growth in Guzerat bulls raised under grazing conditions.

    PubMed

    Loaiza-Echeverri, A M; Bergmann, J A G; Toral, F L B; Osorio, J P; Carmo, A S; Mendonça, L F; Moustacas, V S; Henry, M

    2013-03-15

    The objective was to use various nonlinear models to describe scrotal circumference (SC) growth in Guzerat bulls on three farms in the state of Minas Gerais, Brazil. The nonlinear models were: Brody, Logistic, Gompertz, Richards, Von Bertalanffy, and Tanaka, where parameter A is the estimated testis size at maturity, B is the integration constant, k is a maturing index and, for the Richards and Tanaka models, m determines the inflection point. In the Tanaka model, A is an indefinite size of the testis, and B and k adjust the shape and inclination of the curve. A total of 7410 SC records were obtained every 3 months from 1034 bulls with ages varying between 2 and 69 months (<240 days of age = 159; 241-365 days = 451; 366-550 days = 1443; 551-730 days = 1705; and >731 days = 3652 SC measurements). Goodness of fit was evaluated by the coefficient of determination (R²), error sum of squares, average prediction error (APE), and mean absolute deviation. The Richards model did not reach the convergence criterion. The R² values were similar for all models (0.68-0.69). The error sum of squares was lowest for the Tanaka model. All models fit the SC data poorly in the early and late periods. The Logistic model best estimated SC in the early phase (based on APE and mean absolute deviation). The Tanaka and Logistic models had the lowest APE between 300 and 1600 days of age. The Logistic model was chosen for analysis of the environmental influence on parameters A and k. Based on absolute growth rate, SC increased from 0.019 cm/d, peaking at 0.025 cm/d between 318 and 435 days of age. Farm, year, and season of birth significantly affected adult SC size and SC growth rate. An increase in adult SC size (parameter A) was accompanied by a decreased SC growth rate (parameter k). In conclusion, SC growth in Guzerat bulls was characterized by an accelerated growth phase followed by decreased growth; this was best represented by the Logistic model. The inflection point occurred at approximately 376 days of age (mean SC of 17.9 cm). We inferred that early selection for testicular size might result in smaller testes at maturity. Copyright © 2013 Elsevier Inc. All rights reserved.
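
    As an illustration of the model the authors retained, the sketch below fits a Logistic curve SC(t) = A / (1 + B e^(-kt)) to synthetic age-SC data with scipy; the data are placeholders, and the inflection point falls at t = ln(B)/k, where SC = A/2.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, A, B, k):
            return A / (1.0 + B * np.exp(-k * t))

        rng = np.random.default_rng(3)
        age = rng.uniform(60, 2100, 300)                      # age in days
        sc = logistic(age, 34.0, 8.0, 0.005) + rng.normal(0, 1.0, 300)

        popt, _ = curve_fit(logistic, age, sc, p0=[30, 5, 0.01])
        A, B, k = popt
        print(A, k, np.log(B) / k)   # asymptotic SC, rate, inflection age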

  1. Lifesource XL-18 pedometer for measuring steps under controlled and free-living conditions.

    PubMed

    Liu, Sam; Brooks, Dina; Thomas, Scott; Eysenbach, Gunther; Nolan, Robert Peter

    2015-01-01

    The primary aim was to examine the criterion and construct validity and test-retest reliability of the Lifesource XL-18 pedometer (A&D Medical, Toronto, ON, Canada) for measuring steps under controlled and free-living activities. The influence of body mass index, waist size and walking speed on the criterion validity of the XL-18 was also explored. Forty adults (35-74 years) performed a 6-min walk test in the controlled condition, and the criterion validity of the XL-18 was assessed by comparing it to steps counted manually. Thirty-five adults participated in the free-living condition, and the construct validity of the XL-18 was assessed by comparing it to the Yamax SW-200 (YAMAX Health & Sports, Inc., San Antonio, TX, USA). During the controlled condition, the XL-18 did not differ significantly from the criterion (P > 0.05) and no systematic error was found using Bland-Altman analysis. The accuracy of the XL-18 decreased with slower walking speed (P = 0.001). During the free-living condition, Bland-Altman analysis revealed that the XL-18 overestimated daily steps by 327 ± 118 compared with the Yamax (P = 0.004). However, the absolute percent error (APE) (6.5 ± 0.58%) was still within an acceptable range. The XL-18 did not differ statistically between pant pockets. The XL-18 is suitable for measuring steps in controlled and free-living conditions. However, caution may be required when interpreting steps recorded at slower speeds and in free-living conditions.

  2. Evaluation of the kinetic energy of the torso by magneto-inertial measurement unit during the sit-to-stand movement.

    PubMed

    Lepetit, Kevin; Ben Mansour, Khalil; Boudaoud, Sofiane; Kinugawa-Bourron, Kiyoka; Marin, Frédéric

    2018-01-23

    Sit-to-stand tests are used in geriatrics as a qualitative tool to evaluate motor control and stability. In terms of measured indicators, it is traditionally the duration of the task that is reported; however, the use of kinetic energy as a new quantitative criterion allows a better understanding of the musculoskeletal deficits of elderly subjects. The aim of this study was to determine the feasibility of measuring kinetic energy using magneto-inertial measurement units (MIMU) during sit-to-stand movements at various paces. Twenty-six healthy subjects contributed to this investigation. Measured results were compared to a marker-based motion capture using the correlation coefficient and the normalized root mean square error (nRMSE). nRMSE values were below 10% and correlation coefficients were over 0.97. In addition, errors on the mean kinetic energy were also investigated using Bland-Altman 95% limits of agreement (0.63 J-0.77 J), RMSE (0.29 J-0.38 J) and correlation coefficient (0.96-0.98). The results highlight that the method based on MIMU data could be a promising alternative to optoelectronic data acquisition for assessing the kinetic energy of the torso during the sit-to-stand test. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. An efficient variable projection formulation for separable nonlinear least squares problems.

    PubMed

    Gan, Min; Li, Han-Xiong

    2014-05-01

    We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving a nonlinear least squares problem involving only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than those of previous ones. The Levenberg-Marquardt algorithm using the finite difference method is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves a significant reduction in computing time.
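
    A compact sketch of the variable projection idea (a generic formulation, not the paper's decomposition): for a model y ≈ Φ(θ)c that is linear in c, the linear coefficients are solved by least squares inside the residual, so the outer Levenberg-Marquardt iteration sees only the nonlinear parameters θ.

        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0, 4, 200)
        rng = np.random.default_rng(4)
        y = 2.0*np.exp(-1.3*t) + 0.5*np.exp(-0.4*t) + rng.normal(0, 0.01, t.size)

        def residual(theta):
            Phi = np.exp(-np.outer(t, theta))            # nonlinear basis functions
            c, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # project out linear params
            return Phi @ c - y

        sol = least_squares(residual, x0=[1.0, 0.1], method='lm')
        print(sol.x)   # recovered decay rates, close to (1.3, 0.4)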

  4. Intrinsic Raman spectroscopy for quantitative biological spectroscopy Part II

    PubMed Central

    Bechtel, Kate L.; Shih, Wei-Chuan; Feld, Michael S.

    2009-01-01

    We demonstrate the effectiveness of intrinsic Raman spectroscopy (IRS) at reducing errors caused by absorption and scattering. Physical tissue models, solutions of varying absorption and scattering coefficients with known concentrations of Raman scatterers, are studied. We show significant improvement in prediction error by implementing IRS to predict concentrations of Raman scatterers using both ordinary least squares regression (OLS) and partial least squares regression (PLS). In particular, we show that IRS provides a robust calibration model that does not increase in error when applied to samples with optical properties outside the range of calibration. PMID:18711512

  5. [Integral assessment of learning subjects difficulties].

    PubMed

    Grebniak, N P; Shchudro, S A

    2010-01-01

    The integral criterion for subject difficulties in senior classes is substantiated in terms of progress in studies, variation coefficient, and subjective and expert appraisals of the difficulty of subjects. The compiled regression models adequately determine the difficulty of academic subjects. According to the root-mean-square deviation, all subjects were found to have 3 degrees of difficulty.

  6. Modeling Group Differences in OLS and Orthogonal Regression: Implications for Differential Validity Studies

    ERIC Educational Resources Information Center

    Kane, Michael T.; Mroch, Andrew A.

    2010-01-01

    In evaluating the relationship between two measures across different groups (i.e., in evaluating "differential validity") it is necessary to examine differences in correlation coefficients and in regression lines. Ordinary least squares (OLS) regression is the standard method for fitting lines to data, but its criterion for optimal fit…

  7. Experience with k-epsilon turbulence models for heat transfer computations in rotating

    NASA Technical Reports Server (NTRS)

    Tekriwal, Prabbat

    1995-01-01

    This viewgraph presentation discusses geometry and flow configuration, effect of y+ on heat transfer computations, standard and extended k-epsilon turbulence model results with wall function, low-Re model results (the Lam-Bremhorst model without wall function), a criterion for flow reversal in a radially rotating square duct, and a summary.

  8. 40 CFR 91.321 - NDIR analyzer calibration.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...

  9. 40 CFR 91.321 - NDIR analyzer calibration.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...

  10. 40 CFR 91.321 - NDIR analyzer calibration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...

  11. 40 CFR 91.321 - NDIR analyzer calibration.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...
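
    The zero-intercept fit the regulation calls for has a one-line closed form: least squares for y = mx gives m = Σxy / Σx². A sketch with made-up calibration points (the tolerance check shown is illustrative, not the regulatory text):

        import numpy as np

        x = np.array([10., 20., 30., 40., 50., 60.])        # actual concentrations
        y = np.array([10.2, 19.8, 30.3, 39.7, 50.1, 59.6])  # analyzer readings
        m = np.sum(x * y) / np.sum(x * x)                   # slope of y = m*x
        rel_dev = (y - m * x) / (m * x)                     # per-point deviation
        print(m, np.max(np.abs(rel_dev)))                   # compare to tolerance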

  12. Error Analyses of the North Alabama Lightning Mapping Array (LMA)

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.

    2003-01-01

    Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA-MSFC and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.

  13. Optimized square Fresnel zone plates for microoptics applications

    NASA Astrophysics Data System (ADS)

    Rico-García, José María; Salgado-Remacha, Francisco Javier; Sanchez-Brea, Luis Miguel; Alda, Javier

    2009-06-01

    Polygonal Fresnel zone plates with a low number of sides have attracted attention in micro- and nano-optics because they can be straightforwardly integrated in photonic devices and, at the same time, they represent a balance between the high focusing performance of a circular zone plate and the ease of fabricating polygons at micro- and nano-scales. Among them, the most representative family is the Square Fresnel Zone Plate (SFZP). In this work, we propose two different customized designs of SFZP for optical wavelengths. Both designs are based on optimizing an SFZP to perform as closely as possible to a usual Fresnel zone plate. In the first case, the criterion followed is to minimize the difference between the area covered by the angular sector of the zone of the corresponding circular plate and the area covered by the polygon traced on it. This requirement leads to a customized polygon-like Fresnel zone; the simplest one is a square zone with a pattern of phases repeating every five zones. Alternatively, an SFZP can be designed by the same criterion but with a new restriction: the distance between the borders of different zones remains unaltered. A comparison between the two lenses is carried out. The irradiance at focus is computed for both, and suitable merit figures are defined to account for the difference between them.

  14. Synthesis and optimization of four bar mechanism with six design parameters

    NASA Astrophysics Data System (ADS)

    Jaiswal, Ankur; Jawale, H. P.

    2018-04-01

    Function generation is the synthesis of a mechanism for a specific task; it becomes complex, especially for synthesis with more than five precision points of the coupler, and thus entails a large structural error. The methodology for arriving at a more precise solution is to use an optimization technique. The work presented herein considers methods for optimizing the structural error of a closed kinematic chain with a single degree of freedom when generating functions such as log(x), e^x, tan(x) and sin(x) with five precision points. The equation in the Freudenstein-Chebyshev method is used to develop the five-point synthesis of the mechanism. An extended formulation is proposed, and results are obtained that verify existing results in the literature. Optimization of the structural error is carried out using a least squares approach. A comparative structural error analysis is presented for the error optimized through the least squares method and through the extended Freudenstein-Chebyshev method.
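
    For five or more precision points, one common form of Freudenstein's equation, K1 cos(ψ) − K2 cos(φ) + K3 = cos(φ − ψ), is linear in the design constants, so they follow from a least squares solve; the angle ranges used below are illustrative assumptions, not the paper's values.

        import numpy as np

        n = 5
        x = np.linspace(1, 10, n)                       # task: generate y = log(x)
        yv = np.log(x)
        phi = np.radians(np.linspace(30, 120, n))       # input angles (assumed)
        psi = np.radians(60 + 60 * (yv - yv[0]) / (yv[-1] - yv[0]))  # output angles

        A = np.column_stack([np.cos(psi), -np.cos(phi), np.ones(n)])
        b = np.cos(phi - psi)
        K, *_ = np.linalg.lstsq(A, b, rcond=None)       # design constants K1..K3
        print(K, A @ K - b)                             # residual structural error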

  15. Combinatorics of least-squares trees.

    PubMed

    Mihaescu, Radu; Pachter, Lior

    2008-09-09

    A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.

  16. Luminaire layout: Design and implementation

    NASA Technical Reports Server (NTRS)

    Both, A. J.

    1994-01-01

    The information contained in this report was presented during the discussion regarding guidelines for PAR uniformity in greenhouses. The data show a lighting uniformity analysis in a research greenhouse for rose production on the Cornell University campus. The luminaire layout was designed using the computer program Lumen-Micro. After implementation of the design, accurate measurements were taken in the greenhouse, and the uniformity analyses for the design and the implementation were compared. A study of several supplemental lighting installations resulted in the following recommendations: include only the actual growing area in the lighting uniformity analysis; for growing areas up to 20 square meters, take four measurements per square meter; for growing areas above 20 square meters, take one measurement per square meter; use one of the uniformity criteria and frequency graphs to compare lighting uniformity amongst designs; and design for a uniformity criterion of at least 0.75, with the fraction within +/- 15% of the average PAR value close to one.

  17. Theoretical and experimental studies of error in square-law detector circuits

    NASA Technical Reports Server (NTRS)

    Stanley, W. D.; Hearn, C. P.; Williams, J. B.

    1984-01-01

    Square-law detector circuits were investigated to determine errors relative to the ideal input/output characteristic function. The nonlinear circuit response is analyzed by a power series expansion containing terms through the fourth degree, from which the significant deviation from square law can be predicted. Both fixed and flexible bias current configurations are considered; the latter corresponds to the situation where the mean current can change with the application of a signal. Experimental investigations of the circuit arrangements are described. Agreement between the analytical models and the experimental results is established, and factors that contribute to differences under certain conditions are outlined.

  18. Calibration and compensation method of three-axis geomagnetic sensor based on pre-processing total least square iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Zhang, X.; Xiao, W.

    2018-04-01

    As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. First, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. A sifting algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method needs no additional equipment or devices, can continuously update the calibration parameters, and compensates the geomagnetic sensor error better than the two-step estimation method.
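
    The paper's nine-parameter count matches a common generic correction model, m' = A(m − b), with A lower-triangular (6 entries) and a bias b (3 entries); the sketch below fits it by iterative least squares on a unit-sphere residual, and is an assumed formulation rather than the authors' algorithm.

        import numpy as np
        from scipy.optimize import least_squares

        def residual(p, m):
            L = np.array([[p[0], 0, 0], [p[1], p[2], 0], [p[3], p[4], p[5]]])
            b = p[6:9]
            corrected = (m - b) @ L.T
            return np.linalg.norm(corrected, axis=1) - 1.0  # should be unit norm

        rng = np.random.default_rng(5)
        u = rng.normal(size=(400, 3))
        u /= np.linalg.norm(u, axis=1, keepdims=True)       # true field directions
        D = np.array([[1.1, 0, 0], [0.05, 0.9, 0], [0.02, -0.03, 1.05]])
        raw = u @ np.linalg.inv(D).T + np.array([0.1, -0.2, 0.05])  # distorted data

        p0 = np.array([1., 0., 1., 0., 0., 1., 0., 0., 0.])
        sol = least_squares(residual, p0, args=(raw,))
        print(np.std(residual(sol.x, raw)))                 # residual after calibration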

  19. Peelle's pertinent puzzle using the Monte Carlo technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawano, Toshihiko; Talou, Patrick; Burr, Thomas

    2009-01-01

    We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form in order to assess the impact of the distribution, and obtain the least squares solution directly from numerical simulations. We found that the standard least squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer to PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and proportional to the measured values. The least squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
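
    The classic numbers behind the puzzle are easy to reproduce: two measurements of the same quantity, 1.5 and 1.0, each with a 10% independent error and a 20% common normalization error. Treating the common error additively in the covariance, generalized least squares returns the counter-intuitive 0.88; the sketch below is a textbook reconstruction, not the authors' code.

        import numpy as np

        y = np.array([1.5, 1.0])
        V = np.diag((0.10 * y)**2) + 0.20**2 * np.outer(y, y)  # common error in V
        ones = np.ones(2)
        Vi = np.linalg.inv(V)
        mu = ones @ Vi @ y / (ones @ Vi @ ones)                # GLS estimate
        sigma = np.sqrt(1.0 / (ones @ Vi @ ones))
        print(mu, sigma)   # about 0.88 +/- 0.22, below both measurements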

  20. Clinical anthropometrics and body composition from 3D whole-body surface scans.

    PubMed

    Ng, B K; Hinton, B J; Fan, B; Kanaya, A M; Shepherd, J A

    2016-11-01

    Obesity is a significant worldwide epidemic that necessitates accessible tools for robust body composition analysis. We investigated whether widely available 3D body surface scanners can provide clinically relevant direct anthropometrics (circumferences, areas and volumes) and body composition estimates (regional fat/lean masses). Thirty-nine healthy adults stratified by age, sex and body mass index (BMI) underwent whole-body 3D scans, dual energy X-ray absorptiometry (DXA), air displacement plethysmography and tape measurements. Linear regressions were performed to assess agreement between 3D measurements and criterion methods. Linear models were derived to predict DXA body composition from 3D scan measurements. Thirty-seven external fitness center users underwent 3D scans and bioelectrical impedance analysis for model validation. 3D body scan measurements correlated strongly to criterion methods: waist circumference R 2 =0.95, hip circumference R 2 =0.92, surface area R 2 =0.97 and volume R 2 =0.99. However, systematic differences were observed for each measure due to discrepancies in landmark positioning. Predictive body composition equations showed strong agreement for whole body (fat mass R 2 =0.95, root mean square error (RMSE)=2.4 kg; fat-free mass R 2 =0.96, RMSE=2.2 kg) and arms, legs and trunk (R 2 =0.79-0.94, RMSE=0.5-1.7 kg). Visceral fat prediction showed moderate agreement (R 2 =0.75, RMSE=0.11 kg). 3D surface scanners offer precise and stable automated measurements of body shape and composition. Software updates may be needed to resolve measurement biases resulting from landmark positioning discrepancies. Further studies are justified to elucidate relationships between body shape, composition and metabolic health across sex, age, BMI and ethnicity groups, as well as in those with metabolic disorders.

  1. Performance analysis of an OAM multiplexing-based MIMO FSO system over atmospheric turbulence using space-time coding with channel estimation.

    PubMed

    Zhang, Yan; Wang, Ping; Guo, Lixin; Wang, Wei; Tian, Hongxin

    2017-08-21

    The average bit error rate (ABER) performance of an orbital angular momentum (OAM) multiplexing-based free-space optical (FSO) system with multiple-input multiple-output (MIMO) architecture has been investigated over atmospheric turbulence, considering channel estimation and space-time coding. The impact of different types of space-time coding, modulation orders, turbulence strengths, and receive antenna numbers on the transmission performance of this OAM-FSO system is also taken into account. On the basis of the proposed system model, the analytical expressions of the received signals carried by the k-th OAM mode of the n-th receive antenna are derived for the Vertical Bell Labs Layered Space-Time (V-BLAST) and space-time block codes (STBC), respectively. With the help of a channel estimator based on the least squares (LS) algorithm, the zero-forcing equalizer with ordered successive interference cancellation (ZF-OSIC) of the V-BLAST scheme and the Alamouti decoder of the STBC scheme are adopted to mitigate the performance degradation induced by atmospheric turbulence. The results show that the ABERs obtained by channel estimation agree excellently with those of turbulence phase-screen simulations. The ABERs of this OAM multiplexing-based MIMO system deteriorate with increasing turbulence strength, and both the V-BLAST and STBC schemes can significantly improve the system performance by mitigating the distortions of atmospheric turbulence as well as additive white Gaussian noise (AWGN). In addition, the ABER performance of both space-time coding schemes can be further enhanced by increasing the number of receive antennas for diversity gain, and STBC outperforms V-BLAST in this system for data recovery. This work is beneficial to OAM FSO system design.
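
    To make the two receiver steps concrete, here is a toy flat 2x2 MIMO sketch (an abstraction, not the OAM/turbulence simulation of the paper): least squares channel estimation from known pilots, followed by a zero-forcing solve for the data symbols.

        import numpy as np

        rng = np.random.default_rng(7)
        H = (rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))) / np.sqrt(2)
        noise = lambda *s: 0.01*(rng.normal(size=s) + 1j*rng.normal(size=s))

        # LS channel estimate from pilot matrix P: H_hat = Y P^H (P P^H)^-1
        P = np.eye(2, dtype=complex)              # orthogonal pilots
        Y = H @ P + noise(2, 2)
        H_hat = Y @ P.conj().T @ np.linalg.inv(P @ P.conj().T)

        # Zero-forcing recovery of a QPSK symbol vector
        s = (1 - 2*rng.integers(0, 2, 2)) + 1j*(1 - 2*rng.integers(0, 2, 2))
        r = H @ s + noise(2)
        s_hat = np.linalg.pinv(H_hat) @ r
        print(np.sign(s_hat.real) + 1j*np.sign(s_hat.imag))  # detected symbols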

  2. Estimating catchment-scale groundwater dynamics from recession analysis - enhanced constraining of hydrological models

    NASA Astrophysics Data System (ADS)

    Skaugen, Thomas; Mengistu, Zelalem

    2016-12-01

    In this study, we propose a new formulation of subsurface water storage dynamics for use in rainfall-runoff models. Under the assumption of a strong relationship between storage and runoff, the temporal distribution of catchment-scale storage is considered to have the same shape as the distribution of observed recessions (measured as the difference between the log of runoff values). The mean subsurface storage is estimated as the storage at steady state, where moisture input equals the mean annual runoff. An important contribution of the new formulation is that its parameters are derived directly from observed recession data and the mean annual runoff. The parameters are hence estimated prior to model calibration against runoff. The new storage routine is implemented in the parameter-parsimonious distance distribution dynamics (DDD) model and has been tested for 73 catchments in Norway of varying size, mean elevation and landscape type. Runoff simulations for the 73 catchments from two model structures (DDD with calibrated subsurface storage and DDD with the new estimated subsurface storage) were compared. Little loss in precision of runoff simulations was found using the new estimated storage routine. For the 73 catchments, an average Nash-Sutcliffe efficiency of 0.73 was obtained using the new estimated storage routine, compared with 0.75 using the calibrated storage routine. The average Kling-Gupta efficiency was 0.80 and 0.81 for the new and old storage routines, respectively. Runoff recessions are more realistically modelled using the new approach, since the root mean square error between the mean of observed and simulated recession characteristics was reduced by almost 50% using the new storage routine. The parameters of the proposed storage routine are found to be significantly correlated to catchment characteristics, which is potentially useful for predictions in ungauged basins.
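
    For reference, the two efficiency criteria quoted above can be computed as follows (standard Nash-Sutcliffe and Kling-Gupta definitions; the short series is made-up):

        import numpy as np

        def nse(obs, sim):
            return 1 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

        def kge(obs, sim):
            r = np.corrcoef(obs, sim)[0, 1]         # correlation
            alpha = sim.std() / obs.std()           # variability ratio
            beta = sim.mean() / obs.mean()          # bias ratio
            return 1 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

        obs = np.array([3.1, 2.8, 4.0, 6.5, 5.2, 4.1, 3.3])
        sim = np.array([3.0, 2.9, 4.4, 6.0, 5.5, 4.0, 3.1])
        print(nse(obs, sim), kge(obs, sim))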

  3. Can scrotal circumference-based selection discard bulls with good productive and reproductive potential?

    PubMed Central

    Villadiego, Faider Alberto Castaño; Camilo, Breno Soares; León, Victor Gomez; Peixoto, Thiago; Díaz, Edgar; Okano, Denise; Maitan, Paula; Lima, Daniel; Guimarães, Simone Facioni; Siqueira, Jeanne Broch; Pinho, Rogério

    2018-01-01

    Nonlinear mixed models were used to describe longitudinal scrotal circumference (SC) measurements of Nellore bulls. Model comparisons were based on Akaike's information criterion, the Bayesian information criterion, the error sum of squares, adjusted R² and the percentage of convergence. The best model was then used to compare the SC growth curve in bulls divergently classified according to SC at 18-21 months of age. For this, bulls were classified into five groups: SC < 28cm; 28cm ≤ SC < 30cm; 30cm ≤ SC < 32cm; 32cm ≤ SC < 34cm; and SC ≥ 34cm. The Michaelis-Menten model showed the best fit according to the mentioned criteria. In this model, β1 is the asymptotic SC value and β2 represents the time to half-final growth and may be related to sexual precocity. Parameters of the individually estimated growth curves were used to create a new dataset to evaluate the effect of classification, farm, and year of birth on the β1 and β2 parameters. Bulls of the largest SC group presented a larger predicted SC throughout the analyzed period; nevertheless, the smallest SC group showed predicted SC similar to the intermediate SC groups (28cm ≤ SC < 32cm) around 1200 days of age. In this context, bulls classified as unfit for reproduction at 18-21 months of age can reach a condition similar to that of bulls considered in good condition. In terms of classification at 18-21 months, asymptotic SC was similar among groups, farms and years; however, β2 differed among groups, indicating that differences in growth curves are related to sexual precocity. In summary, it seems that selection based on SC at too early an age may lead to discarding bulls with suitable reproductive potential. PMID:29494597

  4. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy

    PubMed Central

    Cohen, E. A. K.; Ober, R. J.

    2014-01-01

    We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise, a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs, this is an errors-in-variables problem, and linear least squares is inappropriate; the correct method is generalized least squares. To allow for point-dependent errors, the equivalence of a generalized maximum likelihood and a heteroscedastic generalized least squares model is established, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise, where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity), we provide closed-form solutions to the estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE), believed to be useful especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions of the TRE and LRE are themselves Gaussian, and the parameterized distributions are derived. The results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show the asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573

  5. Human striatal activation during adjustment of the response criterion in visual word recognition.

    PubMed

    Kuchinke, Lars; Hofmann, Markus J; Jacobs, Arthur M; Frühholz, Sascha; Tamm, Sascha; Herrmann, Manfred

    2011-02-01

    Results of recent computational modelling studies suggest that a general function of the striatum in human cognition is related to shifting decision criteria in selection processes. We used functional magnetic resonance imaging (fMRI) in 21 healthy subjects to examine the hemodynamic responses when subjects shift their response criterion on a trial-by-trial basis in the lexical decision paradigm. Trial-by-trial criterion setting is obtained when subjects respond faster in trials following a word trial than in trials following nonword trials - irrespective of the lexicality of the current trial. Since selection demands are equally high in the current trials, we expected to observe neural activations that are related to response criterion shifting. The behavioural data show sequential effects with faster responses in trials following word trials compared to trials following nonword trials, suggesting that subjects shifted their response criterion on a trial-by-trial basis. The neural responses revealed a signal increase in the striatum only in trials following word trials. This striatal activation is therefore likely to be related to response criterion setting. It demonstrates a role of the striatum in shifting decision criteria in visual word recognition, which cannot be attributed to pure error-related processing or the selection of a preferred response. Copyright © 2010 Elsevier Inc. All rights reserved.

  6. A Canonical Ensemble Correlation Prediction Model for Seasonal Precipitation Anomaly

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Guilong

    2001-01-01

    This report describes an optimal ensemble forecasting model for seasonal precipitation and its error estimation. Each individual forecast is based on canonical correlation analysis (CCA) in spectral spaces whose bases are empirical orthogonal functions (EOFs). The optimal weights in the ensemble forecast crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto the EOFs of the predictand and decreases linearly with the correlation between the predictor and predictand. This new CCA model includes the following features: (1) the use of an area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States precipitation field. The predictor is the sea surface temperature.

  7. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. An extension is given to line-transect sampling.

  8. Responsiveness of performance-based outcome measures for mobility, balance, muscle strength and manual dexterity in adults with myotonic dystrophy type 1.

    PubMed

    Kierkegaard, Marie; Petitclerc, Émilie; Hébert, Luc J; Mathieu, Jean; Gagnon, Cynthia

    2018-02-28

    To assess changes and responsiveness in outcome measures of mobility, balance, muscle strength and manual dexterity in adults with myotonic dystrophy type 1. A 9-year longitudinal study conducted with 113 patients. The responsiveness of the Timed Up and Go test, Berg Balance Scale, quantitative muscle testing, grip and pinch-grip strength, and Purdue Pegboard Test was assessed using criterion and construct approaches. Patient-reported perceived changes (worse/stable) in balance, walking, lower-limb weakness, stair-climbing and hand weakness were used as criteria. Predefined hypotheses about expected area under the receiver operating characteristic curves (criterion approach) and correlations between relative changes (construct approach) were explored. The direction and magnitude of median changes in outcome measures corresponded with patient-reported changes. Median changes in the Timed Up and Go test, grip strength, pinch-grip strength and Purdue Pegboard Test did not, in general, exceed known measurement errors. Most criterion (72%) and construct (70%) approach hypotheses were supported. Promising responsiveness was found for outcome measures of mobility, balance and muscle strength. Grip strength and manual dexterity measures showed poorer responsiveness. The performance-based outcome measures captured changes over the 9-year period and responsiveness was promising. Knowledge of measurement errors is needed to interpret the meaning of these longitudinal changes.

  9. 16p11.2 Deletion mice display cognitive deficits in touchscreen learning and novelty recognition tasks.

    PubMed

    Yang, Mu; Lewis, Freeman C; Sarvi, Michael S; Foley, Gillian M; Crawley, Jacqueline N

    2015-12-01

    Chromosomal 16p11.2 deletion syndrome frequently presents with intellectual disabilities, speech delays, and autism. Here we investigated the Dolmetsch line of 16p11.2 heterozygous (+/-) mice on a range of cognitive tasks with different neuroanatomical substrates. Robust novel object recognition deficits were replicated in two cohorts of 16p11.2+/- mice, confirming previous findings. A similarly robust deficit in object location memory was discovered in +/-, indicating impaired spatial novelty recognition. Generalizability of novelty recognition deficits in +/- mice extended to preference for social novelty. Robust learning deficits and cognitive inflexibility were detected using Bussey-Saksida touchscreen operant chambers. During acquisition of pairwise visual discrimination, +/- mice required significantly more training trials to reach criterion than wild-type littermates (+/+), and made more errors and correction errors than +/+. In the reversal phase, all +/+ reached criterion, whereas most +/- failed to reach criterion by the 30-d cutoff. Contextual and cued fear conditioning were normal in +/-. These cognitive phenotypes may be relevant to some aspects of cognitive impairments in humans with 16p11.2 deletion, and support the use of 16p11.2+/- mice as a model system for discovering treatments for cognitive impairments in 16p11.2 deletion syndrome. © 2015 Yang et al.; Published by Cold Spring Harbor Laboratory Press.

  10. Ambiguity resolution for satellite Doppler positioning systems

    NASA Technical Reports Server (NTRS)

    Argentiero, P. D.; Marini, J. W.

    1977-01-01

    A test for ambiguity resolution was derived which was the most powerful in the sense that it maximized the probability of a correct decision. When systematic error sources were properly included in the least squares reduction process to yield an optimal solution, the test reduced to choosing the solution which provided the smaller valuation of the least squares loss function. When systematic error sources were ignored in the least squares reduction, the most powerful test was a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudo-inverse of a reduced rank square matrix. A formula is presented for computing the power of the most powerful test. A numerical example is included in which the power of the test is computed for a situation which may occur during an actual satellite aided search and rescue mission.

  11. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

    NASA Astrophysics Data System (ADS)

    Yu, Lean; Wang, Shouyang; Lai, K. K.

    Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling and that more weight should be given to classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to those of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.

  12. Error analysis on squareness of multi-sensor integrated CMM for the multistep registration method

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Wang, Yiwen; Ye, Xiuling; Wang, Zhong; Fu, Luhua

    2018-01-01

    The multistep registration (MSR) method in [1] registers two different classes of sensors deployed on the z-arm of a CMM (coordinate measuring machine): a video camera and a tactile probe sensor. In general, it is difficult to obtain a very precise registration result with a single common standard; instead, this method measures two different standards, fixed on a steel plate with a constant distance between them. Although many factors have been considered, such as the measuring ability of the sensors, the uncertainty of the machine and the number of data pairs, there has been no exact analysis of the squareness between the x-axis and the y-axis in the xy plane. For this reason, an error analysis of the squareness of the multi-sensor integrated CMM is made for the multistep registration method to examine the validity of the MSR method. Synthetic experiments on squareness in the xy plane for the simplified MSR with an inclination rotation are simulated, leading to a regular result. Experiments have been carried out with the multi-standard device also designed in [1], together with inspections of the xy plane using a laser interferometer. The final results conform to the simulations, and the squareness errors of the MSR method are similar to the results of the interferometer. In other words, the MSR method can also be used to verify the squareness of a CMM.

  13. A General Linear Model Approach to Adjusting the Cumulative GPA.

    ERIC Educational Resources Information Center

    Young, John W.

    A general linear model (GLM), using least-squares techniques, was used to develop a criterion measure to replace freshman year grade point average (GPA) in college admission predictive validity studies. Problems with the use of GPA include those associated with the combination of grades from different courses and disciplines into a single measure,…

  14. Examining the Reliability of Interval Level Data Using Root Mean Square Differences and Concordance Correlation Coefficients

    ERIC Educational Resources Information Center

    Barchard, Kimberly A.

    2012-01-01

    This article introduces new statistics for evaluating score consistency. Psychologists usually use correlations to measure the degree of linear relationship between 2 sets of scores, ignoring differences in means and standard deviations. In medicine, biology, chemistry, and physics, a more stringent criterion is often used: the extent to which…

  15. Random errors in interferometry with the least-squares method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Qi

    2011-01-20

    This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noises are present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one is for estimating the standard deviation when only intensity noise is present, and the other is for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between random error and the wavelength of the light source and between random error and the amplitude of the interference fringe.

  16. Measuring physical activity in young people with cerebral palsy: validity and reliability of the ActivPAL™ monitor.

    PubMed

    Bania, Theofani

    2014-09-01

    We determined the criterion validity and the retest reliability of the ActivPAL™ monitor in young people with diplegic cerebral palsy (CP). Activity monitor data were compared with the criterion of video recording for 10 participants. For the retest reliability, activity monitor data were collected from 24 participants on two occasions. Participants had to have diplegic CP and be between 14 and 22 years of age. They also had to be of Gross Motor Function Classification System level II or III. Outcomes were time spent in standing, number of steps (physical activity) and time spent in sitting (sedentary behaviour). For criterion validity, coefficients of determination were all high (r² ≥ 0.96), and limits of group agreement were relatively narrow, but limits of agreement for individuals were narrow only for number of steps (≥5.5%). Relative reliability was high for number of steps (intraclass correlation coefficient = 0.87) and moderate for time spent in sitting and lying, and time spent in standing (intraclass correlation coefficients = 0.60-0.66). For groups, changes of up to 7% could be due to measurement error with 95% confidence, but for individuals, changes as high as 68% could be due to measurement error. The results support the criterion validity and the retest reliability of the ActivPAL™ to measure physical activity and sedentary behaviour in groups of young people with diplegic CP but not in individuals. Copyright © 2014 John Wiley & Sons, Ltd.

  17. Statistical analysis of aerosols over the Gangetic-Himalayan region using ARIMA model based on long-term MODIS observations

    NASA Astrophysics Data System (ADS)

    Soni, Kirti; Kapoor, Sangeeta; Parmar, Kulwinder Singh; Kaskaoutis, Dimitris G.

    2014-11-01

    Long-term observations and modeling of aerosol loading over the Indo-Gangetic Plains (IGP), the Indian desert region and the Himalayan slopes are analyzed in the present study. The popular Box-Jenkins ARIMA (AutoRegressive Integrated Moving Average) model was applied to simulate the monthly-mean Terra MODIS (MODerate Resolution Imaging Spectroradiometer) Aerosol Optical Depth (AOD550 nm) over eight sites in the region, covering a period of about 13 years (March 2000-May 2012). The autocorrelation structure has been analyzed, indicating a deterministic pattern in the time series that regains its structure every 24 months. The models ARIMA(2,0,12), ARIMA(1,0,6), ARIMA(3,0,0), ARIMA(2,0,13), ARIMA(0,0,12), ARIMA(2,0,2), ARIMA(1,0,12) and ARIMA(0,0,1) were found the most suitable for simulating and forecasting the monthly-mean AOD over the eight selected locations. The stationary R-squared, R-squared, Root Mean Square Error (RMSE) and normalized BIC (Bayesian Information Criterion) are used to test the validity and applicability of the developed ARIMA models, revealing adequate accuracy in the model performance. The values of the Hurst exponent, fractal dimension and predictability index for the AODs are about 0.5, 1.5 and 0, respectively, suggesting that the AODs at all sites follow Brownian time-series motion (a true random walk). High AOD values (> 0.7) are observed over the industrialized and densely populated IGP sites, together with low ones over the foothills and slopes of the Himalayas. The trends in AOD during the ~13-year period differ depending on season and site. During the post-monsoon and winter, aerosol accumulation and increasing trends are shown over the IGP sites; these are neutralized in the pre-monsoon and become slightly negative in the monsoon. The AOD over the Himalayan sites does not exhibit any significant trend and seems to be practically unaffected by the aerosol build-up over the IGP.
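
    As a sketch of the modeling step, the snippet below fits one of the quoted orders, ARIMA(2,0,2), to a monthly series with statsmodels; the series itself is a synthetic placeholder for the MODIS AOD data.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(8)
        months = np.arange(147)                     # ~13 years of monthly means
        aod = (0.45 + 0.15*np.sin(2*np.pi*months/12)
               + rng.normal(0, 0.05, months.size))  # seasonal cycle plus noise

        result = ARIMA(aod, order=(2, 0, 2)).fit()
        print(result.aic, result.bic)               # information criteria
        print(result.forecast(steps=12).round(3))   # one-year-ahead forecast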

  18. Least Squares Metric, Unidimensional Scaling of Multivariate Linear Models.

    ERIC Educational Resources Information Center

    Poole, Keith T.

    1990-01-01

    A general approach to least-squares unidimensional scaling is presented. Ordering information contained in the parameters is used to transform the standard squared error loss function into a discrete rather than continuous form. Monte Carlo tests with 38,094 ratings of 261 senators, and 1,258 representatives demonstrate the procedure's…

  19. Attenuation of the Squared Canonical Correlation Coefficient under Varying Estimates of Score Reliability

    ERIC Educational Resources Information Center

    Wilson, Celia M.

    2010-01-01

    Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability.…

  20. Validity of the Brunel Mood Scale for use With Malaysian Athletes.

    PubMed

    Lan, Mohamad Faizal; Lane, Andrew M; Roy, Jolly; Hanin, Nik Azma

    2012-01-01

    The aim of the present study was to investigate the factorial validity of the Brunel Mood Scale for use with Malaysian athletes. Athletes (N = 1485) competing at the Malaysian Games completed the Brunel Mood Scale (BRUMS). Confirmatory factor analysis (CFA) results indicated a Comparative Fit Index (CFI) of .90, and the Root Mean Squared Error of Approximation (RMSEA) was 0.05. The CFI was below the 0.95 criterion for acceptability, and the RMSEA value was within the limits for acceptability suggested by Hu and Bentler (1999). We suggest that the results provide some support for the validity of the BRUMS for use with Malaysian athletes. Given the large sample size used in the present study, the descriptive statistics could be used as normative data for Malaysian athletes. Key points: Findings from the present study lend support to the validity of the BRUMS for use with Malaysian athletes. Given the size of the sample used in the present study, we suggest the descriptive data be used as normative data for researchers using the scale with Malaysian athletes. It is suggested that future research investigate the effects of cultural differences on emotional states experienced by athletes before, during and post-competition.

  2. PSO (Particle Swarm Optimization) for Interpretation of Magnetic Anomalies Caused by Simple Geometrical Structures

    NASA Astrophysics Data System (ADS)

    Essa, Khalid S.; Elhussein, Mahmoud

    2018-04-01

    A new, efficient approach is presented for estimating the parameters that control source dimensions from magnetic anomaly profile data using the particle swarm optimization (PSO) algorithm. The PSO algorithm is applied to interpret magnetic anomaly profile data through a new formula for isolated sources embedded in the subsurface. The model parameters interpreted here are the depth of the body, the amplitude coefficient, the angle of effective magnetization, the shape factor, and the horizontal coordinates of the source. The model parameters estimated by the present technique, particularly the depths of the buried structures, were found to be in excellent agreement with the true parameters. The root mean square (RMS) error is used as the criterion for estimating the misfit between the observed and computed anomalies. Inversion of noise-free synthetic data, of noisy synthetic data containing different levels of random noise (5, 10, 15 and 20%) as well as multiple structures, and of two real field datasets from the USA and Egypt demonstrates the viability of the approach. The final estimates of the different parameters match those given in the published literature and derived from geologic results.
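    The abstract does not give the forward formula, so the sketch below pairs a textbook PSO loop with a placeholder anomaly model; only the RMS-misfit objective mirrors the paper's criterion, and all bounds and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def rms_misfit(params, x, observed, forward):
    return np.sqrt(np.mean((observed - forward(params, x)) ** 2))

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Textbook global-best PSO with inertia w and accelerations c1, c2."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    pos = rng.uniform(lo, hi, (n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([objective(p) for p in pos])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([objective(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

def forward(p, x):
    # Placeholder anomaly of a simple buried source:
    # A(x) = k * z / ((x - x0)^2 + z^2)   (illustrative only)
    k, z, x0 = p
    return k * z / ((x - x0) ** 2 + z ** 2)

x = np.linspace(-50, 50, 201)
obs = forward([100.0, 10.0, 5.0], x) + 0.05 * rng.standard_normal(x.size)
bounds = np.array([[1.0, 500.0], [1.0, 50.0], [-20.0, 20.0]])
best, err = pso(lambda p: rms_misfit(p, x, obs, forward), bounds)
print(best, err)   # recovers roughly [100, 10, 5]
```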

  3. Moving-window dynamic optimization: design of stimulation profiles for walking.

    PubMed

    Dosen, Strahinja; Popović, Dejan B

    2009-05-01

    The overall goal of the research is to improve control for electrical stimulation-based assistance of walking in hemiplegic individuals. We present a simulation for generating an offline input (sensors)-output (intensity of muscle stimulation) representation of walking that serves in synthesizing a rule base for control of electrical stimulation for restoration of walking. The simulation uses a new algorithm termed moving-window dynamic optimization (MWDO). The optimization criterion was to minimize the sum of the squares of tracking errors from desired trajectories, with a penalty function on the total muscle efforts. The MWDO was developed in the MATLAB environment and tested using target trajectories characteristic of slow-to-normal walking recorded in a healthy individual and a model with parameters characterizing the potential hemiplegic user. The outputs of the simulation are piecewise-constant intensities of electrical stimulation and the trajectories generated when the calculated stimulation is applied to the model. We demonstrated the importance of this simulation by showing the outputs for healthy and hemiplegic individuals, using the same target trajectories. Results of the simulation show that the MWDO is an efficient tool for analyzing achievable trajectories and for determining the stimulation profiles that need to be delivered for good tracking.

  4. Investigation of optical current transformer signal processing method based on an improved Kalman algorithm

    NASA Astrophysics Data System (ADS)

    Shen, Yan; Ge, Jin-ming; Zhang, Guo-qing; Yu, Wen-bin; Liu, Rui-tong; Fan, Wei; Yang, Ying-xuan

    2018-01-01

    This paper explores the problem of signal processing in optical current transformers (OCTs). Based on the noise characteristics of OCTs, such as overlapping signals, noise frequency bands, low signal-to-noise ratios, and difficulties in acquiring the statistical features of noise power, an improved standard Kalman filtering algorithm is proposed for direct current (DC) signal processing. The state-space model of the OCT DC measurement system is first established; mixed noise is then handled by incorporating it into the measurement and state parameters. According to the minimum mean squared error criterion, the state prediction and update equations of the improved Kalman algorithm can be derived from the established model. An improved central difference Kalman filter is proposed for alternating current (AC) signal processing, which improves the sampling strategy and the processing of colored noise. Real-time estimation and correction of noise are achieved by designing AC and DC noise recursive filters. Experimental results show that the improved signal processing algorithms have a good filtering effect on AC and DC signals with the mixed noise of an OCT. Furthermore, the proposed algorithm achieves real-time correction of noise during the OCT filtering process.
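    A minimal sketch of the standard (unimproved) scalar Kalman recursion for a DC level may help fix ideas; the gain below is the minimum-mean-squared-error gain the abstract refers to, while the paper's mixed-noise extensions are not reproduced here:

```python
import numpy as np

def kalman_dc(measurements, q=1e-5, r=0.1, x0=0.0, p0=1.0):
    """Textbook scalar Kalman filter for a (nearly) constant DC level.

    q: process-noise variance, r: measurement-noise variance.
    The gain k minimizes the posterior mean squared error at each step.
    """
    x, p, out = x0, p0, []
    for z in measurements:
        p = p + q                 # predict (state model: x_k = x_{k-1} + w)
        k = p / (p + r)           # minimum-MSE Kalman gain
        x = x + k * (z - x)       # update with the measurement residual
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(2)
z = 1.0 + np.sqrt(0.1) * rng.standard_normal(500)   # noisy DC measurements
print(kalman_dc(z)[-1])   # settles near the true level of 1.0
```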

  5. Evaluation of effectiveness of wavelet based denoising schemes using ANN and SVM for bearing condition classification.

    PubMed

    Vijay, G S; Kumar, H S; Srinivasa Pai, P; Sriram, N S; Rao, Raj B K N

    2012-01-01

    Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes are evaluated based on the performance of an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating a defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time- and frequency-domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal.
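    As a minimal illustration of the two ranking criteria, assuming a known clean reference signal (as with the synthetic signal in part one), SNR and RMSE can be computed as:

```python
import numpy as np

def snr_db(clean, denoised):
    """Output SNR in dB: signal power over residual-error power."""
    err = clean - denoised
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))

def rmse(clean, denoised):
    return np.sqrt(np.mean((clean - denoised) ** 2))

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 2048)
clean = np.sin(2 * np.pi * 50 * t) * np.exp(-3 * t)  # decaying tone, a crude
noisy = clean + 0.2 * rng.standard_normal(t.size)    # stand-in for a fault burst
print(snr_db(clean, noisy), rmse(clean, noisy))      # scores before denoising
```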

  6. Accuracy of different sensors for the estimation of pollutant concentrations (total suspended solids, total and dissolved chemical oxygen demand) in wastewater and stormwater.

    PubMed

    Lepot, Mathieu; Aubin, Jean-Baptiste; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Many field investigations have used continuous sensors (turbidimeters and/or ultraviolet (UV)-visible spectrophotometers) to estimate pollutant concentrations in sewer systems at short time steps. Few, if any, publications compare the performance of various sensors on the same set of samples. Different surrogate sensors (turbidity sensors, a UV-visible spectrophotometer, a pH meter, a conductivity meter and a microwave sensor) were tested to link concentrations of total suspended solids (TSS) and total and dissolved chemical oxygen demand (COD) to the sensors' outputs. In the combined sewer at the inlet of a wastewater treatment plant, 94 samples were collected during dry weather, 44 samples during wet weather, and 165 samples under both dry and wet weather conditions. From these samples, triplicate standard laboratory analyses were performed and the corresponding sensor outputs were recorded. Two outlier detection methods were developed, based respectively on the Mahalanobis and Euclidean distances. Several hundred regression models were tested, and the best ones (according to the root mean square error criterion) are presented in order of decreasing performance. No single sensor appears to be the best for all three investigated pollutants.

  7. Utility of ultrasound for body fat assessment: validity and reliability compared to a multicompartment criterion.

    PubMed

    Smith-Ryan, Abbie E; Blue, Malia N M; Trexler, Eric T; Hirsch, Katie R

    2018-03-01

    Measurement of body composition to assess health risk and prevention is expanding. Accurate portable techniques are needed to facilitate use in clinical settings. This study evaluated the accuracy and repeatability of portable ultrasound (US) in comparison with a four-compartment (4C) criterion for per cent body fat (%Fat) in overweight/obese adults. Fifty-one participants (mean ± SD; age: 37·2 ± 11·3 years; BMI: 31·6 ± 5·2 kg m⁻²) were measured for %Fat using US (GE Logiq-e) and skinfolds. A subset of 36 participants completed a second day of the same measurements to determine reliability. US and skinfold %Fat were calculated using the seven-site Jackson-Pollock equation. The Wang 4C model was used as the criterion method for %Fat. US %Fat (36·4 ± 11·8%; P = 0·001; standard error of estimate [SEE] = 3·5%) was significantly higher than the 4C criterion (33·0 ± 8·0%), but not different from skinfolds (35·3 ± 5·9%; P = 0·836; SEE = 4·5%). US showed good reliability, with no significant differences from Day 1 (39·95 ± 15·37%) to Day 2 (40·01 ± 15·42%). Relative consistency was 0·96, and the standard error of measurement was 0·94%. Although US overpredicted %Fat compared to the criterion, the moderate SEE for US suggests a practical assessment tool for overweight individuals. The %Fat differences reported for these field-based techniques are smaller than those reported for other single-measurement laboratory methods, and the techniques may therefore have utility in a clinical setting. The technique may also accurately track changes. © 2016 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.

  8. Analysis of surface-water data network in Kansas for effectiveness in providing regional streamflow information

    USGS Publications Warehouse

    Medina, K.D.; Tasker, Gary D.

    1985-01-01

    The surface-water data network in Kansas was analyzed using generalized least squares regression for its effectiveness in providing regional streamflow information. The correlation and time-sampling error of the streamflow characteristics are considered in the generalized least squares method. Unregulated medium-flow, low-flow, and high-flow characteristics were selected as representative of the regional information that can be obtained from streamflow gaging station records, for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and/or adding new stations. The analysis used streamflow records for all currently operated stations not affected by regulation, and for discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The state was divided into three network areas (western, northeastern, and southeastern Kansas), and an analysis was made for three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean square error for each cost level could be obtained by adding new stations and discontinuing some of the present network stations. Large reductions in sampling mean square error for low-flow information could be accomplished in all three network areas, with western Kansas having the most dramatic reduction. The addition of new stations would be most beneficial for medium-flow information in western Kansas, and to lesser degrees in the other two areas. The reduction of sampling mean square error for high-flow information would also benefit most from the addition of new stations in western Kansas, with the effect diminishing in the other two areas. Southeastern Kansas showed the smallest error reduction in high-flow information. A comparison among all three network areas indicated that funding resources could be used most effectively by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas. (Author's abstract)

  9. Criterion-Related Validity of Sit-and-Reach Tests for Estimating Hamstring and Lumbar Extensibility: a Meta-Analysis

    PubMed Central

    Mayorga-Vega, Daniel; Merino-Marban, Rafael; Viciana, Jesús

    2014-01-01

    The main purpose of the present meta-analysis was to examine the scientific literature on the criterion-related validity of sit-and-reach tests for estimating hamstring and lumbar extensibility. For this purpose, relevant studies were searched from seven electronic databases dated up through December 2012. Primary outcomes of criterion-related validity were Pearson's zero-order correlation coefficients (r) between sit-and-reach tests and hamstring and/or lumbar extensibility criterion measures. Then, from the included studies, the Hunter-Schmidt psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of sit-and-reach tests. First, the corrected correlation mean (rp), unaffected by statistical artefacts (i.e., sampling error and measurement error), was calculated separately for each sit-and-reach test. Subsequently, three potential moderator variables (sex of participants, age of participants, and level of hamstring extensibility) were examined by a partially hierarchical analysis. Of the 34 studies included in the present meta-analysis, 99 correlation values across eight sit-and-reach tests and 51 across seven sit-and-reach tests were retrieved for hamstring and lumbar extensibility, respectively. The overall results showed that all sit-and-reach tests had a moderate mean criterion-related validity for estimating hamstring extensibility (rp = 0.46-0.67) but a low mean validity for estimating lumbar extensibility (rp = 0.16-0.35). Generally, females, adults, and participants with high levels of hamstring extensibility tended to have greater mean values of criterion-related validity for estimating hamstring extensibility. When the use of angular tests is limited, such as in a school setting or in large-scale studies, scientists and practitioners could use the sit-and-reach tests as a useful alternative for estimating hamstring extensibility, but not for estimating lumbar extensibility. Key points: Overall, sit-and-reach tests have a moderate mean criterion-related validity for estimating hamstring extensibility, but a low mean validity for estimating lumbar extensibility. Among all the sit-and-reach test protocols, the Classic sit-and-reach test seems to be the best option to estimate hamstring extensibility. End scores (e.g., the Classic sit-and-reach test) are a better indicator of hamstring extensibility than the modifications that incorporate fingers-to-box distance (e.g., the Modified sit-and-reach test). When angular tests such as straight leg raise or knee extension tests cannot be used, sit-and-reach tests seem to be a useful field-test alternative for estimating hamstring extensibility, but not for estimating lumbar extensibility. PMID:24570599
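    The Hunter-Schmidt correction for attenuation that underlies rp is a one-line formula; the sketch below applies it to made-up values of an observed correlation and two score reliabilities:

```python
def corrected_correlation(r_obs, rel_x, rel_y):
    """Hunter-Schmidt correction for attenuation due to measurement error:
    r_p = r_obs / sqrt(rel_x * rel_y)."""
    return r_obs / (rel_x * rel_y) ** 0.5

# e.g. an observed r of 0.55 with score reliabilities of 0.85 and 0.90
print(round(corrected_correlation(0.55, 0.85, 0.90), 3))  # ~0.629
```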

  10. Criterion learning in rule-based categorization: Simulation of neural mechanism and new data

    PubMed Central

    Helie, Sebastien; Ell, Shawn W.; Filoteo, J. Vincent; Maddox, W. Todd

    2015-01-01

    In perceptual categorization, rule selection consists of selecting one or several stimulus-dimensions to be used to categorize the stimuli (e.g., categorize lines according to their length). Once a rule has been selected, criterion learning consists of defining how stimuli will be grouped using the selected dimension(s) (e.g., if the selected rule is line length, define ‘long’ and ‘short’). Very little is known about the neuroscience of criterion learning, and most existing computational models do not provide a biological mechanism for this process. In this article, we introduce a new model of rule learning called Heterosynaptic Inhibitory Criterion Learning (HICL). HICL includes a biologically-based explanation of criterion learning, and we use new category-learning data to test key aspects of the model. In HICL, rule selective cells in prefrontal cortex modulate stimulus-response associations using pre-synaptic inhibition. Criterion learning is implemented by a new type of heterosynaptic error-driven Hebbian learning at inhibitory synapses that uses feedback to drive cell activation above/below thresholds representing ionic gating mechanisms. The model is used to account for new human categorization data from two experiments showing that: (1) changing rule criterion on a given dimension is easier if irrelevant dimensions are also changing (Experiment 1), and (2) showing that changing the relevant rule dimension and learning a new criterion is more difficult, but also facilitated by a change in the irrelevant dimension (Experiment 2). We conclude with a discussion of some of HICL’s implications for future research on rule learning. PMID:25682349

  11. Criterion learning in rule-based categorization: simulation of neural mechanism and new data.

    PubMed

    Helie, Sebastien; Ell, Shawn W; Filoteo, J Vincent; Maddox, W Todd

    2015-04-01

    In perceptual categorization, rule selection consists of selecting one or several stimulus-dimensions to be used to categorize the stimuli (e.g., categorize lines according to their length). Once a rule has been selected, criterion learning consists of defining how stimuli will be grouped using the selected dimension(s) (e.g., if the selected rule is line length, define 'long' and 'short'). Very little is known about the neuroscience of criterion learning, and most existing computational models do not provide a biological mechanism for this process. In this article, we introduce a new model of rule learning called Heterosynaptic Inhibitory Criterion Learning (HICL). HICL includes a biologically-based explanation of criterion learning, and we use new category-learning data to test key aspects of the model. In HICL, rule selective cells in prefrontal cortex modulate stimulus-response associations using pre-synaptic inhibition. Criterion learning is implemented by a new type of heterosynaptic error-driven Hebbian learning at inhibitory synapses that uses feedback to drive cell activation above/below thresholds representing ionic gating mechanisms. The model is used to account for new human categorization data from two experiments showing that: (1) changing rule criterion on a given dimension is easier if irrelevant dimensions are also changing (Experiment 1), and (2) showing that changing the relevant rule dimension and learning a new criterion is more difficult, but also facilitated by a change in the irrelevant dimension (Experiment 2). We conclude with a discussion of some of HICL's implications for future research on rule learning. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Modelling of the batch biosorption system: study on exchange of protons with cell wall-bound mineral ions.

    PubMed

    Mishra, Vishal

    2015-01-01

    The interchange of protons with cell wall-bound calcium and magnesium ions at the solution/bacterial cell surface interface in a biosorption system was studied at various proton concentrations in the present work. A mathematical model establishing the correlation between the concentration of protons and active sites was developed and optimized. A sporadic limited-residence-time reactor was used to titrate the calcium and magnesium ions at each individual data point. The accuracy of the proposed mathematical model was estimated using error functions such as the nonlinear regression coefficient, the adjusted nonlinear regression coefficient, the chi-square test, the P-test and the F-test. The values of the chi-square test (0.042-0.017), P-test (<0.001-0.04), sum of square errors (0.061-0.016), root mean square error (0.01-0.04) and F-test (2.22-19.92) reported in the present research indicate the suitability of the model over a wide range of proton concentrations. The zeta potential of the bacterium surface at various proton concentrations was observed to validate the denaturation of active sites.

  13. Simple Forest Canopy Thermal Exitance Model

    NASA Technical Reports Server (NTRS)

    Smith, J. A.; Goltz, S. M.

    1999-01-01

    We describe a model to calculate brightness temperature and surface energy balance for a forest canopy system. The model extends an earlier vegetation-only model by including a simple soil layer. The root mean square error in brightness temperature for a dense forest canopy was 2.5 °C. Surface energy balance predictions were also in good agreement: the corresponding root mean square errors for net radiation, latent heat, and sensible heat were 38.9, 30.7, and 41.4 W/sq m, respectively.

  14. Visual judgements of steadiness in one-legged stance: reliability and validity.

    PubMed

    Haupstein, T; Goldie, P

    2000-01-01

    There is a paucity of information about the validity and reliability of clinicians' visual judgements of steadiness in one-legged stance. Such judgements are used frequently in clinical practice to support decisions about treatment in the fields of neurology, sports medicine, paediatrics and orthopaedics. The aim of the present study was to address the validity and reliability of visual judgements of steadiness in one-legged stance in a group of physiotherapists. A videotape of 20 five-second performances was shown to 14 physiotherapists with median clinical experience of 6.75 years. Validity of visual judgement was established by correlating scores obtained from an 11-point rating scale with criterion scores obtained from a force platform. In addition, partial correlations were used to control for the potential influence of body weight on the relationship between the visual judgements and criterion scores. Inter-observer reliability was quantified between the physiotherapists; intra-observer reliability was quantified between two tests four weeks apart. Mean criterion-related validity was high, regardless of whether body weight was controlled for statistically (Pearson's r = 0.84, 0.83, respectively). The standard error of estimating the criterion score was 3.3 newtons. Inter-observer reliability was high (ICC (2,1) = 0.81 at Test 1 and 0.82 at Test 2). Intra-observer reliability was high (on average ICC (2,1) = 0.88; Pearson's r = 0.90). The standard error of measurement for the 11-point scale was one unit. The finding of higher accuracy of making visual judgements than previously reported may be due to several aspects of design: use of a criterion score derived from the variability of the force signal which is more discriminating than variability of centre of pressure; use of a discriminating visual rating scale; specificity and clear definition of the phenomenon to be rated.

  15. Efficient DV-HOP Localization for Wireless Cyber-Physical Social Sensing System: A Correntropy-Based Neural Network Learning Scheme

    PubMed Central

    Xu, Yang; Luo, Xiong; Wang, Weiping; Zhao, Wenbing

    2017-01-01

    Integrating wireless sensor network (WSN) into the emerging computing paradigm, e.g., cyber-physical social sensing (CPSS), has witnessed a growing interest, and WSN can serve as a social network while receiving more attention from the social computing research field. Then, the localization of sensor nodes has become an essential requirement for many applications over WSN. Meanwhile, the localization information of unknown nodes has strongly affected the performance of WSN. The received signal strength indication (RSSI) as a typical range-based algorithm for positioning sensor nodes in WSN could achieve accurate location with hardware saving, but is sensitive to environmental noises. Moreover, the original distance vector hop (DV-HOP) as an important range-free localization algorithm is simple, inexpensive and not related to the environment factors, but performs poorly when lacking anchor nodes. Motivated by these, various improved DV-HOP schemes with RSSI have been introduced, and we present a new neural network (NN)-based node localization scheme, named RHOP-ELM-RCC, through the use of DV-HOP, RSSI and a regularized correntropy criterion (RCC)-based extreme learning machine (ELM) algorithm (ELM-RCC). Firstly, the proposed scheme employs both RSSI and DV-HOP to evaluate the distances between nodes to enhance the accuracy of distance estimation at a reasonable cost. Then, with the help of ELM featured with a fast learning speed with a good generalization performance and minimal human intervention, a single hidden layer feedforward network (SLFN) on the basis of ELM-RCC is used to implement the optimization task for obtaining the location of unknown nodes. Since the RSSI may be influenced by the environmental noises and may bring estimation error, the RCC instead of the mean square error (MSE) estimation, which is sensitive to noises, is exploited in ELM. Hence, it may make the estimation more robust against outliers. Additionally, the least square estimation (LSE) in ELM is replaced by the half-quadratic optimization technique. Simulation results show that our proposed scheme outperforms other traditional localization schemes. PMID:28085084
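    A minimal sketch of the ELM readout may clarify the pipeline: random hidden weights are fixed and only the output weights are solved for. The plain (ridge) least-squares step below is the MSE baseline; the paper replaces it with a regularized correntropy criterion solved by half-quadratic optimization, which is not reproduced here. Data and sizes are toy values:

```python
import numpy as np

rng = np.random.default_rng(4)

def elm_fit(X, Y, n_hidden=50, ridge=1e-3):
    """Single-hidden-layer ELM: random input weights, least-squares readout.

    The least-squares (MSE) readout below is the baseline that the
    correntropy-based variant is designed to make robust to outliers.
    """
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                      # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: estimate a distance-like target from two noisy features.
X = rng.uniform(0, 1, (200, 2))
Y = np.hypot(X[:, 0], X[:, 1]) + 0.05 * rng.standard_normal(200)
W, b, beta = elm_fit(X, Y)
print(np.sqrt(np.mean((elm_predict(X, W, b, beta) - Y) ** 2)))  # training RMSE
```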

  16. Modeling Seasonality in Carbon Dioxide Emissions From Fossil Fuel Consumption

    NASA Astrophysics Data System (ADS)

    Gregg, J. S.; Andres, R. J.

    2004-05-01

    A method is developed that uses monthly sales data for the United States to estimate the relative monthly proportions of total annual national consumption of solid, liquid, and gaseous fossil fuels. These proportions are then used to estimate the total monthly carbon dioxide emissions for each state. From these data, the goal is to develop mathematical models that describe the seasonal flux in consumption for each type of fuel, as well as the total emissions for the nation. The time series models have two components. First, the general long-term yearly trend is determined with regression models for the annual totals. After removing the general trend, two alternatives are considered for modeling the seasonality. The first alternative uses the mean of the monthly proportions to predict the seasonal distribution. Because the seasonal patterns are fairly consistent in the United States, this is an effective modeling technique. Such regularity, however, may not be present with data from other nations. Therefore, as a second alternative, an ordinary least squares autoregressive model is used. This model is chosen for its ability to accurately describe dependent data and for its predictive capacity. It also has a meaningful interpretation, as each coefficient in the model quantifies the dependency for each corresponding time lag. Most importantly, it is dynamic, and able to adapt to anomalies and changing patterns. The order of the autoregressive model is chosen by the Akaike Information Criterion (AIC), which minimizes the predicted variance for all models of increasing complexity. To model the monthly fuel consumption, the annual trend is combined with the seasonal model. The models for each fuel type are then summed together to predict the total carbon dioxide emissions. The prediction error is estimated with the root mean square error (RMSE) against the actual estimated emission values. Overall, the models perform very well, with relative RMSE less than 10% for all fuel types, and under 5% for the national total emissions. Development of successful models is important to better understand and predict global environmental impacts from fossil fuel consumption.
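    A minimal sketch of AIC-based order selection for an ordinary least squares autoregressive model, run on a simulated series rather than the fuel data, and assuming the common AIC form n·log(RSS/n) + 2p:

```python
import numpy as np

def fit_ar(x, p):
    """Ordinary least-squares fit of an AR(p) model to a (detrended) series."""
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ coef) ** 2)
    n = len(y)
    aic = n * np.log(rss / n) + 2 * p        # AIC up to an additive constant
    return coef, aic

rng = np.random.default_rng(5)
x = np.zeros(300)
for t in range(2, 300):                       # simulate an AR(2) process
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

best_p = min(range(1, 9), key=lambda p: fit_ar(x, p)[1])
print(best_p, fit_ar(x, best_p)[0])           # AIC should pick p = 2
```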

  17. Mapping health assessment questionnaire disability index (HAQ-DI) score, pain visual analog scale (VAS), and disease activity score in 28 joints (DAS28) onto the EuroQol-5D (EQ-5D) utility score with the KORean Observational study Network for Arthritis (KORONA) registry data.

    PubMed

    Kim, Hye-Lin; Kim, Dam; Jang, Eun Jin; Lee, Min-Young; Song, Hyun Jin; Park, Sun-Young; Cho, Soo-Kyung; Sung, Yoon-Kyoung; Choi, Chan-Bum; Won, Soyoung; Bang, So-Young; Cha, Hoon-Suk; Choe, Jung-Yoon; Chung, Won Tae; Hong, Seung-Jae; Jun, Jae-Bum; Kim, Jinseok; Kim, Seong-Kyu; Kim, Tae-Hwan; Kim, Tae-Jong; Koh, Eunmi; Lee, Hwajeong; Lee, Hye-Soon; Lee, Jisoo; Lee, Shin-Seok; Lee, Sung Won; Park, Sung-Hoon; Shim, Seung-Cheol; Yoo, Dae-Hyun; Yoon, Bo Young; Bae, Sang-Cheol; Lee, Eui-Kyung

    2016-04-01

    The aim of this study was to estimate a mapping model for EuroQol-5D (EQ-5D) utility values using the health assessment questionnaire disability index (HAQ-DI), pain visual analog scale (VAS), and disease activity score in 28 joints (DAS28) in a large, nationwide cohort of rheumatoid arthritis (RA) patients in Korea. The KORean Observational study Network for Arthritis (KORONA) registry data on 3557 patients with RA were used. Data were randomly divided into a modeling set (80 % of the data) and a validation set (20 % of the data). The ordinary least squares (OLS), Tobit, and two-part model methods were employed to construct a model mapping to the EQ-5D index. Using combinations of HAQ-DI, pain VAS, and DAS28, four model versions were examined. To evaluate the predictive accuracy of the models, the root-mean-square error (RMSE) and mean absolute error (MAE) were calculated using the validation dataset. A model that included HAQ-DI, pain VAS, and DAS28 produced the highest adjusted R² as well as the lowest Akaike information criterion, RMSE, and MAE, regardless of the statistical method used in the modeling set. The mapping equation of the OLS method is EQ-5D = 0.95 − 0.21 × HAQ-DI − 0.24 × (pain VAS/100) − 0.01 × DAS28 (adjusted R² = 57.6 %, RMSE = 0.1654 and MAE = 0.1222). In the validation set as well, the RMSE and MAE were the smallest. The model with HAQ-DI, pain VAS, and DAS28 showed the best performance, and this mapping model enables the estimation of an EQ-5D value for RA patients in whom utility values have not been measured.
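    The reported OLS equation can be applied directly; the sketch below encodes it together with the two validation criteria (the patient values in the example are hypothetical):

```python
import numpy as np

def eq5d_from_ols(haq_di, pain_vas, das28):
    """Published OLS mapping: EQ-5D = 0.95 - 0.21*HAQ-DI
    - 0.24*(pain VAS / 100) - 0.01*DAS28."""
    return 0.95 - 0.21 * haq_di - 0.24 * pain_vas / 100.0 - 0.01 * das28

def rmse(pred, obs):
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2))

def mae(pred, obs):
    return np.mean(np.abs(np.asarray(pred) - np.asarray(obs)))

# e.g. a hypothetical patient with HAQ-DI 1.0, pain VAS 40, DAS28 3.2:
print(round(eq5d_from_ols(1.0, 40.0, 3.2), 3))  # 0.95-0.21-0.096-0.032 = 0.612
```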

  18. Adaptive tracking control for a class of stochastic switched systems

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Xia, Yuanqing

    2018-02-01

    In this paper, the problem of adaptive tracking is considered for a class of stochastic switched systems. As preliminaries, a criterion of global asymptotic practical stability in probability is first presented with the aid of the common Lyapunov function method. Based on this Lyapunov stability criterion, adaptive backstepping controllers are designed to guarantee that the closed-loop system has a unique global solution, which is globally asymptotically practically stable in probability, and that the tracking error in the fourth moment converges to an arbitrarily small neighbourhood of zero. Simulation examples are given to demonstrate the efficiency of the proposed schemes.

  19. Finite-Time Adaptive Control for a Class of Nonlinear Systems With Nonstrict Feedback Structure.

    PubMed

    Sun, Yumei; Chen, Bing; Lin, Chong; Wang, Honghong

    2017-09-18

    This paper focuses on finite-time adaptive neural tracking control for nonlinear systems in nonstrict feedback form. A semiglobal finite-time practical stability criterion is first proposed. Correspondingly, a finite-time adaptive neural control strategy is given using this criterion. Unlike existing results on adaptive neural/fuzzy control, the proposed adaptive neural controller guarantees that the tracking error converges to a sufficiently small domain around the origin in finite time and that the other closed-loop signals are bounded. Finally, two examples are used to test the validity of the results.

  20. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (the equal error utility assumption). This assumption reduces the dimensionality of the "general" three-class ROC analysis and provides a practical figure of merit to evaluate three-class task performance. However, it also limits the generality of the resulting model, because the equal error utility assumption will not apply to all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum-likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that, with assumptions made for both the MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and its ROC surface contains the maximum-likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense, due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
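    As an illustration of the MEU criterion, the decision rule reduces to an argmax of expected utilities; the utility matrix below is invented but respects the equal error utility assumption (equal off-diagonal entries within each column):

```python
import numpy as np

def meu_decision(posteriors, utility):
    """Choose the class maximizing expected utility.

    posteriors: length-3 vector of P(hypothesis | data).
    utility[i, j]: utility of deciding class i when class j is true; the
    equal-error-utility assumption makes the off-diagonal entries equal
    within each column.
    """
    return int(np.argmax(utility @ posteriors))

U = np.array([[ 1.0, -0.5, -0.2],
              [-0.5,  1.0, -0.2],
              [-0.5, -0.5,  1.0]])   # made-up utilities, column-wise equal errors
print(meu_decision(np.array([0.2, 0.3, 0.5]), U))   # -> 2
```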

  1. Measuring Dispersion Effects of Factors in Factorial Experiments.

    DTIC Science & Technology

    1988-01-01

    …error is MSE = SSE/(N − p), the sum of squares of pure error is SSPE = Σ_{i=1..n} Σ_{j=1..r} (y_ij − ȳ_i)², and the mean square of pure error is MSPE = SSPE/(n(r − 1))… δ_i = 0 if the level of the factor in the ith run is 0. 3.1. First Measure. We have SSPE = Σ_{i=1..n} Σ_{j=1..r} δ_i (y_ij − ȳ_i)² + Σ_{i=1..n} Σ_{j=1..r} (1 − δ_i)(y_ij − ȳ_i)². The first component in SSPE corresponds to level 1 of the factor and has (Σ_{i=1..n} δ_i)(r − 1) degrees of freedom. The second component corresponds to…

  2. Smooth empirical Bayes estimation of observation error variances in linear systems

    NASA Technical Reports Server (NTRS)

    Martz, H. F., Jr.; Lian, M. W.

    1972-01-01

    A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.

  3. Modeling Multiplicative Error Variance: An Example Predicting Tree Diameter from Stump Dimensions in Baldcypress

    Treesearch

    Bernard R. Parresol

    1993-01-01

    In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...

  4. Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Nagchaudhuri, Abhijit

    1998-01-01

    This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and the error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrument pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
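    A minimal sketch of the plain LMS update, used here for system identification (the Filtered-X variant additionally filters the reference through a secondary-path model, which is omitted); the plant h is a toy example:

```python
import numpy as np

def lms(x, d, n_taps=8, mu=0.01):
    """Least mean square adaptive transversal filter.

    x: reference input, d: desired response. Returns the error signal
    e = d - y and the adapted weights; the update w += mu * e * x_vec
    descends the instantaneous squared-error surface.
    """
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for k in range(n_taps - 1, len(x)):
        x_vec = x[k - n_taps + 1 : k + 1][::-1]   # [x_k, x_{k-1}, ...]
        e[k] = d[k] - w @ x_vec
        w += mu * e[k] * x_vec
    return e, w

rng = np.random.default_rng(6)
x = rng.standard_normal(5000)
h = np.array([0.8, -0.4, 0.2])                    # unknown plant to identify
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_normal(len(x))
e, w = lms(x, d, n_taps=3, mu=0.05)
print(np.round(w, 3))                             # converges toward h
```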

  5. Measurement of peak impact loads differ between accelerometers - Effects of system operating range and sampling rate.

    PubMed

    Ziebart, Christina; Giangregorio, Lora M; Gibbs, Jenna C; Levine, Iris C; Tung, James; Laing, Andrew C

    2017-06-14

    A wide variety of accelerometer systems, with differing sensor characteristics, are used to detect impact loading during physical activities. The study examined the effects of system characteristics on measured peak impact loading during a variety of activities by comparing outputs from three separate accelerometer systems, and by assessing the influence of simulated reductions in operating range and sampling rate. Twelve healthy young adults performed seven tasks (vertical jump, box drop, heel drop, and bilateral single leg and lateral jumps) while simultaneously wearing three tri-axial accelerometers including a criterion standard laboratory-grade unit (Endevco 7267A) and two systems primarily used for activity-monitoring (ActiGraph GT3X+, GCDC X6-2mini). Peak acceleration (gmax) was compared across accelerometers, and errors resulting from down-sampling (from 640 to 100 Hz) and range-limiting (to ±6 g) the criterion standard output were characterized. The Actigraph activity-monitoring accelerometer underestimated gmax by an average of 30.2%; underestimation by the X6-2mini was not significant. Underestimation error was greater for tasks with greater impact magnitudes. gmax was underestimated when the criterion standard signal was down-sampled (by an average of 11%), range limited (by 11%), and by combined down-sampling and range-limiting (by 18%). These effects explained 89% of the variance in gmax error for the Actigraph system. This study illustrates that both the type and intensity of activity should be considered when selecting an accelerometer for characterizing impact events. In addition, caution may be warranted when comparing impact magnitudes from studies that use different accelerometers, and when comparing accelerometer outputs to osteogenic impact thresholds proposed in literature. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  6. A partial least squares based spectrum normalization method for uncertainty reduction for laser-induced breakdown spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou

    2013-10-01

    A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R²), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
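    A hedged sketch of a PLS calibration on synthetic "spectra" (scikit-learn's PLSRegression on toy data; this is a generic multi-line calibration, not the authors' standardization model, which regresses fluctuations on plasma parameters):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(8)

# Synthetic stand-in: 40 "spectra" (200 channels) whose multi-line peak
# intensities fluctuate shot to shot; y is the (known) analyte concentration.
n, p = 40, 200
y = rng.uniform(0.5, 5.0, n)
lines = [50, 80, 120]                        # assumed emission-line channels
X = 0.02 * rng.standard_normal((n, p))
for c in lines:
    X[:, c] += y * (1.0 + 0.1 * rng.standard_normal(n))  # fluctuating lines

pls = PLSRegression(n_components=3).fit(X, y)
pred = pls.predict(X).ravel()
rmsep = np.sqrt(np.mean((pred - y) ** 2))    # prediction error
rsd = np.std(pred / y) * 100                 # rough pulse-to-pulse spread, %
print(rmsep, rsd)
```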

  7. Super-linear Precision in Simple Neural Population Codes

    NASA Astrophysics Data System (ADS)

    Schwab, David; Fiete, Ila

    2015-03-01

    A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher Information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arises because it gives, through the Cramer-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well-known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly when considering the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
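    For reference, the bound the abstract invokes can be stated in one line; this is the standard Cramer-Rao inequality for an unbiased estimator of a stimulus s from a population response r:

```latex
% Cramer-Rao bound: the mean-squared error of any unbiased estimator
% \hat{s}(\mathbf{r}) is bounded below by the inverse Fisher information J(s).
\mathbb{E}\left[(\hat{s}(\mathbf{r}) - s)^2\right] \ge \frac{1}{J(s)},
\qquad
J(s) = \mathbb{E}\left[\left(\frac{\partial}{\partial s}
       \ln p(\mathbf{r} \mid s)\right)^{2}\right]
```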

  8. Importance of spatial autocorrelation in modeling bird distributions at a continental scale

    USGS Publications Warehouse

    Bahn, V.; O'Connor, R.J.; Krohn, W.B.

    2006-01-01

    Spatial autocorrelation in species' distributions has been recognized as inflating the probability of a type I error in hypotheses tests, causing biases in variable selection, and violating the assumption of independence of error terms in models such as correlation or regression. However, it remains unclear whether these problems occur at all spatial resolutions and extents, and under which conditions spatially explicit modeling techniques are superior. Our goal was to determine whether spatial models were superior at large extents and across many different species. In addition, we investigated the importance of purely spatial effects in distribution patterns relative to the variation that could be explained through environmental conditions. We studied distribution patterns of 108 bird species in the conterminous United States using ten years of data from the Breeding Bird Survey. We compared the performance of spatially explicit regression models with non-spatial regression models using Akaike's information criterion. In addition, we partitioned the variance in species distributions into an environmental, a pure spatial and a shared component. The spatially-explicit conditional autoregressive regression models strongly outperformed the ordinary least squares regression models. In addition, partialling out the spatial component underlying the species' distributions showed that an average of 17% of the explained variation could be attributed to purely spatial effects independent of the spatial autocorrelation induced by the underlying environmental variables. We concluded that location in the range and neighborhood play an important role in the distribution of species. Spatially explicit models are expected to yield better predictions especially for mobile species such as birds, even in coarse-grained models with a large extent. ?? Ecography.

  9. The Performance of Five Bioelectrical Impedance Analysis Prediction Equations against Dual X-ray Absorptiometry in Estimating Appendicular Skeletal Muscle Mass in an Adult Australian Population

    PubMed Central

    Yu, Solomon C. Y.; Powell, Alice; Khow, Kareeann S. F.; Visvanathan, Renuka

    2016-01-01

    Appendicular skeletal muscle mass (ASM) is a diagnostic criterion for sarcopenia. Bioelectrical impedance analysis (BIA) offers a bedside approach to measure ASM but the performance of BIA prediction equations (PE) varies with ethnicities and body composition. We aim to validate the performance of five PEs in estimating ASM against estimation by dual-energy X-ray absorptiometry (DXA). We recruited 195 healthy adult Australians and ASM was measured using single-frequency BIA. Bland-Altman analysis was used to assess the predictive accuracy of ASM as determined by BIA against DXA. Precision (root mean square error (RMSE)) and bias (mean error (ME)) were calculated according to the method of Sheiner and Beal. Four PEs (except that by Kim) showed ASM values that correlated strongly with ASMDXA (r ranging from 0.96 to 0.97, p < 0.001). The Sergi equation performed the best with the lowest ME of −1.09 kg (CI: −0.84–−1.34, p < 0.001) and the RMSE was 2.09 kg (CI: 1.72–2.47). In men, the Kyle equation performed better with the lowest ME (−0.32 kg (CI: −0.66–0.02) and RMSE (1.54 kg (CI: 1.14–1.93)). The Sergi equation is applicable in adult Australians (Caucasian) whereas the Kyle equation can be considered in males. The need remains to validate PEs in other ethnicities and to develop equations suitable for multi-frequency BIA. PMID:27043617
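    Bias and precision in the sense of Sheiner and Beal reduce to the mean error and root mean square error of the predictions; a minimal sketch with hypothetical ASM values:

```python
import numpy as np

def bias_precision(predicted, measured):
    """Bias (mean error, ME) and precision (root mean square error, RMSE)
    following the method of Sheiner and Beal: errors are prediction minus
    reference."""
    err = np.asarray(predicted) - np.asarray(measured)
    return err.mean(), np.sqrt(np.mean(err ** 2))

asm_bia = np.array([18.2, 22.5, 20.1, 25.3])   # hypothetical ASM by BIA (kg)
asm_dxa = np.array([19.5, 23.9, 21.0, 26.8])   # hypothetical ASM by DXA (kg)
print(bias_precision(asm_bia, asm_dxa))        # (ME, RMSE) in kg
```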

  10. Flipped clinical training: a structured training method for undergraduates in complete denture prosthesis.

    PubMed

    K, Anbarasi; K, Kasim Mohamed; Vijayaraghavan, Phagalvarthy; Kandaswamy, Deivanayagam

    2016-12-01

    To design and implement flipped clinical training for undergraduate dental students in removable complete denture treatment, and to predict its effectiveness by comparing the assessment results of students trained by the flipped and traditional methods. Flipped training was designed by shifting learning from the clinic to the learning center (phase I) while preserving practice in the clinic (phase II). In phase I, a student-faculty interactive session was arranged to recap prior knowledge. This was followed by a display of an audio-synchronized video demonstration of the procedure in a repeatable way, and a subsequent display of possible errors that may occur in treatment, with guidelines to overcome such errors. In phase II, a live demonstration of the procedure was given. Students were asked to treat three patients under an instructor's supervision. The summative assessment was conducted by applying the same checklist criterion and rubric scoring used for the traditional method. Assessment results of three batches of students trained by the flipped method (study group) and three traditionally trained previous batches (control group) were compared by chi-square test. Comparing the numbers of students who prepared acceptable dentures (scores 2 and 3) and unacceptable dentures (score 1), summed over the three traditionally trained batches and the three flipped-trained batches, revealed that the number of students who demonstrated competency by preparing acceptable dentures was higher for flipped training (χ² = 30.996, p < 0.001). The results reveal the superiority of flipped training in enhancing students' competency; it is hence recommended for training in various clinical procedures.
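    The reported comparison is a 2 × 2 chi-square test of acceptable versus unacceptable dentures across training methods; a sketch with hypothetical counts (the paper reports only the statistic, not the table):

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: [acceptable (scores 2-3), unacceptable (score 1)]
table = [[138, 22],    # traditionally trained batches
         [156,  4]]    # flipped-trained batches
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p)
```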

  11. Automated time series forecasting for biosurveillance.

    PubMed

    Burkom, Howard S; Murphy, Sean Patrick; Shmueli, Galit

    2007-09-30

    For robust detection performance, traditional control chart monitoring for biosurveillance is based on input data free of trends, day-of-week effects, and other systematic behaviour. Time series forecasting methods may be used to remove this behaviour by subtracting forecasts from observations to form residuals for algorithmic input. We describe three forecast methods and compare their predictive accuracy on each of 16 authentic syndromic data streams. The methods are (1) a non-adaptive regression model using a long historical baseline, (2) an adaptive regression model with a shorter, sliding baseline, and (3) the Holt-Winters method for generalized exponential smoothing. Criteria for comparing the forecasts were the root-mean-square error, the median absolute per cent error (MedAPE), and the median absolute deviation. The median-based criteria showed best overall performance for the Holt-Winters method. The MedAPE measures over the 16 test series averaged 16.5, 11.6, and 9.7 for the non-adaptive regression, adaptive regression, and Holt-Winters methods, respectively. The non-adaptive regression forecasts were degraded by changes in the data behaviour in the fixed baseline period used to compute model coefficients. The mean-based criterion was less conclusive because of the effects of poor forecasts on a small number of calendar holidays. The Holt-Winters method was also most effective at removing serial autocorrelation, with most 1-day-lag autocorrelation coefficients below 0.15. The forecast methods were compared without tuning them to the behaviour of individual series. We achieved improved predictions with such tuning of the Holt-Winters method, but practical use of such improvements for routine surveillance will require reliable data classification methods.
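    The three comparison criteria are simple to compute on a held-out series; a minimal sketch, taking the median absolute deviation as the median absolute forecast error (conventions vary):

```python
import numpy as np

def forecast_criteria(observed, forecast):
    """Root-mean-square error, median absolute percent error (MedAPE), and
    median absolute deviation of the forecast errors (here the median
    absolute error; definitions of the last criterion vary)."""
    obs = np.asarray(observed, dtype=float)
    err = obs - np.asarray(forecast, dtype=float)
    rmse = np.sqrt(np.mean(err ** 2))
    medape = 100.0 * np.median(np.abs(err[obs != 0] / obs[obs != 0]))
    mad = np.median(np.abs(err))
    return rmse, medape, mad

obs = np.array([120.0, 95.0, 130.0, 160.0, 110.0])  # e.g. daily syndromic counts
fc = np.array([110.0, 100.0, 120.0, 150.0, 125.0])  # one-step-ahead forecasts
print(forecast_criteria(obs, fc))
```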

  12. Automated algorithm for mapping regions of cold-air pooling in complex terrain

    NASA Astrophysics Data System (ADS)

    Lundquist, Jessica D.; Pepin, Nicholas; Rochford, Caitlin

    2008-11-01

    In complex terrain, air in contact with the ground becomes cooled from radiative energy loss on a calm clear night and, being denser than the free atmosphere at the same elevation, sinks to valley bottoms. Cold-air pooling (CAP) occurs where this cooled air collects on the landscape. This article focuses on identifying locations on a landscape subject to considerably lower minimum temperatures than the regional average during conditions of clear skies and weak synoptic-scale winds, providing a simple automated method to map locations where cold air is likely to pool. Digital elevation models of regions of complex terrain were used to derive surfaces of local slope, curvature, and percentile elevation relative to surrounding terrain. Each pixel was classified as prone to CAP, not prone to CAP, or exhibiting no signal, based on the criterion that CAP occurs in regions with flat slopes in local depressions or valleys (negative curvature and low percentile). Along-valley changes in the topographic amplification factor (TAF) were then calculated to determine whether the cold air in the valley was likely to drain or pool. Results were checked against distributed temperature measurements in Loch Vale, Rocky Mountain National Park, Colorado; in the Eastern Pyrenees, France; and in Yosemite National Park, Sierra Nevada, California. Using CAP classification to interpolate temperatures across complex terrain resulted in improvements in root-mean-square errors compared to more basic interpolation techniques at most sites within the three areas examined, with average error reductions of up to 3°C at individual sites and about 1°C averaged over all sites in the study areas.
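    A minimal sketch of the classification step on a synthetic DEM, using an illustrative slope threshold, a positive-Laplacian test for hollows (curvature sign conventions differ from the article's), and a windowed elevation percentile; the TAF drainage test is omitted:

```python
import numpy as np
from scipy import ndimage

def classify_cap(dem, cellsize=30.0, win=15, flat_deg=2.0, pct_thresh=30.0):
    """Flag pixels prone to cold-air pooling: near-flat slope, concave
    (hollow) curvature, and low elevation percentile within a local window.
    Thresholds and window size here are illustrative, not the published ones.
    """
    gy, gx = np.gradient(dem, cellsize)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    hollow = ndimage.laplace(dem) > 0          # positive Laplacian in hollows

    def pct_rank(values):                      # percentile of the center cell
        center = values[values.size // 2]
        return 100.0 * np.mean(values <= center)

    pct = ndimage.generic_filter(dem, pct_rank, size=win)
    return (slope < flat_deg) & hollow & (pct < pct_thresh)

# Synthetic DEM: a gentle plane with a closed depression centered at (40, 40).
yy, xx = np.mgrid[0:80, 0:80].astype(float)
dem = 0.5 * xx - 150.0 * np.exp(-((xx - 40) ** 2 + (yy - 40) ** 2) / 200.0)
mask = classify_cap(dem)
print(mask[40, 40], int(mask.sum()))           # depression floor is flagged
```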

  13. Performance Evaluation of MIMO-UWB Systems Using Measured Propagation Data and Proposal of Timing Control Scheme in LOS Environments

    NASA Astrophysics Data System (ADS)

    Takanashi, Masaki; Nishimura, Toshihiko; Ogawa, Yasutaka; Ohgane, Takeo

    Ultrawide-band impulse radio (UWB-IR) technology and multiple-input multiple-output (MIMO) systems have attracted interest regarding their use in next-generation high-speed radio communication. We have studied the use of MIMO ultrawide-band (MIMO-UWB) systems to enable higher-speed radio communication. We used frequency-domain equalization based on the minimum mean square error criterion (MMSE-FDE) to reduce intersymbol interference (ISI) and co-channel interference (CCI) in MIMO-UWB systems. Because UWB systems are expected to be used for short-range wireless communication, MIMO-UWB systems will usually operate in line-of-sight (LOS) environments and direct waves will be received at the receiver side. Direct waves have high power and cause high correlations between antennas in such environments. Thus, it is thought that direct waves will adversely affect the performance of spatial filtering and equalization techniques used to enhance signal detection. To examine the feasibility of MIMO-UWB systems, we conducted MIMO-UWB system propagation measurements in LOS environments. From the measurements, we found that the arrival time of direct waves from different transmitting antennas depends on the MIMO configuration. Because we can obtain high power from the direct waves, direct wave reception is critical for maximizing transmission performance. In this paper, we present our measurement results, and propose a way to improve performance using a method of transmit (Tx) and receive (Rx) timing control. We evaluate the bit error rate (BER) performance for this form of timing control using measured channel data.
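    A minimal single-antenna sketch of MMSE frequency-domain equalization (the MIMO spatial filtering and the proposed Tx/Rx timing control are omitted, and the toy block ignores the cyclic prefix):

```python
import numpy as np

def mmse_fde(received, h, snr_linear):
    """Single-carrier MMSE frequency-domain equalization, one antenna.

    received: one block of time-domain samples, h: channel impulse
    response, snr_linear: per-sample SNR estimate. The MMSE weight
    W = H* / (|H|^2 + 1/SNR) balances ISI suppression against noise
    enhancement (a zero-forcing equalizer would use W = 1/H).
    """
    n = len(received)
    H = np.fft.fft(h, n)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr_linear)
    return np.fft.ifft(W * np.fft.fft(received))

rng = np.random.default_rng(7)
symbols = rng.choice([-1.0, 1.0], 256)          # BPSK block
h = np.array([1.0, 0.5, 0.2])                   # toy multipath channel
rx = np.convolve(symbols, h)[:256]              # (cyclic prefix ignored)
rx += 0.05 * rng.standard_normal(256)
eq = mmse_fde(rx, h, snr_linear=100.0)
print(np.mean(np.sign(eq.real) == symbols))     # detection accuracy, ~1.0
```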

  14. 10 CFR 140.84 - Criterion I-Substantial discharge of radioactive material or substantial radiation levels offsite.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Radioactive material that may be taken into the body from its occurrence in air or water; and (3) Radioactive... Commission finds that: (1) Surface contamination of at least a total of any 100 square meters of offsite... facility and such contamination is characterized by levels of radiation in excess of one of the values...

  15. Reliability considerations in the placement of control system components

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.

    1983-01-01

    This paper presents a methodology, along with applications to a grid type structure, for incorporating reliability considerations in the decision for actuator placement on large space structures. The method involves the minimization of a criterion that considers mission life and the reliability of the system components. It is assumed that the actuator gains are to be readjusted following failures, but their locations cannot be changed. The goal of the design is to suppress vibrations of the grid and the integral square of the grid modal amplitudes is used as a measure of performance of the control system. When reliability of the actuators is considered, a more pertinent measure is the expected value of the integral; that is, the sum of the squares of the modal amplitudes for each possible failure state considered, multiplied by the probability that the failure state will occur. For a given set of actuator locations, the optimal criterion may be graphed as a function of the ratio of the mean time to failure of the components and the design mission life or reservicing interval. The best location of the actuators is typically different for a short mission life than for a long one.

  16. Determination of fiber-matrix interface failure parameters from off-axis tests

    NASA Technical Reports Server (NTRS)

    Naik, Rajiv A.; Crews, John H., Jr.

    1993-01-01

    Critical fiber-matrix (FM) interface strength parameters were determined using a micromechanics-based approach together with failure data from off-axis tension (OAT) tests. The ply stresses at failure for a range of off-axis angles were used as input to a micromechanics analysis that was performed using the personal computer-based MICSTRAN code. FM interface stresses at the failure loads were calculated for both the square and the diamond array models. A simple procedure was developed to determine which array had the more severe FM interface stresses and the location of these critical stresses on the interface. For the cases analyzed, critical FM interface stresses were found to occur with the square array model and were located at a point where adjacent fibers were closest together. The critical FM interface stresses were used together with the Tsai-Wu failure theory to determine a failure criterion for the FM interface. This criterion was then used to predict the onset of ply cracking in angle-ply laminates for a range of laminate angles. Predictions for the onset of ply cracking in angle-ply laminates agreed with the test data trends.
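    The Tsai-Wu criterion used for the interface is the standard plane-stress form; the sketch below evaluates the failure index with made-up interface strengths, not the values derived in the report:

```python
def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S, F12=0.0):
    """Plane-stress Tsai-Wu failure index; failure is predicted when >= 1.

    s1, s2, t12: normal and shear stresses in the material axes.
    Xt, Xc, Yt, Yc, S: tensile/compressive strengths in the two directions
    and the shear strength (compressive strengths entered as positive).
    """
    F1 = 1.0 / Xt - 1.0 / Xc
    F2 = 1.0 / Yt - 1.0 / Yc
    F11 = 1.0 / (Xt * Xc)
    F22 = 1.0 / (Yt * Yc)
    F66 = 1.0 / S ** 2
    return (F1 * s1 + F2 * s2 + F11 * s1 ** 2 + F22 * s2 ** 2
            + F66 * t12 ** 2 + 2.0 * F12 * s1 * s2)

# Illustrative (made-up) interface strengths and stresses in MPa:
print(tsai_wu_index(s1=30.0, s2=40.0, t12=25.0,
                    Xt=80.0, Xc=120.0, Yt=60.0, Yc=100.0, S=50.0))  # ~1.0
```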

  17. Geospatial distribution modeling and determining suitability of groundwater quality for irrigation purpose using geospatial methods and water quality index (WQI) in Northern Ethiopia

    NASA Astrophysics Data System (ADS)

    Gidey, Amanuel

    2018-06-01

    Determining the suitability and vulnerability of groundwater quality for irrigation use is a key early warning for the careful management of groundwater resources to diminish impacts on irrigation. This study was conducted to determine the overall suitability of groundwater quality for irrigation use and to generate spatial distribution maps of it in the Elala catchment, Northern Ethiopia. Thirty-nine groundwater samples were collected to analyze and map the water quality variables. Atomic absorption spectrophotometry, ultraviolet spectrophotometry, titration, and calculation methods were used for laboratory groundwater quality analysis. ArcGIS, geospatial analysis tools, semivariogram model types and interpolation methods were used to generate geospatial distribution maps. Twelve and eight water quality variables were used to produce the weighted overlay and irrigation water quality index models, respectively. Root-mean-square error, mean square error, absolute square error, mean error, root-mean-square standardized error, and measured-versus-predicted values were used for cross-validation. The overall weighted overlay model result showed that 146 km² of the area is highly suitable, 135 km² moderately suitable and 60 km² unsuitable for irrigation use. The irrigation water quality index results indicate 10.26% of samples with no restriction, 23.08% with low restriction, 20.51% with moderate restriction, 15.38% with high restriction and 30.76% with severe restriction for irrigation use. GIS and the irrigation water quality index are better methods for irrigation water resources management to achieve full-yield irrigation production, improve food security, sustain irrigation over a long period, and avoid increasing environmental problems for future generations.

  18. Methods for estimating the magnitude and frequency of peak streamflows at ungaged sites in and near the Oklahoma Panhandle

    USGS Publications Warehouse

    Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.

    2015-09-28

    Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all percentages of annual exceedance probability and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use at stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.

  19. [NIR Assignment of Magnolol by 2D-COS Technology and Model Application in Huoxiangzhengqi Oral Liquid].

    PubMed

    Pei, Yan-ling; Wu, Zhi-sheng; Shi, Xin-yuan; Pan, Xiao-ning; Peng, Yan-fang; Qiao, Yan-jiang

    2015-08-01

    Near infrared (NIR) spectroscopy assignment of Magnolol was performed using deuterated chloroform as the solvent together with two-dimensional correlation spectroscopy (2D-COS) technology. According to the synchronous spectra of deuterated chloroform and Magnolol, the regions 1365~1455, 1600~1720, 2000~2181 and 2275~2465 nm are characteristic absorptions of Magnolol. In terms of the structure of Magnolol, 1440 nm corresponds to the stretching vibration of the phenolic O-H group; 1679 nm to the stretching vibration of the aryl group and of the methyl group attached to it; 2117, 2304, 2339 and 2370 nm to combinations of the stretching, bending and deformation vibrations of aryl C-H; and 2445 nm to the bending vibration of the methyl group linked to the aryl group. These bands are attributed to the characteristics of Magnolol. Huoxiangzhengqi Oral Liquid was then used to study Magnolol: the characteristic band from the spectral assignment and the bands selected by interval Partial Least Squares (iPLS) and Synergy interval Partial Least Squares (SiPLS) were used to establish Partial Least Squares (PLS) quantitative models. The coefficients of determination Rcal(2) and Rpre(2) were greater than 0.99, and the Root Mean Square Error of Calibration (RMSEC), Root Mean Square Error of Cross Validation (RMSECV) and Root Mean Square Error of Prediction (RMSEP) were very small, indicating that the characteristic band from the spectral assignment gives the same results as the chemometric band selection in the PLS model. This work provides a reference for NIR spectral assignment of chemical constituents in Chinese Materia Medica and for the interpretation of NIR band filters.
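
    A minimal sketch of this kind of PLS calibration workflow, reporting RMSEC, RMSECV and RMSEP, is shown below using scikit-learn on synthetic stand-in spectra; the iPLS/SiPLS band selection is not reproduced.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict, train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 200))            # stand-in absorbance spectra
      y = 0.8 * X[:, 50] + 0.5 * X[:, 120] + rng.normal(scale=0.05, size=60)

      X_cal, X_test, y_cal, y_test = train_test_split(X, y, random_state=0)
      pls = PLSRegression(n_components=5).fit(X_cal, y_cal)

      rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
      rmsec = rmse(y_cal, pls.predict(X_cal).ravel())                         # calibration
      rmsecv = rmse(y_cal, cross_val_predict(pls, X_cal, y_cal, cv=5).ravel())  # cross-validation
      rmsep = rmse(y_test, pls.predict(X_test).ravel())                       # prediction
      print(rmsec, rmsecv, rmsep)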

  20. A Sparse Bayesian Approach for Forward-Looking Superresolution Radar Imaging

    PubMed Central

    Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu

    2017-01-01

    This paper presents a sparse superresolution approach for high cross-range resolution imaging of forward-looking scanning radar based on the Bayesian criterion. First, a novel forward-looking signal model is established as the product of the measurement matrix and the cross-range target distribution, which is more accurate than the conventional convolution model. Then, based on the Bayesian criterion, the widely used sparse regularization is adopted as the penalty term to recover the target distribution. The derivation of the cost function is described, and finally, an iterative expression for minimizing this function is presented. The paper also discusses how to estimate the single parameter of the Gaussian noise. With the advantage of a more accurate model, the proposed sparse Bayesian approach enjoys a lower model error. Meanwhile, when compared with the conventional superresolution methods, the proposed approach shows high cross-range resolution and small location error. The superresolution results for the simulated point target, scene data, and real measured data are presented to demonstrate the superior performance of the proposed approach. PMID:28604583
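
    The authors' Bayesian iteration is not reproduced here, but the following sketch illustrates the underlying idea of sparse recovery under a measurement-matrix model, using iteratively reweighted least squares to approximate an L1 penalty; the matrix A, the regularization weight and all sizes are invented.

      import numpy as np

      def irls_sparse(A, y, lam=0.1, eps=1e-6, iters=50):
          """Sparse solution of y = A x via iteratively reweighted least squares."""
          x = np.linalg.lstsq(A, y, rcond=None)[0]      # minimum-norm initial guess
          for _ in range(iters):
              # reweighting lam/|x_i| turns the quadratic penalty into an L1 surrogate
              W = np.diag(lam / (np.abs(x) + eps))
              x = np.linalg.solve(A.T @ A + W, A.T @ y)
          return x

      rng = np.random.default_rng(1)
      A = rng.normal(size=(80, 120))                    # stand-in measurement matrix
      x_true = np.zeros(120)
      x_true[[10, 40, 90]] = [1.0, -0.7, 0.5]           # sparse target distribution
      y = A @ x_true + rng.normal(scale=0.01, size=80)
      print(np.round(irls_sparse(A, y)[[10, 40, 90]], 2))   # approximately recovered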

  1. Validation of powder X-ray diffraction following EN ISO/IEC 17025.

    PubMed

    Eckardt, Regina; Krupicka, Erik; Hofmeister, Wolfgang

    2012-05-01

    Powder X-ray diffraction (PXRD) is widely used in forensic science laboratories, with the main focus on qualitative phase identification. Little is found in the literature on the validation of PXRD in the field of forensic sciences. According to EN ISO/IEC 17025, the method has to be tested for several parameters. Trueness, specificity, and selectivity of PXRD were tested using certified reference materials or a combination thereof. All three tested parameters showed the secure performance of the method. Sample preparation errors were simulated to evaluate the robustness of the method. These errors were either easily detected by the operator or nonsignificant for phase identification. In the case of the detection limit, a statistical evaluation of the signal-to-noise ratio showed that a peak criterion of three sigma is inadequate, and recommendations for a more realistic peak criterion are given. Finally, the results of an international proficiency test confirmed the secure performance of PXRD. © 2012 American Academy of Forensic Sciences.

  2. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    PubMed

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described. This is the first time that approximation errors have been explicitly considered in a GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, showing that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.

  3. Reliability, Validity, and Classification Accuracy of the DSM-5 Diagnostic Criteria for Gambling Disorder and Comparison to DSM-IV.

    PubMed

    Stinchfield, Randy; McCready, John; Turner, Nigel E; Jimenez-Murcia, Susana; Petry, Nancy M; Grant, Jon; Welte, John; Chapman, Heather; Winters, Ken C

    2016-09-01

    The DSM-5 was published in 2013 and included two substantive revisions for gambling disorder (GD): a reduction in the diagnostic threshold from five to four criteria and elimination of the illegal activities criterion. The purpose of this study was twofold: first, to assess the reliability, validity and classification accuracy of the DSM-5 diagnostic criteria for GD; second, to compare the DSM-5 and DSM-IV on reliability, validity, and classification accuracy, including an examination of the effect of eliminating the illegal acts criterion on diagnostic accuracy. To compare DSM-5 and DSM-IV, eight datasets from three different countries (Canada, USA, and Spain; total N = 3247) were used. All datasets were based on similar research methods. Participants were recruited from outpatient gambling treatment services to represent the group with a GD and from the community to represent the group without a GD. All participants were administered a standardized measure of diagnostic criteria. The DSM-5 yielded satisfactory reliability, validity and classification accuracy. In comparing the DSM-5 to the DSM-IV, most comparisons of reliability, validity and classification accuracy showed more similarities than differences. There was evidence of modest improvements in classification accuracy for DSM-5 over DSM-IV, particularly in the reduction of false negative errors. This reduction in false negative errors was largely a function of lowering the cut score from five to four, and this revision is an improvement over DSM-IV. From a statistical standpoint, eliminating the illegal acts criterion did not make a significant impact on diagnostic accuracy. From a clinical standpoint, illegal acts can still be addressed in the context of the DSM-5 criterion of lying to others.

  4. Forecasting mortality of road traffic injuries in China using seasonal autoregressive integrated moving average model.

    PubMed

    Zhang, Xujun; Pang, Yuanyuan; Cui, Mengjing; Stallones, Lorann; Xiang, Huiyun

    2015-02-01

    Road traffic injuries have become a major public health problem in China. This study aimed to develop statistical models for predicting road traffic deaths and to analyze the seasonality of deaths in China. A seasonal autoregressive integrated moving average (SARIMA) model was used to fit the data from 2000 to 2011. The Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were used to evaluate the constructed models. The autocorrelation function and partial autocorrelation function of residuals and the Ljung-Box test were used to compare goodness-of-fit between the different models. The SARIMA model was used to forecast monthly road traffic deaths in 2012. The seasonal pattern of road traffic mortality data was statistically significant in China. The SARIMA (1, 1, 1) (0, 1, 1)12 model was the best-fitting model among the candidate models; its Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were -483.679, -475.053, and 4.937, respectively. Goodness-of-fit testing showed no autocorrelation in the residuals of the model (Ljung-Box test, Q = 4.86, P = .993). The fitted deaths using the SARIMA (1, 1, 1) (0, 1, 1)12 model for years 2000 to 2011 closely followed the observed number of road traffic deaths for the same years. The predicted and observed deaths were also very close for 2012. This study suggests that accurate forecasting of road traffic death incidence is possible using a SARIMA model. A SARIMA model applied to historical road traffic death data could provide important evidence of the burden of road traffic injuries in China. Copyright © 2015 Elsevier Inc. All rights reserved.
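
    A minimal sketch of fitting the same model class with statsmodels, applied to simulated monthly counts rather than the study's mortality series, is given below.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 144                                             # 12 years of monthly data
      seasonal = 10 * np.sin(2 * np.pi * np.arange(n) / 12)
      y = 500 + seasonal + rng.normal(scale=5, size=n).cumsum()

      # SARIMA(1,1,1)(0,1,1)12, the order selected in the study:
      model = sm.tsa.statespace.SARIMAX(y, order=(1, 1, 1),
                                        seasonal_order=(0, 1, 1, 12))
      res = model.fit(disp=False)
      print(res.aic, res.bic)                             # criteria for model choice
      print(res.get_forecast(steps=12).predicted_mean)    # forecast the next 12 months

    Refitting with alternative (p, d, q)(P, D, Q)12 orders and comparing AIC/BIC mirrors the model-selection step described above.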

  5. An empirical model for estimating solar radiation in the Algerian Sahara

    NASA Astrophysics Data System (ADS)

    Benatiallah, Djelloul; Benatiallah, Ali; Bouchouicha, Kada; Hamouda, Messaoud; Nasri, Bahous

    2018-05-01

    The present work applies the empirical model R.sun to evaluate the solar radiation fluxes on a horizontal plane under clear-sky conditions for the city of Adrar, Algeria (27°18 N, 0°11 W), and compares the estimates with measurements at the site. The expected results of this comparison are of importance for investment studies of solar systems (solar power plants for electricity production, CSP) and also for the design and performance analysis of any system using solar energy. The statistical indicators used to evaluate the accuracy of the model were the mean bias error (MBE), root mean square error (RMSE) and coefficient of determination. The results show that for global radiation, the daily correlation coefficient is 0.9984, the mean absolute percentage error is 9.44 %, the daily mean bias error is -7.94 % and the daily root mean square error is 12.31 %.
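
    The cited indicators are straightforward to compute; a small sketch on placeholder arrays (not the Adrar measurements) follows.

      import numpy as np

      def mbe(obs, est):  return np.mean(est - obs)                    # mean bias error
      def rmse(obs, est): return np.sqrt(np.mean((est - obs) ** 2))    # root mean square error
      def mape(obs, est): return 100 * np.mean(np.abs((est - obs) / obs))

      obs = np.array([5.1, 6.0, 6.8, 7.2, 6.5])   # e.g. daily irradiation, kWh/m^2
      est = np.array([5.0, 5.9, 6.9, 7.0, 6.3])
      r = np.corrcoef(obs, est)[0, 1]
      print(mbe(obs, est), rmse(obs, est), mape(obs, est), r**2)

    Dividing MBE and RMSE by the mean of the observations gives the relative (percentage) forms quoted in the abstract.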

  6. Response Surface Modeling Using Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; DeLoach, Richard

    2001-01-01

    A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one-factor-at-a-time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.

  7. Effective Algorithm for Detection and Correction of the Wave Reconstruction Errors Caused by the Tilt of Reference Wave in Phase-shifting Interferometry

    NASA Astrophysics Data System (ADS)

    Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying

    2010-04-01

    In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. In practice, however, a slight tilt of the reference wave often occurs, and this tilt introduces unexpected errors in the reconstructed object wavefront. Usually the least-squares method with iterations, which is time consuming, is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple, effective algorithm is suggested to detect and then correct this kind of error. The method uses only simple mathematical operations, avoiding the least-squares equations needed in most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames, for both smooth and diffusing objects, and its excellent performance has been verified by computer simulations. The numerical simulations show that the wave reconstruction errors can be reduced by two orders of magnitude.

  8. Validation of Core Temperature Estimation Algorithm

    DTIC Science & Technology

    2016-01-29

    Figure captions (excerpts): (a) plot of observed versus estimated core temperature, and of observed versus estimated PSI, with the line of identity (dashed), the least-squares regression line (solid) and the line equation in the top left corner; (b) Bland-Altman plots for comparison. The root mean squared error (RMSE) was also computed, as given by Equation 2.

  9. Systematic Error Modeling and Bias Estimation

    PubMed Central

    Zhang, Feihu; Knoll, Alois

    2016-01-01

    This paper analyzes the statistical properties of the systematic error in terms of range and bearing during the transformation process. Furthermore, we rely on a weighted nonlinear least-squares method to calculate the biases based on the proposed models. The results show the high performance of the proposed approach for error modeling and bias estimation. PMID:27213386
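
    A sketch of weighted nonlinear least-squares bias estimation in this spirit is given below; the range/bearing observation model, noise levels and bias values are generic stand-ins for the paper's setup.

      import numpy as np
      from scipy.optimize import least_squares

      def residuals(bias, ranges, bearings, r_true, b_true, sigma_r, sigma_b):
          dr, db = bias                       # range bias and bearing bias
          # residuals divided by the measurement sigmas -> weighted sum of squares
          return np.concatenate([(ranges - dr - r_true) / sigma_r,
                                 (bearings - db - b_true) / sigma_b])

      rng = np.random.default_rng(3)
      pts = rng.uniform(10, 100, size=(50, 2))          # known reference targets
      r_true = np.hypot(pts[:, 0], pts[:, 1])
      b_true = np.arctan2(pts[:, 1], pts[:, 0])
      ranges = r_true + 0.5 + rng.normal(scale=0.1, size=50)       # 0.5 range bias
      bearings = b_true + 0.01 + rng.normal(scale=0.002, size=50)  # 10 mrad bearing bias
      fit = least_squares(residuals, x0=[0.0, 0.0],
                          args=(ranges, bearings, r_true, b_true, 0.1, 0.002))
      print(fit.x)    # estimated biases, close to [0.5, 0.01]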

  10. The criterion for time symmetry of probabilistic theories and the reversibility of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Holster, A. T.

    2003-10-01

    Physicists routinely claim that the fundamental laws of physics are 'time symmetric' or 'time reversal invariant' or 'reversible'. In particular, it is claimed that the theory of quantum mechanics is time symmetric. But it is shown in this paper that the orthodox analysis suffers from a fatal conceptual error, because the logical criterion for judging the time symmetry of probabilistic theories has been incorrectly formulated. The correct criterion requires symmetry between future-directed laws and past-directed laws. This criterion is formulated and proved in detail. The orthodox claim that quantum mechanics is reversible is re-evaluated. The property demonstrated in the orthodox analysis is shown to be quite distinct from time reversal invariance. The view of Satosi Watanabe that quantum mechanics is time asymmetric is verified, as well as his view that this feature does not merely show a de facto or 'contingent' asymmetry, as commonly supposed, but implies a genuine failure of time reversal invariance of the laws of quantum mechanics. The laws of quantum mechanics would be incompatible with a time-reversed version of our universe.

  11. A mathematical programming method for formulating a fuzzy regression model based on distance criterion.

    PubMed

    Chen, Liang-Hsuan; Hsueh, Chan-Ching

    2007-06-01

    Fuzzy regression models are useful to investigate the relationship between explanatory and response variables with fuzzy observations. Different from previous studies, this correspondence proposes a mathematical programming method to construct a fuzzy regression model based on a distance criterion. The objective of the mathematical programming is to minimize the sum of distances between the estimated and observed responses on the X axis, such that the fuzzy regression model constructed has the minimal total estimation error in distance. Only several alpha-cuts of fuzzy observations are needed as inputs to the mathematical programming model; therefore, the applications are not restricted to triangular fuzzy numbers. Three examples, adopted in the previous studies, and a larger example, modified from the crisp case, are used to illustrate the performance of the proposed approach. The results indicate that the proposed model has better performance than those in the previous studies based on either distance criterion or Kim and Bishu's criterion. In addition, the efficiency and effectiveness for solving the larger example by the proposed model are also satisfactory.

  12. Photogrammetric Method and Software for Stream Planform Identification

    NASA Astrophysics Data System (ADS)

    Stonedahl, S. H.; Stonedahl, F.; Lohberg, M. M.; Lusk, K.; Miller, D.

    2013-12-01

    Accurately characterizing the planform of a stream is important for many purposes, including recording measurement and sampling locations, monitoring change due to erosion or volumetric discharge, and spatial modeling of stream processes. While expensive surveying equipment or high-resolution aerial photography can be used to obtain planform data, our research focused on developing a close-range photogrammetric method (and accompanying free/open-source software) to serve as a cost-effective alternative. This method involves securing and floating a wooden square frame on the stream surface at several locations, taking photographs from numerous angles at each location, and then post-processing and merging data from these photos using the corners of the square for reference points, unit scale, and perspective correction. For our test field site we chose a ~35 m reach along Black Hawk Creek in Sunderbruch Park (Davenport, IA), a small, slow-moving stream with overhanging trees. To quantify error we measured 88 distances between 30 marked control points along the reach. We calculated error by comparing these 'ground truth' distances to the corresponding distances extracted from our photogrammetric method. We placed the square at three locations along our reach and photographed it from multiple angles. The square corners, visible control points, and visible stream outline were hand-marked in these photos using the GIMP (open-source image editor). We wrote an open-source GUI in Java (hosted on GitHub), which allows the user to load marked-up photos, designate square corners, and label control points. The GUI also extracts the marked pixel coordinates from the images. We also wrote several scripts (currently in MATLAB) that correct the pixel coordinates for radial distortion using Brown's lens distortion model, correct for perspective by forcing the four square corner pixels to form a parallelogram in 3-space, and rotate the points in order to correctly orient all photos of the same square location. Planform data from multiple photos (and multiple square locations) are combined using weighting functions that mitigate the error stemming from the markup process, imperfect camera calibration, etc. We have used our (beta) software to mark and process over 100 photos, yielding an average error of only 1.5% relative to our 88 measured lengths. Next we plan to translate the MATLAB scripts into Python and release their source code, at which point only free software, consumer-grade digital cameras, and inexpensive building materials will be needed for others to replicate this method at new field sites.
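
    The perspective step can be illustrated with a direct linear transform that maps the four marked corners of the floating square onto a unit square; the corner pixel coordinates below are hypothetical, and the radial-distortion correction and weighting functions of the actual scripts are not reproduced.

      import numpy as np

      def homography(src, dst):
          """Direct linear transform from four (or more) point correspondences."""
          rows = []
          for (x, y), (u, v) in zip(src, dst):
              rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
              rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
          _, _, Vt = np.linalg.svd(np.array(rows, float))
          return Vt[-1].reshape(3, 3)          # null vector gives the homography

      def apply_h(H, pts):
          p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
          return p[:, :2] / p[:, 2:]           # back from homogeneous coordinates

      corners_px = [(120, 410), (830, 395), (880, 720), (90, 730)]  # hypothetical
      unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
      H = homography(corners_px, unit_square)
      print(apply_h(H, np.array(corners_px, float)))   # maps onto the unit square

    Stream-outline pixels can then be rectified with the same apply_h call and scaled by the known side length of the frame.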

  13. Cursor Control Device Test Battery

    NASA Technical Reports Server (NTRS)

    Holden, Kritina; Sandor, Aniko; Pace, John; Thompson, Shelby

    2013-01-01

    The test battery was developed to provide a standard procedure for cursor control device evaluation. The software was built in Visual Basic and consists of nine tasks and a main menu that integrates the set-up of the tasks. The tasks can be used individually, or in a series defined in the main menu. Task 1, the Unidirectional Pointing Task, tests the speed and accuracy of clicking on targets. Two rectangles with an adjustable width and adjustable center- to-center distance are presented. The task is to click back and forth between the two rectangles. Clicks outside of the rectangles are recorded as errors. Task 2, Multidirectional Pointing Task, measures speed and accuracy of clicking on targets approached from different angles. Twenty-five numbered squares of adjustable width are arranged around an adjustable diameter circle. The task is to point and click on the numbered squares (placed on opposite sides of the circle) in consecutive order. Clicks outside of the squares are recorded as errors. Task 3, Unidirectional (horizontal) Dragging Task, is similar to dragging a file into a folder on a computer desktop. Task 3 requires dragging a square of adjustable width from one rectangle and dropping it into another. The width of each rectangle is adjustable, as well as the distance between the two rectangles. Dropping the square outside of the rectangles is recorded as an error. Task 4, Unidirectional Path Following, is similar to Task 3. The task is to drag a square through a tunnel consisting of two lines. The size of the square and the width of the tunnel are adjustable. If the square touches any of the lines, it is counted as an error and the task is restarted. Task 5, Text Selection, involves clicking on a Start button, and then moving directly to the underlined portion of the displayed text and highlighting it. The pointing distance to the text is adjustable, as well as the to-be-selected font size and the underlined character length. If the selection does not include all of the underlined characters, or includes non-underlined characters, it is recorded as an error. Task 6, Multi-size and Multi-distance Pointing, presents the participant with 24 consecutively numbered buttons of different sizes (63 to 163 pixels), and at different distances (60 to 80 pixels) from the Start button. The task is to click on the Start button, and then move directly to, and click on, each numbered target button in consecutive order. Clicks outside of the target area are errors. Task 7, Standard Interface Elements Task, involves interacting with standard interface elements as instructed in written procedures, including: drop-down menus, sliders, text boxes, radio buttons, and check boxes. Task completion time is recorded. In Task 8, a circular track is presented with a disc in it at the top. Track width and disc size are adjustable. The task is to move the disc with circular motion within the path without touching the boundaries of the track. Time and errors are recorded. Task 9 is a discrete task that allows evaluation of discrete cursor control devices that tab from target to target, such as a castle switch. The task is to follow a predefined path and to click on the yellow targets along the path.

  14. A Bayesian approach to parameter and reliability estimation in the Poisson distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
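
    A Monte Carlo sketch of this comparison: under a gamma(a, b) prior, the Bayes (posterior-mean) estimator of the Poisson intensity is (a + sum of counts)/(b + n), and its mean-squared error can be compared empirically with that of the maximum likelihood estimator (the sample mean). The hyperparameters below are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(4)
      a, b = 2.0, 1.0            # assumed gamma prior hyperparameters
      n, reps = 10, 20000

      lam = rng.gamma(a, 1.0 / b, size=reps)            # intensities drawn from the prior
      x = rng.poisson(lam[:, None], size=(reps, n))     # Poisson samples per intensity
      mle = x.mean(axis=1)                              # maximum likelihood estimator
      bayes = (a + x.sum(axis=1)) / (b + n)             # posterior mean

      print("MSE of MLE:  ", np.mean((mle - lam) ** 2))
      print("MSE of Bayes:", np.mean((bayes - lam) ** 2))   # smaller, as the paper found

    The Bayes estimator shrinks the sample mean toward the prior mean a/b, which is what reduces its mean-squared error when the prior is well specified.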

  15. Validity Arguments for Diagnostic Assessment Using Automated Writing Evaluation

    ERIC Educational Resources Information Center

    Chapelle, Carol A.; Cotos, Elena; Lee, Jooyoung

    2015-01-01

    Two examples demonstrate an argument-based approach to validation of diagnostic assessment using automated writing evaluation (AWE). "Criterion"®, was developed by Educational Testing Service to analyze students' papers grammatically, providing sentence-level error feedback. An interpretive argument was developed for its use as part of…

  16. Insights into the Earth System mass variability from CSR-RL05 GRACE gravity fields

    NASA Astrophysics Data System (ADS)

    Bettadpur, S.

    2012-04-01

    The next-generation Release-05 GRACE gravity field data products are the result of extensive effort applied to the improvements to the GRACE Level-1 (tracking) data products, and to improvements in the background gravity models and processing methodology. As a result, the squared-error upper-bound in RL05 fields is half or less than the squared-error upper-bound in RL04 fields. The CSR-RL05 field release consists of unconstrained gravity fields as well as a regularized gravity field time-series that can be used for several applications without any post-processing error reduction. This paper will describe the background and the nature of these improvements in the data products, and provide an error characterization. We will describe the insights these new series offer in measuring the mass flux due to diverse Hydrologic, Oceanographic and Cryospheric processes.

  17. Retrieval of the aerosol optical thickness from UV global irradiance measurements

    NASA Astrophysics Data System (ADS)

    Costa, M. J.; Salgueiro, V.; Bortoli, D.; Obregón, M. A.; Antón, M.; Silva, A. M.

    2015-12-01

    UV irradiance has been measured at Évora for several years, where a CIMEL sunphotometer integrated in AERONET is also installed. In the present work, measurements of UVA (315-400 nm) irradiances taken with Kipp&Zonen radiometers, together with satellite data of ozone total column values, are used in combination with radiative transfer calculations to estimate the aerosol optical thickness (AOT) in the UV. The retrieved UV AOT at Évora is compared with AERONET AOT (at 340 and 380 nm) and a fairly good agreement is found, with a root mean square error of 0.05 (normalized root mean square error of 8.3%) and a mean absolute error of 0.04 (mean percentage error of 2.9%). The methodology is then used to estimate the UV AOT at Sines, an industrialized site on the Atlantic western coast, where UV irradiance has been monitored since 2013 but no aerosol information is available.

  18. Uncertainty based pressure reconstruction from velocity measurement with generalized least squares

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacheng; Scalo, Carlo; Vlachos, Pavlos

    2017-11-01

    A method using generalized least-squares reconstruction of the instantaneous pressure field from velocity measurements and velocity uncertainty is introduced and applied to both planar and volumetric flow data. Pressure gradients are computed on a staggered grid from the flow acceleration. The variance-covariance matrix of the pressure gradients is evaluated from the velocity uncertainty by approximating the pressure gradient error as a linear combination of velocity errors. An overdetermined system of linear equations relating the pressure to the computed pressure gradients is formulated and then solved using generalized least squares with the variance-covariance matrix of the pressure gradients. By comparing the reconstructed pressure field against other methods, such as solving the pressure Poisson equation, omni-directional integration, and ordinary least-squares reconstruction, the generalized least-squares method is found to be more robust to noise in the velocity measurement. The improvement in the reconstructed pressure becomes more remarkable as the velocity measurement becomes less accurate and more heteroscedastic. The uncertainty of the reconstructed pressure field is also quantified and compared across the different methods.
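
    A minimal 1-D sketch of the generalized least-squares solve is given below, assuming a discrete gradient operator G, measured gradients g and their covariance R; pinning the arbitrary pressure constant with a rank-one term is an implementation choice made here, not necessarily the authors'.

      import numpy as np

      def gls_pressure(G, g, R):
          W = np.linalg.inv(R)                     # weight by inverse covariance
          A = G.T @ W @ G
          A = A + np.ones_like(A) / A.shape[0]     # pin down the arbitrary constant
          return np.linalg.solve(A, G.T @ W @ g)

      # Toy case: 4 pressure nodes, 3 gradient (difference) equations.
      G = np.array([[-1, 1, 0, 0], [0, -1, 1, 0], [0, 0, -1, 1]], float)
      p_true = np.array([0.0, 1.0, 3.0, 2.0])
      R = np.diag([0.01, 0.04, 0.01])              # heteroscedastic gradient errors
      g = G @ p_true + np.random.default_rng(5).normal(0, np.sqrt(np.diag(R)))
      p = gls_pressure(G, g, R)
      print(p - p.mean(), p_true - p_true.mean())  # agree up to the free constant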

  19. Repeated readings and science: Fluency with expository passages

    NASA Astrophysics Data System (ADS)

    Kostewicz, Douglas E.

    The current study investigated the effects of repeated readings to a fluency criterion (RRFC) for seven students with disabilities using science text. The study employed a single-subject design, specifically two multiple-probe multiple baselines across subjects, to evaluate the effects of the RRFC intervention. Results indicated that students met criterion (200 or more correct words per minute with 2 or fewer errors) on four consecutive passages. A majority of students displayed accelerations in correct words per minute and decelerations in incorrect words per minute on successive initial intervention readings, suggesting reading transfer. Students' reading scores during posttest and maintenance outperformed pre-test and baseline readings, providing additional evidence of reading transfer. Regarding the relationship to comprehension, students scored higher on oral retell measures after meeting criterion than on initial readings. Overall, the research findings suggested that the RRFC intervention improves science reading fluency for students with disabilities, and may also indirectly benefit comprehension.

  20. Validity of BMI-Based Body Fat Equations in Men and Women: A 4-Compartment Model Comparison.

    PubMed

    Nickerson, Brett S; Esco, Michael R; Bishop, Phillip A; Fedewa, Michael V; Snarr, Ronald L; Kliszczewicz, Brian M; Park, Kyung-Shin

    2018-01-01

    Nickerson, BS, Esco, MR, Bishop, PA, Fedewa, MV, Snarr, RL, Kliszczewicz, BM, and Park, K-S. Validity of BMI-based body fat equations in men and women: a 4-compartment model comparison. J Strength Cond Res 32(1): 121-129, 2018-The purpose of this study was to compare body mass index (BMI)-based body fat percentage (BF%) equations and skinfolds with a 4-compartment (4C) model in men and women. One hundred thirty adults (63 women and 67 men) volunteered to participate (age = 23 ± 5 years). BMI was calculated as weight (kg) divided by height (m) squared. BF% was predicted with the BMI-based equations of Jackson et al. (BMIJA), Deurenberg et al. (BMIDE), Gallagher et al. (BMIGA), Zanovec et al. (BMIZA), Womersley and Durnin (BMIWO), and from 7-site skinfolds using the generalized skinfold equation of Jackson et al. (SF7JP). The 4C model BF% was the criterion and derived from underwater weighing for body volume, dual-energy X-ray absorptiometry for bone mineral content, and bioimpedance spectroscopy for total body water. The constant error (CE) was not significantly different for BMIZA compared with the 4C model (p = 0.74, CE = -0.2%). However, BMIJA, BMIDE, BMIGA, and BMIWO produced significantly higher mean values than the 4C model (all p < 0.001, CEs = 1.8-3.2%), whereas SF7JP was significantly lower (p < 0.001, CE = -4.8%). The standard error of estimate ranged from 3.4 (SF7JP) to 6.4% (BMIJA) while the total error varied from 6.0 (SF7JP) to 7.3% (BMIJA). The 95% limits of agreement were the smallest for SF7JP (±7.2%) and widest for BMIJA (±13.5%). Although the BMI-based equations produced similar group mean values as the 4C model, SF7JP produced the smallest individual errors. Therefore, SF7JP is recommended over the BMI-based equations, but practitioners should consider the associated CE.

  1. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problems, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two-observer triangulation problem with range-only measurements.
Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
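
    A sketch of the idea under simplifying assumptions: scale the formal weighted least-squares covariance by the average weighted residual variance, so that the actual residuals, whatever their source, enter the covariance. The m - n normalization used here is one common choice and may differ from the paper's exact 'average form'.

      import numpy as np

      def empirical_covariance(A, W, residuals):
          m, n = A.shape
          formal = np.linalg.inv(A.T @ W @ A)              # traditional covariance
          j_avg = (residuals @ W @ residuals) / (m - n)    # average weighted residual variance
          return j_avg * formal                            # empirical covariance

      rng = np.random.default_rng(6)
      A = rng.normal(size=(100, 3))
      x_true = np.array([1.0, -2.0, 0.5])
      y = A @ x_true + rng.normal(scale=0.3, size=100)     # actual noise exceeds...
      W = np.eye(100) / 0.1**2                             # ...the assumed 0.1 sigma
      x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
      P_emp = empirical_covariance(A, W, y - A @ x_hat)
      print(np.sqrt(np.diag(P_emp)))                       # realistic sigmas near 0.3/10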

  2. Proposing a new iterative learning control algorithm based on a non-linear least square formulation - Minimising draw-in errors

    NASA Astrophysics Data System (ADS)

    Endelt, B.

    2017-09-01

    Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually over time. Thus, an in-process feedback control scheme may not be necessary to stabilize the process, and an alternative approach is to apply an iterative learning algorithm which can learn from previously produced parts, i.e. a self-learning system which gradually reduces the error based on historical process information. What is proposed in this paper is a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-squares error between the current flange geometry and a reference geometry using a non-linear least-squares algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet'08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
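
    A toy sketch of such a part-to-part learning update follows, with a hypothetical linear sensitivity matrix S standing in for the deep-drawing process model; each loop iteration represents one produced part.

      import numpy as np

      rng = np.random.default_rng(7)
      n_pts, n_u = 40, 4                        # flange sample points, control inputs
      S = rng.normal(size=(n_pts, n_u))         # assumed local sensitivity matrix
      u_star = rng.normal(size=n_u)
      ref = S @ u_star                          # reference flange geometry

      def produce_part(u):                      # hypothetical plant: true map + noise
          return S @ u + 0.05 * rng.normal(size=n_pts)

      u = np.zeros(n_u)
      for k in range(8):                        # one learning iteration per part
          e = produce_part(u) - ref             # draw-in error of the current part
          # damped Gauss-Newton step on the least-squares error:
          u = u - np.linalg.solve(S.T @ S + 0.1 * np.eye(n_u), S.T @ e)
          print(k, round(float(np.linalg.norm(e)), 3))   # error shrinks toward noise level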

  3. Online measurement of urea concentration in spent dialysate during hemodialysis.

    PubMed

    Olesberg, Jonathon T; Arnold, Mark A; Flanigan, Michael J

    2004-01-01

    We describe online optical measurements of urea in the effluent dialysate line during regular hemodialysis treatment of several patients. Monitoring urea removal can provide valuable information about dialysis efficiency. Spectral measurements were performed with a Fourier-transform infrared spectrometer equipped with a flow-through cell. Spectra were recorded across the 5000-4000 cm(-1) (2.0-2.5 microm) wavelength range at 1-min intervals. Savitzky-Golay filtering was used to remove baseline variations attributable to the temperature dependence of the water absorption spectrum. Urea concentrations were extracted from the filtered spectra by use of partial least-squares regression and the net analyte signal of urea. Urea concentrations predicted by partial least-squares regression matched concentrations obtained from standard chemical assays with a root mean square error of 0.30 mmol/L (0.84 mg/dL urea nitrogen) over an observed concentration range of 0-11 mmol/L. The root mean square error obtained with the net analyte signal of urea was 0.43 mmol/L with a calibration based only on a set of pure-component spectra. The error decreased to 0.23 mmol/L when a slope and offset correction were used. Urea concentrations can be continuously monitored during hemodialysis by near-infrared spectroscopy. Calibrations based on the net analyte signal of urea are particularly appealing because they do not require a training step, as do statistical multivariate calibration procedures such as partial least-squares regression.
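
    The baseline-removal step can be illustrated as follows on synthetic stand-in spectra; the window length and polynomial order are arbitrary choices, not those of the study.

      import numpy as np
      from scipy.signal import savgol_filter

      wavenumber = np.linspace(5000, 4000, 500)
      baseline = 1e-6 * (wavenumber - 4500) ** 2                 # slow water-band drift
      peak = 0.02 * np.exp(-((wavenumber - 4600) / 15) ** 2)     # sharper analyte band
      spectrum = baseline + peak

      # First-derivative Savitzky-Golay filtering suppresses the slowly varying
      # background while preserving the narrower analyte feature:
      d1 = savgol_filter(spectrum, window_length=25, polyorder=2, deriv=1)
      print(d1[:5])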

  4. Modal energy analysis for mechanical systems excited by spatially correlated loads

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Fei, Qingguo; Li, Yanbin; Wu, Shaoqing; Chen, Qiang

    2018-10-01

    MODal ENergy Analysis (MODENA) is an energy-based method proposed to deal with vibroacoustic problems. The performance of MODENA in the energy analysis of a mechanical system under spatially correlated excitation is investigated. A plate/cavity coupling system excited by a pressure field is studied in a numerical example involving four kinds of pressure fields: a purely random pressure field, a perfectly correlated pressure field, an incident diffuse field, and turbulent boundary layer pressure fluctuation. The total energies of the subsystems differ from the reference solution only in the case of the purely random pressure field, and only for the non-excited subsystem (the cavity). A deeper analysis at the scale of modal energy is further conducted in another numerical example, in which two structural modes excited by correlated forces are coupled with one acoustic mode. A dimensionless correlation strength factor is proposed to quantify the correlation strength between modal forces. Results show that the error in modal energy increases with the correlation strength factor. A criterion is proposed to establish a link between the error and the correlation strength factor: according to the criterion, the error is negligible when the correlation strength is weak, that is, when the correlation strength factor is less than a critical value.

  5. Planning Training Workload in Football Using Small-Sided Games' Density.

    PubMed

    Sangnier, Sebastien; Cotte, Thierry; Brachet, Olivier; Coquart, Jeremy; Tourny, Claire

    2018-05-08

    Sangnier, S, Cotte, T, Brachet, O, Coquart, J, and Tourny, C. Planning training workload in football using small-sided games density. J Strength Cond Res XX(X): 000-000, 2018-Small-sided games' (SSGs) density may be essential for developing physical qualities in soccer. Small-sided games are games in which the pitch size, number of players, and rules differ from those of traditional soccer matches. The purpose was to assess the relation between training workload and SSGs' density. The 33 density conditions (41 practice games and 3 full games) were analyzed through global positioning system (GPS) data collected from 25 professional soccer players (80.7 ± 7.0 kg; 1.83 ± 0.05 m; 26.4 ± 4.9 years). From total distance, metabolic power distance, sprint distance, and acceleration distance, the GPS data were divided into 4 categories: endurance, power, speed, and strength. Statistical analysis compared the relation between GPS values and SSGs' densities, and 3 methods were applied to assess the models (R-squared, root-mean-square error, and Akaike information criterion). The results suggest that all the GPS data match the players' essential athletic skills. They were all correlated with the game's density. Acceleration distance, deceleration distance, metabolic power, and total distance followed a logarithmic regression model, whereas sprint distance and number of sprints followed a linear regression model. The research reveals options for monitoring the training workload. Coaches could anticipate the load resulting from the SSGs and adjust the field size to the number of players. Taking the field size into account during SSGs enables coaches to target the most favorable density for developing the expected physical qualities. Calibrating intensity during SSGs would allow coaches to assess each athletic skill under the same intensity conditions as in competition.
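
    A sketch of the model-comparison step on invented density/distance values: fit linear and logarithmic regressions and compare R-squared, RMSE and AIC (computed here only up to an additive constant).

      import numpy as np

      density = np.array([50, 75, 100, 125, 150, 200, 250, 300], float)   # m^2/player
      distance = np.array([80, 95, 104, 110, 115, 122, 128, 131], float)  # workload proxy

      def fit_and_score(x, y):
          X = np.column_stack([np.ones_like(x), x])
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          n, k = len(y), X.shape[1]
          rss = resid @ resid
          r2 = 1 - rss / ((y - y.mean()) @ (y - y.mean()))
          aic = n * np.log(rss / n) + 2 * k          # up to an additive constant
          return r2, np.sqrt(rss / n), aic

      print("linear:", fit_and_score(density, distance))
      print("log:   ", fit_and_score(np.log(density), distance))   # lower AIC here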

  6. A new algorithm to build bridges between two patient-reported health outcome instruments: the MOS SF-36® and the VR-12 Health Survey.

    PubMed

    Selim, Alfredo; Rogers, William; Qian, Shirley; Rothendler, James A; Kent, Erin E; Kazis, Lewis E

    2018-04-19

    To develop bridging algorithms to score the Veterans Rand-12 (VR-12) scales for comparability to those of the SF-36® for facilitating multi-cohort studies using data from the National Cancer Institute Surveillance, Epidemiology, and End Results Program (SEER) linked to Medicare Health Outcomes Survey (MHOS), and to provide a model for minimizing non-statistical error in pooled analyses stemming from changes to survey instruments over time. Observational study of MHOS cohorts 1-12 (1998-2011). We modeled 2-year follow-up SF-36 scale scores from cohorts 1-6 based on baseline SF-36 scores, age, and gender, yielding 100 clusters using Classification and Regression Trees. Within each cluster, we averaged follow-up SF-36 scores. Using the same cluster specifications, expected follow-up SF-36 scores, based on cohorts 1-6, were computed for cohorts 7-8 (where the VR-12 was the follow-up survey). We created a new criterion validity measure, termed "extensibility," calculated from the square root of the mean square difference between expected SF-36 scale averages and observed VR-12 item score from cohorts 7-8, weighted by cluster size. VR-12 items were rescored to minimize this quantity. Extensibility of rescored VR-12 items and scales was considerably improved from the "simple" scoring method for comparability to the SF-36 scales. The algorithms are appropriate across a wide range of potential subsamples within the MHOS and provide robust application for future studies that span the SF-36 and VR-12 eras. It is possible that these surveys in a different setting outside the MHOS, especially in younger age groups, could produce somewhat different results.

  7. Comparison of Regression Methods to Compute Atmospheric Pressure and Earth Tidal Coefficients in Water Level Associated with Wenchuan Earthquake of 12 May 2008

    NASA Astrophysics Data System (ADS)

    He, Anhua; Singh, Ramesh P.; Sun, Zhaohua; Ye, Qing; Zhao, Gang

    2016-07-01

    The earth tide, atmospheric pressure, precipitation and earthquakes all affect water well levels; earthquakes in particular have produced anomalous co-seismic changes in groundwater levels. In this paper, we have used four different models, simple linear regression (SLR), multiple linear regression (MLR), principal component analysis (PCA) and partial least squares (PLS), to compute the atmospheric pressure and earth tidal effects on water level. Furthermore, we have used the Akaike information criterion (AIC) to study the performance of the various models. Based on the lowest AIC and sum-of-squares-for-error values, the best estimate of the effects of atmospheric pressure and earth tide on water level is obtained with the MLR model. However, the MLR model does not address multicollinearity between the inputs; as a result, its atmospheric pressure and earth tidal response coefficients fail to reflect the mechanisms associated with the groundwater level fluctuations. Among the models that resolve the serious multicollinearity of the inputs, the PLS model shows the minimum AIC value, and its atmospheric pressure and earth tidal response coefficients agree closely with the observations. The atmospheric pressure and earth tidal response coefficients are found to be sensitive to the stress-strain state, based on the observed data for the period 1 April-8 June 2008 of the Chuan 03# well. The transient enhancement of the porosity of the rock mass around the Chuan 03# well associated with the Wenchuan earthquake (Mw = 7.9, 12 May 2008), which returned to its original pre-seismic level after 13 days, indicates that the co-seismic sharp rise of the water well level could be induced by static stress change rather than by the development of new fractures.

  8. Experimenting the hospital survey on patient safety culture in prevention facilities in Italy: psychometric properties.

    PubMed

    Tereanu, Carmen; Smith, Scott A; Sampietro, Giuseppe; Sarnataro, Francesco; Mazzoleni, Giuliana; Pesenti, Bruno; Sala, Luca C; Cecchetti, Roberto; Arvati, Massimo; Brioschi, Dania; Viscardi, Michela; Prati, Chiara; Barbaglio, Giorgio G

    2017-04-01

    The Agency for Healthcare Research and Quality Hospital Survey on Patient Safety Culture (HSOPS) was designed to assess staff views on patient safety culture in hospital. This study examines psychometrics of the Italian translation of the HSOPS for use in territorial prevention facilities. After minimal adjustments and pre-test of the Italian version, a qualitative cross-sectional study was carried out. Departments of Prevention (DPs) of four Local Health Authorities in Northern Italy. Census of medical and non-medical staff (n. 479). Web-based self-administered questionnaire. Descriptive statistics, internal reliability, Confirmatory Factor Analysis (CFA) and intercorrelations among survey composites. Initial CFA of the 12 patient safety culture composites and 42 items included in the original version of the questionnaire revealed that two dimensions (Staffing and Overall Perception of Patient Safety) and nine individual items did not perform well among Italian territorial Prevention staff. After dropping those composites and items, psychometric properties were acceptable (comparative fit index = 0.94; root mean square error of approximation = 0.04; standardized root mean square residual = 0.04). Internal consistency for each remaining composite met or exceeded the criterion 0.70. Intercorrelations were all statistically significant. Psychometric analyses provided overall support for 10 of the 12 initial patient safety culture composites and 33 of the 42 initial composite items. Although the original instrument was intended for US Hospitals, the Italian translation of the HSOPS adapted for use in territorial prevention facilities performed adequately in Italian DPs. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  9. Corrigendum

    NASA Astrophysics Data System (ADS)

    Faghihi, M.; Scheffel, J.

    1988-12-01

    A minor correction, having no major influence on our results, is reported here. The coefficients in the equations of state (16) and (17) are corrected [the revised expressions are not reproduced in this record]. The set of equations (13)-(20) now comprises the correct, linearized and Fourier-decomposed double adiabatic equations in cylindrical geometry. In addition, there is a printing error in (15): a factor b_z should multiply the last term of the left-hand side. Our results are only slightly modified, and the discussion remains unchanged. We wish, however, to point out that the correct stability criterion for isotropic pressure, (26), is also revised [expression not reproduced in this record]; it is the double adiabatic counterpart to the m ≠ 0 Kadomtsev criterion of ideal MHD.

  10. Performance of the Generalized S-X[squared] Item Fit Index for the Graded Response Model

    ERIC Educational Resources Information Center

    Kang, Taehoon; Chen, Troy T.

    2011-01-01

    The utility of Orlando and Thissen's ("2000", "2003") S-X[squared] fit index was extended to the model-fit analysis of the graded response model (GRM). The performance of a modified S-X[squared] in assessing item-fit of the GRM was investigated in light of empirical Type I error rates and power with a simulation study having…

  11. Chemometrics resolution and quantification power evaluation: Application on pharmaceutical quaternary mixture of Paracetamol, Guaifenesin, Phenylephrine and p-aminophenol

    NASA Astrophysics Data System (ADS)

    Yehia, Ali M.; Mohamed, Heba M.

    2016-01-01

    Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly, without any preliminary separation step, and were successfully applied to pharmaceutical formulation analysis, showing no interference from excipients.

  12. An Interactive Computer Package for Use with Simulation Models Which Performs Multidimensional Sensitivity Analysis by Employing the Techniques of Response Surface Methodology.

    DTIC Science & Technology

    1984-12-01

    The sum of squares due to pure error is the total sum of squares at the center points minus the correction factor for the mean at the center points (SS_pe = Y'Y - n1*Ybar^2), where n1 is the number of center points; the lack-of-fit sum of squares is the residual sum of squares minus the pure-error sum of squares (SS_lof = SS_res - SS_pe). The sum of squares due to pure error estimates sigma^2, and the sum of squares due to lack of fit estimates sigma^2 plus a bias term if the fitted model is inadequate. ANOVA for Response Surface Methodology:
      Source       d.f.     SS                 MS
      Regression   p        b'X'Y              b'X'Y/p
      Residual     n-p      Y'Y - b'X'Y        (Y'Y - b'X'Y)/(n-p)
      Pure Error   n1-1     Y'Y - n1*Ybar^2    SS_pe/(n1-1)

  13. RM2: rms error comparisons

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1976-01-01

    The root-mean-square error performance measure is used to compare the relative performance of several widely known source coding algorithms with the RM2 image data compression system. The results demonstrate that RM2 has a uniformly significant performance advantage.

  14. Cervicocephalic kinesthetic sensibility in young and middle-aged adults with or without a history of mild neck pain.

    PubMed

    Teng, C-C; Chai, H; Lai, D-M; Wang, S-F

    2007-02-01

    Previous research has shown that there is no significant relationship between the degree of structural degeneration of the cervical spine and neck pain. We therefore sought to investigate the potential role of sensory dysfunction in chronic neck pain. Cervicocephalic kinesthetic sensibility, expressed by how accurately an individual can reposition the head, was studied in three groups of individuals, a control group of 20 asymptomatic young adults and two groups of middle-aged adults (20 subjects in each group) with or without a history of mild neck pain. An ultrasound-based three-dimensional coordinate measuring system was used to measure the position of the head and to test the accuracy of repositioning. Constant error (indicating that the subject overshot or undershot the intended position) and root mean square errors (representing total errors of accuracy and variability) were measured during repositioning of the head to the neutral head position (Head-to-NHP) and repositioning of the head to the target (Head-to-Target) in three cardinal planes (sagittal, transverse, and frontal). Analysis of covariance (ANCOVA) was used to test the group effect, with age used as a covariate. The constant errors during repositioning from a flexed position and from an extended position to the NHP were significantly greater in the middle-aged subjects than in the control group (beta=0.30 and beta=0.60, respectively; P<0.05 for both). In addition, the root mean square errors during repositioning from a flexed or extended position to the NHP were greater in the middle-aged subjects than in the control group (beta=0.27 and beta=0.49, respectively; P<0.05 for both). The root mean square errors also increased during Head-to-Target in left rotation (beta=0.24;P<0.05), but there was no difference in the constant errors or root mean square errors during Head-to-NHP repositioning from other target positions (P>0.05). The results indicate that, after controlling for age as a covariate, there was no group effect. Thus, age appears to have a profound effect on an individual's ability to accurately reposition the head toward the neutral position in the sagittal plane and repositioning the head toward left rotation. A history of mild chronic neck pain alone had no significant effect on cervicocephalic kinesthetic sensibility.

  15. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  16. Quantitative Modelling of Trace Elements in Hard Coal.

    PubMed

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades, and coal is expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal and coal ash components. The study focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on robust scales were required. These enabled the development of correct Partial Least Squares models characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models constructed. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration in modeling the content of trace elements in coal, and in this way contributes to the development of useful tools for coal quality assessment.

  17. On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard; Felus, Yaron A.

    2008-06-01

    The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model ( Y- E Y = ( X- E X ) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E Y and E X . Two special cases of the MTLS approach include the standard multivariate least-squares approach where only the observation matrix, Y, is perturbed by random errors and, on the other hand, the data least-squares approach where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.

  18. Quantitative Modelling of Trace Elements in Hard Coal

    PubMed Central

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy has remained unquestionable for decades, and coal is expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal and coal ash components. The study focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on robust scales were required. These enabled the development of correct Partial Least Squares models characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models constructed. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration in modeling the content of trace elements in coal, and in this way contributes to the development of useful tools for coal quality assessment. PMID:27438794

  19. The Predictive Validity of the Minnesota Reading Assessment for Students in Postsecondary Vocational Education Programs.

    ERIC Educational Resources Information Center

    Brown, James M.; Chang, Gerald

    1982-01-01

    The predictive validity of the Minnesota Reading Assessment (MRA) when used to project potential performance of postsecondary vocational-technical education students was examined. Findings confirmed the MRA to be a valid predictor, although the error in prediction varied between the criterion variables. (Author/GK)

  20. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  1. Design of vibration isolation systems using multiobjective optimization techniques

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    The design of vibration isolation systems is considered using multicriteria optimization techniques. The integrated values of the square of the force transmitted to the main mass and the square of the relative displacement between the main mass and the base are taken as the performance indices. The design of a three degrees-of-freedom isolation system with an exponentially decaying type of base disturbance is considered for illustration. Numerical results are obtained using the global criterion, utility function, bounded objective, lexicographic, goal programming, goal attainment and game theory methods. It is found that the game theory approach is superior in finding a better optimum solution with proper balance of the various objective functions.

  2. Selection of neural network structure for system error correction of electro-optical tracker system with horizontal gimbal

    NASA Astrophysics Data System (ADS)

    Liu, Xing-fa; Cen, Ming

    2007-12-01

    The neural network system error correction method is more precise than the least squares and spherical harmonics function system error correction methods. The accuracy of the neural network method is mainly determined by the structure of the network. Analysis and simulation prove that both the BP and the RBF neural network system error correction methods achieve high correction accuracy; for small training sample sets, the RBF network method is preferable to the BP network method when training rate and network scale are taken into account.

  3. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    NASA Astrophysics Data System (ADS)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

    The goodness-of-fit indicator, i.e., the efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and the generalized error distribution (GED) with zero mean to fit the distribution of the transformed model errors. The BC-GED model unifies all recent distance-based goodness-of-fit indicators and reveals that the widely used mean square error (MSE) and mean absolute error (MAE) imply the statistical assumptions that model errors follow the Gaussian and the zero-mean Laplace distribution, respectively. Empirical knowledge about goodness-of-fit indicators can also be interpreted easily through the BC-GED model; e.g., the sensitivity to high flow of indicators with a large power of model errors results from the low probability of large model errors under the distribution these indicators assume. In order to assess the effect of the BC-GED parameters (the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and simulation results among the six indicators demonstrates that these indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated by the class β > 1 of distance-based goodness-of-fit indicators captures high flow very well but mimics baseflow very badly, whereas calibration with the class β ≤ 1 mimics baseflow very well, because, first, the larger the value of β, the greater the emphasis put on high flow, and second, the derivative of the GED probability density function at zero is zero for β > 1 but discontinuous for β ≤ 1, and even infinite for β < 1, for which maximum likelihood estimation forces the model errors toward zero as far as possible. The BC-GED approach, which estimates the parameters λ and β together with the hydrologic model parameters, is the best distance-based goodness-of-fit indicator, because not only is the model validation using groundwater levels very good, but the model errors also fulfill the statistical assumptions best. However, in some cases of model calibration with few observations, e.g., calibration of a single-event model, the MAE, i.e., the boundary indicator (β = 1) between the two classes, can replace the BC-GED to avoid estimating the BC-GED parameters, because the model validation of the MAE is best.
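
    A minimal sketch of the BC-GED objective described above, assuming the usual forms of the Box-Cox transform and the zero-mean generalized error distribution; the function and parameter names (boxcox, ged_nll, lam, alpha, beta) are illustrative, not taken from the paper. Setting beta = 2 recovers the Gaussian (MSE-like) assumption and beta = 1 the Laplace (MAE-like) one:

      import numpy as np
      from scipy.special import gammaln

      def boxcox(y, lam):
          # Box-Cox transform; assumes strictly positive y.
          return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

      def ged_nll(errors, alpha, beta):
          # Negative log-likelihood of the zero-mean GED with scale alpha and
          # kurtosis coefficient (power of model errors) beta.
          n = errors.size
          return (n * (np.log(2.0 * alpha) + gammaln(1.0 / beta) - np.log(beta))
                  + np.sum(np.abs(errors / alpha) ** beta))

      def bc_ged_objective(obs, sim, lam, alpha, beta):
          # Transform both series, then score the residuals under the GED.
          e = boxcox(obs, lam) - boxcox(sim, lam)
          return ged_nll(e, alpha, beta)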

  4. Determining particle size and water content by near-infrared spectroscopy in the granulation of naproxen sodium.

    PubMed

    Bär, David; Debus, Heiko; Brzenczek, Sina; Fischer, Wolfgang; Imming, Peter

    2018-03-20

    Near-infrared spectroscopy is frequently used by the pharmaceutical industry to monitor and optimize several production processes. In combination with chemometrics, a mathematical-statistical technique, near-infrared spectroscopy offers the following advantages: it is a fast, non-destructive, non-invasive, and economical analytical method. One of the most advanced and popular chemometric techniques, owing to its applicability in routine use, is the partial least squares algorithm. The required reference analytics enable the analysis of various parameters of interest, for example, moisture content, particle size, and many others. Parameters such as the correlation coefficient, root mean square error of prediction, root mean square error of calibration, and root mean square error of validation have been used to evaluate the applicability and robustness of the analytical methods developed. This study deals with investigating a Naproxen Sodium granulation process using near-infrared spectroscopy and with the development of water-content and particle-size methods. For the water-content method, one should consider a maximum water content of about 21% in the granulation process, which must be confirmed by the loss on drying. Further influences to be considered are the constantly changing product temperature, rising to about 54 °C, the formation of hydrated states of Naproxen Sodium at up to about 21% water content, and the large proportion, about 87%, of Naproxen Sodium in the formulation. A combination of these influences was taken into account in developing the near-infrared spectroscopy method for the water content of Naproxen Sodium granules. The root mean square error was 0.25% for the calibration dataset and 0.30% for the validation dataset, obtained after different stages of optimization by multiplicative scatter correction and the first derivative. Using laser diffraction, the granules were analyzed for particle size, yielding the cumulative sieve fractions >63 μm and >100 μm. The following influences should be considered for application in routine production: constant changes in water content up to 21% and a product temperature up to 54 °C. The different stages of optimization result in a root mean square error of 2.54% for the calibration dataset and 3.53% for the validation set using the Kubelka-Munk conversion and first derivative for the near-infrared spectroscopy method for particle size >63 μm. For the near-infrared spectroscopy method for particle size >100 μm, the root mean square error was 3.47% for the calibration dataset and 4.51% for the validation set, using the same pre-treatments. The robustness and suitability of this methodology have already been demonstrated by its recent successful implementation in a routine granulate production process. Copyright © 2018 Elsevier B.V. All rights reserved.
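
    A hedged sketch of the general calibration workflow (not the study's data or software): a partial least squares model fitted to spectra and scored by the root mean square error on the calibration set (RMSEC) and on a held-out validation set (RMSEP). The synthetic X and y below merely stand in for NIR spectra and loss-on-drying reference values:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 500))                   # placeholder spectra
      y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=100)

      X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
      pls = PLSRegression(n_components=5).fit(X_cal, y_cal)

      def rmse(a, b):
          return np.sqrt(np.mean((np.asarray(a).ravel()
                                  - np.asarray(b).ravel()) ** 2))

      print("RMSEC:", rmse(y_cal, pls.predict(X_cal)))  # calibration error
      print("RMSEP:", rmse(y_val, pls.predict(X_val)))  # validation error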

  5. The Influence of Dimensionality on Estimation in the Partial Credit Model.

    ERIC Educational Resources Information Center

    De Ayala, R. J.

    1995-01-01

    The effect of multidimensionality on partial credit model parameter estimation was studied with noncompensatory and compensatory data. Analysis results, consisting of root mean square error bias, Pearson product-moment correlations, standardized root mean squared differences, standardized differences between means, and descriptive statistics…

  6. Simulated Driving Assessment (SDA) for Teen Drivers: Results from a Validation Study

    PubMed Central

    McDonald, Catherine C.; Kandadai, Venk; Loeb, Helen; Seacrist, Thomas S.; Lee, Yi-Ching; Winston, Zachary; Winston, Flaura K.

    2015-01-01

    Background Driver error and inadequate skill are common critical reasons for novice teen driver crashes, yet few validated, standardized assessments of teen driving skills exist. The purpose of this study was to evaluate the construct and criterion validity of a newly developed Simulated Driving Assessment (SDA) for novice teen drivers. Methods The SDA's 35-minute simulated drive incorporates 22 variations of the most common teen driver crash configurations. Driving performance was compared for 21 inexperienced teens (age 16–17 years, provisional license ≤90 days) and 17 experienced adults (age 25–50 years, license ≥5 years, drove ≥100 miles per week, no collisions or moving violations ≤3 years). SDA driving performance (Error Score) was based on driving safety measures derived from simulator and eye-tracking data. Negative driving outcomes included simulated collisions or run-off-the-road incidents. A professional driving evaluator/instructor reviewed videos of SDA performance (DEI Score). Results The SDA demonstrated construct validity: 1.) Teens had a higher Error Score than adults (30 vs. 13, p=0.02); 2.) For each additional error committed, the relative risk of a participant's propensity for a simulated negative driving outcome increased by 8% (95% CI: 1.05–1.10, p<0.01). The SDA demonstrated criterion validity: Error Score was correlated with DEI Score (r=−0.66, p<0.001). Conclusions This study supports the concept of validated simulated driving tests like the SDA to assess novice driver skill in complex and hazardous driving scenarios. The SDA, as a standard protocol to evaluate teen driver performance, has the potential to facilitate screening and assessment of teen driving readiness and could be used to guide targeted skill training. PMID:25740939

  7. Modeling Water Temperature in the Yakima River, Washington, from Roza Diversion Dam to Prosser Dam, 2005-06

    USGS Publications Warehouse

    Voss, Frank D.; Curran, Christopher A.; Mastin, Mark C.

    2008-01-01

    A mechanistic water-temperature model was constructed by the U.S. Geological Survey for use by the Bureau of Reclamation for studying the effect of potential water management decisions on water temperature in the Yakima River between Roza and Prosser, Washington. Flow and water temperature data for model input were obtained from the Bureau of Reclamation Hydromet database and from measurements collected by the U.S. Geological Survey during field trips in autumn 2005. Shading data for the model were collected by the U.S. Geological Survey in autumn 2006. The model was calibrated with data collected from April 1 through October 31, 2005, and tested with data collected from April 1 through October 31, 2006. Sensitivity analysis results showed that for the parameters tested, daily maximum water temperature was most sensitive to changes in air temperature and solar radiation. Root mean squared error for the five sites used for model calibration ranged from 1.3 to 1.9 degrees Celsius (°C) and mean error ranged from -1.3 to 1.6 °C. The root mean squared error for the five sites used for testing simulation ranged from 1.6 to 2.2 °C and mean error ranged from 0.1 to 1.3 °C. The accuracy of the stream temperatures estimated by the model is limited by four types of error (model error, data error, parameter error, and user error).

  8. The modelling of lead removal from water by deep eutectic solvents functionalized CNTs: artificial neural network (ANN) approach.

    PubMed

    Fiyadh, Seef Saadi; AlSaadi, Mohammed Abdulhakim; AlOmar, Mohamed Khalid; Fayaed, Sabah Saadi; Hama, Ako R; Bee, Sharifah; El-Shafie, Ahmed

    2017-11-01

    The main challenge in the lead removal simulation is the non-linear behaviour of the relationships between the process parameters. The conventional modelling technique usually deals with this problem by a linear method. The substitute modelling technique is an artificial neural network (ANN) system, and it is selected to reflect the non-linearity in the interaction among the variables in the function. Herein, synthesized deep eutectic solvents were used as a functionalized agent with carbon nanotubes as adsorbents of Pb²⁺. Different parameters were used in the adsorption study, including pH (2.7 to 7), adsorbent dosage (5 to 20 mg), contact time (3 to 900 min) and Pb²⁺ initial concentration (3 to 60 mg/l). The number of experimental trials to feed and train the system was 158 runs conducted at laboratory scale. Two ANN types were designed in this work, the feed-forward back-propagation and the layer recurrent; both methods are compared based on their predictive proficiency in terms of the mean square error (MSE), root mean square error, relative root mean square error, mean absolute percentage error and determination coefficient (R²) based on the testing dataset. The ANN model of lead removal was subjected to accuracy determination and the results showed an R² of 0.9956 with an MSE of 1.66 × 10⁻⁴. The maximum relative error is 14.93% for the feed-forward back-propagation neural network model.

  9. Rapid Detection of Volatile Oil in Mentha haplocalyx by Near-Infrared Spectroscopy and Chemometrics.

    PubMed

    Yan, Hui; Guo, Cheng; Shao, Yang; Ouyang, Zhen

    2017-01-01

    Near-infrared spectroscopy combined with partial least squares regression (PLSR) and support vector machine (SVM) was applied for the rapid determination of the volatile oil content in Mentha haplocalyx. The effects of data pre-processing methods on the accuracy of the PLSR calibration models were investigated. The performance of the final model was evaluated according to the correlation coefficient (R) and the root mean square error of prediction (RMSEP). For the PLSR model, the best preprocessing combination was first-order derivative, standard normal variate transformation (SNV), and mean centering, which gave calibration and prediction correlation coefficients of 0.8805 and 0.8719, an RMSEC of 0.091, and an RMSEP of 0.097, respectively. The wavenumber variables linked to volatile oil lie between 5500 and 4000 cm⁻¹, as shown by analyzing the loading weights and variable importance in projection (VIP) scores. For the SVM model, six LVs (fewer than the seven LVs in the PLSR model) were adopted, and the result was better than that of the PLSR model: the calibration and prediction correlation coefficients were 0.9232 and 0.9202, with RMSEC and RMSEP of 0.084 and 0.082, respectively, indicating that the predicted values were accurate and reliable. This work demonstrated that near-infrared reflectance spectroscopy with chemometrics can be used to rapidly determine the main volatile oil content in M. haplocalyx. The quality of a medicine is directly linked to its clinical efficacy; thus, it is important to control the quality of Mentha haplocalyx. Abbreviations used: 1st der: first-order derivative; 2nd der: second-order derivative; LOO: leave-one-out; LVs: latent variables; MC: mean centering; NIR: near-infrared; NIRS: near-infrared spectroscopy; PCR: principal component regression; PLSR: partial least squares regression; RBF: radial basis function; RMSECV: root mean square error of cross-validation; RMSEC: root mean square error of calibration; RMSEP: root mean square error of prediction; SNV: standard normal variate transformation; SVM: support vector machine; VIP: variable importance in projection.

  10. A weighted information criterion for multiple minor components and its adaptive extraction algorithms.

    PubMed

    Gao, Yingbin; Kong, Xiangyu; Zhang, Huihui; Hou, Li'an

    2017-05-01

    Minor components (MCs) play an important role in signal processing and data analysis, so developing MC extraction algorithms is valuable work. Based on the concepts of weighted subspaces and optimum theory, a weighted information criterion is proposed for searching for the optimum solution of a linear neural network. This information criterion exhibits a unique global minimum attained if and only if the state matrix is composed of the desired MCs of the autocorrelation matrix of an input signal. By using the gradient ascent method and the recursive least squares (RLS) method, two algorithms are developed for multiple-MC extraction. The global convergence of the proposed algorithms is also analyzed by the Lyapunov method. The proposed algorithms can extract multiple MCs in parallel and have an advantage in dealing with high-dimension matrices. Since the weighting matrix does not require accurate values, it facilitates the system design of the proposed algorithms for practical applications. The speed and computation advantages of the proposed algorithms are verified through simulations. Copyright © 2017 Elsevier Ltd. All rights reserved.
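
    For orientation, a batch (non-adaptive) stand-in for MC extraction: the minor components of an autocorrelation matrix are its eigenvectors associated with the smallest eigenvalues. This is only the quantity the adaptive algorithms are designed to converge to, not the paper's weighted-information-criterion algorithm itself:

      import numpy as np

      def minor_components(x, k):
          # x: signal matrix with rows as observations; returns the k minor
          # components of the sample autocorrelation matrix.
          R = (x.T @ x) / x.shape[0]            # sample autocorrelation matrix
          eigvals, eigvecs = np.linalg.eigh(R)  # eigh sorts eigenvalues ascending
          return eigvecs[:, :k], eigvals[:k]

      rng = np.random.default_rng(1)
      signal = rng.normal(size=(1000, 6))
      W, lam = minor_components(signal, 2)      # two MCs extracted at once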

  11. Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions

    NASA Astrophysics Data System (ADS)

    Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.

    2017-01-01

    Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.

  12. Unified Bohm criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kos, L.; Tskhakaya, D. D.; Jelić, N.

    2015-09-15

    Recent decades have seen research into the conditions necessary for the formation of the monotonic potential shape in the sheath, appearing at plasma boundaries such as walls, in the fluid and kinetic approximations separately. Although either of these approaches yields a formulation commonly known as the much-acclaimed Bohm criterion (BC), the respective results involve essentially different physical quantities that describe the ion gas behavior. In the fluid approach, such a quantity is clearly identified as the ion directional velocity. In the kinetic approach, the ion behavior is formulated via a quantity (the squared inverse velocity averaged by the ion distribution function) without any clear physical significance, which is, moreover, impractical. In the present paper, we try to explain this difference by deriving a condition called here the Unified Bohm Criterion, which combines an advanced fluid model with an upgraded explicit kinetic formula in a new form of the BC. By introducing a generalized polytropic coefficient function, the unified BC can be interpreted in a form that holds irrespective of whether the ions are described kinetically or in the fluid approximation.

  13. Navy Fuel Composition and Screening Tool (FCAST) v2.8

    DTIC Science & Technology

    2016-05-10

    allowed us to develop partial least squares (PLS) models based on gas chromatography-mass spectrometry (GC-MS) data that predict fuel properties. The ... [index terms: chemometric property modeling; partial least squares (PLS); compositional profiler; Naval Air Systems Command Air-4.4.5; Patuxent River Naval Air Station, Patuxent ...] [abbreviations: cumulative predicted residual error sum of squares; DiEGME, diethylene glycol monomethyl ether; FCAST, Fuel Composition and Screening Tool; FFP, Fit for ...]

  14. Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2000-01-01

    Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)

  15. Latin-square three-dimensional gage master

    DOEpatents

    Jones, L.

    1981-05-12

    A gage master for coordinate measuring machines has an n×n array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.

  16. Latin square three dimensional gage master

    DOEpatents

    Jones, Lynn L.

    1982-01-01

    A gage master for coordinate measuring machines has an n×n array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.

  17. Effect of nonideal square-law detection on static calibration in noise-injection radiometers

    NASA Technical Reports Server (NTRS)

    Hearn, C. P.

    1984-01-01

    The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.

  18. VizieR Online Data Catalog: delta Cep VEGA/CHARA observing log (Nardetto+, 2016)

    NASA Astrophysics Data System (ADS)

    Nardetto, N.; Merand, A.; Mourard, D.; Storm, J.; Gieren, W.; Fouque, P.; Gallenne, A.; Graczyk, D.; Kervella, P.; Neilson, H.; Pietrzynski, G.; Pilecki, B.; Breitfelder, J.; Berio, P.; Challouf, M.; Clausse, J.-M.; Ligi, R.; Mathias, P.; Meilland, A.; Perraut, K.; Poretti, E.; Rainer, M.; Spang, A.; Stee, P.; Tallon-Bosc, I.; Ten Brummelaar, T.

    2016-07-01

    The columns give, respectively, the date, the RJD, the hour angle (HA), the minimum and maximum wavelengths over which the squared visibility is calculated, the projected baseline length Bp and its orientation PA, and the signal-to-noise ratio on the fringe peak; the last column provides the calibrated squared visibility V2 together with the statistical error on V2 and the systematic error on V2 (see text for details). The data are available on the Jean-Marie Mariotti Center OiDB service (available at http://oidb.jmmc.fr). (1 data file).

  19. A network application for modeling a centrifugal compressor performance map

    NASA Astrophysics Data System (ADS)

    Nikiforov, A.; Popova, D.; Soldatova, K.

    2017-08-01

    The approximation of aerodynamic performance of a centrifugal compressor stage and vaneless diffuser by neural networks is presented. Advantages, difficulties and specific features of the method are described. An example of a neural network and its structure is shown. The performances in terms of efficiency, pressure ratio and work coefficient of 39 model stages within the range of flow coefficient from 0.01 to 0.08 were modeled with mean squared error 1.5 %. In addition, the loss and friction coefficients of vaneless diffusers of relative widths 0.014-0.10 are modeled with mean squared error 2.45 %.

  20. Tissue resistivity estimation in the presence of positional and geometrical uncertainties.

    PubMed

    Baysal, U; Eyüboğlu, B M

    2000-08-01

    Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with the results of conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3% compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.
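
    The core of the approach can be sketched with a generic Bayesian linear minimum mean squared error estimator for y = Hx + n with known covariances; the actual MiMSEE adds constraints (geometry, resistivity range, linearization and instrumentation errors, geometrical uncertainties) that are not reproduced here:

      import numpy as np

      def mmse_estimate(y, H, C_x, C_n, x_prior):
          # Optimum inverse matrix G maps measurements to resistivities;
          # a generic sketch, not the paper's exact constraint set.
          G = C_x @ H.T @ np.linalg.inv(H @ C_x @ H.T + C_n)
          return x_prior + G @ (y - H @ x_prior)

      def lse_estimate(y, H):
          # Conventional least squares estimator, for comparison.
          return np.linalg.lstsq(H, y, rcond=None)[0]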

  1. A nonlinear model of gold production in Malaysia

    NASA Astrophysics Data System (ADS)

    Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi

    2014-06-01

    Malaysia is a country rich in natural resources, one of which is gold, and gold has become an important national commodity. This study was conducted to determine a model that fits the gold production in Malaysia from 1995 to 2010. Five nonlinear models are considered: the Logistic, Gompertz, Richards, Weibull, and Chapman-Richards models. These models are used to fit the cumulative gold production in Malaysia, and the best model is then selected based on model performance. The performance of the fitted models is measured by the sum of squares error, root mean square error, coefficient of determination, mean relative error, mean absolute error, and mean absolute percentage error. The study found that the Weibull model significantly outperforms the other models. To confirm that the Weibull model is the best, the latest data were fitted to the model; once again, the Weibull model gave the lowest values on all error measures. We conclude that future gold production in Malaysia can be predicted with the Weibull model, which could be an important finding for Malaysia in planning its economic activities.
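
    A minimal sketch of fitting one candidate curve, assuming the common four-parameter Weibull growth form y(t) = a - b*exp(-c*t**d); the production figures below are placeholders, not the study's data:

      import numpy as np
      from scipy.optimize import curve_fit

      def weibull_growth(t, a, b, c, d):
          # a: asymptote; b, c, d: shape of the rise.
          return a - b * np.exp(-c * t ** d)

      t = np.arange(1, 17)                              # years 1995-2010 as 1..16
      cum_prod = np.cumsum(np.linspace(2.0, 5.0, 16))   # placeholder cumulative data

      p0 = [cum_prod.max(), cum_prod.max(), 0.1, 1.0]   # rough starting values
      popt, _ = curve_fit(weibull_growth, t, cum_prod, p0=p0, maxfev=10000)

      resid = cum_prod - weibull_growth(t, *popt)
      rmse = np.sqrt(np.mean(resid ** 2))               # two of the error measures
      mape = 100 * np.mean(np.abs(resid / cum_prod))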

  2. Comparison of structural and least-squares lines for estimating geologic relations

    USGS Publications Warehouse

    Williams, G.P.; Troutman, B.M.

    1990-01-01

    Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. Number of data points, slope and intercept of the true relation, and variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as error in X becomes small relative to error in Y. Regarding the second goal-predicting the dependent variable-OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
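
    The contrast between the two goals can be made concrete with a small sketch. The structural (errors-in-variables, Deming-type) slope below assumes a known ratio delta of the Y-error variance to the X-error variance; this is one standard formulation and not necessarily the paper's exact SA estimator:

      import numpy as np

      def ols_slope(x, y):
          # Ordinary least squares: best for predicting y from x.
          return np.cov(x, y, bias=True)[0, 1] / np.var(x)

      def structural_slope(x, y, delta=1.0):
          # Errors-in-variables line; delta = var(e_y) / var(e_x).
          # delta = 1 gives orthogonal regression.
          sxx, syy = np.var(x), np.var(y)
          sxy = np.cov(x, y, bias=True)[0, 1]
          d = syy - delta * sxx
          return (d + np.sqrt(d * d + 4.0 * delta * sxy ** 2)) / (2.0 * sxy)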

  3. Ambient Ozone Exposure in Czech Forests: A GIS-Based Approach to Spatial Distribution Assessment

    PubMed Central

    Hůnová, I.; Horálek, J.; Schreiberová, M.; Zapletal, M.

    2012-01-01

    Ambient ozone (O3) is an important phytotoxic pollutant, and detailed knowledge of its spatial distribution is becoming increasingly important. The aim of the paper is to compare different spatial interpolation techniques and to recommend the best approach for producing a reliable map of O3 with respect to its phytotoxic potential. For evaluation we used real-time ambient O3 concentrations measured by UV absorbance at 24 Czech rural sites in the 2007 and 2008 vegetation seasons. We considered eleven approaches to spatial interpolation used for the development of maps of mean vegetation-season O3 concentrations and the AOT40F exposure index for forests. The uncertainty of the maps was assessed by cross-validation analysis, with the root mean square error (RMSE) of the map used as the criterion. Our results indicate that the optimal interpolation approach is linear regression of O3 data on altitude with subsequent interpolation of its residuals by ordinary kriging. The relative uncertainty of the map of mean O3 for the vegetation season is less than 10% with the optimal method for both explored years, which is a very acceptable value. In the case of AOT40F, however, the relative uncertainty of the map is notably worse, reaching nearly 20% in both examined years. PMID:22566757

  4. Neutron Tomography of a Fuel Cell: Statistical Learning Implementation of a Penalized Likelihood Method

    NASA Astrophysics Data System (ADS)

    Coakley, Kevin J.; Vecchia, Dominic F.; Hussey, Daniel S.; Jacobson, David L.

    2013-10-01

    At the NIST Neutron Imaging Facility, we collect neutron projection data for both the dry and wet states of a Proton-Exchange-Membrane (PEM) fuel cell. Transmitted thermal neutrons captured in a scintillator doped with lithium-6 produce scintillation light that is detected by an amorphous silicon detector. Based on joint analysis of the dry and wet state projection data, we reconstruct a residual neutron attenuation image with a Penalized Likelihood method with an edge-preserving Huber penalty function that has two parameters that control how well jumps in the reconstruction are preserved and how well noisy fluctuations are smoothed out. The choice of these parameters greatly influences the resulting reconstruction. We present a data-driven method that objectively selects these parameters, and study its performance for both simulated and experimental data. Before reconstruction, we transform the projection data so that the variance-to-mean ratio is approximately one. For both simulated and measured projection data, the Penalized Likelihood method reconstruction is visually sharper than a reconstruction yielded by a standard Filtered Back Projection method. In an idealized simulation experiment, we demonstrate that the cross validation procedure selects regularization parameters that yield a reconstruction that is nearly optimal according to a root-mean-square prediction error criterion.
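
    For reference, a sketch of the edge-preserving Huber penalty named above: quadratic for small differences (smoothing noise) and linear for large ones (preserving jumps). delta sets the crossover, and the overall regularization weight, the second parameter mentioned, would multiply this penalty in the objective:

      import numpy as np

      def huber_penalty(t, delta):
          # t: differences between neighboring voxels; delta: jump/noise crossover.
          a = np.abs(t)
          return np.where(a <= delta, 0.5 * t ** 2, delta * (a - 0.5 * delta))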

  5. Optimization control of LNG regasification plant using Model Predictive Control

    NASA Astrophysics Data System (ADS)

    Wahid, A.; Adicandra, F. F.

    2018-03-01

    Optimization of a liquefied natural gas (LNG) regasification plant is important to minimize costs, especially operational costs. It is therefore important to choose the optimum LNG regasification plant design and to maintain optimum operating conditions through the implementation of model predictive control (MPC). Optimal tuning parameters for MPC, such as P (prediction horizon), M (control horizon) and T (sampling time), are obtained by a fine-tuning method. The optimality criterion for the design is the minimum amount of energy used, and for control it is the integral of square error (ISE). As a result, the optimum design is scheme 2, developed by Devold, with an energy saving of 40%. To maintain the optimum conditions, MPC requires the following P, M and T: tank storage pressure: 90, 2, 1; product pressure: 95, 2, 1; vaporizer temperature: 65, 2, 2; and heater temperature: 35, 6, 5, with ISE values for set-point tracking of 0.99, 1792.78, 34.89 and 7.54, respectively, i.e., control performance improvements of 4.6%, 63.5%, 3.1% and 58.2% compared to PI controller performance. The energy saving that the MPC controllers can achieve when there is a disturbance of a 1 °C rise in sea water temperature is 0.02 MW.
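
    A minimal sketch of the control criterion used here, the integral of square error of a sampled response against its set point, approximated by trapezoidal integration (variable names are illustrative):

      import numpy as np

      def integral_square_error(t, y, setpoint):
          # ISE over the response horizon for a sampled signal y(t).
          return np.trapz((setpoint - y) ** 2, t)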

  6. Prediction of total organic carbon content in shale reservoir based on a new integrated hybrid neural network and conventional well logging curves

    NASA Astrophysics Data System (ADS)

    Zhu, Linqi; Zhang, Chong; Zhang, Chaomo; Wei, Yang; Zhou, Xueqing; Cheng, Yuan; Huang, Yuyang; Zhang, Le

    2018-06-01

    There is increasing interest in shale gas reservoirs due to their abundant reserves. As a key evaluation criterion, the total organic carbon content (TOC) of a reservoir reflects its hydrocarbon generation potential. Existing TOC calculation models are not very accurate, and there is still room for improvement. In this paper, an integrated hybrid neural network (IHNN) model is proposed for predicting the TOC. It is based on the fact that the TOC information for low-TOC reservoirs, where the TOC is easy to evaluate, comes from a prediction problem, which is an inherent problem of the existing algorithms. Comparing the prediction models established on 132 rock samples from the shale gas reservoir in the Jiaoshiba area shows that the accuracy of the proposed IHNN model is much higher than that of the other prediction models. The mean square error on samples not used in establishing the models was reduced from 0.586 to 0.442. The results show that TOC prediction is easier after the logging prediction has been improved. Furthermore, this paper puts forward the next research direction for the prediction model. The IHNN algorithm can help evaluate the TOC of shale gas reservoirs.

  7. Validation of the short version of the Cognitive Emotion Regulation Questionnaire for Spanish children.

    PubMed

    Orgilés, Mireia; Morales, Alexandra; Fernández-Martínez, Iván; Melero, Silvia; Espada, José P

    2018-01-01

    This study aimed to validate a short version of the Cognitive Emotion Regulation Questionnaire for Spanish kids (CERQ-Sk) based on the 18-item version available for adults. A sample of 654 children aged 7-12 years completed the CERQ-Sk and tests for depression and anxiety measures. Confirmatory factor analysis supported the 18-item version and the original nine-factor structure, which includes self-blame, acceptance, rumination, positive refocusing, refocus on planning, positive reappraisal, putting into perspective, catastrophizing, and other-blame (comparative fit index = .99, Tucker-Lewis index = .98, root mean square error of approximation = .02). Internal consistency was adequate (ordinal α = .80), and the eight-week stability of this version was moderate (intraclass correlation = .69). Criterion validity was supported by correlations among self-blame, rumination, and catastrophizing (positive) and among positive reappraisal and depression and anxiety symptoms (negative). Results suggest that the short version of the CERQ-Sk is a valid and reliable instrument for assessing these cognitive emotion regulation strategies during the middle childhood developmental period. Clinicians and researchers will benefit from this briefer acceptable version when time is not available for the 36-item version. This study offers preliminary results for the first short version of the CERQ for children.

  8. Wavefront Control Toolbox for James Webb Space Telescope Testbed

    NASA Technical Reports Server (NTRS)

    Shiri, Ron; Aronstein, David L.; Smith, Jeffery Scott; Dean, Bruce H.; Sabatke, Erin

    2007-01-01

    We have developed a Matlab toolbox for wavefront control of optical systems. We have applied this toolbox to optical models of the James Webb Space Telescope (JWST) in general and to the JWST Testbed Telescope (TBT) in particular, implementing both unconstrained and constrained wavefront optimization to correct for possible misalignments of the segmented primary mirror or the monolithic secondary mirror. The optical models are implemented in the Zemax optical design program, and information is exchanged between Matlab and Zemax via the Dynamic Data Exchange (DDE) interface. The model configuration is managed using the XML protocol. The optimization algorithm uses influence functions for each adjustable degree of freedom of the optical model. Iterative and non-iterative algorithms have been developed that converge to a local minimum of the root-mean-square (rms) wavefront error using the singular value decomposition of the control matrix of influence functions. The toolkit is highly modular and allows the user to choose control strategies for the degrees of freedom to be adjusted on a given iteration and the wavefront convergence criterion. As the influence functions are nonlinear over the control parameter space, the toolkit also allows trade-offs between the frequency of updating the local influence functions and execution speed. The functionality of the toolbox and the validity of the underlying algorithms have been verified through extensive simulations.
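
    A minimal sketch of the SVD step described above, assuming a linearized influence matrix A that maps actuator or alignment moves to wavefront changes; the function name and the truncation threshold rcond are illustrative, not the toolbox's API:

      import numpy as np

      def svd_correction(A, wavefront, rcond=1e-3):
          # Least-squares update that minimizes the rms wavefront error:
          # delta = -pinv(A) @ wavefront, via a truncated SVD.
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)  # drop weak modes
          return -(Vt.T * s_inv) @ (U.T @ wavefront)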

  9. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
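
    For the linear (polynomial) case discussed, the standard errors of the fitted parameters and of the fitted function can be sketched directly with NumPy; the degree-1 example below is illustrative:

      import numpy as np

      rng = np.random.default_rng(4)
      x = np.linspace(0, 10, 50)
      y = 2.0 + 0.5 * x + rng.normal(0, 0.2, x.size)

      # cov=True returns the parameter covariance matrix scaled by the residuals.
      coeffs, cov = np.polyfit(x, y, deg=1, cov=True)
      param_se = np.sqrt(np.diag(cov))        # standard errors of the parameters

      # Standard error of the fitted function at each x: diag(J C J^T).
      J = np.vander(x, 2)                     # Jacobian of a degree-1 polynomial
      fit_se = np.sqrt(np.einsum('ij,jk,ik->i', J, cov, J))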

  10. Methods of automatic nucleotide-sequence analysis. Multicomponent spectrophotometric analysis of mixtures of nucleic acid components by a least-squares procedure

    PubMed Central

    Lee, Sheila; McMullen, D.; Brown, G. L.; Stokes, A. R.

    1965-01-01

    1. A theoretical analysis of the errors in multicomponent spectrophotometric analysis of nucleoside mixtures, by a least-squares procedure, has been made to obtain an expression for the error coefficient, relating the error in calculated concentration to the error in extinction measurements. 2. The error coefficients, which depend only on the `library' of spectra used to fit the experimental curves, have been computed for a number of `libraries' containing the following nucleosides found in s-RNA: adenosine, guanosine, cytidine, uridine, 5-ribosyluracil, 7-methylguanosine, 6-dimethylaminopurine riboside, 6-methylaminopurine riboside and thymine riboside. 3. The error coefficients have been used to determine the best conditions for maximum accuracy in the determination of the compositions of nucleoside mixtures. 4. Experimental determinations of the compositions of nucleoside mixtures have been made and the errors found to be consistent with those predicted by the theoretical analysis. 5. It has been demonstrated that, with certain precautions, the multicomponent spectrophotometric method described is suitable as a basis for automatic nucleotide-composition analysis of oligonucleotides containing nine nucleotides. Used in conjunction with continuous chromatography and flow chemical techniques, this method can be applied to the study of the sequence of s-RNA. PMID:14346087
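
    A minimal sketch of the multicomponent analysis step, assuming Beer-Lambert additivity: the mixture absorbance is a linear combination of library spectra, so concentrations follow from a least-squares solve, and the error coefficients depend only on the library. The four-component library below is a random placeholder, not nucleoside data:

      import numpy as np

      rng = np.random.default_rng(2)
      E = np.abs(rng.normal(size=(40, 4)))      # library: rows = wavelengths
      c_true = np.array([1.0, 0.5, 0.2, 0.8])   # placeholder concentrations
      a = E @ c_true + rng.normal(0, 0.01, 40)  # measured mixture absorbance

      c_hat, *_ = np.linalg.lstsq(E, a, rcond=None)   # estimated concentrations
      # Error coefficients depend only on the library: the variance of c_hat
      # scales with the diagonal of inv(E'E).
      err_coeff = np.sqrt(np.diag(np.linalg.inv(E.T @ E)))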

  11. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.

  12. Chemometrics resolution and quantification power evaluation: Application on pharmaceutical quaternary mixture of Paracetamol, Guaifenesin, Phenylephrine and p-aminophenol.

    PubMed

    Yehia, Ali M; Mohamed, Heba M

    2016-01-05

    Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods can be used directly without any preliminary separation step and were successfully applied to pharmaceutical formulation analysis, showing no interference from excipients. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. CARS Spectral Fitting with Multiple Resonant Species using Sparse Libraries

    NASA Technical Reports Server (NTRS)

    Cutler, Andrew D.; Magnotti, Gaetano

    2010-01-01

    The dual pump CARS technique is often used in the study of turbulent flames. Fast and accurate algorithms are needed for fitting dual-pump CARS spectra for temperature and multiple chemical species. This paper describes the development of such an algorithm. The algorithm employs sparse libraries, whose size grows much more slowly with number of species than a conventional library. The method was demonstrated by fitting synthetic "experimental" spectra containing 4 resonant species (N2, O2, H2 and CO2), both with noise and without it, and by fitting experimental spectra from a H2-air flame produced by a Hencken burner. In both studies, weighted least squares fitting of signal, as opposed to least squares fitting signal or square-root signal, was shown to produce the least random error and minimize bias error in the fitted parameters.
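
    A sketch of the weighting idea on a toy spectrum (a single Gaussian line standing in for a CARS spectrum): dividing each residual by that channel's noise standard deviation is what weighted least squares fitting of the signal amounts to. The model, noise law, and parameter names are illustrative:

      import numpy as np
      from scipy.optimize import least_squares

      x = np.linspace(-5, 5, 200)

      def model(theta):
          amp, width = theta                 # stand-ins for fit parameters
          return amp * np.exp(-(x / width) ** 2)

      rng = np.random.default_rng(3)
      true = np.array([10.0, 1.5])
      sigma = 0.1 + 0.05 * model(true)       # signal-dependent noise level
      signal = model(true) + rng.normal(0, sigma)

      # Weighted residuals down-weight noisy channels and reduce bias.
      fit = least_squares(lambda th: (signal - model(th)) / sigma, x0=[5.0, 1.0])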

  14. Nondestructive quantification of the soluble-solids content and the available acidity of apples by Fourier-transform near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Ying, Yibin; Liu, Yande; Tao, Yang

    2005-09-01

    This research evaluated the feasibility of using Fourier-transform near-infrared (FT-NIR) spectroscopy to quantify the soluble-solids content (SSC) and the available acidity (VA) in intact apples. Partial least-squares calibration models obtained from several preprocessing techniques (smoothing, derivative, etc.) in several wave-number ranges were compared. The best models had a high coefficient of determination (r) of 0.940 for the SSC and a moderate r of 0.801 for the VA, root-mean-square errors of prediction of 0.272% and 0.053%, and root-mean-square errors of calibration of 0.261% and 0.046%, respectively. The results indicate that FT-NIR spectroscopy yields good predictions of the SSC and also show the feasibility of using it to predict the VA of apples.

  15. Rovibrational spectra of ammonia. I. Unprecedented accuracy of a potential energy surface used with nonadiabatic corrections.

    PubMed

    Huang, Xinchuan; Schwenke, David W; Lee, Timothy J

    2011-01-28

    In this work, we build upon our previous work on the theoretical spectroscopy of ammonia, NH₃. Compared to our 2008 study, we include more physics in our rovibrational calculations and more experimental data in the refinement procedure, and these enable us to produce a potential energy surface (PES) of unprecedented accuracy. We call this the HSL-2 PES. The additional physics we include is a second-order correction for the breakdown of the Born-Oppenheimer approximation, and we find it to be critical for improved results. By including experimental data for higher rotational levels in the refinement procedure, we were able to greatly reduce our systematic errors for the rotational dependence of our predictions. These additions together lead to a significantly improved total angular momentum (J) dependence in our computed rovibrational energies. The root-mean-square error between our predictions using the HSL-2 PES and the reliable energy levels from the HITRAN database for J = 0-6 and J = 7/8 for ¹⁴NH₃ is only 0.015 cm⁻¹ and 0.020/0.023 cm⁻¹, respectively. The root-mean-square errors for the characteristic inversion splittings are approximately 1/3 smaller than those for energy levels. The root-mean-square error for the 6002 J = 0-8 transition energies is 0.020 cm⁻¹. Overall, for J = 0-8, the spectroscopic data computed with HSL-2 is roughly an order of magnitude more accurate relative to our previous best ammonia PES (denoted HSL-1). These impressive numbers are eclipsed only by the root-mean-square error between our predictions for purely rotational transition energies of ¹⁵NH₃ and the highly accurate Cologne database (CDMS): 0.00034 cm⁻¹ (10 MHz), in other words, 2 orders of magnitude smaller. In addition, we identify a deficiency in the ¹⁵NH₃ energy levels determined from a model of the experimental data.

  16. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    NASA Astrophysics Data System (ADS)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence on the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51 and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated noise of observations are decreased by 39 and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is reduced by only about 4%, which is a consequence of the sparse distribution of data points for this case study. The influence of gross errors on LSC prediction results is also investigated via lower cutoff CVEs. After elimination of outliers, the RMS of this type of error is also reduced, by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for the groups with fewer noisy data points.
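
    The flavor of a direct cross-validation-error formula can be conveyed with the ordinary linear-regression analogue (not the paper's collocation-specific derivation): leave-one-out errors follow from the ordinary residuals and the hat matrix in a single pass, with no refitting:

      import numpy as np

      def loo_cv_errors(X, y):
          # e_i = r_i / (1 - h_ii): leave-one-out residuals of a linear
          # least-squares fit, computed without n separate refits.
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          r = y - X @ beta                            # ordinary residuals
          H = X @ np.linalg.solve(X.T @ X, X.T)       # hat (influence) matrix
          return r / (1.0 - np.diag(H))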

  17. Ambiguity resolution for satellite Doppler positioning systems

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Marini, J.

    1979-01-01

    The implementation of satellite-based Doppler positioning systems frequently requires the recovery of transmitter position from a single pass of Doppler data. The least-squares approach to the problem yields conjugate solutions on either side of the satellite subtrack. It is important to develop a procedure for choosing the proper solution which is correct in a high percentage of cases. A test for ambiguity resolution which is the most powerful in the sense that it maximizes the probability of a correct decision is derived. When systematic error sources are properly included in the least-squares reduction process to yield an optimal solution the test reduces to choosing the solution which provides the smaller valuation of the least-squares loss function. When systematic error sources are ignored in the least-squares reduction, the most powerful test is a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudoinverse of a reduced-rank square matrix. A formula for computing the power of the most powerful test is provided. Numerical examples are included in which the power of the test is computed for situations that are relevant to the design of a satellite-aided search and rescue system.

  18. Forecast of dengue incidence using temperature and rainfall.

    PubMed

    Hii, Yien Ling; Zhu, Huaiping; Ng, Nawi; Ng, Lee Ching; Rocklöv, Joacim

    2012-01-01

    An accurate early warning system to predict impending epidemics enhances the effectiveness of preventive measures against dengue fever. The aim of this study was to develop and validate a forecasting model that could predict dengue cases and provide timely early warning in Singapore. We developed a time series Poisson multivariate regression model using weekly mean temperature and cumulative rainfall over the period 2000-2010. Weather data were modeled using piecewise linear spline functions. We analyzed various lag times between dengue and the weather variables to identify the optimal dengue forecasting period. Autoregression, seasonality, and trend were considered in the model. We validated the model by forecasting dengue cases from week 1 of 2011 up to week 16 of 2012 using weather data alone. Model selection and validation were based on Akaike's Information Criterion, the standardized Root Mean Square Error, and residual diagnostics. A Receiver Operating Characteristic curve was used to analyze the sensitivity of the forecast of epidemics. The optimal period for dengue forecasting was 16 weeks. Our model forecasted correctly with errors of 0.3 and 0.32 of the standard deviation of reported cases during the model training and validation periods, respectively. It was sensitive enough to distinguish between outbreak and non-outbreak periods with a sensitivity of 96% (CI = 93-98%) in 2004-2010 and 98% (CI = 95-100%) in 2011. The model predicted the outbreak in 2011 accurately, with less than a 3% probability of false alarm. We have developed a weather-based dengue forecasting model that gives warning 16 weeks in advance of dengue epidemics with high sensitivity and specificity. We demonstrate that models using temperature and rainfall can be simple, precise, and low-cost tools for dengue forecasting, which could be used to enhance decision making on the timing and scale of vector control operations and on the utilization of limited resources.
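
    As a rough illustration of the model structure described above (lagged weather covariates entering a Poisson regression through piecewise linear splines), the following sketch uses statsmodels; the 16-week lag matches the reported forecast horizon, while the knot locations, variable names, and the omission of the autoregressive, seasonal, and trend terms are simplifying assumptions.

        import numpy as np
        import statsmodels.api as sm

        def hinge(x, knot):
            # Piecewise linear spline term: zero below the knot, linear above.
            return np.maximum(x - knot, 0.0)

        def fit_dengue_model(cases, temp, rain, lag=16, knot_t=28.0, knot_r=50.0):
            """Poisson regression of weekly dengue cases on weather lagged
            by 'lag' weeks (hypothetical aligned arrays of equal length)."""
            y = cases[lag:]
            t, r = temp[:-lag], rain[:-lag]
            X = sm.add_constant(np.column_stack(
                [t, hinge(t, knot_t), r, hinge(r, knot_r)]))
            return sm.GLM(y, X, family=sm.families.Poisson()).fit()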

  19. System Design for Nano-Network Communications

    NASA Astrophysics Data System (ADS)

    ShahMohammadian, Hoda

    The potential applications of nanotechnology in a wide range of areas necessitate nano-networking research. Nano-networking is a new type of networking which has emerged from applying nanotechnology to communication theory. This dissertation therefore presents a framework for physical layer communications in a nano-network and addresses some of the pressing unsolved challenges in designing a molecular communication system. The contribution of this dissertation is proposing well-justified models for signal propagation, noise sources, optimum receiver design, and synchronization in molecular communication channels. The design of any communication system is primarily based on the signal propagation channel and noise models. Using Brownian motion and advection molecular statistics, separate signal propagation and noise models are presented for diffusion-based and flow-based molecular communication channels. It is shown that the corrupting noise of molecular channels is uncorrelated and non-stationary with a signal-dependent magnitude. The next key component of any communication system is the reception and detection process. This dissertation provides a detailed analysis of the effect of the ligand-receptor binding mechanism on the received signal, and develops the first optimal receiver design for molecular communications. The bit error rate performance of the proposed receiver is evaluated and the impact of medium motion on the receiver performance is investigated. Another important feature of any communication system is synchronization. In this dissertation, the first blind synchronization algorithm is presented for molecular communication channels. The proposed algorithm uses a non-decision-directed maximum likelihood criterion for estimating the channel delay. The Cramer-Rao lower bound is also derived and the performance of the proposed synchronization algorithm is evaluated by investigating its mean square error.

  20. Accuracy and optimal timing of activity measurements in estimating the absorbed dose of radioiodine in the treatment of Graves' disease

    NASA Astrophysics Data System (ADS)

    Merrill, S.; Horowitz, J.; Traino, A. C.; Chipkin, S. R.; Hollot, C. V.; Chait, Y.

    2011-02-01

    Calculation of the therapeutic activity of radioiodine 131I for individualized dosimetry in the treatment of Graves' disease requires an accurate estimate of the thyroid absorbed radiation dose based on a tracer activity administration of 131I. Common approaches (Marinelli-Quimby formula, MIRD algorithm) use, respectively, the effective half-life of radioiodine in the thyroid and the time-integrated activity. Many physicians perform one, two, or at most three tracer dose activity measurements at various times and calculate the required therapeutic activity by ad hoc methods. In this paper, we study the accuracy of estimates of four 'target variables': the time-integrated activity coefficient, the time of maximum activity, the maximum activity, and the effective half-life in the gland. Clinical data from 41 patients who underwent 131I therapy for Graves' disease at the University Hospital in Pisa, Italy, are used for the analysis. The radioiodine kinetics are described using a nonlinear mixed-effects model. The distributions of the target variables in the patient population are characterized. Using minimum root mean squared error as the criterion, optimal 1-, 2-, and 3-point sampling schedules are determined for estimation of the target variables, and probabilistic bounds are given for the errors under the optimal times. An algorithm is developed for computing the optimal 1-, 2-, and 3-point sampling schedules for the target variables. This algorithm is implemented in a freely available software tool. Taking into consideration the 131I effective half-life in the thyroid and measurement noise, the optimal 1-point schedule for the time-integrated activity coefficient is a single measurement 1 week after the tracer dose. Additional measurements give only a slight improvement in accuracy.

  1. The Effects of Longitudinal Control-System Dynamics on Pilot Opinion and Response Characteristics as Determined from Flight Tests and from Ground Simulator Studies

    NASA Technical Reports Server (NTRS)

    Sadoff, Melvin

    1958-01-01

    The results of a fixed-base simulator study of the effects of variable longitudinal control-system dynamics on pilot opinion are presented and compared with flight-test data. The control-system variables considered in this investigation included stick force per g, time constant, and dead-band, or stabilizer breakout force. In general, the fairly good correlation between flight and simulator results for two pilots demonstrates the validity of fixed-base simulator studies which are designed to complement and supplement flight studies and serve as a guide in control-system preliminary design. However, in the investigation of certain problem areas (e.g., sensitive control-system configurations associated with pilot-induced oscillations in flight), fixed-base simulator results did not predict the occurrence of an instability, although the pilots noted the system was extremely sensitive and unsatisfactory. If it is desired to predict pilot-induced-oscillation tendencies, tests in moving-base simulators may be required. It was found possible to represent the human pilot by a linear pilot analog for the tracking task assumed in the present study. The criterion used to adjust the pilot analog was the root-mean-square tracking error of one of the human pilots on the fixed-base simulator. Matching the tracking error of the pilot analog to that of the human pilot gave an approximation to the variation of human-pilot behavior over a range of control-system dynamics. Results of the pilot-analog study indicated that both for optimized control-system dynamics (for poor airplane dynamics) and for a region of good airplane dynamics, the pilot response characteristics are approximately the same.

  2. Comparison of random regression models with Legendre polynomials and linear splines for production traits and somatic cell score of Canadian Holstein cows.

    PubMed

    Bohmanova, J; Miglior, F; Jamrozik, J; Misztal, I; Sullivan, P G

    2008-09-01

    A random regression model with both random and fixed regressions fitted by Legendre polynomials of order 4 was compared with 3 alternative models fitting linear splines with 4, 5, or 6 knots. The effects common for all models were a herd-test-date effect, fixed regressions on days in milk (DIM) nested within region-age-season of calving class, and random regressions for additive genetic and permanent environmental effects. Data were test-day milk, fat and protein yields, and SCS recorded from 5 to 365 DIM during the first 3 lactations of Canadian Holstein cows. A random sample of 50 herds consisting of 96,756 test-day records was generated to estimate variance components within a Bayesian framework via Gibbs sampling. Two sets of genetic evaluations were subsequently carried out to investigate performance of the 4 models. Models were compared by graphical inspection of variance functions, goodness of fit, error of prediction of breeding values, and stability of estimated breeding values. Models with splines gave lower estimates of variances at extremes of lactations than the model with Legendre polynomials. Differences among models in goodness of fit measured by percentages of squared bias, correlations between predicted and observed records, and residual variances were small. The deviance information criterion favored the spline model with 6 knots. Smaller error of prediction and higher stability of estimated breeding values were achieved by using spline models with 5 and 6 knots compared with the model with Legendre polynomials. In general, the spline model with 6 knots had the best overall performance based upon the considered model comparison criteria.
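
    The random-regression covariates compared above are Legendre polynomials evaluated on standardized days in milk; a minimal sketch of that basis construction (unnormalized polynomials, illustrative function names) follows.

        import numpy as np
        from numpy.polynomial import legendre as L

        def legendre_basis(dim, order=4, dim_min=5, dim_max=365):
            """Evaluate Legendre polynomials P_0..P_order at days in milk
            (DIM) standardized to [-1, 1]; the normalization constants that
            breeding applications often add are omitted here."""
            x = 2.0 * (np.asarray(dim, dtype=float) - dim_min) \
                / (dim_max - dim_min) - 1.0
            return np.column_stack([L.legval(x, np.eye(order + 1)[j])
                                    for j in range(order + 1)])

        # Example: covariate row for a test day at DIM 100
        # phi = legendre_basis([100])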

  3. Using factorial experimental design to evaluate the separation of plastics by froth flotation.

    PubMed

    Salerno, Davide; Jordão, Helga; La Marca, Floriana; Carvalho, M Teresa

    2018-03-01

    This paper proposes the use of factorial experimental design as a standard experimental method in the application of froth flotation to plastic separation, instead of the commonly used OVAT method (manipulation of one variable at a time). Furthermore, as is common practice in minerals flotation, the parameters of the kinetic model were used as process responses rather than the recovery of plastics in the separation products. To explain and illustrate the proposed methodology, a set of 32 experimental tests was performed using mixtures of two polymers with approximately the same density, PVC and PS (with mineral charges), with particle size ranging from 2 to 4 mm. The manipulated variables were frother concentration, air flow rate, and pH. A three-level full factorial design was conducted. Models establishing the relationships between the manipulated variables and their interactions with the responses (first-order kinetic model parameters) were built. The Corrected Akaike Information Criterion was used to select the best fit model and an analysis of variance (ANOVA) was conducted to identify the statistically significant terms of the model. It was shown that froth flotation can be used to efficiently separate PVC from PS with mineral charges by reducing the floatability of PVC, which largely depends on the action of pH. Within the tested interval, this is the factor that most affects the flotation rate constants. The results obtained show that the pure error may be of the same magnitude as the sum of squares of the errors, suggesting that there is significant variability under the same experimental conditions. Thus, special care is needed when evaluating and generalizing the process. Copyright © 2017 Elsevier Ltd. All rights reserved.
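
    For a least-squares fit with Gaussian errors, the Corrected Akaike Information Criterion used above can be computed from the residual sum of squares; a minimal sketch (additive constants that cancel in model comparison are dropped):

        import numpy as np

        def aicc(rss, n, k):
            """AICc = n*ln(RSS/n) + 2k + 2k(k+1)/(n-k-1) for n observations
            and k fitted parameters (including the intercept); the model
            with the smallest AICc is preferred."""
            aic = n * np.log(rss / n) + 2 * k
            return aic + 2.0 * k * (k + 1) / (n - k - 1)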

  4. A hybrid artificial neural network and particle swarm optimization for prediction of removal of hazardous dye brilliant green from aqueous solution using zinc sulfide nanoparticle loaded on activated carbon.

    PubMed

    Ghaedi, M; Ansari, A; Bahari, F; Ghaedi, A M; Vafaei, A

    2015-02-25

    In the present study, zinc sulfide nanoparticles loaded on activated carbon (ZnS-NP-AC) were synthesized simply in the presence of ultrasound and characterized using different techniques such as SEM and BET analysis. This material was then used for brilliant green (BG) removal. The dependency of the BG removal percentage on various parameters, including pH, adsorbent dosage, initial dye concentration, and contact time, was examined and optimized. The mechanism and rate of adsorption were ascertained by fitting experimental data at various times to conventional kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich, and intraparticle diffusion models. Comparison according to general criteria such as the relative error in adsorption capacity and the correlation coefficient confirms the suitability of the pseudo-second-order kinetic model for explaining the data. The Langmuir model efficiently explains the behavior of the adsorption system and gives full information about the interaction of BG with ZnS-NP-AC. A multiple linear regression (MLR) model and a hybrid artificial neural network and particle swarm optimization (ANN-PSO) model were used for prediction of brilliant green adsorption onto ZnS-NP-AC. Comparison of the results obtained using the offered models confirms the higher ability of the ANN-PSO model compared to the MLR model for prediction of BG adsorption onto ZnS-NP-AC. Using the optimal ANN-PSO model, the coefficient of determination (R(2)) was 0.9610 and 0.9506 and the mean squared error (MSE) values were 0.0020 and 0.0022 for the training and testing data sets, respectively. Copyright © 2014 Elsevier B.V. All rights reserved.
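
    The pseudo-second-order kinetic model favored above has the closed form q(t) = k2*qe^2*t / (1 + k2*qe*t); a minimal fitting sketch with hypothetical contact-time data follows.

        import numpy as np
        from scipy.optimize import curve_fit

        def pso_kinetics(t, qe, k2):
            # Pseudo-second-order adsorption kinetics.
            return k2 * qe**2 * t / (1.0 + k2 * qe * t)

        # Hypothetical contact times (min) and adsorbed amounts q_t (mg/g).
        t = np.array([1, 2, 5, 10, 20, 40, 60], dtype=float)
        q = np.array([12.0, 18.5, 27.0, 33.0, 37.5, 39.8, 40.5])

        (qe, k2), _ = curve_fit(pso_kinetics, t, q, p0=[q.max(), 0.01])
        # Relative error in adsorption capacity, one of the criteria above.
        rel_err = np.mean(np.abs(q - pso_kinetics(t, qe, k2)) / q) * 100.0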

  5. A hybrid artificial neural network and particle swarm optimization for prediction of removal of hazardous dye brilliant green from aqueous solution using zinc sulfide nanoparticle loaded on activated carbon

    NASA Astrophysics Data System (ADS)

    Ghaedi, M.; Ansari, A.; Bahari, F.; Ghaedi, A. M.; Vafaei, A.

    2015-02-01

    In the present study, zinc sulfide nanoparticles loaded on activated carbon (ZnS-NP-AC) were synthesized simply in the presence of ultrasound and characterized using different techniques such as SEM and BET analysis. This material was then used for brilliant green (BG) removal. The dependency of the BG removal percentage on various parameters, including pH, adsorbent dosage, initial dye concentration, and contact time, was examined and optimized. The mechanism and rate of adsorption were ascertained by fitting experimental data at various times to conventional kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich, and intraparticle diffusion models. Comparison according to general criteria such as the relative error in adsorption capacity and the correlation coefficient confirms the suitability of the pseudo-second-order kinetic model for explaining the data. The Langmuir model efficiently explains the behavior of the adsorption system and gives full information about the interaction of BG with ZnS-NP-AC. A multiple linear regression (MLR) model and a hybrid artificial neural network and particle swarm optimization (ANN-PSO) model were used for prediction of brilliant green adsorption onto ZnS-NP-AC. Comparison of the results obtained using the offered models confirms the higher ability of the ANN-PSO model compared to the MLR model for prediction of BG adsorption onto ZnS-NP-AC. Using the optimal ANN-PSO model, the coefficient of determination (R2) was 0.9610 and 0.9506 and the mean squared error (MSE) values were 0.0020 and 0.0022 for the training and testing data sets, respectively.

  6. Red Wine Age Estimation by the Alteration of Its Color Parameters: Fourier Transform Infrared Spectroscopy as a Tool to Monitor Wine Maturation Time

    PubMed Central

    Basalekou, M.; Pappas, C.; Kotseridis, Y.; Tarantilis, P. A.; Kontaxakis, E.

    2017-01-01

    Color, phenolic content, and chemical age values of red wines made from Cretan grape varieties (Kotsifali, Mandilari) were evaluated over nine months of maturation in different containers for two vintages. The wines differed greatly in their anthocyanin profiles. Mid-IR spectra were also recorded with the use of a Fourier Transform Infrared Spectrophotometer in ZnSe disk mode. Analysis of Variance was used to explore the parameters' dependency on time. Determination models were developed for the chemical age indexes using Partial Least Squares (PLS) (TQ Analyst software) considering the spectral region 1830–1500 cm−1. The correlation coefficients (r) for chemical age index i were 0.86 for Kotsifali (Root Mean Square Error of Calibration (RMSEC) = 0.067, Root Mean Square Error of Prediction (RMSEP) = 0.115, and Root Mean Square Error of Cross-Validation (RMSECV) = 0.164) and 0.90 for Mandilari (RMSEC = 0.050, RMSEP = 0.040, and RMSECV = 0.089). For chemical age index ii the correlation coefficients (r) were 0.86 and 0.97 for Kotsifali (RMSEC = 0.044, RMSEP = 0.087, and RMSECV = 0.214) and Mandilari (RMSEC = 0.024, RMSEP = 0.033, and RMSECV = 0.078), respectively. The proposed method is simpler, less time consuming, and more economical and does not require chemical reagents. PMID:29225994

  7. Application of near-infrared spectroscopy for the rapid quality assessment of Radix Paeoniae Rubra

    NASA Astrophysics Data System (ADS)

    Zhan, Hao; Fang, Jing; Tang, Liying; Yang, Hongjun; Li, Hua; Wang, Zhuju; Yang, Bin; Wu, Hongwei; Fu, Meihong

    2017-08-01

    Near-infrared (NIR) spectroscopy with multivariate analysis was used to quantify gallic acid, catechin, albiflorin, and paeoniflorin in Radix Paeoniae Rubra, and the feasibility of classifying samples originating from different areas was investigated. A new high-performance liquid chromatography method was developed and validated to analyze gallic acid, catechin, albiflorin, and paeoniflorin in Radix Paeoniae Rubra as the reference. Partial least squares (PLS), principal component regression (PCR), and stepwise multivariate linear regression (SMLR) were performed to calibrate the regression model. Different data pretreatments such as derivatives (1st and 2nd), multiplicative scatter correction, standard normal variate, Savitzky-Golay filter, and Norris derivative filter were applied to remove the systematic errors. The performance of the model was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of prediction (RMSEP), root mean square error of cross-validation (RMSECV), and correlation coefficient (r). The results show that compared to PCR and SMLR, PLS had lower RMSEC, RMSECV, and RMSEP values and a higher r for all four analytes. PLS coupled with proper pretreatments showed good performance in both the fitting and predicting results. Furthermore, the original areas of the Radix Paeoniae Rubra samples were partly distinguished by principal component analysis. This study shows that NIR with PLS is a reliable, inexpensive, and rapid tool for the quality assessment of Radix Paeoniae Rubra.

  8. Quick method (FT-NIR) for the determination of oil and major fatty acids content in whole achenes of milk thistle (Silybum marianum (L.) Gaertn.).

    PubMed

    Koláčková, Pavla; Růžičková, Gabriela; Gregor, Tomáš; Šišperová, Eliška

    2015-08-30

    Calibration models for the Fourier transform-near infrared (FT-NIR) instrument were developed for quick and non-destructive determination of oil and fatty acids in whole achenes of milk thistle. Samples with a range of oil and fatty acid levels were collected and their transmittance spectra were obtained by the FT-NIR instrument. Based on these spectra and data gained by means of the reference methods - Soxhlet extraction and gas chromatography (GC) - calibration models were created by means of partial least squares (PLS) regression analysis. Precision and accuracy of the calibration models were verified via the cross-validation of validation samples whose spectra were not part of the calibration model, and also according to the root mean square error of prediction (RMSEP), root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV), and the validation coefficient of determination (R(2)). The R(2) values for whole seeds were 0.96, 0.96, 0.83, and 0.67 and the RMSEP values were 0.76, 1.68, 1.24, and 0.54 for oil, linoleic (C18:2), oleic (C18:1), and palmitic (C16:0) acids, respectively. The calibration models are appropriate for the non-destructive determination of oil and fatty acid levels in whole seeds of milk thistle. © 2014 Society of Chemical Industry.

  9. Green method by diffuse reflectance infrared spectroscopy and spectral region selection for the quantification of sulphamethoxazole and trimethoprim in pharmaceutical formulations.

    PubMed

    da Silva, Fabiana E B; Flores, Érico M M; Parisotto, Graciele; Müller, Edson I; Ferrão, Marco F

    2016-03-01

    An alternative method for the quantification of sulphamethoxazole (SMZ) and trimethoprim (TMP) using diffuse reflectance infrared Fourier-transform spectroscopy (DRIFTS) and partial least squares regression (PLS) was developed. Interval Partial Least Squares (iPLS) and Synergy Interval Partial Least Squares (siPLS) were applied to select a spectral range that provided the lowest prediction error in comparison to the full-spectrum model. Fifteen commercial tablet formulations and forty-nine synthetic samples were used. The concentration ranges considered were 400 to 900 mg g-1 SMZ and 80 to 240 mg g-1 TMP. Spectral data were recorded between 600 and 4000 cm-1 with a 4 cm-1 resolution. The proposed procedure was compared to high performance liquid chromatography (HPLC). The results obtained from the root mean square error of prediction (RMSEP) during the validation of the models for samples of sulphamethoxazole (SMZ) and trimethoprim (TMP) using siPLS demonstrate that this approach is a valid technique for use in quantitative analysis of pharmaceutical formulations. The selected interval algorithm allowed building regression models with smaller errors compared to the full-spectrum PLS model. An RMSEP of 13.03 mg g-1 for SMZ and 4.88 mg g-1 for TMP was obtained after the selection of the best spectral regions by siPLS.
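
    The interval-selection idea behind iPLS can be sketched in a few lines: fit a PLS model on each contiguous spectral window and keep the window with the lowest cross-validated error. The sketch below uses scikit-learn and is a simplified stand-in for the iPLS/siPLS procedures named above (siPLS additionally searches over combinations of intervals).

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        def ipls_best_interval(X, y, n_intervals=20, n_components=5):
            """Return ((start_col, end_col), RMSECV) of the best interval."""
            edges = np.linspace(0, X.shape[1], n_intervals + 1, dtype=int)
            best = (None, np.inf)
            for lo, hi in zip(edges[:-1], edges[1:]):
                pls = PLSRegression(n_components=min(n_components, hi - lo))
                y_cv = cross_val_predict(pls, X[:, lo:hi], y, cv=5)
                rmse = np.sqrt(np.mean((y - y_cv.ravel()) ** 2))
                if rmse < best[1]:
                    best = ((lo, hi), rmse)
            return best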

  10. New method for propagating the square root covariance matrix in triangular form. [using Kalman-Bucy filter

    NASA Technical Reports Server (NTRS)

    Choe, C. Y.; Tapley, B. D.

    1975-01-01

    A method proposed by Potter of applying the Kalman-Bucy filter to the problem of estimating the state of a dynamic system is described, in which the square root of the state error covariance matrix is used to process the observations. A new technique which propagates the covariance square root matrix in lower triangular form is given for the discrete observation case. The technique is faster than previously proposed algorithms and is well-adapted for use with the Carlson square root measurement algorithm.
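
    The standard way to keep a propagated covariance square root triangular is a QR factorization of the stacked square-root factors; the sketch below shows that generic "array" time update as an illustration (a QR-based analogue, not Choe and Tapley's specific algorithm).

        import numpy as np

        def propagate_sqrt_cov(S, F, Qs):
            """Propagate a lower-triangular covariance square root S
            (P = S S^T) through x' = F x + w with cov(w) = Qs Qs^T,
            returning a lower-triangular result without forming P."""
            M = np.hstack([F @ S, Qs])        # M M^T = F P F^T + Q
            R = np.linalg.qr(M.T, mode='r')   # M^T = Q R  =>  M M^T = R^T R
            S_new = R.T                       # lower triangular
            signs = np.sign(np.diag(S_new))   # fix QR sign ambiguity
            signs[signs == 0] = 1.0
            return S_new * signs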

  11. A Comparison of Normal and Elliptical Estimation Methods in Structural Equation Models.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.; Cheevatanarak, Suchittra

    Monte Carlo simulation compared chi-square statistics, parameter estimates, and root mean square error of approximation values using normal and elliptical estimation methods. Three research conditions were imposed on the simulated data: sample size, population contamination percent, and kurtosis. A Bentler-Weeks structural model established the…

  12. Phase modulation for reduced vibration sensitivity in laser-cooled clocks in space

    NASA Technical Reports Server (NTRS)

    Klipstein, W.; Dick, G.; Jefferts, S.; Walls, F.

    2001-01-01

    The standard interrogation technique in atomic beam clocks is square-wave frequency modulation (SWFM), which suffers a first-order sensitivity to vibrations, as changes in the transit time of the atoms translate into perceived frequency errors. Square-wave phase modulation (SWPM) interrogation eliminates sensitivity to this noise.

  13. An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models

    ERIC Educational Resources Information Center

    Prindle, John J.; McArdle, John J.

    2012-01-01

    This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…

  14. Parameter estimation of Monod model by the Least-Squares method for microalgae Botryococcus Braunii sp

    NASA Astrophysics Data System (ADS)

    See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.

    2018-04-01

    This research aims to estimate the parameters of the Monod model for the growth of the microalgae Botryococcus Braunii sp. by the Least-Squares method. The Monod equation is a non-linear equation which can be transformed into linear form and solved by the linear Least-Squares regression method. Alternatively, the Gauss-Newton method solves the non-linear Least-Squares problem directly, obtaining the parameter values of the Monod model by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for the microalgae Botryococcus Braunii sp. can be estimated by the Least-Squares method. However, the parameter estimates obtained by the non-linear Least-Squares method are more accurate than those from the linear Least-Squares method, since the SSE of the non-linear Least-Squares method is smaller than that of the linear Least-Squares method.
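
    A minimal sketch of the two estimation routes described above, with hypothetical substrate/growth-rate data: a non-linear least-squares fit of the Monod model mu = mu_max*S/(Ks + S), and the linearized (double-reciprocal) fit 1/mu = (Ks/mu_max)*(1/S) + 1/mu_max.

        import numpy as np
        from scipy.optimize import curve_fit

        def monod(S, mu_max, Ks):
            return mu_max * S / (Ks + S)

        # Hypothetical substrate concentrations S and growth rates mu.
        S = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
        mu = np.array([0.10, 0.19, 0.28, 0.36, 0.43, 0.46])

        # Non-linear least squares (a damped Gauss-Newton-type solver
        # runs inside curve_fit).
        (mu_max, Ks), _ = curve_fit(monod, S, mu, p0=[0.5, 1.0])
        sse = np.sum((mu - monod(S, mu_max, Ks)) ** 2)

        # Linearized fit via the reciprocal transformation.
        slope, intercept = np.polyfit(1.0 / S, 1.0 / mu, 1)
        mu_max_lin, Ks_lin = 1.0 / intercept, slope / intercept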

  15. Evidence for Response Bias as a Source of Error Variance in Applied Assessment

    ERIC Educational Resources Information Center

    McGrath, Robert E.; Mitchell, Matthew; Kim, Brian H.; Hough, Leaetta

    2010-01-01

    After 100 years of discussion, response bias remains a controversial topic in psychological measurement. The use of bias indicators in applied assessment is predicated on the assumptions that (a) response bias suppresses or moderates the criterion-related validity of substantive psychological indicators and (b) bias indicators are capable of…

  16. A Criterion to Evaluate the Individual Raw-to-Scale Equating Conversions. Research Report. ETS RR-13-05

    ERIC Educational Resources Information Center

    Guo, Hongwen; Puhan, Gautam; Walker, Michael

    2013-01-01

    In this study we investigated when an equating conversion line is problematic in terms of gaps and clumps. We suggest using the conditional standard error of measurement (CSEM) to identify scale scores that are inappropriate in the overall raw-to-scale transformation.

  17. Embedded feature ranking for ensemble MLP classifiers.

    PubMed

    Windeatt, Terry; Duangsoithong, Rakkrit; Smith, Raymond

    2011-06-01

    A feature ranking scheme for multilayer perceptron (MLP) ensembles is proposed, along with a stopping criterion based upon the out-of-bootstrap estimate. To solve multi-class problems feature ranking is combined with modified error-correcting output coding. Experimental results on benchmark data demonstrate the versatility of the MLP base classifier in removing irrelevant features.

  18. Using Repeated Reading to Improve Reading Speed and Comprehension in Students with Visual Impairments

    ERIC Educational Resources Information Center

    Savaiano, Mackenzie E.; Hatton, Deborah D.

    2013-01-01

    Introduction: This study evaluated whether children with visual impairments who receive repeated reading instruction exhibit an increase in their oral reading rate and comprehension and a decrease in oral reading error rates. Methods: A single-subject, changing-criterion design replicated across three participants was used to demonstrate the…

  19. BMDS: A Collection of R Functions for Bayesian Multidimensional Scaling

    ERIC Educational Resources Information Center

    Okada, Kensuke; Shigemasu, Kazuo

    2009-01-01

    Bayesian multidimensional scaling (MDS) has attracted a great deal of attention because: (1) it provides a better fit than do classical MDS and ALSCAL; (2) it provides estimation errors of the distances; and (3) the Bayesian dimension selection criterion, MDSIC, provides a direct indication of optimal dimensionality. However, Bayesian MDS is not…

  20. Can Passive Touch Be Better than Active Touch? A Comparison of Active and Passive Tactile Maze Learning.

    ERIC Educational Resources Information Center

    Richardson, Barry L.; And Others

    1981-01-01

    In a comparison of the performance of active and passive mechanically yoked subjects who learned their way through a tactile maze, it was shown that active subjects made more errors and took a greater number of trials to reach criterion than did passive subjects. (Author)

  1. Fisher's method of combining dependent statistics using generalizations of the gamma distribution with applications to genetic pleiotropic associations.

    PubMed

    Li, Qizhai; Hu, Jiyuan; Ding, Juan; Zheng, Gang

    2014-04-01

    A classical approach to combining independent test statistics is Fisher's combination of p-values, which follows the chi-squared distribution. When the test statistics are dependent, the gamma distribution (GD) is commonly used for the Fisher's combination test (FCT). We propose to use two generalizations of the GD: the generalized and the exponentiated GDs. We study some properties of mis-using the GD for the FCT to combine dependent statistics when one of the two proposed distributions is true. Our results show that both generalizations have better control of type I error rates than the GD, which tends to have inflated type I error rates at the more extreme tails. In practice, common model selection criteria (e.g., the Akaike information criterion/Bayesian information criterion) can be used to help select the better distribution to use for the FCT. A simple strategy for using the two generalizations of the GD in genome-wide association studies is discussed. Applications of the results to genetic pleiotropic associations are described, where multiple traits are tested for association with a single marker.
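
    For independent statistics, Fisher's combination test is a two-liner: T = -2 * sum(ln p_i) is referred to a chi-squared distribution with 2k degrees of freedom. A minimal sketch follows; under dependence, the abstract's point is that the chi-squared reference is replaced by a gamma distribution or one of its generalizations.

        import numpy as np
        from scipy import stats

        def fisher_combination(pvals):
            """Fisher's combination of k independent p-values:
            T ~ chi-squared with 2k degrees of freedom under H0."""
            pvals = np.asarray(pvals, dtype=float)
            T = -2.0 * np.sum(np.log(pvals))
            return T, stats.chi2.sf(T, df=2 * len(pvals))

        # Under dependence one would instead calibrate T against, e.g.,
        # stats.gamma.sf(T, a=shape, scale=scale) with fitted parameters.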

  2. Form and Objective of the Decision Rule in Absolute Identification

    NASA Technical Reports Server (NTRS)

    Balakrishnan, J. D.

    1997-01-01

    In several conditions of a line length identification experiment, the subjects' decision making strategies were systematically biased against the responses on the edges of the stimulus range. When the range and number of the stimuli were small, the bias caused the percentage of correct responses to be highest in the center and lowest on the extremes of the range. Two general classes of decision rules that would explain these results are considered. The first class assumes that subjects intend to adopt an optimal decision rule, but systematically misrepresent one or more parameters of the decision making context. The second class assumes that subjects use a different measure of performance than the one assumed by the experimenter: instead of maximizing the chances of a correct response, the subject attempts to minimize the expected size of the response error (a "fidelity criterion"). In a second experiment, extended experience and feedback did not diminish the bias effect, but explicitly penalizing all response errors equally, regardless of their size, did reduce or eliminate it in some subjects. Both results favor the fidelity criterion over the optimal rule.

  3. Detecting adverse events in surgery: comparing events detected by the Veterans Health Administration Surgical Quality Improvement Program and the Patient Safety Indicators.

    PubMed

    Mull, Hillary J; Borzecki, Ann M; Loveland, Susan; Hickson, Kathleen; Chen, Qi; MacDonald, Sally; Shin, Marlena H; Cevasco, Marisa; Itani, Kamal M F; Rosen, Amy K

    2014-04-01

    The Patient Safety Indicators (PSIs) use administrative data to screen for select adverse events (AEs). In this study, VA Surgical Quality Improvement Program (VASQIP) chart review data were used as the gold standard to measure the criterion validity of 5 surgical PSIs. Independent chart review was also used to determine reasons for PSI errors. The sensitivity, specificity, and positive predictive value of PSI software version 4.1a were calculated among Veterans Health Administration hospitalizations (2003-2007) reviewed by VASQIP (n = 268,771). Nurses re-reviewed a sample of hospitalizations for which PSI and VASQIP AE detection disagreed. Sensitivities ranged from 31% to 68%, specificities from 99.1% to 99.8%, and positive predictive values from 31% to 72%. Reviewers found that coding errors accounted for some PSI-VASQIP disagreement; some disagreement was also the result of differences in AE definitions. These results suggest that the PSIs have moderate criterion validity; however, some surgical PSIs detect different AEs than VASQIP. Future research should explore using both methods to evaluate surgical quality. Published by Elsevier Inc.

  4. Assessing the Progress of Trapped-Ion Processors Towards Fault-Tolerant Quantum Computation

    NASA Astrophysics Data System (ADS)

    Bermudez, A.; Xu, X.; Nigmatullin, R.; O'Gorman, J.; Negnevitsky, V.; Schindler, P.; Monz, T.; Poschinger, U. G.; Hempel, C.; Home, J.; Schmidt-Kaler, F.; Biercuk, M.; Blatt, R.; Benjamin, S.; Müller, M.

    2017-10-01

    A quantitative assessment of the progress of small prototype quantum processors towards fault-tolerant quantum computation is a problem of current interest in experimental and theoretical quantum information science. We introduce a necessary and fair criterion for quantum error correction (QEC), which must be achieved in the development of these quantum processors before their sizes are sufficiently big to consider the well-known QEC threshold. We apply this criterion to benchmark the ongoing effort in implementing QEC with topological color codes using trapped-ion quantum processors and, more importantly, to guide the future hardware developments that will be required in order to demonstrate beneficial QEC with small topological quantum codes. In doing so, we present a thorough description of a realistic trapped-ion toolbox for QEC and a physically motivated error model that goes beyond standard simplifications in the QEC literature. We focus on laser-based quantum gates realized in two-species trapped-ion crystals in high-optical aperture segmented traps. Our large-scale numerical analysis shows that, with the foreseen technological improvements described here, this platform is a very promising candidate for fault-tolerant quantum computation.

  5. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    PubMed

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. In (Stat Med 23:1023-1038, 2004) it is, however, proven that when observations follow a normal distribution and the interim result shows promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.

  6. Social influences on adaptive criterion learning.

    PubMed

    Cassidy, Brittany S; Dubé, Chad; Gutchess, Angela H

    2015-07-01

    People adaptively shift decision criteria when given biased feedback encouraging specific types of errors. Given that work on this topic has been conducted in nonsocial contexts, we extended the literature by examining adaptive criterion learning in both social and nonsocial contexts. Specifically, we compared potential differences in criterion shifting given performance feedback from social sources varying in reliability and from a nonsocial source. Participants became lax when given false positive feedback for false alarms, and became conservative when given false positive feedback for misses, replicating prior work. In terms of a social influence on adaptive criterion learning, people became more lax in response style over time if feedback was provided by a nonsocial source or by a social source meant to be perceived as unreliable and low-achieving. In contrast, people adopted a more conservative response style over time if performance feedback came from a high-achieving and reliable source. Awareness that a reliable and high-achieving person had not provided their feedback reduced the tendency to become more conservative, relative to those unaware of the source manipulation. Because teaching and learning often occur in a social context, these findings may have important implications for many scenarios in which people fine-tune their behaviors, given cues from others.

  7. Understanding Scaling Relations in Fracture and Mechanical Deformation of Single Crystal and Polycrystalline Silicon by Performing Atomistic Simulations at Mesoscale

    DTIC Science & Technology

    2009-07-16

    R(2) = SSR/SSTO = 1 − SSE/SSTO, where SSR = Σ(Ŷi − Ȳ)² is the regression sum of squares, SSE = Σ(Yi − Ŷi)² is the error sum of squares, and SSTO = SSE + SSR is the total sum of squares (Ȳ: mean value; Ŷi: value from the fitted line). Statistical analysis: coefficient of correlation.

  8. Synthesis of hover autopilots for rotary-wing VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Hall, W. E.; Bryson, A. E., Jr.

    1972-01-01

    The practical situation is considered where imperfect information on only a few rotor and fuselage state variables is available. Filters are designed to estimate all the state variables from noisy measurements of fuselage pitch/roll angles and from noisy measurements of both fuselage and rotor pitch/roll angles. The mean square response of the vehicle to a very gusty, random wind is computed using various filter/controllers and is found to be quite satisfactory although, of course, not so good as when one has perfect information (idealized case). The second part of the report considers precision hover over a point on the ground. A vehicle model without rotor dynamics is used and feedback signals in position and integral of position error are added. The mean square response of the vehicle to a very gusty, random wind is computed, assuming perfect information feedback, and is found to be excellent. The integral error feedback gives zero position error for a steady wind, and smaller position error for a random wind.

  9. Bayesian generalized least squares regression with application to log Pearson type 3 regional skew estimation

    NASA Astrophysics Data System (ADS)

    Reis, D. S.; Stedinger, J. R.; Martins, E. S.

    2005-10-01

    This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.

  10. Robustness study of the pseudo open-loop controller for multiconjugate adaptive optics.

    PubMed

    Piatrou, Piotr; Gilles, Luc

    2005-02-20

    Robustness of the recently proposed "pseudo open-loop control" (POLC) algorithm against various system errors has been investigated for the representative example of the Gemini-South 8-m telescope multiconjugate adaptive-optics system. The existing model representing the adaptive-optics system with pseudo open-loop control has been modified to account for misalignments, noise, and calibration errors in the deformable mirrors and wave-front sensors. A comparison with the conventional least-squares control model has been made. We show with the aid of both transfer-function pole-placement analysis and Monte Carlo simulations that POLC remains remarkably stable and robust against very large levels of system errors and in this respect outperforms least-squares control. Approximate stability margins as well as performance metrics such as Strehl ratios and rms wave-front residuals averaged over a 1-arc min field of view have been computed for different types and levels of system errors to quantify the expected performance degradation.

  11. A model of the human supervisor

    NASA Technical Reports Server (NTRS)

    Kok, J. J.; Vanwijk, R. A.

    1977-01-01

    A general model of the human supervisor's behavior is given. Submechanisms of the model include the observer/reconstructor, the decision maker, and the controller. A set of hypotheses is postulated for the relations between the task variables and the parameters of the different submechanisms of the model. Verification of the model hypotheses is considered using variations in the task variables. An approach is suggested for the identification of the model parameters which makes use of a multidimensional error criterion. Each element of this multidimensional criterion corresponds to a certain aspect of the supervisor's behavior and is directly related to a particular part of the model and its parameters. This approach offers good possibilities for an efficient parameter adjustment procedure.

  12. The performance of trellis coded multilevel DPSK on a fading mobile satellite channel

    NASA Technical Reports Server (NTRS)

    Simon, Marvin K.; Divsalar, Dariush

    1987-01-01

    The performance of trellis coded multilevel differential phase-shift-keying (MDPSK) over Rician and Rayleigh fading channels is discussed. For operation at L-Band, this signalling technique leads to a more robust system than the coherent system with dual pilot tone calibration previously proposed for UHF. The results are obtained using a combination of analysis and simulation. The analysis shows that the design criterion for trellis codes to be operated on fading channels with interleaving/deinterleaving is no longer free Euclidean distance. The correct design criterion for optimizing bit error probability of trellis coded MDPSK over fading channels will be presented along with examples illustrating its application.

  13. A Comprehensive Study of Gridding Methods for GPS Horizontal Velocity Fields

    NASA Astrophysics Data System (ADS)

    Wu, Yanqiang; Jiang, Zaisen; Liu, Xiaoxia; Wei, Wenxin; Zhu, Shuang; Zhang, Long; Zou, Zhenyu; Xiong, Xiaohui; Wang, Qixin; Du, Jiliang

    2017-03-01

    Four gridding methods for GPS velocities are compared in terms of their precision, applicability and robustness by analyzing simulated data with uncertainties from 0.0 to ±3.0 mm/a. When the input data are 1° × 1° grid sampled and the uncertainty of the additional error is greater than ±1.0 mm/a, the gridding results show that the least-squares collocation method is highly robust while the robustness of the Kriging method is low. In contrast, the spherical harmonics and the multi-surface function are moderately robust, and the regional singular values for the multi-surface function method and the edge effects for the spherical harmonics method become more significant with increasing uncertainty of the input data. When the input data (with additional errors of ±2.0 mm/a) are decimated by 50% from the 1° × 1° grid data and then erased in three 6° × 12° regions, the gridding results in these three regions indicate that the least-squares collocation and the spherical harmonics methods have good performances, while the multi-surface function and the Kriging methods may lead to singular values. The gridding techniques are also applied to GPS horizontal velocities with an average error of ±0.8 mm/a over the Chinese mainland and the surrounding areas, and the results show that the least-squares collocation method has the best performance, followed by the Kriging and multi-surface function methods. Furthermore, the edge effects of the spherical harmonics method are significantly affected by the sparseness and geometric distribution of the input data. In general, the least-squares collocation method is superior in terms of its robustness, edge effect, error distribution and stability, while the other methods have several positive features.

  14. Analysis of S-box in Image Encryption Using Root Mean Square Error Method

    NASA Astrophysics Data System (ADS)

    Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan

    2012-07-01

    The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving and many variants appear in the literature, which include the advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence for distinguishing an S-box for application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to the existing algebraic and statistical analyses already used for image encryption applications, we propose an application of the root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of root mean square error analysis in statistics has proven effective in determining the difference between the original data and the processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of the root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of S-boxes.
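
    The root mean square error analysis itself is straightforward; a minimal sketch comparing an original image with its encrypted version (hypothetical equal-shape grayscale arrays):

        import numpy as np

        def image_rmse(original, encrypted):
            """RMSE between plaintext and ciphertext images; larger values
            indicate the encrypted image deviates more from the original."""
            a = np.asarray(original, dtype=float)
            b = np.asarray(encrypted, dtype=float)
            return np.sqrt(np.mean((a - b) ** 2))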

  15. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method gives a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution whose shape is high and narrow.
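
    A one-dimensional simplification of the PDF-shaping criterion described above can be sketched as follows: estimate the modeling-error PDF by kernel density estimation and penalize its squared deviation from a high, narrow Gaussian target (the paper's formulation is 2D over time and space, and the target width here is an assumption).

        import numpy as np
        from scipy.stats import gaussian_kde, norm

        def pdf_shaping_loss(errors, target_std=0.1, grid=None):
            """Quadratic deviation between the KDE of modeling errors and
            a narrow Gaussian target PDF centered at zero."""
            if grid is None:
                grid = np.linspace(-1.0, 1.0, 201)
            kde = gaussian_kde(errors)     # data-driven KDE of residuals
            target = norm.pdf(grid, loc=0.0, scale=target_std)
            return np.sum((kde(grid) - target) ** 2)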

  16. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method gives a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution whose shape is high and narrow.

  17. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE PAGES

    Zhou, Ping; Wang, Chenyu; Li, Mingjie; ...

    2018-01-31

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method gives a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution whose shape is high and narrow.

  18. Effect of ancilla's structure on quantum error correction using the seven-qubit Calderbank-Shor-Steane code

    NASA Astrophysics Data System (ADS)

    Salas, P. J.; Sanz, A. L.

    2004-05-01

    In this work we discuss the ability of different types of ancillas to control the decoherence of a qubit interacting with an environment. The error is introduced into the numerical simulation via a depolarizing isotropic channel. The ranges of values considered are 10(-4) ≤ ε ≤ 10(-2) for memory errors and 3 × 10(-5) ≤ γ/7 ≤ 10(-2) for gate errors. After the correction we calculate the fidelity as a quality criterion for the recovered qubit. We observe that a recovery method with a three-qubit ancilla provides reasonably good results bearing in mind its economy. If we want to go further, we have to use fault-tolerant ancillas with a high degree of parallelism, even if this condition implies introducing additional ancilla verification qubits.

  19. Error in telemetry studies: Effects of animal movement on triangulation

    USGS Publications Warehouse

    Schmutz, Joel A.; White, Gary C.

    1990-01-01

    We used Monte Carlo simulations to investigate the effects of animal movement on error of estimated animal locations derived from radio-telemetry triangulation of sequentially obtained bearings. Simulated movements of 0-534 m resulted in up to 10-fold increases in average location error but <10% decreases in location precision when observer-to-animal distances were <1,000 m. Location error and precision were minimally affected by censorship of poor locations with Chi-square goodness-of-fit tests. Location error caused by animal movement can only be eliminated by taking simultaneous bearings.
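
    The effect simulated above is easy to reproduce: take two sequential bearings while the animal moves between them, triangulate as if it were stationary, and measure the location error. A minimal sketch with illustrative geometry and 2-degree bearing noise follows; the reference point for the "true" location (here, the path midpoint) is a modeling choice.

        import numpy as np

        rng = np.random.default_rng(1)

        def triangulate(p1, b1, p2, b2):
            """Intersect bearing lines from observers p1, p2 (bearings in
            radians, counterclockwise from east)."""
            d1 = np.array([np.cos(b1), np.sin(b1)])
            d2 = np.array([np.cos(b2), np.sin(b2)])
            t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
            return p1 + t[0] * d1

        def bearing(obs, animal, sd_deg=2.0):
            v = animal - obs
            return np.arctan2(v[1], v[0]) + rng.normal(0.0, np.radians(sd_deg))

        p1, p2 = np.array([0.0, 0.0]), np.array([800.0, 0.0])
        errors = []
        for _ in range(1000):
            start = np.array([400.0, 600.0])
            end = start + rng.uniform(-250.0, 250.0, size=2)  # movement
            est = triangulate(p1, bearing(p1, start), p2, bearing(p2, end))
            errors.append(np.linalg.norm(est - (start + end) / 2.0))
        print("mean location error:", np.mean(errors))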

  20. Clinical anthropometrics and body composition from 3D whole-body surface scans

    PubMed Central

    Ng, BK; Hinton, BJ; Fan, B; Kanaya, AM; Shepherd, JA

    2017-01-01

    BACKGROUND/OBJECTIVES Obesity is a significant worldwide epidemic that necessitates accessible tools for robust body composition analysis. We investigated whether widely available 3D body surface scanners can provide clinically relevant direct anthropometrics (circumferences, areas and volumes) and body composition estimates (regional fat/lean masses). SUBJECTS/METHODS Thirty-nine healthy adults stratified by age, sex and body mass index (BMI) underwent whole-body 3D scans, dual energy X-ray absorptiometry (DXA), air displacement plethysmography and tape measurements. Linear regressions were performed to assess agreement between 3D measurements and criterion methods. Linear models were derived to predict DXA body composition from 3D scan measurements. Thirty-seven external fitness center users underwent 3D scans and bioelectrical impedance analysis for model validation. RESULTS 3D body scan measurements correlated strongly to criterion methods: waist circumference R2 = 0.95, hip circumference R2 = 0.92, surface area R2 = 0.97 and volume R2 = 0.99. However, systematic differences were observed for each measure due to discrepancies in landmark positioning. Predictive body composition equations showed strong agreement for whole body (fat mass R2 = 0.95, root mean square error (RMSE) = 2.4 kg; fat-free mass R2 = 0.96, RMSE = 2.2 kg) and arms, legs and trunk (R2 = 0.79–0.94, RMSE = 0.5–1.7 kg). Visceral fat prediction showed moderate agreement (R2 = 0.75, RMSE = 0.11 kg). CONCLUSIONS 3D surface scanners offer precise and stable automated measurements of body shape and composition. Software updates may be needed to resolve measurement biases resulting from landmark positioning discrepancies. Further studies are justified to elucidate relationships between body shape, composition and metabolic health across sex, age, BMI and ethnicity groups, as well as in those with metabolic disorders. PMID:27329614

  1. Modeling the pressure inactivation of Escherichia coli and Salmonella typhimurium in sapote mamey ( Pouteria sapota (Jacq.) H.E. Moore & Stearn) pulp.

    PubMed

    Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto

    2018-03-01

    High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 °C. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R² > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential-decay models, and a Bigelow-type model and an empirical model, were tested as alternative secondary models for the b'(P) and n(P) parameters, respectively. The process validation considered two- and one-step nonlinear regressions for predicting the survival fraction; both regression types provided an adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model combined with the exponential-logistic and exponential-decay secondary models as functions of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, taking d = 5 (t_5) as the criterion of a 5 log10 (5D) reduction; the desired 5D reductions in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min for the one- and two-step nonlinear procedures, respectively.
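
    As a sketch of the primary-model step, a Weibull survival curve log10 S(t) = -b·t^n can be fitted to log-reduction data by nonlinear least squares and scored by RMSE and AIC, mirroring the model comparison above. The synthetic data, starting values, and least-squares AIC formula below are assumptions for the sketch, not the study's fitting code.

      import numpy as np
      from scipy.optimize import curve_fit

      def weibull_log_survival(t, b, n):
          """Weibull primary model: log10(N/N0) = -b * t**n (tailing when n < 1)."""
          return -b * t ** n

      t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])               # exposure time, min
      logS = np.array([0.0, -1.8, -2.9, -4.1, -4.8, -5.2])       # synthetic log10 reductions

      (b, n), _ = curve_fit(weibull_log_survival, t, logS, p0=(1.0, 0.5))
      resid = logS - weibull_log_survival(t, b, n)
      rmse = np.sqrt(np.mean(resid ** 2))
      k, m = 2, len(t)                                           # parameters, observations
      aic = m * np.log(np.sum(resid ** 2) / m) + 2 * k           # least-squares form of AIC
      print(f"b = {b:.3f}, n = {n:.3f}, RMSE = {rmse:.3f}, AIC = {aic:.1f}")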

  2. Estimating catchment scale groundwater dynamics from recession analysis - enhanced constraining of hydrological models

    NASA Astrophysics Data System (ADS)

    Skaugen, T.; Mengistu, Z.

    2015-10-01

    In this study we propose a new formulation of subsurface water storage dynamics for use in rainfall-runoff models. Under the assumption of a strong relationship between storage and runoff, the temporal distribution of storage is considered to have the same shape as the distribution of observed recessions (measured as differences between the logs of runoff values). The mean subsurface storage is estimated as the storage at steady state, where moisture input equals the mean annual runoff. An important contribution of the new formulation is that its parameters are derived directly from observed recession data and the mean annual runoff, and hence are estimated prior to calibration. Key principles guiding the evaluation of the new subsurface storage routine have been (a) to minimize the number of parameters estimated through the often arbitrary fitting used to optimize runoff predictions (calibration) and (b) to maximize the range of testing conditions (i.e., large-sample hydrology). The new storage routine has been implemented in the already parameter-parsimonious Distance Distribution Dynamics (DDD) model and tested for 73 catchments in Norway of varying size, mean elevation and landscape type. Runoff simulations for the 73 catchments from two model structures, DDD with calibrated subsurface storage and DDD with the new estimated subsurface storage, were compared. No loss in precision of runoff simulations was found using the new estimated storage routine. For the 73 catchments, the average Nash-Sutcliffe efficiency was 0.68 with the new estimated storage routine compared with 0.66 for the calibrated storage routine. The average Kling-Gupta efficiency was 0.69 and 0.70 for the new and old storage routines, respectively. Runoff recessions are modelled more realistically using the new approach, since the root mean square error between the means of observed and simulated recessions was reduced by almost 50% using the new storage routine.
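
    A toy illustration of the recession-characterization step (not the DDD implementation): collect the log-differences of declining daily flows and summarize their distribution, which is the kind of observable statistic the new routine uses in place of calibrated storage parameters. The synthetic flow series below is assumed.

      import numpy as np

      rng = np.random.default_rng(0)
      q = 10.0 * np.exp(-0.05 * np.arange(365)) * np.exp(rng.normal(0, 0.1, 365))  # synthetic daily runoff

      dlog = np.log(q[:-1]) - np.log(q[1:])       # recession increments, log(Q_t) - log(Q_t+1)
      rec = dlog[dlog > 0]                        # keep declining-flow days only
      print(f"mean recession {rec.mean():.4f}, coefficient of variation {rec.std() / rec.mean():.2f}")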

  3. Validity of the posttraumatic stress disorders (PTSD) checklist in pregnant women.

    PubMed

    Gelaye, Bizu; Zheng, Yinnan; Medina-Mora, Maria Elena; Rondon, Marta B; Sánchez, Sixto E; Williams, Michelle A

    2017-05-12

    The PTSD Checklist-civilian (PCL-C) is one of the most commonly used self-report measures of PTSD symptoms; however, little is known about its validity when used in pregnancy. This study aims to evaluate the reliability and validity of the PCL-C as a screen for detecting PTSD symptoms among pregnant women. A total of 3372 pregnant women who attended their first prenatal care visit in Lima, Peru participated in the study. We assessed the reliability of the PCL-C items using Cronbach's alpha. Criterion validity and performance characteristics of the PCL-C were assessed against an independent, blinded Clinician-Administered PTSD Scale (CAPS) interview using measures of sensitivity, specificity and receiver operating characteristic (ROC) curves. We tested construct validity using exploratory and confirmatory factor analytic approaches. The reliability of the PCL-C was excellent (Cronbach's alpha = 0.90). ROC analysis showed that a cut-off score of 26 offered optimal discriminatory power, with a sensitivity of 0.86 (95% CI: 0.78-0.92) and a specificity of 0.63 (95% CI: 0.62-0.65). The area under the ROC curve was 0.75 (95% CI: 0.71-0.78). A three-factor solution was extracted using exploratory factor analysis and was further examined alongside three other models using confirmatory factor analysis (CFA). In the CFA, a three-factor model based on the DSM-IV symptom structure had reasonable fit statistics, with a comparative fit index of 0.86 and a root mean square error of approximation of 0.09. The Spanish-language version of the PCL-C may be used as a screening tool for pregnant women. The PCL-C has good reliability, criterion validity and factorial validity. The optimal cut-off score obtained by maximizing the sensitivity and specificity should be considered cautiously; women who screen positive may require further investigation to confirm a PTSD diagnosis.
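
    The cutoff-selection step can be sketched as follows: compute sensitivity and specificity at each candidate PCL-C score and keep the score maximizing Youden's J, one common way of "maximizing the sensitivity and specificity". The toy scores and case labels below are assumptions, not study data.

      import numpy as np

      def best_cutoff(scores, is_case):
          """Return the (cutoff, J) pair maximizing Youden's J = sensitivity + specificity - 1."""
          best = (None, -1.0)
          for c in np.unique(scores):
              pred = scores >= c
              sens = np.mean(pred[is_case])        # true-positive rate among cases
              spec = np.mean(~pred[~is_case])      # true-negative rate among non-cases
              if sens + spec - 1 > best[1]:
                  best = (c, sens + spec - 1)
          return best

      rng = np.random.default_rng(2)
      scores = np.concatenate([rng.normal(40, 10, 200), rng.normal(22, 8, 800)]).round()
      labels = np.array([True] * 200 + [False] * 800)   # synthetic diagnostic-interview outcomes
      print(best_cutoff(scores, labels))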

  4. The theory precision analyse of RFM localization of satellite remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Jianqing; Xv, Biao

    2009-11-01

    The traditional method for assessing the precision of the Rational Function Model (RFM) is to use a large number of check points, computing the mean square error by comparing calculated coordinates with known coordinates. This method comes from probability theory: the mean square error is statistically estimated from a large number of samples, and the estimate can be taken to approach the true value when the sample is large enough. This paper instead approaches the problem from the perspective of survey adjustment, taking the law of propagation of error as its theoretical basis, and derives the theoretical precision of RFM localization. SPOT5 three-line-array imagery is then used as experimental data, and the results of the traditional method and the method described in this paper are compared. The comparison confirms that the traditional method is feasible and answers the question of its theoretical precision from the survey-adjustment point of view.

  5. Fundamental Analysis of the Linear Multiple Regression Technique for Quantification of Water Quality Parameters from Remote Sensing Data. Ph.D. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H., III

    1977-01-01

    Constituents whose radiance varies linearly with concentration may be quantified from signals that contain nonlinear atmospheric and surface-reflection effects, for both homogeneous and non-homogeneous water bodies, provided accurate data can be obtained and the nonlinearities are constant with wavelength. Statistical parameters must be used that indicate bias as well as total squared error, to ensure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least-squares fitting process and to increase the number of points required to obtain a satisfactory fit. The problem of obtaining a multiple regression equation that is extremely sensitive to error is discussed.

  6. Functional Generalized Structured Component Analysis.

    PubMed

    Suk, Hye Won; Hwang, Heungsun

    2016-12-01

    An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA has been geared for the analysis of multivariate data. Accordingly, it cannot deal with functional data that often involve different measurement occasions across participants and a large number of measurement occasions that exceed the number of participants. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that represent infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of functional GSCA is illustrated with gait data.
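
    The basis-expansion ingredient of this approach can be sketched on its own (this is not Functional GSCA itself): represent a noisy curve in a cubic B-spline basis and estimate the coefficients by penalized least squares with a roughness penalty. The knot placement and penalty weight below are assumptions.

      import numpy as np
      from scipy.interpolate import BSpline

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 1.0, 60)
      y = np.sin(2 * np.pi * t) + rng.normal(0, 0.2, t.size)     # noisy functional observation

      k = 3                                                      # cubic B-splines
      knots = np.r_[[0.0] * k, np.linspace(0.0, 1.0, 10), [1.0] * k]
      nbasis = len(knots) - k - 1
      B = BSpline(knots, np.eye(nbasis), k)(t)                   # basis matrix, shape (60, nbasis)

      D = np.diff(np.eye(nbasis), n=2, axis=0)                   # second-difference roughness penalty
      lam = 1e-3
      coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)   # penalized least squares
      print(f"residual RMS: {np.sqrt(np.mean((y - B @ coef) ** 2)):.3f}")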

  7. Diffuse-flow conceptualization and simulation of the Edwards aquifer, San Antonio region, Texas

    USGS Publications Warehouse

    Lindgren, R.J.

    2006-01-01

    A numerical ground-water-flow model (hereinafter, the conduit-flow Edwards aquifer model) of the karstic Edwards aquifer in south-central Texas was developed for a previous study on the basis of a conceptualization emphasizing conduit development and conduit flow, and included simulating conduits as one-cell-wide, continuously connected features. Uncertainties regarding the degree to which conduits pervade the Edwards aquifer and influence ground-water flow, as well as other uncertainties inherent in simulating conduits, raised the question of whether a model based on the conduit-flow conceptualization was the optimum model for the Edwards aquifer. Accordingly, a model with an alternative hydraulic conductivity distribution without conduits was developed in a study conducted during 2004-05 by the U.S. Geological Survey, in cooperation with the San Antonio Water System. The hydraulic conductivity distribution for the modified Edwards aquifer model (hereinafter, the diffuse-flow Edwards aquifer model), based primarily on a conceptualization in which flow in the aquifer predominantly is through a network of numerous small fractures and openings, includes 38 zones, with hydraulic conductivities ranging from 3 to 50,000 feet per day. Revision of model input data for the diffuse-flow Edwards aquifer model was limited to changes in the simulated hydraulic conductivity distribution. The root-mean-square error for 144 target wells for the calibrated steady-state simulation for the diffuse-flow Edwards aquifer model is 20.9 feet. This error represents about 3 percent of the total head difference across the model area. The simulated springflows for Comal and San Marcos Springs for the calibrated steady-state simulation were within 2.4 and 15 percent of the median springflows for the two springs, respectively. The transient calibration period for the diffuse-flow Edwards aquifer model was 1947-2000, with 648 monthly stress periods, the same as for the conduit-flow Edwards aquifer model. The root-mean-square error for a period of drought (May-November 1956) for the calibrated transient simulation for 171 target wells is 33.4 feet, which represents about 5 percent of the total head difference across the model area. The root-mean-square error for a period of above-normal rainfall (November 1974-July 1975) for the calibrated transient simulation for 169 target wells is 25.8 feet, which represents about 4 percent of the total head difference across the model area. The root-mean-square error ranged from 6.3 to 30.4 feet in 12 target wells with long-term water-level measurements for varying periods during 1947-2000 for the calibrated transient simulation for the diffuse-flow Edwards aquifer model, and these errors represent 5.0 to 31.3 percent of the range in water-level fluctuations of each of those wells. The root-mean-square errors for the five major springs in the San Antonio segment of the aquifer for the calibrated transient simulation, as a percentage of the range of discharge fluctuations measured at the springs, varied from 7.2 percent for San Marcos Springs and 8.1 percent for Comal Springs to 28.8 percent for Leona Springs. The root-mean-square errors for hydraulic heads for the conduit-flow Edwards aquifer model are 27, 76, and 30 percent greater than those for the diffuse-flow Edwards aquifer model for the steady-state, drought, and above-normal rainfall synoptic time periods, respectively. 
The goodness-of-fit between measured and simulated springflows is similar for Comal, San Marcos, and Leona Springs for the diffuse-flow Edwards aquifer model and the conduit-flow Edwards aquifer model. The root-mean-square errors for Comal and Leona Springs were 15.6 and 21.3 percent less, respectively, whereas the root-mean-square error for San Marcos Springs was 3.3 percent greater for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. The root-mean-square errors for San Antonio and San Pedro Springs were appreciably greater, 80.2 and 51.0 percent, respectively, for the diffuse-flow Edwards aquifer model. The simulated water budgets for the diffuse-flow Edwards aquifer model are similar to those for the conduit-flow Edwards aquifer model. Differences in percentage of total sources or discharges for a budget component are 2.0 percent or less for all budget components for the steady-state and transient simulations. The largest difference in terms of the magnitude of water budget components for the transient simulation for 1956 was a decrease of about 10,730 acre-feet per year (about 2 percent) in springflow for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. This decrease in springflow (a water budget discharge) was largely offset by the decreased net loss of water from storage (a water budget source) of about 10,500 acre-feet per year.

  8. Positivity, discontinuity, finite resources, and nonzero error for arbitrarily varying quantum channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boche, H., E-mail: boche@tum.de, E-mail: janis.noetzel@tum.de; Nötzel, J., E-mail: boche@tum.de, E-mail: janis.noetzel@tum.de

    2014-12-15

    This work is motivated by a quite general question: Under which circumstances are the capacities of information transmission systems continuous? The research is explicitly carried out on finite arbitrarily varying quantum channels (AVQCs). We give an explicit example that answers the recent question whether the transmission of messages over AVQCs can benefit from assistance by distribution of randomness between the legitimate sender and receiver in the affirmative. The specific class of channels introduced in that example is then extended to show that the unassisted capacity does have discontinuity points, while it is known that the randomness-assisted capacity is always continuous in the channel. We characterize the discontinuity points and prove that the unassisted capacity is always continuous around its positivity points. After having established shared randomness as an important resource, we quantify the interplay between the distribution of finite amounts of randomness between the legitimate sender and receiver, the (nonzero) probability of a decoding error with respect to the average error criterion, and the number of messages that can be sent over a finite number of channel uses. We relate our results to the entanglement transmission capacities of finite AVQCs, where the role of shared randomness is not yet well understood, and give a new sufficient criterion for the entanglement transmission capacity with randomness assistance to vanish.

  9. Automobile ride quality experiments correlated to iso-weighted criteria

    NASA Technical Reports Server (NTRS)

    Healey, A. J.; Young, R. K.; Smith, C. C.

    1975-01-01

    As part of an overall study to evaluate the usefulness of ride quality criteria for the design of improved ground transportation systems, an experiment was conducted involving subjective and objective measurement of ride vibrations found in an automobile riding over roadways of various roughness. Correlation of the results led to some very significant relationships between passenger rating and ride accelerations. The latter were collapsed using a frequency-weighted root mean square measure of the random vibration. The results suggest the form of a design criterion giving the relationship between ride vibration and acceptable automobile ride quality. Further, the ride criterion is expressed in terms that relate to rides with which most people are familiar. The design of the experiment, the ride vibration data acquisition, the concept of frequency weighting, and the correlations found between subjective and objective measurements are presented.
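
    The frequency-weighted RMS measure can be written down directly: weight the acceleration PSD by a frequency-dependent comfort curve and integrate. The toy PSD and the crude weighting function below are assumptions for illustration; standardized weightings (e.g., ISO 2631) are tabulated curves.

      import numpy as np

      def weighted_rms(f, psd, weight):
          """Frequency-weighted RMS from a one-sided acceleration PSD on a uniform grid."""
          df = f[1] - f[0]
          return np.sqrt(np.sum(weight(f) ** 2 * psd) * df)

      def w_comfort(f):
          # crude stand-in for a comfort weighting: near 1 around 4-8 Hz, rolling off elsewhere
          return np.clip(np.minimum(f / 4.0, 8.0 / f), 0.0, 1.0)

      f = np.linspace(0.5, 80.0, 800)                  # Hz
      psd = 0.02 / (1.0 + (f / 2.0) ** 2)              # toy acceleration PSD, (m/s^2)^2 per Hz
      print(f"frequency-weighted RMS acceleration = {weighted_rms(f, psd, w_comfort):.3f} m/s^2")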

  10. Computed lateral rate and acceleration power spectral response of conventional and STOL airplanes to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Lichtenstein, J. H.

    1975-01-01

    Power-spectral-density calculations were made of the lateral responses to atmospheric turbulence for several conventional and short take-off and landing (STOL) airplanes. The turbulence was modeled as three orthogonal velocity components, which were uncorrelated, and each was represented with a one-dimensional power spectrum. Power spectral densities were computed for displacements, rates, and accelerations in roll, yaw, and sideslip. In addition, the power spectral density of the transverse acceleration was computed. Evaluation of ride quality based on a specific ride quality criterion was also made. The results show that the STOL airplanes generally had larger values for the rate and acceleration power spectra (and, consequently, larger corresponding root-mean-square values) than the conventional airplanes. The ride quality criterion gave poorer ratings to the STOL airplanes than to the conventional airplanes.

  11. The Short Form Vaping Consequences Questionnaire: Psychometric Properties of a Measure of Vaping Expectancies for Use With Adult E-cigarette Users.

    PubMed

    Morean, Meghan E; L'Insalata, Alexa

    2017-02-01

    E-cigarettes are popular in the United States, but psychometrically sound measures of vaping beliefs and behaviors are lacking. We evaluated the psychometrics of the Short Form Vaping Consequences Questionnaire (S-VCQ), a modified version of the Short Form Smoking Consequences Questionnaire that assesses expectancies for negative consequences, positive reinforcement, negative reinforcement, and appetite/weight control associated with vaping. Adult, past-month e-cigarette users completed an anonymous survey in Fall 2015 (N = 522, 50.4% female; 71.5% white; 34.10 [SD = 9.66] years). Psychometric analyses included confirmatory factor analysis, internal consistency, measurement invariance, t tests, correlations, and test-criterion relationships with vaping outcomes. The S-VCQ evidenced a four-factor latent structure (Bentler's Comparative Fit Index = .95, Root Mean Square Error of Approximation = .05, Standardized Root Mean Square Residual = .06), and subscales evidenced internal consistency (mean α = 0.89). S-VCQ scores were scalar invariant for sex and smoking status; women reported stronger appetite/weight control than men and dual cigarette/e-cigarette users (n = 309) reported stronger negative vaping consequences and negative reinforcement than nonsmokers. Among dual users, vaping and smoking expectancies also were scalar invariant; dual users reported stronger positive reinforcement associated with vaping than smoking but stronger negative consequences, negative reinforcement, and appetite/weight control associated with smoking than vaping. Correlations indicated that vaping and smoking expectancies were related, yet distinct constructs. Univariate general linear models indicated that vaping frequency and dependence were associated with positive reinforcement (ηp2 = .02/.02), negative reinforcement (ηp2 = .02/.08), and appetite/weight control (ηp2 = .02/.02) from vaping. The S-VCQ evidences solid psychometrics as a measure of adult e-cigarette users' vaping expectancies. The current study provides evidence for the reliability and validity of the S-VCQ, the first measure of vaping expectancies that has been validated for use with adult e-cigarette users. Results indicated that the S-VCQ comprises four subscales that evidence internal consistency, scalar measurement invariance for important groups of interest, and test-criterion relationships with vaping outcomes. Researchers are encouraged to consider using this measure for assessing vaping expectancies in adult e-cigarette users.

  12. Flow interference in a variable porosity trisonic wind tunnel.

    NASA Technical Reports Server (NTRS)

    Davis, J. W.; Graham, R. F.

    1972-01-01

    Pressure data from a 20-degree cone-cylinder in a variable porosity wind tunnel for the Mach range 0.2 to 5.0 are compared to an interference free standard in order to determine wall interference effects. Four 20-degree cone-cylinder models representing an approximate range of percent blockage from one to six were compared to curve-fits of the interference free standard at each Mach number and errors determined at each pressure tap location. The average of the absolute values of the percent error over the length of the model was determined and used as the criterion for evaluating model blockage interference effects. The results are presented in the form of the percent error as a function of model blockage and Mach number.

  13. Design of recursive digital filters having specified phase and magnitude characteristics

    NASA Technical Reports Server (NTRS)

    King, R. E.; Condon, G. W.

    1972-01-01

    A method for a computer-aided design of a class of optimum filters, having specifications in the frequency domain of both magnitude and phase, is described. The method, an extension to the work of Steiglitz, uses the Fletcher-Powell algorithm to minimize a weighted squared magnitude and phase criterion. Results using the algorithm for the design of filters having specified phase as well as specified magnitude and phase compromise are presented.

  14. A posteriori error estimates in voice source recovery

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of recovering the voice source pulse from a segment of a speech signal is considered. A special mathematical model relating these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, piecewise-linear sources (PWL-sources). A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. A posteriori error estimates can also serve as a quality criterion for the voice source pulses obtained in speaker-recognition applications.

  15. Investigation of adaptive filtering and MDL mitigation based on space-time block-coding for spatial division multiplexed coherent receivers

    NASA Astrophysics Data System (ADS)

    Weng, Yi; He, Xuan; Yao, Wang; Pacheco, Michelle C.; Wang, Junyi; Pan, Zhongqi

    2017-07-01

    In this paper, we explored the performance of space-time block-coding (STBC) assisted multiple-input multiple-output (MIMO) scheme for modal dispersion and mode-dependent loss (MDL) mitigation in spatial-division multiplexed optical communication systems, whereas the weight matrices of frequency-domain equalization (FDE) were updated heuristically using decision-directed recursive least squares (RLS) algorithm for convergence and channel estimation. The proposed STBC-RLS algorithm can achieve 43.6% enhancement on convergence rate over conventional least mean squares (LMS) for quadrature phase-shift keying (QPSK) signals with merely 16.2% increase in hardware complexity. The overall optical signal to noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, 7.8 dB for QPSK, 16-quadrature amplitude modulation (QAM) and 64-QAM with respective bit-error-rates (BER) and minimum-mean-square-error (MMSE).

  16. Quantitative determination of additive Chlorantraniliprole in Abamectin preparation: Investigation of bootstrapping soft shrinkage approach by mid-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Yan, Hong; Song, Xiangzhong; Tian, Kuangda; Chen, Yilin; Xiong, Yanmei; Min, Shungeng

    2018-02-01

    A novel method, mid-infrared (MIR) spectroscopy, which enables the determination of Chlorantraniliprole in Abamectin within minutes, is proposed. We further evaluate the prediction ability of four wavelength selection methods: the bootstrapping soft shrinkage approach (BOSS), Monte Carlo uninformative variable elimination (MCUVE), genetic algorithm partial least squares (GA-PLS) and competitive adaptive reweighted sampling (CARS). The results showed that the BOSS method obtained the lowest root mean squared error of cross-validation (RMSECV) (0.0245) and root mean squared error of prediction (RMSEP) (0.0271), as well as the highest coefficient of determination of cross-validation (Q²cv) (0.9998) and coefficient of determination of the test set (Q²test) (0.9989), which demonstrates that mid-infrared spectroscopy can be used to detect Chlorantraniliprole in Abamectin conveniently. Meanwhile, a suitable wavelength selection method (BOSS) is essential for conducting a component spectral analysis.

  17. Accuracy of least-squares methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bochev, Pavel B.; Gunzburger, Max D.

    1993-01-01

    Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is the fact that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well-documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. It is found, despite the failure of standard methods for deriving error estimates, that computational evidence suggests that these methods are, at the least, nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.

  18. Nondestructive quantification of the soluble-solids content and the available acidity of apples by Fourier-transform near-infrared spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ying Yibin; Liu Yande; Tao Yang

    2005-09-01

    This research evaluated the feasibility of using Fourier-transform near-infrared (FT-NIR) spectroscopy to quantify the soluble-solids content (SSC) and the available acidity (VA) in intact apples. Partial least-squares calibration models obtained from several preprocessing techniques (smoothing, derivative, etc.) in several wave-number ranges were compared. The best models were obtained with a high coefficient of determination (r²) of 0.940 for the SSC and a moderate r² of 0.801 for the VA, root-mean-square errors of prediction of 0.272% and 0.053%, and root-mean-square errors of calibration of 0.261% and 0.046%, respectively. The results indicate that FT-NIR spectroscopy yields good predictions of the SSC and also show the feasibility of using it to predict the VA of apples.

  19. Aerodynamic coefficient identification package dynamic data accuracy determinations: Lessons learned

    NASA Technical Reports Server (NTRS)

    Heck, M. L.; Findlay, J. T.; Compton, H. R.

    1983-01-01

    The errors in the dynamic data output from the Aerodynamic Coefficient Identification Packages (ACIP) flown on Shuttle flights 1, 3, 4, and 5 were determined using the output from the Inertial Measurement Units (IMU). A weighted least-squares batch algorithm was employed. Using an averaging technique, signal detection was enhanced; this allowed improved calibration solutions. Global errors as large as 0.04 deg/sec for the ACIP gyros, 30 mg for the linear accelerometers, and 0.5 deg/sec² in the angular accelerometer channels were detected and removed with a combination of bias, scale-factor, misalignment, and g-sensitive calibration constants. No attempt was made to minimize local ACIP dynamic data deviations representing sensed high-frequency vibration or instrument noise. The resulting 1σ calibrated ACIP global accuracies were within 0.003 deg/sec, 1.0 mg, and 0.05 deg/sec² for the gyros, linear accelerometers, and angular accelerometers, respectively.

  20. Combating speckle in SAR images - Vector filtering and sequential classification based on a multiplicative noise model

    NASA Technical Reports Server (NTRS)

    Lin, Qian; Allebach, Jan P.

    1990-01-01

    An adaptive vector linear minimum mean-squared error (LMMSE) filter for multichannel images with multiplicative noise is presented. It is shown theoretically that the mean-squared error in the filter output is reduced by making use of the correlation between image bands. The vector and conventional scalar LMMSE filters are applied to a three-band SIR-B SAR image, and their performance is compared. Based on a multiplicative noise model, the per-pel maximum likelihood classifier is derived. The authors extend this to the design of sequential and robust classifiers. These classifiers are also applied to the three-band SIR-B SAR image.
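
    A scalar LMMSE (Lee-type) despeckling filter under a multiplicative noise model y = x·v with E[v] = 1 can be sketched as below; the paper's vector filter additionally exploits correlation between bands. The window size, noise variance, and test scene are assumptions for the sketch.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def lee_filter(img, win=5, noise_var=0.05):
          """Scalar LMMSE despeckling: x_hat = mean + k * (y - mean), with k from local statistics."""
          mean = uniform_filter(img, win)
          var_y = np.maximum(uniform_filter(img ** 2, win) - mean ** 2, 0.0)
          # multiplicative model implies var_y = var_x * (1 + noise_var) + noise_var * mean**2
          var_x = np.maximum((var_y - noise_var * mean ** 2) / (1.0 + noise_var), 0.0)
          k = var_x / np.maximum(var_y, 1e-12)
          return mean + k * (img - mean)

      rng = np.random.default_rng(4)
      clean = np.kron(rng.uniform(0.5, 1.5, (8, 8)), np.ones((16, 16)))   # blocky test scene
      speckled = clean * rng.gamma(20.0, 1 / 20.0, clean.shape)           # speckle, mean 1, var 0.05
      print("MSE before:", np.mean((speckled - clean) ** 2))
      print("MSE after: ", np.mean((lee_filter(speckled) - clean) ** 2))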

  1. An empirical Bayes approach for the Poisson life distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1973-01-01

    A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
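
    The Monte Carlo comparison described can be reproduced in miniature: draw Poisson rates from a prior, observe counts, and compare the mean-squared error of the maximum likelihood estimate (the count itself) against a parametric empirical Bayes estimate whose hyperparameters are moment-matched from the pooled counts. The gamma prior and moment matching are assumptions for this sketch, not the paper's smooth EB estimator.

      import numpy as np

      rng = np.random.default_rng(5)
      m = 400                                     # units under test
      lam = rng.gamma(3.0, 2.0, m)                # true hazard rates, gamma prior assumed
      x = rng.poisson(lam)                        # observed failure counts

      xbar, s2 = x.mean(), x.var()
      beta = xbar / max(s2 - xbar, 1e-9)          # moment-matched gamma rate
      alpha = xbar * beta                         # moment-matched gamma shape
      eb = (x + alpha) / (1.0 + beta)             # posterior-mean empirical Bayes estimate

      print("MSE, maximum likelihood:", np.mean((x - lam) ** 2))
      print("MSE, empirical Bayes:   ", np.mean((eb - lam) ** 2))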

  2. Reliable and accurate extraction of Hamaker constants from surface force measurements.

    PubMed

    Miklavcic, S J

    2018-08-15

    A simple and accurate closed-form expression for the Hamaker constant that best represents experimental surface force data is presented. Numerical comparisons are made with the current standard least-squares approach, which falsely assumes error-free separation measurements, and with a nonlinear version assuming that independent measurements of force and separation are both subject to error. The comparisons demonstrate that the proposed formula is not only easily implemented but also considerably more accurate. This option is appropriate for any value of the Hamaker constant, high or low, and certainly for any interacting system exhibiting an inverse-square distance-dependent van der Waals force.

  3. Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov-Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted ℓ²-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted ℓ²-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
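
    The contrast between Galerkin projection and the weighted least-squares projection can be seen in a few lines of linear algebra (assumed toy notation, not the authors' code): given a system Au = b and a reduced basis V, Galerkin enforces V^T(AVc - b) = 0, while LSPG picks the coefficients c minimizing ||W(AVc - b)||.

      import numpy as np

      rng = np.random.default_rng(6)
      n, k = 50, 5
      A = np.eye(n) + 0.1 * rng.normal(size=(n, n))      # stand-in for a parameterized system matrix
      b = rng.normal(size=n)
      V = np.linalg.qr(rng.normal(size=(n, k)))[0]       # reduced basis with orthonormal columns
      W = np.diag(rng.uniform(0.5, 2.0, n))              # weighting that defines the target norm

      c_gal = np.linalg.solve(V.T @ A @ V, V.T @ b)                # Galerkin projection
      c_lspg = np.linalg.lstsq(W @ A @ V, W @ b, rcond=None)[0]    # weighted least squares (LSPG)

      for name, c in (("Galerkin", c_gal), ("LSPG", c_lspg)):
          print(f"{name:8s} weighted residual norm {np.linalg.norm(W @ (A @ V @ c - b)):.4f}")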

  4. [Gaussian process regression and its application in near-infrared spectroscopy analysis].

    PubMed

    Feng, Ai-Ming; Fang, Li-Min; Lin, Min

    2011-06-01

    Gaussian process (GP) regression is applied in the present paper as a chemometric method to explore the complicated relationship between near infrared (NIR) spectra and ingredients. After outliers were detected by the Monte Carlo cross validation (MCCV) method and removed from the dataset, different preprocessing methods, such as multiplicative scatter correction (MSC), smoothing and derivatives, were tried for the best performance of the models. Furthermore, uninformative variable elimination (UVE) was introduced as a variable selection technique and the characteristic wavelengths obtained were further employed as input for modeling. A public dataset with 80 NIR spectra of corn was used as an example for evaluating the new algorithm. The optimal models for oil, starch and protein were obtained by the GP regression method. The performance of the final models was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV), root mean square error of prediction (RMSEP) and correlation coefficient (r). The models give good calibration ability, with r values above 0.99, and the prediction ability is also satisfactory, with r values higher than 0.96. The overall results demonstrate that the GP algorithm is an effective chemometric method and is promising for NIR analysis.
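
    A minimal GP-regression sketch in the same spirit (synthetic one-dimensional data standing in for spectra; the kernel choice and noise level are assumptions):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(7)
      X = rng.uniform(0, 10, (40, 1))                    # stand-in for a spectral feature
      y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 40)       # stand-in for ingredient content

      gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01), normalize_y=True)
      gp.fit(X[:30], y[:30])                             # calibration set
      pred = gp.predict(X[30:])                          # prediction set
      print(f"RMSEP = {np.sqrt(np.mean((pred - y[30:]) ** 2)):.3f}")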

  5. Optimal least-squares finite element method for elliptic problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Povinelli, Louis A.

    1991-01-01

    An optimal least squares finite element method is proposed for two dimensional and three dimensional elliptic problems and its advantages are discussed over the mixed Galerkin method and the usual least squares finite element method. In the usual least squares finite element method, the second order equation (-∇·(∇u) + u = f) is recast as a first order system (-∇·p + u = f, ∇u - p = 0). The error analysis and numerical experiment show that, in this usual least squares finite element method, the rate of convergence for flux p is one order lower than optimal. In order to get an optimal least squares method, the irrotationality condition ∇×p = 0 should be included in the first order system.

  6. Performance of the S-χ² Statistic for Full-Information Bifactor Models

    ERIC Educational Resources Information Center

    Li, Ying; Rupp, Andre A.

    2011-01-01

    This study investigated the Type I error rate and power of the multivariate extension of the S-χ² statistic using unidimensional and multidimensional item response theory (UIRT and MIRT, respectively) models as well as full-information bifactor (FI-bifactor) models through simulation. Manipulated factors included test length, sample…

  7. F-Test Alternatives to Fisher's Exact Test and to the Chi-Square Test of Homogeneity in 2x2 Tables.

    ERIC Educational Resources Information Center

    Overall, John E.; Starbuck, Robert R.

    1983-01-01

    An alternative to Fisher's exact test and the chi-square test for homogeneity in two-by-two tables is developed. The method provides for Type I error rates which are closer to the stated alpha level than either of the alternatives. (JKS)

  8. Multilevel Modeling and Ordinary Least Squares Regression: How Comparable Are They?

    ERIC Educational Resources Information Center

    Huang, Francis L.

    2018-01-01

    Studies analyzing clustered data sets using both multilevel models (MLMs) and ordinary least squares (OLS) regression have generally concluded that resulting point estimates, but not the standard errors, are comparable with each other. However, the accuracy of the estimates of OLS models is important to consider, as several alternative techniques…

  9. Discrete Tchebycheff orthonormal polynomials and applications

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
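
    The construction can be sketched by orthonormalizing monomials over the sample points, after which a least-squares polynomial fit reduces to independent inner products; this decoupling is what tames roundoff in higher-order fits. The QR-based construction below is an assumed stand-in for the classical three-term recurrence.

      import numpy as np

      def discrete_orthonormal_basis(x, degree):
          """Orthonormalize 1, x, x^2, ... with respect to the discrete inner product over x."""
          V = np.vander(x, degree + 1, increasing=True).astype(float)
          Q, _ = np.linalg.qr(V)          # columns are orthonormal over the sample points
          return Q

      x = np.linspace(-1.0, 1.0, 101)
      y = np.exp(x)                       # data to fit
      Q = discrete_orthonormal_basis(x, 6)
      coeffs = Q.T @ y                    # each coefficient is an independent projection
      print(f"max abs error of the degree-6 fit: {np.max(np.abs(Q @ coeffs - y)):.2e}")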

  10. Nondestructive evaluation of soluble solid content in strawberry by near infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Guo, Zhiming; Huang, Wenqian; Chen, Liping; Wang, Xiu; Peng, Yankun

    This paper indicates the feasibility of using near infrared (NIR) spectroscopy combined with synergy interval partial least squares (siPLS) algorithms as a rapid nondestructive method to estimate the soluble solid content (SSC) in strawberry. Spectral preprocessing methods were selected by cross-validation in the model calibration. The partial least squares (PLS) algorithm was used to calibrate the regression model. The performance of the final model was evaluated according to the root mean square error of calibration (RMSEC) and correlation coefficient (R²c) in the calibration set, and tested by the root mean square error of prediction (RMSEP) and correlation coefficient (R²p) in the prediction set. The optimal siPLS model was obtained after first-derivative spectral preprocessing. The measurement results of the best model were as follows: RMSEC = 0.2259, R²c = 0.9590 in the calibration set; and RMSEP = 0.2892, R²p = 0.9390 in the prediction set. This work demonstrates that NIR spectroscopy and siPLS with efficient spectral preprocessing are a useful tool for nondestructive evaluation of SSC in strawberry.

  11. Molecular radiotherapy: the NUKFIT software for calculating the time-integrated activity coefficient.

    PubMed

    Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G

    2013-10-01

    Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for the use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the used data. To estimate the values of the adjustable parameters an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions which are most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented using MATLAB. To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. Function fit parameters and their standard error estimated by using SAAM numerical and NUKFIT showed differences of <1%. The differences for the time-integrated activity coefficients were also <1% (standard error between 0.4% and 3%). In general, the application of the software is user-friendly and the results are mathematically correct and reproducible. An application of NUKFIT is presented for three different clinical examples. The software tool with its underlying methodology can be employed to objectively and reproducibly estimate the time integrated activity coefficient and its standard error for most time activity data in molecular radiotherapy.
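
    The core computation, fitting a sum of exponentials to time-activity data and integrating it analytically, can be sketched as follows. The biexponential form, synthetic data, and starting values are assumptions; the paper's function selection, error models, and Bayesian information are omitted.

      import numpy as np
      from scipy.optimize import curve_fit

      def biexp(t, a1, k1, a2, k2):
          """Sum of two decaying exponentials for a time-activity curve."""
          return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

      t = np.array([1.0, 4.0, 24.0, 48.0, 96.0])        # hours after administration
      a = np.array([0.80, 0.65, 0.35, 0.20, 0.07])      # fraction of administered activity

      p, _ = curve_fit(biexp, t, a, p0=(0.5, 0.3, 0.3, 0.02), bounds=(0.0, np.inf))
      a1, k1, a2, k2 = p
      tiac = a1 / k1 + a2 / k2                          # analytic integral from 0 to infinity
      print(f"time-integrated activity coefficient ~ {tiac:.1f} h")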

  12. The development of a combined mathematical model to forecast the incidence of hepatitis E in Shanghai, China.

    PubMed

    Ren, Hong; Li, Jian; Yuan, Zheng-An; Hu, Jia-Yu; Yu, Yan; Lu, Yi-Han

    2013-09-08

    Sporadic hepatitis E has become an important public health concern in China. Accurate forecasting of the incidence of hepatitis E is needed to better plan future medical needs. Few mathematical models can be used because hepatitis E morbidity data has both linear and nonlinear patterns. We developed a combined mathematical model using an autoregressive integrated moving average model (ARIMA) and a back propagation neural network (BPNN) to forecast the incidence of hepatitis E. The morbidity data of hepatitis E in Shanghai from 2000 to 2012 were retrieved from the China Information System for Disease Control and Prevention. The ARIMA-BPNN combined model was trained with 144 months of morbidity data from January 2000 to December 2011, validated with 12 months of data January 2012 to December 2012, and then employed to forecast hepatitis E incidence January 2013 to December 2013 in Shanghai. Residual analysis, Root Mean Square Error (RMSE), normalized Bayesian Information Criterion (BIC), and stationary R² methods were used to compare the goodness-of-fit among ARIMA models. The Bayesian regularization back-propagation algorithm was used to train the network. The mean error rate (MER) was used to assess the validity of the combined model. A total of 7,489 hepatitis E cases was reported in Shanghai from 2000 to 2012. Goodness-of-fit (stationary R² = 0.531, BIC = -4.768, Ljung-Box Q = 15.59, P = 0.482) and parameter estimates were used to determine the best-fitting model as ARIMA(0,1,1)×(0,1,1)₁₂. Predicted morbidity values in 2012 from the best-fitting ARIMA model and actual morbidity data from 2000 to 2011 were used to further construct the combined model. The MER of the ARIMA model and the ARIMA-BPNN combined model were 0.250 and 0.176, respectively. The forecasted incidence of hepatitis E in 2013 was 0.095 to 0.372 per 100,000 population. There was a seasonal variation with a peak during January-March and a nadir during August-October. Time series analysis suggested a seasonal pattern of hepatitis E morbidity in Shanghai, China. An ARIMA-BPNN combined model was used to fit the linear and nonlinear patterns of time series data, and accurately forecast hepatitis E infections.
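
    The hybrid scheme, a seasonal ARIMA fit whose residuals are then modeled by a small neural network, can be sketched with statsmodels and scikit-learn. The synthetic monthly series, model orders, lag window, and network size below are assumptions, not the study's configuration.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(8)
      n = 156                                            # 13 years of monthly data
      t = np.arange(n)
      y = 0.3 + 0.1 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.05, n)  # synthetic incidence

      fit = ARIMA(y, order=(0, 1, 1), seasonal_order=(0, 1, 1, 12)).fit()
      resid = fit.resid                                  # the linear model's leftovers

      lags = 12                                          # feed the last 12 residuals to the network
      X = np.column_stack([resid[i:n - lags + i] for i in range(lags)])
      z = resid[lags:]
      mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
      mlp.fit(X[:-12], z[:-12])                          # hold out the final year

      combined = fit.predict(start=n - 12, end=n - 1) + mlp.predict(X[-12:])
      print("mean error rate:", np.mean(np.abs(combined - y[-12:])) / np.mean(y[-12:]))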

  13. Airborne data measurement system errors reduction through state estimation and control optimization

    NASA Astrophysics Data System (ADS)

    Sebryakov, G. G.; Muzhichek, S. M.; Pavlov, V. I.; Ermolin, O. V.; Skrinnikov, A. A.

    2018-02-01

    The paper discusses the problem of reducing airborne data measurement system errors through state estimation and control optimization. Approaches are proposed based on methods of experiment design and the theory of systems with random abrupt structure variation. The paper considers various control criteria as applied to an aircraft data measurement system. The physics of the criteria is explained, and the mathematical description and the sequence of steps for applying each criterion are shown. A formula is given for posterior estimation of the airborne data measurement system state vector for systems with structure variations.

  14. Hypoglycemia early alarm systems based on recursive autoregressive partial least squares models.

    PubMed

    Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick

    2013-01-01

    Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems of potential hypoglycemia. A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Glucose prediction models based on real-time filtered data has a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm systems based on RARPLS shows good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positive/day are obtained for the early alarm system based on six-step-ahead predicted glucose values with an average early detection time of 25.25 min. The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia unless preventive action is taken far in advance. © 2012 Diabetes Technology Society.

  15. Hypoglycemia Early Alarm Systems Based on Recursive Autoregressive Partial Least Squares Models

    PubMed Central

    Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick

    2013-01-01

    Background Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems of potential hypoglycemia. Methods A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Results Glucose prediction models based on real-time filtered data has a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm systems based on RARPLS shows good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positive/day are obtained for the early alarm system based on six-step-ahead predicted glucose values with an average early detection time of 25.25 min. Conclusions The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia unless preventive action is taken far in advance. PMID:23439179

  16. Implementing an Equilibrium Law Teaching Sequence for Secondary School Students to Learn Chemical Equilibrium

    ERIC Educational Resources Information Center

    Ghirardi, Marco; Marchetti, Fabio; Pettinari, Claudio; Regis, Alberto; Roletto, Ezio

    2015-01-01

    A didactic sequence is proposed for the teaching of chemical equilibrium law. In this approach, we have avoided the kinetic derivation and the thermodynamic justification of the equilibrium constant. The equilibrium constant expression is established empirically by a trial-and-error approach. Additionally, students learn to use the criterion of…

  17. Spline curve matching with sparse knot sets

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Neil A. Clark; Philip A. Araman

    2004-01-01

    This paper presents a new curve matching method for deformable shapes using two-dimensional splines. In contrast to the residual error criterion, which is based on the relative locations of corresponding knot points and is therefore reliable primarily for dense point sets, we use the deformation energy of a thin-plate-spline mapping between sparse knot points and normalized local...

  18. Cultural and Ethnic Bias in Teacher Ratings of Behavior: A Criterion-Focused Review

    ERIC Educational Resources Information Center

    Mason, Benjamin A.; Gunersel, Adalet Baris; Ney, Emilie A.

    2014-01-01

    Behavior rating scales are indirect measures of emotional and social functioning used for assessment purposes. Rater bias is systematic error that may compromise the validity of behavior rating scale scores. Teacher bias in ratings of behavior has been investigated in multiple studies, but not yet assessed in a research synthesis that focuses on…

  19. No Special K! A Signal Detection Framework for the Strategic Regulation of Memory Accuracy

    ERIC Educational Resources Information Center

    Higham, Philip A.

    2007-01-01

    Two experiments investigated criterion setting and metacognitive processes underlying the strategic regulation of accuracy on the Scholastic Aptitude Test (SAT) using Type-2 signal detection theory (SDT). In Experiment 1, report bias was manipulated by penalizing participants either 0.25 (low incentive) or 4 (high incentive) points for each error.…

  20. Determining the Uncertainty of X-Ray Absorption Measurements

    PubMed Central

    Wojcik, Gary S.

    2004-01-01

    X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well-approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2 % by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10 % in any of the tests conducted. PMID:27366627
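
    The Poisson benchmark used throughout this paper is easy to reproduce: for repeated measurements with mean count N, the normalized root mean square error should approach 1/sqrt(N). A self-contained numerical check follows (assumed count levels, not the authors' apparatus):

      import numpy as np

      rng = np.random.default_rng(9)
      for mean_counts in (400, 4000, 40000):
          counts = rng.poisson(mean_counts, 10000)       # repeated x-ray count measurements
          nrmse = np.sqrt(np.mean((counts - mean_counts) ** 2)) / mean_counts
          print(f"N = {mean_counts:6d}: empirical NRMSE {nrmse:.5f} vs 1/sqrt(N) {1 / np.sqrt(mean_counts):.5f}")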
