Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values
2016-12-01
Abstract (excerpts): MMSE (minimum mean-square error) target sample estimation using non-orthogonal basis... orthogonal, they can still be used in a minimum mean-square error (MMSE) estimator that models the object echo as a weighted sum of the multi-aspect basis... Introduction: Minimum mean-square error (MMSE) estimation is applied to target imaging with synthetic aperture sonar.
Park, Sangsoo; Spirduso, Waneen; Eakin, Tim; Abraham, Lawrence
2018-01-01
The authors investigated how varying the required low-level forces and the direction of force change affect accuracy and variability of force production in a cyclic isometric pinch force tracking task. Eighteen healthy right-handed adult volunteers performed the tracking task over 3 different force ranges. Root mean square error and coefficient of variation were higher at lower force levels and during minimum reversals compared with maximum reversals. Overall, the thumb showed greater root mean square error and coefficient of variation scores than did the index finger during maximum reversals, but not during minimum reversals. The observed impaired performance during minimum reversals might originate from history-dependent mechanisms of force production and highly coupled 2-digit performance.
Optimum nonparametric estimation of population density based on ordered distances
Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.
1982-01-01
The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. An extension to line-transect sampling is given.
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
NASA Astrophysics Data System (ADS)
Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong
2013-04-01
Overpressure is an important cause of the domino effect in chemical process equipment accidents. Previous studies have proposed models with propagation probabilities and threshold values for the domino effect caused by overpressure. In order to test the rationality and validity of the models reported in the references, the two boundary values separating the three reported damage degrees were each treated as random variables in the interval [0, 100%]. Based on the overpressure data for damage to equipment, the damage states, and the calculation method reported in the references, the mean square errors of the four categories of overpressure damage probability models were calculated with random boundary values, yielding a relationship between mean square error and the two boundary values, and hence the minimum of the mean square error. Compared with the result of the present work, the mean square error decreases by only about 3%; this error is within the acceptable range for engineering applications, so the reported models can be considered reasonable and valid.
Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera
NASA Astrophysics Data System (ADS)
Rahman, Samiur; Ullah, Sana; Ullah, Sehat
2018-01-01
Obstacle detection can improve the mobility as well as the safety of visually impaired people. In this paper, we present a system using a mobile camera for visually impaired people. The proposed algorithm works in indoor environments and uses a very simple technique based on a few pre-stored floor images: all unique floor types in the environment are considered, and a single reference image is stored for each. The algorithm acquires an input image frame, selects a region of interest, and scans it for obstacles using the pre-stored floor images. It compares the present frame with the next frame and computes the mean square error between the two. If the mean square error is less than a threshold value α, there is no obstacle in the next frame. If it is greater than α, there are two possibilities: either there is an obstacle, or the floor type has changed. To check whether the floor has changed, the algorithm computes the mean square error between the next frame and all stored floor types. If the minimum of these errors is less than α, the floor has changed; otherwise, there is an obstacle. The proposed algorithm works in real time, and 96% accuracy has been achieved.
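As a rough illustration of the decision logic described above, the following Python sketch separates the three cases. The threshold α and the stored floor images come from the abstract; the synthetic frames and the value of α are invented for illustration.

```python
import numpy as np

def mse(a, b):
    """Mean square error between two equally sized grayscale frames."""
    return np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)

def classify_next_frame(present, nxt, floor_refs, alpha):
    """Return 'clear', 'floor_changed', or 'obstacle' for the next frame."""
    if mse(present, nxt) < alpha:
        return "clear"                        # frames agree: no obstacle
    # frames differ: either an obstacle appeared or the floor type changed
    if min(mse(nxt, ref) for ref in floor_refs) < alpha:
        return "floor_changed"                # next frame matches a stored floor
    return "obstacle"

rng = np.random.default_rng(0)
floor = rng.integers(0, 255, (64, 64))        # stand-in reference floor image
frame1, frame2 = floor.copy(), floor.copy()
frame2[20:40, 20:40] = 0                      # a dark object enters the scene
print(classify_next_frame(frame1, frame2, [floor], alpha=50.0))  # -> "obstacle"
```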
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
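A small Monte Carlo sketch of the kind of comparison described, assuming a gamma prior on the Poisson rate; the true rate, sample size, and prior parameters here are invented for illustration and need not match the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true, n = 2.0, 10           # true Poisson rate, sample size per replication
a, b = 2.0, 1.0                 # assumed gamma prior (shape a, rate b)
reps = 100_000

x = rng.poisson(lam_true, size=(reps, n))
mle = x.mean(axis=1)                        # maximum likelihood / MVU estimate
bayes = (a + x.sum(axis=1)) / (b + n)       # posterior mean under gamma(a, b)

print("MSE of MLE  :", np.mean((mle - lam_true) ** 2))
print("MSE of Bayes:", np.mean((bayes - lam_true) ** 2))
```

With the prior centered near the true rate, the Bayes estimator's empirical mean-squared error comes out smaller, consistent with the abstract's conclusion.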
Zollanvari, Amin; Dougherty, Edward R
2014-06-01
The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions for the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
Neural self-tuning adaptive control of non-minimum phase system
NASA Technical Reports Server (NTRS)
Ho, Long T.; Bialasiewicz, Jan T.; Ho, Hai T.
1993-01-01
The motivation for this research arose when a neural network direct adaptive control scheme was applied to control the tip position of a flexible robotic arm. Satisfactory control performance was not attainable due to the inherent non-minimum phase characteristics of the flexible robotic arm tip. Most existing neural network control algorithms are based on the direct method and exhibit highly sensitive, if not unstable, closed-loop behavior. Therefore, a neural self-tuning control (NSTC) algorithm is developed and applied to this problem, showing promising results. Simulation results of the NSTC scheme and the conventional self-tuning regulator (STR) control scheme are used to examine performance factors such as control tracking mean square error, estimation mean square error, transient response, and steady-state response.
Improving Bandwidth Utilization in a 1 Tbps Airborne MIMO Communications Downlink
2013-03-21
number of transmitters).

$$C = \log_2 \left| I_{N_r} + \frac{E_s}{N_t N_0} H H^H \right| \qquad (2.32)$$

In the signal-to-noise ratio, $E_s$ represents the total energy from all transmitters... The channel matrix pseudo-inverse is computed by (2.36) [6, p. 970]:

$$H^{+} = \left( H^H H \right)^{-1} H^H \qquad (2.36)$$

2.6.5 Minimum Mean-Squared Error Detection. ...

$$H^{\dagger} = \left( H^H H + \frac{N_t}{\mathrm{SNR}} I \right)^{-1} H^H \qquad (3.14)$$

Equation (3.14) was defined in [2] as an implementation of an MMSE equalizer, and was applied to the received...
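A numerical sketch of the two detectors in equations (2.36) and (3.14). The dimensions, QPSK constellation, and noise scaling are assumptions for illustration.

```python
import numpy as np

def zf_matrix(H):
    """Zero-forcing detector H+ = (H^H H)^-1 H^H, as in eq. (2.36)."""
    return np.linalg.inv(H.conj().T @ H) @ H.conj().T

def mmse_matrix(H, snr):
    """MMSE detector (H^H H + (Nt/SNR) I)^-1 H^H, as in eq. (3.14)."""
    Nt = H.shape[1]
    return np.linalg.inv(H.conj().T @ H + (Nt / snr) * np.eye(Nt)) @ H.conj().T

rng = np.random.default_rng(1)
Nr, Nt, snr = 4, 4, 10.0
H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s = qpsk[rng.integers(0, 4, Nt)]                       # transmitted symbols
n = (rng.normal(size=Nr) + 1j * rng.normal(size=Nr)) * np.sqrt(Nt / (2 * snr))
y = H @ s + n                                          # received vector
print("ZF  :", np.round(zf_matrix(H) @ y, 2))
print("MMSE:", np.round(mmse_matrix(H, snr) @ y, 2))
```

At low SNR the MMSE matrix's regularizing term keeps the inverse well conditioned, which is why it outperforms plain zero forcing there; at high SNR the two coincide.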
Metameric MIMO-OOK transmission scheme using multiple RGB LEDs.
Bui, Thai-Chien; Cusani, Roberto; Scarano, Gaetano; Biagi, Mauro
2018-05-28
In this work, we propose a novel visible light communication (VLC) scheme utilizing multiple red-green-blue LED triplets, each with a different emission spectrum for red, green, and blue, to mitigate the interference between colors under spatial multiplexing. On-off keying modulation is considered, and its effect on light emission in terms of flickering, dimming, and color rendering is discussed so as to demonstrate how metameric properties have been taken into account. At the receiver, multiple photodiodes, each with a color filter tuned to one transmitting light-emitting diode (LED), are employed. Three different detection mechanisms are then proposed: color zero forcing, minimum mean square error estimation, and minimum mean square error equalization. The system performance of the proposed scheme is evaluated both with computer simulations and with tests on an Arduino board implementation.
NASA Technical Reports Server (NTRS)
Lin, Qian; Allebach, Jan P.
1990-01-01
An adaptive vector linear minimum mean-squared error (LMMSE) filter for multichannel images with multiplicative noise is presented. It is shown theoretically that the mean-squared error in the filter output is reduced by making use of the correlation between image bands. The vector and conventional scalar LMMSE filters are applied to a three-band SIR-B SAR image, and their performance is compared. Based on a multiplicative noise model, the per-pel maximum likelihood classifier is derived. The authors extend this to the design of sequential and robust classifiers. These classifiers are also applied to the three-band SIR-B SAR image.
An empirical Bayes approach for the Poisson life distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1973-01-01
A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng
2006-12-01
An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and the squared-error variation into a forgetting factor. For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm with a fuzzy-inference-controlled step size. This receiver provides both fast convergence/tracking capability and small steady-state misadjustment compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB, respectively, in bit-error rate (BER) over multipath fading channels.
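The fuzzy inference engine itself is not reproduced here; the sketch below substitutes a simple error-energy rule (Kwong-Johnston style) for the fuzzy-controlled step size, to show the general shape of a variable step-size LMS update. All parameter values and the test signal are assumptions.

```python
import numpy as np

def vss_lms(x, d, L=8, mu_min=1e-4, mu_max=0.05, alpha=0.97, gamma=1e-3):
    """Variable step-size LMS: mu is raised after large errors and decays
    otherwise, a crude stand-in for the paper's fuzzy-controlled step size."""
    w, mu = np.zeros(L), mu_min
    err = np.zeros(len(d))
    for n in range(L, len(d)):
        u = x[n - L + 1:n + 1][::-1]      # regressor, most recent sample first
        e = d[n] - w @ u                  # a priori estimation error
        mu = np.clip(alpha * mu + gamma * e * e, mu_min, mu_max)
        w += mu * e * u                   # LMS weight update
        err[n] = e
    return w, err

rng = np.random.default_rng(2)
x = rng.normal(size=4000)                               # input signal
h_true = np.array([1.0, 0.5, -0.3, 0.1, 0.05, 0.0, 0.0, 0.0])
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.normal(size=len(x))
w, e = vss_lms(x, d)
print("final weight error:", np.linalg.norm(w - h_true))
```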
NASA Astrophysics Data System (ADS)
Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai
2016-07-01
Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
NASA Astrophysics Data System (ADS)
Lobit, P.; López Pérez, L.; Lhomme, J. P.; Gómez Tagle, A.
2017-07-01
This study evaluates the dew point method (Allen et al. 1998) for estimating atmospheric vapor pressure from minimum temperature, and proposes an improved model to estimate it from maximum and minimum temperatures. Both methods were evaluated on 786 weather stations in Mexico. The dew point method induced positive bias in dry areas and negative bias in coastal areas, and its average root mean square error over all evaluated stations was 0.38 kPa. The improved model assumes a bi-linear relation between the estimated vapor pressure deficit (the difference between saturated vapor pressure at minimum and average temperature) and the measured vapor pressure deficit. The parameters of these relations were estimated from historical annual median values of relative humidity. This model removed the bias and achieved a root mean square error of 0.31 kPa. When no historical measurements of relative humidity are available, empirical relations were proposed to estimate it from latitude and altitude, with only a slight degradation in model accuracy (RMSE = 0.33 kPa, bias = -0.07 kPa). The applicability of the method to other environments is discussed.
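For reference, the baseline dew point method rests on the FAO-56 saturation vapor pressure formula (Allen et al. 1998) with the dew point approximated by the minimum temperature. The sketch below shows only this baseline and the estimated vapor pressure deficit it implies; the paper's improved bi-linear model parameters are not reproduced, and the example temperatures are invented.

```python
import numpy as np

def sat_vp(t_celsius):
    """Saturation vapor pressure in kPa (FAO-56, Allen et al. 1998)."""
    return 0.6108 * np.exp(17.27 * t_celsius / (t_celsius + 237.3))

def ea_dew_point(t_min):
    """Dew point method: actual vapor pressure, assuming Tdew ~ Tmin."""
    return sat_vp(t_min)

tmin, tmax = 12.0, 28.0                       # example daily extremes (deg C)
ea = ea_dew_point(tmin)
# estimated VPD as in the abstract: e0 at average temperature minus e0 at Tmin
vpd_est = sat_vp(0.5 * (tmin + tmax)) - sat_vp(tmin)
print(f"ea = {ea:.2f} kPa, estimated VPD = {vpd_est:.2f} kPa")
```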
Adaptive control strategies for flexible robotic arm
NASA Technical Reports Server (NTRS)
Bialasiewicz, Jan T.
1993-01-01
The motivation for this research arose when a neural network direct adaptive control scheme was applied to control the tip position of a flexible robotic arm. Satisfactory control performance was not attainable due to the inherent non-minimum phase characteristics of the flexible robotic arm tip. Most existing neural network control algorithms are based on the direct method and exhibit highly sensitive, if not unstable, closed-loop behavior. Therefore, a neural self-tuning control (NSTC) algorithm is developed and applied to this problem, showing promising results. Simulation results of the NSTC scheme and the conventional self-tuning regulator (STR) control scheme are used to examine performance factors such as control tracking mean square error, estimation mean square error, transient response, and steady-state response.
NASA Astrophysics Data System (ADS)
Weng, Yi; He, Xuan; Yao, Wang; Pacheco, Michelle C.; Wang, Junyi; Pan, Zhongqi
2017-07-01
In this paper, we explore the performance of a space-time block-coding (STBC) assisted multiple-input multiple-output (MIMO) scheme for modal dispersion and mode-dependent loss (MDL) mitigation in spatial-division multiplexed optical communication systems, in which the weight matrices of the frequency-domain equalization (FDE) are updated heuristically using a decision-directed recursive least squares (RLS) algorithm for convergence and channel estimation. The proposed STBC-RLS algorithm achieves a 43.6% enhancement in convergence rate over conventional least mean squares (LMS) for quadrature phase-shift keying (QPSK) signals with merely a 16.2% increase in hardware complexity. The overall optical signal-to-noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16-quadrature amplitude modulation (QAM), and 64-QAM, at their respective bit-error rates (BER), with minimum-mean-square-error (MMSE) equalization.
RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization
NASA Astrophysics Data System (ADS)
Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond that possible with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) scheme based on the MMSE criterion. With MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using a one-tap recursive least squares (RLS) algorithm, in which the forgetting factor is adapted to the changing channel condition by the least mean square (LMS) algorithm, for DS-CDMA with FDE. We evaluate the BER performance using RLS-CE with an adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.
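A minimal sketch of a one-tap RLS channel estimate with an error-driven forgetting-factor adaptation. The adaptation rule here is a crude stand-in for the paper's LMS rule: a large instantaneous error pushes the forgetting factor down (faster tracking), a small one lets it relax toward 1 (more smoothing). All parameter values, including the target error power, are assumptions.

```python
import numpy as np

def one_tap_rls(pilot, obs, lam0=0.95, lam_min=0.90, lam_max=0.999,
                mu=0.05, target=0.01):
    """One-tap RLS channel estimate with an adaptive forgetting factor."""
    h, P, lam = 0.0 + 0.0j, 1.0, lam0
    for p, d in zip(pilot, obs):
        e = d - h * p                                 # a priori error
        g = P * np.conj(p) / (lam + abs(p) ** 2 * P)  # scalar RLS gain
        h += g * e                                    # channel update
        P = (1.0 - g * p) * P / lam
        # assumed adaptation rule: big errors -> smaller lambda
        lam = float(np.clip(lam - mu * (abs(e) ** 2 - target), lam_min, lam_max))
    return h

rng = np.random.default_rng(3)
pilots = rng.choice([1.0, -1.0], size=200) + 0j       # known pilot chips
h_true = 0.8 - 0.3j
obs = h_true * pilots + 0.05 * (rng.normal(size=200) + 1j * rng.normal(size=200))
print("estimate:", one_tap_rls(pilots, obs))          # should approach 0.8-0.3j
```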
Validation of the Kp Geomagnetic Index Forecast at CCMC
NASA Astrophysics Data System (ADS)
Frechette, B. P.; Mays, M. L.
2017-12-01
The Community Coordinated Modeling Center (CCMC) Space Weather Research Center (SWRC) sub-team provides space weather services to NASA robotic mission operators and science campaigns, and prototypes new models, forecasting techniques, and procedures. The Kp index is a measure of geomagnetic disturbances in the magnetosphere, such as geomagnetic storms and substorms. In this study, we validated the Newell et al. (2007) Kp prediction equation from December 2010 to July 2017. The purpose of this research is to understand Kp forecast performance, because it is critical for NASA missions to have confidence in the space weather forecast. This was done by computing the Kp error for each forecast type (average, minimum, maximum) and each synoptic period. To quantify forecast performance, we then computed the mean error, mean absolute error, root mean square error, multiplicative bias, and correlation coefficient. A contingency table was made for each forecast and skill scores were computed; the results are compared to the perfect score and to the reference forecast skill score. In conclusion, the skill score and error results show that the minimum of the predicted Kp over each synoptic period from the Newell et al. (2007) equation performed better than the maximum or average of the prediction. However, persistence (the reference forecast) outperformed all of the Kp forecasts (minimum, maximum, and average). Overall, the Newell Kp prediction still predicts within a range of 1 Kp unit, even though persistence beats it.
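The verification statistics named above are standard; a compact sketch follows, with a made-up Kp series and persistence (today's value forecast for tomorrow) as the reference forecast.

```python
import numpy as np

def verify(forecast, observed):
    """Standard forecast verification statistics used in the study."""
    f, o = np.asarray(forecast, float), np.asarray(observed, float)
    return {
        "mean_error":          np.mean(f - o),
        "mean_absolute_error": np.mean(np.abs(f - o)),
        "rmse":                np.sqrt(np.mean((f - o) ** 2)),
        "multiplicative_bias": np.mean(f) / np.mean(o),
        "correlation":         np.corrcoef(f, o)[0, 1],
    }

# Invented Kp series; persistence forecast = previous synoptic-period value
kp = np.array([2, 3, 3, 4, 5, 4, 3, 2, 2, 3], float)
print(verify(kp[:-1], kp[1:]))
```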
Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan
2009-02-01
The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithm. The objective function is based on a regression model, and the optimization process is carried out with the simulated annealing algorithm, which is well suited to problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.
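The simulated annealing search itself is generic; below is a toy sketch minimizing a stand-in two-parameter objective with a few local optima. The paper's actual objective is a regression model fitted to noise-reduction performance, which is not reproduced here.

```python
import numpy as np

def simulated_annealing(objective, x0, lo, hi, n_iter=5000, t0=1.0):
    """Minimize objective over a box [lo, hi]^d by simulated annealing."""
    rng = np.random.default_rng(0)
    x = np.array(x0, float)
    fx = objective(x)
    best, fbest = x.copy(), fx
    for k in range(n_iter):
        t = t0 * (1.0 - k / n_iter) + 1e-9            # linear cooling schedule
        cand = np.clip(x + rng.normal(scale=0.05, size=x.size), lo, hi)
        fc = objective(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / t):
            x, fx = cand, fc               # accept downhill, or uphill by chance
            if fx < fbest:
                best, fbest = x.copy(), fx
    return best, fbest

# Stand-in objective over two recursion parameters, with local optima
obj = lambda p: (p[0] - 0.8) ** 2 + (p[1] - 0.98) ** 2 + 0.05 * np.sin(40 * p[0])
print(simulated_annealing(obj, x0=[0.5, 0.5], lo=0.0, hi=1.0))
```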
A comparative study of optimum and suboptimum direct-detection laser ranging receivers
NASA Technical Reports Server (NTRS)
Abshire, J. B.
1978-01-01
A summary of previously proposed receiver strategies for direct-detection laser ranging receivers is presented. Computer simulations are used to compare performance of candidate implementation strategies in the 1- to 100-photoelectron region. Under the condition of no background radiation, the maximum-likelihood and minimum mean-square error estimators were found to give the same performance for both bell-shaped and rectangular optical-pulse shapes. For signal energies greater than 100 photoelectrons, the root-mean-square range error is shown to decrease as $Q^{-1/2}$ for bell-shaped pulses and as $Q^{-1}$ for rectangular pulses, where $Q$ represents the average pulse energy. Of several receiver implementations presented, the matched-filter peak detector was found to be preferable. A similar configuration, using a constant-fraction discriminator, exhibited a signal-level dependent time bias.
Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan
2016-01-01
Comprehensive two-dimensional gas chromatography with flame ionization detection, combined with unfolded partial least squares, is proposed as a simple, fast, and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline corrected using a two-dimensional asymmetric least squares algorithm. The number of significant partial least squares components used to build the model was determined by the minimum root-mean-square error of leave-one-out cross-validation, which was 4. Blends of gasoline with kerosene, white spirit, and paint thinner, the most frequently used adulterants, were used to make the calibration samples. Appropriate statistical parameters, a regression coefficient of 0.996-0.998, a root-mean-square error of prediction of 0.005-0.010, and a relative error of prediction of 1.54-3.82% for the calibration set, show the reliability of the developed method. In addition, the method was externally validated with three samples in a validation set (relative error of prediction below 10.0%). Finally, to test the applicability of the proposed strategy for the analysis of real samples, five real gasoline samples collected from gas stations were analyzed; their gasoline proportions were in the range of 70-85%. The relative standard deviations were below 8.5% for the different samples in the prediction set.
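A sketch of selecting the number of PLS components by the minimum leave-one-out cross-validation RMSE, as described above, using scikit-learn. The data here are random stand-ins; the real inputs are unfolded GC×GC-FID chromatograms, so the selected component count will differ.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))        # stand-in for unfolded chromatograms
y = rng.uniform(0.0, 0.3, size=50)    # stand-in adulterant fractions

rmsecv = []
for n_comp in range(1, 11):
    pred = cross_val_predict(PLSRegression(n_components=n_comp), X, y,
                             cv=LeaveOneOut())
    rmsecv.append(np.sqrt(np.mean((pred.ravel() - y) ** 2)))

best = int(np.argmin(rmsecv)) + 1     # the paper found 4 components optimal
print("components:", best, "RMSECV:", rmsecv[best - 1])
```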
On higher order discrete phase-locked loops.
NASA Technical Reports Server (NTRS)
Gill, G. S.; Gupta, S. C.
1972-01-01
An exact mathematical model is developed for a discrete loop of a general order particularly suitable for digital computation. The deterministic response of the loop to the phase step and the frequency step is investigated. The design of the digital filter for the second-order loop is considered. Use is made of the incremental phase plane to study the phase error behavior of the loop. The model of the noisy loop is derived and the optimization of the loop filter for minimum mean-square error is considered.
NASA Astrophysics Data System (ADS)
Guermoui, Mawloud; Gairaa, Kacem; Rabehi, Abdelaziz; Djafer, Djelloul; Benkaciali, Said
2018-06-01
Accurate estimation of solar radiation is a major concern in renewable energy applications. Over the past few years, many machine learning paradigms have been proposed to improve estimation performance, mostly based on artificial neural networks, fuzzy logic, support vector machines, and adaptive neuro-fuzzy inference systems. The aim of this work is the prediction of the daily global solar radiation received on a horizontal surface through the Gaussian process regression (GPR) methodology. A case study of the Ghardaïa region (Algeria) has been used to validate the methodology. Several input combinations were tested; it was found that a GPR model based on sunshine duration, minimum air temperature, and relative humidity gives the best results in terms of mean absolute bias error (MBE), root mean square error (RMSE), relative root mean square error (rRMSE), and correlation coefficient (r). The obtained values of these indicators are 0.67 MJ/m², 1.15 MJ/m², 5.2%, and 98.42%, respectively.
NASA Astrophysics Data System (ADS)
Xiong, Qiufen; Hu, Jianglin
2013-05-01
The minimum/maximum (Min/Max) temperature in the Yangtze River valley is decomposed into a climatic mean and an anomaly component. A spatial interpolation is developed which combines a 3D thin-plate spline scheme for the climatological mean and a 2D Barnes scheme for the anomaly component to create a daily Min/Max temperature dataset. The climatic mean field is obtained by the 3D thin-plate spline scheme because the decrease of Min/Max temperature with elevation is robust and reliable on a long time-scale. The characteristics of the anomaly field are only weakly related to elevation, and the anomaly component is adequately analyzed by the 2D Barnes procedure, which is computationally efficient and readily tunable. With this hybrid interpolation method, a daily Min/Max temperature dataset covering the domain from 99°E to 123°E and from 24°N to 36°N with 0.1° longitudinal and latitudinal resolution is obtained from daily Min/Max temperature data from three kinds of station observations (national reference climatological stations, basic meteorological observing stations, and ordinary meteorological observing stations) in 15 provinces and municipalities in the Yangtze River valley from 1971 to 2005. The accuracy of the gridded dataset is assessed by examining cross-validation statistics. The results show that the daily Min/Max temperature interpolation not only has a high correlation coefficient (0.99) and interpolation efficiency (0.98), but also a mean bias error of 0.00 °C. For the maximum temperature, the root mean square error is 1.1 °C and the mean absolute error is 0.85 °C; for the minimum temperature, the root mean square error is 0.89 °C and the mean absolute error is 0.67 °C. Thus, the new dataset provides the distribution of Min/Max temperature over the Yangtze River valley as realistic, continuous gridded data with 0.1° × 0.1° spatial resolution at a daily time scale. The primary factors influencing dataset precision are elevation and terrain complexity. In general, the gridded dataset has relatively high precision in plains and flatlands and relatively low precision in mountainous areas.
Kassabian, Nazelie; Presti, Letizia Lo; Rispoli, Francesco
2014-01-01
Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs; this is a priority to sustain the investments for modernizing the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough ratios of correlation distance to Reference Station (RS) separation, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
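A minimal sketch of the simulation loop described: true DCs drawn from a Gauss-Markov (exponential) spatial correlation model, noisy measurements, and the LMMSE estimate. Station positions, correlation distance, and noise variance are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
pos = np.array([0.0, 50.0, 120.0, 200.0])        # reference-station positions (km)
d_c, sigma2 = 150.0, 0.25                        # correlation distance, noise var

# Gauss-Markov spatial correlation of the true differential corrections
D = np.abs(pos[:, None] - pos[None, :])
Rx = np.exp(-D / d_c)

x = rng.multivariate_normal(np.zeros(len(pos)), Rx)       # true DCs
y = x + rng.normal(scale=np.sqrt(sigma2), size=len(pos))  # noisy measurements

# LMMSE estimate: x_hat = Rx (Rx + sigma2 I)^-1 y
x_hat = Rx @ np.linalg.inv(Rx + sigma2 * np.eye(len(pos))) @ y
print("measurement RMS error:", np.sqrt(np.mean((y - x) ** 2)))
print("LMMSE RMS error      :", np.sqrt(np.mean((x_hat - x) ** 2)))
```

When the assumed correlation distance is large relative to the station spacing, the estimator averages information across stations and beats the raw measurements, matching the paper's qualitative finding.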
A channel estimation scheme for MIMO-OFDM systems
NASA Astrophysics Data System (ADS)
He, Chunlong; Tian, Chu; Li, Xingquan; Zhang, Ce; Zhang, Shiqi; Liu, Chaowen
2017-08-01
To address the trade-off between the performance of time-domain least squares (LS) channel estimation and its practical implementation complexity, a reduced-complexity pilot-based channel estimation method for multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) is derived. This approach transforms the MIMO-OFDM channel estimation problem into a simple single-input single-output OFDM (SISO-OFDM) channel estimation problem, so no large matrix pseudo-inverse is needed, which greatly reduces the complexity of the algorithm. Simulation results show that the bit error rate (BER) performance of the obtained method with time-orthogonal training sequences and the linear minimum mean square error (LMMSE) criterion is better than that of the time-domain LS estimator and is close to optimal.
On the robustness of a Bayes estimate. [in reliability theory
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1974-01-01
This paper examines the robustness of a Bayes estimator with respect to the assigned prior distribution. A Bayesian analysis for a stochastic scale parameter of a Weibull failure model is summarized in which the natural conjugate is assigned as the prior distribution of the random parameter. The sensitivity analysis is carried out by the Monte Carlo method in which, although an inverted gamma is the assigned prior, realizations are generated using distribution functions of varying shape. For several distributional forms and even for some fixed values of the parameter, simulated mean squared errors of Bayes and minimum variance unbiased estimators are determined and compared. Results indicate that the Bayes estimator remains squared-error superior and appears to be largely robust to the form of the assigned prior distribution.
NASA Astrophysics Data System (ADS)
Hecht-Nielsen, Robert
1997-04-01
A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
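The bias-variance decomposition of image MSE versus the regularization parameter can be reproduced in miniature with a generic ill-posed linear model and Tikhonov regularization, a stand-in for the paper's diffusion-model reconstruction; all dimensions and noise levels below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 40, 30                                  # unknowns > measurements: ill-posed
A = rng.normal(size=(m, n)) / np.sqrt(m)       # stand-in forward model
x_true = np.sin(np.linspace(0, 3 * np.pi, n))  # stand-in "image"
sigma = 0.05                                   # measurement noise level

for lam in [1e-4, 1e-2, 1e0]:
    # Tikhonov-regularized reconstruction, repeated over noise realizations
    R = np.linalg.inv(A.T @ A + lam * np.eye(n)) @ A.T
    xs = np.array([R @ (A @ x_true + rng.normal(scale=sigma, size=m))
                   for _ in range(100)])
    bias2 = np.mean((xs.mean(axis=0) - x_true) ** 2)
    var = np.mean(xs.var(axis=0))
    print(f"lambda={lam:g}  bias^2={bias2:.4f}  var={var:.4f}  MSE={bias2+var:.4f}")
```

The printout shows the trend the abstract reports: squared bias dominates at high regularization, variance dominates as the solution is allowed to approach the data more closely.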
Adaptive color halftoning for minimum perceived error using the blue noise mask
NASA Astrophysics Data System (ADS)
Yu, Qing; Parker, Kevin J.
1997-04-01
Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moiré patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angles or Moiré patterns are produced. However, a simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as using mutually exclusive BNMs for different color planes. In this paper, the above schemes are studied in CIE-LAB color space in terms of root mean square error and variance for the luminance and chrominance channels, respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, the 4-mask scheme results in minimum luminance error but maximum chrominance error, and the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying two mutually exclusive BNMs on two color planes and an adaptive scheme on the other planes to reduce color error. We show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.
Tissue resistivity estimation in the presence of positional and geometrical uncertainties.
Baysal, U; Eyüboğlu, B M
2000-08-01
Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with the results of conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3% compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.
Optimal focal-plane restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1989-01-01
Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.
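One way to obtain such a small restoration kernel, sketched below, is to truncate the inverse transform of the frequency-domain Wiener filter to a small spatial support. Note the paper optimizes the constrained kernel directly for minimum expected MSE rather than truncating, so this is only an approximation of the idea; the PSF, SNR, and sizes are assumptions.

```python
import numpy as np

def wiener_kernel(psf, snr, size=5, N=64):
    """Small spatial restoration kernel obtained by truncating the inverse
    FFT of the Wiener filter W = H* / (|H|^2 + 1/SNR)."""
    H = np.fft.fft2(psf, s=(N, N))
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    w = np.real(np.fft.ifft2(W))
    # center the wrap-around kernel and crop to size x size
    w = np.roll(np.roll(w, size // 2, axis=0), size // 2, axis=1)[:size, :size]
    return w

blur = np.ones((3, 3)) / 9.0          # assumed system PSF (box blur)
kernel = wiener_kernel(blur, snr=100.0)
print(np.round(kernel, 3))            # convolve this with the raw image
```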
Channel estimation based on quantized MMP for FDD massive MIMO downlink
NASA Astrophysics Data System (ADS)
Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie
2016-10-01
In this paper, we consider channel estimation for massive MIMO systems operating in frequency division duplexing mode. By exploiting the sparsity of propagation paths in the massive MIMO channel, we develop a compressed sensing (CS) based channel estimator which reduces the pilot overhead. Compared with conventional least squares (LS) and linear minimum mean square error (LMMSE) estimation, the proposed algorithm, based on quantized multipath matching pursuit (MMP), reduces the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods, including the LS, LMMSE, CoSaMP, and conventional MMP estimators.
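A sketch of the greedy sparse recovery underlying this family of estimators, using plain orthogonal matching pursuit as a simplified relative of the quantized MMP algorithm; the pilot matrix, channel, and dimensions are synthetic.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedy k-sparse recovery of h from y = Phi h."""
    r, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.conj().T @ r)))   # most correlated atom
        if j not in support:
            support.append(j)
        h_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ h_s                  # update residual
    h = np.zeros(Phi.shape[1], complex)
    h[support] = h_s
    return h

rng = np.random.default_rng(5)
m, n, k = 32, 128, 3                       # pilots, delay taps, path sparsity
Phi = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2 * m)
h_true = np.zeros(n, complex)
idx = rng.choice(n, k, replace=False)
h_true[idx] = rng.normal(size=k) + 1j * rng.normal(size=k)
y = Phi @ h_true + 0.01 * (rng.normal(size=m) + 1j * rng.normal(size=m))
print("recovery error:", np.linalg.norm(omp(Phi, y, k) - h_true))
```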
Artificial Intelligence Techniques for Predicting and Mapping Daily Pan Evaporation
NASA Astrophysics Data System (ADS)
Arunkumar, R.; Jothiprakash, V.; Sharma, Kirty
2017-09-01
In this study, Artificial Intelligence techniques such as Artificial Neural Networks (ANN), Model Trees (MT), and Genetic Programming (GP) are used to develop daily pan evaporation time-series (TS) prediction and cause-effect (CE) mapping models. Ten years of observed daily meteorological data, such as maximum temperature, minimum temperature, relative humidity, sunshine hours, dew point temperature, and pan evaporation, are used for developing the models. For each technique, several models are developed by changing the number of inputs and other model parameters. The performance of each model is evaluated using standard statistical measures such as Mean Square Error, Mean Absolute Error, Normalized Mean Square Error, and correlation coefficient (R). The results showed that the daily TS-GP(4) model, with a correlation coefficient of 0.959, predicted better than the other TS models. Among the various CE models, CE-ANN (6-10-1), with a correlation coefficient of 0.881, performed better than the MT and GP models. Because of the complex non-linear inter-relationships among the meteorological variables, the CE mapping models could not achieve the performance of the TS models. From this study, it was found that GP performs better for recognizing a single pattern (time-series modelling), whereas ANN is better for modelling multiple patterns (cause-effect modelling) in the data.
Two-body potential model based on cosine series expansion for ionic materials
Oda, Takuji; Weber, William J.; Tanigawa, Hisashi
2015-09-23
We examine a method to construct a two-body potential model for ionic materials with a Fourier series basis. In this method, the coefficients of the cosine basis functions are uniquely determined by solving simultaneous linear equations to minimize the sum of weighted mean square errors in energy, force, and stress, where first-principles calculation results are used as the reference data. As a validation test of the method, potential models for magnesium oxide are constructed. The mean square errors converge appropriately with respect to the truncation of the cosine series. This result mathematically indicates that the constructed potential model is sufficiently close to the one that would be achieved with the non-truncated Fourier series, and demonstrates that this potential virtually provides the minimum error from the reference data within the two-body representation. The constructed potential models work appropriately in both molecular statics and dynamics simulations, especially if a two-step correction to revise errors expected in the reference data is performed, and the models clearly outperform the two existing Buckingham potential models that were tested. Moreover, the good agreement with first-principles calculations over a broad range of energies and forces should enable the prediction of materials behavior away from equilibrium conditions, such as a system under irradiation.
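The coefficient fit reduces to a weighted linear least-squares problem in the cosine basis. A sketch with energy terms only and a stand-in reference potential follows; the paper also stacks force and stress rows into the same linear system, and the weighting scheme here is an assumption.

```python
import numpy as np

# Fit a pair potential V(r) = sum_k c_k cos(k*pi*r/r_cut) to reference energies
# by weighted linear least squares.
rng = np.random.default_rng(6)
r = np.linspace(1.5, 6.0, 200)                      # sampled pair distances
v_ref = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)    # stand-in reference data
w = np.exp(-np.abs(v_ref))                          # assumed weighting scheme
sw = np.sqrt(w)                                     # row scaling for weighted LS

K, r_cut = 12, 6.0                                  # truncation order, cutoff
B = np.cos(np.outer(r, np.arange(K)) * np.pi / r_cut)  # cosine basis matrix
c, *_ = np.linalg.lstsq(B * sw[:, None], v_ref * sw, rcond=None)
print("weighted RMS fit error:", np.sqrt(np.mean(w * (B @ c - v_ref) ** 2)))
```

Increasing K and re-solving shows the convergence with truncation order that the abstract describes.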
Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Moision, Bruce E.
2010-01-01
Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
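A compact sketch of ML time-of-arrival estimation from binned Poisson photon counts, maximizing the Poisson log-likelihood over a grid of candidate arrival times; the pulse shape, signal and background rates, and bin structure are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 10.0, 1000)          # time axis (arbitrary units)
dt = t[1] - t[0]

def intensity(tau, ns=50.0, nb=5.0, w=0.5):
    """Photon arrival rate: background plus a Gaussian pulse centered at tau."""
    return nb + ns * np.exp(-0.5 * ((t - tau) / w) ** 2)

tau_true = 4.2
counts = rng.poisson(intensity(tau_true) * dt)     # observed binned counts

# ML estimate: maximize sum_j [ k_j log(lambda_j dt) - lambda_j dt ] over tau
taus = np.linspace(1.0, 9.0, 801)
ll = [np.sum(counts * np.log(intensity(tau) * dt) - intensity(tau) * dt)
      for tau in taus]
print("tau_ml =", taus[int(np.argmax(ll))])
```

Repeating this over many noise realizations at decreasing pulse energies reproduces the threshold behavior the paper analyzes: below a certain signal level the likelihood surface becomes dominated by background fluctuations and the estimate degenerates toward a random guess.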
Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures
2016-06-01
Keywords: inventory management improvement plan, mean absolute scaled error, lead-time adjusted squared error, forecast accuracy, benchmarking, naïve method. Abbreviations (excerpt): JASA, Journal of the American Statistical Association; LASE, Lead-time Adjusted Squared Error; LCI, Life Cycle Indicator; MA, Moving Average; MAE, Mean Absolute Error; ... MSE, Mean Squared Error; NAVSUP, Naval Supply Systems Command; NDAA, National Defense Authorization Act; NIIN, National Individual Identification Number.
A negentropy minimization approach to adaptive equalization for digital communication systems.
Choi, Sooyong; Lee, Te-Won
2004-07-01
In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve on a linear equalizer based on minimizing the mean squared error (MMSE). Negentropy includes higher-order statistical information, and its minimization provides improved convergence, performance, and accuracy compared to traditional criteria such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and another one, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is properly adjusted to maximize the output power (variance) of the NEGMIN equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the non-MMSE solution has similar characteristics to the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.
Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng
2018-05-31
For a nonlinear system, the cubature Kalman filter (CKF) and its square-root version are useful methods to solve the state estimation problems, and both can obtain good performance in Gaussian noises. However, their performances often degrade significantly in the face of non-Gaussian noises, particularly when the measurements are contaminated by some heavy-tailed impulsive noises. By utilizing the maximum correntropy criterion (MCC) to improve the robust performance instead of traditional minimum mean square error (MMSE) criterion, a new square-root nonlinear filter is proposed in this study, named as the maximum correntropy square-root cubature Kalman filter (MCSCKF). The new filter not only retains the advantage of square-root cubature Kalman filter (SCKF), but also exhibits robust performance against heavy-tailed non-Gaussian noises. A judgment condition that avoids numerical problem is also given. The results of two illustrative examples, especially the SINS/GPS integrated systems, demonstrate the desirable performance of the proposed filter. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Wiener-matrix image restoration beyond the sampling passband
NASA Technical Reports Server (NTRS)
Rahman, Zia-Ur; Alter-Gartenberg, Rachel; Fales, Carl L.; Huck, Friedrich O.
1991-01-01
A finer-than-sampling-lattice resolution image can be obtained using multiresponse image gathering and Wiener-matrix restoration. Multiresponse image gathering weighs the within-passband and aliased signal components differently, allowing the Wiener-matrix restoration filter to unscramble these signal components and restore spatial frequencies beyond the sampling passband of the photodetector array. The multiresponse images can be reassembled into a single minimum mean square error image with a resolution that is $\sqrt{A}$ times finer than the photodetector-array sampling lattice.
NASA Astrophysics Data System (ADS)
Schaffrin, Burkhard
2008-02-01
In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.
Resolving Mixed Algal Species in Hyperspectral Images
Mehrubeoglu, Mehrube; Teng, Ming Y.; Zimba, Paul V.
2014-01-01
We investigated a lab-based hyperspectral imaging system's response to pure (single) and mixed (two-species) algal cultures containing known algae types and volumetric combinations, to characterize the system's performance. The spectral response to volumetric changes in single algal cultures and in combinations of algal mixtures with known ratios was tested. Constrained linear spectral unmixing was applied to extract the algal content of the mixtures based on the abundances that produced the lowest root mean square (RMS) error. Percent prediction error was computed as the difference between the actual percent volumetric content and the abundances at minimum RMS error. The best prediction errors were 0.4%, 0.4%, and 6.3% for the mixed spectra from three independent experiments; the worst were 5.6%, 5.4%, and 13.4% for the same order of experiments. Additionally, the Beer-Lambert law was utilized to relate transmittance to different volumes of pure algal suspensions, demonstrating linear logarithmic trends in the optical property measurements.
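A sketch of constrained unmixing for a two-endmember mixture using nonnegative least squares followed by abundance normalization; the spectra here are synthetic stand-ins for the measured endmember spectra.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(8)
E = np.abs(rng.normal(size=(50, 2)))          # stand-in endmember spectra (columns)
frac_true = np.array([0.7, 0.3])              # true volumetric fractions
y = E @ frac_true + 0.01 * rng.normal(size=50)  # observed mixed spectrum

a, rnorm = nnls(E, y)                         # nonnegativity-constrained unmixing
a = a / a.sum()                               # renormalize to sum-to-one abundances
print("abundances:", np.round(a, 3),
      " prediction error (%):", np.round(100 * np.abs(a - frac_true), 2))
```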
Wang, Rong
2015-01-01
In real-world applications, face images vary with illumination, facial expression, and pose. More training samples can reveal more of the possible appearances of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the problem of a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples: the mirror faces generated from the original training samples and the originals are combined into a new training set. Face recognition experiments show that our method achieves high classification accuracy.
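A sketch of the mirror-augmentation idea, here paired with a class-wise least-squares residual rule standing in for the paper's MSEC classifier; the image sizes and database are invented.

```python
import numpy as np

def augment_with_mirrors(faces):
    """Add horizontally mirrored copies of each training face (H x W arrays)."""
    return faces + [np.fliplr(f) for f in faces]

def msec_predict(train_by_class, probe):
    """Assign the probe to the class whose least-squares reconstruction from
    that class's mirror-augmented samples has minimum squared error."""
    best, best_err = None, np.inf
    for label, faces in train_by_class.items():
        A = np.stack([f.ravel() for f in augment_with_mirrors(faces)], axis=1)
        coef, *_ = np.linalg.lstsq(A, probe.ravel(), rcond=None)
        err = np.sum((probe.ravel() - A @ coef) ** 2)
        if err < best_err:
            best, best_err = label, err
    return best

rng = np.random.default_rng(11)
db = {"A": [rng.random((8, 8)) for _ in range(3)],
      "B": [rng.random((8, 8)) for _ in range(3)]}
probe = np.fliplr(db["A"][0])        # a mirrored view of a known class-A face
print(msec_predict(db, probe))       # expected: "A"
```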
Hao, Z Q; Li, C M; Shen, M; Yang, X Y; Li, K H; Guo, L B; Li, X Y; Lu, Y F; Zeng, X Y
2015-03-23
Laser-induced breakdown spectroscopy (LIBS) with partial least squares regression (PLSR) has been applied to measuring the acidity of iron ore, which can be defined by the concentrations of the oxides CaO, MgO, Al₂O₃, and SiO₂. With conventional internal standard calibration, it is difficult to establish calibration curves for CaO, MgO, Al₂O₃, and SiO₂ in iron ore due to serious matrix effects. PLSR effectively addresses this problem owing to its excellent performance in compensating for matrix effects. In this work, fifty samples were used to construct the PLSR calibration models for the above-mentioned oxides. These calibration models were validated by the 10-fold cross-validation method with the minimum root-mean-square errors (RMSE). Another ten samples were used as a test set. The acidities were calculated according to the estimated concentrations of CaO, MgO, Al₂O₃, and SiO₂ using the PLSR models. The average relative error (ARE) and RMSE of the acidity reached 3.65% and 0.0048, respectively, for the test samples.
Dabkiewicz, Vanessa Emídio; de Mello Pereira Abrantes, Shirley; Cassella, Ricardo Jorgensen
2018-08-05
Near-infrared (NIR) spectroscopy with diffuse reflectance, associated with multivariate calibration, has as its main advantage the replacement of the physical separation of interferents by the mathematical separation of their signals, performed rapidly and with no reagent consumption, chemical waste production, or sample manipulation. Seeking to streamline quality control analyses, this spectroscopic analytical method was shown to be a viable alternative to the classical Kjeldahl method for the determination of protein nitrogen in yellow fever vaccine. The most suitable multivariate calibration was achieved by the partial least squares (PLS) method with multiplicative signal correction (MSC) treatment and mean centering (MC) of the data, using a minimum number of latent variables (LV) equal to 1, with the lowest root mean squared prediction error (0.00330) associated with the highest percentage (91%) of samples. Accuracy ranged from 95 to 105% recovery in the 4000-5184 cm⁻¹ region.
NASA Astrophysics Data System (ADS)
Wu, Wei; Xu, An-Ding; Liu, Hong-Bin
2015-01-01
Climate data in gridded format are critical for understanding climate change and its impact on the eco-environment. The aim of the current study is to develop spatial databases for three climate variables (maximum temperature, minimum temperature, and relative humidity) over a large region with complex topography in southwestern China. Five widely used approaches, including inverse distance weighting, ordinary kriging, universal kriging, co-kriging, and thin-plate smoothing spline, were tested. Root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) showed that thin-plate smoothing spline with latitude, longitude, and elevation outperformed the other models. The average RMSE, MAE, and MAPE of the best models were 1.16 °C, 0.74 °C, and 7.38% for maximum temperature; 0.826 °C, 0.58 °C, and 6.41% for minimum temperature; and 3.44, 2.28, and 3.21% for relative humidity, respectively. Spatial datasets of annual and monthly climate variables with 1-km resolution covering the period 1961-2010 were then obtained using the best-performing methods. A comparative study showed that the current outcomes were in good agreement with public datasets. Based on the gridded datasets, changes in the temperature variables were investigated across the study area. Future work may be needed to capture the uncertainty induced by environmental conditions through remote sensing and knowledge-based methods.
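Of the five interpolators compared, inverse distance weighting is the simplest; a sketch follows together with the RMSE/MAE/MAPE scores used in the study, on synthetic station data with an invented temperature trend.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighting: weights fall off as 1/d^power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ z_known) / w.sum(axis=1)

def scores(pred, obs):
    err = pred - obs
    return {"RMSE": np.sqrt(np.mean(err ** 2)),
            "MAE": np.mean(np.abs(err)),
            "MAPE": 100 * np.mean(np.abs(err / obs))}

rng = np.random.default_rng(9)
xy = rng.uniform(0, 100, size=(30, 2))                    # station locations (km)
z = 20 - 0.1 * xy[:, 0] + rng.normal(scale=0.5, size=30)  # e.g. max temperature
pred = idw(xy[:25], z[:25], xy[25:])                      # hold out 5 stations
print(scores(pred, z[25:]))
```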
Bernard R. Parresol
1993-01-01
In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...
Iterative Overlap FDE for Multicode DS-CDMA
NASA Astrophysics Data System (ADS)
Takeda, Kazuaki; Tomeba, Hiromichi; Adachi, Fumiyuki
Recently, a new frequency-domain equalization (FDE) technique, called overlap FDE, that requires no guard interval (GI) insertion was proposed. However, the residual inter/intra-block interference (IBI) cannot be completely removed. In addition, for multicode direct sequence code division multiple access (DS-CDMA), the presence of residual inter-chip interference (ICI) after FDE distorts the orthogonality among the spreading codes. In this paper, we propose an iterative overlap FDE for multicode DS-CDMA to suppress both the residual IBI and the residual ICI. In the iterative overlap FDE, joint minimum mean square error (MMSE) FDE and ICI cancellation is repeated a sufficient number of times. The bit error rate (BER) performance with the iterative overlap FDE is evaluated by computer simulation.
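The MMSE-FDE step at the heart of the iteration is a one-tap per-frequency weighting. A minimal single-pass sketch follows, with no overlap processing or ICI cancellation (both of which the paper adds on top); the channel, block size, and SNR are assumptions, and the truncated convolution deliberately leaves some residual block-edge interference.

```python
import numpy as np

def mmse_fde(y_block, h_time, snr):
    """One-tap MMSE frequency-domain equalization of one received block."""
    N = len(y_block)
    H = np.fft.fft(h_time, N)                       # channel frequency response
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # per-bin MMSE weights
    return np.fft.ifft(W * np.fft.fft(y_block))

rng = np.random.default_rng(10)
s = rng.choice([1.0, -1.0], size=64)                # one BPSK chip block
h = np.array([1.0, 0.5, 0.2])                       # multipath channel taps
y = np.convolve(s, h)[:64] + rng.normal(scale=0.1, size=64)
print("chip errors:", np.sum(np.sign(mmse_fde(y, h, snr=100.0).real) != s))
```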
NASA Astrophysics Data System (ADS)
Wu, Kang-Hung; Su, Ching-Lun; Chu, Yen-Hsyang
2015-03-01
In this article, we use the International Reference Ionosphere (IRI) model to simulate temporal and spatial distributions of global E region electron densities retrieved by the FORMOSAT-3/COSMIC satellites by means of the GPS radio occultation (RO) technique. Despite regional discrepancies in the magnitudes of the E region electron density, the IRI model simulations can, on the whole, describe the COSMIC measurements in quality and quantity. On the basis of the global ionosonde network and the IRI model, the retrieval errors of the global COSMIC-measured E region peak electron density (NmE) from July 2006 to July 2011 are examined and simulated. The COSMIC measurements and the IRI model simulations both reveal that the magnitudes of the percentage error (PE) and root-mean-square error (RMSE) of the relative RO retrieval errors of the NmE values depend on local time (LT) and geomagnetic latitude, with minima in the early morning and at high latitudes and maxima in the afternoon and at middle latitudes. In addition, the seasonal variation of the PE and RMSE values seems to be latitude dependent. After removing the IRI model-simulated GPS RO retrieval errors from the original COSMIC measurements, the average values of the annual and monthly mean percentage errors of the RO retrieval errors of the COSMIC-measured E region electron density are substantially reduced, by factors of about 2.95 and 3.35, respectively, and the corresponding root-mean-square errors show average decreases of 15.6% and 15.4%, respectively. It is found that, with this process, the largest reduction in the PE and RMSE of the COSMIC-measured NmE occurs at the equatorial anomaly latitudes 10°N-30°N in the afternoon from 14 to 18 LT, by factors of 25 and 2, respectively. Statistics show that the residual errors remaining in the corrected COSMIC-measured NmE vary in a range of -20% to 38%, which is comparable to or larger than the percentage errors of the IRI-predicted NmE, which fluctuate in a range of -6.5% to 20%.
Peroni, M; Golland, P; Sharp, G C; Baroni, G
2016-02-01
A crucial issue in deformable image registration is achieving a robust registration algorithm at a reasonable computational cost. Given the iterative nature of the optimization procedure, an algorithm must automatically detect convergence and stop the iterative process when most appropriate. This paper ranks the performance of three stopping criteria and six stopping value computation strategies for a log-domain demons deformable registration method, simulating both a coarse and a fine registration. The analyzed stopping criteria are: (a) velocity field update magnitude, (b) mean squared error, and (c) harmonic energy. Each stopping condition is formulated so that the user defines a threshold ε, which quantifies the residual error that is acceptable for the particular problem and calculation strategy. In this work, we did not aim to assign a value to ε, but to give insight into how to evaluate and set the threshold for a given exit strategy in a very popular registration scheme. Experiments on phantom and patient data demonstrate that comparing the optimization metric minimum over the most recent three iterations with the minimum over the fourth to sixth most recent iterations can be an appropriate algorithm stopping strategy. The harmonic energy was found to provide the best trade-off between robustness and speed of convergence for the analyzed registration method at coarse registration, but was outperformed by the mean squared error when all the original pixel information is used. This suggests the need to develop mathematically sound new convergence criteria in which both image and vector field information could be used to detect actual convergence, which could be especially useful when considering multi-resolution registrations. Further work should also be dedicated to studying the performance of the same strategies in other deformable registration methods and anatomical regions. © The Author(s) 2014.
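The stopping rule found to work well, comparing the metric minimum over the three most recent iterations with the minimum over the fourth to sixth most recent, is compact in code; a sketch assuming `history` holds one metric value per iteration (names are illustrative):

```python
def should_stop(history, eps):
    """Stop when the recent improvement of the metric falls below eps.

    history: metric values, one per iteration (most recent last)
    eps: user-defined threshold on the acceptable residual improvement
    """
    if len(history) < 6:
        return False
    recent = min(history[-3:])   # minimum over the last three iterations
    older = min(history[-6:-3])  # minimum over the 4th-6th most recent
    return (older - recent) < eps
```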
Utilization of electrical impedance imaging for estimation of in-vivo tissue resistivities
NASA Astrophysics Data System (ADS)
Eyuboglu, B. Murat; Pilkington, Theo C.
1993-08-01
In order to determine in vivo resistivity of tissues in the thorax, the possibility of combining electrical impedance imaging (EII) techniques with (1) anatomical data extracted from high resolution images, (2) a prior knowledge of tissue resistivities, and (3) a priori noise information was assessed in this study. A Least Square Error Estimator (LSEE) and a statistically constrained Minimum Mean Square Error Estimator (MiMSEE) were implemented to estimate regional electrical resistivities from potential measurements made on the body surface. A two dimensional boundary element model of the human thorax, which consists of four different conductivity regions (the skeletal muscle, the heart, the right lung, and the left lung) was adopted to simulate the measured EII torso potentials. The calculated potentials were then perturbed by simulated instrumentation noise. The signal information used to form the statistical constraint for the MiMSEE was obtained from a prior knowledge of the physiological range of tissue resistivities. The noise constraint was determined from a priori knowledge of errors due to linearization of the forward problem and to the instrumentation noise.
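The statistically constrained estimator described here is a form of the classical linear MMSE (Gauss-Markov) estimate x̂ = C_x Aᵀ(A C_x Aᵀ + C_n)⁻¹ y, where C_x encodes the prior resistivity range and C_n the noise constraint; a generic NumPy sketch, not the authors' implementation:

```python
import numpy as np

def mimsee(A, y, C_x, C_n):
    """Statistically constrained linear MMSE estimate of x from y = Ax + n.

    A: linearized forward model (surface potentials vs. resistivities)
    y: measured surface potentials
    C_x: prior covariance of the resistivities (signal constraint)
    C_n: covariance of linearization error plus instrumentation noise
    """
    S = A @ C_x @ A.T + C_n
    return C_x @ A.T @ np.linalg.solve(S, y)
```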
Darajeh, Negisa; Idris, Azni; Fard Masoumi, Hamid Reza; Nourani, Abolfazl; Truong, Paul; Rezania, Shahabaldin
2017-05-04
Artificial neural networks (ANNs) have been widely used to solve such problems because of their reliable, robust, and salient characteristics in capturing the nonlinear relationships between variables in complex systems. In this study, an ANN was applied for modeling the removal of Chemical Oxygen Demand (COD) and biodegradable organic matter (BOD) from palm oil mill secondary effluent (POMSE) by a vetiver system. The independent variables, namely POMSE concentration, vetiver slip density, and removal time, were considered as input parameters to optimize the network, while the removal percentages of COD and BOD were selected as outputs. To determine the number of hidden layer nodes, the root mean squared error of the testing set was minimized, and the topologies of the algorithms were compared by the coefficient of determination and absolute average deviation. The comparison indicated that the quick propagation (QP) algorithm had the minimum root mean squared error and absolute average deviation, and the maximum coefficient of determination. The importance values of the variables were 42.41% for vetiver slip density, 29.8% for removal time, and 27.79% for POMSE concentration, which shows that none of them is negligible. The results show that the ANN has great potential for predicting COD and BOD removal from POMSE, with a residual standard error (RSE) of less than 0.45%.
The influence of the uplink noise on the performance of satellite data transmission systems
NASA Astrophysics Data System (ADS)
Dewal, Vrinda P.
The problem of transmission of binary phase shift keying (BPSK) modulated digital data through a bandlimited nonlinear satellite channel in the presence of uplink and downlink Gaussian noise and intersymbol interference is examined. The satellite transponder is represented by a zero-memory bandpass nonlinearity with AM/AM conversion. The proposed optimum linear receiver structure consists of tapped-delay lines followed by a decision device. The linear receiver is designed to minimize the mean square error, which is a function of the intersymbol interference and the uplink and downlink noise. The minimum mean square error (MMSE) equalizer is derived using the Wiener-Kolmogorov theory. In this receiver, the decision about the transmitted signal is made by taking into account the received sequence of the present sample and the interfering past and future samples, which represent the intersymbol interference (ISI). Illustrative examples of the receiver structures are considered for nonlinear channels with symmetrical and asymmetrical frequency responses of the transmitter filter. The transponder nonlinearity is simulated by a polynomial using only the first- and third-order terms. A computer simulation determines the tap gain coefficients of the MMSE equalizer, which adapt to the various uplink and downlink noise levels. The performance of the MMSE equalizer is evaluated in terms of an estimate of the average probability of error.
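For reference, the tap weights of an MMSE tapped-delay-line equalizer follow the Wiener solution w = R⁻¹p, with R the autocorrelation matrix of the received samples and p their cross-correlation with the desired symbols; a minimal training-data sketch (real-valued baseband assumed; not the paper's Wiener-Kolmogorov derivation):

```python
import numpy as np

def mmse_tdl_taps(received, desired, num_taps):
    """Wiener (MMSE) tap weights for a tapped-delay-line equalizer.

    received: training sequence as seen at the receiver (real-valued here)
    desired: corresponding transmitted training symbols
    """
    # One row of delayed received samples per training symbol
    X = np.array([received[i:i + num_taps]
                  for i in range(len(received) - num_taps + 1)])
    d = np.asarray(desired)[:X.shape[0]]
    R = X.T @ X / X.shape[0]      # sample autocorrelation matrix
    p = X.T @ d / X.shape[0]      # cross-correlation with desired symbols
    return np.linalg.solve(R, p)  # w = R^{-1} p
```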
NASA Astrophysics Data System (ADS)
Webb, Mathew A.; Hall, Andrew; Kidd, Darren; Minansy, Budiman
2016-05-01
Assessment of local spatial climatic variability is important in the planning of planting locations for horticultural crops. This study investigated three regression-based calibration methods (traditional versus two optimized methods) to relate short-term 12-month data series from 170 temperature loggers and 4 weather station sites with data series from nearby long-term Australian Bureau of Meteorology climate stations. The techniques trialled to interpolate climatic temperature variables, such as frost risk, growing degree days (GDDs) and chill hours, were regression kriging (RK), regression trees (RTs) and random forests (RFs). All three calibration methods produced accurate results, with the RK-based calibration method delivering the most accurate validation measures: coefficients of determination (R2) of 0.92, 0.97 and 0.95 and root-mean-square errors of 1.30, 0.80 and 1.31 °C, for daily minimum, daily maximum and hourly temperatures, respectively. Compared with the traditional method of calibration using direct linear regression between short-term and long-term stations, the RK-based calibration method improved R2 and reduced root-mean-square error (RMSE) by at least 5 % and 0.47 °C for daily minimum temperature, 1 % and 0.23 °C for daily maximum temperature, and 3 % and 0.33 °C for hourly temperature. Spatial modelling indicated insignificant differences between the interpolation methods, with the RK technique tending to be the slightly better method due to the high degree of spatial autocorrelation between logger sites.
NASA Astrophysics Data System (ADS)
Giday, Nigussie M.; Katamzi-Joseph, Zama T.
2018-02-01
This study investigates the performance of a tomographic algorithm, the Multi-Instrument Data Analysis System (MIDAS), during an extended period of 4-14 March 2012, containing moderate and strong geomagnetic storm conditions, over an understudied and data-scarce East African longitude sector. Nonetheless, a relatively good distribution of Global Navigation Satellite Systems stations exists along a narrow longitude sector between 30°E and 44°E and a latitude range of 30°S to 36°N that spans the equatorial, middle-, and low-latitude ionosphere. The results are compared with independent global models, namely the International Reference Ionosphere 2012 (IRI-2012) and the global ionosphere map (GIM). MIDAS performed better than the IRI-2012 and GIM models in terms of capturing the diurnal trends as well as short-lived total electron content (TEC) structures, with the lowest root-mean-square errors (RMSEs). Overall, MIDAS results showed better agreement with the observed vertical TEC (VTEC), with a computed maximum correlation coefficient (r) of 0.99 and a minimum root-mean-square error (RMSE) of 2.91 TEC units (1 TECU = 10^16 el m-2) over all the test stations and the validation days. Conversely, for the IRI-2012 and GIM TEC estimates, the corresponding maximum computed r values were 0.93 and 0.99, respectively, while the minimum RMSEs were 13.03 TECU and 6.52 TECU, respectively, for all the test stations and the validation days.
Li, Mengshan; Zhang, Huaijing; Chen, Bingsheng; Wu, Yan; Guan, Lixin
2018-03-05
The pKa value of drugs is an important parameter in drug design and pharmacology. In this paper, an improved particle swarm optimization (PSO) algorithm was proposed based on population entropy diversity. In the improved algorithm, when the population entropy was higher than the set maximum threshold, the convergence strategy was adopted; when the population entropy was lower than the set minimum threshold, the divergence strategy was adopted; when the population entropy was between the maximum and minimum thresholds, the self-adaptive adjustment strategy was maintained. The improved PSO algorithm was applied to the training of a radial basis function artificial neural network (RBF ANN) model and the selection of molecular descriptors. A quantitative structure-activity relationship model based on an RBF ANN trained by the improved PSO algorithm was proposed to predict the pKa values of 74 kinds of neutral and basic drugs and then validated on another database containing 20 molecules. The validation results showed that the model had good prediction performance. The absolute average relative error, root mean square error, and squared correlation coefficient were 0.3105, 0.0411, and 0.9685, respectively. The model can be used as a reference for exploring other quantitative structure-activity relationships.
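The entropy-gated switching between the three strategies reduces to a simple three-way test; a schematic sketch (threshold names and strategy labels are illustrative, not from the paper):

```python
def choose_strategy(population_entropy, h_min, h_max):
    """Select the swarm update strategy from the population entropy."""
    if population_entropy > h_max:
        return "convergence"     # pull particles toward the global best
    if population_entropy < h_min:
        return "divergence"      # re-scatter particles to restore diversity
    return "self_adaptive"       # keep the standard adaptive update
```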
Chemical library subset selection algorithms: a unified derivation using spatial statistics.
Hamprecht, Fred A; Thiel, Walter; van Gunsteren, Wilfred F
2002-01-01
If similar compounds have similar activity, rational subset selection becomes superior to random selection in screening for pharmacological lead discovery programs. Traditional approaches to this experimental design problem fall into two classes: (i) a linear or quadratic response function is assumed; (ii) some space-filling criterion is optimized. The assumptions underlying the first approach are clear but not always defensible; the second approach yields more intuitive designs but lacks a clear theoretical foundation. We model activity in a bioassay as the realization of a stochastic process and use the best linear unbiased estimator to construct spatial sampling designs that optimize the integrated mean square prediction error, the maximum mean square prediction error, or the entropy. We argue that our approach constitutes a unifying framework encompassing most proposed techniques as limiting cases and sheds light on their underlying assumptions. In particular, vector quantization is obtained, in dimensions up to eight, in the limiting case of very smooth response surfaces for the integrated mean square error criterion. Closest packing is obtained for very rough surfaces under the integrated mean square error and entropy criteria. We suggest using either the integrated mean square prediction error or the entropy as optimization criteria rather than approximations thereof, and propose a scheme for direct iterative minimization of the integrated mean square prediction error. Finally, we discuss how the quality of chemical descriptors manifests itself and clarify the assumptions underlying the selection of diverse or representative subsets.
Estimation of the simple correlation coefficient.
Shieh, Gwowen
2010-11-01
This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for the estimation of the simple correlation coefficient. Although Pearson's r is biased, except in limited situations, and a minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in their practical applications because of its simplicity and popularity. In order to support such practice, this study examines the mean squared errors of r and several prominent formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, facilitating the recommendation of r as an effect size index for the strength of linear association between two variables. In addition, related issues in estimating the squared simple correlation coefficient are also considered.
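For context, one widely cited nearly unbiased alternative to r is the Olkin-Pratt approximation r[1 + (1 − r²)/(2(n − 3))]; a hedged simulation sketch comparing the two (the data and setup are invented for illustration):

```python
import numpy as np

def olkin_pratt(r, n):
    """Approximately unbiased correlation estimate (Olkin-Pratt form)."""
    return r * (1.0 + (1.0 - r ** 2) / (2.0 * (n - 3)))

rng = np.random.default_rng(0)
n, rho = 20, 0.5
cov = [[1.0, rho], [rho, 1.0]]
r_vals = np.array([np.corrcoef(*rng.multivariate_normal([0, 0], cov, n).T)[0, 1]
                   for _ in range(10000)])
print("bias of r:  %+.4f" % (r_vals.mean() - rho))
print("bias of r*: %+.4f" % (olkin_pratt(r_vals, n).mean() - rho))
```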
Bayesian estimation of the discrete coefficient of determination.
Chen, Ting; Braga-Neto, Ulisses M
2016-12-01
The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.
2016-09-01
For an FMCW signal, it was demonstrated that the system is capable of estimating the AOA with a root-mean-square (RMS) error of 0.29° at 1° resolution. For a P4 coded signal, the RMS error in estimating the AOA is 0.32° at 1° resolution.
Using Least Squares for Error Propagation
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2015-01-01
The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
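The recipe is easy to demonstrate numerically; a sketch using NumPy's polynomial fit, which can return the parameter covariance matrix directly (the data are invented for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Straight-line fit; cov=True also returns the parameter covariance matrix
coef, cov = np.polyfit(x, y, deg=1, cov=True)

# Standard errors are the square roots of the diagonal elements
se = np.sqrt(np.diag(cov))
print("slope     = %.3f +/- %.3f" % (coef[0], se[0]))
print("intercept = %.3f +/- %.3f" % (coef[1], se[1]))
```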
A methodology based on reduced complexity algorithm for system applications using microprocessors
NASA Technical Reports Server (NTRS)
Yan, T. Y.; Yao, K.
1988-01-01
The paper presents a methodology for the analysis and design of a minimum mean-square error criterion linear system incorporating a tapped delay line (TDL), where all the full-precision multiplications in the TDL are constrained to be powers of two. A linear equalizer based on a dispersive and additive-noise channel is presented. This microprocessor implementation with optimized power-of-two TDL coefficients achieves system performance comparable to optimum linear equalization with full-precision multiplications for an input data rate of 300 baud.
NASA Astrophysics Data System (ADS)
Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.
Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error
NASA Astrophysics Data System (ADS)
Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi
2017-12-01
Prediction using a forecasting method is one of the most important activities for an organization. The selection of an appropriate forecasting method is also important, but the percentage error of a method is more important if decision makers are to adopt the right approach. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the percentage of mistakes of the least squares method resulted in an error of 9.77%, and it was decided that the least squares method works for time series and trend data.
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir
2016-10-14
A piezo-resistive pressure sensor is made of silicon, whose behaviour is considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid-kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (a Radial Basis Function (RBF) kernel) and a global kernel (a polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, namely the maximum absolute relative error, the minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
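The hybrid kernel amounts to a weighted sum of a local RBF kernel and a global polynomial kernel; a sketch of that construction (the mixing weight and kernel parameters shown are illustrative; in the paper they are tuned by the chaotic ions motion algorithm):

```python
import numpy as np

def hybrid_kernel(x1, x2, gamma=1.0, degree=2, coef0=1.0, lam=0.7):
    """Weighted sum of a local RBF kernel and a global polynomial kernel."""
    rbf = np.exp(-gamma * np.sum((x1 - x2) ** 2))  # local behaviour
    poly = (np.dot(x1, x2) + coef0) ** degree      # global behaviour
    return lam * rbf + (1.0 - lam) * poly
```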
Channel estimation in few mode fiber mode division multiplexing transmission system
NASA Astrophysics Data System (ADS)
Hei, Yongqiang; Li, Li; Li, Wentao; Li, Xiaohui; Shi, Guangming
2018-03-01
It is abundantly clear that obtaining channel state information (CSI) is of great importance for equalization and detection in coherent receivers. However, to the best of the authors' knowledge, in most of the existing literature CSI is assumed to be perfectly known at the receiver, and few works discuss the effects on MDM system performance of imperfect CSI caused by channel estimation. Motivated by that, in this paper the channel estimation in few mode fiber (FMF) mode division multiplexing (MDM) systems is investigated, in which two classical channel estimation methods, the least square (LS) method and the minimum mean square error (MMSE) method, are discussed under the assumption of spatially white noise lumped at the receiver side of the MDM system. Both the capacity and the BER performance of the MDM system affected by mode-dependent gain or loss (MDL) with different channel estimation errors have been studied. Simulation results show that the capacity and BER performance of the MDM system can be further deteriorated by channel estimation, and that a 1e-3 variance of channel estimation error is acceptable in an MDM system with 0-6 dB MDL values.
Determination of suitable drying curve model for bread moisture loss during baking
NASA Astrophysics Data System (ADS)
Soleimani Pour-Damanab, A. R.; Jafary, A.; Rafiee, S.
2013-03-01
This study presents mathematical modelling of bread moisture loss, or drying, during baking in a conventional bread baking process. In order to estimate and select the appropriate moisture loss curve equation, 11 different models, semi-theoretical and empirical, were applied to the experimental data and compared according to their correlation coefficients, chi-squared test and root mean square error, as obtained by nonlinear regression analysis. Of all the drying models, the Page model was selected as the best one according to the correlation coefficient, chi-squared test and root mean square error values, and for its simplicity. The mean absolute estimation error of the proposed model by linear regression analysis for the natural and forced convection modes was 2.43 and 4.74%, respectively.
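The Page model expresses the moisture ratio as MR = exp(−k tⁿ); a sketch of fitting it to a drying curve with SciPy (the data points are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    """Page thin-layer drying model: moisture ratio as a function of time."""
    return np.exp(-k * t ** n)

t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])    # baking time, min
mr = np.array([1.0, 0.72, 0.50, 0.34, 0.22, 0.15])  # moisture ratio

(k, n), _ = curve_fit(page_model, t, mr, p0=(0.05, 1.0))
rmse = np.sqrt(np.mean((page_model(t, k, n) - mr) ** 2))
print("k = %.4f, n = %.3f, RMSE = %.4f" % (k, n, rmse))
```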
A hybrid SVM-FFA method for prediction of monthly mean global solar radiation
NASA Astrophysics Data System (ADS)
Shamshirband, Shahaboddin; Mohammadi, Kasra; Tong, Chong Wen; Zamani, Mazdak; Motamedi, Shervin; Ch, Sudheer
2016-07-01
In this study, a hybrid support vector machine-firefly optimization algorithm (SVM-FFA) model is proposed to estimate monthly mean horizontal global solar radiation (HGSR). The merit of SVM-FFA is assessed statistically by comparing its performance with three previously used approaches. Using each approach and long-term measured HGSR, three models are calibrated by considering different sets of meteorological parameters measured for Bandar Abbass, situated in Iran. It is found that model (3), utilizing the combination of relative sunshine duration, difference between maximum and minimum temperatures, relative humidity, water vapor pressure, average temperature, and extraterrestrial solar radiation, shows superior performance with all approaches. Moreover, the extraterrestrial radiation is introduced as a significant parameter for accurately estimating the global solar radiation. The survey results reveal that the developed SVM-FFA approach is highly capable of providing favorable predictions with significantly higher precision than the other examined techniques. For SVM-FFA (3), the statistical indicators of mean absolute percentage error (MAPE), root mean square error (RMSE), relative root mean square error (RRMSE), and coefficient of determination (R2) are 3.3252 %, 0.1859 kWh/m2, 3.7350 %, and 0.9737, respectively, which according to the RRMSE indicates an excellent performance. As a further evaluation of SVM-FFA (3), the ratio of estimated to measured values is computed, and 47 out of the 48 months considered as testing data fall between 0.90 and 1.10. Also, a further verification concludes that SVM-FFA (3) offers absolute superiority over the empirical models using relatively similar input parameters. In a nutshell, the hybrid SVM-FFA approach is highly efficient for estimating the HGSR.
Analysis of tractable distortion metrics for EEG compression applications.
Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando
2012-07-01
Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality of the reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to the allowable noise in EEG recordings. As a result, expert clinicians may have difficulty interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that using the root-mean-square error as the target parameter in EEG compression allows both clinicians and scientists to infer whether the coding error is clinically acceptable or not, at no cost to the compression ratio.
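The two criteria differ only in normalization: the percentage root-mean-square difference is relative to the signal energy, whereas the RMSE stays in the signal's physical units; a sketch of both, assuming the common textbook definitions:

```python
import numpy as np

def prd(original, reconstructed):
    """Percentage root-mean-square difference (relative, dimensionless)."""
    num = np.sum((original - reconstructed) ** 2)
    den = np.sum(original ** 2)
    return 100.0 * np.sqrt(num / den)

def rmse(original, reconstructed):
    """Root-mean-square error, in the signal's own units (e.g. microvolts)."""
    return np.sqrt(np.mean((original - reconstructed) ** 2))
```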
MIMO equalization with adaptive step size for few-mode fiber transmission systems.
van Uden, Roy G H; Okonkwo, Chigo M; Sleiffer, Vincent A J M; de Waardt, Hugo; Koonen, Antonius M J
2014-01-13
Optical multiple-input multiple-output (MIMO) transmission systems generally employ minimum mean squared error time or frequency domain equalizers. Using an experimental 3-mode dual polarization coherent transmission setup, we show that the convergence time of the MMSE time domain equalizer (TDE) and frequency domain equalizer (FDE) can be reduced by approximately 50% and 30%, respectively. The criterion used to estimate the system convergence time is the time it takes for the MIMO equalizer to reach an average output error which is within a margin of 5% of the average output error after 50,000 symbols. The convergence reduction difference between the TDE and FDE is attributed to the limited maximum step size for stable convergence of the frequency domain equalizer. The adaptive step size requires a small overhead in the form of a lookup table. It is highlighted that the convergence time reduction is achieved without sacrificing optical signal-to-noise ratio performance.
A family of chaotic pure analog coding schemes based on baker's map function
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun
2015-12-01
This paper considers a family of pure analog coding schemes constructed from dynamic systems governed by chaotic functions: the baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, have been developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes provide balanced protection against fold error (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system has graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms the digital ones over a wide signal-to-noise ratio (SNR) range.
NASA Astrophysics Data System (ADS)
Moaveni, Bijan; Khosravi Roqaye Abad, Mahdi; Nasiri, Sayyad
2015-10-01
In this paper, the vehicle longitudinal velocity during the braking process is estimated by measuring the wheel speeds. A new algorithm based on the unknown input Kalman filter is developed to estimate the vehicle longitudinal velocity with minimum mean square error and without using the value of the braking torque in the estimation procedure. The stability and convergence of the filter are analysed and proved. The effectiveness of the method is shown by designing a real experiment and comparing the estimation result with the actual longitudinal velocity computed from a three-axis accelerometer output.
Preisig, James C
2005-07-01
Equations are derived for analyzing the performance of channel-estimate-based equalizers. The performance is characterized in terms of the mean squared soft decision error (σ_s^2) of each equalizer. This error is decomposed into two components: the minimum achievable error (σ_0^2) and the excess error (σ_e^2). The former is the soft decision error that would be realized by the equalizer if the filter coefficient calculation were based upon perfect knowledge of the channel impulse response and the statistics of the interfering noise field. The latter is the additional soft decision error that is realized due to errors in the estimates of these channel parameters. These expressions accurately predict the equalizer errors observed in the processing of experimental data by a channel-estimate-based decision feedback equalizer (DFE) and a passive time-reversal equalizer. Further expressions are presented that allow equalizer performance to be predicted given the scattering function of the acoustic channel. The analysis using these expressions yields insights into the features of surface scattering that most significantly impact equalizer performance in shallow water environments, and motivates the implementation of a DFE that is robust with respect to channel estimation errors.
NASA Astrophysics Data System (ADS)
Samadi-Maybodi, Abdolraouf; Darzi, S. K. Hassani Nejad
2008-10-01
Resolution of binary mixtures of vitamin B12, methylcobalamin and B12 coenzyme, with minimum sample pre-treatment and without analyte separation, has been successfully achieved by the methods of partial least squares with one dependent variable (PLS1), orthogonal signal correction/partial least squares (OSC/PLS), principal component regression (PCR) and hybrid linear analysis (HLA). The analytical data were obtained from UV-vis spectra. The UV-vis spectra of vitamin B12, methylcobalamin and B12 coenzyme were recorded under the same spectral conditions. A central composite design was used over the ranges of 10-80 mg L-1 for vitamin B12 and methylcobalamin and 20-130 mg L-1 for B12 coenzyme. Model refinement and validation were performed by cross-validation. The minimum root mean square error of prediction (RMSEP) was 2.26 mg L-1 for vitamin B12 with PLS1, 1.33 mg L-1 for methylcobalamin with OSC/PLS and 3.24 mg L-1 for B12 coenzyme with HLA. Figures of merit such as selectivity, sensitivity, analytical sensitivity and LOD were determined for the three compounds. The procedure was successfully applied to the simultaneous determination of the three compounds in synthetic mixtures and in a pharmaceutical formulation.
Duan, Hanjun; Wu, Haifeng; Zeng, Yu; Chen, Yuebin
2016-03-26
In a passive ultra-high frequency (UHF) radio-frequency identification (RFID) system, tag collision is generally resolved on the medium access control (MAC) layer. However, some collided tag signals can be recovered on the physical (PHY) layer, which enhances the identification efficiency of the RFID system. For recovery on the PHY layer, channel estimation is a critical issue: good channel estimation helps to recover the collided signals. Existing channel estimators work well for two collided tags; when the number of collided tags is beyond two, however, they suffer larger estimation errors. In this paper, we propose a novel channel estimator for the UHF RFID system. It adopts an orthogonal matrix based on the preamble information known to the reader and applies a minimum-mean-square-error (MMSE) criterion to estimate the channels. From the estimated channels, we can accurately separate the collided signals and recover them. By means of numerical results, we show that the proposed estimator has lower estimation errors and higher separation efficiency than the existing estimators.
Perez-Guaita, David; Kuligowski, Julia; Quintás, Guillermo; Garrigues, Salvador; Guardia, Miguel de la
2013-03-30
Locally weighted partial least squares regression (LW-PLSR) has been applied to the determination of four clinical parameters in human serum samples (total protein, triglyceride, glucose and urea contents) by Fourier transform infrared (FTIR) spectroscopy. Classical LW-PLSR models were constructed using different spectral regions. For the selection of LW-PLSR modeling parameters, a multi-parametric study was carried out employing the minimum root-mean-square error of cross validation (RMSCV) as the objective function. In order to overcome the effect of strong matrix interferences on the predictive accuracy of LW-PLSR models, this work focuses on sample selection. Accordingly, a novel strategy for the development of local models is proposed, based on the use of (i) principal component analysis (PCA) performed on an analyte-specific spectral region to identify the most similar sample spectra and (ii) partial least squares regression (PLSR) constructed using the whole spectrum. Results found using this strategy were compared to those provided by PLSR using the same spectral intervals as for LW-PLSR. Prediction errors found by both the classical and the modified LW-PLSR improved on those obtained by PLSR. Hence, both proposed approaches were useful for the determination of analytes present in a complex matrix, as in the case of human serum samples. Copyright © 2013 Elsevier B.V. All rights reserved.
Boiret, Mathieu; Meunier, Loïc; Ginot, Yves-Michel
2011-02-20
A near infrared (NIR) method was developed for determination of the tablet potency of an active pharmaceutical ingredient (API) in a complex coated tablet matrix. The calibration set contained samples from laboratory- and production-scale batches. The reference values were obtained by high performance liquid chromatography (HPLC), and partial least squares (PLS) regression was used to establish a model. The model was challenged by calculating the tablet potency of two external test sets; root mean square errors of prediction were 2.0% and 2.7%, respectively. To use this model with a second spectrometer from the production field, a calibration transfer method called piecewise direct standardisation (PDS) was used. After the transfer, the root mean square error of prediction of the first test set was 2.4%, compared to 4.0% without transferring the spectra. A statistical technique using bootstrapping of the PLS residuals was used to estimate confidence intervals of the tablet potency calculations. This method requires an optimised PLS model, selection of the bootstrap number and determination of the risk. In the case of a chemical analysis, the tablet potency value will be included within the confidence interval calculated by the bootstrap method. An easy-to-use graphical interface was developed to easily determine whether the predictions, surrounded by minimum and maximum values, are within the specifications defined by the regulatory organisation. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhu, Lianqing; Chen, Yunfang; Chen, Qingshan; Meng, Hao
2011-05-01
According to the minimum zone condition, a method based on a genetic algorithm (GA) is proposed for evaluating the profile error of an Archimedes helicoid surface. The mathematical model of the surface is provided, and the unknown parameters in the surface equation are acquired through the least squares method. The principle of the GA is explained. Then, the profile error of the Archimedes helicoid surface is obtained through GA optimization. To validate the proposed method, the profile error of an Archimedes helicoid surface, an Archimedes cylindrical worm (ZA worm) surface, is evaluated. The results show that the proposed method correctly evaluates the profile error of an Archimedes helicoid surface and satisfies the evaluation standard of the minimum zone method. It can be applied to the measured profile error data of complex surfaces obtained by coordinate measuring machines (CMMs).
Two Enhancements of the Logarithmic Least-Squares Method for Analyzing Subjective Comparisons
1989-03-25
error term. For this model, the total sum of squares (SSTO), defined as SSTO = Σ_{i=1}^{n} (y_i - ȳ)², can be partitioned into error and regression sums of squares; the regression sum of squares measures the spread of the regression line around the mean value. Mathematically, for the model given by equation A.4, SSTO = SSE + SSR (A.6), where SSTO is the total sum of squares (i.e., the variance of the y_i's), SSE is the error sum of squares, and SSR is the regression sum of squares. SSTO, SSE, and SSR are given
Duong, Minh V; Nguyen, Hieu T; Mai, Tam V-T; Huynh, Lam K
2018-01-03
Master equation/Rice-Ramsperger-Kassel-Marcus (ME/RRKM) theory has been shown to be a powerful framework for modeling the kinetic and dynamic behaviors of a complex gas-phase chemical system on a complicated multiple-species, multiple-channel potential energy surface (PES) over a wide range of temperatures and pressures. Derived from the ME time-resolved species profiles, the macroscopic or phenomenological rate coefficients are essential for many reaction engineering applications, including those in combustion and atmospheric chemistry. Therefore, in this study, a least-squares-based approach named Global Minimum Profile Error (GMPE) was proposed and implemented in the MultiSpecies-MultiChannel (MSMC) code (Int. J. Chem. Kinet., 2015, 47, 564) to extract macroscopic rate coefficients for such a complicated system. The capability and limitations of the new approach were discussed in several well-defined test cases.
Effect of nonideal square-law detection on static calibration in noise-injection radiometers
NASA Technical Reports Server (NTRS)
Hearn, C. P.
1984-01-01
The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.
VizieR Online Data Catalog: delta Cep VEGA/CHARA observing log (Nardetto+, 2016)
NASA Astrophysics Data System (ADS)
Nardetto, N.; Merand, A.; Mourard, D.; Storm, J.; Gieren, W.; Fouque, P.; Gallenne, A.; Graczyk, D.; Kervella, P.; Neilson, H.; Pietrzynski, G.; Pilecki, B.; Breitfelder, J.; Berio, P.; Challouf, M.; Clausse, J.-M.; Ligi, R.; Mathias, P.; Meilland, A.; Perraut, K.; Poretti, E.; Rainer, M.; Spang, A.; Stee, P.; Tallon-Bosc, I.; Ten Brummelaar, T.
2016-07-01
The columns give, respectively, the date, the RJD, the hour angle (HA), the minimum and maximum wavelengths over which the squared visibility is calculated, the projected baseline length Bp and its orientation PA, and the signal-to-noise ratio on the fringe peak; the last column provides the calibrated squared visibility V2 together with the statistical error on V2 and the systematic error on V2 (see text for details). The data are available on the Jean-Marie Mariotti Center OiDB service (available at http://oidb.jmmc.fr). (1 data file).
NASA Astrophysics Data System (ADS)
Yahampath, Pradeepa
2017-12-01
Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission, however, is optimal at all CSNRs if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
NASA Astrophysics Data System (ADS)
Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.
2017-12-01
In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses of the mean and mean square performance of the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve superior performance to the existing DRLS algorithm with fixed forgetting factor when applied to scenarios of distributed parameter and spectrum estimation. Besides, the simulation results also demonstrate a good match with our analytical expressions.
Some Results on Mean Square Error for Factor Score Prediction
ERIC Educational Resources Information Center
Krijnen, Wim P.
2006-01-01
For the confirmatory factor model a series of inequalities is given with respect to the mean square error (MSE) of three main factor score predictors. The eigenvalues of these MSE matrices are a monotonic function of the eigenvalues of the matrix Γ_ρ = Θ^{1/2}Λ_ρ'Ψ_ρ…
ERIC Educational Resources Information Center
Savalei, Victoria
2012-01-01
The fit index root mean square error of approximation (RMSEA) is extremely popular in structural equation modeling. However, its behavior under different scenarios remains poorly understood. The present study generates continuous curves where possible to capture the full relationship between RMSEA and various "incidental parameters," such as…
A method of bias correction for maximal reliability with dichotomous measures.
Penev, Spiridon; Raykov, Tenko
2010-02-01
This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.
Digital transceiver design for two-way AF-MIMO relay systems with imperfect CSI
NASA Astrophysics Data System (ADS)
Hu, Chia-Chang; Chou, Yu-Fei; Chen, Kui-He
2013-09-01
In the paper, combined optimization of the terminal precoders/equalizers and the single-relay precoder is proposed for an amplify-and-forward (AF) multiple-input multiple-output (MIMO) two-way single-relay system with correlated channel uncertainties. Both the terminal transceivers and the relay precoding matrix are designed based on the minimum mean square error (MMSE) criterion when the terminals are unable to completely cancel self-interference due to imperfect correlated channel state information (CSI). This robust joint optimization problem of beamforming and precoding matrices under power constraints is neither concave nor convex, so a nonlinear matrix-form conjugate gradient (MCG) algorithm is applied to explore local optimal solutions. Simulation results show that the robust transceiver design effectively overcomes the bit-error-rate (BER) loss due to the inclusion of correlated channel uncertainties and residual self-interference.
NASA Astrophysics Data System (ADS)
Shima, Tomoyuki; Tomeba, Hiromichi; Adachi, Fumiyuki
Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of time-domain spreading and orthogonal frequency division multiplexing (OFDM). In orthogonal MC DS-CDMA, the frequency diversity gain can be obtained by applying frequency-domain equalization (FDE) based on minimum mean square error (MMSE) criterion to a block of OFDM symbols and can improve the bit error rate (BER) performance in a severe frequency-selective fading channel. FDE requires an accurate estimate of the channel gain. The channel gain can be estimated by removing the pilot modulation in the frequency domain. In this paper, we propose a pilot-assisted channel estimation suitable for orthogonal MC DS-CDMA with FDE and evaluate, by computer simulation, the BER performance in a frequency-selective Rayleigh fading channel.
About an adaptively weighted Kaplan-Meier estimate.
Plante, Jean-François
2009-09-01
The minimum averaged mean squared error nonparametric adaptive weights use data from m possibly different populations to infer about one population of interest. The definition of these weights is based on the properties of the empirical distribution function. We use the Kaplan-Meier estimate to let the weights accommodate right-censored data and use them to define the weighted Kaplan-Meier estimate. The proposed estimate is smoother than the usual Kaplan-Meier estimate and converges uniformly in probability to the target distribution. Simulations show that the performance of the weighted Kaplan-Meier estimate on finite samples exceeds that of the usual Kaplan-Meier estimate. A case study is also presented.
Simple Form of MMSE Estimator for Super-Gaussian Prior Densities
NASA Astrophysics Data System (ADS)
Kittisuwan, Pichid
2015-04-01
The denoising methods that have become popular in recent years for additive white Gaussian noise (AWGN) are Bayesian estimation techniques, e.g., maximum a posteriori (MAP) and minimum mean square error (MMSE) estimation. For super-Gaussian prior densities, it is well known that the MMSE estimator has a complicated form. In this work, we derive the MMSE estimator using a Taylor series and show that the proposed estimator leads to a simple formula. An extension of this estimator to the Pearson type VII prior density is also offered. The experimental results show that the proposed approximation to the original MMSE nonlinearity is reasonably good.
Optimal estimation of the optomechanical coupling strength
NASA Astrophysics Data System (ADS)
Bernád, József Zsolt; Sanavio, Claudio; Xuereb, André
2018-06-01
We apply the formalism of quantum estimation theory to obtain information about the value of the nonlinear optomechanical coupling strength. In particular, we discuss the minimum mean-square error estimator and a quantum Cramér-Rao-type inequality for the estimation of the coupling strength. Our estimation strategy reveals some cases where quantum statistical inference is inconclusive and merely results in the reinforcement of prior expectations. We show that these situations also involve the highest expected information losses. We demonstrate that interaction times on the order of one time period of mechanical oscillations are the most suitable for our estimation scenario, and compare situations involving different photon and phonon excitations.
Spiral tracing on a touchscreen is influenced by age, hand, implement, and friction.
Heintz, Brittany D; Keenan, Kevin G
2018-01-01
Dexterity impairments are well documented in older adults, though it is unclear how these influence touchscreen manipulation. This study examined age-related differences while tracing on high- and low-friction touchscreens using the finger or stylus. 26 young and 24 older adults completed an Archimedes spiral tracing task on a touchscreen mounted on a force sensor. Root mean square error was calculated to quantify performance. Root mean square error increased by 29.9% for older vs. young adults using the fingertip, but was similar to young adults when using the stylus. Although other variables (e.g., touchscreen usage, sensation, and reaction time) differed between age groups, these variables were not related to increased error in older adults while using their fingertip. Root mean square error also increased on the low-friction surface for all subjects. These findings suggest that utilizing a stylus and increasing surface friction may improve touchscreen use in older adults.
NASA Technical Reports Server (NTRS)
Piersol, Allan G.
1991-01-01
Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These expressions have been applied to the problem of selecting the optimum averaging times that minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 s for the maximum overall level and T_oi = 4.88 f_i^(-0.2) s for the maximum 1/3 octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 s for the maximum overall level and T_oi = 7.10 f_i^(-0.2) s for the maximum 1/3 octave band levels inside the Space Shuttle PLB, where f_i is the 1/3 octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
NASA Technical Reports Server (NTRS)
Amling, G. E.; Holms, A. G.
1973-01-01
A computer program is described that performs a statistical multiple-decision procedure called chain pooling. It uses a number of mean squares assigned to error variance that is conditioned on the relative magnitudes of the mean squares. The model selection is done according to user-specified levels of type 1 or type 2 error probabilities.
Estimators of The Magnitude-Squared Spectrum and Methods for Incorporating SNR Uncertainty
Lu, Yang; Loizou, Philipos C.
2011-01-01
Statistical estimators of the magnitude-squared spectrum are derived based on the assumption that the magnitude-squared spectrum of the noisy speech signal can be computed as the sum of the (clean) signal and noise magnitude-squared spectra. Maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators are derived based on a Gaussian statistical model. The gain function of the MAP estimator was found to be identical to the gain function used in the ideal binary mask (IdBM) that is widely used in computational auditory scene analysis (CASA). As such, it was binary and assumed the value of 1 if the local SNR exceeded 0 dB, and the value of 0 otherwise. By modeling the local instantaneous SNR as an F-distributed random variable, soft masking methods were derived that incorporate SNR uncertainty. In particular, the soft masking method that weights the noisy magnitude-squared spectrum by the a priori probability that the local SNR exceeds 0 dB was shown to be identical to the Wiener gain function. Results indicated that the proposed estimators yielded significantly better speech quality than the conventional MMSE spectral power estimators, in terms of lower residual noise and lower speech distortion. PMID:21886543
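The relationship between the two gains is compact: the MAP/IdBM gain is a hard threshold on the local SNR ξ at 0 dB, while the soft mask that weights by the probability of ξ exceeding 0 dB reduces to the Wiener gain ξ/(ξ + 1); a sketch under that simplification (the paper's F-distribution SNR model is omitted):

```python
import numpy as np

def binary_mask_gain(local_snr):
    """Hard (ideal binary mask) gain: keep bins whose SNR exceeds 0 dB."""
    return (local_snr > 1.0).astype(float)  # 0 dB equals a linear SNR of 1

def wiener_gain(local_snr):
    """Soft-mask gain; identical to the classical Wiener filter gain."""
    return local_snr / (local_snr + 1.0)

snr = np.array([0.1, 0.5, 1.0, 2.0, 10.0])  # linear local SNRs per bin
print(binary_mask_gain(snr))
print(wiener_gain(snr))
```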
A Hybrid Multiuser Detector Based on MMSE and AFSA for TDRS System Forward Link
Yin, Zhendong; Liu, Xiaohui
2014-01-01
This study focuses on multiuser detection in the tracking and data relay satellite (TDRS) system forward link. Minimum mean square error (MMSE) detection is a low-complexity multiuser detection method, but the MMSE detector cannot achieve a satisfactory bit error ratio and near-far resistance, whereas the artificial fish swarm algorithm (AFSA) excels at optimization and can achieve global convergence efficiently. Therefore, a hybrid multiuser detector based on MMSE and AFSA (MMSE-AFSA) is proposed in this paper. The results of MMSE and its modified forms are used as the initial values of the artificial fish to accelerate global convergence and reduce the number of iterations for AFSA. The simulation results show that the bit error ratio and near-far resistance of the proposed detector are much better than those of MF, DEC, and MMSE, and quite close to those of OMD. Furthermore, the proposed MMSE-AFSA detector also has a large system capacity. PMID:24883418
Regional application of multi-layer artificial neural networks in 3-D ionosphere tomography
NASA Astrophysics Data System (ADS)
Ghaffari Razin, Mir Reza; Voosoghi, Behzad
2016-08-01
Tomography is a very cost-effective method to study the physical properties of the ionosphere. In this paper, a residual minimization training neural network (RMTNN) is used in voxel-based tomography to reconstruct the 3-D ionospheric electron density with high spatial resolution. For the numerical experiments, observations collected at 37 GPS stations from the Iranian permanent GPS network (IPGN) are used. A smoothed TEC approach was used for absolute STEC recovery. To improve the vertical resolution, empirical orthogonal functions (EOFs) obtained from the International Reference Ionosphere 2012 (IRI-2012) are used as the object function in training the neural network. Ionosonde observations are used to validate the reliability of the proposed method. The minimum relative error for RMTNN is 1.64% and the maximum relative error is 15.61%. Also, a root mean square error (RMSE) of 0.17 × 10^11 electrons/m3 is computed for RMTNN, which is less than the RMSE of IRI-2012. The results show that RMTNN has higher accuracy and computational speed than other ionosphere reconstruction methods.
NASA Astrophysics Data System (ADS)
Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles
2008-12-01
We propose a globally convergent baud-spaced blind equalization method in this paper. The method is based on the application of both generalized pattern optimization and channel surfing reinitialization. The unimodal cost function relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severe frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm's performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. For nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with a full channel surfing reinitialization strategy; for constant modulus signals, the performances are comparable.
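For comparison with the pattern-search approach above, the stochastic-gradient CMA that serves as the baseline updates its taps as in the following sketch (our paraphrase of the textbook algorithm; R2 is the constant-modulus dispersion constant):

    import numpy as np

    def cma_step(w, x, mu, R2):
        # w: equalizer taps; x: current input regressor; mu: step size
        y = np.vdot(w, x)                    # equalizer output y = w^H x
        e = (np.abs(y) ** 2 - R2) * y        # CMA error term
        return w - mu * e.conjugate() * x    # gradient step on E[(|y|^2 - R2)^2]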
Ultrasonic tracking of shear waves using a particle filter.
Ingle, Atul N; Ma, Chi; Varghese, Tomy
2015-11-01
This paper discusses an application of particle filtering for estimating shear wave velocity (SWV) in tissue using ultrasound elastography data. Shear wave velocity estimates are of significant clinical value because they help differentiate stiffer areas from softer areas, which can indicate potential pathology. Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model whose hidden states are the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing the time-to-peak curve to obtain a fit that is optimal in the minimum mean squared error sense. Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance than standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard least-squares fit. Estimates of area obtained from the particle-filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. The particle filtering approach can be used for producing visually appealing SWV reconstructions by effectively delineating the various areas of the phantom, with image quality comparable to existing techniques.
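The denoising step can be illustrated with a toy bootstrap particle filter, assuming a random-walk model for the hidden (noiseless) time-to-peak and Gaussian observation noise; the paper's model, tuning and resampling details are more elaborate:

    import numpy as np

    def pf_smooth(z, n_particles=500, q=0.01, r=0.1, seed=0):
        # z: noisy time-to-peak sequence; returns per-step posterior means
        rng = np.random.default_rng(seed)
        z = np.asarray(z, dtype=float)
        particles = z[0] + r * rng.standard_normal(n_particles)
        est = np.empty_like(z)
        for t, zt in enumerate(z):
            particles += q * rng.standard_normal(n_particles)  # random-walk prior
            w = np.exp(-0.5 * ((zt - particles) / r) ** 2)     # Gaussian likelihood
            w /= w.sum()
            est[t] = w @ particles       # posterior mean = MMSE-optimal estimate
            particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
        return est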
Distribution of kriging errors, the implications and how to communicate them
NASA Astrophysics Data System (ADS)
Li, Hong Yi; Milne, Alice; Webster, Richard
2016-04-01
Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which is itself estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there and compares the kriged value with the value observed to obtain the deviation or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σ_K²), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σ_K² ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, and so the kriging variances are unbiased predictors of the squared errors across the region. The method is based on the assumption that the errors have a normal distribution. The squared deviation ratio (SDR) should therefore be distributed as χ² with one degree of freedom, with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites. The uncertainty is typically under-estimated for the extreme observations and compensated for by over-estimation elsewhere. Statisticians must tell users of this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (ECa) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed ECas were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993 but a MedSDR of 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the value of 0 for a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.
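The cross-validation diagnostics described above reduce to a few lines, given arrays of observed values, leave-one-out kriged predictions and the corresponding kriging variances (a generic sketch; argument names are ours):

    import numpy as np

    def kriging_cv_stats(observed, predicted, kriging_var):
        err = predicted - observed
        sdr = err ** 2 / kriging_var          # squared deviation ratios
        return {"ME": err.mean(),             # near 0 if unbiased
                "MSDR": sdr.mean(),           # near 1 if variances are right on average
                "MedSDR": np.median(sdr)}     # near 0.455 if errors are normal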
Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin
2015-12-01
Data distributions are usually severely skewed by the presence of hot spots in contaminated sites, which causes difficulties for accurate geostatistical data transformation. Three typical normal-distribution transformation methods, the normal score, Johnson, and Box-Cox transformations, were applied to compare the effects of spatial interpolation on normal-distribution-transformed data of benzo(b)fluoranthene in a large-scale coking-plant-contaminated site in north China. All three transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene data, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross-validation showed that Johnson ordinary kriging had a minimum root-mean-square error of 1.17 and a mean error of 0.19, and so was more accurate than the other two models. The areas with fewer sampling points and with high levels of contamination showed the largest prediction standard errors in the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy of the determination of remediation boundaries.
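One of the three transformations compared above can be sketched with SciPy: apply a Box-Cox transform (maximum-likelihood lambda) and check how far skewness and kurtosis move toward their Gaussian values; the lognormal mock data stand in for the skewed field measurements:

    import numpy as np
    from scipy import stats

    x = np.random.default_rng(1).lognormal(sigma=1.0, size=200)  # skewed mock data
    xt, lam = stats.boxcox(x)                      # maximum-likelihood lambda
    print(stats.skew(x), stats.kurtosis(x))        # strongly skewed, heavy-tailed
    print(stats.skew(xt), stats.kurtosis(xt))      # much closer to 0, 0
    print(stats.kstest(stats.zscore(xt), "norm"))  # Kolmogorov-Smirnov check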
ERIC Educational Resources Information Center
Pan, Tianshu; Yin, Yue
2012-01-01
In the discussion of mean square difference (MSD) and standard error of measurement (SEM), Barchard (2012) concluded that the MSD between 2 sets of test scores is greater than 2(SEM)² and that the SEM underestimates the score difference between 2 tests when the 2 tests are not parallel. This conclusion has limitations for 2 reasons. First,…
ERIC Educational Resources Information Center
Li, Libo; Bentler, Peter M.
2011-01-01
MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of…
A root-mean-square approach for predicting fatigue crack growth under random loading
NASA Technical Reports Server (NTRS)
Hudson, C. M.
1981-01-01
A method for predicting fatigue crack growth under random loading which employs the concept of Barsom (1976) is presented. In accordance with this method, the loading history for each specimen is analyzed to determine the root-mean-square maximum and minimum stresses, and the predictions are made by assuming the tests have been conducted under constant-amplitude loading at the root-mean-square maximum and minimum levels. The procedure requires a simple computer program and a desk-top computer. For the eleven predictions made, the ratios of the predicted lives to the test lives ranged from 2.13 to 0.82, which is a good result, considering that the normal scatter in the fatigue-crack-growth rates may range from a factor of two to four under identical loading conditions.
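The root-mean-square reduction of a random load history can be sketched as follows (our illustration; in practice the peaks and valleys would come from a cycle-counting convention such as rainflow, and the sign convention for the minimum level is a modeling choice):

    import numpy as np

    def rms_levels(history):
        s = np.asarray(history, dtype=float)
        mid = s[1:-1]
        peaks = mid[(mid > s[:-2]) & (mid > s[2:])]     # local maxima
        valleys = mid[(mid < s[:-2]) & (mid < s[2:])]   # local minima
        s_max_rms = np.sqrt(np.mean(peaks ** 2))        # RMS maximum stress
        s_min_rms = np.sqrt(np.mean(valleys ** 2))      # RMS minimum stress (magnitude)
        return s_max_rms, s_min_rms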
Linearized finite-element method solution of the ion-exchange nonlinear diffusion model
NASA Astrophysics Data System (ADS)
Badr, Mohamed M.; Swillam, Mohamed A.
2017-04-01
Ion exchange is one of the most common techniques used in glass waveguide fabrication. It has many advantages, such as low cost, ease of implementation, and simple equipment requirements. The technology is based on the substitution of some of the host ions in the glass (typically Na+) with other ions that possess different characteristics in terms of size and polarizability. The newly diffused ions produce a region with a relatively higher refractive index in which light can be guided. A critical issue arises when designing such waveguides: carefully and precisely determining the resultant index profile. This task has proven to be arduous, as the process is generally governed by a nonlinear diffusion model with no general analytical solution. Furthermore, numerical solutions become unreliable, in terms of stability and mean squared error, in some cases, especially for the K+-Na+ ion-exchanged waveguide, which is the best candidate for producing waveguides with refractive index differences compatible with those of commercially available optical fibers. Linearized finite-element method formulations were used to provide a reliable tool that solves the nonlinear diffusion model of the ion exchange in both one- and two-dimensional spaces. Additionally, the annealed channel waveguide case has been studied. In all cases, unprecedented stability and minimum mean squared error were achieved.
De-noising of 3D multiple-coil MR images using modified LMMSE estimator.
Yaghoobi, Nima; Hasanzadeh, Reza P R
2018-06-20
De-noising is a crucial topic in Magnetic Resonance Imaging (MRI); the aim is to suppress noise while losing as little Magnetic Resonance (MR) image information as possible and preserving detail. Nowadays multiple-coil MRI systems are preferred to single-coil ones because they accelerate the imaging process. Because the noise models in single-coil and multiple-coil MRI systems differ, de-noising methods adapted to single-coil systems do not work appropriately on multiple-coil ones. The noise in single-coil MRI systems is Rician, while in multiple-coil systems (if no subsampling occurs in k-space or GRAPPA reconstruction is performed in the coils) it follows a noncentral chi (nc-χ) distribution. In this paper, a new filtering method based on the Linear Minimum Mean Square Error (LMMSE) estimator is proposed for multiple-coil MR images corrupted by nc-χ noise. In the presented method, to obtain an optimal selection of similar voxels, the Bayesian Mean Square Error (BMSE) criterion is used and proved for the nc-χ noise model, and a nonlocal voxel selection methodology is proposed for the nc-χ distribution. The results illustrate robust and accurate performance compared with the related state-of-the-art methods, on both ideal nc-χ images and GRAPPA-reconstructed ones.
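At its core such a filter applies, voxel by voxel, the generic linear MMSE update sketched below, with the moments estimated locally from the selected similar voxels and corrected for the nc-χ noise model (the paper's BMSE-guided nonlocal selection and the distribution-specific moment formulas are what it contributes beyond this skeleton):

    def lmmse_update(y, mean_x, mean_y, cov_xy, var_y):
        # x_hat = E[x] + Cov(x, y)/Var(y) * (y - E[y]): the gain shrinks the
        # noisy observation toward the local mean in proportion to the signal
        # fraction of the observed variance
        return mean_x + (cov_xy / var_y) * (y - mean_y)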
NASA Astrophysics Data System (ADS)
Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan
2016-08-01
In the present research, three artificial intelligence methods, Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), as well as 48 empirical equations (10 temperature-based, 12 sunshine-based and 26 based on other meteorological parameters) were used to estimate daily solar radiation in Kerman, Iran over the period 1992-2009. To develop the GEP, ANN and ANFIS models, depending on the empirical equations used, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs to the intelligent methods. To compare the accuracy of the empirical equations and the intelligent models, the root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and coefficient of determination (R²) indices were used. The results showed that, in general, the sunshine-based and meteorological-parameters-based scenarios of the ANN and ANFIS models were more accurate than the empirical equations. Moreover, the most accurate method in the studied region was the ANN11 scenario with five inputs. The values of the RMSE, MAE, MARE and R² indices for this model were 1.850 MJ m⁻² day⁻¹, 1.184 MJ m⁻² day⁻¹, 9.58% and 0.935, respectively.
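The four agreement indices used above are straightforward to compute for paired arrays of observed and estimated daily radiation (a generic sketch):

    import numpy as np

    def fit_indices(obs, est):
        obs, est = np.asarray(obs, float), np.asarray(est, float)
        err = est - obs
        rmse = np.sqrt(np.mean(err ** 2))
        mae = np.mean(np.abs(err))
        mare = 100.0 * np.mean(np.abs(err / obs))    # percent
        r2 = np.corrcoef(obs, est)[0, 1] ** 2
        return rmse, mae, mare, r2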
NASA Astrophysics Data System (ADS)
Akmaev, R. A.
1999-04-01
In Part 1 of this work (Akmaev, 1999), an overview is presented of the theory of optimal interpolation (OI) (Gandin, 1963) and related techniques of data assimilation based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995). The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and obtain estimates of the true state, optimal in some sense, from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing, suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors, and the whole approach may be considered constrained least squares or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating additional information are potentially superior to techniques that have no access to such information, for example, conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995).
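The OI analysis step reviewed above has the familiar matrix form analysis = background + K (observations - H background), with the gain chosen to minimize the expected error variance. A minimal sketch, assuming a linear observation operator H and known background- and observation-error covariances B and R:

    import numpy as np

    def oi_analysis(xb, y, H, B, R):
        S = H @ B @ H.T + R                 # innovation covariance
        K = np.linalg.solve(S, H @ B).T     # K = B H^T S^{-1} (S, B symmetric)
        return xb + K @ (y - H @ xb)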
A hybrid demodulation method of fiber-optic Fabry-Perot pressure sensor
NASA Astrophysics Data System (ADS)
Yu, Le; Lang, Jianjun; Pan, Yong; Wu, Di; Zhang, Min
2013-12-01
Fiber-optic Fabry-Perot pressure sensors have been widely applied to measure pressure in oilfields. For multiple wells it takes a long time (tens of seconds) to demodulate the downhole pressure values of all wells using only one demodulation system, and it costs a lot to equip every well with its own system, which heavily limits the sensor's application in oilfields. In the present paper, a new hybrid demodulation method is developed, combining the windowed nonequispaced discrete Fourier transform (nDFT) method with a segment-search minimum mean square error (MMSE) estimation method, by which the demodulation time is reduced to 200 ms, i.e., measuring 10 channels/wells takes less than 2 s. Moreover, experimental results showed that the demodulated cavity length of the fiber-optic Fabry-Perot sensor has a maximum error of 0.5 nm, and consequently the pressure measurement accuracy can reach 0.4% F.S.
NASA Astrophysics Data System (ADS)
Yamamoto, Tetsuya; Takeda, Kazuki; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide a better bit error rate (BER) performance than rake combining. To further improve the BER performance, cyclic delay transmit diversity (CDTD) can be used. CDTD simultaneously transmits the same signal from different antennas after adding different cyclic delays to increase the number of equivalent propagation paths. Although a joint use of CDTD and MMSE-FDE for direct sequence code division multiple access (DS-CDMA) achieves larger frequency diversity gain, the BER performance improvement is limited by the residual inter-chip interference (ICI) after FDE. In this paper, we propose joint FDE and despreading for DS-CDMA using CDTD. Equalization and despreading are simultaneously performed in the frequency-domain to suppress the residual ICI after FDE. A theoretical conditional BER analysis is presented for the given channel condition. The BER analysis is confirmed by computer simulation.
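The per-frequency MMSE-FDE weight in its standard single-user form is shown below (H is the vector of channel gains across frequencies and snr the per-symbol signal-to-noise ratio; the paper's contribution is performing despreading jointly with this step, which this sketch omits):

    import numpy as np

    def mmse_fde_weights(H, snr):
        # w_k = H_k^* / (|H_k|^2 + 1/snr): phase-aligns each bin while damping
        # deeply faded bins instead of fully inverting them
        return np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)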
Evaluating CMA equalization of SOQPSK-TG data for aeronautical telemetry
NASA Astrophysics Data System (ADS)
Cole-Rhodes, Arlene; KoneDossongui, Serge; Umuolo, Henry; Rice, Michael
2015-05-01
This paper presents the results of using a constant modulus algorithm (CMA) to recover shaped offset quadrature-phase shift keying (SOQPSK)-TG modulated data, which has been transmitted using the iNET data packet structure. This standard is defined and used for aeronautical telemetry. Based on the iNET-packet structure, the adaptive block processing CMA equalizer can be initialized using the minimum mean square error (MMSE) equalizer [3]. This CMA equalizer is being evaluated for use on iNET structured data, with initial tests being conducted on measured data which has been received in a controlled laboratory environment. Thus the CMA equalizer is applied at the receiver to data packets which have been experimentally generated in order to determine the feasibility of our equalization approach, and its performance is compared to that of the MMSE equalizer. Performance evaluation is based on computed bit error rate (BER) counts for these equalizers.
Robust parameter design for automatically controlled systems and nanostructure synthesis
NASA Astrophysics Data System (ADS)
Dasgupta, Tirthankar
2007-12-01
This research focuses on developing comprehensive frameworks for robust parameter design of dynamic systems with automatic control and for the synthesis of nanostructures. In many automatically controlled dynamic processes, the optimal feedback control law depends on the parameter design solution and vice versa, and therefore an integrated approach is necessary. A parameter design methodology in the presence of feedback control is developed for processes of long duration under the assumption that experimental noise factors are uncorrelated over time. Systems that follow a pure-gain dynamic model are considered, and the best proportional-integral and minimum mean squared error control strategies are developed by using robust parameter design. The proposed method is illustrated using a simulated example and a case study in a urea packing plant. This idea is also extended to cases with on-line noise factors. The possibility of integrating feedforward control with a minimum mean squared error feedback control scheme is explored. To meet the needs of large-scale synthesis of nanostructures, it is critical to systematically find experimental conditions under which the desired nanostructures are synthesized reproducibly, in large quantity and with controlled morphology. The first part of the research in this area focuses on modeling and optimization of existing experimental data. Through a rigorous statistical analysis of experimental data, models linking the probabilities of obtaining specific morphologies to the process variables are developed. A new iterative algorithm for fitting a multinomial GLM is proposed and used. The optimum process conditions, which maximize the above probabilities and make the synthesis process less sensitive to variations of process variables around set values, are derived from the fitted models using Monte Carlo simulations. The second part of the research deals with the development of an experimental design methodology tailor-made to address the unique phenomena associated with nanostructure synthesis. A sequential space-filling design called Sequential Minimum Energy Design (SMED) is developed for exploring the best process conditions for the synthesis of nanowires. SMED is a novel approach to generating sequential designs that are model independent, can quickly "carve out" regions with no observable nanostructure morphology, and allow for the exploration of complex response surfaces.
NASA Astrophysics Data System (ADS)
Kotchasarn, Chirawat; Saengudomlert, Poompat
We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. For comparison, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.
NASA Astrophysics Data System (ADS)
Li, Shuailing; Shao, Qingsong; Lu, Zhonghua; Duan, Chengli; Yi, Haojun; Su, Liyang
2018-02-01
Saffron is an expensive spice. Its primary effective constituents are crocin I and II, and the contents of these compounds directly affect the quality and commercial value of saffron. In this study, near-infrared spectroscopy was combined with chemometric techniques for the determination of crocin I and II in saffron. Partial least squares regression models were built for the quantification of crocin I and II. By comparing different spectral ranges and spectral pretreatment methods (no pretreatment, vector normalization, subtraction of a straight line, multiplicative scatter correction, minimum-maximum normalization, elimination of the constant offset, first derivative, and second derivative), optimum models were developed. The root mean square error of cross-validation (RMSECV) values of the best partial least squares models for crocin I and II were 1.40 and 0.30, respectively. The coefficients of determination for crocin I and II were 93.40 and 96.30, respectively. These results show that near-infrared spectroscopy combined with chemometric techniques can determine the contents of crocin I and II in saffron quickly and efficiently.
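The model-selection loop described above, choosing the number of latent variables by the cross-validated root mean square error, might look as follows with scikit-learn names (illustrative only; the authors' software and preprocessing pipeline may differ; X is the matrix of NIR spectra, y the reference crocin contents):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    def rmsecv(X, y, n_components):
        y_cv = cross_val_predict(PLSRegression(n_components), X, y, cv=10)
        return float(np.sqrt(np.mean((np.ravel(y_cv) - np.ravel(y)) ** 2)))

    # pick the component count with the smallest RMSECV, e.g.
    # best = min(range(1, 16), key=lambda k: rmsecv(X, y, k))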
Analysis of randomly time varying systems by gaussian closure technique
NASA Astrophysics Data System (ADS)
Dash, P. K.; Iyengar, R. N.
1982-07-01
The Gaussian probability closure technique is applied to study the random response of multidegree of freedom stochastically time varying systems under non-Gaussian excitations. Under the assumption that the response, the coefficient and the excitation processes are jointly Gaussian, deterministic equations are derived for the first two response moments. It is further shown that this technique leads to the best Gaussian estimate in a minimum mean square error sense. An example problem is solved which demonstrates the capability of this technique for handling non-linearity, stochastic system parameters and amplitude limited responses in a unified manner. Numerical results obtained through the Gaussian closure technique compare well with the exact solutions.
An information theory of image gathering
NASA Technical Reports Server (NTRS)
Fales, Carl L.; Huck, Friedrich O.
1991-01-01
Shannon's mathematical theory of communication is extended to image gathering. Expressions are obtained for the total information that is received with a single image-gathering channel and with parallel channels. It is concluded that the aliased signal components carry information even though these components interfere with the within-passband components in conventional image gathering and restoration, thereby degrading the fidelity and visual quality of the restored image. An examination of the expression for minimum mean-square-error, or Wiener-matrix, restoration from parallel image-gathering channels reveals a method for unscrambling the within-passband and aliased signal components to restore spatial frequencies beyond the sampling passband out to the spatial frequency response cutoff of the optical aperture.
Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers
NASA Astrophysics Data System (ADS)
Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen
2017-04-01
Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method can rapidly result in a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-noise ratio performance than other algorithms.
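The variable-splitting scheme can be sketched for a generic ℓ1-regularized least-squares problem, minimize ||A w - b||² + λ||w||₁ (the STAP-specific structure of A and the paper's acceleration details are omitted; this is the textbook ADMM iteration with a complex soft-threshold):

    import numpy as np

    def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
        n = A.shape[1]
        AtA, Atb = A.conj().T @ A, A.conj().T @ b
        Q = AtA + rho * np.eye(n)       # in practice factor once and reuse
        z = np.zeros(n, dtype=complex)
        u = np.zeros(n, dtype=complex)
        for _ in range(n_iter):
            w = np.linalg.solve(Q, Atb + rho * (z - u))    # quadratic step
            v = w + u
            mag = np.maximum(np.abs(v) - lam / rho, 0.0)   # soft threshold
            z = mag * np.exp(1j * np.angle(v))             # keep complex phase
            u = u + w - z                                  # dual update
        return z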
An empirical model for estimating solar radiation in the Algerian Sahara
NASA Astrophysics Data System (ADS)
Benatiallah, Djelloul; Benatiallah, Ali; Bouchouicha, Kada; Hamouda, Messaoud; Nasri, Bahous
2018-05-01
The present work aims to apply the empirical model R.sun to evaluate the solar radiation fluxes on a horizontal plane under clear sky at Adrar city (27°18 N, 0°11 W), Algeria, and to compare them with values measured at the site. The expected results of this comparison are important for investment studies of solar systems (solar power plants for electricity production, CSP) and also for the design and performance analysis of any system using solar energy. The statistical indicators used to evaluate the accuracy of the model were the mean bias error (MBE), root mean square error (RMSE) and coefficient of determination. The results show that, for global radiation, the daily correlation coefficient is 0.9984, the mean absolute percentage error is 9.44%, the daily mean bias error is -7.94% and the daily root mean square error is 12.31%.
The Influence of Dimensionality on Estimation in the Partial Credit Model.
ERIC Educational Resources Information Center
De Ayala, R. J.
1995-01-01
The effect of multidimensionality on partial credit model parameter estimation was studied with noncompensatory and compensatory data. Analysis results, consisting of root mean square error bias, Pearson product-moment correlations, standardized root mean squared differences, standardized differences between means, and descriptive statistics…
Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Nagchaudhuri, Abhijit
1998-01-01
This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrument pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
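The underlying LMS recursion is compact enough to state here (a textbook sketch, not the flight software; in the Filtered-X variant the regressor x is first passed through an estimate of the path from actuator to error sensor):

    import numpy as np

    def lms_step(w, x, d, mu):
        # One LMS iteration: filter output, instantaneous error, gradient step
        e = d - np.dot(w, x)
        return w + mu * e * x, e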
Meta-regression approximations to reduce publication selection bias.
Stanley, T D; Doucouliagos, Hristos
2014-03-01
Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with standard error (PEESE), is shown to have the smallest bias and mean squared error in most cases and to outperform conventional meta-analysis estimators, often by a great deal. Monte Carlo simulations also demonstrate how a new hybrid estimator that conditionally combines PEESE and the Egger regression intercept can provide a practical solution to publication selection bias. PEESE is easily expanded to accommodate systematic heterogeneity along with complex and differential publication selection bias that is related to moderator variables. By providing an intuitive reason for these approximations, we can also explain why the Egger regression works so well and when it does not. These meta-regression methods are applied to several policy-relevant areas of research including antidepressant effectiveness, the value of a statistical life, the minimum wage, and nicotine replacement therapy.
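In its usual weighted-least-squares form, PEESE regresses the estimated effects on their squared standard errors with inverse-variance weights; the intercept is the corrected effect. A sketch with statsmodels names (illustrative, not the authors' code):

    import numpy as np
    import statsmodels.api as sm

    def peese(effects, ses):
        X = sm.add_constant(ses ** 2)      # design [1, SE^2]; no linear SE term
        fit = sm.WLS(effects, X, weights=1.0 / ses ** 2).fit()
        return fit.params[0], fit.bse[0]   # corrected effect and its standard error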
Applicability of AgMERRA Forcing Dataset to Fill Gaps in Historical in-situ Meteorological Data
NASA Astrophysics Data System (ADS)
Bannayan, M.; Lashkari, A.; Zare, H.; Asadi, S.; Salehnia, N.
2015-12-01
Integrated assessment studies of food production systems use crop models to simulate the effects of climate and socio-economic changes on food security. Climate forcing data are one of the key inputs of crop models. This study evaluated the performance of the AgMERRA climate forcing dataset in filling gaps in historical in-situ meteorological data for different climatic regions of Iran. The AgMERRA dataset was compared with in-situ observations of daily maximum and minimum temperature and precipitation during the 1980-2010 period via the root mean square error (RMSE), mean absolute error (MAE) and mean bias error (MBE) for 17 stations in four climatic regions: humid and moderate, cold, dry and arid, and hot and humid. Moreover, the probability distribution functions and cumulative distribution functions of the model and observed data were compared. The measures of agreement between the AgMERRA data and the observed data demonstrated small errors in the model data for all stations. Except for the stations located in cold regions, the model data at the other stations under-predicted daily maximum temperature and precipitation, although not significantly. In addition, the probability distribution functions and cumulative distribution functions showed the same trend between the model and observed data at all stations. Therefore, the AgMERRA dataset is reliable enough to fill gaps in historical observations in the different climatic regions of Iran, and it could be applied as a basis for future climate scenarios.
Zhu, Wen-Jing; Mao, Han-Ping; Li, Qing-Lin; Liu, Hong-Yu; Sun, Jun; Zuo, Zhi-Yu; Chen, Yong
2014-09-01
Greenhouse tomato samples cultivated under five levels (25%, 50%, 75%, 100% and 150%) of nitrogen (N), phosphorus (P) and potassium (K) nutrition stress in a Venlo-type greenhouse soilless cultivation system were taken as the research object. Polarized reflectance spectra and hyperspectral images of the nutrient-deficient greenhouse tomato leaves were acquired using a polarized reflectance spectroscopy system developed by our own research group and a hyperspectral imaging system, respectively. The relationship between changes in the bumps and texture of the non-smooth surface of nutrient-stressed leaves and the level of polarized reflected radiation was clarified by scanning electron microscopy (SEM). On the one hand, the polarization spectra were converted into the degree of polarization through the Stokes equations, and four polarization characteristics were extracted relating the polarization spectra to the reference measurement values of N, P and K. On the other hand, four characteristic wavelengths of the N, P and K hyperspectral image data were determined through principal component analysis, followed by eight hyperspectral texture features extracted at the four characteristic wavelengths through correlation analysis. The polarization characteristics and hyperspectral texture features of N, P and K were combined, and the 12 characteristic variables were normalized by the maximum-minimum value method. Quantitative diagnostic models of N, P and K nutrient levels were established by SVR. The results of the models are as follows: for nitrogen, correlation coefficient r = 0.9618 and root mean square error RMSE = 0.451; for phosphorus, r = 0.9163 and RMSE = 0.620; for potassium, r = 0.9406 and RMSE = 0.494. The results show that a high-precision tomato leaf nutrition prediction model can be built using polarized reflectance spectroscopy combined with hyperspectral information fusion, achieving a good diagnostic effect. This is of great significance for the improvement of model accuracy and the development of special instruments. The research provides a new idea for the rapid detection of tomato nutrient content.
Triki Fourati, Hela; Bouaziz, Moncef; Benzina, Mourad; Bouaziz, Samir
2017-04-01
Traditional methods of surveying soil properties over landscapes are dramatically costly and time-consuming, so remote sensing is a proper choice for monitoring environmental problems. This research aims to study the effect of environmental factors on soil salinity and to map the spatial distribution of salinity over the south-eastern part of Tunisia by means of remote sensing and geostatistical techniques. For this purpose, we used Advanced Spaceborne Thermal Emission and Reflection Radiometer data to derive geomorphological parameters: elevation, slope, plan curvature (PLC), profile curvature (PRC), and aspect. Pearson correlations between these parameters and soil electrical conductivity (ECsoil) showed that mainly slope and elevation affect the concentration of salt in the soil. Moreover, spectral analysis illustrated the high potential of the short-wave infrared (SWIR) bands for identifying saline soils. To map soil salinity in southern Tunisia, ordinary kriging (OK), minimum distance (MD) classification, and simple regression (SR) were used. The findings showed that the ordinary kriging technique provides the most reliable performance for identifying and classifying saline soils over the study area, with a root mean square error of 1.83 and a mean error of 0.018.
NASA Astrophysics Data System (ADS)
Kim, Young-Rok; Park, Eunseo; Choi, Eun-Jung; Park, Sang-Young; Park, Chandeok; Lim, Hyung-Chul
2014-09-01
In this study, a genetic resampling (GRS) approach is utilized for precise orbit determination (POD) using a batch filter based on particle filtering (PF). Two genetic operations, arithmetic crossover and residual mutation, are used for GRS in the PF-based batch filter (PF batch filter). For POD, the Laser-ranging Precise Orbit Determination System (LPODS) and satellite laser ranging (SLR) observations of the CHAMP satellite are used. Monte Carlo trials for POD are performed one hundred times. The characteristics of the POD results from the PF batch filter with GRS are compared with those of a PF batch filter with minimum residual resampling (MRRS). The post-fit residuals, 3D errors from external orbit comparison, and POD repeatability are analyzed for orbit quality assessment. The POD results are externally checked against NASA JPL's orbits, which use entirely different software, measurements, and techniques. For post-fit residuals and 3D errors, both MRRS and GRS give accurate estimation results, with mean root mean square (RMS) values at the level of 5 cm and 10-13 cm, respectively. The mean radial orbit errors of both methods are at the level of 5 cm. For POD repeatability, represented by the standard deviations of the post-fit residuals and 3D errors over repeated PODs, GRS yields 25% and 13% more robust estimation results than MRRS for the post-fit residual and 3D error, respectively. This study shows that the PF batch filter with the GRS approach using genetic operations is superior to the PF batch filter with MRRS in terms of robustness in POD with SLR observations.
A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Louis A; Mason, John J.
We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled, with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for the baseline (iterative) and proposed approaches is given in tables.
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
Low-Complexity Polynomial Channel Estimation in Large-Scale MIMO With Arbitrary Statistics
NASA Astrophysics Data System (ADS)
Shariati, Nafiseh; Bjornson, Emil; Bengtsson, Mats; Debbah, Merouane
2014-10-01
This paper considers pilot-based channel estimation in large-scale multiple-input multiple-output (MIMO) communication systems, also known as massive MIMO, where there are hundreds of antennas at one side of the link. Motivated by the fact that computational complexity is one of the main challenges in such systems, a set of low-complexity Bayesian channel estimators, coined Polynomial ExpAnsion CHannel (PEACH) estimators, are introduced for arbitrary channel and interference statistics. While the conventional minimum mean square error (MMSE) estimator has cubic complexity in the dimension of the covariance matrices, due to an inversion operation, our proposed estimators significantly reduce this to square complexity by approximating the inverse by an L-degree matrix polynomial. The coefficients of the polynomial are optimized to minimize the mean square error (MSE) of the estimate. We show numerically that near-optimal MSEs are achieved with low polynomial degrees. We also derive the exact computational complexity of the proposed estimators, in terms of floating-point operations (FLOPs), by which we prove that the proposed estimators outperform the conventional estimators in large-scale MIMO systems of practical dimensions while providing reasonable MSEs. Moreover, we show that L need not scale with the system dimensions to maintain a certain normalized MSE. By analyzing different interference scenarios, we observe that the relative MSE loss of using the low-complexity PEACH estimators is smaller in realistic scenarios with pilot contamination. On the other hand, PEACH estimators are not well suited for noise-limited scenarios with high pilot power; therefore, we also introduce the low-complexity diagonalized estimator that performs well in this regime. Finally, we ...
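The complexity-saving idea is to replace the inverse in the LMMSE estimate with a low-degree matrix polynomial evaluated using matrix-vector products only. The Richardson recursion below is one simple instance (assuming a Hermitian positive-definite covariance C and a step size alpha chosen from cheap bounds on its spectrum, 0 < alpha < 2/lambda_max); the PEACH estimators instead optimize the polynomial coefficients to minimize the MSE:

    import numpy as np

    def poly_inverse_apply(C, y, degree, alpha):
        # After degree+1 sweeps, z equals a degree-`degree` polynomial in C
        # applied to y, approximating C^{-1} y without any matrix inversion
        z = np.zeros_like(y)
        for _ in range(degree + 1):
            z = z + alpha * (y - C @ z)
        return z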
A Stochastic Total Least Squares Solution of Adaptive Filtering Problem
Ahmad, Noor Atinah
2014-01-01
An efficient and computationally linear algorithm is derived for the total least squares solution of the adaptive filtering problem, when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm is designed by recursively computing an optimal solution of the adaptive TLS problem by minimizing the instantaneous value of a weighted cost function. A convergence analysis of the algorithm is given to show its global convergence, provided that the step-size parameter is appropriately chosen. The TLMS algorithm is computationally simpler than other TLS algorithms and demonstrates better performance than the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It provides minimum mean square deviation by exhibiting better convergence in misalignment for unknown system identification under noisy inputs. PMID:24688412
Hypothesis Testing Using Factor Score Regression
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2015-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
Retrieval of the aerosol optical thickness from UV global irradiance measurements
NASA Astrophysics Data System (ADS)
Costa, M. J.; Salgueiro, V.; Bortoli, D.; Obregón, M. A.; Antón, M.; Silva, A. M.
2015-12-01
The UV irradiance has been measured at Évora for several years, where a CIMEL sunphotometer integrated in AERONET is also installed. In the present work, measurements of UVA (315-400 nm) irradiances taken with Kipp&Zonen radiometers, as well as satellite data of ozone total column values, are used in combination with radiative transfer calculations to estimate the aerosol optical thickness (AOT) in the UV. The retrieved UV AOT in Évora is compared with the AERONET AOT (at 340 and 380 nm), and fairly good agreement is found, with a root mean square error of 0.05 (normalized root mean square error of 8.3%) and a mean absolute error of 0.04 (mean percentage error of 2.9%). The methodology is then used to estimate the UV AOT in Sines, an industrialized site on the western Atlantic coast, where the UV irradiance has been monitored since 2013 but no aerosol information is available.
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is the condition of the air in a certain region over a relatively short period of time, measured by various parameters such as temperature, air pressure, wind velocity, humidity and other phenomena in the atmosphere. In fact, extreme weather due to global warming can lead to drought, flood, hurricanes and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict weather with distinctive output, particularly a GIS-based mapping process giving the current weather status at certain coordinates of each region with the capability to forecast seven days ahead. The data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated as the mean square error (MSE). The error for minimum temperature is 0.28 and for maximum temperature 0.15; the error for minimum humidity is 0.38 and for maximum humidity 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error, the better the accuracy.
Medina, K.D.; Tasker, Gary D.
1987-01-01
This report documents the results of an analysis of the surface-water data network in Kansas for its effectiveness in providing regional streamflow information. The network was analyzed using generalized least squares regression. The correlation and time-sampling error of the streamflow characteristic are considered in the generalized least squares method. Unregulated medium-, low-, and high-flow characteristics were selected to be representative of the regional information that can be obtained from streamflow-gaging-station records for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and (or) adding new stations. The analysis used streamflow records for all currently operated stations that were not affected by regulation and for discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The State was divided into three network areas, western, northeastern, and southeastern Kansas, and analysis was made for the three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean-square error for each cost level could be obtained by adding new stations and discontinuing some current network stations. Large reductions in sampling mean-square error for low-flow information could be achieved in all three network areas, the reduction in western Kansas being the most dramatic. The addition of new stations would be most beneficial for mean-flow information in western Kansas. The reduction of sampling mean-square error for high-flow information would benefit most from the addition of new stations in western Kansas. Southeastern Kansas showed the smallest error reduction in high-flow information. A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas.
Discordance between net analyte signal theory and practical multivariate calibration.
Brown, Christopher D
2004-08-01
Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
Monthly ENSO Forecast Skill and Lagged Ensemble Size
NASA Astrophysics Data System (ADS)
Trenary, L.; DelSole, T.; Tippett, M. K.; Pegion, K.
2018-04-01
The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8-10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skill of the weighted, lagged, and burst ensembles are found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities.
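The covariance-extrapolation step can be made concrete. If the error covariance between members initialized i and j days back is modeled parametrically, the MSE of the N-member lagged-ensemble mean is the average of that covariance over all member pairs; sweeping N then locates the minimum. The covariance model below is a made-up stand-in, not the paper's fitted model:

    import numpy as np

    def cov_model(i, j, v0=0.20, g=0.02, rho=0.8):
        """Hypothetical parametric error covariance: error variance grows
        with lag (older initializations are worse) while the correlation
        between members decays with their lag separation."""
        vi, vj = v0 + g * i, v0 + g * j
        return np.sqrt(vi * vj) * rho ** np.abs(i - j)

    def lagged_ensemble_mse(n):
        """MSE of the ensemble-mean error: (1/n^2) * sum_ij cov(e_i, e_j)."""
        idx = np.arange(n)
        return float(cov_model(idx[:, None], idx[None, :]).mean())

    for n in (1, 2, 4, 8, 16, 32):
        print(n, round(lagged_ensemble_mse(n), 4))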
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1974-01-01
Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.
Climatological Modeling of Monthly Air Temperature and Precipitation in Egypt through GIS Techniques
NASA Astrophysics Data System (ADS)
El Kenawy, A.
2009-09-01
This paper describes a method for modeling and mapping four climatic variables (maximum temperature, minimum temperature, mean temperature and total precipitation) in Egypt using a multiple regression approach implemented in a GIS environment. In this model, a set of variables including latitude, longitude, elevation within a distance of 5, 10 and 15 km, slope, aspect, distance to the Mediterranean Sea, distance to the Red Sea, distance to the Nile, ratio between land and water masses within a radius of 5, 10 and 15 km, the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), the Normalized Difference Temperature Index (NDTI) and reflectance are included as independent variables. These variables were integrated as raster layers in MiraMon software at a spatial resolution of 1 km. Climatic variables were considered as dependent variables and averaged from 39 quality-controlled and homogenized series distributed across the entire country during the period 1957-2006. For each climatic variable, digital and objective maps were finally obtained using the multiple regression coefficients at monthly, seasonal and annual timescales. The accuracy of these maps was assessed through cross-validation between predicted and observed values using a set of statistics including the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), mean bias error (MBE) and Willmott's D statistic. These maps are valuable both for their spatial resolution and for the number of observatories involved in the analysis.
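The cross-validation statistics listed above are standard and easy to compute from paired predicted/observed station values; a short sketch (the Willmott D form below is the common index-of-agreement variant, an assumption about the exact form used):

    import numpy as np

    def validation_stats(pred, obs):
        """RMSE, MAE, MBE, R2 (squared Pearson r), and Willmott's D."""
        pred, obs = np.asarray(pred, float), np.asarray(obs, float)
        err = pred - obs
        om = obs.mean()
        return {
            "RMSE": float(np.sqrt(np.mean(err ** 2))),
            "MAE": float(np.mean(np.abs(err))),
            "MBE": float(np.mean(err)),
            "R2": float(np.corrcoef(pred, obs)[0, 1] ** 2),
            "D": float(1 - np.sum(err ** 2)
                       / np.sum((np.abs(pred - om) + np.abs(obs - om)) ** 2)),
        }

    print(validation_stats([21.3, 18.9, 25.1], [20.8, 19.5, 24.6]))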
Initializing a Mesoscale Boundary-Layer Model with Radiosonde Observations
NASA Astrophysics Data System (ADS)
Berri, Guillermo J.; Bertossa, Germán
2018-01-01
A mesoscale boundary-layer model is used to simulate low-level regional wind fields over the La Plata River of South America, a region characterized by a strong daily cycle of land-river surface-temperature contrast and low-level circulations of sea-land breeze type. The initial and boundary conditions are defined from a limited number of local observations and the upper boundary condition is taken from the only radiosonde observations available in the region. The study considers 14 different upper boundary conditions defined from the radiosonde data at standard levels, significant levels, level of the inversion base and interpolated levels at fixed heights, all of them within the first 1500 m. The period of analysis is 1994-2008 during which eight daily observations from 13 weather stations of the region are used to validate the 24-h surface-wind forecast. The model errors are defined as the root-mean-square of relative error in wind-direction frequency distribution and mean wind speed per wind sector. Wind-direction errors are greater than wind-speed errors and show significant dispersion among the different upper boundary conditions, not present in wind speed, revealing a sensitivity to the initialization method. The wind-direction errors show a well-defined daily cycle, not evident in wind speed, with the minimum at noon and the maximum at dusk, but no systematic deterioration with time. The errors grow with the height of the upper boundary condition level, in particular wind direction, and double the errors obtained when the upper boundary condition is defined from the lower levels. The conclusion is that defining the model upper boundary condition from radiosonde data closer to the ground minimizes the low-level wind-field errors throughout the region.
Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang
2016-09-19
This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC), with careful pre- and post-processing, to estimate the seasonal crustal deformation in response to hydrological loading in the lower three-rivers headwater region of southwest China, followed by inferring the annual equivalent water height (EWH) changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2-3.9 cm and 4.8-5.2 cm, respectively. In the research areas, the EWH changes in the Lancang basin are larger than in the other regions, with a maximum of 21.8-24.7 cm and a minimum of 3.1-6.9 cm.
Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods
NASA Astrophysics Data System (ADS)
Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong
2008-12-01
Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to incorrectly converge due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that will correctly handle colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our introduced algorithm in the low SNR condition. Simulation results show the superior performance of our proposed methods.
Simple Forest Canopy Thermal Exitance Model
NASA Technical Reports Server (NTRS)
Smith, J. A.; Goltz, S. M.
1999-01-01
We describe a model to calculate brightness temperature and surface energy balance for a forest canopy system. The model is an extension of an earlier vegetation-only model by inclusion of a simple soil layer. The root mean square error in brightness temperature for a dense forest canopy was 2.5 °C. Surface energy balance predictions were also in good agreement. The corresponding root mean square errors for net radiation, latent, and sensible heat were 38.9, 30.7, and 41.4 W/sq m, respectively.
A new open-loop fiber optic gyro error compensation method based on angular velocity error modeling.
Zhang, Yanshun; Guo, Yajing; Li, Chunyu; Wang, Yixin; Wang, Zhanqing
2015-02-27
With the open-loop fiber optic gyro (OFOG) model, output voltage and angular velocity can effectively compensate OFOG errors. However, the model cannot reflect the characteristics of OFOG errors well at large dynamic angular velocities. This paper puts forward a modeling scheme with OFOG output voltage u and temperature T as the input variables and angular velocity error Δω as the output variable. Firstly, the angular velocity error Δω is extracted from OFOG output signals, and then the output voltage u, temperature T and angular velocity error Δω are used as the learning samples to train a Radial-Basis-Function (RBF) neural network model. Then the nonlinear mapping model over T, u and Δω is established and thus Δω can be calculated automatically to compensate OFOG errors according to T and u. The results of the experiments show that the established model can be used to compensate the nonlinear OFOG errors. The maximum, the minimum and the mean square error of OFOG angular velocity are decreased by 97.0%, 97.1% and 96.5% relative to their initial values, respectively. Compared with the direct modeling of gyro angular velocity, which we investigated previously, the experimental results of the compensating method proposed in this paper are further reduced by 1.6%, 1.4% and 1.42%, respectively, so the performance of this method is better than that of the direct modeling of gyro angular velocity.
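As a rough illustration of the modeling scheme (not the authors' exact network, training procedure, or data), an RBF network with fixed Gaussian centers and least-squares output weights can learn a mapping from (u, T) to Δω; every value below is synthetic:

    import numpy as np

    rng = np.random.default_rng(0)

    def rbf_features(X, centers, width):
        """Gaussian RBF activations for inputs X (n x 2: voltage u, temp T)."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    # Synthetic training triples standing in for (u, T, delta_omega).
    X = rng.uniform([-1.0, 10.0], [1.0, 40.0], size=(200, 2))
    y = 0.3 * X[:, 0] ** 2 + 0.01 * (X[:, 1] - 25.0) + rng.normal(0, 0.01, 200)

    centers = X[rng.choice(len(X), 20, replace=False)]    # fixed centers
    Phi = rbf_features(X, centers, width=5.0)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)           # output weights

    # Compensation: subtract the predicted angular-velocity error.
    pred = rbf_features(X, centers, 5.0) @ w
    print("residual RMS:", float(np.sqrt(np.mean((y - pred) ** 2))))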
On a stronger-than-best property for best prediction
NASA Astrophysics Data System (ADS)
Teunissen, P. J. G.
2008-03-01
The minimum mean squared error (MMSE) criterion is a popular criterion for devising best predictors. In case of linear predictors, it has the advantage that no further distributional assumptions need to be made, other than about the first- and second-order moments. In the spatial and Earth sciences, it is the best linear unbiased predictor (BLUP) that is used most often. Despite the fact that in this case only the first- and second-order moments need to be known, one often still makes statements about the complete distribution, in particular when statistical testing is involved. For such cases, one can do better than the BLUP, as shown in Teunissen (J Geod, doi: 10.1007/s00190-007-0140-6, 2006), and thus devise predictors that have a smaller MMSE than the BLUP. Hence, these predictors are to be preferred over the BLUP, if one really values the MMSE criterion. In the present contribution, we will show, however, that the BLUP has another optimality property than the MMSE property, provided that the distribution is Gaussian. It will be shown that in the Gaussian case, the prediction error of the BLUP has the highest possible probability of all linear unbiased predictors of being bounded in the weighted squared norm sense. This is a stronger property than the often advertised MMSE property of the BLUP.
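For reference, the standard moment-based form of the best linear predictor that this MMSE discussion relies on (a textbook result, not a formula quoted from the paper) is

    \hat{y}_0 = \mu_{y_0} + Q_{y_0 y} Q_{yy}^{-1} (y - \mu_y), \qquad
    \mathrm{MSE}(\hat{y}_0) = Q_{y_0 y_0} - Q_{y_0 y} Q_{yy}^{-1} Q_{y y_0},

which requires only the first- and second-order moments, exactly as the abstract notes.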
Planting data and wheat yield models. [Kansas, South Dakota, and U.S.S.R.
NASA Technical Reports Server (NTRS)
Feyerherm, A. M. (Principal Investigator)
1977-01-01
The author has identified the following significant results. A variable-date starter model for spring wheat depending on temperature was more precise than a fixed-date model. The same conclusions for fall-planted wheat were not reached. If the largest and smallest of eight temperatures were used to estimate daily maximum and minimum temperatures, respectively, a 1-4 F bias would be introduced into these extremes. For Kansas, a reduction of 0.5 bushels/acre in the root-mean-square error between model and SRS yields was achieved by a sixfold increase (from 7 to 42) in the density of weather stations. An additional reduction of 0.3 b/A was achieved by incorporating losses due to rusts in the model.
Ribic, C.A.; Miller, T.W.
1998-01-01
We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within tree-structured methods, with a strong relationship and equally important explanatory variables, the one-standard-error rule was more likely to choose the correct model than were the other tree-selection rules: 1) with weaker relationships and equally important explanatory variables; and 2) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
Missing Value Imputation Approach for Mass Spectrometry-based Metabolomics Data.
Wei, Runmin; Wang, Jingye; Su, Mingming; Jia, Erik; Chen, Shaoqiu; Chen, Tianlu; Ni, Yan
2018-01-12
Missing values exist widely in mass spectrometry (MS)-based metabolomics data. Various methods have been applied for handling missing values, but the selection can significantly affect subsequent data analyses. Typically, there are three types of missing values: missing not at random (MNAR), missing at random (MAR), and missing completely at random (MCAR). Our study comprehensively compared eight imputation methods (zero, half minimum (HM), mean, median, random forest (RF), singular value decomposition (SVD), k-nearest neighbors (kNN), and quantile regression imputation of left-censored data (QRILC)) for different types of missing values using four metabolomics datasets. Normalized root mean squared error (NRMSE) and NRMSE-based sum of ranks (SOR) were applied to evaluate imputation accuracy. Principal component analysis (PCA)/partial least squares (PLS)-Procrustes analysis were used to evaluate the overall sample distribution. Student's t-test followed by correlation analysis was conducted to evaluate the effects on univariate statistics. Our findings demonstrated that RF performed the best for MCAR/MAR and QRILC was the favored one for left-censored MNAR. Finally, we proposed a comprehensive strategy and developed a publicly accessible web-tool for the application of missing value imputation in metabolomics (https://metabolomics.cc.hawaii.edu/software/MetImp/).
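Scoring an imputation with NRMSE is straightforward to sketch: mask known entries, impute, and compare on the masked cells. Normalizing the RMSE by the variance of the true values is one common convention and is an assumption here, as is the half-minimum rule shown:

    import numpy as np

    def nrmse(imputed, truth, mask):
        """Normalized RMSE over the artificially masked entries only."""
        diff = imputed[mask] - truth[mask]
        return float(np.sqrt(np.mean(diff ** 2) / np.var(truth[mask])))

    rng = np.random.default_rng(1)
    truth = rng.lognormal(mean=2.0, sigma=0.5, size=(50, 20))  # fake intensities
    mask = rng.random(truth.shape) < 0.1                       # 10% MCAR holes
    data = truth.copy()
    data[mask] = np.nan

    # Half-minimum (HM) imputation: half of each feature's observed minimum,
    # a simple rule often used for left-censored (MNAR-like) values.
    hm = 0.5 * np.nanmin(data, axis=0)
    imputed = np.where(np.isnan(data), hm[None, :], data)
    print("HM NRMSE:", nrmse(imputed, truth, mask))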
Super-linear Precision in Simple Neural Population Codes
NASA Astrophysics Data System (ADS)
Schwab, David; Fiete, Ila
2015-03-01
A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher Information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arises because it gives, through the Cramer-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well-known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly when considering the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
A Framework of Covariance Projection on Constraint Manifold for Data Fusion.
Bakr, Muhammad Abu; Lee, Sukhan
2018-05-17
A general framework of data fusion is presented based on projecting the probability distribution of true states and measurements around the predicted states and actual measurements onto the constraint manifold. The constraint manifold represents the constraints to be satisfied among true states and measurements, which is defined in the extended space with all the redundant sources of data such as state predictions and measurements considered as independent variables. By the general framework, we mean that it is able to fuse any correlated data sources while directly incorporating constraints and identifying inconsistent data without any prior information. The proposed method, referred to here as the Covariance Projection (CP) method, provides an unbiased and optimal solution in the sense of minimum mean square error (MMSE), if the projection is based on the minimum weighted distance on the constraint manifold. The proposed method not only offers a generalization of the conventional formula for handling constraints and data inconsistency, but also provides a new insight into data fusion in terms of a geometric-algebraic point of view. Simulation results are provided to show the effectiveness of the proposed method in handling constraints and data inconsistency.
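A concrete special case may help fix ideas: fusing two redundant scalar estimates of the same state (constraint x1 = x2) by the minimum weighted distance reduces to the classical MMSE fusion formula for correlated estimates. This is a simplification in the spirit of the paper's framework, not its general algorithm:

    def fuse_two_estimates(x1, P1, x2, P2, P12=0.0):
        """Project (x1, x2) onto the manifold x1 = x2 with Mahalanobis
        weighting; for scalars this gives the MMSE fused estimate and
        variance (Bar-Shalom/Campo form for correlated estimates)."""
        denom = P1 + P2 - 2.0 * P12
        x = x1 + (P1 - P12) / denom * (x2 - x1)
        P = P1 - (P1 - P12) ** 2 / denom
        return x, P

    print(fuse_two_estimates(1.0, 0.04, 1.2, 0.09))             # independent
    print(fuse_two_estimates(1.0, 0.04, 1.2, 0.09, P12=0.02))   # correlated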
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two observer, measurement error only problem.
Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Bentler, Peter M.
2000-01-01
Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena
2015-04-15
It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. Therefore, the RVA can complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulting prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower than or of the same order as the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.
Seasonal prediction skill of winter temperature over North India
NASA Astrophysics Data System (ADS)
Tiwari, P. R.; Kar, S. C.; Mohanty, U. C.; Dey, S.; Kumari, S.; Sinha, P.
2016-04-01
The climatology, amplitude error, phase error, and mean square skill score (MSSS) of temperature predictions from five different state-of-the-art general circulation models (GCMs) have been examined for the winter (December-January-February) seasons over North India. In this region, temperature variability affects the phenological development processes of wheat crops and the grain yield. The GCM forecasts of temperature for a whole season issued in November from various organizations are compared with observed gridded temperature data obtained from the India Meteorological Department (IMD) for the period 1982-2009. The MSSS indicates that the models have skills of varying degrees. Predictions of maximum and minimum temperature obtained from the National Centers for Environmental Prediction (NCEP) climate forecast system model (NCEP_CFSv2) are compared with station-level observations from the Snow and Avalanche Study Establishment (SASE). It has been found that when the model temperatures are corrected to account for the bias between model and actual orography, the predictions delineate the observed trend better than those without the orography correction.
Beamforming Based Full-Duplex for Millimeter-Wave Communication
Liu, Xiao; Xiao, Zhenyu; Bai, Lin; Choi, Jinho; Xia, Pengfei; Xia, Xiang-Gen
2016-01-01
In this paper, we study beamforming based full-duplex (FD) systems in millimeter-wave (mmWave) communications. A joint transmission and reception (Tx/Rx) beamforming problem is formulated to maximize the achievable rate by mitigating self-interference (SI). Since the optimal solution is difficult to find due to the non-convexity of the objective function, suboptimal schemes are proposed in this paper. A low-complexity algorithm, which iteratively maximizes signal power while suppressing SI, is proposed and its convergence is proven. Moreover, two closed-form solutions, which do not require iterations, are also derived under minimum-mean-square-error (MMSE), zero-forcing (ZF), and maximum-ratio transmission (MRT) criteria. Performance evaluations show that the proposed iterative scheme converges fast (within only two iterations on average) and approaches an upper-bound performance, while the two closed-form solutions also achieve appealing performances, although there are noticeable differences from the upper bound depending on channel conditions. Interestingly, these three schemes show different robustness against the geometry of Tx/Rx antenna arrays and channel estimation errors.
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches have stemmed from the application of the minimum mean squared distance principle, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrated many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1).
Koláčková, Pavla; Růžičková, Gabriela; Gregor, Tomáš; Šišperová, Eliška
2015-08-30
Calibration models for the Fourier transform-near infrared (FT-NIR) instrument were developed for quick and non-destructive determination of oil and fatty acids in whole achenes of milk thistle. Samples with a range of oil and fatty acid levels were collected and their transmittance spectra were obtained by the FT-NIR instrument. Based on these spectra and data gained by means of the reference methods - Soxhlet extraction and gas chromatography (GC) - calibration models were created by means of partial least squares (PLS) regression analysis. Precision and accuracy of the calibration models were verified via cross-validation of validation samples whose spectra were not part of the calibration model, and also according to the root mean square error of prediction (RMSEP), root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV) and the validation coefficient of determination (R(2)). R(2) values for whole seeds were 0.96, 0.96, 0.83 and 0.67 and the RMSEP values were 0.76, 1.68, 1.24 and 0.54 for oil, linoleic (C18:2), oleic (C18:1) and palmitic (C16:0) acids, respectively. The calibration models are appropriate for the non-destructive determination of oil and fatty acid levels in whole seeds of milk thistle. © 2014 Society of Chemical Industry.
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an auto-regressive integrated moving average (ARIMA) model, future water quality parameter values have been estimated. It is observed that the predictive model is useful at the 95 % confidence limits, and the distribution is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which indicates a very good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.
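As a hedged sketch of the time-series step (the paper's ARIMA order and fitted parameters are not reproduced here; order (1,1,1) below is only a placeholder), using statsmodels on a stand-in monthly series:

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(2)
    # Stand-in monthly water-quality series (e.g., BOD); real input would be
    # the monitored Yamuna River records.
    series = 10 + np.cumsum(rng.normal(0, 0.3, 120))

    fit = ARIMA(series, order=(1, 1, 1)).fit()
    fc = fit.get_forecast(steps=12)
    print(fc.predicted_mean)           # next 12 months
    print(fc.conf_int(alpha=0.05))     # 95% confidence limits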
Validating Clusters with the Lower Bound for Sum-of-Squares Error
ERIC Educational Resources Information Center
Steinley, Douglas
2007-01-01
Given that a minor condition holds (e.g., the number of variables is greater than the number of clusters), a nontrivial lower bound for the sum-of-squares error criterion in K-means clustering is derived. By calculating the lower bound for several different situations, a method is developed to determine the adequacy of cluster solution based on…
Mauda, R.; Pinchas, M.
2014-01-01
Recently a new blind equalization method was proposed for the 16QAM constellation input inspired by the maximum entropy density approximation technique with improved equalization performance compared to the maximum entropy approach, Godard's algorithm, and others. In addition, an approximated expression for the minimum mean square error (MSE) was obtained. The idea was to find those Lagrange multipliers that bring the approximated MSE to minimum. Since the derivation of the obtained MSE with respect to the Lagrange multipliers leads to a nonlinear equation for the Lagrange multipliers, the part in the MSE expression that caused the nonlinearity in the equation for the Lagrange multipliers was ignored. Thus, the obtained Lagrange multipliers were not those Lagrange multipliers that bring the approximated MSE to minimum. In this paper, we derive a new set of Lagrange multipliers based on the nonlinear expression for the Lagrange multipliers obtained from minimizing the approximated MSE with respect to the Lagrange multipliers. Simulation results indicate that for the high signal to noise ratio (SNR) case, a faster convergence rate is obtained for a channel causing a high initial intersymbol interference (ISI) while the same equalization performance is obtained for an easy channel (initial ISI low).
Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.
Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia
2017-06-01
Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates in the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error in using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that: (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra and methane/toluene mixture gas spectra as measured using FT-IR spectrometry with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of the SWLS has been presented to tackle the bias error from other components. The SWLS without modification presents the lowest SEP in all cases, but not the lowest bias and RSS. The modified SWLS reduced the bias and showed a lower RSS than CLS, especially for small components.
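The selection rule lends itself to a compact sketch: per wavenumber, use uniform (CLS-style) weights below an absorbance threshold and inverse-noise-variance (WLS) weights above it. The threshold, the noise model, and all data below are assumptions for illustration, not the paper's calibrated values:

    import numpy as np

    def cls_fit(A, C):
        """Classical least squares: pure-component spectra K from A ~ C K."""
        return np.linalg.lstsq(C, A, rcond=None)[0]

    def predict_swls(a, K, noise_var, threshold):
        """Concentrations from one spectrum a: WLS weights (1/sigma^2 per
        wavenumber) where absorbance exceeds the threshold, uniform
        weights elsewhere, then solve the weighted normal equations."""
        w = np.where(a > threshold, 1.0 / noise_var, 1.0 / noise_var.mean())
        KT = K.T                                  # (n_wavenumbers, n_components)
        return np.linalg.solve(KT.T @ (w[:, None] * KT), KT.T @ (w * a))

    rng = np.random.default_rng(3)
    K = np.abs(rng.normal(1.0, 0.3, size=(2, 200)))     # 2 component spectra
    C = rng.uniform(0, 1, size=(30, 2))                 # training concentrations
    A = C @ K + rng.normal(0, 0.01, size=(30, 200))
    K_hat = cls_fit(A, C)

    a_new = np.array([0.3, 0.7]) @ K + rng.normal(0, 0.01, 200)
    noise_var = 1e-4 * (1.0 + a_new ** 2)   # hypothetical absorbance-dependent noise
    print(predict_swls(a_new, K_hat, noise_var, threshold=1.0))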
A degree-day model of sheep grazing influence on alfalfa weevil and crop characteristics.
Goosey, Hayes B
2012-02-01
Domestic sheep (Ovis spp.) grazing is emerging as an integrated pest management tactic for alfalfa weevil, Hypera postica (Gyllenhal), management and a degree-day model is needed as a decision and support tool. In response to this need, grazing exclosures with unique degree-days and stocking rates were established at weekly intervals in a central Montana alfalfa field during 2008 and 2009. Analyses indicate that increased stocking rates and grazing degree-days were associated with decreased crop levels of weevil larvae. Larval data collected from grazing treatments were regressed against on-site and near-site temperatures that produced the same accuracy. The near-site model was chosen to encourage producer acceptance. The regression slope differed from zero, had an r2 of 0.83, and a root mean square error of 0.2. Crop data were collected to achieve optimal weevil management with forage quality and yield. Differences were recorded in crude protein, acid and neutral detergent fibers, total digestible nutrients, and mean stage by weight. Stem heights differed with higher stocking rates and degree-days recording the shortest alfalfa canopy height at harvest. The degree-day model was validated at four sites during 2010 with a mean square prediction error of 0.74. The recommendation from this research is to stock alfalfa fields in the spring before 63 DD with rates between 251 and 583 sheep days per hectare (d/ha). Sheep should be allowed to graze to a minimum of 106 and maximum of 150 DD before removal. This model gives field entomologists a new method for implementing grazing in an integrated pest management program.
NASA Astrophysics Data System (ADS)
Gidey, Amanuel
2018-06-01
Determining the suitability and vulnerability of groundwater quality for irrigation use is an essential early warning and first step toward careful management of groundwater resources to diminish the impacts on irrigation. This study was conducted to determine the overall suitability of groundwater quality for irrigation use and to generate spatial distribution maps for the Elala catchment, Northern Ethiopia. Thirty-nine groundwater samples were collected to analyze and map the water quality variables. Atomic absorption spectrophotometer, ultraviolet spectrophotometer, titration and calculation methods were used for laboratory groundwater quality analysis. ArcGIS, geospatial analysis tools, semivariogram model types and interpolation methods were used to generate the geospatial distribution maps. Twelve and eight water quality variables were used to produce the weighted overlay and irrigation water quality index models, respectively. Root-mean-square error, mean square error, absolute square error, mean error, root-mean-square standardized error, and measured versus predicted values were used for cross-validation. The overall weighted overlay model result showed that 146 km2 of the area is highly suitable, 135 km2 moderately suitable and 60 km2 unsuitable for irrigation use. The irrigation water quality index confirms 10.26% with no restriction, 23.08% with low restriction, 20.51% with moderate restriction, 15.38% with high restriction and 30.76% with severe restriction for irrigation use. GIS and the irrigation water quality index are better methods for irrigation water resources management: they help achieve full-yield irrigation production, improve food security and sustain it for a long period, and avoid increasing environmental problems for future generations.
A nonlinear model of gold production in Malaysia
NASA Astrophysics Data System (ADS)
Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi
2014-06-01
Malaysia is a country rich in natural resources, and one of them is gold. Gold has become an important national commodity. This study is conducted to determine a model that fits well the gold production in Malaysia from 1995 to 2010. Five nonlinear models are presented in this study: the Logistic, Gompertz, Richards, Weibull and Chapman-Richards models. These models are used to fit the cumulative gold production in Malaysia. The best model is then selected based on model performance. The performance of the fitted models is measured by the sum of squares error, root mean square error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. This study found that the Weibull model significantly outperforms the other models. To confirm that Weibull is the best model, the latest data are fitted to the model. Once again, the Weibull model gives the lowest readings for all types of measurement error. We conclude that future gold production in Malaysia can be predicted with the Weibull model, and this could be an important finding for Malaysia in planning its economic activities.
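A minimal sketch of fitting a Weibull-type growth curve to cumulative production with scipy (the parameterization below is one common convention; the paper's exact form and data are not reproduced):

    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_growth(t, a, b, c, d):
        """Cumulative Weibull growth curve: a - b * exp(-c * t**d)."""
        return a - b * np.exp(-c * t ** d)

    t = np.arange(1, 17, dtype=float)        # 16 years, 1995..2010
    rng = np.random.default_rng(4)
    y = 60.0 * (1.0 - np.exp(-0.02 * t ** 1.5)) + rng.normal(0, 0.5, 16)

    popt, _ = curve_fit(weibull_growth, t, y, p0=[60, 60, 0.02, 1.5], maxfev=10000)
    resid = y - weibull_growth(t, *popt)
    print("RMSE:", float(np.sqrt(np.mean(resid ** 2))))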
Atlas of interoccurrence intervals for selected thresholds of daily precipitation in Texas
Asquith, William H.; Roussel, Meghan C.
2003-01-01
A Poisson process model is used to define the distribution of interoccurrence intervals of daily precipitation in Texas. A precipitation interoccurrence interval is the time period between two successive rainfall events. Rainfall events are defined as daily precipitation equaling or exceeding a specified depth threshold. Ten precipitation thresholds are considered: 0.05, 0.10, 0.25, 0.50, 0.75, 1.0, 1.5, 2.0, 2.5, and 3.0 inches. Site-specific mean interoccurrence intervals and ancillary statistics are presented for each threshold and for each of 1,306 National Weather Service daily precipitation gages. Maps depicting the spatial variation across Texas of the mean interoccurrence interval for each threshold are presented. The percent change from the statewide standard deviation of the interoccurrence intervals to the root-mean-square error ranges in magnitude from -24 percent for the 0.05-inch threshold to -60 percent for the 2.0-inch threshold. Because of the substantial negative percent change, the maps are considered more reliable estimators of the mean interoccurrence interval for most locations in Texas than the statewide mean values.
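Computing a station's mean interoccurrence interval from a daily record is simple; a sketch with synthetic daily precipitation (under the Poisson model the intervals for each threshold are roughly exponential with this mean):

    import numpy as np

    def mean_interoccurrence(daily_precip, threshold):
        """Mean time in days between successive days with precipitation
        greater than or equal to the threshold."""
        days = np.flatnonzero(np.asarray(daily_precip) >= threshold)
        return float(np.mean(np.diff(days))) if len(days) > 1 else float("nan")

    rng = np.random.default_rng(5)
    wet = rng.random(3650) < 0.25                       # ~25% wet days
    precip = rng.exponential(0.3, size=3650) * wet      # depths in inches
    for thr in (0.05, 0.25, 1.0):
        print(thr, mean_interoccurrence(precip, thr))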
Ebtehaj, Isa; Bonakdari, Hossein
2016-01-01
Sediment transport without deposition is an essential consideration in the optimum design of sewer pipes. In this study, a novel method based on a combination of support vector regression (SVR) and the firefly algorithm (FFA) is proposed to predict the minimum velocity required to avoid sediment settling in pipe channels, which is expressed as the densimetric Froude number (Fr). The efficiency of support vector machine (SVM) models depends on the suitable selection of SVM parameters; in this particular study, FFA is used to determine these SVM parameters. The actual effective parameters on Fr calculation are generally identified by employing dimensional analysis. The different dimensionless variables along with the models are introduced. The best performance is attributed to the model that employs the sediment volumetric concentration (C(V)), ratio of relative median diameter of particles to hydraulic radius (d/R), dimensionless particle number (D(gr)) and overall sediment friction factor (λ(s)) parameters to estimate Fr. The performance of the SVR-FFA model is compared with genetic programming, artificial neural network and existing regression-based equations. The results indicate the superior performance of SVR-FFA (mean absolute percentage error = 2.123%; root mean square error = 0.116) compared with other methods.
Linhart, S. Mike; Nania, Jon F.; Sanders, Curtis L.; Archfield, Stacey A.
2012-01-01
The U.S. Geological Survey (USGS) maintains approximately 148 real-time streamgages in Iowa for which daily mean streamflow information is available, but daily mean streamflow data commonly are needed at locations where no streamgages are present. Therefore, the USGS conducted a study as part of a larger project in cooperation with the Iowa Department of Natural Resources to develop methods to estimate daily mean streamflow at locations in ungaged watersheds in Iowa by using two regression-based statistical methods. The regression equations for the statistical methods were developed from historical daily mean streamflow and basin characteristics from streamgages within the study area, which includes the entire State of Iowa and adjacent areas within a 50-mile buffer of Iowa in neighboring states. Results of this study can be used with other techniques to determine the best method for application in Iowa and can be used to produce a Web-based geographic information system tool to compute streamflow estimates automatically. The Flow Anywhere statistical method is a variation of the drainage-area-ratio method, which transfers same-day streamflow information from a reference streamgage to another location by using the daily mean streamflow at the reference streamgage and the drainage-area ratio of the two locations. The Flow Anywhere method modifies the drainage-area-ratio method in order to regionalize the equations for Iowa and determine the best reference streamgage from which to transfer same-day streamflow information to an ungaged location. Data used for the Flow Anywhere method were retrieved for 123 continuous-record streamgages located in Iowa and within a 50-mile buffer of Iowa. The final regression equations were computed by using either left-censored regression techniques with a low limit threshold set at 0.1 cubic feet per second (ft3/s) and the daily mean streamflow for the 15th day of every other month, or by using an ordinary-least-squares multiple linear regression method and the daily mean streamflow for the 15th day of every other month. The Flow Duration Curve Transfer method was used to estimate unregulated daily mean streamflow from the physical and climatic characteristics of gaged basins. For the Flow Duration Curve Transfer method, daily mean streamflow quantiles at the ungaged site were estimated with the parameter-based regression model, which results in a continuous daily flow-duration curve (the relation between exceedance probability and streamflow for each day of observed streamflow) at the ungaged site. By the use of a reference streamgage, the Flow Duration Curve Transfer is converted to a time series. Data used in the Flow Duration Curve Transfer method were retrieved for 113 continuous-record streamgages in Iowa and within a 50-mile buffer of Iowa. The final statewide regression equations for Iowa were computed by using a weighted-least-squares multiple linear regression method and were computed for the 0.01-, 0.05-, 0.10-, 0.15-, 0.20-, 0.30-, 0.40-, 0.50-, 0.60-, 0.70-, 0.80-, 0.85-, 0.90-, and 0.95-exceedance probability statistics determined from the daily mean streamflow with a reporting limit set at 0.1 ft3/s. The final statewide regression equation for Iowa computed by using left-censored regression techniques was computed for the 0.99-exceedance probability statistic determined from the daily mean streamflow with a low limit threshold and a reporting limit set at 0.1 ft3/s. 
For the Flow Anywhere method, results of the validation study conducted by using six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 1,016 to 138 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 1,690 to 237 ft3/s. Values of the percent root-mean-square error ranged from 115 percent to 26.2 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 13.0 to 5.3 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.80 to 0.40. Percent-bias values ranged from 25.4 to 4.0 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.35. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.86 to 0.56. For the streamgage with the best agreement between observed and estimated streamflow, higher streamflows appear to be underestimated. For the streamgage with the worst agreement between observed and estimated streamflow, low flows appear to be overestimated whereas higher flows seem to be underestimated. Estimated cumulative streamflows for the period October 1, 2004, to September 30, 2009, are underestimated by -25.8 and -7.4 percent for the closest and poorest comparisons, respectively. For the Flow Duration Curve Transfer method, results of the validation study conducted by using the same six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 437 to 93.9 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 906 to 169 ft3/s. Values of the percent root-mean-square-error ranged from 67.0 to 25.6 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 12.5 to 4.4 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.79 to 0.40. Percent-bias values ranged from 22.7 to 0.94 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.38. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.89 to 0.48. For the streamgage with the closest agreement between observed and estimated streamflow, there is relatively good agreement between observed and estimated streamflows. For the streamgage with the poorest agreement between observed and estimated streamflow, streamflows appear to be substantially underestimated for much of the time period. Estimated cumulative streamflow for the period October 1, 2004, to September 30, 2009, are underestimated by -9.3 and -22.7 percent for the closest and poorest comparisons, respectively.
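The drainage-area-ratio transfer underlying the Flow Anywhere method is compact enough to sketch; the exponent form below is a common generalization (the report's regionalized regression coefficients are not reproduced here):

    def transfer_daily_flow(q_ref, area_ungaged, area_ref, exponent=1.0):
        """Drainage-area-ratio estimate of same-day streamflow at an
        ungaged site from a reference streamgage; exponent = 1.0 is the
        classical method, and regression methods adjust it regionally."""
        return q_ref * (area_ungaged / area_ref) ** exponent

    # Reference gage: 540 ft3/s today over a 620 mi2 basin; the ungaged
    # site drains 210 mi2 (all numbers illustrative).
    print(transfer_daily_flow(540.0, 210.0, 620.0))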
Voss, Frank D.; Curran, Christopher A.; Mastin, Mark C.
2008-01-01
A mechanistic water-temperature model was constructed by the U.S. Geological Survey for use by the Bureau of Reclamation for studying the effect of potential water management decisions on water temperature in the Yakima River between Roza and Prosser, Washington. Flow and water temperature data for model input were obtained from the Bureau of Reclamation Hydromet database and from measurements collected by the U.S. Geological Survey during field trips in autumn 2005. Shading data for the model were collected by the U.S. Geological Survey in autumn 2006. The model was calibrated with data collected from April 1 through October 31, 2005, and tested with data collected from April 1 through October 31, 2006. Sensitivity analysis results showed that for the parameters tested, daily maximum water temperature was most sensitive to changes in air temperature and solar radiation. Root mean squared error for the five sites used for model calibration ranged from 1.3 to 1.9 degrees Celsius (°C) and mean error ranged from -1.3 to 1.6 °C. The root mean squared error for the five sites used for testing simulation ranged from 1.6 to 2.2 °C and mean error ranged from 0.1 to 1.3 °C. The accuracy of the stream temperatures estimated by the model is limited by four errors (model error, data error, parameter error, and user error).
Rapid Detection of Volatile Oil in Mentha haplocalyx by Near-Infrared Spectroscopy and Chemometrics.
Yan, Hui; Guo, Cheng; Shao, Yang; Ouyang, Zhen
2017-01-01
Near-infrared spectroscopy combined with partial least squares regression (PLSR) and support vector machine (SVM) was applied for the rapid determination of the volatile oil content in Mentha haplocalyx. The effects of data pre-processing methods on the accuracy of the PLSR calibration models were investigated. The performance of the final model was evaluated according to the correlation coefficient (R) and root mean square error of prediction (RMSEP). For the PLSR model, the best preprocessing method combination was first-order derivative, standard normal variate transformation (SNV), and mean centering, which gave calibration and prediction correlation coefficients of 0.8805 and 0.8719, with RMSEC of 0.091 and RMSEP of 0.097, respectively. The wavenumber variables linked to volatile oil lie between 5500 and 4000 cm-1, as shown by analyzing the loading weights and variable importance in projection (VIP) scores. For the SVM model, six LVs (fewer than the seven LVs in the PLSR model) were adopted, and the result was better than the PLSR model: the calibration and prediction correlation coefficients were 0.9232 and 0.9202, respectively, with RMSEC and RMSEP of 0.084 and 0.082, which indicated that the predicted values were accurate and reliable. The quality of a medicine directly links to clinical efficacy; thus, it is important to control the quality of Mentha haplocalyx. This work demonstrated that near-infrared reflectance spectroscopy with chemometrics can be used to rapidly determine the main volatile oil content in M. haplocalyx. Abbreviations used: 1st der: First-order derivative; 2nd der: Second-order derivative; LOO: Leave-one-out; LVs: Latent variables; MC: Mean centering; NIR: Near-infrared; NIRS: Near-infrared spectroscopy; PCR: Principal component regression; PLSR: Partial least squares regression; RBF: Radial basis function; RMSECV: Root mean square error of cross-validation; RMSEC: Root mean square error of calibration; RMSEP: Root mean square error of prediction; SNV: Standard normal variate transformation; SVM: Support vector machine; VIP: Variable importance in projection.
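A minimal sketch of a PLSR calibration scored by cross-validated RMSE, using scikit-learn on synthetic spectra (the pre-processing chain of derivative, SNV, and mean centering is omitted, and all data are made up):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(6)
    X = rng.normal(size=(60, 300))                 # stand-in NIR spectra
    beta = np.zeros(300)
    beta[120:160] = 0.05                           # an "oil" absorption band
    y = X @ beta + rng.normal(0, 0.05, 60)         # stand-in oil content

    pls = PLSRegression(n_components=6)            # six latent variables
    y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
    print("RMSECV:", float(np.sqrt(np.mean((y - y_cv) ** 2))))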
A Canonical Ensemble Correlation Prediction Model for Seasonal Precipitation Anomaly
NASA Technical Reports Server (NTRS)
Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Guilong
2001-01-01
This report describes an optimal ensemble forecasting model for seasonal precipitation and its error estimation. Each individual forecast is based on the canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is made also using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. This new CCA model includes the following features: (1) the use of area-factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States precipitation field. The predictor is the sea surface temperature.
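For unbiased, mutually uncorrelated member forecasts, the MSE-optimal combination weights are inversely proportional to each member's MSE, which is why the per-forecast error estimate matters so much here; a sketch of that standard rule (an assumption about the exact weighting used in the model):

    import numpy as np

    def inverse_mse_weights(mse_per_model):
        """Optimal weights for combining unbiased, uncorrelated forecasts:
        w_i proportional to 1/MSE_i, normalized to sum to one."""
        inv = 1.0 / np.asarray(mse_per_model, dtype=float)
        return inv / inv.sum()

    def ensemble_forecast(forecasts, mse_per_model):
        return inverse_mse_weights(mse_per_model) @ np.asarray(forecasts, float)

    print(inverse_mse_weights([0.8, 1.2, 2.0]))      # illustrative member MSEs
    print(ensemble_forecast([[0.1], [0.3], [-0.2]], [0.8, 1.2, 2.0]))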
Liu, Xiaofeng Steven
2011-05-01
The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T(2) . Using this Hotelling's T(2) statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
Fiyadh, Seef Saadi; AlSaadi, Mohammed Abdulhakim; AlOmar, Mohamed Khalid; Fayaed, Sabah Saadi; Hama, Ako R; Bee, Sharifah; El-Shafie, Ahmed
2017-11-01
The main challenge in the lead removal simulation is the non-linearity of the relationships between the process parameters. Conventional modelling techniques usually deal with this problem by linear methods. An alternative modelling technique is an artificial neural network (ANN) system, which is selected here to reflect the non-linearity in the interaction among the variables. Herein, synthesized deep eutectic solvents were used as a functionalizing agent with carbon nanotubes as adsorbents of Pb2+. Different parameters were used in the adsorption study, including pH (2.7 to 7), adsorbent dosage (5 to 20 mg), contact time (3 to 900 min) and Pb2+ initial concentration (3 to 60 mg/l). The number of experimental trials to feed and train the system was 158 runs carried out at laboratory scale. Two ANN types were designed in this work, feed-forward back-propagation and layer recurrent; both methods are compared based on their predictive proficiency in terms of the mean square error (MSE), root mean square error, relative root mean square error, mean absolute percentage error and determination coefficient (R2) based on the testing dataset. The ANN model of lead removal was subjected to accuracy determination and the results showed an R2 of 0.9956 with an MSE of 1.66 × 10-4. The maximum relative error is 14.93% for the feed-forward back-propagation neural network model.
Sampling biases in datasets of historical mean air temperature over land.
Wang, Kaicun
2014-04-10
Global mean surface air temperature (Ta) has been reported to have risen by 0.74°C over the last 100 years. However, the definition of mean Ta is still a subject of debate. The most defensible definition might be the integral of the continuous temperature measurements over a day (Td0). However, for technological and historical reasons, mean Ta over land has been taken to be the average of the daily maximum and minimum temperature measurements (Td1). All existing principal global temperature analyses over land rely heavily on Td1. Here, I make a first quantitative assessment of the bias in the use of Td1 to estimate trends of mean Ta, using hourly Ta observations at 5600 globally distributed weather stations from the 1970s to 2013. I find that the use of Td1 has a negligible impact on the global mean warming rate. However, the trend of Td1 has a substantial bias at regional and local scales, with a root mean square error of over 25% at 5° × 5° grids. Therefore, caution should be taken when using mean Ta datasets based on Td1 to examine high resolution details of warming trends.
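The two definitions are easy to contrast on a single day of hourly data; a sketch with a synthetic diurnal cycle (purely illustrative values):

    import numpy as np

    hours = np.arange(24)
    rng = np.random.default_rng(7)
    ta = 20 + 6 * np.sin(2 * np.pi * (hours - 9) / 24) + rng.normal(0, 0.3, 24)

    td0 = float(ta.mean())                     # integral-style daily mean
    td1 = float(0.5 * (ta.max() + ta.min()))   # (Tmax + Tmin) / 2 convention
    print(td0, td1, td1 - td0)                 # last value is the Td1 bias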
Peroni, M; Golland, P; Sharp, G C; Baroni, G
2011-01-01
Deformable image registration is a complex optimization process with the goal of modeling a non-rigid transformation between two images. A crucial issue in this field is guaranteeing the user a robust but computationally reasonable algorithm. We rank the performance of four stopping criteria and six stopping-value computation strategies for a log-domain deformable registration. The stopping criteria we test are: (a) velocity field update magnitude, (b) vector field Jacobian, (c) mean squared error, and (d) harmonic energy. Experiments demonstrate that comparing the metric value over the last three iterations with the minimum of the metric over the preceding four to six iterations is a robust and appropriate strategy. The harmonic energy and vector field update magnitude metrics give the best results in terms of robustness and speed of convergence.
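The recommended stopping strategy can be expressed compactly. The sketch below is an illustrative reading of it; the window sizes are parameters, not the authors' exact implementation:

```python
# Stop when the metric averaged over the last `recent` iterations fails to
# improve on the minimum over the preceding `window` iterations.
def should_stop(history, recent=3, window=5, tol=0.0):
    """history: list of metric values (lower is better), one per iteration."""
    if len(history) < recent + window:
        return False
    recent_mean = sum(history[-recent:]) / recent
    prior_min = min(history[-(recent + window):-recent])
    return recent_mean >= prior_min - tol

# Example: a metric that decreases and then plateaus.
metric = [max(0.1, 1 - 0.1 * k) for k in range(30)]
for k in range(1, len(metric) + 1):
    if should_stop(metric[:k]):
        print("stop at iteration", k)
        break
```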
NASA Astrophysics Data System (ADS)
Silva, João Carlos; Souto, Nuno; Cercas, Francisco; Dinis, Rui
A MMSE (Minimum Mean Square Error) DS-CDMA (Direct Sequence-Code Division Multiple Access) receiver coupled with a low-complexity iterative interference suppression algorithm was devised for a MIMO/BLAST (Multiple Input, Multiple Output / Bell Laboratories Layered Space Time) system in order to improve system performance, considering frequency selective fading channels. The scheme is compared against the simple MMSE receiver, for both QPSK and 16QAM modulations, under SISO (Single Input, Single Output) and MIMO systems, the latter with 2Tx by 2Rx and 4Tx by 4Rx (MIMO order 2 and 4 respectively) antennas. To assess its performance in an existing system, the uncoded UMTS HSDPA (High Speed Downlink Packet Access) standard was considered.
The analytical design of spectral measurements for multispectral remote sensor systems
NASA Technical Reports Server (NTRS)
Wiersma, D. J.; Landgrebe, D. A. (Principal Investigator)
1979-01-01
The author has identified the following significant results. In order to choose a design which will be optimal for the largest class of remote sensing problems, a method was developed which attempted to represent the spectral response function from a scene as accurately as possible. The performance of the overall recognition system was studied relative to the accuracy of the spectral representation. The spectral representation was only one of a set of five interrelated parameter categories which also included the spatial representation parameter, the signal to noise ratio, ancillary data, and information classes. The spectral response functions observed from a stratum were modeled as a stochastic process with a Gaussian probability measure. The criterion for spectral representation was defined by the minimum expected mean-square error.
Compensating for estimation smoothing in kriging
Olea, R.A.; Pawlowsky, Vera
1996-01-01
Smoothing is a characteristic inherent to all minimum mean-square-error spatial estimators such as kriging. Cross-validation can be used to detect and model such smoothing. Inversion of the model produces a new estimator: compensated kriging. A numerical comparison based on an exhaustive permeability sampling of a 4-ft² slab of Berea Sandstone shows that the estimation surface generated by compensated kriging has properties intermediate between those generated by ordinary kriging and stochastic realizations resulting from simulated annealing and sequential Gaussian simulation. The frequency distribution is well reproduced by the compensated kriging surface, which also approximates the experimental semivariogram well - better than ordinary kriging, but not as well as stochastic realizations. Compensated kriging produces surfaces that are more accurate than stochastic realizations, but not as accurate as ordinary kriging. © 1996 International Association for Mathematical Geology.
NASA Technical Reports Server (NTRS)
Sireteanu, T.
1974-01-01
An oscillating system with quadratic damping subjected to white noise excitation is replaced by a nonlinear, statistically equivalent system for which the associated Fokker-Planck equation can be exactly solved. The mean square responses are calculated and the optimum damping coefficient is determined with respect to the minimum mean square acceleration criteria. An application of these results to the optimization of automobile suspension damping is given.
Yock, Adam D; Kim, Gwe-Ya
2017-09-01
To present the k-means clustering algorithm as a tool to address treatment planning considerations characteristic of stereotactic radiosurgery using a single isocenter for multiple targets. For 30 patients treated with stereotactic radiosurgery for multiple brain metastases, the geometric centroid and radius of each metastasis were determined from the treatment planning system. In-house software used these data with weighted and unweighted versions of the k-means clustering algorithm to group the targets to be treated with a single isocenter, and to position each isocenter. The algorithm results were evaluated using the within-cluster sum of squares as well as a minimum target coverage metric that considered the effect of target size. Both versions of the algorithm were applied to an example patient to demonstrate the prospective determination of the appropriate number and location of isocenters. Both weighted and unweighted versions of the k-means algorithm were applied successfully to determine the number and position of isocenters. Comparing the two, both the within-cluster sum of squares metric and the minimum target coverage metric resulting from the unweighted version were less than those from the weighted version. The average magnitudes of the differences were small (−0.2 cm² and 0.1% for the within-cluster sum of squares and minimum target coverage, respectively) but statistically significant (Wilcoxon signed-rank test, P < 0.01). The differences between the versions of the k-means clustering algorithm represented an advantage of the unweighted version for the within-cluster sum of squares metric, and an advantage of the weighted version for the minimum target coverage metric. While additional treatment planning considerations have a large influence on the final treatment plan quality, both versions of the k-means algorithm provide automatic, consistent, quantitative, and objective solutions to the tasks associated with SRS treatment planning using a single isocenter for multiple targets. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
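A hedged sketch of the clustering step using scikit-learn's KMeans; the centroids and radii are fabricated, and the radius-cubed (volume-like) weighting is an assumption standing in for the paper's weighted variant:

```python
# Group target centroids with k-means and place one isocenter per cluster
# center; sample_weight mimics a "weighted" variant. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
centroids = rng.uniform(-6, 6, size=(8, 3))   # cm, metastasis centroids
radii = rng.uniform(0.3, 1.5, size=8)         # cm

for name, w in [("unweighted", None), ("weighted", radii ** 3)]:
    km = KMeans(n_clusters=3, n_init=10, random_state=0)
    km.fit(centroids, sample_weight=w)
    print(name, "isocenters:")
    print(np.round(km.cluster_centers_, 2))
    print("  within-cluster sum of squares:", round(km.inertia_, 3))
```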
Evrendilek, Fatih
2007-12-12
This study aims at quantifying spatio-temporal dynamics of monthly mean daily incident photosynthetically active radiation (PAR) over a vast and complex terrain such as Turkey. The spatial interpolation method of universal kriging, and the combination of multiple linear regression (MLR) models and map algebra techniques, were implemented to generate surface maps of PAR with a grid resolution of 500 x 500 m as a function of five geographical and 14 climatic variables. Performance of the geostatistical and MLR models was compared using mean prediction error (MPE), root-mean-square prediction error (RMSPE), average standard prediction error (ASE), mean standardized prediction error (MSPE), root-mean-square standardized prediction error (RMSSPE), and adjusted coefficient of determination (R² adj.). The best-fit MLR- and universal kriging-generated models of monthly mean daily PAR were validated against an independent 37-year observed dataset of 35 climate stations derived from 160 stations across Turkey by the Jackknifing method. The spatial variability patterns of monthly mean daily incident PAR were more accurately reflected in the surface maps created by the MLR-based models than in those created by the universal kriging method, in particular, for spring (May) and autumn (November). The MLR-based spatial interpolation algorithms of PAR described in this study indicated the significance of the multifactor approach to understanding and mapping spatio-temporal dynamics of PAR for a complex terrain over meso-scales.
Uncertainties in extracted parameters of a Gaussian emission line profile with continuum background.
Minin, Serge; Kamalabadi, Farzad
2009-12-20
We derive analytical equations for uncertainties in parameters extracted by nonlinear least-squares fitting of a Gaussian emission function with an unknown continuum background component in the presence of additive white Gaussian noise. The derivation is based on the inversion of the full curvature matrix (equivalent to the Fisher information matrix) of the least-squares error, χ², in a four-variable fitting parameter space. The derived uncertainty formulas (equivalent to Cramér-Rao error bounds) are found to be in good agreement with the numerically computed uncertainties from a large ensemble of simulated measurements. The derived formulas can be used for estimating minimum achievable errors for a given signal-to-noise ratio and for investigating some aspects of measurement setup trade-offs and optimization. While the intended application is Fabry-Perot spectroscopy for wind and temperature measurements in the upper atmosphere, the derivation is generic and applicable to other spectroscopy problems with a Gaussian line shape.
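The curvature-matrix inversion at the heart of the derivation is easy to reproduce numerically. Below is an illustrative sketch for the four-parameter Gaussian-plus-background model; all values are arbitrary:

```python
# Invert the Fisher information matrix of a Gaussian-plus-background model
# to get Cramer-Rao-type lower bounds on the parameter uncertainties.
import numpy as np

x = np.linspace(-10, 10, 201)
A, x0, w, B = 100.0, 0.0, 2.0, 10.0       # amplitude, center, width, background
sigma_n = 1.0                              # additive white Gaussian noise std

g = np.exp(-0.5 * ((x - x0) / w) ** 2)     # model: A * g + B
# Jacobian of the model with respect to (A, x0, w, B).
J = np.column_stack([
    g,                                     # d/dA
    A * g * (x - x0) / w ** 2,             # d/dx0
    A * g * (x - x0) ** 2 / w ** 3,        # d/dw
    np.ones_like(x),                       # d/dB (background enters here)
])
fisher = J.T @ J / sigma_n ** 2            # Fisher information (curvature) matrix
cov = np.linalg.inv(fisher)                # Cramer-Rao bound on the covariance
for name, var in zip(["A", "x0", "w", "B"], np.diag(cov)):
    print(f"sigma({name}) >= {np.sqrt(var):.4f}")
```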
Pappas, Christos; Kyraleou, Maria; Voskidi, Eleni; Kotseridis, Yorgos; Tarantilis, Petros A; Kallithraka, Stamatina
2015-02-01
The mean degree of polymerization (mDP) and the degree of galloylation (%G) in grape seeds were determined directly and simultaneously using diffuse reflectance infrared Fourier transform spectroscopy and partial least squares (PLS). The results were compared with those obtained using the conventional analysis employing phloroglucinolysis as pretreatment followed by high-performance liquid chromatography with UV and mass spectrometry detection. Infrared spectra were recorded on solid-state samples after freeze drying. The 2nd derivative of the 1832 to 1416 and 918 to 739 cm⁻¹ spectral regions for the quantification of mDP, the 2nd derivative of the 1813 to 607 cm⁻¹ spectral region for the determination of %G, and PLS regression were used. The determination coefficients (R²) of mDP and %G were 0.99 and 0.98, respectively. The corresponding values of the root-mean-square error of calibration were found to be 0.506 and 0.692, the root-mean-square error of cross-validation 0.811 and 0.921, and the root-mean-square error of prediction 0.612 and 0.801. The proposed method, in comparison with the conventional method, is simpler, less time consuming, and more economical, and requires smaller quantities of chemical reagents and fewer sample pretreatment steps. It could be a starting point for the design of more specific models according to the requirements of the wineries. © 2015 Institute of Food Technologists®
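A minimal sketch of the analogous modelling chain on synthetic spectra (not the grape-seed data): a Savitzky-Golay second derivative followed by PLS regression, reporting RMSEC/RMSEP-style errors:

```python
# Second-derivative preprocessing of spectra, then PLS calibration.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)
n_samples, n_points = 60, 300
mdp = rng.uniform(2, 12, n_samples)                 # toy "reference" mDP values
peak = np.exp(-0.5 * ((np.arange(n_points) - 120) / 15) ** 2)
spectra = np.outer(mdp, peak) + rng.normal(0, 0.05, (n_samples, n_points))

# 2nd derivative along the spectral axis.
X = savgol_filter(spectra, window_length=11, polyorder=2, deriv=2, axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, mdp, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
rmsec = np.sqrt(mean_squared_error(y_tr, pls.predict(X_tr)))
rmsep = np.sqrt(mean_squared_error(y_te, pls.predict(X_te)))
print(f"RMSEC = {rmsec:.3f}, RMSEP = {rmsep:.3f}")
```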
NASA Astrophysics Data System (ADS)
Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan
2017-07-01
Soil temperature (Ts) and its thermal regime are the most important factors in plant growth, biological activities, and water movement in soil. Due to scarcity of Ts data, estimation of soil temperature is an important issue in different fields of science. The main objective of the present study is to investigate the accuracy of multivariate adaptive regression splines (MARS) and support vector machine (SVM) methods for estimating Ts. For this aim, the monthly mean data of Ts (at depths of 5, 10, 50, and 100 cm) and meteorological parameters of 30 synoptic stations in Iran were utilized. To develop the MARS and SVM models, various combinations of minimum, maximum, and mean air temperatures (Tmin, Tmax, T); actual and maximum possible sunshine duration and sunshine duration ratio (n, N, n/N); actual, net, and extraterrestrial solar radiation data (Rs, Rn, Ra); precipitation (P); relative humidity (RH); wind speed at 2 m height (u2); and water vapor pressure (Vp) were used as input variables. Three error statistics, namely root-mean-square error (RMSE), mean absolute error (MAE), and determination coefficient (R²), were used to check the performance of the MARS and SVM models. The results indicated that MARS was superior to SVM at different depths. In the test and validation phases, the most accurate estimations for MARS were obtained at the depth of 10 cm for Tmax, Tmin, T inputs (RMSE = 0.71 °C, MAE = 0.54 °C, and R² = 0.995) and for RH, Vp, P, and u2 inputs (RMSE = 0.80 °C, MAE = 0.61 °C, and R² = 0.996), respectively.
A network application for modeling a centrifugal compressor performance map
NASA Astrophysics Data System (ADS)
Nikiforov, A.; Popova, D.; Soldatova, K.
2017-08-01
The approximation of the aerodynamic performance of a centrifugal compressor stage and vaneless diffuser by neural networks is presented. Advantages, difficulties and specific features of the method are described. An example of a neural network and its structure is shown. The performance in terms of efficiency, pressure ratio and work coefficient of 39 model stages within the range of flow coefficient from 0.01 to 0.08 was modeled with a mean squared error of 1.5%. In addition, the loss and friction coefficients of vaneless diffusers of relative widths 0.014-0.10 were modeled with a mean squared error of 2.45%.
Evaluation of dynamic electromagnetic tracking deviation
NASA Astrophysics Data System (ADS)
Hummel, Johann; Figl, Michael; Bax, Michael; Shahidi, Ramin; Bergmann, Helmar; Birkfellner, Wolfgang
2009-02-01
Electromagnetic tracking systems (EMTSs) are widely used in clinical applications. Many reports have evaluated their static behavior and examined errors caused by metallic objects. Although some publications concerning the dynamic behavior of EMTSs exist, the measurement protocols are either difficult to reproduce with respect to the movement path or can only be accomplished at high technical effort. Because dynamic behavior is of major interest with respect to clinical applications, we established a simple but effective measurement protocol that is easy to repeat at other laboratories. We built a simple pendulum on which the sensor of our EMTS (Aurora, NDI, CA) could be mounted. The pendulum was mounted on a special bearing to guarantee that the pendulum path is planar; this assumption was tested before starting the measurements. All relevant parameters defining the pendulum motion, such as rotation center and length, were determined by static measurement with satisfactory accuracy. Then position and orientation data were gathered over a time period of 8 seconds and timestamps were recorded. Data analysis provided a positioning error and an overall error combining both position and orientation. All errors were calculated by means of the well-known equations governing pendulum movement. Additionally, latency - the elapsed time from input motion until the immediate consequences of that input are available - was calculated using the same equations for different velocities. We repeated the measurements with different metal objects (rods made of stainless steel types 303 and 416) between the field generator and the pendulum. We found a root mean square error (eRMS) of 1.02 mm with respect to the distance of the sensor position from the fit plane (maximum error emax = 2.31 mm, minimum error emin = −2.36 mm). The eRMS for the positional error amounted to 1.32 mm, while the overall error was 3.24 mm. The latency at a pendulum angle of 0° (vertical) was 7.8 ms.
Bär, David; Debus, Heiko; Brzenczek, Sina; Fischer, Wolfgang; Imming, Peter
2018-03-20
Near-infrared spectroscopy is frequently used by the pharmaceutical industry to monitor and optimize several production processes. In combination with chemometrics, a mathematical-statistical technique, the advantages of near-infrared spectroscopy can be exploited: it is a fast, non-destructive, non-invasive, and economical analytical method. One of the most advanced and popular chemometric techniques is the partial least squares algorithm, owing to its applicability in routine use and the quality of its results. The required reference analytics enable the analysis of various parameters of interest, for example, moisture content, particle size, and many others. Parameters such as the correlation coefficient, root mean square error of prediction, root mean square error of calibration, and root mean square error of validation were used for evaluating the applicability and robustness of the analytical methods developed. This study deals with investigating a Naproxen Sodium granulation process using near-infrared spectroscopy and the development of water content and particle-size methods. For the water content method, one should consider a maximum water content of about 21% in the granulation process, which must be confirmed by the loss on drying. Further influences to be considered are the constantly changing product temperature, rising to about 54 °C, the creation of hydrated states of Naproxen Sodium when using a maximum of about 21% water content, and the large quantity of about 87% Naproxen Sodium in the formulation. These influences were considered in combination when developing the near-infrared spectroscopy method for the water content of Naproxen Sodium granules. The root mean square error was 0.25% for the calibration dataset and 0.30% for the validation dataset, obtained after different stages of optimization by multiplicative scatter correction and the first derivative. Using laser diffraction, the granules were analyzed for particle size, yielding the summary sieve sizes of >63 μm and >100 μm. The following influences should be considered for application in routine production: constant changes in water content up to 21% and a product temperature up to 54 °C. The different stages of optimization resulted in a root mean square error of 2.54% for the calibration dataset and 3.53% for the validation set, using the Kubelka-Munk conversion and first derivative, for the near-infrared spectroscopy method for particle sizes >63 μm. For the near-infrared spectroscopy method for particle sizes >100 μm, the root mean square error was 3.47% for the calibration dataset and 4.51% for the validation set, using the same pre-treatments. The robustness and suitability of this methodology have already been demonstrated by its recent successful implementation in a routine granulate production process. Copyright © 2018 Elsevier B.V. All rights reserved.
Design of minimum multiplier fractional order differentiator based on lattice wave digital filter.
Barsainya, Richa; Rawat, Tarun Kumar; Kumar, Manjeet
2017-01-01
In this paper, a novel design of a fractional order differentiator (FOD) based on a lattice wave digital filter (LWDF) is proposed which requires a minimum number of multipliers for its structural realization. Firstly, the FOD design problem is formulated as an optimization problem using the transfer function of the lattice wave digital filter. Then, three optimization algorithms, namely, genetic algorithm (GA), particle swarm optimization (PSO) and cuckoo search algorithm (CSA), are applied to determine the optimal LWDF coefficients. The realization of the FOD using the LWD structure increases the design accuracy, as only N coefficients need to be optimized for an Nth-order FOD. Finally, two design examples of 3rd- and 5th-order lattice wave digital fractional order differentiators (LWDFODs) are demonstrated to justify the design accuracy. The performance analysis of the proposed design is carried out based on magnitude response, absolute magnitude error (dB), root mean square (RMS) magnitude error, arithmetic complexity, convergence profile and computation time. Simulation results comparing the proposed LWDFOD with published works show an improvement of 29% in the proposed design. The proposed LWDFOD approximates the ideal FOD and surpasses the existing ones reasonably well in the mid and high frequency range, thereby making the proposed LWDFOD a promising technique for the design of digital FODs. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
[NIR Assignment of Magnolol by 2D-COS Technology and Model Application in Huoxiangzhengqi Oral Liquid].
Pei, Yan-ling; Wu, Zhi-sheng; Shi, Xin-yuan; Pan, Xiao-ning; Peng, Yan-fang; Qiao, Yan-jiang
2015-08-01
Near infrared (NIR) spectroscopy assignment of Magnolol was performed using deuterated chloroform as solvent and two-dimensional correlation spectroscopy (2D-COS) technology. According to the synchronous spectra of deuterated chloroform and Magnolol, 1365~1455, 1600~1720, 2000~2181 and 2275~2465 nm were the characteristic absorption regions of Magnolol. In relation to the structure of Magnolol, 1440 nm corresponds to the stretching vibration of the phenolic O-H group; 1679 nm to the stretching vibration of the aryl group and of methyl connected with aryl; 2117, 2304, 2339 and 2370 nm to combinations of the stretching, bending and deformation vibrations of aryl C-H; and 2445 nm to the bending vibration of methyl linked with the aryl group; these bands are attributed to the characteristics of Magnolol. Huoxiangzhengqi Oral Liquid was adopted to study Magnolol: the characteristic band from spectral assignment and the bands selected by interval Partial Least Squares (iPLS) and Synergy interval Partial Least Squares (SiPLS) were used to establish Partial Least Squares (PLS) quantitative models. The coefficients of determination Rcal² and Rpre² were greater than 0.99, and the Root Mean Square Error of Calibration (RMSEC), Root Mean Square Error of Cross Validation (RMSECV) and Root Mean Square Error of Prediction (RMSEP) were very small. This indicates that the characteristic band from spectral assignment gives the same results as the chemometric band selection in the PLS model. The study provides a reference for NIR spectral assignment of chemical compositions in Chinese Materia Medica and for the interpretation of NIR band filters.
Zhang, Bao; Yao, Yibin; Fok, Hok Sum; Hu, Yufeng; Chen, Qiang
2016-01-01
This study uses the observed vertical displacements of Global Positioning System (GPS) time series obtained from the Crustal Movement Observation Network of China (CMONOC), with careful pre- and post-processing, to estimate the seasonal crustal deformation in response to the hydrological loading in the lower three-rivers headwater region of southwest China, followed by inferring the annual EWH changes through geodetic inversion methods. The Helmert Variance Component Estimation (HVCE) and the Minimum Mean Square Error (MMSE) criterion were successfully employed. The GPS-inferred EWH changes agree well qualitatively with the Gravity Recovery and Climate Experiment (GRACE)-inferred and the Global Land Data Assimilation System (GLDAS)-inferred EWH changes, with discrepancies of 3.2–3.9 cm and 4.8–5.2 cm, respectively. In the research areas, the EWH changes in the Lancang basin are larger than in the other regions, with a maximum of 21.8–24.7 cm and a minimum of 3.1–6.9 cm.
Medina, K.D.; Tasker, Gary D.
1985-01-01
The surface-water data network in Kansas was analyzed using generalized least squares regression for its effectiveness in providing regional streamflow information. The correlation and time-sampling error of the streamflow characteristic are considered in the generalized least squares method. Unregulated medium-flow, low-flow and high-flow characteristics were selected to be representative of the regional information that can be obtained from streamflow gaging station records, for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and/or adding new stations. The analysis used streamflow records for all currently operated stations that were not affected by regulation and for discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The state was divided into three network areas, western, northeastern, and southeastern Kansas, and an analysis was made for three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean square error for each cost level could be obtained by adding new stations and discontinuing some of the present network stations. Large reductions in sampling mean square error for low-flow information could be accomplished in all three network areas, with western Kansas having the most dramatic reduction. The addition of new stations would be most beneficial for medium-flow information in western Kansas, and to lesser degrees in the other two areas. The reduction of sampling mean square error for high-flow information would benefit most from the addition of new stations in western Kansas, with the effect diminishing to lesser degrees in the other two areas. Southeastern Kansas showed the smallest error reduction in high-flow information. A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas. (Author's abstract)
NASA Astrophysics Data System (ADS)
Adineh-Vand, A.; Torabi, M.; Roshani, G. H.; Taghipour, M.; Feghhi, S. A. H.; Rezaei, M.; Sadati, S. M.
2013-09-01
This paper presents a soft-computing-based artificial intelligence technique, the adaptive neuro-fuzzy inference system (ANFIS), to predict the neutron production rate (NPR) of the IR-IECF device over wide discharge current and voltage ranges. A hybrid learning algorithm consisting of back-propagation and least-squares estimation is used for training the ANFIS model. The performance of the proposed ANFIS model is tested against experimental data using four performance measures: correlation coefficient, mean absolute error, mean relative error percentage (MRE%) and root mean square error. The obtained results show that the proposed ANFIS model achieves good agreement with the experimental results. In comparison with the experimental data, the proposed ANFIS model has MRE% below 1.53% and 2.85% for the training and testing data, respectively. Therefore, this model can be used as an efficient tool to predict the NPR of the IR-IECF device.
Analysis of S-box in Image Encryption Using Root Mean Square Error Method
NASA Astrophysics Data System (ADS)
Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan
2012-07-01
The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving and many variants appear in the literature, which include the advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence in distinguishing an S-box for an application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to existing algebraic and statistical analysis already used for image encryption applications, we propose an application of the root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of the root mean square error analysis in statistics has proven to be effective in determining the difference between original data and processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of the root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of S-boxes.
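The RMSE measure itself is straightforward to compute. A toy sketch, with a random permutation standing in for a real S-box cipher (not any of the S-boxes named above):

```python
# RMSE between the plain and "encrypted" image: a larger RMSE indicates
# stronger scrambling of pixel values relative to the original.
import numpy as np

rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(64, 64)).astype(float)   # toy 8-bit image
encrypted = img.flatten()
rng.shuffle(encrypted)                                     # placeholder cipher
encrypted = encrypted.reshape(img.shape)

rmse = np.sqrt(np.mean((img - encrypted) ** 2))
print(f"RMSE between plain and encrypted image: {rmse:.2f}")
```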
Application of quadratic optimization to supersonic inlet control.
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Zeller, J. R.
1972-01-01
This paper describes the application of linear stochastic optimal control theory to the design of the control system for the air intake, the inlet, of a supersonic air-breathing propulsion system. The controls must maintain a stable inlet shock position in the presence of random airflow disturbances and prevent inlet unstart. Two different linear time invariant controllers are developed. One is designed to minimize a nonquadratic index, the expected frequency of inlet unstart, and the other is designed to minimize the mean square value of inlet shock motion. The quadratic equivalence principle is used to obtain a linear controller that minimizes the nonquadratic index. The two controllers are compared on the basis of unstart prevention, control effort requirements, and frequency response. It is concluded that while controls designed to minimize unstarts are desirable in that the index minimized is physically meaningful, computation time required is longer than for the minimum mean square shock position approach. The simpler minimum mean square shock position solution produced expected unstart frequency values which were not significantly larger than those of the nonquadratic solution.
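The paper's second design minimizes mean-square shock motion, which is the classical linear-quadratic setting. As a loose illustration of that machinery only (toy plant matrices, not the inlet model):

```python
# Minimal LQR sketch: a quadratic cost on a toy second-order plant standing
# in for the shock-position dynamics; the gain minimizes the quadratic index.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-4.0, -0.8]])   # toy plant: position, velocity
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])                   # penalize shock excursion most
R = np.array([[1.0]])                      # control-effort penalty

P = solve_continuous_are(A, B, Q, R)       # algebraic Riccati solution
K = np.linalg.solve(R, B.T @ P)            # optimal state-feedback gain
print("LQR gain K =", np.round(K, 3))
print("closed-loop eigenvalues:", np.round(np.linalg.eigvals(A - B @ K), 3))
```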
Spectral combination of spherical gravitational curvature boundary-value problems
NASA Astrophysics Data System (ADS)
Pitoňák, Martin; Eshagh, Mehdi; Šprlák, Michal; Tenzer, Robert; Novák, Pavel
2018-04-01
Four solutions of the spherical gravitational curvature boundary-value problems can be exploited for the determination of the Earth's gravitational potential. In this article we discuss the combination of simulated satellite gravitational curvatures, i.e., components of the third-order gravitational tensor, by merging these solutions using the spectral combination method. For this purpose, integral estimators of biased and unbiased types are derived. In numerical studies, we investigate the performance of the developed mathematical models for gravitational field modelling in the area of Central Europe based on simulated satellite measurements. Firstly, we verify the correctness of the integral estimators for the spectral downward continuation by a closed-loop test. Estimated errors of the combined solution are about eight orders of magnitude smaller than those from the individual solutions. Secondly, we perform a numerical experiment by considering Gaussian noise with a standard deviation of 6.5 × 10⁻¹⁷ m⁻¹ s⁻² in the input data at a satellite altitude of 250 km above the mean Earth sphere. This value of standard deviation is equivalent to a signal-to-noise ratio of 10. Superior results with respect to the global geopotential model TIM-r5 are obtained by the spectral downward continuation of the vertical-vertical-vertical component with a standard deviation of 2.104 m² s⁻², but the root mean square error is the largest, reaching 9.734 m² s⁻². Using the spectral combination of all gravitational curvatures, the root mean square error is more than 400 times smaller, but the standard deviation reaches 17.234 m² s⁻². The combination of more components decreases the root mean square error of the corresponding solutions, while the standard deviations of the combined solutions do not improve compared to the solution from the vertical-vertical-vertical component. The presented method represents a weighted mean in the spectral domain that minimizes the root mean square error of the combined solutions and improves the standard deviation of the solution based only on the least accurate components.
Response Surface Modeling Using Multivariate Orthogonal Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; DeLoach, Richard
2001-01-01
A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one-factor-at-a-time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
Determining the Uncertainty of X-Ray Absorption Measurements
Wojcik, Gary S.
2004-01-01
X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2% by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10% in any of the tests conducted.
Huang, Xinchuan; Schwenke, David W; Lee, Timothy J
2011-01-28
In this work, we build upon our previous work on the theoretical spectroscopy of ammonia, NH₃. Compared to our 2008 study, we include more physics in our rovibrational calculations and more experimental data in the refinement procedure, and these enable us to produce a potential energy surface (PES) of unprecedented accuracy. We call this the HSL-2 PES. The additional physics we include is a second-order correction for the breakdown of the Born-Oppenheimer approximation, and we find it to be critical for improved results. By including experimental data for higher rotational levels in the refinement procedure, we were able to greatly reduce our systematic errors for the rotational dependence of our predictions. These additions together lead to a significantly improved total angular momentum (J) dependence in our computed rovibrational energies. The root-mean-square error between our predictions using the HSL-2 PES and the reliable energy levels from the HITRAN database for J = 0-6 and J = 7/8 for ¹⁴NH₃ is only 0.015 cm⁻¹ and 0.020/0.023 cm⁻¹, respectively. The root-mean-square errors for the characteristic inversion splittings are approximately 1/3 smaller than those for energy levels. The root-mean-square error for the 6002 J = 0-8 transition energies is 0.020 cm⁻¹. Overall, for J = 0-8, the spectroscopic data computed with HSL-2 are roughly an order of magnitude more accurate relative to our previous best ammonia PES (denoted HSL-1). These impressive numbers are eclipsed only by the root-mean-square error between our predictions for purely rotational transition energies of ¹⁵NH₃ and the highly accurate Cologne database (CDMS): 0.00034 cm⁻¹ (10 MHz), in other words, 2 orders of magnitude smaller. In addition, we identify a deficiency in the ¹⁵NH₃ energy levels determined from a model of the experimental data.
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. An analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence and lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
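A compact sketch of the QKLMS update as described above, with a Gaussian kernel; the step size, kernel width, and quantization size eps are illustrative choices, not the paper's settings:

```python
# Online QKLMS: inputs closer than eps to an existing center update that
# center's coefficient instead of adding a new one, curbing network growth.
import numpy as np

def qklms(X, d, eta=0.5, sigma=1.0, eps=0.3):
    centers, alphas = [X[0]], [eta * d[0]]
    errors = []
    for x, target in zip(X[1:], d[1:]):
        C = np.array(centers)
        k = np.exp(-np.sum((C - x) ** 2, axis=1) / (2 * sigma ** 2))
        e = target - np.dot(alphas, k)          # prediction error
        errors.append(e)
        dists = np.linalg.norm(C - x, axis=1)
        j = np.argmin(dists)
        if dists[j] <= eps:
            alphas[j] += eta * e                # quantize: merge into closest center
        else:
            centers.append(x)                   # novel input: grow the network
            alphas.append(eta * e)
    return np.array(errors), len(centers)

rng = np.random.default_rng(6)
X = rng.uniform(-3, 3, size=(500, 1))
d = np.sin(X[:, 0]) + rng.normal(0, 0.1, 500)
err, m = qklms(X, d)
print(f"final MSE ~ {np.mean(err[-100:] ** 2):.4f} with {m} centers (vs 500 samples)")
```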
Water balance models in one-month-ahead streamflow forecasting
Alley, William M.
1985-01-01
Techniques are tested that incorporate information from water balance models in making 1-month-ahead streamflow forecasts in New Jersey. The results are compared to those based on simple autoregressive time series models. The relative performance of the models is dependent on the month of the year in question. The water balance models are most useful for forecasts of April and May flows. For the stations in northern New Jersey, the April and May forecasts were made in order of decreasing reliability using the water-balance-based approaches, using the historical monthly means, and using simple autoregressive models. The water balance models were useful to a lesser extent for forecasts during the fall months. For the rest of the year the improvements in forecasts over those obtained using the simpler autoregressive models were either very small or the simpler models provided better forecasts. When using the water balance models, monthly corrections for bias are found to improve minimum mean-square-error forecasts as well as to improve estimates of the forecast conditional distributions.
Design of adaptive control systems by means of self-adjusting transversal filters
NASA Technical Reports Server (NTRS)
Merhav, S. J.
1986-01-01
The design of closed-loop adaptive control systems based on nonparametric identification is addressed. Implementation is by self-adjusting Least Mean Square (LMS) transversal filters. The design concept is Model Reference Adaptive Control (MRAC). The major issues are to preserve the linearity of the error equations of each LMS filter and to prevent estimation bias due to process or measurement noise, thus providing necessary conditions for the convergence and stability of the control system. The controlled element is assumed to be asymptotically stable and minimum phase. Because of the nonparametric Finite Impulse Response (FIR) estimates provided by the LMS filters, a priori information on the plant model is needed only in broad terms. Following a survey of control system configurations and filter design considerations, the system implementation is shown here in Single Input Single Output (SISO) format, which is readily extendable to multivariable forms. In extensive computer simulation studies, the controlled element is represented by a second-order system with widely varying damping, natural frequency, and relative degree.
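The LMS transversal (tapped-delay-line) filter at the core of such an identifier can be sketched as follows; the toy FIR plant and parameters are illustrative, not the report's simulation setup:

```python
# Self-adjusting LMS transversal filter identifying an unknown FIR plant
# nonparametrically from input/output data.
import numpy as np

def lms_fir(u, d, n_taps=8, mu=0.02):
    w = np.zeros(n_taps)                     # transversal filter taps
    err = np.zeros(len(u))
    for n in range(n_taps - 1, len(u)):
        x = u[n - n_taps + 1:n + 1][::-1]    # [u[n], u[n-1], ..., u[n-n_taps+1]]
        err[n] = d[n] - w @ x                # identification error
        w += mu * err[n] * x                 # LMS tap update
    return w, err

rng = np.random.default_rng(7)
u = rng.normal(0, 1, 2000)                           # excitation signal
plant = np.array([0.6, 0.3, -0.2, 0.1])              # unknown FIR "controlled element"
d = np.convolve(u, plant)[: len(u)] + rng.normal(0, 0.01, len(u))
w, err = lms_fir(u, d)
print("estimated taps:", np.round(w[:4], 3), "(true:", plant, ")")
```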
An impact analysis of forecasting methods and forecasting parameters on bullwhip effect
NASA Astrophysics Data System (ADS)
Silitonga, R. Y. H.; Jelly, N.
2018-04-01
The bullwhip effect is an increase in the variance of demand fluctuations from downstream to upstream in a supply chain. Forecasting methods and forecasting parameters are recognized as factors that affect the bullwhip phenomenon. To study these factors, we can develop simulations. Previous studies have simulated the bullwhip effect in several ways, such as mathematical equation modelling, information control modelling, computer programs, and many more. In this study a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show the change in bullwhip effect ratio resulting from differences in forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were the moving-average period, smoothing parameter, signalling factor, and safety stock factor. The simulations showed that decreasing the moving-average period, increasing the smoothing parameter, and increasing the signalling factor can create a bigger bullwhip effect ratio, while the safety stock factor had no impact on the bullwhip effect.
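As a hedged stand-in for such a simulation (not the Bullwhip Explorer spreadsheet), the sketch below estimates the bullwhip ratio under a moving-average forecast; the order-up-to replenishment policy and all parameter values are assumptions for demonstration:

```python
# Bullwhip ratio = Var(orders) / Var(demand) under a moving-average
# forecast feeding an assumed order-up-to policy.
import numpy as np

def bullwhip_ratio(p, lead_time=2, n=20000, seed=0):
    rng = np.random.default_rng(seed)
    demand = 100 + rng.normal(0, 10, n)              # iid demand stream
    orders = np.empty(n - p)
    prev_level = lead_time * demand[:p].mean()
    for t in range(p, n):
        forecast = demand[t - p:t].mean()            # moving-average forecast
        level = lead_time * forecast                 # order-up-to level
        orders[t - p] = max(0.0, demand[t] + level - prev_level)
        prev_level = level
    return orders.var() / demand.var()

for p in (2, 5, 10):
    print(f"MA period {p:2d}: bullwhip ratio = {bullwhip_ratio(p):.2f}")
```

Shortening the moving-average period makes the forecast more reactive, which inflates the ratio, matching the direction of the effect reported above.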
Li, Wen-bing; Yao, Lin-tao; Liu, Mu-hua; Huang, Lin; Yao, Ming-yin; Chen, Tian-bing; He, Xiu-wen; Yang, Ping; Hu, Hui-qin; Nie, Jiang-hui
2015-05-01
Cu in navel orange was detected rapidly by laser-induced breakdown spectroscopy (LIBS) combined with partial least squares (PLS) for quantitative analysis, and the effect of different spectral data pretreatment methods on the detection accuracy of the model was explored. Spectral data for the 52 Gannan navel orange samples were pretreated by different combinations of data smoothing, mean centering, and standard normal variate transformation. Then the 319~338 nm wavelength section containing characteristic spectral lines of Cu was selected to build PLS models, and the main evaluation indexes of the models, such as the regression coefficient (r), root mean square error of cross validation (RMSECV) and root mean square error of prediction (RMSEP), were compared and analyzed. The three indicators of the PLS model after 13-point smoothing and mean centering reached 0.9928, 3.43 and 3.4, respectively, and the average relative error of the prediction model was only 5.55%; in short, the calibration and prediction quality of this model were the best. The results show that by selecting the appropriate data pre-processing method, the prediction accuracy of PLS quantitative models for fruits and vegetables detected by LIBS can be improved effectively, providing a new method for fast and accurate detection of fruits and vegetables by LIBS.
NASA Technical Reports Server (NTRS)
Rice, R. F.
1976-01-01
The root-mean-square error performance measure is used to compare the relative performance of several widely known source coding algorithms with the RM2 image data compression system. The results demonstrate that RM2 has a uniformly significant performance advantage.
Robust THP Transceiver Designs for Multiuser MIMO Downlink with Imperfect CSIT
NASA Astrophysics Data System (ADS)
Ubaidulla, P.; Chockalingam, A.
2009-12-01
We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.
Teng, C-C; Chai, H; Lai, D-M; Wang, S-F
2007-02-01
Previous research has shown that there is no significant relationship between the degree of structural degeneration of the cervical spine and neck pain. We therefore sought to investigate the potential role of sensory dysfunction in chronic neck pain. Cervicocephalic kinesthetic sensibility, expressed by how accurately an individual can reposition the head, was studied in three groups of individuals, a control group of 20 asymptomatic young adults and two groups of middle-aged adults (20 subjects in each group) with or without a history of mild neck pain. An ultrasound-based three-dimensional coordinate measuring system was used to measure the position of the head and to test the accuracy of repositioning. Constant error (indicating that the subject overshot or undershot the intended position) and root mean square errors (representing total errors of accuracy and variability) were measured during repositioning of the head to the neutral head position (Head-to-NHP) and repositioning of the head to the target (Head-to-Target) in three cardinal planes (sagittal, transverse, and frontal). Analysis of covariance (ANCOVA) was used to test the group effect, with age used as a covariate. The constant errors during repositioning from a flexed position and from an extended position to the NHP were significantly greater in the middle-aged subjects than in the control group (beta=0.30 and beta=0.60, respectively; P<0.05 for both). In addition, the root mean square errors during repositioning from a flexed or extended position to the NHP were greater in the middle-aged subjects than in the control group (beta=0.27 and beta=0.49, respectively; P<0.05 for both). The root mean square errors also increased during Head-to-Target in left rotation (beta=0.24;P<0.05), but there was no difference in the constant errors or root mean square errors during Head-to-NHP repositioning from other target positions (P>0.05). The results indicate that, after controlling for age as a covariate, there was no group effect. Thus, age appears to have a profound effect on an individual's ability to accurately reposition the head toward the neutral position in the sagittal plane and repositioning the head toward left rotation. A history of mild chronic neck pain alone had no significant effect on cervicocephalic kinesthetic sensibility.
Sayago, Ana; Asuero, Agustin G
2006-09-14
A bilogarithmic hyperbolic cosine method for the spectrophotometric evaluation of stability constants of 1:1 weak complexes from continuous variation data has been devised and applied to literature data. A weighting scheme, however, is necessary in order to take into account the transformation used for linearization. The method may be considered a useful alternative to methods in which one variable is involved on both sides of the basic equation (i.e. Heller and Schwarzenbach, Likussar and Adsul, and Ramanathan). Classical least squares leads in those instances to biased and approximate stability constants and limiting absorbance values. The advantages of the proposed method are that it gives a clear indication of the existence of only one complex in solution, it is flexible enough to allow for weighting of measurements, and the computation procedure yields the best value of log β11 together with its limit of error. The agreement between the values obtained by applying the weighted hyperbolic cosine method and the non-linear regression (NLR) method is good, with the mean quadratic error at a minimum in both cases.
NASA Astrophysics Data System (ADS)
Vahidi, Vahid; Saberinia, Ebrahim; Regentova, Emma E.
2017-10-01
A channel estimation (CE) method based on compressed sensing (CS) is proposed to estimate the sparse and doubly selective (DS) channel for hyperspectral image transmission from unmanned aircraft vehicles to ground stations. The proposed method contains three steps: (1) a priori estimate of the channel by orthogonal matching pursuit (OMP), (2) calculation of the linear minimum mean square error (LMMSE) estimate of the received pilots given the estimated channel, and (3) estimation of the complex amplitudes and Doppler shifts of the channel using the enhanced received pilot data in a second round of a CS algorithm. The proposed method is named DS-LMMSE-OMP, and its performance is evaluated by simulating transmission of AVIRIS hyperspectral data over the communication channel and assessing their fidelity for automated analysis after demodulation. The performance of the DS-LMMSE-OMP approach is compared with that of two other state-of-the-art CE methods. The simulation results show up to an 8-dB improvement in the bit error rate figure of merit and a 50% improvement in hyperspectral image classification accuracy.
LES-Modeling of a Partially Premixed Flame using a Deconvolution Turbulence Closure
NASA Astrophysics Data System (ADS)
Wang, Qing; Wu, Hao; Ihme, Matthias
2015-11-01
The modeling of the turbulence/chemistry interaction in partially premixed and multi-stream combustion remains an outstanding issue. By extending a recently developed constrained minimum mean-square error deconvolution (CMMSED) method, the objective of this work is to develop a source-term closure for turbulent multi-stream combustion. In this method, the chemical source term is obtained from a three-stream flamelet model, and CMMSED is used as the closure model, thereby eliminating the need for presumed-PDF modeling. The model is applied to LES of a piloted turbulent jet flame with inhomogeneous inlets, and simulation results are compared with experiments. Comparisons with presumed-PDF methods are performed, and issues regarding resolution and conservation of the CMMSED method are examined. The author would like to acknowledge the support of funding from the Stanford Graduate Fellowship.
NASA Astrophysics Data System (ADS)
Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza
2012-12-01
In this paper, speech-music separation using blind source separation is discussed. The separation algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. In order to do that, estimation of the score function from samples of the observation signals (mixtures of speech and music) is needed. The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. The experimental results of the presented algorithm on speech-music separation, compared with a separation algorithm based on the minimum mean square error estimator, indicate better performance and less processing time.
[Gaussian process regression and its application in near-infrared spectroscopy analysis].
Feng, Ai-Ming; Fang, Li-Min; Lin, Min
2011-06-01
Gaussian process (GP) regression is applied in the present paper as a chemometric method to explore the complicated relationship between near infrared (NIR) spectra and ingredients. After outliers were detected by the Monte Carlo cross validation (MCCV) method and removed from the dataset, different preprocessing methods, such as multiplicative scatter correction (MSC), smoothing and derivatives, were tried for the best performance of the models. Furthermore, uninformative variable elimination (UVE) was introduced as a variable selection technique, and the characteristic wavelengths obtained were further employed as input for modeling. A public dataset with 80 NIR spectra of corn was introduced as an example for evaluating the new algorithm. The optimal models for oil, starch and protein were obtained by the GP regression method. The performance of the final models was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV), root mean square error of prediction (RMSEP) and correlation coefficient (r). The models give good calibration ability with r values above 0.99, and the prediction ability is also satisfactory with r values higher than 0.96. The overall results demonstrate that the GP algorithm is an effective chemometric method and is promising for NIR analysis.
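A hedged sketch of GP regression as such a calibration step, on synthetic spectra rather than the corn dataset; the RBF-plus-white-noise kernel is an assumption, not the paper's stated choice:

```python
# GP regression mapping spectra to an ingredient value; hyperparameters
# are fit by maximum marginal likelihood inside scikit-learn.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n, p = 80, 50
oil = rng.uniform(3, 4.5, n)                              # toy "reference" oil content
basis = np.sin(np.linspace(0, np.pi, p))
X = np.outer(oil, basis) + rng.normal(0, 0.02, (n, p))    # toy NIR spectra

X_tr, X_te, y_tr, y_te = train_test_split(X, oil, test_size=0.25, random_state=0)
gp = GaussianProcessRegressor(RBF(length_scale=1.0) + WhiteKernel(1e-3),
                              normalize_y=True).fit(X_tr, y_tr)
pred = gp.predict(X_te)
rmsep = np.sqrt(np.mean((pred - y_te) ** 2))
print(f"RMSEP = {rmsep:.4f}, r = {np.corrcoef(pred, y_te)[0, 1]:.4f}")
```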
Basalekou, M.; Pappas, C.; Kotseridis, Y.; Tarantilis, P. A.; Kontaxakis, E.
2017-01-01
Color, phenolic content, and chemical age values of red wines made from Cretan grape varieties (Kotsifali, Mandilari) were evaluated over nine months of maturation in different containers for two vintages. The wines differed greatly in their anthocyanin profiles. Mid-IR spectra were also recorded with the use of a Fourier transform infrared spectrophotometer in ZnSe disk mode. Analysis of variance was used to explore each parameter's dependency on time. Determination models were developed for the chemical age indexes using Partial Least Squares (PLS) (TQ Analyst software) considering the spectral region 1830–1500 cm⁻¹. The correlation coefficients (r) for chemical age index i were 0.86 for Kotsifali (Root Mean Square Error of Calibration (RMSEC) = 0.067, Root Mean Square Error of Prediction (RMSEP) = 0.115, and Root Mean Square Error of Cross Validation (RMSECV) = 0.164) and 0.90 for Mandilari (RMSEC = 0.050, RMSEP = 0.040, and RMSECV = 0.089). For chemical age index ii, the correlation coefficients (r) were 0.86 for Kotsifali (RMSEC = 0.044, RMSEP = 0.087, and RMSECV = 0.214) and 0.97 for Mandilari (RMSEC = 0.024, RMSEP = 0.033, and RMSECV = 0.078). The proposed method is simpler, less time consuming, and more economical and does not require chemical reagents.
Application of near-infrared spectroscopy for the rapid quality assessment of Radix Paeoniae Rubra
NASA Astrophysics Data System (ADS)
Zhan, Hao; Fang, Jing; Tang, Liying; Yang, Hongjun; Li, Hua; Wang, Zhuju; Yang, Bin; Wu, Hongwei; Fu, Meihong
2017-08-01
Near-infrared (NIR) spectroscopy with multivariate analysis was used to quantify gallic acid, catechin, albiflorin, and paeoniflorin in Radix Paeoniae Rubra, and the feasibility of classifying samples originating from different areas was investigated. A new high-performance liquid chromatography method was developed and validated to analyze gallic acid, catechin, albiflorin, and paeoniflorin in Radix Paeoniae Rubra as the reference. Partial least squares (PLS), principal component regression (PCR), and stepwise multivariate linear regression (SMLR) were performed to calibrate the regression model. Different data pretreatments such as derivatives (1st and 2nd), multiplicative scatter correction, standard normal variate, Savitzky-Golay filter, and Norris derivative filter were applied to remove systematic errors. The performance of the model was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of prediction (RMSEP), root mean square error of cross-validation (RMSECV), and correlation coefficient (r). The results show that, compared to PCR and SMLR, PLS had lower RMSEC, RMSECV, and RMSEP and higher r for all four analytes. PLS coupled with proper pretreatments showed good performance in both fitting and prediction. Furthermore, the areas of origin of the Radix Paeoniae Rubra samples were partly distinguished by principal component analysis. This study shows that NIR with PLS is a reliable, inexpensive, and rapid tool for the quality assessment of Radix Paeoniae Rubra.
NASA Astrophysics Data System (ADS)
Ransom, K.; Nolan, B. T.; Faunt, C. C.; Bell, A.; Gronberg, J.; Traum, J.; Wheeler, D. C.; Rosecrans, C.; Belitz, K.; Eberts, S.; Harter, T.
2016-12-01
A hybrid, non-linear, machine learning statistical model was developed within a statistical learning framework to predict nitrate contamination of groundwater to depths of approximately 500 m below ground surface in the Central Valley, California. A database of 213 predictor variables representing well characteristics, historical and current field and county scale nitrogen mass balance, historical and current land use, oxidation/reduction conditions, groundwater flow, climate, soil characteristics, depth to groundwater, and groundwater age was assigned to over 6,000 private supply and public supply wells measured previously for nitrate and located throughout the study area. The machine learning method, gradient boosting machine (GBM), was used to screen predictor variables and rank them in order of importance in relation to the groundwater nitrate measurements. The top five most important predictor variables included oxidation/reduction characteristics, historical field scale nitrogen mass balance, climate, and depth to 60-year-old water. Twenty-two variables were selected for the final model, and final model errors for log-transformed hold-out data were an R² of 0.45 and a root mean square error (RMSE) of 1.124. Modeled mean groundwater age was tested separately for error improvement in the model; when included, it decreased model RMSE by 0.5% compared to the same model without age and by 0.20% compared to the model with all 213 variables. 1D and 2D partial plots were examined to determine how variables behave individually and interact in the model. Some variables behaved as expected: log nitrate decreased with increasing probability of anoxic conditions and depth to 60-year-old water, generally decreased with increasing natural land use surrounding wells and increasing mean groundwater age, and generally increased with increased minimum depth to high water table and with increased base flow index value. Other variables exhibited much more erratic or noisy behavior in the model, making them more difficult to interpret but highlighting the usefulness of the non-linear machine learning method. 2D interaction plots show that the probability of anoxic groundwater conditions largely controls estimated nitrate concentrations compared to the other predictors.
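The screening-and-ranking step can be sketched as follows; the library (scikit-learn), hyperparameters, and synthetic data are assumptions, not the study's configuration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 20))                               # stand-in predictor variables
y = 2 * X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=1000)  # stand-in log-nitrate

gbm = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
gbm.fit(X, y)

# Rank predictors by importance, mirroring the screening step described above
order = np.argsort(gbm.feature_importances_)[::-1]
for i in order[:5]:
    print(f"predictor {i}: importance = {gbm.feature_importances_[i]:.3f}")

rmse = np.sqrt(np.mean((y - gbm.predict(X)) ** 2))
print(f"training RMSE = {rmse:.3f}")
```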
Demand forecasting of electricity in Indonesia with limited historical data
NASA Astrophysics Data System (ADS)
Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif
2018-03-01
Demand forecasting of electricity is an important activity for electrical agents to understand future electricity demand. Prediction of electricity demand can be done using time series models. In this paper, the double moving average model, Holt's exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The result shows that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
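The GM(1,1) recursion is compact enough to show in full. This is a sketch of the standard grey-model construction (accumulated generating operation, least-squares fit of the whitening equation, inverse accumulation); the demand series is invented:

```python
import numpy as np

def gm11_forecast(x0, horizon=3):
    """Fit a GM(1,1) grey model to series x0 and forecast `horizon` steps ahead."""
    n = len(x0)
    x1 = np.cumsum(x0)                               # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                    # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # least-squares estimate of (a, b)
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])  # inverse AGO
    return x0_hat[:n], x0_hat[n:]

demand = np.array([120.0, 128.5, 136.2, 145.9, 155.4])   # invented demand series
fitted, forecast = gm11_forecast(demand, horizon=2)
mape = np.mean(np.abs((demand - fitted) / demand)) * 100
print(f"MAPE = {mape:.2f}%, forecast = {np.round(forecast, 1)}")
```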
Skinner, Kenneth D.
2009-01-01
Elevation data in riverine environments can be used in various applications for which different levels of accuracy are required. The Experimental Advanced Airborne Research LiDAR (Light Detection and Ranging) - or EAARL - system was used to obtain topographic and bathymetric data along the lower Boise River, southwestern Idaho, for use in hydraulic and habitat modeling. The EAARL data were post-processed into bare earth and bathymetric raster and point datasets. Concurrently with the EAARL data collection, real-time kinematic global positioning system and total station ground-survey data were collected in three areas within the lower Boise River basin to assess the accuracy of the EAARL elevation data in different hydrogeomorphic settings. The accuracies of the EAARL-derived elevation data, determined in open, flat terrain to provide an optimal vertical comparison surface, had root mean square errors ranging from 0.082 to 0.138 m. Accuracies for bank, floodplain, and in-stream bathymetric data had root mean square errors ranging from 0.090 to 0.583 m. The greater root mean square errors for the latter data are the result of high levels of turbidity in the downstream ground-survey area, dense tree canopy, and horizontal location discrepancies between the EAARL and ground-survey data in steeply sloping areas such as riverbanks. The EAARL point to ground-survey comparisons produced results similar to those for the EAARL raster to ground-survey comparisons, indicating that the interpolation of the EAARL points to rasters did not introduce significant additional error. The mean percent error for the wetted cross-sectional areas of the two upstream ground-survey areas was 1 percent. The mean percent error increases to -18 percent if the downstream ground-survey area is included, reflecting the influence of turbidity in that area.
Buffington, Kevin J.; Dugger, Bruce D.; Thorne, Karen M.; Takekawa, John Y.
2016-01-01
Airborne light detection and ranging (lidar) is a valuable tool for collecting large amounts of elevation data across large areas; however, the limited ability to penetrate dense vegetation with lidar hinders its usefulness for measuring tidal marsh platforms. Methods to correct lidar elevation data are available, but a reliable method that requires limited field work and maintains spatial resolution is lacking. We present a novel method, the Lidar Elevation Adjustment with NDVI (LEAN), to correct lidar digital elevation models (DEMs) with vegetation indices from readily available multispectral airborne imagery (NAIP) and RTK-GPS surveys. Using 17 study sites along the Pacific coast of the U.S., we achieved an average root mean squared error (RMSE) of 0.072 m, with a 40–75% improvement in accuracy from the lidar bare earth DEM. Results from our method compared favorably with results from three other methods (minimum-bin gridding, mean error correction, and vegetation correction factors), and a power analysis applying our extensive RTK-GPS dataset showed that on average 118 points were necessary to calibrate a site-specific correction model for tidal marshes along the Pacific coast. By using available imagery and with minimal field surveys, we showed that lidar-derived DEMs can be adjusted for greater accuracy while maintaining high (1 m) resolution.
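The underlying idea, calibrating a vegetation-driven lidar error model against RTK-GPS truth, can be sketched as a plain linear regression. This is illustrative only; LEAN's actual predictors and model form may differ, and the data below are invented:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 118                                    # calibration points, per the power analysis above
ndvi = rng.uniform(0.2, 0.9, n)            # NDVI at each RTK-GPS point
lidar_z = rng.normal(2.0, 0.3, n)          # lidar DEM elevation (m)
true_z = lidar_z - (0.1 + 0.4 * ndvi) + rng.normal(0, 0.03, n)  # denser vegetation, larger bias

# Fit the lidar error as a function of NDVI and lidar elevation, then subtract it
err = lidar_z - true_z
features = np.column_stack([ndvi, lidar_z])
model = LinearRegression().fit(features, err)
corrected = lidar_z - model.predict(features)
rmse = np.sqrt(np.mean((corrected - true_z) ** 2))
print(f"corrected RMSE = {rmse:.3f} m")
```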
Diffuse-flow conceptualization and simulation of the Edwards aquifer, San Antonio region, Texas
Lindgren, R.J.
2006-01-01
A numerical ground-water-flow model (hereinafter, the conduit-flow Edwards aquifer model) of the karstic Edwards aquifer in south-central Texas was developed for a previous study on the basis of a conceptualization emphasizing conduit development and conduit flow, and included simulating conduits as one-cell-wide, continuously connected features. Uncertainties regarding the degree to which conduits pervade the Edwards aquifer and influence ground-water flow, as well as other uncertainties inherent in simulating conduits, raised the question of whether a model based on the conduit-flow conceptualization was the optimum model for the Edwards aquifer. Accordingly, a model with an alternative hydraulic conductivity distribution without conduits was developed in a study conducted during 2004-05 by the U.S. Geological Survey, in cooperation with the San Antonio Water System. The hydraulic conductivity distribution for the modified Edwards aquifer model (hereinafter, the diffuse-flow Edwards aquifer model), based primarily on a conceptualization in which flow in the aquifer predominantly is through a network of numerous small fractures and openings, includes 38 zones, with hydraulic conductivities ranging from 3 to 50,000 feet per day. Revision of model input data for the diffuse-flow Edwards aquifer model was limited to changes in the simulated hydraulic conductivity distribution. The root-mean-square error for 144 target wells for the calibrated steady-state simulation for the diffuse-flow Edwards aquifer model is 20.9 feet. This error represents about 3 percent of the total head difference across the model area. The simulated springflows for Comal and San Marcos Springs for the calibrated steady-state simulation were within 2.4 and 15 percent of the median springflows for the two springs, respectively. The transient calibration period for the diffuse-flow Edwards aquifer model was 1947-2000, with 648 monthly stress periods, the same as for the conduit-flow Edwards aquifer model. The root-mean-square error for a period of drought (May-November 1956) for the calibrated transient simulation for 171 target wells is 33.4 feet, which represents about 5 percent of the total head difference across the model area. The root-mean-square error for a period of above-normal rainfall (November 1974-July 1975) for the calibrated transient simulation for 169 target wells is 25.8 feet, which represents about 4 percent of the total head difference across the model area. The root-mean-square error ranged from 6.3 to 30.4 feet in 12 target wells with long-term water-level measurements for varying periods during 1947-2000 for the calibrated transient simulation for the diffuse-flow Edwards aquifer model, and these errors represent 5.0 to 31.3 percent of the range in water-level fluctuations of each of those wells. The root-mean-square errors for the five major springs in the San Antonio segment of the aquifer for the calibrated transient simulation, as a percentage of the range of discharge fluctuations measured at the springs, varied from 7.2 percent for San Marcos Springs and 8.1 percent for Comal Springs to 28.8 percent for Leona Springs. The root-mean-square errors for hydraulic heads for the conduit-flow Edwards aquifer model are 27, 76, and 30 percent greater than those for the diffuse-flow Edwards aquifer model for the steady-state, drought, and above-normal rainfall synoptic time periods, respectively. 
The goodness-of-fit between measured and simulated springflows is similar for Comal, San Marcos, and Leona Springs for the diffuse-flow Edwards aquifer model and the conduit-flow Edwards aquifer model. The root-mean-square errors for Comal and Leona Springs were 15.6 and 21.3 percent less, respectively, whereas the root-mean-square error for San Marcos Springs was 3.3 percent greater for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. The root-mean-square errors for San Antonio and San Pedro Springs were appreciably greater, 80.2 and 51.0 percent, respectively, for the diffuse-flow Edwards aquifer model. The simulated water budgets for the diffuse-flow Edwards aquifer model are similar to those for the conduit-flow Edwards aquifer model. Differences in percentage of total sources or discharges for a budget component are 2.0 percent or less for all budget components for the steady-state and transient simulations. The largest difference in terms of the magnitude of water budget components for the transient simulation for 1956 was a decrease of about 10,730 acre-feet per year (about 2 percent) in springflow for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. This decrease in springflow (a water budget discharge) was largely offset by the decreased net loss of water from storage (a water budget source) of about 10,500 acre-feet per year.
Computational intelligence models to predict porosity of tablets using minimum features
Khalid, Mohammad Hassan; Kazemi, Pezhman; Perez-Gandarillas, Lucia; Michrafy, Abderrahim; Szlęk, Jakub; Jachowicz, Renata; Mendyk, Aleksander
2017-01-01
The effects of different formulations and manufacturing process conditions on the physical properties of a solid dosage form are of importance to the pharmaceutical industry. It is vital to have in-depth understanding of the material properties and governing parameters of its processes in response to different formulations. Understanding the mentioned aspects will allow tighter control of the process, leading to implementation of quality-by-design (QbD) practices. Computational intelligence (CI) offers an opportunity to create empirical models that can be used to describe the system and predict future outcomes in silico. CI models can help explore the behavior of input parameters, unlocking deeper understanding of the system. This research endeavor presents CI models to predict the porosity of tablets created by roll-compacted binary mixtures, which were milled and compacted under systematically varying conditions. CI models were created using tree-based methods, artificial neural networks (ANNs), and symbolic regression trained on an experimental data set and screened using root-mean-square error (RMSE) scores. The experimental data were composed of proportion of microcrystalline cellulose (MCC) (in percentage), granule size fraction (in micrometers), and die compaction force (in kilonewtons) as inputs and porosity as an output. The resulting models show impressive generalization ability, with ANNs (normalized root-mean-square error [NRMSE] =1%) and symbolic regression (NRMSE =4%) as the best-performing methods, also exhibiting reliable predictive behavior when presented with a challenging external validation data set (best achieved symbolic regression: NRMSE =3%). Symbolic regression demonstrates the transition from the black box modeling paradigm to more transparent predictive models. Predictive performance and feature selection behavior of CI models hint at the most important variables within this factor space. PMID:28138223
NASA Astrophysics Data System (ADS)
Ying, Yibin; Liu, Yande; Tao, Yang
2005-09-01
This research evaluated the feasibility of using Fourier-transform near-infrared (FT-NIR) spectroscopy to quantify the soluble-solids content (SSC) and the available acidity (VA) in intact apples. Partial least-squares calibration models, obtained from several preprocessing techniques (smoothing, derivative, etc.) in several wave-number ranges, were compared. The best models were obtained with a high coefficient of determination (r²) of 0.940 for the SSC and a moderate r² of 0.801 for the VA, root-mean-square errors of prediction of 0.272% and 0.053%, and root-mean-square errors of calibration of 0.261% and 0.046%, respectively. The results indicate that FT-NIR spectroscopy yields good predictions of the SSC and also showed the feasibility of using it to predict the VA of apples.
Maritime Adaptive Optics Beam Control
2010-09-01
Acronym list (excerpt): Liquid Crystal; LMS, Least Mean Square; MIMO, Multiple-Input Multiple-Output; MMDM, Micromachined Membrane Deformable Mirror; MSE, Mean Square Error. Text excerpts: "... determine how the beam is distorted, a control computer to calculate the correction to be applied, and a corrective element, usually a deformable mirror ..."; "... during this research, an overview of the system modification is provided here. Using additional mirrors and reflecting the beam to and from ..."
Geodesy by radio interferometry: Water vapor radiometry for estimation of the wet delay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elgered, G.; Davis, J.L.; Herring, T.A.
1991-04-10
An important source of error in very-long-baseline interferometry (VLBI) estimates of baseline length is unmodeled variations of the refractivity of the neutral atmosphere along the propagation path of the radio signals. The authors present and discuss the method of using data from a water vapor radiometer (WVR) to correct for the propagation delay caused by atmospheric water vapor, the major cause of these variations. Data from different WVRs are compared with estimated propagation delays obtained by Kalman filtering of the VLBI data themselves. The consequences of using either WVR data or Kalman filtering to correct for atmospheric propagation delay at the Onsala VLBI site are investigated by studying the repeatability of estimated baseline lengths from Onsala to several other sites. The lengths of the baselines range from 919 to 7,941 km. The repeatability obtained for baseline length estimates shows that the methods of water vapor radiometry and Kalman filtering offer comparable accuracies when applied to VLBI observations obtained in the climate of the Swedish west coast. The use of WVR data yielded a 13% smaller weighted-root-mean-square (WRMS) scatter of the baseline length estimates compared to the use of a Kalman filter. It is also clear that the best minimum elevation angle for VLBI observations depends on the accuracy of the determinations of the total propagation delay to be used, since the error in this delay increases with increasing air mass. For use of WVR data along with accurate determinations of total surface pressure, the best minimum is about 20°; for use of a model for the wet delay based on the humidity and temperature at the ground, the best minimum is about 35°.
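The weighted root-mean-square (WRMS) scatter used to compare repeatability can be computed as below; weighting by inverse-variance formal errors is an assumption consistent with common practice, and the numbers are invented:

```python
import numpy as np

def wrms(values, sigmas):
    """Weighted RMS scatter of baseline-length estimates about their weighted mean."""
    w = 1.0 / np.asarray(sigmas) ** 2
    mean = np.sum(w * values) / np.sum(w)
    return np.sqrt(np.sum(w * (values - mean) ** 2) / np.sum(w))

lengths = 919_000 + np.array([2.1, 5.3, -1.8, 0.9]) / 1000.0  # invented estimates (m)
sigmas = np.array([1.5, 2.0, 1.2, 1.8]) / 1000.0              # formal errors (m)
print(f"WRMS scatter = {wrms(lengths, sigmas) * 1000:.2f} mm")
```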
Comparison of artificial intelligence techniques for prediction of soil temperatures in Turkey
NASA Astrophysics Data System (ADS)
Citakoglu, Hatice
2017-10-01
Soil temperature is a meteorological variable directly affecting the formation and development of plants of all kinds. Soil temperatures are usually estimated with various models including artificial neural networks (ANNs), the adaptive neuro-fuzzy inference system (ANFIS), and multiple linear regression (MLR) models. Soil temperatures along with other climate data are recorded by the Turkish State Meteorological Service (MGM) at specific locations all over Turkey. Soil temperatures are commonly measured at 5-, 10-, 20-, 50-, and 100-cm depths below the soil surface. In this study, the soil temperature data in monthly units measured at 261 stations in Turkey having records of at least 20 years were used to develop relevant models. Different input combinations were tested in the ANN and ANFIS models to estimate soil temperatures, and the best combination of significant explanatory variables turns out to be monthly minimum and maximum air temperatures, calendar month number, depth of soil, and monthly precipitation. Next, three standard error terms (mean absolute error (MAE, °C), root mean squared error (RMSE, °C), and determination coefficient (R²)) were employed to check the reliability of the test data results obtained through the ANN, ANFIS, and MLR models. ANFIS (RMSE = 1.99; MAE = 1.09; R² = 0.98) is found to outperform both ANN and MLR (RMSE = 5.80, 8.89; MAE = 1.89, 2.36; R² = 0.93, 0.91) in estimating soil temperature in Turkey.
Validation of Core Temperature Estimation Algorithm
2016-01-29
Figure-caption excerpts: (a) plot of observed versus estimated core temperature (and, separately, of estimated PSI) with the line of identity (dashed), the least-squares regression line (solid), and the line equation in the top left corner; (b) Bland-Altman plot for comparison. The root mean squared error (RMSE) was also computed, as given by Equation 2.
Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C
2011-01-01
Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
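A batch-mode sketch of MEE training for a linear decoder is given below: it performs gradient ascent on the quadratic information potential of the errors, which is equivalent to minimizing a kernel estimate of the error entropy. The kernel width, learning rate, and data are invented, and the paper's FPGA pipeline is not reproduced; note also that MEE is insensitive to a constant error offset, so a bias term would normally be fixed separately:

```python
import numpy as np

def mee_train(X, d, sigma=1.0, lr=0.2, epochs=500):
    """Fit linear weights w (d ~ X @ w) by gradient ascent on the
    quadratic information potential of the errors (batch MEE)."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(epochs):
        e = d - X @ w
        de = e[:, None] - e[None, :]              # pairwise error differences
        k = np.exp(-de**2 / (2.0 * sigma**2))     # Gaussian kernel on differences
        dx = X[:, None, :] - X[None, :, :]        # pairwise input differences
        grad = ((k * de)[:, :, None] * dx).sum(axis=(0, 1)) / (n**2 * sigma**2)
        w += lr * grad                            # ascend the information potential
    return w

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 8))                     # stand-in features (e.g., spike counts)
w_true = rng.normal(scale=0.3, size=8)
d = X @ w_true + 0.05 * rng.standard_t(df=3, size=200)  # heavy-tailed, non-Gaussian noise
w = mee_train(X, d)
print("max |w - w_true| =", np.abs(w - w_true).max())
```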
Theoretical precision analysis of RFM localization of satellite remote sensing imagery
NASA Astrophysics Data System (ADS)
Zhang, Jianqing; Xv, Biao
2009-11-01
The traditional method of assessing the precision of the Rational Function Model (RFM) uses a large number of check points, computing the mean square error by comparing calculated coordinates with known coordinates. This approach is rooted in probability theory: the mean square error is estimated statistically from a large number of samples, and the estimate can be taken to approach the true value when the sample is large enough. This paper instead approaches the problem from the perspective of survey adjustment, taking the law of propagation of error as the theoretical basis, and derives the theoretical precision of RFM localization. Using SPOT5 three-line-array imagery as experimental data, the results of the traditional method and of the method described in this paper are compared; the comparison confirms that the traditional method is feasible and answers the question of its theoretical precision from the survey-adjustment point of view.
Theoretical and experimental studies of error in square-law detector circuits
NASA Technical Reports Server (NTRS)
Stanley, W. D.; Hearn, C. P.; Williams, J. B.
1984-01-01
Square-law detector circuits were investigated to determine errors from the ideal input/output characteristic function. The nonlinear circuit response is analyzed by a power series expansion containing terms through the fourth degree, from which the significant deviation from square law can be predicted. Both fixed bias current and flexible bias current configurations are considered. The latter case corresponds to the situation where the mean current can change with the application of a signal. Experimental investigations of the circuit arrangements are described. Agreement between the analytical models and the experimental results is established. Factors which contribute to differences under certain conditions are outlined.
Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui
2017-06-13
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust with regard to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
Wind adaptive modeling of transmission lines using minimum description length
NASA Astrophysics Data System (ADS)
Jaw, Yoonseok; Sohn, Gunho
2017-03-01
Transmission lines are moving objects whose positions are dynamically affected by wind-induced conductor motion while they are being acquired by airborne laser scanners. This wind effect results in a noisy distribution of laser points, which often hinders accurate representation of transmission lines and thus leads to various types of modeling errors. This paper presents a new method for complete 3D transmission line model reconstruction in the framework of inner and across span analysis. A highlight of the proposed method is that it is capable of indirectly estimating the noise scales that corrupt the quality of laser observations under different wind speeds, through a linear regression analysis. In the inner span analysis, individual transmission line models of each span are evaluated based on the Minimum Description Length theory, and erroneous transmission line segments are subsequently replaced by precise transmission line models with a wind-adaptive noise scale estimated. In the subsequent step of across span analysis, detecting the precise start and end positions of the transmission line models, known as the Points of Attachment, is the key issue for correcting partial modeling errors, as well as for refining transmission line models. Finally, geometric and topological completion of transmission line models is achieved over the entire network. A performance evaluation was conducted over 138.5 km of corridor data. Under modest wind conditions, the results demonstrate that the proposed method can raise the completeness of the non-wind-adaptive initial models from an average success rate of 48% to between 85% and 99.5%, with root-mean-square positional accuracies of 9.55 cm for the transmission line models and 28 cm for the Points of Attachment.
Ebtehaj, Isa; Bonakdari, Hossein
2014-01-01
The existence of sediments in wastewater greatly affects the performance of the sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. The article reviews the performance of the genetic algorithm (GA) and the imperialist competitive algorithm (ICA) in minimizing the target function (mean square error of observed and predicted Froude number). To study the impact of bed load transport parameters, using four non-dimensional groups, six different models have been presented. Moreover, the roulette wheel selection method is used to select the parents. The ICA, with root mean square error (RMSE) = 0.007 and mean absolute percentage error (MAPE) = 3.5%, shows better results than the GA (RMSE = 0.007, MAPE = 5.6%) for the selected model. For all six models, the ICA returns better results than the GA. Also, the results of these two algorithms were compared with multi-layer perceptron and existing equations.
Performance metrics for the assessment of satellite data products: an ocean color case study
Seegers, Bridget N.; Stumpf, Richard P.; Schaeffer, Blake A.; Loftin, Keith A.; Werdell, P. Jeremy
2018-01-01
Performance assessment of ocean color satellite data has generally relied on statistical metrics chosen for their common usage, and the rationale for selecting certain metrics is infrequently explained. Commonly reported statistics based on mean squared errors, such as the coefficient of determination (r²), root mean square error, and regression slopes, are most appropriate for Gaussian distributions without outliers and, therefore, are often not ideal for ocean color algorithm performance assessment, which is often limited by sample availability. In contrast, metrics based on simple deviations, such as bias and mean absolute error, as well as pair-wise comparisons, often provide more robust and straightforward quantities for evaluating ocean color algorithms with non-Gaussian distributions and outliers. This study uses a SeaWiFS chlorophyll-a validation data set to demonstrate a framework for satellite data product assessment and recommends a multi-metric and user-dependent approach that can be applied within science, modeling, and resource management communities. PMID:29609296
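A sketch of the deviation-based metrics alongside RMSE follows; computing them in log10 space and back-transforming to multiplicative factors is a common convention in ocean-color validation, assumed here rather than taken from the paper, and the data are invented:

```python
import numpy as np

obs = np.array([0.12, 0.35, 0.80, 1.60, 4.20])    # in situ chlorophyll-a (mg m^-3), invented
mod = np.array([0.10, 0.42, 0.65, 2.10, 3.50])    # satellite estimates, invented

lo, lm = np.log10(obs), np.log10(mod)
bias = 10 ** np.mean(lm - lo)                     # multiplicative bias
mae  = 10 ** np.mean(np.abs(lm - lo))             # multiplicative mean absolute error
rmse = np.sqrt(np.mean((lm - lo) ** 2))           # RMSE in log10 space
print(f"bias = {bias:.2f}x, MAE = {mae:.2f}x, log-RMSE = {rmse:.3f}")
```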
Nonparametric probability density estimation by optimization theoretic techniques
NASA Technical Reports Server (NTRS)
Scott, D. W.
1976-01-01
Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
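Choosing the kernel scaling factor from the sample alone is commonly automated today by cross-validated likelihood; the sketch below uses scikit-learn and a Gaussian kernel as assumptions, and is not the paper's algorithm:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(5)
sample = rng.normal(0.0, 1.0, size=(200, 1))      # random sample from an unknown density

# Pick the bandwidth (scaling factor) that maximizes cross-validated log-likelihood
grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                    {"bandwidth": np.linspace(0.05, 1.0, 20)}, cv=5)
grid.fit(sample)
print("selected bandwidth:", grid.best_params_["bandwidth"])
```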
Discrete-time state estimation for stochastic polynomial systems over polynomial observations
NASA Astrophysics Data System (ADS)
Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.
2018-07-01
This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations corrupted by additive white Gaussian noise. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show the effectiveness of the proposed filter compared to the extended Kalman filter.
NASA Astrophysics Data System (ADS)
Jiang, L.
2017-12-01
Climate change is considered to be one of the greatest environmental threats. Global climate models (GCMs) are the primary tool used for studying climate change. However, GCMs are limited because of their coarse spatial resolution and inability to resolve important sub-grid scale features such as terrain and clouds. Statistical downscaling methods can be used to downscale large-scale variables to the local scale. In this study, we assess the applicability of the Statistical Downscaling Model (SDSM) in downscaling the outputs from the Beijing Normal University Earth System Model (BNU-ESM). The study focuses on the Loess Plateau, China, and the variables for downscaling include daily mean temperature (TMEAN), maximum temperature (TMAX), and minimum temperature (TMIN). The results show that SDSM performs well for these three climatic variables on the Loess Plateau. After downscaling, the root mean square errors of TMEAN, TMAX, and TMIN for BNU-ESM were reduced by 70.9%, 75.1%, and 67.2%, respectively. All the rates of change in TMEAN, TMAX, and TMIN during the 21st century decreased after SDSM downscaling. We also show that SDSM can effectively reduce uncertainty compared with the raw model outputs. TMEAN uncertainty was reduced by 27.1%, 26.8%, and 16.3% for the future scenarios of RCP 2.6, RCP 4.5, and RCP 8.5, respectively. The corresponding reductions in uncertainty were 23.6%, 30.7%, and 18.7% for TMAX, and 37.6%, 31.8%, and 23.2% for TMIN.
NASA Astrophysics Data System (ADS)
Yan, Hong; Song, Xiangzhong; Tian, Kuangda; Chen, Yilin; Xiong, Yanmei; Min, Shungeng
2018-02-01
A novel method, mid-infrared (MIR) spectroscopy, which enables the determination of Chlorantraniliprole in Abamectin within minutes, is proposed. We further evaluate the prediction ability of four wavelength selection methods: the bootstrapping soft shrinkage approach (BOSS), Monte Carlo uninformative variable elimination (MCUVE), genetic algorithm partial least squares (GA-PLS), and competitive adaptive reweighted sampling (CARS). The results showed that the BOSS method obtained the lowest root mean squared error of cross validation (RMSECV) (0.0245) and root mean squared error of prediction (RMSEP) (0.0271), as well as the highest coefficient of determination of cross-validation (Q²cv) (0.9998) and coefficient of determination of the test set (Q²test) (0.9989), which demonstrates that mid-infrared spectroscopy can be used to detect Chlorantraniliprole in Abamectin conveniently. Meanwhile, a suitable wavelength selection method (BOSS) is essential to conducting a component spectral analysis.
Synthesis of hover autopilots for rotary-wing VTOL aircraft
NASA Technical Reports Server (NTRS)
Hall, W. E.; Bryson, A. E., Jr.
1972-01-01
The practical situation is considered where imperfect information on only a few rotor and fuselage state variables is available. Filters are designed to estimate all the state variables from noisy measurements of fuselage pitch/roll angles and from noisy measurements of both fuselage and rotor pitch/roll angles. The mean square response of the vehicle to a very gusty, random wind is computed using various filter/controllers and is found to be quite satisfactory although, of course, not so good as when one has perfect information (idealized case). The second part of the report considers precision hover over a point on the ground. A vehicle model without rotor dynamics is used and feedback signals in position and integral of position error are added. The mean square response of the vehicle to a very gusty, random wind is computed, assuming perfect information feedback, and is found to be excellent. The integral error feedback gives zero position error for a steady wind, and smaller position error for a random wind.
Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V
2018-03-01
Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) was used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and corresponding AUC0-t (ie, 72 hours) from the fasting group comprised a training dataset to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with AUC0-t of saroglitazar. Only models with regression coefficients (R²) > 0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlations between predicted and observed AUC0-t of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time points models achieved R² > 0.90. Among the various 3-concentration-time points models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (predefined criterion) and correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC0-t prediction of saroglitazar. The same models, when applied to the AUC0-t prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error <30%, and correlation (r) was at least 0.9339 in the same pool of healthy subjects. A 3-concentration-time points limited sampling model predicts the exposure of saroglitazar (ie, AUC0-t) within the predefined acceptable bias and imprecision limits. The same model was also used to predict AUC0-∞. The same limited sampling model was found to predict the exposure of saroglitazar sulfoxide within the predefined criteria. This model can find utility during late-phase clinical development of saroglitazar in the patient population. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.
A Comparison of Normal and Elliptical Estimation Methods in Structural Equation Models.
ERIC Educational Resources Information Center
Schumacker, Randall E.; Cheevatanarak, Suchittra
Monte Carlo simulation compared chi-square statistics, parameter estimates, and root mean square error of approximation values using normal and elliptical estimation methods. Three research conditions were imposed on the simulated data: sample size, population contamination percent, and kurtosis. A Bentler-Weeks structural model established the…
An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models
ERIC Educational Resources Information Center
Prindle, John J.; McArdle, John J.
2012-01-01
This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…
Durakli Velioglu, Serap; Ercioglu, Elif; Boyaci, Ismail Hakki
2017-05-01
This research paper describes the potential of synchronous fluorescence (SF) spectroscopy for authentication of buffalo milk, a favourable raw material in the production of some premium dairy products. Buffalo milk is subjected to fraudulent activities like many other high-priced foodstuffs. The current methods widely used for the detection of adulteration of buffalo milk have various disadvantages making them unattractive for routine analysis. Thus, the aim of the present study was to assess the potential of SF spectroscopy in combination with multivariate methods for rapid discrimination between buffalo and cow milk and detection of the adulteration of buffalo milk with cow milk. SF spectra of cow and buffalo milk samples were recorded in the 400-550 nm excitation range with Δλ of 10-100 nm, in steps of 10 nm. The data obtained for Δλ = 10 nm were utilised to classify the samples using principal component analysis (PCA) and to detect the adulteration level of buffalo milk with cow milk using partial least squares (PLS) methods. Successful discrimination of samples and detection of adulteration of buffalo milk, with a limit of detection (LOD) of 6%, were achieved with models having root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV), and root mean square error of prediction (RMSEP) values of 2%, 7%, and 4%, respectively. The results reveal the potential of SF spectroscopy for rapid authentication of buffalo milk.
The impact of surface area, volume, curvature, and Lennard-Jones potential to solvation modeling.
Nguyen, Duc D; Wei, Guo-Wei
2017-01-05
This article explores the impact of surface area, volume, curvature, and Lennard-Jones (LJ) potential on solvation free energy predictions. Rigidity surfaces are utilized to generate robust analytical expressions for maximum, minimum, mean, and Gaussian curvatures of solvent-solute interfaces, and to define a generalized Poisson-Boltzmann (GPB) equation with a smooth dielectric profile. Extensive correlation analysis is performed to examine the linear dependence of surface area, surface-enclosed volume, maximum curvature, minimum curvature, mean curvature, and Gaussian curvature for solvation modeling. It is found that surface area and surface-enclosed volumes are highly correlated to each other, and poorly correlated to the various curvatures, for six test sets of molecules. Different curvatures are weakly correlated to each other across the six test sets of molecules, but are strongly correlated to each other within each test set of molecules. Based on the correlation analysis, we construct twenty-six nontrivial nonpolar solvation models. Our numerical results reveal that the LJ potential plays a vital role in nonpolar solvation modeling, especially for molecules involving strong van der Waals interactions. It is found that curvatures are at least as important as surface area or surface-enclosed volume in nonpolar solvation modeling. In conjunction with the GPB model, various curvature-based nonpolar solvation models are shown to offer some of the best solvation free energy predictions for a wide range of test sets. For example, root mean square errors from a model constituting surface area, volume, mean curvature, and LJ potential are less than 0.42 kcal/mol for all test sets. © 2016 Wiley Periodicals, Inc.
[Application of elastic registration based on Demons algorithm in cone beam CT].
Pang, Haowen; Sun, Xiaoyang
2014-02-01
We applied the Demons and accelerated Demons elastic registration algorithms to radiotherapy cone beam CT (CBCT) images, providing software support for real-time assessment of organ changes during radiotherapy. We wrote a 3D CBCT elastic registration program in Matlab and tested it on 3D CBCT images of two patients with cervical cancer. With the classic Demons algorithm, the mean square error (MSE) decreased by 59.7% and the correlation coefficient (CC) increased by 11.0%; with the accelerated Demons algorithm, the MSE decreased by 40.1% and the CC increased by 7.2%. Both variants of the Demons algorithm produced the desired results, but the small differences suggest limited precision, and the total registration time was somewhat long; both accuracy and running time need further improvement.
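The paper's implementation was written in Matlab; the sketch below shows the same classic-Demons workflow (estimate a dense displacement field, warp the moving image, compare MSE before and after) using SimpleITK, whose availability and these parameter values are assumptions, with a synthetic volume standing in for patient CBCT data:

```python
import numpy as np
import SimpleITK as sitk

# Synthetic stand-ins for planning and daily CBCT volumes (a shifted blob)
arr = np.zeros((32, 64, 64), dtype=np.float32)
arr[12:20, 24:40, 24:40] = 1.0
fixed = sitk.GetImageFromArray(arr)
moving = sitk.GetImageFromArray(np.roll(arr, 3, axis=1))   # simulated organ shift

# Classic Demons: iteratively estimate a dense displacement field
demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(50)
demons.SetStandardDeviations(1.5)      # Gaussian regularization of the field
displacement = demons.Execute(fixed, moving)

# Warp the moving image with the estimated field and compare MSE
warped = sitk.Resample(moving, fixed, sitk.DisplacementFieldTransform(displacement),
                       sitk.sitkLinear, 0.0, moving.GetPixelID())
f, m, w = (sitk.GetArrayFromImage(i) for i in (fixed, moving, warped))
print("MSE before:", float(np.mean((f - m) ** 2)), "after:", float(np.mean((f - w) ** 2)))
```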
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form cx^k(1-x)^m is considered, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
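The linearity result in the beta-binomial case can be verified directly from conjugacy; the following is a standard textbook derivation consistent with the abstract's claim, not an excerpt from the paper. With n jumps observed, x of them successes, and a Beta(α, β) prior on the rate p, the posterior is Beta(α + x, β + n − x), so the MMSE estimate (the posterior mean) is

```latex
\hat{p}_{\mathrm{MMSE}} = \mathbb{E}[p \mid x] = \frac{\alpha + x}{\alpha + \beta + n},
\qquad x \mid p \sim \mathrm{Binomial}(n, p), \quad p \sim \mathrm{Beta}(\alpha, \beta),
```

which is affine in the observation x; hence the unconstrained MMSE estimator coincides with the best linear estimator, as the abstract states.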
A Practical, Hardware Friendly MMSE Detector for MIMO-OFDM-Based Systems
NASA Astrophysics Data System (ADS)
Kim, Hun Seok; Zhu, Weijun; Bhatia, Jatin; Mohammed, Karim; Shah, Anish; Daneshrad, Babak
2008-12-01
Design and implementation of a highly optimized MIMO (multiple-input multiple-output) detector requires cooptimization of the algorithm with the underlying hardware architecture. Special attention must be paid to application requirements such as throughput, latency, and resource constraints. In this work, we focus on a highly optimized, matrix-inversion-free MMSE (minimum mean square error) MIMO detector implementation. The work has resulted in a real-time field-programmable gate array (FPGA) based implementation on a Xilinx Virtex-2 6000 using only 9003 logic slices, 66 multipliers, and 24 Block RAMs (less than 33% of the overall resources of this part). The design delivers over 420 Mbps sustained throughput with a small 2.77-microsecond latency. The designed linear MMSE MIMO detector is capable of complying with the proposed IEEE 802.11n standard.
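For reference, the baseline linear MMSE detector that such designs implement can be written in a few lines; the dimensions, constellation, and noise level below are invented, and the paper's contribution is precisely to avoid the explicit matrix inversion, which this sketch only sidesteps by using a linear solve:

```python
import numpy as np

rng = np.random.default_rng(6)
nt, nr, sigma2 = 4, 4, 0.1                      # Tx/Rx antennas and noise variance (invented)
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=nt) / np.sqrt(2)  # QPSK symbols
y = H @ x + np.sqrt(sigma2 / 2) * (rng.normal(size=nr) + 1j * rng.normal(size=nr))

# Linear MMSE estimate: x_hat = (H^H H + sigma^2 I)^-1 H^H y, via a solve instead of inv()
G = H.conj().T @ H + sigma2 * np.eye(nt)
x_hat = np.linalg.solve(G, H.conj().T @ y)
print("per-symbol error:", np.abs(x_hat - x))
```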
NASA Astrophysics Data System (ADS)
Zhao, Liang; Ge, Jian-Hua
2012-12-01
Single-carrier (SC) transmission with frequency-domain equalization (FDE) is today recognized as an attractive alternative to orthogonal frequency-division multiplexing (OFDM) for communication applications with inter-symbol interference (ISI) caused by multi-path propagation, especially in shallow water channels. In this paper, we investigate an iterative receiver based on a minimum mean square error (MMSE) decision feedback equalizer (DFE) with symbol-rate and fractional-rate sampling in the frequency domain (FD) and a serially concatenated trellis coded modulation (SCTCM) decoder. Based on sound speed profiles (SSP) measured in the lake and the finite-element ray tracing (Bellhop) method, a shallow water channel is constructed to evaluate the performance of the proposed iterative receiver. Performance results show that the proposed iterative receiver can significantly improve performance and achieve better data transmission than FD linear and adaptive decision feedback equalizers, especially when adopting fractional-rate sampling.
Model of human dynamic orientation. Ph.D. Thesis; [associated with vestibular stimuli
NASA Technical Reports Server (NTRS)
Ormsby, C. C.
1974-01-01
The dynamics associated with the perception of orientation were modelled for near-threshold and suprathreshold vestibular stimuli. A model of the information available at the peripheral sensors which was consistent with available neurophysiologic data was developed and served as the basis for the models of the perceptual responses. The central processor was assumed to utilize the information from the peripheral sensors in an optimal (minimum mean square error) manner to produce the perceptual estimates of dynamic orientation. This assumption, coupled with the models of sensory information, determined the form of the model for the central processor. The problem of integrating information from the semi-circular canals and the otoliths to predict the perceptual response to motions which stimulated both organs was studied. A model was developed which was shown to be useful in predicting the perceptual response to multi-sensory stimuli.
Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform
NASA Astrophysics Data System (ADS)
Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li
2017-12-01
To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT chaotic signal denoising method is proposed. First, a new SWT threshold function is constructed based on Stein unbiased risk estimation; it is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and measured sunspot signals show that the proposed method filters the noise of the chaotic signal well, and the intrinsic chaotic characteristics of the original signal are recovered very well. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for the chaotic signal.
Separation of man-made and natural patterns in high-altitude imagery of agricultural areas
NASA Technical Reports Server (NTRS)
Samulon, A. S.
1975-01-01
A nonstationary linear digital filter is designed and implemented which extracts the natural features from high-altitude imagery of agricultural areas. Essentially, from an original image a new image is created which displays information related to soil properties, drainage patterns, crop disease, and other natural phenomena, and contains no information about crop type or row spacing. A model is developed to express the recorded brightness in a narrow-band image in terms of man-made and natural contributions and which describes statistically the spatial properties of each. The form of the minimum mean-square error linear filter for estimation of the natural component of the scene is derived and a suboptimal filter is implemented. Nonstationarity of the two-dimensional random processes contained in the model requires a unique technique for deriving the optimum filter. Finally, the filter depends on knowledge of field boundaries. An algorithm for boundary location is proposed, discussed, and implemented.
Eash, David A.; Barnes, Kimberlee K.
2017-01-01
A statewide study was conducted to develop regression equations for estimating six selected low-flow frequency statistics and harmonic mean flows for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include: the annual 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years, the annual 30-day mean low flow for a recurrence interval of 5 years, and the seasonal (October 1 through December 31) 1- and 7-day mean low flows for a recurrence interval of 10 years. Estimation equations also were developed for the harmonic-mean-flow statistic. Estimates of these seven selected statistics are provided for 208 U.S. Geological Survey continuous-record streamgages using data through September 30, 2006. The study area comprises streamgages located within Iowa and 50 miles beyond the State's borders. Because trend analyses indicated statistically significant positive trends when considering the entire period of record for the majority of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. The median number of years of record used to compute each of these seven selected statistics was 35. Geographic information system software was used to measure 54 selected basin characteristics for each streamgage. Following the removal of two streamgages from the initial data set, data collected for 206 streamgages were compiled to investigate three approaches for regionalization of the seven selected statistics. Regionalization, a process using statistical regression analysis, provides a relation for efficiently transferring information from a group of streamgages in a region to ungaged sites in the region. The three regionalization approaches tested included statewide, regional, and region-of-influence regressions. For the regional regression, the study area was divided into three low-flow regions on the basis of hydrologic characteristics, landform regions, and soil regions. A comparison of root mean square errors and average standard errors of prediction for the statewide, regional, and region-of-influence regressions determined that the regional regression provided the best estimates of the seven selected statistics at ungaged sites in Iowa. Because a significant number of streams in Iowa reach zero flow as their minimum flow during low-flow years, four different types of regression analyses were used: left-censored, logistic, generalized-least-squares, and weighted-least-squares regression. A total of 192 streamgages were included in the development of 27 regression equations for the three low-flow regions. For the northeast and northwest regions, a censoring threshold was used to develop 12 left-censored regression equations to estimate the 6 low-flow frequency statistics for each region. For the southern region a total of 12 regression equations were developed; 6 logistic regression equations were developed to estimate the probability of zero flow for the 6 low-flow frequency statistics and 6 generalized least-squares regression equations were developed to estimate the 6 low-flow frequency statistics, if nonzero flow is estimated first by use of the logistic equations. A weighted-least-squares regression equation was developed for each region to estimate the harmonic-mean-flow statistic. 
Average standard errors of estimate for the left-censored equations for the northeast region range from 64.7 to 88.1 percent and for the northwest region range from 85.8 to 111.8 percent. Misclassification percentages for the logistic equations for the southern region range from 5.6 to 14.0 percent. Average standard errors of prediction for generalized least-squares equations for the southern region range from 71.7 to 98.9 percent and pseudo coefficients of determination for the generalized-least-squares equations range from 87.7 to 91.8 percent. Average standard errors of prediction for weighted-least-squares equations developed for estimating the harmonic-mean-flow statistic for each of the three regions range from 66.4 to 80.4 percent. The regression equations are applicable only to stream sites in Iowa with low flows not significantly affected by regulation, diversion, or urbanization and with basin characteristics within the range of those used to develop the equations. If the equations are used at ungaged sites on regulated streams, or on streams affected by water-supply and agricultural withdrawals, then the estimates will need to be adjusted by the amount of regulation or withdrawal to estimate the actual flow conditions if that is of interest. Caution is advised when applying the equations for basins with characteristics near the applicable limits of the equations and for basins located in karst topography. A test of two drainage-area ratio methods using 31 pairs of streamgages, for the annual 7-day mean low-flow statistic for a recurrence interval of 10 years, indicates a weighted drainage-area ratio method provides better estimates than regional regression equations for an ungaged site on a gaged stream in Iowa when the drainage-area ratio is between 0.5 and 1.4. These regression equations will be implemented within the U.S. Geological Survey StreamStats web-based geographic-information-system tool. StreamStats allows users to click on any ungaged site on a river and compute estimates of the seven selected statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged sites also are provided. StreamStats also allows users to click on any streamgage in Iowa and estimates computed for these seven selected statistics are provided for the streamgage.
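A minimal sketch of the drainage-area ratio transfer tested above; the exponent and the blending weight against the regression estimate are illustrative assumptions, not the report's fitted values:

```python
def dar_estimate(q_gaged, area_ungaged, area_gaged, exponent=1.0):
    # Standard drainage-area ratio transfer: scale the gaged statistic
    # by the ratio of drainage areas (exponent often taken near 1).
    return q_gaged * (area_ungaged / area_gaged) ** exponent

def weighted_estimate(q_dar, q_regression, ratio):
    # Hypothetical weighting: trust the DAR estimate more when the
    # area ratio is near 1 (the report found the weighted DAR method
    # best for ratios between 0.5 and 1.4).
    w = max(0.0, 1.0 - abs(1.0 - ratio))
    return w * q_dar + (1.0 - w) * q_regression
```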
Albustanji, Yusuf M; Albustanji, Mahmoud M; Hegazi, Mohamed M; Amayreh, Mousa M
2014-10-01
The purpose of this study was to assess the prevalence and types of consonant production errors and phonological processes in Saudi Arabic-speaking children with repaired cleft lip and palate (CLP), and to determine the relationship between the frequency of errors and the type of cleft. Possible relationships between age, gender, and frequency of errors were also investigated. Eighty Saudi children with repaired cleft lip and palate aged 6-15 years (mean 6.7 years) underwent speech, language, and hearing evaluation. The diagnosis of articulation deficits was based on the results of an Arabic articulation test. Phonological processes were reported based on a productivity criterion of a minimum 20% occurrence. Diagnosis of nasality was based on a 5-point scale reflecting severity from 0 through 4. All participants underwent intraoral examination, informal language assessment, and hearing evaluation to assess their speech and language abilities. The Chi-square test for independence was used to analyze the results of consonant production as a function of type of CLP and age. Of the 80 participants with CLP, 21 had normal articulation and resonance, and 59 (74%) showed speech abnormalities. Twenty-one of these 59 participants showed only articulation errors; 17 showed only hypernasality; and 21 showed both articulation and resonance deficits. CAs were observed in 20 participants. The productive phonological processes were consonant backing, final consonant deletion, gliding, and stopping. At age 6 and older, 37% of participants had persisting hearing loss. Despite the early age at time of surgery (mean 6.7 months) of the studied CLP participants, a substantial number of them demonstrated articulation errors and hypernasality. The results provide useful findings for comparison across diverse languages. It is especially interesting to consider the prevalence of glottal stops and pharyngeal fricatives in a population for whom these sounds are phonemic. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Karlsson, Kristin; Lax, Ingmar; Lindbäck, Elias; Poludniowski, Gavin
2017-09-01
Geometrical uncertainties can result in a delivered dose to the tumor different from that estimated in the static treatment plan. The purpose of this project was to investigate the accuracy of the dose calculated to the clinical target volume (CTV) with the dose-shift approximation, in stereotactic body radiation therapy (SBRT) of lung tumors, considering setup errors and breathing motion. The dose-shift method was compared with a beam-shift method with dose recalculation. Included were 10 patients (10 tumors) selected to represent a variety of SBRT-treated lung tumors in terms of tumor location, CTV volume, and tumor density. An in-house developed toolkit within a treatment planning system allowed either a shift of the dose matrix or a shift of the beam isocenter with dose recalculation, to simulate setup errors and breathing motion. Setup shifts of different magnitudes (up to 10 mm) and directions, as well as breathing with different peak-to-peak amplitudes (up to 10:5:5 mm), were modeled. The resulting dose-volume histograms (DVHs) were recorded and dose statistics were extracted. Generally, both the dose-shift and beam-shift methods resulted in calculated doses lower than the static planned dose, although the minimum dose (D98%) exceeded the prescribed dose in all cases for setup shifts up to 5 mm. The dose-shift method also generally underestimated the dose compared with the beam-shift method. For clinically realistic systematic displacements of less than 5 mm, the results demonstrated that in the minimum dose region within the CTV, the dose-shift method was accurate to 2% (root-mean-square error). Breathing motion only marginally degraded the dose distributions. Averaged over the patients and shift directions, the dose-shift approximation was determined to be accurate to approximately 2% (RMS) within the CTV, for clinically relevant geometrical uncertainties in SBRT of lung tumors.
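A minimal sketch of the dose-shift approximation under the usual definition of D98% (the dose received by at least 98% of the CTV, i.e., the 2nd percentile of dose inside the mask); the interpolation order and boundary handling are assumptions:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def d98(dose, ctv_mask):
    # D98%: the 2nd percentile of dose within the CTV mask.
    return np.percentile(dose[ctv_mask], 2)

def dose_shift_d98(static_dose, ctv_mask, shift_vox):
    # Dose-shift approximation: translate the static dose matrix by
    # the setup error (in voxels) instead of recalculating the beams.
    shifted = nd_shift(static_dose, shift_vox, order=1, mode="nearest")
    return d98(shifted, ctv_mask)
```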
Program documentation: Surface heating rate of thin skin models (THNSKN)
NASA Technical Reports Server (NTRS)
Mcbryde, J. D.
1975-01-01
Program THNSKN computes the mean heating rate at a maximum of 100 locations on the surface of thin skin transient heating rate models. Output is printed in tabular form and consists of time history tabulation of temperatures, average temperatures, heat loss without conduction correction, mean heating rate, least squares heating rate, and the percent standard error of the least squares heating rates. The input tape used is produced by the program EHTS03.
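A sketch of the least-squares quantities tabulated, for a single location, assuming the standard thin-skin relation q = rho*c*thickness*dT/dt; the material properties are hypothetical inputs, and THNSKN's actual conventions may differ:

```python
import numpy as np

def least_squares_heating_rate(t, T, rho, c, thickness):
    """Thin-skin heating rate q = rho*c*thickness*dT/dt, with dT/dt
    taken as the least-squares slope of the temperature history."""
    slope, _ = np.polyfit(t, T, 1)          # deg/s
    return rho * c * thickness * slope      # e.g. W/m^2

def slope_percent_std_error(t, T):
    """Percent standard error of the least-squares slope (and hence
    of the least-squares heating rate)."""
    n = len(t)
    slope, intercept = np.polyfit(t, T, 1)
    resid = T - (slope * t + intercept)
    s2 = np.sum(resid ** 2) / (n - 2)
    se = np.sqrt(s2 / np.sum((t - np.mean(t)) ** 2))
    return 100.0 * se / abs(slope)
```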
Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.
2015-09-28
Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all percentage of annual exceedance probabilities and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use for stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.
Online measurement of urea concentration in spent dialysate during hemodialysis.
Olesberg, Jonathon T; Arnold, Mark A; Flanigan, Michael J
2004-01-01
We describe online optical measurements of urea in the effluent dialysate line during regular hemodialysis treatment of several patients. Monitoring urea removal can provide valuable information about dialysis efficiency. Spectral measurements were performed with a Fourier-transform infrared spectrometer equipped with a flow-through cell. Spectra were recorded across the 5000-4000 cm(-1) (2.0-2.5 microm) wavelength range at 1-min intervals. Savitzky-Golay filtering was used to remove baseline variations attributable to the temperature dependence of the water absorption spectrum. Urea concentrations were extracted from the filtered spectra by use of partial least-squares regression and the net analyte signal of urea. Urea concentrations predicted by partial least-squares regression matched concentrations obtained from standard chemical assays with a root mean square error of 0.30 mmol/L (0.84 mg/dL urea nitrogen) over an observed concentration range of 0-11 mmol/L. The root mean square error obtained with the net analyte signal of urea was 0.43 mmol/L with a calibration based only on a set of pure-component spectra. The error decreased to 0.23 mmol/L when a slope and offset correction were used. Urea concentrations can be continuously monitored during hemodialysis by near-infrared spectroscopy. Calibrations based on the net analyte signal of urea are particularly appealing because they do not require a training step, as do statistical multivariate calibration procedures such as partial least-squares regression.
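A hedged sketch of the processing chain described (Savitzky-Golay filtering followed by partial least-squares regression); the window length, derivative order, number of PLS components, and placeholder data are assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

# spectra: (n_samples, n_points); urea_ref: reference assays (mmol/L)
spectra = np.random.rand(40, 500)          # placeholder data
urea_ref = np.random.uniform(0, 11, 40)    # placeholder data

# Derivative Savitzky-Golay filtering to suppress baseline variations
filtered = savgol_filter(spectra, window_length=25, polyorder=3,
                         deriv=2, axis=1)

pls = PLSRegression(n_components=6).fit(filtered, urea_ref)
pred = pls.predict(filtered).ravel()
rmse = np.sqrt(np.mean((pred - urea_ref) ** 2))
print(f"RMSE = {rmse:.2f} mmol/L")
```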
Chen, Chieh-Fan; Ho, Wen-Hsien; Chou, Huei-Yin; Yang, Shu-Mei; Chen, I-Te; Shi, Hon-Yi
2011-01-01
This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. Autoregressive integrated moving average (ARIMA) model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage of error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume. PMID:22203886
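A minimal sketch, with statsmodels, of an ARIMA fit with exogenous meteorological regressors and MAPE evaluation in the spirit of the study; the (1, 1, 1) order, the predictors, and the placeholder data are assumptions:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# y: monthly ED revenue; X: exogenous predictors such as mean maximum
# temperature and relative humidity (all placeholder values here).
rng = pd.date_range("2005-01-01", periods=57, freq="MS")
y = pd.Series(np.random.rand(57) * 1e6, index=rng)
X = pd.DataFrame({"t_max": np.random.rand(57) * 10 + 25,
                  "humidity": np.random.rand(57) * 30 + 60}, index=rng)

# Hold out the last 12 months to measure forecast accuracy.
model = ARIMA(y.iloc[:-12], order=(1, 1, 1), exog=X.iloc[:-12]).fit()
fcast = model.forecast(steps=12, exog=X.iloc[-12:])
mape = np.mean(np.abs((y.iloc[-12:] - fcast) / y.iloc[-12:])) * 100
print(f"MAPE = {mape:.1f}%")
```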
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirchhoff, William H.
2012-09-15
The extended logistic function provides a physically reasonable description of interfaces such as depth profiles or line scans of surface topological or compositional features. It describes these interfaces with the minimum number of parameters, namely, position, width, and asymmetry. Logistic Function Profile Fit (LFPF) is a robust, least-squares fitting program in which the nonlinear extended logistic function is linearized by a Taylor series expansion (equivalent to a Newton-Raphson approach) with no apparent introduction of bias in the analysis. The program provides reliable confidence limits for the parameters when systematic errors are minimal and provides a display of the residuals from the fit for the detection of systematic errors. The program will aid researchers in applying ASTM E1636-10, 'Standard practice for analytically describing sputter-depth-profile and linescan-profile data by an extended logistic function,' and may also prove useful in applying ISO 18516:2006, 'Surface chemical analysis-Auger electron spectroscopy and x-ray photoelectron spectroscopy-determination of lateral resolution.' Examples are given of LFPF fits to a secondary ion mass spectrometry depth profile, an Auger surface line scan, and synthetic data generated to exhibit known systematic errors for examining the significance of such errors to the extrapolation of partial profiles.
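For orientation, a simplified four-parameter logistic fit with scipy's curve_fit on synthetic data; the ASTM extended form adds an asymmetry parameter (letting the width differ on either side of the interface) that this sketch omits:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, y0, a, x0, w):
    """Four-parameter logistic: baseline y0, amplitude a, interface
    position x0, width w."""
    return y0 + a / (1.0 + np.exp(-(x - x0) / w))

x = np.linspace(0, 100, 200)                      # depth, arbitrary units
y = logistic(x, 0.1, 1.0, 50.0, 4.0)
y += np.random.normal(scale=0.02, size=x.size)    # synthetic noise

popt, pcov = curve_fit(logistic, x, y, p0=[0, 1, 40, 5])
perr = np.sqrt(np.diag(pcov))                     # 1-sigma confidence limits
print(dict(zip(["y0", "a", "x0", "w"], popt)), perr)
```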
Nondestructive evaluation of soluble solid content in strawberry by near infrared spectroscopy
NASA Astrophysics Data System (ADS)
Guo, Zhiming; Huang, Wenqian; Chen, Liping; Wang, Xiu; Peng, Yankun
This paper demonstrates the feasibility of using near infrared (NIR) spectroscopy combined with synergy interval partial least squares (siPLS) algorithms as a rapid nondestructive method to estimate the soluble solid content (SSC) in strawberry. Spectral preprocessing methods were selected by cross-validation during model calibration. The partial least squares (PLS) algorithm was used to calibrate the regression model. The performance of the final model was evaluated by the root mean square error of calibration (RMSEC) and correlation coefficient (R2c) in the calibration set, and tested by the root mean square error of prediction (RMSEP) and correlation coefficient (R2p) in the prediction set. The optimal siPLS model was obtained after first-derivative spectral preprocessing. The measurement results of the best model were as follows: RMSEC = 0.2259 and R2c = 0.9590 in the calibration set; RMSEP = 0.2892 and R2p = 0.9390 in the prediction set. This work demonstrates that NIR spectroscopy and siPLS with efficient spectral preprocessing form a useful tool for nondestructive evaluation of SSC in strawberry.
Study of the convergence behavior of the complex kernel least mean square algorithm.
Paul, Thomas K; Ogunfunmi, Tokunbo
2013-09-01
The complex kernel least mean square (CKLMS) algorithm was recently derived and allows online kernel adaptive learning for complex data. Kernel adaptive methods can be used to find solutions for neural network and machine learning applications. The derivation of CKLMS involved the development of a modified Wirtinger calculus for Hilbert spaces to obtain the cost function gradient. We analyze the convergence of the CKLMS with different kernel forms for complex data. The expressions obtained enable us to generate theory-predicted mean-square error curves considering the circularity of the complex input signals and their effect on nonlinear learning. Simulations are used to verify the analysis results.
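For orientation, a minimal real-valued kernel LMS sketch with a Gaussian kernel; CKLMS extends this style of update to complex data via the modified Wirtinger calculus, which this sketch does not attempt:

```python
import numpy as np

def klms(inputs, desired, step=0.5, sigma=1.0):
    """Kernel LMS: the estimate is a growing Gaussian-kernel expansion,
    and each new a-priori error adds a center. Returns the errors."""
    centers, coeffs, errors = [], [], []
    for u, d in zip(inputs, desired):
        if centers:
            k = np.exp(-np.sum((np.array(centers) - u) ** 2, axis=1)
                       / (2 * sigma ** 2))
            y = np.dot(coeffs, k)
        else:
            y = 0.0
        e = d - y
        centers.append(u)          # new kernel center
        coeffs.append(step * e)    # LMS-style coefficient
        errors.append(e)
    return np.array(errors)
```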
A study of image quality for radar image processing. [synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
King, R. W.; Kaupp, V. H.; Waite, W. P.; Macdonald, H. C.
1982-01-01
Methods developed for image quality metrics are reviewed with focus on basic interpretation or recognition elements including: tone or color; shape; pattern; size; shadow; texture; site; association or context; and resolution. Seven metrics are believed to show promise as a way of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized-mean-square-error as a measure of geometric fidelity; (6) the perceptual mean square error; and (7) the radar threshold quality factor. Selective levels of degradation are being applied to simulated synthetic radar images to test the validity of these metrics.
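Metric (5) is simple to compute; one common definition is sketched below (the perceptual mean square error of metric (6) would additionally weight errors by a visual-system model):

```python
import numpy as np

def normalized_mse(reference, test):
    # Normalized mean square error between a reference image and a
    # (possibly degraded) test image, as a measure of geometric fidelity.
    ref = reference.astype(float)
    diff = ref - test.astype(float)
    return np.mean(diff ** 2) / np.mean(ref ** 2)
```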
Measuring Dispersion Effects of Factors in Factorial Experiments.
1988-01-01
error is $\mathrm{MSE} = \mathrm{SSE}/(N-p)$, the sum of squares of pure error is $\mathrm{SSPE} = \sum_{i=1}^{n}\sum_{j=1}^{r}(y_{ij}-\bar{y}_i)^2$, and the mean square of pure error is $\mathrm{MSPE} = \mathrm{SSPE}/\big(n(r-1)\big)$. Let $\delta_i = 1$ if the level of the factor in the $i$th run is 1, and $\delta_i = 0$ if the level of the factor in the $i$th run is 0. 3.1. First Measure. We have $\mathrm{SSPE} = \sum_{i=1}^{n}\sum_{j=1}^{r}\delta_i\,(y_{ij}-\bar{y}_i)^2 + \sum_{i=1}^{n}\sum_{j=1}^{r}(1-\delta_i)(y_{ij}-\bar{y}_i)^2$. The first component in SSPE corresponds to level 1 of the factor and has $\big(\sum_{i=1}^{n}\delta_i\big)(r-1)$ degrees of freedom; the second component corresponds to level 0.
A novel beamformer design method for medical ultrasound. Part I: Theory.
Ranganathan, Karthik; Walker, William F
2003-01-01
The design of transmit and receive aperture weightings is a critical step in the development of ultrasound imaging systems. Current design methods are generally iterative, and consequently time consuming and inexact. We describe a new and general ultrasound beamformer design method, the minimum sum squared error (MSSE) technique. The MSSE technique enables aperture design for arbitrary beam patterns (within fundamental limitations imposed by diffraction). It uses a linear algebra formulation to describe the system point spread function (psf) as a function of the aperture weightings. The sum squared error (SSE) between the system psf and the desired or goal psf is minimized, yielding the optimal aperture weightings. We present detailed analysis for continuous wave (CW) and broadband systems. We also discuss several possible applications of the technique, such as the design of aperture weightings that improve the system depth of field, generate limited diffraction transmit beams, and improve the correlation depth of field in translated aperture system geometries. Simulation results are presented in an accompanying paper.
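A minimal sketch of the MSSE idea: if the system psf depends linearly on the aperture weights, psf = A w, then the weights minimizing the sum squared error to a goal psf are the ordinary least-squares solution; the operator A below is a random placeholder, not a simulated ultrasound system:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((256, 64))    # hypothetical weights-to-psf operator
psf_goal = np.exp(-np.linspace(-4, 4, 256) ** 2)   # desired narrow beam

# Aperture weights minimizing the SSE between A @ w and the goal psf.
w_opt, *_ = np.linalg.lstsq(A, psf_goal, rcond=None)
sse = np.sum((A @ w_opt - psf_goal) ** 2)
print(f"residual SSE = {sse:.3f}")
```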
NASA Astrophysics Data System (ADS)
Schneider, P.; Roberts, D. A.
2007-12-01
The Fire Potential Index (FPI) is currently the only operationally used wildfire susceptibility index in the United States that incorporates remote sensing data in addition to meteorological information. Its remote sensing component utilizes relative greenness (RG) derived from a NDVI time series as a proxy for computing the ratio of live to dead vegetation. This study investigates the potential of Multiple Endmember Spectral Mixture Analysis (MESMA) as a more direct and physically reasonable way of computing the live ratio and applying it for the computation of the FPI. A time series of 16-day reflectance composites of Moderate Resolution Imaging Spectroradiometer (MODIS) data was used to perform the analysis. Endmember selection for green vegetation (GV), non-photosynthetic vegetation (NPV) and soil was performed in two stages. First, a subset of suitable endmembers was selected from an extensive library of reference and image spectra for each class using Endmember Average Root Mean Square Error (EAR), Minimum Average Spectral Angle (MASA) and a count-based technique. Second, the most appropriate endmembers for the specific data set were selected from the subset by running a series of 2-endmember models on representative images and choosing the ones that modeled the majority of pixels. The final set of endmembers was used for running MESMA on southern California MODIS composites from 2000 to 2006. 3- and 4-endmember models were considered. The best model was chosen on a per-pixel basis according to the minimum root mean square error of the models at each level of complexity. Endmember fractions were normalized by the shade endmember to generate realistic fractions of GV and NPV. In order to validate the MESMA-derived GV fractions they were compared against live ratio estimates from RG. A significant spatial and temporal relationship between both measures was found, indicating that GV fraction has the potential to substitute RG in computing the FPI. To further test this hypothesis the live ratio estimates obtained from MESMA were used to compute daily FPI maps for southern California from 2001 to 2006. A validation with historical wildfire data from the MODIS Active Fire product was carried out over the same time period using logistic regression. Initial results show that MESMA-derived GV fraction can be used successfully for generating FPI maps of southern California.
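A minimal sketch of the per-pixel model selection step: try candidate 2-endmember linear mixtures and keep the minimum-RMSE model; shade normalization and fraction constraints, which MESMA applies, are omitted here:

```python
import numpy as np

def best_two_endmember_model(pixel, endmembers):
    """pixel: (bands,) reflectance; endmembers: list of (bands,) spectra.
    Returns (rmse, (i, j), fractions) for the best 2-endmember model."""
    best = (np.inf, None, None)
    n = len(endmembers)
    for i in range(n):
        for j in range(i + 1, n):
            E = np.column_stack([endmembers[i], endmembers[j]])
            f, *_ = np.linalg.lstsq(E, pixel, rcond=None)  # fractions
            rmse = np.sqrt(np.mean((E @ f - pixel) ** 2))
            if rmse < best[0]:
                best = (rmse, (i, j), f)
    return best
```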
Low complexity adaptive equalizers for underwater acoustic communications
NASA Astrophysics Data System (ADS)
Soflaei, Masoumeh; Azmi, Paeiz
2014-08-01
Interference signals due to scattering from the surface and reflection from the bottom are among the most important problems for reliable communications in shallow water channels. One of the best suggested ways to solve this problem is to use adaptive equalizers. Convergence rate and misadjustment error of adaptive algorithms play important roles in adaptive equalizer performance. In this paper, the affine projection algorithm (APA), selective regressor APA (SR-APA), the family of selective partial update (SPU) algorithms, the family of set-membership (SM) algorithms, and the selective partial update selective regressor APA (SPU-SR-APA) are compared with conventional algorithms such as least mean square (LMS) in underwater acoustic communications. We apply experimental data from the Strait of Hormuz to demonstrate the efficiency of the proposed methods over a shallow water channel. We observe that the steady-state mean square error (MSE) values of the SR-APA, SPU-APA, SPU-normalized least mean square (SPU-NLMS), SPU-SR-APA, SM-APA and SM-NLMS algorithms decrease in comparison with the LMS algorithm. These algorithms also have better convergence rates than the LMS-type algorithm.
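For reference, a minimal NLMS adaptive equalizer sketch, one of the baselines the selective and set-membership variants improve upon; the tap count and step size are illustrative:

```python
import numpy as np

def nlms_equalizer(x, d, taps=16, mu=0.5, eps=1e-6):
    """Normalized LMS equalizer: x is the received signal, d the
    training (desired) symbols of the same length; returns the
    a-priori errors and the final tap weights."""
    w = np.zeros(taps)
    errors = np.zeros(len(d))
    for n in range(taps, len(d)):
        u = x[n - taps:n][::-1]          # tap-input vector
        e = d[n] - w @ u                 # a-priori error
        w += mu * e * u / (eps + u @ u)  # normalized update
        errors[n] = e
    return errors, w
```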
Akdenur, B; Okkesum, S; Kara, S; Günes, S
2009-11-01
In this study, electromyography signals sampled from children undergoing orthodontic treatment were used to estimate the effect of an orthodontic trainer on the anterior temporal muscle. A novel data normalization method, called the correlation- and covariance-supported normalization method (CCSNM), based on correlation and covariance between features in a data set, is proposed to provide predictive guidance to the orthodontic technique. The method was tested in two stages: first, data normalization using the CCSNM; second, prediction of normalized values of anterior temporal muscles using an artificial neural network (ANN) with a Levenberg-Marquardt learning algorithm. The data set consists of electromyography signals from right anterior temporal muscles, recorded from 20 children aged 8-13 years with class II malocclusion. The signals were recorded at the start and end of a 6-month treatment. In order to train and test the ANN, two-fold cross-validation was used. The CCSNM was compared with four normalization methods: minimum-maximum normalization, z-score, decimal scaling, and line base normalization. To assess the performance of the proposed method, prevalent performance measures were examined: the mean square error and mean absolute error as mathematical measures, the statistical relation factor R2, and the average deviation. The results show that the CCSNM was the best of the normalization methods for estimating the effect of the trainer.
DWI filtering using joint information for DTI and HARDI.
Tristán-Vega, Antonio; Aja-Fernández, Santiago
2010-04-01
The filtering of the Diffusion Weighted Images (DWI) prior to the estimation of the diffusion tensor or other fiber Orientation Distribution Functions (ODF) has been proved to be of paramount importance in the recent literature. More precisely, it has been shown that the estimation of the diffusion tensor without a previous filtering stage induces errors which cannot be recovered by further regularization of the tensor field. A number of approaches have been proposed to overcome this problem, most of them based on the restoration of each DWI gradient image separately. In this paper we propose a methodology to take advantage of the joint information in the DWI volumes, i.e., the sum of the information given by all DWI channels plus the correlations between them. This way, all the gradient images are filtered together, exploiting the first- and second-order information they share. We adapt this methodology to two filters, namely the Linear Minimum Mean Squared Error (LMMSE) and the Unbiased Non-Local Means (UNLM). These new filters are tested over a wide variety of synthetic and real data, showing the convenience of the new approach, especially for High Angular Resolution Diffusion Imaging (HARDI). Among the techniques presented, the joint LMMSE proves to be a very attractive approach, since it shows an accuracy similar to UNLM (or even better in some situations) with a much lighter computational load. Copyright 2009 Elsevier B.V. All rights reserved.
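A pointwise, single-channel, Gaussian-noise LMMSE shrinkage sketch for intuition; the paper's joint LMMSE additionally exploits cross-channel covariances and the Rician statistics of MRI noise, both omitted here:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lmmse_denoise(dwi, noise_var, win=3):
    """LMMSE shrinkage toward the local mean for one DWI channel."""
    mu = uniform_filter(dwi, win)                     # local mean
    var = uniform_filter(dwi ** 2, win) - mu ** 2     # local variance
    sig2 = np.maximum(var - noise_var, 0.0)           # signal variance
    gain = sig2 / np.maximum(sig2 + noise_var, 1e-12)
    return mu + gain * (dwi - mu)
```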
Temperature-based estimation of global solar radiation using soft computing methodologies
NASA Astrophysics Data System (ADS)
Mohammadi, Kasra; Shamshirband, Shahaboddin; Danesh, Amir Seyed; Abdullah, Mohd Shahidan; Zamani, Mazdak
2016-07-01
Precise knowledge of solar radiation is essential in different technological and scientific applications of solar energy. Temperature-based estimation of global solar radiation is appealing owing to the broad availability of measured air temperatures. In this study, the potentials of soft computing techniques are evaluated to estimate daily horizontal global solar radiation (DHGSR) from measured maximum, minimum, and average air temperatures (Tmax, Tmin, and Tavg) in an Iranian city. For this purpose, a comparative evaluation of three methodologies is performed: adaptive neuro-fuzzy inference system (ANFIS), radial basis function support vector regression (SVR-rbf), and polynomial basis function support vector regression (SVR-poly). Five combinations of Tmax, Tmin, and Tavg serve as inputs to develop ANFIS, SVR-rbf, and SVR-poly models. The attained results show that all ANFIS, SVR-rbf, and SVR-poly models provide favorable accuracy. For all techniques, the higher accuracies are achieved by models (5) using Tmax - Tmin and Tmax as inputs. According to the statistical results, SVR-rbf outperforms SVR-poly and ANFIS. For SVR-rbf (5), the mean absolute bias error, root mean square error, and correlation coefficient are 1.1931 MJ/m2, 2.0716 MJ/m2, and 0.9380, respectively. The results confirm that SVR-rbf can be used efficiently to estimate DHGSR from air temperatures.
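A hedged sketch of an SVR-rbf model on the inputs of model (5); the hyperparameters and the placeholder data are assumptions for illustration:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for measured temperatures and DHGSR.
rng = np.random.default_rng(1)
t_max = rng.uniform(15, 40, 365)
t_min = t_max - rng.uniform(5, 15, 365)
dhgsr = 21.0 * np.sqrt(t_max - t_min) + rng.normal(0, 2, 365)  # MJ/m^2

# Model (5): inputs Tmax - Tmin and Tmax.
X = np.column_stack([t_max - t_min, t_max])
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma=0.1))
model.fit(X, dhgsr)
rmse = np.sqrt(np.mean((model.predict(X) - dhgsr) ** 2))
print(f"RMSE = {rmse:.3f} MJ/m^2")
```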
Four Types of Pulse Oximeters Accurately Detect Hypoxia during Low Perfusion and Motion.
Louie, Aaron; Feiner, John R; Bickler, Philip E; Rhodes, Laura; Bernstein, Michael; Lucero, Jennifer
2018-03-01
Pulse oximeter performance is degraded by motion artifacts and low perfusion. Manufacturers developed algorithms to improve instrument performance during these challenges. There have been no independent comparisons of these devices. We evaluated the performance of four pulse oximeters (Masimo Radical-7, USA; Nihon Kohden OxyPal Neo, Japan; Nellcor N-600, USA; and Philips Intellivue MP5, USA) in 10 healthy adult volunteers. Three motions were evaluated: tapping, pseudorandom, and volunteer-generated rubbing, adjusted to produce photoplethysmogram disturbance similar to arterial pulsation amplitude. During motion, inspired gases were adjusted to achieve stable target plateaus of arterial oxygen saturation (SaO2) at 75%, 88%, and 100%. Pulse oximeter readings were compared with simultaneous arterial blood samples to calculate bias (oxygen saturation measured by pulse oximetry [SpO2] - SaO2), mean, SD, 95% limits of agreement, and root mean square error. Receiver operating characteristic curves were determined to detect mild (SaO2 < 90%) and severe (SaO2 < 80%) hypoxemia. Pulse oximeter readings corresponding to 190 blood samples were analyzed. All oximeters detected hypoxia, but motion and low perfusion degraded performance. Three of four oximeters (Masimo, Nellcor, and Philips) had root mean square error greater than 3% for SaO2 70 to 100% during any motion, compared to a root mean square error of 1.8% for the stationary control. A low perfusion index increased error. All oximeters detected hypoxemia during motion and low-perfusion conditions, but motion impaired performance at all ranges, with less accuracy at lower SaO2. Lower perfusion degraded performance in all but the Nihon Kohden instrument. We conclude that different types of pulse oximeters can be similarly effective in preserving sensitivity to clinically relevant hypoxia.
Circulation patterns in the deep Subtropical Northeast Atlantic with ARGO data
NASA Astrophysics Data System (ADS)
Calheiros, Tomas; Bashmachnikov, Igor
2014-05-01
In this work we study the dominant circulation patterns in the Subtropical Northeast Atlantic [25-45° N, 5-35° W] using ARGO data. The data were obtained from the Coriolis operational data center (ftp://ftp.ifremer.fr) for the years 1999-2013. During this period, 376 floats were available in the study area, with 15,062 float-months of data in total. The floats were launched at depths between 300 and 2000 m, but most were concentrated at 1000 m (2000 float-months) and 1500 m (3400 float-months). In the upper 400-m layer there were also about 1000 float-months, but their number and distribution did not allow analysis of the mean currents over the study region. For each float position the Lagrangian current velocity was computed as the difference between the position where the buoy started sinking to the reference depth and the subsequent position of surfacing of the float, divided by the respective time interval. This reduced the noise related to sea-surface drift of the buoys during the data-transmission periods. The mean Eulerian velocity and its error were computed in each 2° x 2° square. Whenever more than 150 observations of the Lagrangian velocity were available in a 2° x 2° square, the square was split into four smaller 1° x 1° squares, in each of which the mean Eulerian velocities and their errors were estimated. Eulerian currents at 1000 m, as well as at 1500 m depth, formed an overall anticyclonic circulation pattern in the study region. The modal velocity of all buoys at the 1000 m level was 4 cm/s with an error of the mean of 1.8 cm/s. The modal velocity of all buoys at 1500 m was 3 cm/s with an error of the mean of 1.4 cm/s. The southwestward flows near the Madeira Island and the further westward flow along the zonal band of 25-30° N at 1500 m depth corresponded well to the extension of the deep fraction of the Mediterranean Water salt tongue.
A new enhanced index tracking model in portfolio optimization with sum weighted approach
NASA Astrophysics Data System (ADS)
Siew, Lam Weng; Jaaman, Saiful Hafizah; Hoe, Lam Weng
2017-04-01
Index tracking is a portfolio management approach which aims to construct an optimal portfolio that achieves a return similar to the benchmark index return at minimum tracking error, without purchasing all of the stocks that make up the index. Enhanced index tracking is an improved portfolio management approach which aims to generate a portfolio return higher than the benchmark index return, besides minimizing the tracking error. The objective of this paper is to propose a new enhanced index tracking model with a sum weighted approach to improve the existing index tracking model for tracking the benchmark Technology Index in Malaysia. The optimal portfolio composition and performance of both models are determined and compared in terms of portfolio mean return, tracking error and information ratio. The results of this study show that the optimal portfolio of the proposed model is able to generate a higher mean return than the benchmark index at minimum tracking error, and the proposed model outperforms the existing model in tracking the benchmark index. The significance of this study is to propose a new enhanced index tracking model with a sum weighted approach, which improves the portfolio mean return by 67% as compared to the existing model.
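A minimal sketch of one common enhanced-index-tracking formulation (tracking error penalized against mean excess return); the paper's sum weighted objective may differ in detail, so this is an illustration of the problem class only:

```python
import numpy as np
from scipy.optimize import minimize

def enhanced_index_tracking(R, rb, lam=1.0):
    """R: (T, n) stock returns; rb: (T,) benchmark returns.
    Minimize tracking error minus lam * mean excess return,
    subject to fully invested, long-only weights."""
    T, n = R.shape

    def objective(w):
        diff = R @ w - rb
        return np.sqrt(np.mean(diff ** 2)) - lam * np.mean(diff)

    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * n
    res = minimize(objective, np.full(n, 1.0 / n),
                   bounds=bounds, constraints=cons)
    return res.x
```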
Development of a good-quality speech coder for transmission over noisy channels at 2.4 kb/s
NASA Astrophysics Data System (ADS)
Viswanathan, V. R.; Berouti, M.; Higgins, A.; Russell, W.
1982-03-01
This report describes the development, study, and experimental results of a 2.4 kb/s speech coder called the harmonic deviations (HDV) vocoder, which transmits good-quality speech over noisy channels with bit-error rates of up to 1%. The HDV coder is based on the linear predictive coding (LPC) vocoder, and it transmits additional information over and above the data transmitted by the LPC vocoder, in the form of deviations between the speech spectrum and the LPC all-pole model spectrum at a selected set of frequencies. At the receiver, the spectral deviations are used to generate the excitation signal for the all-pole synthesis filter. The report describes and compares several methods for extracting the spectral deviations from the speech signal and for encoding them. To limit the bit rate of the HDV coder to 2.4 kb/s, the report discusses several methods, including orthogonal transformation and minimum-mean-square-error scalar quantization of log area ratios, two-stage vector-scalar quantization, and variable frame rate transmission. The report also presents the results of speech-quality optimization of the HDV coder at 2.4 kb/s.
NASA Astrophysics Data System (ADS)
Takeda, Kazuaki; Kojima, Yohei; Adachi, Fumiyuki
Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide a better bit error rate (BER) performance than rake combining. However, residual inter-chip interference (ICI) is produced after MMSE-FDE, and this degrades the BER performance. Recently, we showed that frequency-domain ICI cancellation can bring the BER performance close to the theoretical lower bound. To further improve the BER performance, transmit antenna diversity techniques are effective. Cyclic delay transmit diversity (CDTD) can increase the number of equivalent paths and hence achieve a large frequency diversity gain. Space-time transmit diversity (STTD) can obtain antenna diversity gain due to the space-time coding and achieve a better BER performance than CDTD. The objective of this paper is to show that the BER performance degradation of CDTD is mainly due to the residual ICI and that the introduction of ICI cancellation gives almost the same BER performance as STTD. This study provides a very important result: CDTD has the great advantage of providing a higher throughput than STTD. This is confirmed by computer simulation; the results show that CDTD can achieve higher throughput than STTD when ICI cancellation is introduced.
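The one-tap MMSE-FDE weights have a standard closed form; a minimal sketch for a single cyclic-prefixed block, with the channel frequency response H assumed known:

```python
import numpy as np

def mmse_fde(received, H, es_over_n0):
    """One-tap MMSE frequency-domain equalization.

    received: one cyclic-prefix-removed block (time domain);
    H: channel frequency response on the same FFT grid;
    es_over_n0: average symbol-energy-to-noise ratio.
    """
    R = np.fft.fft(received)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / es_over_n0)  # MMSE weights
    return np.fft.ifft(W * R)
```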
Topological Interference Management for K-User Downlink Massive MIMO Relay Network Channel.
Selvaprabhu, Poongundran; Chinnadurai, Sunil; Li, Jun; Lee, Moon Ho
2017-08-17
In this paper, we study the emergence of topological interference alignment and the characterizing features of a multi-user broadcast interference relay channel. We propose an alternative transmission strategy named the relay space-time interference alignment (R-STIA) technique, in which a K-user multiple-input-multiple-output (MIMO) interference channel has massive antennas at the transmitter and relay. Severe interference from unknown transmitters affects the downlink relay network channel and degrades the system performance. An additional (unintended) receiver is introduced in the proposed R-STIA technique to overcome the above problem, since it has the ability to decode the desired signals for the intended receiver by considering cooperation between the receivers. The additional receiver also helps in recovering and reconstructing the interference signals with limited channel state information at the relay (CSIR). The Alamouti space-time transmission technique and minimum mean square error (MMSE) linear precoder are also used in the proposed scheme to detect the presence of interference signals. Numerical results show that the proposed R-STIA technique achieves a better performance in terms of the bit error rate (BER) and sum-rate compared to the existing broadcast channel schemes.
Generating DEM from LIDAR data - comparison of available software tools
NASA Astrophysics Data System (ADS)
Korzeniowska, K.; Lacka, M.
2011-12-01
In recent years many software tools and applications have appeared that offer procedures, scripts and algorithms to process and visualize ALS data. This variety of software tools and of "point cloud" processing methods contributed to the aim of this study: to assess algorithms available in various software tools that are used to classify LIDAR "point cloud" data, through a careful examination of Digital Elevation Models (DEMs) generated from LIDAR data on the basis of these algorithms. The work focused on the most important available software tools, both commercial and open source. Two sites in a mountain area were selected for the study. The area of each site is 0.645 sq km. DEMs generated with the analysed software tools were compared with a reference dataset, generated using manual methods to eliminate non-ground points. Surfaces were analysed using raster analysis. Minimum, maximum and mean differences between the reference DEM and the DEMs generated with the analysed software tools were calculated, together with the Root Mean Square Error. Differences between DEMs were also examined visually using transects along the grid axes in the test sites.
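The comparison reduces to simple raster difference statistics; a minimal sketch (NoData handling beyond NaNs omitted):

```python
import numpy as np

def dem_comparison_stats(dem, reference):
    """Minimum, maximum and mean difference plus RMSE of a generated
    DEM against the reference DEM, both as equally sized 2-D arrays."""
    diff = dem.astype(float) - reference.astype(float)
    return {"min": np.nanmin(diff), "max": np.nanmax(diff),
            "mean": np.nanmean(diff),
            "rmse": np.sqrt(np.nanmean(diff ** 2))}
```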
2009-07-16
Statistical analysis: coefficient of correlation. $R^2 = \mathrm{SSR}/\mathrm{SSTO} = 1 - \mathrm{SSE}/\mathrm{SSTO}$, where $\mathrm{SSR} = \sum_i(\hat{Y}_i - \bar{Y})^2$ is the regression sum of squares, $\mathrm{SSE} = \sum_i(Y_i - \hat{Y}_i)^2$ is the error sum of squares, and $\mathrm{SSTO} = \sum_i(Y_i - \bar{Y})^2$ is the total sum of squares ($\bar{Y}$: mean value; $\hat{Y}_i$: value from the fitted line); $\mathrm{SSTO} = \mathrm{SSE} + \mathrm{SSR}$.
MMSE Estimator for Children’s Speech with Car and Weather Noise
NASA Astrophysics Data System (ADS)
Sayuthi, V.
2018-04-01
Previous research has noted that most people need and use vehicles as a means of traveling, now and in the future, and for various purposes. Much can be done in a vehicle, such as enjoying entertainment and doing work, so vehicles are no longer only a means of traveling. In this study, we examine children's speech, from a girl in a vehicle, affected by noise from the sound sources of car noise and the surrounding weather, in this case rainy weather noise. Vehicle sounds may come from the car engine or the car air conditioner. The minimum mean square error (MMSE) estimator is used in an attempt to obtain, or detect, the child's clear speech; in the simulation the speech and the disturbing noise are represented as random processes characterized by their autocorrelations. This MMSE estimator can be regarded as a Wiener filter, since the clear sound is reconstructed. We expect that the results of this study can serve as a basis for the development of entertainment or communication technology for vehicle passengers in the future, particularly using MMSE estimators.
A new art code for tomographic interferometry
NASA Technical Reports Server (NTRS)
Tan, H.; Modarress, D.
1987-01-01
A new algebraic reconstruction technique (ART) code based on the iterative refinement method of least squares solution for tomographic reconstruction is presented. Accuracy and convergence of the technique are evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to a solution for which the residual was minimum. The effects of increased data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
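For orientation, a minimal additive-ART (Kaczmarz) sweep over the projection equations Ax = b; the paper's code adds iterative refinement of the least-squares solution, which this sketch omits:

```python
import numpy as np

def art_reconstruct(A, b, sweeps=50, relax=1.0):
    """Additive ART: cycle through the rows of Ax = b, correcting x
    along each row direction by the scaled residual."""
    x = np.zeros(A.shape[1])
    row_norm2 = np.einsum("ij,ij->i", A, A)   # squared row norms
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            if row_norm2[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x
```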
NASA Astrophysics Data System (ADS)
Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd
2017-08-01
The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation; in other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its property of minimum variance in the class of linear unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation; a procedure that treats all of the data equally would give less precisely measured points more influence than they should have and would give highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate the paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the areas of Kedah and Perlis. The results indicated that the factors affecting paddy production are the mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
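A minimal weighted least squares sketch for a polynomial model, with the weights playing the inverse-variance role described above:

```python
import numpy as np

def weighted_poly_fit(x, y, degree, weights):
    """Weighted least squares for a polynomial model: solve the
    normal equations (X'WX) beta = X'Wy. Weights are typically
    1/variance, so imprecise observations get less influence."""
    X = np.vander(x, degree + 1)        # polynomial design matrix
    W = np.diag(weights)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta                         # highest-order coefficient first
```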
Accuracy of a pulse-coherent acoustic Doppler profiler in a wave-dominated flow
Lacy, J.R.; Sherwood, C.R.
2004-01-01
The accuracy of velocities measured by a pulse-coherent acoustic Doppler profiler (PCADP) in the bottom boundary layer of a wave-dominated inner-shelf environment is evaluated. The downward-looking PCADP measured velocities in eight 10-cm cells at 1 Hz. Velocities measured by the PCADP are compared to those measured by an acoustic Doppler velocimeter for wave orbital velocities up to 95 cm s-1 and currents up to 40 cm s-1. An algorithm for correcting ambiguity errors using the resolution velocities was developed. Instrument bias, measured as the average error in burst mean speed, is -0.4 cm s-1 (standard deviation = 0.8). The accuracy (root-mean-square error) of instantaneous velocities has a mean of 8.6 cm s-1 (standard deviation = 6.5) for eastward velocities (the predominant direction of waves), 6.5 cm s-1 (standard deviation = 4.4) for northward velocities, and 2.4 cm s-1 (standard deviation = 1.6) for vertical velocities. Both burst mean and root-mean-square errors are greater for bursts with ub ≥ 50 cm s-1. Profiles of burst mean speeds from the bottom five cells were fit to logarithmic curves: 92% of bursts with mean speed ≥ 5 cm s-1 have a correlation coefficient R2 > 0.96. In cells close to the transducer, instantaneous velocities are noisy, burst mean velocities are biased low, and bottom orbital velocities are biased high. With adequate blanking distances for both the profile and resolution velocities, the PCADP provides sufficient accuracy to measure velocities in the bottom boundary layer under moderately energetic inner-shelf conditions.
Fitting a function to time-dependent ensemble averaged data.
Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias
2018-05-03
Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
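For orientation: in weighted least squares with weight matrix $W$, data covariance $C$, and fit-function Jacobian $J$, neglecting correlation uses $(J^{T}WJ)^{-1}$ for the parameter covariance, whereas the full-covariance (sandwich) form, presumably the quantity behind WLS-ICE, is

$$\mathrm{Cov}(\hat{\theta}) \approx (J^{T}WJ)^{-1}\,J^{T}W\,C\,WJ\,(J^{T}WJ)^{-1},$$

which reduces to $(J^{T}WJ)^{-1}$ when $W = C^{-1}$.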
Modeling of surface dust concentrations using neural networks and kriging
NASA Astrophysics Data System (ADS)
Buevich, Alexander G.; Medvedev, Alexander N.; Sergeev, Alexander P.; Tarasov, Dmitry A.; Shichkin, Andrey V.; Sergeeva, Marina V.; Atanasova, T. B.
2016-12-01
Creating models which are able to accurately predict the distribution of pollutants based on a limited set of input data is an important task in environmental studies. In this paper two neural approaches (multilayer perceptron (MLP) and generalized regression neural network (GRNN)) and two geostatistical approaches (kriging and cokriging) are used for modeling and forecasting of dust concentrations in snow cover. The area of study is under the influence of dust emissions from a copper quarry and several industrial companies. A comparison of the mentioned approaches is conducted. Three indices are used as indicators of the models' accuracy: the mean absolute error (MAE), root mean square error (RMSE) and relative root mean square error (RRMSE). Models based on artificial neural networks (ANN) have shown better accuracy. Considering all indices, the most precise model was the GRNN, which uses as input parameters for modeling the coordinates of the sampling points and the distance to the probable emissions source. The results of this work confirm that a trained ANN may be a more suitable tool for modeling dust concentrations in snow cover.
40 CFR 65.84 - Operating requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
...-tight means that the pressure in a truck or railcar tank will not drop more than 750 pascals (0.11 pound per square inch) within 5 minutes after it is pressurized to a minimum of 4,500 pascals (0.65 pound...
NASA Astrophysics Data System (ADS)
Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.
2014-12-01
This study investigated the performance of the Lee-Carter (LC) method and its variants in modeling and forecasting Malaysia mortality. These include the original LC, the Lee-Miller (LM) variant and the Booth-Maindonald-Smith (BMS) variant. These methods were evaluated using Malaysia's mortality data, measured as age-specific death rates (ASDR) for 1971 to 2009 for the overall population, while data for 1980-2009 were used in separate models for the male and female populations. The performance of the variants was examined in terms of the goodness of fit of the models and forecasting accuracy. Comparison was made based on several criteria, namely mean square error (MSE), root mean square error (RMSE), mean absolute deviation (MAD) and mean absolute percentage error (MAPE). The results indicate that the BMS method performed best in in-sample fitting, both for the overall population and when the models were fitted separately for the male and female populations. However, in the case of out-of-sample forecast accuracy, the BMS method was best only when the data were fitted to the overall population; when the data were fitted separately, LCnone performed better for the male population and the LM method was better for the female population.
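For orientation, a minimal SVD-based LC fit of log m(x,t) = a_x + b_x k_t; the LM and BMS variants mainly change the fitting period and the adjustment of k_t, which this sketch omits:

```python
import numpy as np

def lee_carter_fit(log_asdr):
    """Basic Lee-Carter fit via a rank-1 SVD.

    log_asdr: (ages, years) matrix of log age-specific death rates.
    Returns a_x (age pattern), b_x (age sensitivities, summing to 1)
    and k_t (mortality index over time)."""
    a_x = log_asdr.mean(axis=1)
    centered = log_asdr - a_x[:, None]
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    scale = U[:, 0].sum()                 # sign/scale convention
    b_x = U[:, 0] / scale                 # normalize sum(b_x) = 1
    k_t = s[0] * Vt[0] * scale            # rescale to compensate
    return a_x, b_x, k_t
```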
Quantitative Modelling of Trace Elements in Hard Coal.
Smoliński, Adam; Howaniec, Natalia
2016-01-01
The significance of coal in the world economy remains unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for contaminated data set and the correct identification of outlying objects based on the robust scales were required. These enabled the development of the correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all except for one the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% only for three models constructed. The study is of both cognitive and applicative importance. It presents the unique application of the chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools of coal quality assessment.
The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates
Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin
2011-01-01
A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). By using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is also reduced, from 0.66 K to 0.44 K, and the MAPE is 1.3%. PMID:22164030
Lankford, Christopher L; Does, Mark D
2018-02-01
Quantitative MRI may require correcting for nuisance parameters which can or must be constrained to independently measured or assumed values. The noise and/or bias in these constraints propagate to the fitted parameters. As an example, the case of refocusing pulse flip angle constraint in multiple spin echo T2 mapping is explored. An analytical expression for the mean-squared error of a parameter of interest was derived as a function of the accuracy and precision of an independent estimate of a nuisance parameter. The expression was validated by simulations and then used to evaluate the effects of flip angle (θ) constraint on the accuracy and precision of T̂2 for a variety of multi-echo T2 mapping protocols. Constraining θ improved T̂2 precision when the θ-map signal-to-noise ratio was greater than approximately one-half that of the first spin echo image. For many practical scenarios, constrained fitting was calculated to reduce not just the variance but the full mean-squared error of T̂2, for bias in θ̂ ≲ 6%. The analytical expression derived in this work can be applied to inform experimental design in quantitative MRI. The example application to T2 mapping provided specific cases, depending on θ̂ accuracy and precision, in which θ̂ measurement and constraint would be beneficial to T̂2 variance or mean-squared error. Magn Reson Med 79:673-682, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Medium-range Performance of the Global NWP Model
NASA Astrophysics Data System (ADS)
Kim, J.; Jang, T.; Kim, J.; Kim, Y.
2017-12-01
The medium-range performance of the global numerical weather prediction (NWP) model in the Korea Meteorological Administration (KMA) is investigated, based on the prediction of the extratropical circulation. The mean square error can be expressed as the sum of the spatial variance of the discrepancy between forecasts and observations and the square of the mean error (ME). It is therefore important to investigate the ME contribution in order to understand the model performance. The ME is obtained by subtracting an anomaly from the forecast difference against the real climatology. It is found that the global model suffers from a severe systematic ME in medium-range forecasts. The systematic ME is dominant in the entire troposphere in all months, and can explain at most 25% of the root mean square error. We also compare the extratropical ME distribution with that from other NWP centers; the NWP models exhibit spatial ME structures similar to each other. It is found that the spatial ME pattern is highly correlated with that of an anomaly, implying that the ME varies with season. For example, the correlation coefficient between ME and anomaly ranges from -0.51 to -0.85 by month. The pattern of the extratropical circulation also has a high correlation with an anomaly. The global model has trouble faithfully simulating extratropical cyclones and blockings in the medium-range forecast; in particular, it finds it hard to simulate anomalous events. If an anomalous period is chosen for a test-bed experiment, a large error due to the anomaly should therefore be expected.
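For reference, the decomposition invoked above can be written as a minimal LaTeX sketch (overbars denote spatial means over the verification domain; f = forecast, o = observation):

```latex
\mathrm{MSE}
  = \overline{(f-o)^{2}}
  = \underbrace{\overline{\bigl[(f-o)-\overline{(f-o)}\bigr]^{2}}}_{\text{spatial variance of the discrepancy}}
  + \underbrace{\bigl(\overline{f-o}\bigr)^{2}}_{\mathrm{ME}^{2}}
```

so the squared mean error is the systematic share of the total MSE.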
Tan, Jin; Li, Rong; Jiang, Zi-Tao; Tang, Shu-Hua; Wang, Ying; Shi, Meng; Xiao, Yi-Qian; Jia, Bin; Lu, Tian-Xiang; Wang, Hao
2017-02-15
Synchronous front-face fluorescence spectroscopy has been developed for the discrimination of used frying oil (UFO) from edible vegetable oil (EVO), the estimation of the using time of UFO, and the determination of the adulteration of EVO with UFO. Both the heating time of laboratory-prepared UFO and the adulteration of EVO with UFO could be determined by partial least squares regression (PLSR). To simulate EVO adulteration with UFO, fifty adulterated samples per kind of oil, with adulterant levels ranging from 1% to 50%, were prepared. PLSR was then adopted to build the model, and both full (leave-one-out) cross-validation and external validation were performed to evaluate the predictive ability. Under optimum conditions, the plots of observed versus predicted values exhibited high linearity (R(2)>0.96). The root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP) were both lower than 3%. Copyright © 2016 Elsevier Ltd. All rights reserved.
Kuriakose, Saji; Joe, I Hubert
2013-11-01
Determination of the authenticity of essential oils has become more significant, in recent years, following some illegal adulteration and contamination scandals. The present investigative study focuses on the application of near infrared spectroscopy to detect sample authenticity and quantify economic adulteration of sandalwood oils. Several data pre-treatments are investigated for calibration and prediction using partial least square regression (PLSR). The quantitative data analysis is done using a new spectral approach - full spectrum or sequential spectrum. The optimum number of PLS components is obtained according to the lowest root mean square error of calibration (RMSEC=0.00009% v/v). The lowest root mean square error of prediction (RMSEP=0.00016% v/v) in the test set and the highest coefficient of determination (R(2)=0.99989) are used as the evaluation tools for the best model. A nonlinear method, locally weighted regression (LWR), is added to extract nonlinear information and to compare with the linear PLSR model. Copyright © 2013 Elsevier B.V. All rights reserved.
Miaw, Carolina Sheng Whei; Assis, Camila; Silva, Alessandro Rangel Carolino Sales; Cunha, Maria Luísa; Sena, Marcelo Martins; de Souza, Scheilla Vitorino Carvalho
2018-07-15
Grape, orange, peach and passion fruit nectars were formulated and adulterated by dilution with syrup, apple and cashew juices at 10 levels for each adulterant. Attenuated total reflectance Fourier transform mid-infrared (ATR-FTIR) spectra were obtained. Partial least squares (PLS) multivariate calibration models allied to different variable selection methods, such as interval partial least squares (iPLS), ordered predictors selection (OPS) and genetic algorithm (GA), were used to quantify the main fruits. PLS improved by iPLS-OPS variable selection showed the highest predictive capacity to quantify the main fruit contents. The selected variables in the final models varied from 72 to 100; the root mean square errors of prediction were estimated from 0.5 to 2.6%; the correlation coefficients of prediction ranged from 0.948 to 0.990; and the mean relative errors of prediction varied from 3.0 to 6.7%. All of the developed models were validated. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Braverman, Amy; Nguyen, Hai; Olsen, Edward; Cressie, Noel
2011-01-01
Space-time Data Fusion (STDF) is a methodology for combining heterogeneous remote sensing data to optimally estimate the true values of a geophysical field of interest and obtain uncertainties for those estimates. The input data sets may have different observing characteristics, including different footprints, spatial resolutions and fields of view, orbit cycles, biases, and noise characteristics. Despite these differences, all observed data can be linked to the underlying field, and therefore to each other, by a statistical model. Differences in footprints and other geometric characteristics are accounted for by parameterizing pixel-level remote sensing observations as spatial integrals of true field values lying within pixel boundaries, plus measurement error. Both spatial and temporal correlations in the true field and in the observations are estimated and incorporated through the use of a space-time random effects (STRE) model. Once the model's parameters are estimated, it is used to derive expressions for optimal (minimum mean squared error and unbiased) estimates of the true field at any arbitrary location of interest, computed from the observations. Standard errors of these estimates are also produced, allowing confidence intervals to be constructed. The procedure is carried out on a fine spatial grid to approximate a continuous field. We demonstrate STDF by applying it to the problem of estimating CO2 concentration in the lower atmosphere using data from the Atmospheric Infrared Sounder (AIRS) and the Japanese Greenhouse Gases Observing Satellite (GOSAT) over one year for the continental US.
Static Scene Statistical Non-Uniformity Correction
2015-03-01
Acronyms: NUC, Non-Uniformity Correction; RMSE, Root Mean Squared Error; RSD, Relative Standard Deviation; S3NUC, Static Scene Statistical Non-Uniformity Correction. The RSD normalizes the standard deviation σ to the mean estimated value µ via RSD = (σ/µ) × 100. The RSD plot of the gain estimates, shown in Figure 4.1(b), indicates that after a sample size of approximately 10, the different photocount values and the inclusion…
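A minimal sketch of the RSD statistic from the fragment above; the gain data here are synthetic placeholders, not values from the report:

```python
import numpy as np

def rsd(estimates):
    # Relative Standard Deviation: RSD = (sigma / mu) * 100, in percent
    est = np.asarray(estimates, dtype=float)
    return est.std(ddof=1) / est.mean() * 100.0

rng = np.random.default_rng(0)
gains = rng.normal(loc=2.0, scale=0.1, size=100)  # hypothetical gain estimates
for n in (5, 10, 25, 50, 100):                    # RSD as the sample size grows
    print(n, round(rsd(gains[:n]), 2))
```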
Yang, Jie; Liu, Qingquan; Dai, Wei
2017-02-01
To improve the air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as an air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may be helpful to provide a relatively accurate air temperature result.
Precise calibration of spatial phase response nonuniformity arising in liquid crystal on silicon.
Xu, Jingquan; Qin, SiYi; Liu, Chen; Fu, Songnian; Liu, Deming
2018-06-15
In order to calibrate the spatial phase response nonuniformity of liquid crystal on silicon (LCoS), we propose to use a Twyman-Green interferometer to characterize the wavefront distortion due to the inherent curvature of the device. During the characterization, both the residual carrier frequency introduced by the Fourier transform evaluation method and the lens aberration are error sources. For the tilted phase error introduced by the residual carrier frequency, least mean square fitting is used to obtain the tilted phase error. Meanwhile, we use Zernike polynomial fitting based on plane mirror calibration to mitigate the lens aberration. For a typical LCoS with 1×12,288 pixels, the peak-to-valley value of the inherent wavefront distortion after calibration is approximately 0.25λ at 1550 nm, corresponding to a half-suppression of the wavefront distortion. Altogether, these efforts suppress the root mean square value of the inherent wavefront distortion to approximately λ/34.
NASA Astrophysics Data System (ADS)
Sun, Li-wei; Ye, Xin; Fang, Wei; He, Zhen-lei; Yi, Xiao-long; Wang, Yu-peng
2017-11-01
A hyperspectral imaging spectrometer has high spatial and spectral resolution, and its radiometric calibration requires knowledge of the sources used at high spectral resolution. To satisfy this source requirement, an on-orbit radiometric calibration chain is designed in this paper, based on high-accuracy spectral inversion of the calibration light source. A genetic algorithm program is implemented to optimize the channel design of the transfer radiometer while accounting for the degradation of the halogen lamp, thus realizing high-accuracy inversion of the spectral curve over the whole working time. The experimental results show that the average root mean squared error is 0.396%, the maximum root mean squared error is 0.448%, and the relative errors at all wavelengths are within 1% in the spectral range from 500 nm to 900 nm during 100 h of operating time. The design lays a foundation for the high-accuracy calibration of imaging spectrometers.
Research on the infiltration processes of lawn soils of the Babao River in the Qilian Mountain.
Li, GuangWen; Feng, Qi; Zhang, FuPing; Cheng, AiFang
2014-01-01
Using a Guelph Permeameter, the soil water infiltration processes were analyzed in the Babao River basin of the Qilian Mountains in China. The results showed that the average initial infiltration and steady infiltration rates of the soil in the upstream reaches of the Babao River are 1.93 and 0.99 cm/min, whereas those of the middle area are 0.48 cm/min and 0.21 cm/min, respectively. The infiltration processes can be divided into three stages: the rapidly changing stage (0-10 min), the slowly changing stage (10-30 min) and the stabilization stage (after 30 min). We used field data collected from lawn soils and evaluated the performances of the infiltration models of Philip, Kostiakov and Horton with the sum of squared errors, the root mean square error, the coefficient of determination, the mean error, the model efficiency and Willmott's index of agreement. The results indicated that the Kostiakov model was most suitable for studying the infiltration process in the alpine lawn soils.
Combining forecast weights: Why and how?
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim
2012-09-01
This paper proposes a procedure called forecast weight averaging, a specific combination of the forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that, under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. Through theoretical proofs and a simulation study, we show that model averaging methods such as variance model averaging, simple model averaging and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds true, marginally, when applied to empirical business and economic data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI) and Average Lending Rate (ALR) of Malaysia.
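As an illustration of the idea, the sketch below averages the weight vectors produced by three simple weighting schemes and compares mean squared forecast errors; the data, the three schemes, and the in-sample evaluation are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
T, M = 200, 3                        # periods, candidate models
y = rng.normal(size=T).cumsum()      # synthetic target series
# three imperfect forecasts of y (synthetic)
F = np.column_stack([y + rng.normal(0, s, T) for s in (0.5, 0.8, 1.2)])

# Weight vectors from different weighting schemes (hypothetical examples):
w_simple = np.full(M, 1 / M)                          # simple model averaging
mse = ((F - y[:, None]) ** 2).mean(axis=0)
w_var = (1 / mse) / (1 / mse).sum()                   # inverse-MSE ("variance") weights
w_se = (1 / np.sqrt(mse)) / (1 / np.sqrt(mse)).sum()  # inverse-standard-error weights

# Forecast weight averaging: average the weight vectors, then combine forecasts
w_avg = (w_simple + w_var + w_se) / 3
msfe = lambda w: (((F @ w) - y) ** 2).mean()
for name, w in [("simple", w_simple), ("variance", w_var),
                ("std-error", w_se), ("weight-averaged", w_avg)]:
    print(name, round(msfe(w), 4))
```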
Zhao, Haiquan; Zhang, Jiashu
2009-04-01
This paper proposes a novel, computationally efficient adaptive nonlinear equalizer based on a combination of a finite impulse response (FIR) filter and a functional link artificial neural network (CFFLANN) to compensate for linear and nonlinear distortions in nonlinear communication channels. This convex combination improves convergence speed while retaining a low steady-state error. In addition, since the CFFLANN does not need the hidden layers that exist in conventional neural-network-based equalizers, it has a simpler structure than traditional neural networks (NNs) and imposes a smaller computational burden during training. Moreover, an appropriate adaptation algorithm for the proposed equalizer is derived from the modified least mean square (MLMS) algorithm. Results obtained from the simulations clearly show that the proposed equalizer using the MLMS algorithm can effectively eliminate linear and nonlinear distortions of various intensities and offers better anti-jamming performance. Furthermore, comparisons of the mean squared error (MSE), the bit error rate (BER), and the effect of the eigenvalue ratio (EVR) of the input correlation matrix are presented.
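The following is a minimal sketch of a convex combination of a linear FIR branch and a functional-link branch, each adapted by LMS-type updates; the tanh functional-link expansion, step sizes, and synthetic channel are assumptions for illustration, not the authors' MLMS equalizer.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
N, L = 5000, 8
x = rng.normal(size=N)
# synthetic mildly nonlinear channel plus noise (illustrative)
d = np.tanh(np.convolve(x, [1.0, 0.5, -0.2], mode="same")) + 0.01 * rng.normal(size=N)

w_lin = np.zeros(L)           # FIR branch
w_fln = np.zeros(2 * L)       # functional-link branch: [u, tanh(u)] expansion (simplified)
a, mu, mu_a = 0.0, 0.01, 0.1
for n in range(L, N):
    u = x[n - L:n][::-1]
    phi = np.concatenate([u, np.tanh(u)])        # simplified functional-link expansion
    y1, y2 = w_lin @ u, w_fln @ phi
    lam = sigmoid(a)
    y = lam * y1 + (1 - lam) * y2                # convex combination of the two branches
    e = d[n] - y
    w_lin += mu * (d[n] - y1) * u                # each branch trained on its own error
    w_fln += mu * (d[n] - y2) * phi
    a += mu_a * e * (y1 - y2) * lam * (1 - lam)  # adapt the convex mixing parameter
```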
Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach
NASA Astrophysics Data System (ADS)
Bähr, Hermann; Hanssen, Ramon F.
2012-12-01
An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping, and a less reliable grid-search method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method is distinguished by its reliability and rigorous geometric modelling of the orbital error signal, but it does not consider interfering large-scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.
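A toy sketch of the network idea: each interferogram observes a difference of per-image errors, and the rank deficiency of the network (the common datum) is resolved by a minimum-norm least squares solution. Scalar errors and the noise level are illustrative assumptions; the paper's two-parameter orbital error model is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n_img = 6
pairs = [(i, j) for i in range(n_img) for j in range(i + 1, n_img)]
true_err = rng.normal(0, 1.0, n_img)            # per-image "orbit" errors (illustrative scalars)
obs = np.array([true_err[j] - true_err[i] for i, j in pairs])
obs += rng.normal(0, 0.05, len(pairs))          # interferogram-level noise

A = np.zeros((len(pairs), n_img))
for k, (i, j) in enumerate(pairs):
    A[k, i], A[k, j] = -1.0, 1.0                # each interferogram senses a difference

# Rank deficiency (arbitrary datum) resolved by the minimum-norm solution
est = np.linalg.pinv(A) @ obs
print(np.round(est - est.mean(), 3))
print(np.round(true_err - true_err.mean(), 3))  # agree up to the arbitrary datum
```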
Zhou, Ping; Wang, Chenyu; Li, Mingjie; ...
2018-01-31
In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on this, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by gradient descent. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF. A simulation example and an application to a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability than conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method yields a more desirable modeling error PDF, approximating a Gaussian distribution whose shape is high and narrow.
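A minimal one-dimensional sketch of the PDF-shaping criterion described above, assuming SciPy is available: the modeling-error PDF is estimated by KDE and compared quadratically to a narrow Gaussian target. The 2D (time-space) formulation, the WNN itself, and the gradient-descent training loop are omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def pdf_shaping_loss(errors, target_sigma=0.1, grid=np.linspace(-1, 1, 401)):
    # Quadratic deviation between the KDE of the modeling errors and a
    # narrow zero-mean Gaussian target PDF (1-D simplification).
    kde = gaussian_kde(errors)
    target = norm.pdf(grid, loc=0.0, scale=target_sigma)
    return np.trapz((kde(grid) - target) ** 2, grid)

rng = np.random.default_rng(0)
print(pdf_shaping_loss(rng.normal(0, 0.3, 500)))  # broad errors -> larger loss
print(pdf_shaping_loss(rng.normal(0, 0.1, 500)))  # matches target -> smaller loss
```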
Family caregiver adjustment and stroke survivor impairment: A path analytic model.
Pendergrass, Anna; Hautzinger, Martin; Elliott, Timothy R; Schilling, Oliver; Becker, Clemens; Pfeiffer, Klaus
2017-05-01
Depressive symptoms are a common problem among family caregivers of stroke survivors. The purpose of this study was to examine the association between care recipient's impairment and caregiver depression, and determine the possible mediating effects of caregiver negative problem-orientation, mastery, and leisure time satisfaction. The evaluated model was derived from Pearlin's stress process model of caregiver adjustment. We analyzed baseline data from 122 strained family members who were assisting stroke survivors in Germany for a minimum of 6 months and who consented to participate in a randomized clinical trial. Depressive symptoms were measured with the Center for Epidemiological Studies Depression Scale. The cross-sectional data were analyzed using path analysis. The results show an adequate fit of the model to the data, χ2(1, N = 122) = 0.17, p = .68; comparative fit index = 1.00; root mean square error of approximation: p < .01; standardized root mean square residual = 0.01. The model explained 49% of the variance in the caregiver depressive symptoms. Results indicate that caregivers at risk for depression reported a negative problem orientation, low caregiving mastery, and low leisure time satisfaction. The situation is particularly affected by the frequency of stroke survivor problematic behavior, and by the degree of their impairments in activities of daily living. The findings provide empirical support for the Pearlin's stress model and emphasize how important it is to target these mediators in health promotion interventions for family caregivers of stroke survivors. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Effect of Receiver Choosing on Point Positions Determination in Network RTK
NASA Astrophysics Data System (ADS)
Bulbul, Sercan; Inal, Cevat
2016-04-01
Nowadays, developments in GNSS techniques allow point positions to be determined in real time. Initially, point positioning was determined by RTK (Real Time Kinematic) based on a single reference station, but to avoid systematic errors in this method, the distance between the reference point and the rover receiver must be shorter than 10 km. To overcome this restriction of the RTK method, the idea of using more than one reference point was suggested, and CORS (Continuously Operating Reference Stations) networks were put into practice. Today, countries such as the USA, Germany and Japan have established CORS networks. The CORS-TR network, which has 146 reference points, was established in Turkey in 2009 and adopts the active CORS approach. In Turkey, the CORS-TR reference stations covering the whole country are interconnected, and the positions of these stations and atmospheric corrections are continuously calculated. In this study, at a selected point, RTK measurements based on CORS-TR were made with different receivers (JAVAD TRIUMPH-1, TOPCON Hiper V, MAGELLAN PRoMark 500, PENTAX SMT888-3G, SATLAB SL-600) and with different correction techniques (VRS, FKP, MAC). In the measurements, the epoch interval was taken as 5 seconds and the measurement time as 1 hour. For each receiver and each correction technique, the means and the differences between the maximum and minimum values of the measured coordinates, the root mean square errors along the coordinate axes, and the 2D and 3D positioning precisions were calculated; the results were evaluated by statistical methods and the resulting graphics were interpreted. For each receiver and each correction technique, the coordinate differences between maximum and minimum values were less than 8 cm, the root mean square errors along the coordinate axes less than ±1.5 cm, and the 2D and 3D point positioning precisions less than ±1.5 cm. At the measurement point, it was concluded that the VRS correction technique is generally better than the other correction techniques.
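A small sketch of the session statistics described above, assuming an N×3 array of local coordinates; the synthetic noise level and epoch count (720 epochs at 5 s, i.e. 1 h) are assumptions for illustration.

```python
import numpy as np

def rtk_stats(coords):
    # Per-axis max-minus-min span, per-axis RMS about the mean,
    # and 2D/3D precision for an N x 3 array of local coordinates.
    c = np.asarray(coords, dtype=float)
    span = c.max(axis=0) - c.min(axis=0)
    rms = np.sqrt(((c - c.mean(axis=0)) ** 2).mean(axis=0))
    p2d = np.sqrt(rms[0] ** 2 + rms[1] ** 2)    # horizontal (2D) precision
    p3d = np.sqrt((rms ** 2).sum())             # 3D precision
    return span, rms, p2d, p3d

rng = np.random.default_rng(3)
session = rng.normal(0, 0.008, size=(720, 3))   # 1 h at 5 s epochs, ~8 mm noise
print(rtk_stats(session))
```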
An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.
ERIC Educational Resources Information Center
De Ayala, R. J.; And Others
Expected a posteriori (EAP) estimation has a number of advantages over maximum likelihood estimation and maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…
Liu, Xue-song; Sun, Fen-fang; Jin, Ye; Wu, Yong-jiang; Gu, Zhi-xin; Zhu, Li; Yan, Dong-lan
2015-12-01
A novel method was developed for the rapid determination of multiple indicators in Corni Fructus by means of near infrared (NIR) spectroscopy. A particle swarm optimization (PSO)-based least squares support vector machine was investigated to increase the level of quality control. Calibration models for moisture, extractum, morroniside and loganin were established using the PSO-LS-SVM algorithm. The performance of the PSO-LS-SVM models was compared with partial least squares regression (PLSR) and a back propagation artificial neural network (BP-ANN). The calibration and validation results of PSO-LS-SVM were superior to those of both PLSR and BP-ANN. For the PSO-LS-SVM models, the correlation coefficients (r) of the calibrations were all above 0.942. The optimal prediction results were also achieved by the PSO-LS-SVM models, with RMSEP (root mean square error of prediction) and RSEP (relative standard error of prediction) below 1.176 and 15.5%, respectively. The results suggest that the PSO-LS-SVM algorithm has good model performance and high prediction accuracy. NIR spectroscopy has potential value for the rapid determination of multiple indicators in Corni Fructus.
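For reference, the two prediction-error metrics quoted above can be written as below; these are the standard chemometric definitions, which the paper is assumed to follow.

```python
import numpy as np

def rmsep(y_true, y_pred):
    # Root mean square error of prediction
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(((y_pred - y_true) ** 2).mean())

def rsep(y_true, y_pred):
    # Relative standard error of prediction, in percent
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.sqrt(((y_pred - y_true) ** 2).sum() / (y_true ** 2).sum())

y, yh = [5.0, 6.0, 7.0], [5.1, 5.9, 7.2]
print(rmsep(y, yh), rsep(y, yh))
```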
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yunlong; Wang, Aiping; Guo, Lei
This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error is minimized. The performance of the error-entropy minimization criterion is compared with that of mean-square-error minimization in the simulation results.
Error-Based Design Space Windowing
NASA Technical Reports Server (NTRS)
Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman
2002-01-01
Windowing of the design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) selects a region of interest by setting a requirement on the response level and checks it using global RS predictions over the design space. This approach, however, is vulnerable, since RS modeling errors may lead to zooming in on the wrong region. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of error-based DSW.
NASA Astrophysics Data System (ADS)
Gao, Qian
For both conventional radio frequency and the comparatively recent optical wireless communication systems, extensive effort has been made in academia to improve network spectrum efficiency and/or reduce the error rate. To achieve these goals, many fundamental challenges, such as power-efficient constellation design, nonlinear distortion mitigation, channel training design, and network scheduling, need to be properly addressed. In this dissertation, novel schemes are proposed to deal with specific problems falling into the categories of these challenges. Rigorous proofs and analyses are provided for each piece of work, with fair comparisons against the corresponding peer works to clearly demonstrate the advantages. The first part of this dissertation considers a multi-carrier optical wireless system employing intensity modulation (IM) and direct detection (DD). A block-wise constellation design is presented, which treats the DC bias that is conventionally used solely for biasing purposes as an information basis. Our scheme, which we term MSM-JDCM, takes advantage of the compactness of sphere packing in a higher-dimensional space, and power-efficient constellations are obtained by solving an advanced convex optimization problem. Besides the significant power gains, MSM-JDCM has many other merits: it can mitigate nonlinear distortion by including a peak-to-average power ratio (PAPR) constraint, minimize inter-symbol interference (ISI) caused by frequency-selective fading with a novel precoder designed and embedded, and further reduce the bit error rate (BER) when combined with an optimized labeling scheme. The second part addresses several optimization problems in a multi-color visible light communication system, including power-efficient constellation design, joint pre-equalizer and constellation design, and modeling of differently structured channels with cross-talk. Our novel constellation design scheme, termed CSK-Advanced, is compared with a conventional decoupled system with the same spectrum efficiency to demonstrate the power efficiency. Crucial lighting requirements are included as optimization constraints. To control nonlinear distortion, the optical peak-to-average power ratio (PAPR) of the LEDs can be individually constrained. With an SVD-based pre-equalizer designed and employed, our scheme can achieve a lower BER than counterparts applying zero-forcing (ZF) or linear minimum mean squared error (LMMSE) based post-equalizers. Besides, a binary switching algorithm (BSA) is applied to improve the BER performance. The third part looks into a problem of two-phase channel estimation in a relayed wireless network. The channel estimates in every phase are obtained by the linear minimum mean squared error (LMMSE) method. An inaccurate estimate of the relay-to-destination (RtD) channel in phase 1 can corrupt the estimate of the source-to-relay (StR) channel in phase 2. We first derive a closed-form expression for the averaged Bayesian mean-square estimation error (ABMSE) for both phase estimates in terms of the lengths of the source and relay training slots, based on which an iterative searching algorithm is then proposed that optimally allocates training slots to the two phases such that the estimation errors are balanced. Analysis shows how the ABMSE of the StD channel estimation varies with the lengths of the relay training and source training slots, the relay amplification gain, and the channel prior information, respectively.
The last part deals with a transmission scheduling problem in an uplink multiple-input multiple-output (MIMO) wireless network. Code division multiple access (CDMA) is assumed as the multiple access scheme, and pseudo-random codes are employed for the different users. We consider a heavy-traffic scenario, in which each user always has packets to transmit in its scheduled time slots. If the relay is scheduled for transmission together with the users, it operates in full-duplex mode, where the packets previously collected from the users are transmitted to the destination while new packets are being collected from the users. A novel expression for the throughput is first derived and then used to develop a scheduling algorithm that maximizes the throughput. Our full-duplex scheduling is compared with half-duplex scheduling, random access, and time division multiple access (TDMA), and simulation results illustrate its superiority. Throughput gains due to the employment of both MIMO and CDMA are observed.
Nankali, Saber; Torshabi, Ahmad Esmaili; Miandoab, Payam Samadi; Baghizadeh, Amin
2016-01-08
In external-beam radiotherapy, using external markers is one of the most reliable tools to predict tumor position, in clinical applications. The main challenge in this approach is tumor motion tracking with highest accuracy that depends heavily on external markers location, and this issue is the objective of this study. Four commercially available feature selection algorithms entitled 1) Correlation-based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief were proposed to find optimum location of external markers in combination with two "Genetic" and "Ranker" searching procedures. The performance of these algorithms has been evaluated using four-dimensional extended cardiac-torso anthropomorphic phantom. Six tumors in lung, three tumors in liver, and 49 points on the thorax surface were taken into account to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro-fuzzy inference system (ANFIS) as prediction model was considered as metric for quantitatively evaluating the performance of proposed feature selection algorithms. To do this, the thorax surface region was divided into nine smaller segments and predefined tumors motion was predicted by ANFIS using external motion data of given markers at each small segment, separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of proposed feature selection algorithms was compared, separately. For this, each tumor motion was predicted using motion data of those external markers selected by each feature selection algorithm. Duncan statistical test, followed by F-test, on final results reflected that all proposed feature selection algorithms have the same performance accuracy for lung tumors. But for liver tumors, a correlation-based feature selection algorithm, in combination with a genetic search algorithm, proved to yield best performance accuracy for selecting optimum markers.
NASA Astrophysics Data System (ADS)
Yehia, Ali M.; Mohamed, Heba M.
2016-01-01
Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture, in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, the root mean square error of calibration and the standard error of prediction. The four multivariate calibration methods could be used directly without any preliminary separation step and were successfully applied to pharmaceutical formulation analysis, showing no interference from excipients.
1984-12-01
The pure-error sum of squares is the total sum of squares at the center points minus the correction factor for the mean at the center points (SSpe = Y'Y − n₁Ȳ², where n₁ is the number of center points), and the lack-of-fit sum of squares is SSlof = SSres − SSpe. The sum of squares due to pure error estimates σ², while the sum of squares due to lack of fit estimates σ² plus a bias term if the model is inadequate. The analysis-of-variance table for response surface methodology (Source, d.f., SS, MS) reads: Regression, n, b'X'Y, b'X'Y/n; Residual, m − n, Y'Y − b'X'Y, (Y'Y − b'X'Y)/(m − n); Pure Error, n₁ − 1, Y'Y − n₁Ȳ², SSpe/(n₁ − 1)…
Wang, L; Qin, X C; Lin, H C; Deng, K F; Luo, Y W; Sun, Q R; Du, Q X; Wang, Z Y; Tuo, Y; Sun, J H
2018-02-01
To analyse the relationship between the Fourier transform infrared (FTIR) spectrum of rat spleen tissue and the postmortem interval (PMI), for PMI estimation using FTIR spectroscopy combined with data mining methods. Rats were sacrificed by cervical dislocation, and the cadavers were kept at 20 °C. FTIR spectra of the rats' spleen tissues were collected at different time points. After pretreatment, the data were analysed by data mining methods. The absorption peak intensities of the rat spleen tissue spectra changed with the PMI, while the absorption peak positions were unchanged. The results of principal component analysis (PCA) showed that the cumulative contribution rate of the first three principal components was 96%, with an obvious clustering tendency for the spectral samples at each time point. Partial least squares discriminant analysis (PLS-DA) and support vector machine classification (SVMC) effectively divided the spectral samples with different PMI into four categories (0-24 h, 48-72 h, 96-120 h and 144-168 h). The determination coefficient (R²) of the PMI estimation model established by PLS regression analysis was 0.96, and the root mean square error of calibration (RMSEC) and root mean square error of cross-validation (RMSECV) were 9.90 h and 11.39 h, respectively. In the prediction set, R² was 0.97 and the root mean square error of prediction (RMSEP) was 10.49 h. The FTIR spectrum of rat spleen tissue can be effectively analysed qualitatively and quantitatively by combining FTIR spectroscopy with data mining methods, and classification and PLS regression models can be established for PMI estimation. Copyright© by the Editorial Department of Journal of Forensic Medicine.
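A minimal sketch of the quantitative part of such a workflow, assuming scikit-learn is available; the spectra here are synthetic stand-ins, and the component count and fold count are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: FTIR spectra (n_samples x n_wavenumbers), pmi: hours (synthetic stand-ins)
rng = np.random.default_rng(0)
pmi = np.repeat([0, 24, 48, 72, 96, 120, 144, 168], 6).astype(float)
X = rng.normal(size=(pmi.size, 300)) + pmi[:, None] * 0.01  # fake PMI-dependent spectra

pls = PLSRegression(n_components=5)
pls.fit(X, pmi)
rmsec = np.sqrt(np.mean((pls.predict(X).ravel() - pmi) ** 2))   # calibration error
y_cv = cross_val_predict(pls, X, pmi, cv=10).ravel()            # cross-validation
rmsecv = np.sqrt(np.mean((y_cv - pmi) ** 2))
print(f"RMSEC={rmsec:.2f} h, RMSECV={rmsecv:.2f} h")
```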
Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models
NASA Astrophysics Data System (ADS)
Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini
2014-12-01
The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, wavelet-based multiscale performance measures for hydrological models are proposed and tested: the Multiscale Nash-Sutcliffe Criteria (MNSC) and the Multiscale Normalized Root Mean Square Error (MNRMSE). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, modeled and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real case studies included simulation results from both the process-based Soil Water Assessment Tool (SWAT) model and statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind Basins (India) were used, while for the WVC, ANN and ARMA models, data from the Cauvery River Basin (India) and the Fraser River (Canada) were used. The study also explored the effect of the choice of wavelet on multiscale model evaluation. It was found that the proposed wavelet-based performance measures, the MNSC and MNRMSE, are more reliable measures than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical), and ii) help in model calibration.
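A compact sketch of a per-scale Nash-Sutcliffe score, assuming the PyWavelets package as a stand-in for the à trous decomposition; the wavelet choice, level count, and synthetic series are illustrative assumptions.

```python
import numpy as np
import pywt

def multiscale_nse(obs, sim, wavelet="haar", level=3):
    # Nash-Sutcliffe efficiency per wavelet scale, using the stationary
    # wavelet transform; series length must be divisible by 2**level.
    co = pywt.swt(obs, wavelet, level=level)   # [(cA_n, cD_n), ..., (cA_1, cD_1)]
    cs = pywt.swt(sim, wavelet, level=level)
    scores = {}
    for lev in range(level):
        do, ds = co[lev][1], cs[lev][1]        # detail coefficients at this scale
        scores[f"scale_{level - lev}"] = (     # larger index = coarser scale
            1 - ((do - ds) ** 2).sum() / ((do - do.mean()) ** 2).sum())
    return scores

t = np.arange(512, dtype=float)
obs = np.sin(2 * np.pi * t / 64) + 0.1 * np.random.default_rng(0).normal(size=512)
sim = np.sin(2 * np.pi * (t - 2) / 64)         # simulated series with a timing error
print(multiscale_nse(obs, sim))
```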
Improving Arterial Spin Labeling by Using Deep Learning.
Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong
2018-05-01
Purpose To develop a deep learning algorithm that generates arterial spin labeling (ASL) perfusion images with higher accuracy and robustness by using a smaller number of subtraction images. Materials and Methods For ASL image generation from pair-wise subtraction, we used a convolutional neural network (CNN) as a deep learning algorithm. The ground truth perfusion images were generated by averaging six or seven pairwise subtraction images acquired with (a) conventional pseudocontinuous arterial spin labeling from seven healthy subjects or (b) Hadamard-encoded pseudocontinuous ASL from 114 patients with various diseases. CNNs were trained to generate perfusion images from a smaller number (two or three) of subtraction images and evaluated by means of cross-validation. CNNs from the patient data sets were also tested on 26 separate stroke data sets. CNNs were compared with the conventional averaging method in terms of mean square error and radiologic score by using a paired t test and/or Wilcoxon signed-rank test. Results Mean square errors were approximately 40% lower than those of the conventional averaging method for the cross-validation with the healthy subjects and patients and the separate test with the patients who had experienced a stroke (P < .001). Region-of-interest analysis in stroke regions showed that cerebral blood flow maps from CNN (mean ± standard deviation, 19.7 mL per 100 g/min ± 9.7) had smaller mean square errors than those determined with the conventional averaging method (43.2 ± 29.8) (P < .001). Radiologic scoring demonstrated that CNNs suppressed noise and motion and/or segmentation artifacts better than the conventional averaging method did (P < .001). Conclusion CNNs provided superior perfusion image quality and more accurate perfusion measurement compared with those of the conventional averaging method for generation of ASL images from pair-wise subtraction images. © RSNA, 2017.
Park, Sun-Young; Park, Eun-Ja; Suh, Hae Sun; Ha, Dongmun; Lee, Eui-Kyung
2017-08-01
Although nonpreference-based disease-specific measures are widely used in clinical studies, they cannot generate utilities for economic evaluation. A solution to this problem is to estimate utilities from disease-specific instruments using the mapping function. This study aimed to develop a transformation model for mapping the pruritus-visual analog scale (VAS) to the EuroQol 5-Dimension 3-Level (EQ-5D-3L) utility index in pruritus. A cross-sectional survey was conducted with a sample (n = 268) drawn from the general population of South Korea. Data were randomly divided into 2 groups, one for estimating and the other for validating mapping models. To select the best model, we developed and compared 3 separate models using demographic information and the pruritus-VAS as independent variables. The predictive performance was assessed using the mean absolute deviation and root mean square error in a separate dataset. Among the 3 models, model 2 using age, age squared, sex, and the pruritus-VAS as independent variables had the best performance based on the goodness of fit and model simplicity, with a log likelihood of 187.13. The 3 models had similar precision errors based on mean absolute deviation and root mean square error in the validation dataset. No statistically significant difference was observed between the mean observed and predicted values in all models. In conclusion, model 2 was chosen as the preferred mapping model. Outcomes measured as the pruritus-VAS can be transformed into the EQ-5D-3L utility index using this mapping model, which makes an economic evaluation possible when only pruritus-VAS data are available. © 2017 John Wiley & Sons, Ltd.
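A minimal sketch of fitting the functional form of model 2 above by ordinary least squares, with fabricated stand-in data; the paper's actual estimator and sample are not reproduced here.

```python
import numpy as np

# Hypothetical estimation sample: VAS (0-100), age, sex (1 = male), utility
rng = np.random.default_rng(4)
n = 134
vas, age = rng.uniform(0, 100, n), rng.uniform(20, 80, n)
sex = rng.integers(0, 2, n).astype(float)
utility = 0.95 - 0.004 * vas - 0.0005 * age + rng.normal(0, 0.05, n)  # fake data

# Model 2: utility ~ age + age^2 + sex + VAS (OLS stand-in)
X = np.column_stack([np.ones(n), age, age ** 2, sex, vas])
beta, *_ = np.linalg.lstsq(X, utility, rcond=None)
pred = X @ beta
mad = np.abs(pred - utility).mean()             # mean absolute deviation
rmse = np.sqrt(((pred - utility) ** 2).mean())  # root mean square error
print(np.round(beta, 5), round(float(mad), 4), round(float(rmse), 4))
```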
Azeez, Adeboye; Obaromi, Davies; Odeyemi, Akinwumi; Ndege, James; Muntabayi, Ruffin
2016-07-26
Tuberculosis (TB) is a deadly infectious disease caused by Mycobacterium tuberculosis. As a chronic and highly infectious disease, TB is prevalent in almost every part of the globe, and more than 95% of TB mortality occurs in low/middle-income countries. In 2014, approximately 10 million people were diagnosed with active TB and two million died from the disease. In this study, our aim was to compare the predictive powers of the seasonal autoregressive integrated moving average (SARIMA) model and a hybrid SARIMA-neural network auto-regression (SARIMA-NNAR) model of TB incidence, and to analyse the seasonality of TB incidence in South Africa. TB incidence data from January 2010 to December 2015 were extracted from the Eastern Cape Health facility report of the electronic Tuberculosis Register (ERT.Net). A SARIMA model and a combined SARIMA and neural network auto-regression (SARIMA-NNAR) model were used to analyse and predict the TB data from 2010 to 2015. The performance measures of mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean percent error (MPE), mean absolute scaled error (MASE) and mean absolute percentage error (MAPE) were applied to assess which model gave the better prediction. Though both models could practically predict TB incidence, the combined model displayed better performance. For the combined model, the Akaike information criterion (AIC), second-order AIC (AICc) and Bayesian information criterion (BIC) were 288.56, 308.31 and 299.09, respectively, lower than the SARIMA model's corresponding values of 329.02, 327.20 and 341.99. The SARIMA-NNAR model forecast a slightly higher seasonal TB incidence trend than the single model, and, with its lower AICc, provided the better TB incidence forecasts. The results also indicate the need for resolute intervention to reduce infectious disease transmission, including co-infection with HIV and other concomitant diseases, particularly at festival peak periods.
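The accuracy measures listed above have standard definitions, sketched below; the MASE scaling by the in-sample naive-forecast MAE is the usual convention and is assumed rather than taken from the paper.

```python
import numpy as np

def accuracy(y, yhat, y_insample=None):
    # Point-forecast accuracy metrics (definitions only)
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    e = y - yhat
    out = {"MSE": (e ** 2).mean(),
           "RMSE": np.sqrt((e ** 2).mean()),
           "MAE": np.abs(e).mean(),
           "MPE": 100 * (e / y).mean(),
           "MAPE": 100 * np.abs(e / y).mean()}
    if y_insample is not None:   # MASE: scale by in-sample naive-forecast MAE
        scale = np.abs(np.diff(np.asarray(y_insample, float))).mean()
        out["MASE"] = np.abs(e).mean() / scale
    return out

print(accuracy([100, 110, 120], [98, 112, 119], y_insample=[90, 95, 100, 102]))
```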
NASA Astrophysics Data System (ADS)
Hong, Jangho; Kawashima, Ayato; Hamada, Noriaki
2017-06-01
In this study, we developed a facile fabrication method to access a highly reproducible plasmonic surface enhanced Raman scattering substrate via the immobilization of gold nanoparticles on an Ultrafiltration (UF) membrane using a suction technique. This was combined with a simple and rapid analyte concentration and detection method utilizing portable Raman spectroscopy. The minimum detectable concentrations for aqueous thiabendazole standard solution and thiabendazole in orange extract are 0.01 μg/mL and 0.125 μg/g, respectively. The partial least squares (PLS) regression plot shows a good linear relationship between 0.001 and 100 μg/mL of analyte, with a root mean square error of prediction (RMSEP) of 0.294 and a correlation coefficient (R2) of 0.976 for the thiabendazole standard solution. Meanwhile, the PLS plot also shows a good linear relationship between 0.0 and 2.5 μg/g of analyte, with an RMSEP value of 0.298 and an R2 value of 0.993 for the orange peel extract. In addition to the detection of other types of pesticides in agricultural products, this highly uniform plasmonic substrate has great potential for application in various environmentally-related areas.
Lin, Lixin; Wang, Yunjia; Teng, Jiyao; Wang, Xuchen
2016-02-01
Hyperspectral estimation of soil organic matter (SOM) in coal mining regions is an important tool for enhancing fertilization in soil restoration programs. The correlation-partial least squares regression (PLSR) method effectively solves the information loss problem of correlation-multiple linear stepwise regression, but the results of the correlation analysis must be optimized to improve precision. This study considers the relationship between spectral reflectance and SOM based on the spectral reflectance curves of soil samples collected from coal mining regions. Based on the major absorption troughs in the 400-1006 nm spectral range, PLSR analysis was performed using 289 independent bands of the second derivative (SDR) with three levels and the measured SOM values. A wavelet-correlation-PLSR (W-C-PLSR) model was then constructed. By amplifying useful information that was previously obscured by noise, the W-C-PLSR model was optimal for estimating SOM content, with smaller prediction errors in both calibration (R² = 0.970, root mean square error (RMSEC) = 3.10, and mean relative error (MREC) = 8.75) and validation (RMSEV = 5.85 and MREV = 14.32) analyses, as compared with other models. The results indicate that W-C-PLSR has great potential for estimating SOM in coal mining regions.
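A rough sketch of the derivative-spectra-plus-PLSR part of such a pipeline, assuming SciPy and scikit-learn; the Savitzky-Golay settings, component count, and synthetic spectra are illustrative assumptions, and the paper's wavelet and band-selection steps are omitted.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 304))                  # stand-in reflectance, 400-1006 nm
som = rng.uniform(1, 8, 60)                     # stand-in SOM values (%)

# Second-derivative (SDR) pretreatment, then PLSR on the derivative spectra
sdr = savgol_filter(X, window_length=11, polyorder=3, deriv=2, axis=1)
pls = PLSRegression(n_components=6).fit(sdr, som)
pred = pls.predict(sdr).ravel()
rmsec = np.sqrt(((pred - som) ** 2).mean())     # calibration RMSE
mrec = 100 * np.mean(np.abs(pred - som) / som)  # mean relative error, percent
print(rmsec, mrec)
```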
Implementation of neural network for color properties of polycarbonates
NASA Astrophysics Data System (ADS)
Saeed, U.; Ahmad, S.; Alsadi, J.; Ross, D.; Rizvi, G.
2014-05-01
In the present paper, the applicability of artificial neural networks (ANN) to the color properties of plastics is investigated. The neural networks toolbox of Matlab 6.5 is used to develop and test the ANN model on a personal computer. An optimal design is completed for 10, 12, 14, 16, 18, and 20 hidden neurons on a single hidden layer with five different algorithms: batch gradient descent (GD), batch variable learning rate (GDX), resilient back-propagation (RP), scaled conjugate gradient (SCG), and Levenberg-Marquardt (LM), in a feed-forward back-propagation neural network model. The training data for the ANN are obtained from experimental measurements. There were twenty-two inputs, including resins, additives, and pigments, while the three tristimulus color values L*, a*, and b* formed the output layer. Statistical analysis in terms of the root-mean-square (RMS) error, the absolute fraction of variance (R²), and the mean square error is used to investigate the performance of the ANN. The LM algorithm with fourteen neurons in the hidden layer of the feed-forward back-propagation ANN model showed the best results in the present study. The degree of accuracy of the ANN model in reducing errors is shown to be acceptable in all the statistical analyses presented in the results. It is concluded that the ANN provides a feasible method for error reduction in specific color tristimulus values.
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Giglio, Louis
1994-01-01
A multichannel physical approach for retrieving rainfall and its vertical structure from Special Sensor Microwave/Imager (SSM/I) observations is examined. While a companion paper was devoted exclusively to the description of the algorithm, its strengths, and its limitations, the main focus of this paper is to report on the results, applicability, and expected accuracies of this algorithm. Some examples are given that compare retrieved results with ground-based radar data from different geographical regions to illustrate the performance and utility of the algorithm under distinct rainfall conditions. More quantitative validation is accomplished using two months of radar data from Darwin, Australia, and the radar network over Japan. Instantaneous comparisons at Darwin indicate that root-mean-square errors for 1.25 deg areas over water are 0.09 mm/h, compared to the mean rainfall value of 0.224 mm/h, while the correlation exceeds 0.9. Similar results are obtained over the Japanese validation site, with rms errors of 0.615 mm/h compared to the mean of 0.0880 mm/h and a correlation of 0.9. Results are less encouraging over land, with root-mean-square errors somewhat larger than the mean rain rates and correlations of only 0.71 and 0.62 for Darwin and Japan, respectively. These validation studies are further used, in combination with the theoretical treatment of expected accuracies developed in the companion paper, to define error estimates on a broader scale than the individual radar sites from which the errors were analyzed. Comparisons with simpler techniques based on either emission or scattering measurements illustrate that the current algorithm, while better correlated with the emission methods over water, cannot be reduced to either of these simpler methods.
NASA Astrophysics Data System (ADS)
Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.
2012-08-01
Due to the health impacts caused by exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. The knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of Support Vector Machine (SVM) as predictor and Partial Least Square (PLS) as a data selection tool, based on the measured values of CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS-SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS-SVM has better accuracy. In the analysis presented in this paper, statistical estimators including relative mean errors, root mean squared errors, and the mean absolute relative error have been employed to compare the performance of the models. It has been concluded that the errors decrease after size reduction, and that the coefficients of determination increase from 56-81% for the SVM model to 65-85% for the hybrid PLS-SVM model. It was also found that the hybrid PLS-SVM model requires lower computational time than the SVM model, as expected, hence supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.
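A sketch of the hybrid idea under the assumption that PLS latent scores feed an SVM regressor; the predictor count, sample size, and hyperparameters are invented for illustration, not the Tehran monitoring data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.random((500, 12))          # meteorological/pollutant predictors (placeholder)
y = rng.random(500)                # CO concentration target (placeholder)

pls = PLSRegression(n_components=4).fit(X, y)   # PLS as the data-selection step
T = pls.transform(X)                            # reduced latent variables
svm = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(T, y)  # SVM as the predictor
print("hybrid PLS-SVM fitted on", T.shape[1], "latent components")
```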
Sensitivity of Fit Indices to Misspecification in Growth Curve Models
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.
2010-01-01
This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…
NASA Astrophysics Data System (ADS)
Bunai, Tasya; Rokhmatuloh; Wibowo, Adi
2018-05-01
In this paper, two methods to retrieve the Land Surface Temperature (LST) from thermal infrared data supplied by bands 10 and 11 of the Thermal Infrared Sensor (TIRS) onboard Landsat 8 are compared. The first is the mono window algorithm developed by Qin et al. and the second is the split window algorithm by Rozenstein et al. The purpose of this study is to map the spatial distribution of land surface temperature and to determine the more accurate algorithm for retrieving it by calculating the root mean square error (RMSE). Finally, we present a comparison of the spatial distribution of land surface temperature from both algorithms; the more accurate algorithm is the split window algorithm, with an RMSE of 7.69 °C.
NASA Astrophysics Data System (ADS)
Zimina, S. V.
2015-06-01
We present the results of statistical analysis of an adaptive antenna array tuned using the least-mean-square error algorithm with quadratic constraint on the useful-signal amplification with allowance for the weight-coefficient fluctuations. Using the perturbation theory, the expressions for the correlation function and power of the output signal of the adaptive antenna array, as well as the formula for the weight-vector covariance matrix are obtained in the first approximation. The fluctuations are shown to lead to the signal distortions at the antenna-array output. The weight-coefficient fluctuations result in the appearance of additional terms in the statistical characteristics of the antenna array. It is also shown that the weight-vector fluctuations are isotropic, i.e., identical in all directions of the weight-coefficient space.
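The underlying least-mean-square update that such statistical analyses start from can be written in a few lines. This sketch omits the quadratic constraint on useful-signal amplification and uses random placeholder snapshots; array size and step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_el, n_snap, mu = 8, 2000, 0.01
d = rng.standard_normal(n_snap)                 # desired (reference) signal
X = rng.standard_normal((n_snap, n_el))         # array snapshots
w = np.zeros(n_el)

for k in range(n_snap):
    e = d[k] - X[k] @ w          # instantaneous error
    w = w + mu * e * X[k]        # stochastic-gradient (LMS) weight update

print("converged weight vector:", np.round(w, 3))
```

The gradient noise in this update is exactly the source of the weight-coefficient fluctuations the abstract analyzes.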
Hazard Function Estimation with Cause-of-Death Data Missing at Random.
Wang, Qihua; Dinse, Gregg E; Liu, Chunling
2012-04-01
Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.
The Estimation of Gestational Age at Birth in Database Studies.
Eberg, Maria; Platt, Robert W; Filion, Kristian B
2017-11-01
Studies on the safety of prenatal medication use require valid estimation of the pregnancy duration. However, gestational age is often incompletely recorded in administrative and clinical databases. Our objective was to compare different approaches to estimating the pregnancy duration. Using data from the Clinical Practice Research Datalink and Hospital Episode Statistics, we examined the following four approaches to estimating missing gestational age: (1) generalized estimating equations for longitudinal data; (2) multiple imputation; (3) estimation based on fetal birth weight and sex; and (4) conventional approaches that assigned a fixed value (39 weeks for all or 39 weeks for full term and 35 weeks for preterm). The gestational age recorded in Hospital Episode Statistics was considered the gold standard. We conducted a simulation study comparing the described approaches in terms of estimated bias and mean square error. A total of 25,929 infants from 22,774 mothers were included in our "gold standard" cohort. The smallest average absolute bias was observed for the generalized estimating equation that included birth weight, while the largest absolute bias occurred when assigning 39-week gestation to all those with missing values. The smallest mean square errors were detected with generalized estimating equations while multiple imputation had the highest mean square errors. The use of generalized estimating equations resulted in the most accurate estimation of missing gestational age when birth weight information was available. In the absence of birth weight, assignment of fixed gestational age based on term/preterm status may be the optimal approach.
Schleier, Jerome J.; Peterson, Robert K.D.; Irvine, Kathryn M.; Marshall, Lucy M.; Weaver, David K.; Preftakes, Collin J.
2012-01-01
One of the more effective ways of managing high densities of adult mosquitoes that vector human and animal pathogens is ultra-low-volume (ULV) aerosol application of insecticides. The U.S. Environmental Protection Agency uses models that are not validated for ULV insecticide applications, together with exposure assumptions, to perform its human and ecological risk assessments. Currently, there is no validated model that can accurately predict deposition of insecticides applied using ULV technology for adult mosquito management. In addition, little is known about the deposition and drift of small droplets like those used under conditions encountered during ULV applications. The objective of this study was to perform field studies to measure environmental concentrations of insecticides and to develop a validated model to predict the deposition of ULV insecticides. The final regression model was selected by minimizing the Bayesian Information Criterion, and its prediction performance was evaluated using k-fold cross validation. The coefficients for formulation density and for the density-CMD interaction were the largest in the model. The results showed that as density of the formulation decreases, deposition increases. The interaction of density and CMD showed that higher density formulations and larger droplets resulted in greater deposition. These results are supported by the aerosol physics literature. A k-fold cross validation demonstrated that the mean square error of the selected regression model is not biased, and the mean square error and mean square prediction error indicated good predictive ability.
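A generic k-fold cross validation loop of the kind described, with stand-in columns for formulation density, droplet size (CMD), and their interaction; the data-generating function and coefficients are invented, not the field data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(4)
density = rng.uniform(0.8, 1.1, 200)           # formulation density (placeholder)
cmd = rng.uniform(5, 30, 200)                  # droplet size proxy (placeholder)
X = np.column_stack([density, cmd, density * cmd])
y = 2 - 3 * density + 0.05 * density * cmd + rng.normal(0, 0.1, 200)

mse_cv = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=4).split(X):
    model = LinearRegression().fit(X[train], y[train])
    mse_cv.append(np.mean((model.predict(X[test]) - y[test]) ** 2))
print(f"mean 5-fold MSE = {np.mean(mse_cv):.4f}")
```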
14 CFR 29.1323 - Airspeed indicating system.
Code of Federal Regulations, 2010 CFR
2010-01-01
... minimum practicable instrument calibration error when the corresponding pitot and static pressures are... pitot tube or an equivalent means of preventing malfunction due to icing. [Doc. No. 5084, 29 FR 16150...
Building on crossvalidation for increasing the quality of geostatistical modeling
Olea, R.A.
2012-01-01
The random function is a mathematical model commonly used in the assessment of uncertainty associated with a spatially correlated attribute that has been partially sampled. There are multiple algorithms for modeling such random functions, all sharing the requirement of specifying various parameters that have critical influence on the results. The importance of finding ways to compare the methods and setting parameters to obtain results that better model uncertainty has increased as these algorithms have grown in number and complexity. Crossvalidation has been used in spatial statistics, mostly in kriging, for the analysis of mean square errors. An appeal of this approach is its ability to work with the same empirical sample available for running the algorithms. This paper goes beyond checking estimates by formulating a function sensitive to conditional bias. Under ideal conditions, such a function turns into a straight line, which can be used as a reference for preparing measures of performance. Applied to kriging, deviations from the ideal line provide sensitivity to the semivariogram lacking in crossvalidation of kriging errors and are more sensitive to conditional bias than analyses of errors. In terms of stochastic simulation, in addition to finding better parameters, the deviations allow comparison of the realizations resulting from the applications of different methods. Examples show improvements of about 30% in the deviations and approximately 10% in the square root of mean square errors between reasonable starting modelling and the solutions according to the new criteria. © 2011 US Government.
Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches
NASA Astrophysics Data System (ADS)
Mohammed, E.; Wang, S.; Yu, J.
2017-05-01
Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid VSTWPP method based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS) and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: (1) transform the time-series data of actual wind power into the power ratio, and then predict the power ratio; (2) use the predicted power ratio to predict the wind power. Besides, the proposed method includes two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The predictions are tested comparatively against an auto-regressive moving average (ARMA) model in terms of predicted values and errors. The validity of the proposed hybrid method is confirmed by error analysis using the probability density function (PDF), mean absolute percent error (MAPE), and mean square error (MSE). Meanwhile, comparison of the correlation coefficients between the actual and predicted values for different prediction times and windows confirms that the MSP approach using the hybrid model is the most accurate when compared with the SSP approach and ARMA.
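The two error measures named above, written out directly; the actual and predicted arrays are placeholders standing in for wind power series.

```python
import numpy as np

actual = np.array([10.2, 11.5, 9.8, 12.0, 10.9])      # placeholder wind power
predicted = np.array([10.0, 11.9, 9.5, 12.4, 10.6])   # placeholder forecasts

mape = np.mean(np.abs((actual - predicted) / actual)) * 100   # percent
mse = np.mean((actual - predicted) ** 2)
print(f"MAPE = {mape:.2f}%, MSE = {mse:.4f}")
```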
Model assessment using a multi-metric ranking technique
NASA Astrophysics Data System (ADS)
Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.
2017-12-01
Validation comparison of multiple models presents challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and identifying adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as rank correlation, were also explored, but were removed when the information was discovered to be generally duplicative of other metrics. While equal weights are applied, the weights could be altered depending on preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts, instead of distance, along-track, and cross-track errors, is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process, but were found useful in an independent context, and will be briefly reported.
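A toy version of a weighted tally: each model is ranked per metric, ranks are multiplied by (equal) weights and summed, and the lowest total wins. The metric values below are invented for illustration.

```python
import numpy as np

models = ["model_A", "model_B", "model_C"]
# rows = models, columns = metrics where lower is better (e.g., |bias|, RMSD)
scores = np.array([[0.30, 1.2],
                   [0.25, 1.5],
                   [0.40, 1.1]])
weights = np.ones(scores.shape[1])            # equal weights; could be altered

ranks = scores.argsort(axis=0).argsort(axis=0) + 1   # 1 = best per metric
totals = (ranks * weights).sum(axis=1)
for model, total in sorted(zip(models, totals), key=lambda p: p[1]):
    print(model, total)
```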
Prediction of ethanol in bottled Chinese rice wine by NIR spectroscopy
NASA Astrophysics Data System (ADS)
Ying, Yibin; Yu, Haiyan; Pan, Xingxiang; Lin, Tao
2006-10-01
To evaluate the applicability of non-invasive visible and near infrared (VIS-NIR) spectroscopy for determining the ethanol concentration of Chinese rice wine in square brown glass bottles, transmission spectra of 100 bottled Chinese rice wine samples were collected in the spectral range of 350-1200 nm. Statistical equations were established between the reference data and VIS-NIR spectra by the partial least squares (PLS) regression method. The performance of three kinds of mathematical treatment of the spectra (original spectra, first derivative spectra, and second derivative spectra) was also discussed. The PLS models of the original spectra produced better results, with a higher correlation coefficient in calibration (R cal) of 0.89, a lower root mean square error of calibration (RMSEC) of 0.165, and a lower root mean square error of cross validation (RMSECV) of 0.179. Using the original spectra, PLS models for ethanol concentration prediction were developed. The R cal and the correlation coefficient in validation (R val) were 0.928 and 0.875, respectively; and the RMSEC and the root mean square error of prediction (RMSEP) were 0.135 (%, v/v) and 0.177 (%, v/v), respectively. The results demonstrated that VIS-NIR spectroscopy could be used to predict ethanol concentration in bottled Chinese rice wine.
NASA Technical Reports Server (NTRS)
Avis, L. M.; Green, R. N.; Suttles, J. T.; Gupta, S. K.
1984-01-01
Computer simulations of a least squares estimator operating on the ERBE scanning channels are discussed. The estimator is designed to minimize the errors produced by nonideal spectral response to spectrally varying and uncertain radiant input. The three ERBE scanning channels cover a shortwave band, a longwave band, and a "total" band, from which the pseudoinverse spectral filter estimates the radiance components in the shortwave and longwave bands. The radiance estimator draws on instantaneous field of view (IFOV) scene type information supplied by another algorithm of the ERBE software, and on a priori probabilistic models of the responses of the scanning channels to the IFOV scene types for given Sun-scene-spacecraft geometry. It is found that the pseudoinverse spectral filter is stable, tolerant of errors in scene identification and in channel response modeling, and, in the absence of such errors, yields minimum variance and essentially unbiased radiance estimates.
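A toy illustration of the pseudoinverse-filter step: given modeled channel responses for three channels over two bands, recover the band radiances from the channel counts by least squares. The response matrix and radiances below are invented, not the ERBE channel model.

```python
import numpy as np

A = np.array([[0.95, 0.05],    # shortwave channel response (placeholder)
              [0.10, 0.90],    # longwave channel response (placeholder)
              [1.00, 1.00]])   # "total" channel response (placeholder)
true = np.array([120.0, 250.0])              # SW, LW radiances
counts = A @ true + np.random.default_rng(5).normal(0, 1.0, 3)

estimate = np.linalg.pinv(A) @ counts        # pseudoinverse estimator
print("estimated SW/LW radiances:", np.round(estimate, 1))
```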
NASA Technical Reports Server (NTRS)
Melbourne, William G.
1986-01-01
In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
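The fully sampled case described above is textbook generalized least squares: the weighting matrix is the inverse of the non-diagonal covariance of the differenced measurement errors. This sketch uses an invented design matrix and a tridiagonal covariance purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
H = rng.standard_normal((10, 3))             # design matrix (placeholder)
x_true = np.array([1.0, -2.0, 0.5])
# colored errors: differencing induces off-diagonal covariance terms
C = 0.5 * np.eye(10) + 0.25 * np.eye(10, k=1) + 0.25 * np.eye(10, k=-1)
y = H @ x_true + np.linalg.cholesky(C) @ rng.standard_normal(10)

W = np.linalg.inv(C)                         # weighting from inverted covariance
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)
print("GLS estimate:", np.round(x_hat, 3))
```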
Wu, Jibo
2016-01-01
In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. Its mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is illustrated by a simulation study and a numerical example.
The microcomputer scientific software series 3: general linear model--analysis of variance.
Harold M. Rauscher
1985-01-01
A BASIC language set of programs, designed for use on microcomputers, is presented. This set of programs will perform the analysis of variance for any statistical model describing either balanced or unbalanced designs. The program computes and displays the degrees of freedom, Type I sum of squares, and the mean square for the overall model, the error, and each factor...
The GEOS Ozone Data Assimilation System: Specification of Error Statistics
NASA Technical Reports Server (NTRS)
Stajner, Ivanka; Riishojgaard, Lars Peter; Rood, Richard B.
2000-01-01
A global three-dimensional ozone data assimilation system has been developed at the Data Assimilation Office of the NASA/Goddard Space Flight Center. The Total Ozone Mapping Spectrometer (TOMS) total ozone and the Solar Backscatter Ultraviolet (SBUV or SBUV/2) partial ozone profile observations are assimilated. The assimilation, into an off-line ozone transport model, is done using the global Physical-space Statistical Analysis Scheme (PSAS). This system became operational in December 1999. A detailed description of the statistical analysis scheme, and in particular the forecast and observation error covariance models, is given. A new global anisotropic horizontal forecast error correlation model accounts for a varying distribution of observations with latitude. Correlations are largest in the zonal direction in the tropics, where data are sparse. The forecast error variance model is proportional to the ozone field. The forecast error covariance parameters were determined by maximum likelihood estimation. The error covariance models are validated using chi-squared statistics. The analyzed ozone fields in the winter of 1992 are validated against independent observations from ozone sondes and the Halogen Occultation Experiment (HALOE). There is better than 10% agreement between mean HALOE and analysis fields between 70 and 0.2 hPa. The global root-mean-square (RMS) difference between TOMS observed and forecast values is less than 4%. The global RMS difference between SBUV observed and analyzed ozone between 50 and 3 hPa is less than 15%.
Controlled sound field with a dual layer loudspeaker array
NASA Astrophysics Data System (ADS)
Shin, Mincheol; Fazi, Filippo M.; Nelson, Philip A.; Hirono, Fabio C.
2014-08-01
Controlled sound interference has been extensively investigated using a prototype dual layer loudspeaker array comprising 16 loudspeakers. Results are presented for measures of array performance such as input signal power, directivity of sound radiation, and accuracy of sound reproduction resulting from the application of conventional control methods such as minimization of mean squared pressure error, maximization of energy difference, and minimization of weighted pressure error and energy. Procedures for selecting the tuning parameters are also introduced. With these conventional concepts, aimed at the production of acoustically bright and dark zones, all the control methods require a trade-off between radiation directivity and reproduction accuracy in the bright zone. An alternative solution is proposed which can achieve better performance on the presented measures simultaneously by inserting a low-priority zone named the "gray" zone. This involves the weighted minimization of mean-squared errors in the bright and dark zones together with the gray zone, in which the minimization error is given less importance. The result is a directional bright zone in which the accuracy of sound reproduction is maintained with less required input power. The results of simulations and experiments are shown to be in excellent agreement.
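One common way to express such a weighted bright/gray/dark-zone minimization is regularized weighted pressure matching; the sketch below is a generic formulation under that assumption, not the authors' exact cost function, with a random complex transfer matrix as placeholder data.

```python
import numpy as np

rng = np.random.default_rng(11)
n_src, n_bright, n_gray, n_dark = 16, 12, 6, 12
n_pts = n_bright + n_gray + n_dark
G = rng.standard_normal((n_pts, n_src)) + 1j * rng.standard_normal((n_pts, n_src))
p_t = np.concatenate([np.ones(n_bright), np.zeros(n_gray + n_dark)])  # targets

w = np.concatenate([np.full(n_bright, 1.0),
                    np.full(n_gray, 0.1),     # gray zone: low-priority weight
                    np.full(n_dark, 1.0)])
W = np.diag(w)
lam = 1e-3                                    # Tikhonov regularization
# minimize || W^(1/2) (G q - p_t) ||^2 + lam ||q||^2 in closed form
q = np.linalg.solve(G.conj().T @ W @ G + lam * np.eye(n_src),
                    G.conj().T @ W @ p_t)
print("required input power:", np.real(q.conj() @ q))
```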
Cellular traction force recovery: An optimal filtering approach in two-dimensional Fourier space.
Huang, Jianyong; Qin, Lei; Peng, Xiaoling; Zhu, Tao; Xiong, Chunyang; Zhang, Youyi; Fang, Jing
2009-08-21
Quantitative estimation of cellular traction has significant physiological and clinical implications. As an inverse problem, traction force recovery is essentially susceptible to noise in the measured displacement data. In the traditional procedure of Fourier transform traction cytometry (FTTC), noise amplification accompanies the force reconstruction, and small tractions cannot be recovered from a displacement field with a low signal-to-noise ratio (SNR). To improve the FTTC process, we develop an optimal filtering scheme to suppress the noise in the force reconstruction procedure. In the framework of Wiener filtering theory, four filtering parameters are introduced in two-dimensional Fourier space and their analytical expressions are derived in terms of the minimum-mean-squared-error (MMSE) optimization criterion. The optimal filtering approach is validated with simulations and with experimental data associated with the adhesion of single cardiac myocytes to an elastic substrate. The results indicate that the proposed method can greatly enhance the SNR of the recovered forces and reveal tiny tractions in cell-substrate interaction.
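A bare-bones Wiener-type filter in 2-D Fourier space, illustrating the MMSE gain P_s/(P_s + P_n) per frequency bin. Here the signal spectrum is taken from the known toy signal, which is never available in practice; the paper instead derives four filter parameters analytically.

```python
import numpy as np

rng = np.random.default_rng(7)
signal = np.outer(np.hanning(64), np.hanning(64))      # smooth "displacement" field
noisy = signal + 0.05 * rng.standard_normal((64, 64))

P_signal = np.abs(np.fft.fft2(signal)) ** 2   # would be modeled, not known, in practice
P_noise = (0.05 ** 2) * noisy.size            # white-noise power per FFT bin
gain = P_signal / (P_signal + P_noise)        # Wiener gain in Fourier space

restored = np.real(np.fft.ifft2(gain * np.fft.fft2(noisy)))
print("residual RMS:", np.sqrt(np.mean((restored - signal) ** 2)))
```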
Modeling the Atmospheric Phase Effects of a Digital Antenna Array Communications System
NASA Technical Reports Server (NTRS)
Tkacenko, A.
2006-01-01
In an antenna array system such as that used in the Deep Space Network (DSN) for satellite communication, it is often necessary to account for the effects of the atmosphere. Typically, the atmosphere induces amplitude and phase fluctuations on the transmitted downlink signal that invalidate the assumed stationarity of the signal model. The degree to which these perturbations affect the stationarity of the model depends both on parameters of the atmosphere, including wind speed and turbulence strength, and on parameters of the communication system, such as the sampling rate used. In this article, we focus on modeling the atmospheric phase fluctuations in a digital antenna array communications system. Based on a continuous-time statistical model for the atmospheric phase effects, we show how to obtain a related discrete-time model based on sampling the continuous-time process. The effects of the nonstationarity of the resulting signal model are investigated using the sample matrix inversion (SMI) algorithm for minimum mean-squared error (MMSE) equalization of the received signal.
An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).
Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling
2018-04-17
Aimed at a low-energy consumption of Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides compressive encoder and real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity when compared with traditional methods, but it also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation
NASA Astrophysics Data System (ADS)
Sekhar, S. Chandra; Sreenivas, T. V.
2004-12-01
We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR).
NASA Astrophysics Data System (ADS)
Imani, Moslem; You, Rey-Jer; Kuo, Chung-Yen
2014-10-01
Sea level forecasting at various time intervals is of great importance in water supply management. Evolutionary artificial intelligence (AI) approaches have been accepted as appropriate tools for modeling complex nonlinear phenomena in water bodies. In this study, we investigated the ability of two AI techniques, support vector machine (SVM), which is mathematically well-founded and provides new insights into function approximation, and gene expression programming (GEP), to forecast Caspian Sea level anomalies using satellite altimetry observations from June 1992 to December 2013. SVM demonstrated the best performance in predicting Caspian Sea level anomalies, given the minimum root mean square error (RMSE = 0.035) and maximum coefficient of determination (R2 = 0.96) during the prediction periods. A comparison between the proposed AI approaches and the cascade correlation neural network (CCNN) model also shows the superiority of the GEP and SVM models over the CCNN.
MIMO channel estimation and evaluation for airborne traffic surveillance in cellular networks
NASA Astrophysics Data System (ADS)
Vahidi, Vahid; Saberinia, Ebrahim
2018-01-01
A channel estimation (CE) procedure based on compressed sensing is proposed to estimate the multiple-input multiple-output sparse channel for traffic data transmission from drones to ground stations. The proposed procedure consists of an offline phase and a real-time phase. In the offline phase, a pilot arrangement method, which considers the interblock and block mutual coherence simultaneously, is proposed. The real-time phase contains three steps. In the first step, it obtains an a priori estimate of the channel by block orthogonal matching pursuit; afterward, it utilizes that estimated channel to compute the linear minimum mean square error estimate of the received pilots. Finally, block compressive sampling matching pursuit utilizes the enhanced received pilots to estimate the channel more accurately. The performance of the CE procedure is evaluated by simulating the transmission of traffic data through the communication channel and evaluating its fidelity for car detection after demodulation. Simulation results indicate that the proposed CE technique considerably enhances the performance of car detection in a traffic image.
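The linear MMSE step in the middle of such a pipeline has the standard closed form h_hat = R_h A^H (A R_h A^H + sigma^2 I)^(-1) y. This sketch assumes an invented pilot observation matrix and an identity channel covariance, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)
n_pilots, n_taps, sigma2 = 16, 8, 0.01
A = rng.standard_normal((n_pilots, n_taps))      # pilot observation matrix (placeholder)
R_h = np.eye(n_taps)                             # assumed channel covariance
h = rng.standard_normal(n_taps)                  # true channel taps
y = A @ h + np.sqrt(sigma2) * rng.standard_normal(n_pilots)

# LMMSE gain and estimate
G = R_h @ A.T @ np.linalg.inv(A @ R_h @ A.T + sigma2 * np.eye(n_pilots))
h_hat = G @ y
print("LMMSE error norm:", np.linalg.norm(h_hat - h))
```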
NASA Astrophysics Data System (ADS)
Natarajan, Ajay; Hansen, John H. L.; Arehart, Kathryn Hoberg; Rossi-Katz, Jessica
2005-12-01
This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
Mishra, Vishal
2015-01-01
The interchange of the protons with the cell wall-bound calcium and magnesium ions at the interface of solution/bacterial cell surface in the biosorption system at various concentrations of protons has been studied in the present work. A mathematical model for establishing the correlation between concentration of protons and active sites was developed and optimized. The sporadic limited residence time reactor was used to titrate the calcium and magnesium ions at the individual data point. The accuracy of the proposed mathematical model was estimated using error functions such as nonlinear regression, adjusted nonlinear regression coefficient, the chi-square test, P-test and F-test. The values of the chi-square test (0.042-0.017), P-test (<0.001-0.04), sum of square errors (0.061-0.016), root mean square error (0.01-0.04) and F-test (2.22-19.92) reported in the present research indicated the suitability of the model over a wide range of proton concentrations. The zeta potential of the bacterium surface at various concentrations of protons was observed to validate the denaturation of active sites.
Modeling error analysis of stationary linear discrete-time filters
NASA Technical Reports Server (NTRS)
Patel, R.; Toda, M.
1977-01-01
The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter, when only the range of errors in the elements of the model matrices is available.
Sloat, J.V.; Gain, W.S.
1995-01-01
Index-velocity data collected with acoustic velocity meters, stage data, and cross-sectional area data were used to calculate discharge at three low-velocity, tidal streamflow stations in northeast Florida. Discharge at the three streamflow stations was computed as the product of the channel cross-sectional area and the mean velocity as determined from an index velocity measured in the stream using an acoustic velocity meter. The tidal streamflow stations used in the study were: Six Mile Creek near Picolata, Fla.; Dunns Creek near Satsuma, Fla.; and the St. Johns River at Buffalo Bluff. Cross-sectional areas at the measurement sections ranged from about 3,000 square feet at Six Mile Creek to about 18,500 square feet at St. Johns River at Buffalo Bluff. Physical characteristics for all three streams were similar except for drainage area. The topography primarily is low-relief, swampy terrain; stream velocities ranged from about -2 to 2 feet per second; and the average change in stage was about 1 foot. Instantaneous discharge was measured using a portable acoustic current meter at each of the three streams to develop a relation between the mean velocity in the stream and the index velocity measured by the acoustic velocity meter. Using least-squares linear regression, a simple linear relation between mean velocity and index velocity was determined. Index velocity was the only significant linear predictor of mean velocity for Six Mile Creek and St. Johns River at Buffalo Bluff. For Dunns Creek, both index velocity and stage were used to develop a multiple-linear predictor of mean velocity. Stage-area curves for each stream were developed from bathymetric data. Instantaneous discharge was computed by multiplying results of relations developed for cross-sectional area and mean velocity. Principal sources of error in the estimated discharge are identified as: (1) instrument errors associated with measurement of stage and index velocity, (2) errors in the representation of mean daily stage and index velocity due to natural variability over time and space, and (3) errors in cross-sectional area and mean-velocity ratings based on stage and index velocity. Standard errors for instantaneous discharge for the median cross-sectional area for Six Mile Creek, Dunns Creek, and St. Johns River at Buffalo Bluff were 94, 360, and 1,980 cubic feet per second, respectively. Standard errors for mean daily discharge for the median cross-sectional area for Six Mile Creek, Dunns Creek, and St. Johns River at Buffalo Bluff were 25, 65, and 455 cubic feet per second, respectively. Mean daily discharge at the three sites ranged from about -500 to 1,500 cubic feet per second at Six Mile Creek and Dunns Creek and from about -500 to 15,000 cubic feet per second on the St. Johns River at Buffalo Bluff. For periods of high discharge, the AVM index-velocity method tended to produce estimates accurate to within 2 to 6 percent. For periods of moderate discharge, errors in discharge may increase to more than 50 percent. At low flows, errors as a percentage of discharge increase toward infinity.
Statistical summaries of water-quality data for two coal areas of Jackson County, Colorado
Kuhn, Gerhard
1982-01-01
Statistical summaries of water-quality data are compiled for eight streams in two separate coal areas of Jackson County, Colo. The quality-of-water data were collected from October 1976 to September 1980. For inorganic constituents, the maximum, minimum, and mean concentrations, as well as other statistics are presented; for minor elements, only the maximum, minimum, and mean values are included. Least-squares equations (regressions) are also given relating specific conductance of the streams to the concentration of the major ions. The observed range of specific conductance was 85 to 1,150 micromhos per centimeter for the eight sites. (USGS)
Marchetti, Bárbara V; Candotti, Cláudia T; Raupp, Eduardo G; Oliveira, Eduardo B C; Furlanetto, Tássia S; Loss, Jefferson F
The purpose of this study was to assess a radiographic method for spinal curvature evaluation in children, based on spinous processes, and identify its normality limits. The sample consisted of 90 radiographic examinations of the spines of children in the sagittal plane. Thoracic and lumbar curvatures were evaluated using angular (apex angle [AA]) and linear (sagittal arrow [SA]) measurements based on the spinous processes. The same curvatures were also evaluated using the Cobb angle (CA) method, which is considered the gold standard. For concurrent validity (AA vs CA), Pearson's product-moment correlation coefficient, root-mean-square error, Pitman-Morgan test, and Bland-Altman analysis were used. For reproducibility (AA, SA, and CA), the intraclass correlation coefficient, standard error of measurement, and minimal detectable change measurements were used. A significant correlation was found between CA and AA measurements, as was a low root-mean-square error. The mean difference between the measurements was 0° for thoracic and lumbar curvatures, and the mean standard deviations of the differences were ±5.9° and 6.9°, respectively. The intraclass correlation coefficients of AA and SA were similar to or higher than the gold standard (CA). The standard error of measurement and minimal detectable change of the AA were always lower than the CA. This study determined the concurrent validity, as well as intra- and interrater reproducibility, of the radiographic measurements of kyphosis and lordosis in children. Copyright © 2017. Published by Elsevier Inc.
Koch, Cosima; Posch, Andreas E; Goicoechea, Héctor C; Herwig, Christoph; Lendl, Bernhard
2014-01-07
This paper presents the quantification of Penicillin V and phenoxyacetic acid, a precursor, inline during Penicillium chrysogenum fermentations by FTIR spectroscopy with partial least squares (PLS) regression and multivariate curve resolution-alternating least squares (MCR-ALS). First, the applicability of an attenuated total reflection FTIR fiber optic probe was assessed offline by measuring standards of the analytes of interest and investigating matrix effects of the fermentation broth. Then measurements were performed inline during four fed-batch fermentations, with online HPLC for the determination of Penicillin V and phenoxyacetic acid as the reference analysis. PLS and MCR-ALS models were built using these data and validated by comparison of single analyte spectra with the selectivity ratio of the PLS models and the extracted spectral traces of the MCR-ALS models, respectively. The achieved root mean square errors of cross-validation for the PLS regressions were 0.22 g L(-1) for Penicillin V and 0.32 g L(-1) for phenoxyacetic acid, and the root mean square errors of prediction for MCR-ALS were 0.23 g L(-1) for Penicillin V and 0.15 g L(-1) for phenoxyacetic acid. A general work-flow for building and assessing chemometric regression models for the quantification of multiple analytes in bioprocesses by FTIR spectroscopy is given. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
Liu, Yan-de; Ying, Yi-bin; Fu, Xia-ping
2005-11-01
Nondestructive methods for quantifying the sugar content (SC) and available acid (VA) of intact apples using diffuse near-infrared reflectance and optical fiber sensing techniques were explored in the present research. The standard sample sets and prediction models were established by partial least squares (PLS) analysis. A total of 120 Shandong Fuji apples were tested in the wave number range of 12,500-4,000 cm(-1) using Fourier transform near-infrared spectroscopy. The results indicated that the nondestructive quantification of SC and VA gave high correlation coefficients of 0.970 and 0.906, low root mean square errors of prediction (RMSEP) of 0.272 and 0.0562, low root mean square errors of calibration (RMSEC) of 0.261 and 0.0677, and small differences between RMSEP and RMSEC of 0.011 and 0.0115. It was suggested that the diffuse near-infrared reflectance technique is feasible for nondestructive determination of apple sugar content in the wave number range of 10,341-5,461 cm(-1) and of available acid in the wave number range of 10,341-3,818 cm(-1).
Waskitho, Dri; Lukitaningsih, Endang; Sudjadi; Rohman, Abdul
2016-01-01
Analysis of lard extracted from a lipstick formulation containing castor oil has been performed using an FTIR spectroscopic method combined with multivariate calibration. Three different extraction methods were compared, namely saponification followed by liquid/liquid extraction with hexane/dichloromethane/ethanol/water, saponification followed by liquid/liquid extraction with dichloromethane/ethanol/water, and the Bligh & Dyer method using chloroform/methanol/water as extracting solvent. Qualitative and quantitative analyses of lard were performed using principal component analysis (PCA) and partial least squares (PLS) analysis, respectively. The results showed that, in all samples prepared by the three extraction methods, PCA was capable of identifying lard in the wavenumber region of 1200-800 cm(-1), with the best result obtained by the Bligh & Dyer method. Furthermore, PLS analysis in the same wavenumber region used for qualification showed that Bligh & Dyer was the most suitable extraction method, with the highest coefficient of determination (R(2)) and the lowest root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP) values.
Fadzillah, Nurrulhidayah Ahmad; Man, Yaakob bin Che; Rohman, Abdul; Rosman, Arieff Salleh; Ismail, Amin; Mustafa, Shuhaimi; Khatib, Alfi
2015-01-01
The authentication of food products regarding the presence of components not allowed for certain religions, such as lard, is very important. In this study, we used proton Nuclear Magnetic Resonance ((1)H-NMR) spectroscopy for the analysis of butter adulterated with lard by simultaneous quantification of all proton-bearing compounds, and consequently all relevant sample classes. Since the spectra obtained were too complex to be analyzed visually by the naked eye, classification of the spectra was carried out. The multivariate calibration of partial least squares (PLS) regression was used for modelling the relationship between the actual and predicted values of lard. The model yielded the highest regression coefficient (R(2)) of 0.998 and the lowest root mean square error of calibration (RMSEC) of 0.0091% and root mean square error of prediction (RMSEP) of 0.0090, respectively. Cross validation testing evaluated the predictive power of the model. The PLS model was shown to be a good model, as the intercepts of R(2)Y and Q(2)Y were 0.0853 and -0.309, respectively.
Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard
2011-01-01
In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables regression (EIV) allows for correction of bias caused by measurement error in predictor variables, based on knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by standard procedures. Results of the simulations show that Ordinary Least Squares (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS for estimating the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
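A short Monte Carlo sketch of the attenuation-bias argument: OLS on a noisily measured exposure shrinks the slope toward zero by the reliability ratio lambda = var(true)/var(observed), and dividing by that ratio restores it. Here the ratio is computed from the simulated truth; in practice it would come from the instrument's reported uncertainty. All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(9)
n, beta = 5000, 0.8
x_true = rng.normal(0, 1, n)                  # true bone lead (unobserved)
x_obs = x_true + rng.normal(0, 0.6, n)        # KXRF-like measurement with error
y = beta * x_true + rng.normal(0, 1, n)       # health outcome

b_ols = np.cov(x_obs, y, bias=True)[0, 1] / np.var(x_obs)   # attenuated slope
reliability = np.var(x_true) / np.var(x_obs)                # lambda
print(f"OLS slope: {b_ols:.3f}, corrected: {b_ols / reliability:.3f}, true: {beta}")
```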
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
Li, Libo; Bentler, Peter M
2011-06-01
MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of approximation (RMSEA) pairs. In this article, we develop a new method that quantifies those chosen RMSEA pairs and allows a quantitative comparison of them. Our method proposes the use of single RMSEA values to replace the choice of RMSEA pairs for model comparison and power analysis, thus avoiding the differential meaning of the chosen RMSEA pairs inherent in the approach of MacCallum et al. (2006). With this choice, the conventional cutoff values in model overall evaluation can directly be transferred and applied to the evaluation and power analysis of model differences. © 2011 American Psychological Association
Tom, Stephanie; Frayne, Mark; Manske, Sarah L; Burghardt, Andrew J; Stok, Kathryn S; Boyd, Steven K; Barnabe, Cheryl
2016-10-01
The position-dependence of a method to measure the joint space of metacarpophalangeal (MCP) joints using high-resolution peripheral quantitative computed tomography (HR-pQCT) was studied. Cadaveric MCP joints were imaged at 7 flexion angles between 0 and 30 degrees. The variability in reproducibility for mean, minimum, and maximum joint space widths and volume measurements was calculated for increasing degrees of flexion. Root mean square coefficient of variation values were < 5% under 20 degrees of flexion for mean, maximum, and volumetric joint spaces. Values for minimum joint space width were optimized under 10 degrees of flexion. MCP joint space measurements should be acquired at < 10 degrees of flexion in longitudinal studies.
Yehia, Ali M; Mohamed, Heba M
2016-01-05
Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS), and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated, and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA), and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration, and standard error of prediction. The four multivariate calibration methods could be used directly without any preliminary separation step and were successfully applied to pharmaceutical formulation analysis, showing no interference from excipients. Copyright © 2015 Elsevier B.V. All rights reserved.
Stack Number Influence on the Accuracy of ASTER GDEM (V2)
NASA Astrophysics Data System (ADS)
Mirzadeh, S. M. J.; Alizadeh Naeini, A.; Fatemi, S. B.
2017-09-01
In this research, the influence of stack number (STKN) on the accuracy of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global DEM (GDEM) has been investigated. For this purpose, two data sets of ASTER and reference DEMs from two study areas with varied topography (Bomehen and Tazehabad) were used. The results show that in both study areas, a STKN of 19 yields the minimum error, although this minimum differs little from the errors at other STKN values. The analysis of slope, STKN, and error values shows that there is no strong correlation between these parameters in either study area. For example, the mean absolute error increases with rougher topography and with larger slope and height values on cells, but changes in STKN have no important effect on the error values. Furthermore, at high STKN values, the effect of slope on elevation accuracy practically disappears. Also, there is no strong correlation between the residuals and STKN in the ASTER GDEM.
Mendyk, Aleksander; Güres, Sinan; Jachowicz, Renata; Szlęk, Jakub; Polak, Sebastian; Wiśniowska, Barbara; Kleinebudde, Peter
2015-01-01
The purpose of this work was to develop a mathematical model of the drug dissolution (Q) from the solid lipid extrudates based on the empirical approach. Artificial neural networks (ANNs) and genetic programming (GP) tools were used. Sensitivity analysis of ANNs provided reduction of the original input vector. GP allowed creation of the mathematical equation in two major approaches: (1) direct modeling of Q versus extrudate diameter (d) and the time variable (t) and (2) indirect modeling through Weibull equation. ANNs provided also information about minimum achievable generalization error and the way to enhance the original dataset used for adjustment of the equations' parameters. Two inputs were found important for the drug dissolution: d and t. The extrudates length (L) was found not important. Both GP modeling approaches allowed creation of relatively simple equations with their predictive performance comparable to the ANNs (root mean squared error (RMSE) from 2.19 to 2.33). The direct mode of GP modeling of Q versus d and t resulted in the most robust model. The idea of how to combine ANNs and GP in order to escape ANNs' black-box drawback without losing their superior predictive performance was demonstrated. Open Source software was used to deliver the state-of-the-art models and modeling strategies. PMID:26101544
Edge Modeling by Two Blur Parameters in Varying Contrasts.
Seo, Suyoung
2018-06-01
This paper presents a method of modeling edge profiles with two blur parameters, and estimating and predicting those edge parameters with varying brightness combinations and camera-to-object distances (COD). First, the validity of the edge model is proven mathematically. Then, it is proven experimentally with edges from a set of images captured for specifically designed target sheets and with edges from natural images. Estimation of the two blur parameters for each observed edge profile is performed with a brute-force method to find parameters that produce global minimum errors. Then, using the estimated blur parameters, actual blur parameters of edges with arbitrary brightness combinations are predicted using a surface interpolation method (i.e., kriging). The predicted surfaces show that the two blur parameters of the proposed edge model depend on both dark-side edge brightness and light-side edge brightness following a certain global trend. This is similar across varying CODs. The proposed edge model is compared with a one-blur parameter edge model using experiments of the root mean squared error for fitting the edge models to each observed edge profile. The comparison results suggest that the proposed edge model has superiority over the one-blur parameter edge model in most cases where edges have varying brightness combinations.
Fusing Bluetooth Beacon Data with Wi-Fi Radiomaps for Improved Indoor Localization
Kanaris, Loizos; Kokkinis, Akis; Liotta, Antonio; Stavrou, Stavros
2017-01-01
Indoor user localization and tracking are instrumental to a broad range of services and applications in the Internet of Things (IoT) and particularly in Body Sensor Networks (BSN) and Ambient Assisted Living (AAL) scenarios. Due to the widespread availability of IEEE 802.11, many localization platforms have been proposed, based on the Wi-Fi Received Signal Strength (RSS) indicator, using algorithms such as K-Nearest Neighbour (KNN), Maximum A Posteriori (MAP) and Minimum Mean Square Error (MMSE). In this paper, we introduce a hybrid method that combines the simplicity (and low cost) of Bluetooth Low Energy (BLE) and the popular 802.11 infrastructure, to improve the accuracy of indoor localization platforms. Building on KNN, we propose a new positioning algorithm (dubbed i-KNN) which is able to filter the initial fingerprint dataset (i.e., the radiomap), after considering the proximity of RSS fingerprints with respect to the BLE devices. In this way, i-KNN provides an optimised small subset of possible user locations, based on which it finally estimates the user position. The proposed methodology achieves fast positioning estimation due to the utilization of a fragment of the initial fingerprint dataset, while at the same time improves positioning accuracy by minimizing any calculation errors. PMID:28394268
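A toy rendering of the i-KNN idea: restrict the fingerprint radiomap to entries consistent with the heard BLE beacon, then run KNN on the remainder. The radiomap values, positions, K, and the BLE zone below are all invented for illustration.

```python
import numpy as np

radiomap = np.array([[-60, -72], [-61, -70], [-75, -55], [-74, -58]])  # RSS fingerprints
positions = np.array([[0, 0], [0, 1], [5, 5], [5, 6]])                 # (x, y) per row
ble_zone = np.array([True, True, False, False])   # fingerprints near heard beacon

observed = np.array([-62, -71])                   # live RSS observation
candidates = np.where(ble_zone)[0]                # i-KNN: restrict search set
d = np.linalg.norm(radiomap[candidates] - observed, axis=1)
k_idx = candidates[np.argsort(d)[:2]]             # K = 2 nearest fingerprints
print("estimated position:", positions[k_idx].mean(axis=0))
```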
New equations improve NIR prediction of body fat among high school wrestlers.
Oppliger, R A; Clark, R R; Nielsen, D H
2000-09-01
This methodologic study derived prediction equations for percent body fat (%BF), with the objective of developing valid regression equations using NIR to assess body composition among high school wrestlers. Clinicians need a portable, fast, and simple field method for assessing body composition among wrestlers. Near-infrared photospectrometry (NIR) meets these criteria, but its efficacy has been challenged. Subjects were 150 high school wrestlers from 2 Midwestern states with mean +/- SD age of 16.3 +/- 1.1 yrs, weight of 69.5 +/- 11.7 kg, and height of 174.4 +/- 7.0 cm. Relative body fatness (%BF) determined from hydrostatic weighing was the criterion measure, and NIR optical density (OD) measurements at multiple sites, plus height, weight, and body mass index (BMI), were the predictor variables. Four equations were developed, with multiple R² values that varied from .530 to .693; root mean squared errors varied from 2.8 %BF to 3.4 %BF, and prediction errors varied from 2.9 %BF to 3.1 %BF. The best equation used OD measurements at the biceps, triceps, and thigh sites, BMI, and age. The root mean squared error and prediction error for all 4 equations were equal to or smaller than those for a skinfold equation commonly used with wrestlers. The results substantiate the validity of NIR for predicting %BF among high school wrestlers. Cross-validation of these equations is warranted.
NASA Astrophysics Data System (ADS)
Lin, Jiang; Miao, Chiyuan
2017-04-01
Climate change is considered to be one of the greatest environmental threats, which has urged scientific communities to focus on this topic. Global climate models (GCMs) are the primary tool used for studying climate change. However, GCMs are limited by their coarse spatial resolution and inability to resolve important sub-grid scale features such as terrain and clouds. Statistical downscaling methods can be used to downscale large-scale variables to the local scale. In this study, we assess the applicability of the widely used Statistical Downscaling Model (SDSM) for the Loess Plateau, China. The observed variables included daily mean temperature (TMEAN), maximum temperature (TMAX) and minimum temperature (TMIN) from 1961 to 2005. The daily atmospheric predictor data were taken from reanalysis data covering 1961 to 2005 and from outputs of the Beijing Normal University Earth System Model (BNU-ESM) global climate model covering 1961 to 2099. The results show that SDSM performs well for these three climatic variables on the Loess Plateau. After downscaling, the root mean square errors for TMEAN, TMAX, and TMIN for BNU-ESM were reduced by 70.9%, 75.1%, and 67.2%, respectively. All the rates of change in TMEAN, TMAX and TMIN during the 21st century decreased after SDSM downscaling. We also show that SDSM can effectively reduce uncertainty compared with the raw model outputs. TMEAN uncertainty was reduced by 27.1%, 26.8%, and 16.3% for the future scenarios RCP 2.6, RCP 4.5 and RCP 8.5, respectively. The corresponding reductions in uncertainty were 23.6%, 30.7%, and 18.7% for TMAX, and 37.6%, 31.8%, and 23.2% for TMIN.
An, Yongkai; Lu, Wenxi; Cheng, Weiguo
2015-01-01
This paper introduces a surrogate model to identify an optimal exploitation scheme, with the western Jilin province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu county and Qian Gorlos county, respectively, so as to supply water to Daan county. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate of the numerical groundwater flow simulation model was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which is a high approximation accuracy. A comparison between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours while the latter needs 25 days. These results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process but also maintain high computational accuracy. This can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008
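The surrogate workflow above can be sketched in a few lines: Latin Hypercube samples drive an expensive model, a kriging-style surrogate is fitted, and its RMSE and relative error are checked on held-out validation samples. Gaussian-process regression stands in here for regression kriging, and the quadratic "simulator" is a placeholder assumption for the groundwater flow model.

```python
# LHS sampling -> kriging-style surrogate -> validation error check.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_simulator(x):            # placeholder for the flow model
    return 3.0 + x[:, 0] ** 2 + 0.5 * x[:, 1]

sampler = qmc.LatinHypercube(d=2, seed=0)
X_train = qmc.scale(sampler.random(40), [0, 0], [2, 2])
X_val = qmc.scale(sampler.random(10), [0, 0], [2, 2])
y_train, y_val = expensive_simulator(X_train), expensive_simulator(X_val)

surrogate = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)
pred = surrogate.predict(X_val)

rmse = np.sqrt(np.mean((pred - y_val) ** 2))
rel_err = np.mean(np.abs(pred - y_val) / y_val)
print(f"RMSE={rmse:.4f}, mean relative error={100 * rel_err:.2f}%")
```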
Taming stochastic bifurcations in fractional-order systems via noise and delayed feedback
NASA Astrophysics Data System (ADS)
Sun, Zhongkui; Zhang, Jintian; Yang, Xiaoli; Xu, Wei
2017-08-01
The dynamics of fractional-order systems have been widely studied during the past decade due to potential applications in new materials and anomalous diffusion, but the investigations have so far been restricted to fractional-order systems without time delay(s). In this paper, we report the first study of the random responses of a fractional-order system coupled with noise and delayed feedback. The stochastic averaging method has been utilized to determine the stationary probability density functions (PDFs) by means of the principle of minimum mean-square error, based on which stochastic bifurcations can be identified by recognizing the shape of the PDFs. It has been found that by changing the fractional order, the shape of the PDFs can switch from a unimodal distribution to a bimodal one, or from a bimodal distribution to a unimodal one, thus announcing the onset of a stochastic bifurcation. Further, we have demonstrated that by merely modulating the time delay, the feedback strengths, or the noise intensity, the shapes of the PDFs can transit between a single peak and a double peak. Therefore, this provides an efficient candidate for controlling, i.e., inducing or suppressing, stochastic bifurcations in fractional-order systems.
NASA Astrophysics Data System (ADS)
Tanaka, Shinobu; Hayakawa, Yuuto; Ogawa, Mitsuhiro; Yamakoshi, Ken-ichi
2010-08-01
We have been developing a new technique for measuring urine glucose concentration using near infrared spectroscopy (NIRS) in conjunction with the Partial Least Squares (PLS) method. In a previous study, we reported results of preliminary experiments assessing the feasibility of this method using an FT-IR spectrometer. In this study, considering the practicability of the system, a flow-through cell with an optical path length of 10 mm was newly introduced. The accuracy of the system was verified by preliminary experiments using urine samples. From the results obtained, it was clearly demonstrated that the present method is capable of predicting individual urine glucose levels with reasonable accuracy (minimum standard error of prediction: SEP = 22.3 mg/dl) and appears to be a useful means for long-term home health care. However, the mean SEP obtained for the urine samples from ten subjects was not satisfactorily low (53.7 mg/dl). To improve the accuracy, (1) the mechanical stability of the optical system should be improved, (2) the method for normalizing the spectrum should be reconsidered, and (3) the number of subjects should be increased.
Quantitative evaluation of performance of three-dimensional printed lenses
NASA Astrophysics Data System (ADS)
Gawedzinski, John; Pawlowski, Michal E.; Tkaczyk, Tomasz S.
2017-08-01
We present an analysis of the shape, surface quality, and imaging capabilities of custom three-dimensional (3-D) printed lenses. 3-D printing technology enables lens prototypes to be fabricated without restrictions on surface geometry. Thus, spherical, aspherical, and rotationally nonsymmetric lenses can be manufactured in an integrated production process. This technique serves as a noteworthy alternative to multistage, labor-intensive, abrasive processes, such as grinding, polishing, and diamond turning. Here, we evaluate the quality of lenses fabricated by Luxexcel using the patented Printoptical© technology, which is based on an inkjet printing technique, by comparing them to lenses made with traditional glass processing technologies (grinding, polishing, etc.). The surface geometry and roughness of the lenses were evaluated using white-light and Fizeau interferometers. We have compared peak-to-valley wavefront deviation, root mean square (RMS) wavefront error, radii of curvature, and the arithmetic roughness average (Ra) profile of plastic and glass lenses. In addition, the imaging performance of selected pairs of lenses was tested using a 1951 USAF resolution target. The results indicate performance of 3-D printed optics that could be manufactured with surface roughness comparable to that of injection molded lenses (Ra<20 nm). The RMS wavefront error of 3-D printed prototypes was at a minimum 18.8 times larger than equivalent glass prototypes for a lens with a 12.7 mm clear aperture, but, when measured within 63% of its clear aperture, the 3-D printed components' RMS wavefront error was comparable to glass lenses.
Quantitative evaluation of performance of 3D printed lenses
Gawedzinski, John; Pawlowski, Michal E.; Tkaczyk, Tomasz S.
2017-01-01
We present an analysis of the shape, surface quality, and imaging capabilities of custom 3D printed lenses. 3D printing technology enables lens prototypes to be fabricated without restrictions on surface geometry. Thus, spherical, aspherical and rotationally non-symmetric lenses can be manufactured in an integrated production process. This technique serves as a noteworthy alternative to multistage, labor-intensive, abrasive processes such as grinding, polishing and diamond turning. Here, we evaluate the quality of lenses fabricated by Luxexcel using the patented Printoptical© technology, which is based on an inkjet printing technique, by comparing them to lenses made with traditional glass processing technologies (grinding, polishing, etc.). The surface geometry and roughness of the lenses were evaluated using white-light and Fizeau interferometers. We have compared peak-to-valley wavefront deviation, root-mean-squared wavefront error, radii of curvature and the arithmetic average of the roughness profile (Ra) of plastic and glass lenses. Additionally, the imaging performance of selected pairs of lenses was tested using a 1951 USAF resolution target. The results indicate performance of 3D printed optics that could be manufactured with surface roughness comparable to that of injection molded lenses (Ra < 20 nm). The RMS wavefront error of 3D printed prototypes was at a minimum 18.8 times larger than equivalent glass prototypes for a lens with a 12.7 mm clear aperture, but when measured within 63% of its clear aperture, the 3D printed components' RMS wavefront error was comparable to glass lenses. PMID:29238114
Jiménez-Carvelo, Ana M; González-Casado, Antonio; Cuadros-Rodríguez, Luis
2017-03-01
A new analytical method for the quantification of olive oil and palm oil in blends with other vegetable edible oils (canola, safflower, corn, peanut, seeds, grapeseed, linseed, sesame and soybean) using normal phase liquid chromatography and applying chemometric tools was developed. The procedure for obtaining the chromatographic fingerprint of the methyl-transesterified fraction of each blend is described. The multivariate quantification methods used were Partial Least Squares Regression (PLS-R) and Support Vector Regression (SVR). The quantification results were evaluated by several parameters, such as the Root Mean Square Error of Validation (RMSEV), Mean Absolute Error of Validation (MAEV) and Median Absolute Error of Validation (MdAEV). Notably, in the proposed analytical method the chromatographic analysis takes only eight minutes, and the results obtained showed the potential of this method and allowed quantification of mixtures of olive oil and palm oil with other vegetable oils. Copyright © 2016 Elsevier B.V. All rights reserved.
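For reference, the three validation metrics named above can be computed as follows; the abstract gives only their names, so these are the standard formulas, sketched rather than taken from the paper.

```python
# RMSEV, MAEV, and MdAEV on a validation set.
import numpy as np

def rmsev(y_true, y_pred):
    """Root mean square error of validation."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def maev(y_true, y_pred):
    """Mean absolute error of validation."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def mdaev(y_true, y_pred):
    """Median absolute error of validation."""
    return float(np.median(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

y_true, y_pred = [10.0, 20.0, 30.0], [11.0, 18.5, 30.5]
print(rmsev(y_true, y_pred), maev(y_true, y_pred), mdaev(y_true, y_pred))
```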
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven
The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias), and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g., forecast error, scaled error) of each metric are also provided. To compare models the package provides: generic skill score; percent better. Robust measures of scale, including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
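As an illustration of two of the less common metrics listed, here are independent re-implementations of median symmetric accuracy and mean absolute scaled error. This is a sketch of the definitions, not the PyForecastTools API itself.

```python
# Re-implementations of two verification metrics (not PyForecastTools calls).
import numpy as np

def median_symmetric_accuracy(y_true, y_pred):
    """100 * (exp(median(|ln(pred/true)|)) - 1); requires positive data."""
    q = np.abs(np.log(np.asarray(y_pred) / np.asarray(y_true)))
    return 100.0 * (np.exp(np.median(q)) - 1.0)

def mean_absolute_scaled_error(y_true, y_pred):
    """MAE scaled by the in-sample MAE of the naive one-step forecast."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    naive_mae = np.mean(np.abs(np.diff(y_true)))
    return np.mean(np.abs(y_true - y_pred)) / naive_mae

obs = np.array([2.0, 3.0, 5.0, 4.0, 6.0])
fcst = np.array([2.2, 2.7, 5.5, 4.4, 5.8])
print(median_symmetric_accuracy(obs, fcst), mean_absolute_scaled_error(obs, fcst))
```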
State Estimation for Linear Systems Driven Simultaneously by Wiener and Poisson Processes.
1978-12-01
The state estimation problem of linear stochastic systems driven simultaneously by Wiener and Poisson processes is considered, especially the case...where the incident intensities of the Poisson processes are low and the system is observed in an additive white Gaussian noise. The minimum mean squared
Effects of Linking Methods on Detection of DIF.
ERIC Educational Resources Information Center
Kim, Seock-Ho; Cohen, Allan S.
1992-01-01
Effects of the following methods for linking metrics on detection of differential item functioning (DIF) were compared: (1) test characteristic curve method (TCC); (2) weighted mean and sigma method; and (3) minimum chi-square method. With large samples, results were essentially the same. With small samples, TCC was most accurate. (SLD)
Instrumental variables vs. grouping approach for reducing bias due to measurement error.
Batistatou, Evridiki; McNamee, Roseanne
2008-01-01
Attenuation of the exposure-response relationship due to exposure measurement error is often encountered in epidemiology. Given that error cannot be totally eliminated, bias correction methods of analysis are needed. Many methods require more than one exposure measurement per person to be made, but the 'group mean OLS method,' in which subjects are grouped into several a priori defined groups followed by ordinary least squares (OLS) regression on the group means, can be applied with one measurement. An alternative approach is to use an instrumental variable (IV) method in which both the single error-prone measure and an IV are used in IV analysis. In this paper we show that the 'group mean OLS' estimator is equal to an IV estimator with the group mean used as the IV, but that the variance estimators for the two methods are different. We derive a simple expression for the bias in the common estimator, which is a simple function of group size, reliability, and contrast of exposure between groups, and show that the bias can be very small when group size is large. We compare this method with a new proposal (the group mean ranking method), also applicable with a single exposure measurement, in which the IV is the rank of the group means. When there are two independent exposure measurements per subject, we propose a new IV method (EVROS IV) and compare it with Carroll and Stefanski's (CS IV) proposal in which the second measure is used as an IV; the new IV estimator combines aspects of the 'group mean' and 'CS' strategies. All methods are evaluated in terms of bias, precision and root mean square error via simulations and a dataset from occupational epidemiology. The 'group mean ranking method' does not offer much improvement over the 'group mean method.' Compared with the 'CS' method, the 'EVROS' method is less affected by low reliability of exposure. We conclude that the group IV methods we propose may provide a useful way to handle mismeasured exposures in epidemiology with or without replicate measurements. Our finding may also have implications for the use of aggregate variables in epidemiology to control for unmeasured confounding.
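A small simulation sketch of the attenuation problem and the group-mean remedy discussed above: naive OLS on a single error-prone exposure measurement is biased toward zero, while regressing group-mean outcomes on group-mean exposures recovers a far less attenuated slope when groups are large. All numbers are assumed for illustration.

```python
# Measurement-error attenuation vs. the group-mean estimator.
import numpy as np

rng = np.random.default_rng(1)
n_groups, group_size, beta = 20, 200, 0.5
group_true = np.repeat(np.linspace(0, 10, n_groups), group_size)  # true exposure
x_obs = group_true + rng.normal(0, 3, group_true.size)            # one noisy measure
y = beta * group_true + rng.normal(0, 1, group_true.size)

# Naive OLS slope on the error-prone measure (attenuated toward zero).
naive = np.polyfit(x_obs, y, 1)[0]

# Group-mean estimator: regress group-mean outcome on group-mean exposure.
gx = x_obs.reshape(n_groups, group_size).mean(axis=1)
gy = y.reshape(n_groups, group_size).mean(axis=1)
grouped = np.polyfit(gx, gy, 1)[0]

print(f"true={beta}, naive OLS={naive:.3f}, group-mean={grouped:.3f}")
```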
A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality
NASA Astrophysics Data System (ADS)
Cheung, KW; So, HC; Ma, W.-K.; Chan, YT
2006-12-01
The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all the above described measurement cases. The advantages of CWLS include performance optimality and capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and the Cramér-Rao lower bound approximately when measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.
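As a simplified sketch of the measurement model (not the paper's CWLS estimator, which adds a weighting matrix and a constraint linking the position coordinates), plain linearized least squares on TOA-derived ranges looks like this; anchor positions and noise levels are made up.

```python
# Linearized least-squares positioning from noisy TOA ranges.
import numpy as np

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([30.0, 60.0])
rng = np.random.default_rng(0)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.5, 4)

# Linearize by differencing the squared range equations against anchor 0:
#   2 (a_i - a_0) . p = r_0^2 - r_i^2 + |a_i|^2 - |a_0|^2
A = 2 * (anchors[1:] - anchors[0])
b = (ranges[0] ** 2 - ranges[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position:", np.round(est, 2))   # close to (30, 60)
```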
Prediction of BP reactivity to talking using hybrid soft computing approaches.
Kaur, Gurmanik; Arora, Ajat Shatru; Jain, Vijender Kumar
2014-01-01
High blood pressure (BP) is associated with an increased risk of cardiovascular diseases. Therefore, optimal precision in the measurement of BP is appropriate in clinical and research studies. In this work, anthropometric characteristics including age, height, weight, body mass index (BMI), and arm circumference (AC) were used as independent predictor variables for the prediction of BP reactivity to talking. Principal component analysis (PCA) was fused with an artificial neural network (ANN), an adaptive neuro-fuzzy inference system (ANFIS), and a least squares support vector machine (LS-SVM) model to remove the multicollinearity effect among the anthropometric predictor variables. Statistical tests in terms of the coefficient of determination (R²), root mean square error (RMSE), and mean absolute percentage error (MAPE) revealed that the PCA-based LS-SVM (PCA-LS-SVM) model produced a more efficient prediction of BP reactivity than the other models. This assessment demonstrates the importance and advantages of PCA-fused prediction models for the prediction of biological variables.
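A rough sketch of a PCA-fused pipeline of the kind described: scikit-learn has no LS-SVM estimator, so kernel ridge regression (mathematically close to LS-SVM with an RBF kernel) stands in, and the data are synthetic placeholders for the anthropometric predictors and BP reactivity values.

```python
# PCA decorrelates collinear predictors before a kernel regressor.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))                    # age, height, weight, BMI, AC
X[:, 3] = 0.8 * X[:, 2] - 0.6 * X[:, 1]          # make BMI collinear with height/weight
y = 2.0 * X[:, 0] + X[:, 3] + rng.normal(0, 0.5, 120)

model = make_pipeline(StandardScaler(), PCA(n_components=4),
                      KernelRidge(kernel="rbf", alpha=0.1))
scores = cross_val_score(model, X, y, cv=5,
                         scoring="neg_root_mean_squared_error")
print(f"cross-validated RMSE = {-scores.mean():.3f}")
```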
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability of handling high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using a standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
A successive overrelaxation iterative technique for an adaptive equalizer
NASA Technical Reports Server (NTRS)
Kosovych, O. S.
1973-01-01
An adaptive strategy for the equalization of pulse-amplitude-modulated signals in the presence of intersymbol interference and additive noise is reported. The successive overrelaxation iterative technique is used as the algorithm for the iterative adjustment of the equalizer coefficients during a training period for the minimization of the mean square error. With 2-cyclic and nonnegative Jacobi matrices, substantial improvement is demonstrated in the rate of convergence over the commonly used gradient techniques. The Jacobi theorems are also extended to nonpositive Jacobi matrices. Numerical examples strongly indicate that the improvements obtained for the special cases are possible for general channel characteristics. The technique is analytically demonstrated to decrease the mean square error at each iteration for a large range of parameter values under light or moderate intersymbol interference, and for small intervals for general channels. Analytically, convergence of the relaxation algorithm was proven in a noisy environment, and the coefficient variance was demonstrated to be bounded.
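To connect the abstract to code: the MMSE equalizer taps solve the normal equations R w = p (R the received-signal correlation matrix, p the cross-correlation with the desired symbol), and successive overrelaxation (SOR) iterates toward that solution. The channel, noise level, and decision delay below are illustrative assumptions, not the report's setup.

```python
# SOR iteration toward the MMSE equalizer taps for a toy ISI channel.
import numpy as np
from scipy.linalg import toeplitz

def sor_solve(A, b, omega=1.2, iters=200):
    """Plain SOR iteration for A x = b (A symmetric positive definite)."""
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

h = np.array([0.1, 1.0, 0.3])                   # assumed channel impulse response
noise_var, n_taps, delay = 0.01, 5, 1
acf = np.correlate(h, h, mode="full")           # channel autocorrelation, zero lag centred
col = [acf[len(h) - 1 + k] if k < len(h) else 0.0 for k in range(n_taps)]
R = toeplitz(col) + noise_var * np.eye(n_taps)  # received-signal correlation matrix
p = np.array([h[delay - i] if 0 <= delay - i < len(h) else 0.0
              for i in range(n_taps)])          # cross-correlation with desired symbol

w = sor_solve(R, p)
print("SOR taps     :", np.round(w, 4))
print("direct solve :", np.round(np.linalg.solve(R, p), 4))
```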
Damage level prediction of non-reshaped berm breakwater using ANN, SVM and ANFIS models
NASA Astrophysics Data System (ADS)
Mandal, Sukomal; Rao, Subba; N., Harish; Lokesha
2012-06-01
The damage analysis of coastal structures is very important as it involves many design parameters to be considered for the better and safer design of the structure. In the present study, experimental data for a non-reshaped berm breakwater are collected from the Marine Structures Laboratory, Department of Applied Mechanics and Hydraulics, NITK, Surathkal, India. Soft computing techniques like Artificial Neural Network (ANN), Support Vector Machine (SVM) and Adaptive Neuro-Fuzzy Inference System (ANFIS) models are constructed using experimental data sets to predict the damage level of the non-reshaped berm breakwater. The experimental data are used to train the ANN, SVM and ANFIS models, and results are evaluated in terms of statistical measures like mean square error, root mean square error, correlation coefficient and scatter index. The results show that soft computing techniques, i.e., ANN, SVM and ANFIS, can be efficient tools in predicting damage levels of non-reshaped berm breakwaters.
Hazard Function Estimation with Cause-of-Death Data Missing at Random
Wang, Qihua; Dinse, Gregg E.; Liu, Chunling
2010-01-01
Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data. PMID:22267874
pKa prediction of monoprotic small molecules the SMARTS way.
Lee, Adam C; Yu, Jing-Yu; Crippen, Gordon M
2008-10-01
Realizing favorable absorption, distribution, metabolism, elimination, and toxicity profiles is a necessity due to the high attrition rate of lead compounds in drug development today. The ability to accurately predict bioavailability can help save time and money during the screening and optimization processes. As several robust programs already exist for predicting logP, we have turned our attention to the fast and robust prediction of pKa for small molecules. Using curated data from the Beilstein Database and Lange's Handbook of Chemistry, we have created a decision tree based on a novel set of SMARTS strings that can accurately predict the pKa for monoprotic compounds with an R² of 0.94 and a root mean squared error of 0.68. Leave-some-out (10%) cross-validation achieved a Q² of 0.91 and a root mean squared error of 0.80.
NASA Astrophysics Data System (ADS)
Dikmen, Erkan; Ayaz, Mahir; Gül, Doğan; Şahin, Arzu Şencan
2017-07-01
The determination of the drying behavior of herbal plants is a complex process. In this study, a gene expression programming (GEP) model was used to determine the drying behavior of herbal plants, namely fresh sweet basil, parsley and dill leaves. Time and drying temperature are the input parameters for the estimation of the moisture ratio of the herbal plants. The results of the GEP model are compared with experimental drying data. Statistical measures such as mean absolute percentage error, root-mean-squared error and R-square are used to calculate the difference between the values predicted by the GEP model and the values actually observed in the experimental study. It was found that the results of the GEP model and the experimental study are in reasonably good agreement. The results show that the GEP model can be considered an efficient modelling technique for the prediction of the moisture ratio of herbal plants.
Achievable accuracy of hip screw holding power estimation by insertion torque measurement.
Erani, Paolo; Baleani, Massimiliano
2018-02-01
To ensure stability of proximal femoral fractures, the hip screw must firmly engage the femoral head. Some studies have suggested that the holding power of the screw in trabecular bone could be evaluated intraoperatively through measurement of the screw insertion torque. However, those studies used synthetic bone instead of trabecular bone as the host material, or they did not evaluate the accuracy of the predictions. We determined prediction accuracy, also assessing the impact of screw design and host material. We measured, under highly repeatable experimental conditions and disregarding clinical procedure complexities, the insertion torque and pullout strength of four screw designs, in both 120 synthetic and 80 trabecular bone specimens of variable density. For both host materials, we calculated the root mean square error and the mean absolute percentage error of predictions based on the best-fitting model of torque-pullout data, for both single-screw and merged datasets. Predictions based on screw-specific regression models were the most accurate. Host material impacts prediction accuracy: replacing synthetic with trabecular bone decreased both the root mean square errors, from 0.54-0.76 kN to 0.21-0.40 kN, and the mean absolute percentage errors, from 14-21% to 10-12%. However, holding power predicted from low insertion torque remained inaccurate, with errors up to 40% for torques below 1 Nm. In poor-quality trabecular bone, tissue inhomogeneities likely affect pullout strength and insertion torque to different extents, limiting the predictive power of the latter. This bias decreases when the screw engages good-quality bone. Under this condition, predictions become more accurate, although this result must be confirmed by close in-vitro simulation of the clinical procedure. Copyright © 2018 Elsevier Ltd. All rights reserved.
Li, Kaiyue; Wang, Weiying; Liu, Yanping; Jiang, Su; Huang, Guo; Ye, Liming
2017-01-01
The active ingredients, and thus the pharmacological efficacy, of traditional Chinese medicine (TCM) at different degrees of parching vary greatly. Near-infrared spectroscopy (NIR) was used to develop a new method for rapid online analysis of the TCM parching process, using two kinds of chemical indicators (5-(hydroxymethyl)furfural [5-HMF] content and 420 nm absorbance) as reference values, which are readily observed and change during most TCM parching processes. Three representative TCMs, Areca (Areca catechu L.), Malt (Hordeum vulgare L.), and Hawthorn (Crataegus pinnatifida Bge.), were used in this study. With partial least squares regression, calibration models of NIR were generated based on the two kinds of reference values, i.e., 5-HMF contents measured by high-performance liquid chromatography (HPLC) and 420 nm absorbance measured by ultraviolet-visible spectroscopy (UV/Vis), respectively. In the optimized models for 5-HMF, the root mean square errors of prediction (RMSEP) for Areca, Malt, and Hawthorn were 0.0192, 0.0301, and 0.2600, and the correlation coefficients (Rcal) were 99.86%, 99.88%, and 99.88%, respectively. Moreover, in the optimized models using 420 nm absorbance as reference values, the RMSEP for Areca, Malt, and Hawthorn were 0.0229, 0.0096, and 0.0409, and the Rcal were 99.69%, 99.81%, and 99.62%, respectively. NIR models with 5-HMF content and 420 nm absorbance as reference values can rapidly and effectively identify the three kinds of TCM at different stages of the parching process. This method has great promise to replace current subjective color judgment and time-consuming HPLC or UV/Vis methods, and is suitable for rapid online analysis and quality control in TCM industrial manufacturing. In summary: NIR was used to develop a new method for online analysis of the TCM parching process. Calibration and validation models of Areca, Malt, and Hawthorn were generated by partial least squares regression using 5-(hydroxymethyl)furfural contents and 420 nm absorbance as reference values, respectively, which are the main indicator components during the parching process of most TCMs. The established NIR models of the three TCMs had low root mean square errors of prediction and high correlation coefficients. The NIR method has great promise for use in TCM industrial manufacturing processes for rapid online analysis and quality control. Abbreviations used: NIR: near-infrared spectroscopy; TCM: traditional Chinese medicine; Areca: Areca catechu L.; Hawthorn: Crataegus pinnatifida Bge.; Malt: Hordeum vulgare L.; 5-HMF: 5-(hydroxymethyl)furfural; PLS: partial least squares; D: derivative; SLS: straight line subtraction; MSC: multiplicative scatter correction; VN: vector normalization; RMSECV: root mean square errors of cross-validation; RMSEP: root mean square errors of prediction; Rcal: correlation coefficients; RPD: residual predictive deviation; PAT: process analytical technology; FDA: Food and Drug Administration; ICH: International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use.
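A generic sketch of this kind of PLS calibration: spectra are regressed against a reference value and RMSEP is computed on a held-out prediction set. The synthetic "spectra" below are stand-ins for real NIR measurements, not the paper's data.

```python
# PLS calibration of synthetic spectra against a reference value, scored by RMSEP.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
conc = rng.uniform(0.0, 1.0, 60)                       # reference values (e.g. 5-HMF)
wavelengths = np.linspace(0, 1, 200)
# Each spectrum = concentration-weighted Gaussian band + baseline + noise.
spectra = (conc[:, None] * np.exp(-((wavelengths - 0.5) / 0.05) ** 2)
           + 0.1 + rng.normal(0, 0.01, (60, 200)))

X_cal, X_pred, y_cal, y_ref = train_test_split(spectra, conc,
                                               test_size=0.25, random_state=1)
pls = PLSRegression(n_components=3).fit(X_cal, y_cal)
y_hat = pls.predict(X_pred).ravel()
rmsep = np.sqrt(np.mean((y_hat - y_ref) ** 2))
print(f"RMSEP = {rmsep:.4f}")
```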
Esteki, M; Nouroozi, S; Shahsavari, Z
2016-02-01
To develop a simple and efficient spectrophotometric technique combined with chemometrics for the simultaneous determination of methyl paraben (MP) and hydroquinone (HQ) in cosmetic products, and specifically to: (i) evaluate the potential use of the successive projections algorithm (SPA) on derivative spectrophotometric data in order to provide sufficient accuracy and model robustness, and (ii) determine MP and HQ concentrations in cosmetics without tedious pre-treatments such as derivatization or extraction techniques, which are time-consuming and require hazardous solvents. The absorption spectra were measured in the wavelength range of 200-350 nm. Prior to building the chemometric models, the original and first-derivative absorption spectra of binary mixtures were used as calibration matrices. Variables selected by the successive projections algorithm were used to obtain multiple linear regression (MLR) models based on a small subset of wavelengths. The number of wavelengths and the starting vector were optimized, and comparison of the root mean square errors of calibration (RMSEC) and cross-validation (RMSECV) was applied to select effective wavelengths with the least collinearity and redundancy. Principal component regression (PCR) and partial least squares (PLS) models were also developed for comparison. The concentrations of the calibration matrix ranged from 0.1 to 20 μg mL(-1) for MP and from 0.1 to 25 μg mL(-1) for HQ. The constructed models were tested on an external validation data set and finally on cosmetic samples. The results indicated that successive projections algorithm-multiple linear regression (SPA-MLR), applied to the first-derivative spectra, achieved the optimal performance for the two compounds when compared with full-spectrum PCR and PLS. The root mean square errors of prediction (RMSEP) were 0.083 and 0.314 for MP and HQ, respectively. To verify the accuracy of the proposed method, a recovery study on real cosmetic samples was carried out with satisfactory results (84-112%). The proposed method, which is an environmentally friendly approach using a minimum amount of solvent, is a simple, fast and low-cost analysis method that can provide high accuracy and robust models. The suggested method does not need any complex extraction procedure, which would be time-consuming and require hazardous solvents. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
NASA Astrophysics Data System (ADS)
Singh, K. S.; Bhaskaran, Prasad K.
2017-12-01
This study evaluates the performance of the Advanced Research Weather Research and Forecasting (WRF-ARW) model for prediction of land-falling Bay of Bengal (BoB) tropical cyclones (TCs). Model integration was performed using two-way interactive double nested domains at 27 and 9 km resolutions. The present study comprises two major components. First, the study explores the impact of five different planetary boundary layer (PBL) and six cumulus convection (CC) schemes on seven land-falling BoB TCs. A total of 85 numerical simulations were studied in detail, and the results signify that the model simulated both track and intensity better using a combination of the Yonsei University (YSU) PBL scheme and the old simplified Arakawa-Schubert CC scheme. Second, the study investigated model performance using the best combination of model physics in real-time forecasts of four BoB cyclones (Phailin, Helen, Lehar, and Madi) that made landfall during 2013, based on another 15 numerical simulations. The predicted mean track error during 2013 was about 71 km, 114 km, 133 km, 148 km, and 130 km from day-1 to day-5, respectively. The root mean square error (RMSE) for minimum central pressure (MCP) was about 6 hPa, and that for maximum surface wind (MSW) was about 4.5 m s-1, over the entire simulation period. In addition, the study reveals that the predicted track errors for the 2013 cyclones improved by 43%, 44%, and 52% from day-1 to day-3, respectively, compared to cyclones simulated during the period 2006-2011. The improvements can be attributed to the relatively better quality data specified at initialization (initial mean position error of about 48 km) during 2013. Overall, the study signifies that the track and intensity forecasts for the 2013 cyclones, using the combinations identified in the first part of this study, performed relatively better than other NWP (Numerical Weather Prediction) models, and thereby find application in real-time forecasting.
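A mean track error such as the day-1 value of about 71 km is typically computed as the great-circle distance between forecast and best-track fixes, averaged over cases; a sketch with made-up coordinates:

```python
# Mean track error from haversine distances between forecast and best-track fixes.
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two positions given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

best_track = np.array([[15.0, 88.0], [16.2, 87.1], [17.5, 86.3]])  # (lat, lon) fixes
forecast   = np.array([[15.3, 88.4], [16.0, 87.8], [17.1, 86.9]])
errors = [haversine_km(*bt, *fc) for bt, fc in zip(best_track, forecast)]
print("per-fix error (km):", np.round(errors, 1))
print("mean track error (km):", round(float(np.mean(errors)), 1))
```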
Wood, Clive; Alwati, Abdolati; Halsey, Sheelagh; Gough, Tim; Brown, Elaine; Kelly, Adrian; Paradkar, Anant
2016-09-10
The use of near-infrared spectroscopy to predict the concentration of two pharmaceutical co-crystals, 1:1 ibuprofen-nicotinamide (IBU-NIC) and 1:1 carbamazepine-nicotinamide (CBZ-NIC), has been evaluated. A partial least squares (PLS) regression model was developed for both co-crystal pairs using sets of standard samples to create calibration and validation data sets with which to build and validate the models. Parameters such as the root mean square error of calibration (RMSEC), root mean square error of prediction (RMSEP) and correlation coefficient were used to assess the accuracy and linearity of the models. Accurate PLS regression models were created for both co-crystal pairs which can be used to predict the co-crystal concentration in a powder mixture of the co-crystal and the active pharmaceutical ingredient (API). The IBU-NIC model had smaller errors than the CBZ-NIC model, possibly due to the complex CBZ-NIC spectra, which could reflect the different arrangement of hydrogen bonding associated with the co-crystal compared to the IBU-NIC co-crystal. These results suggest that NIR spectroscopy can be used as a PAT tool during a variety of pharmaceutical co-crystal manufacturing methods, and the presented data will facilitate future offline and in-line NIR studies involving pharmaceutical co-crystals. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Modeling and forecasting of KLCI weekly return using WT-ANN integrated model
NASA Astrophysics Data System (ADS)
Liew, Wei-Thong; Liong, Choong-Yeun; Hussain, Saiful Izzuan; Isa, Zaidi
2013-04-01
The forecasting of weekly returns is one of the most challenging tasks in investment, since the time series are volatile and non-stationary. In this study, an integrated wavelet transform and artificial neural network (WT-ANN) model is studied for modeling and forecasting of the KLCI weekly return. First, the WT is applied to decompose the weekly return time series in order to eliminate noise. Then, a mathematical model of the time series is constructed using the ANN. The performance of the suggested model is evaluated by root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The results show that the WT-ANN model can be considered a feasible and powerful model for time series modeling and prediction.
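A minimal sketch of the WT-ANN idea under stated assumptions (PyWavelets for the transform, a small MLP for the network, soft thresholding of detail coefficients, and a synthetic return series standing in for KLCI data):

```python
# Wavelet denoising followed by an ANN on lagged values, scored with RMSE/MAE/MAPE.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
returns = np.sin(np.linspace(0, 20, 300)) * 0.02 + rng.normal(0, 0.01, 300)

# Wavelet denoising: soft-threshold the detail coefficients.
coeffs = pywt.wavedec(returns, "db4", level=3)
coeffs[1:] = [pywt.threshold(c, 0.01, mode="soft") for c in coeffs[1:]]
smooth = pywt.waverec(coeffs, "db4")[:len(returns)]

# Lagged-feature matrix: predict r_t from the previous 4 weeks.
lags = 4
X = np.column_stack([smooth[i:len(smooth) - lags + i] for i in range(lags)])
y = smooth[lags:]
split = int(0.8 * len(y))
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000,
                   random_state=0).fit(X[:split], y[:split])
pred = ann.predict(X[split:])
err = pred - y[split:]
print("RMSE:", np.sqrt(np.mean(err ** 2)),
      "MAE:", np.mean(np.abs(err)),
      "MAPE %:", 100 * np.mean(np.abs(err / y[split:])))
```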
[Application of wavelet neural networks model to forecast incidence of syphilis].
Zhou, Xian-Feng; Feng, Zi-Jian; Yang, Wei-Zhong; Li, Xiao-Song
2011-07-01
To apply a Wavelet Neural Network (WNN) model to forecast the incidence of syphilis. A Back Propagation Neural Network (BPNN) and a WNN were developed based on the monthly incidence of syphilis in Sichuan province from 2004 to 2008. The accuracy of the forecasts was compared between the two models. In the training approximation, the mean absolute error (MAE), root mean square error (RMSE) and mean absolute percentage error (MAPE) were 0.0719, 0.0862 and 11.52%, respectively, for the WNN, and 0.0892, 0.1183 and 14.87%, respectively, for the BPNN. The three indexes for the generalization of the models were 0.0497, 0.0513 and 4.60% for the WNN, and 0.0816, 0.1119 and 7.25% for the BPNN. The WNN is the better model for short-term forecasting of syphilis incidence.
Flügge, Tabea V; Schlager, Stefan; Nelson, Katja; Nahles, Susanne; Metzger, Marc C
2013-09-01
Digital impression devices are used alternatively to conventional impression techniques and materials. The aims of this study were to evaluate the precision of digital intraoral scanning under clinical conditions (iTero; Align Technologies, San Jose, Calif) and to compare it with the precision of extraoral digitization. One patient received 10 full-arch intraoral scans with the iTero and conventional impressions with a polyether impression material (Impregum Penta; 3M ESPE, Seefeld, Germany). Stone cast models manufactured from the impressions were digitized 10 times with an extraoral scanner (D250; 3Shape, Copenhagen, Denmark) and 10 times with the iTero. Virtual models provided by each method were roughly aligned, and the model edges were trimmed with cutting planes to create common borders (Rapidform XOR; Inus Technologies, Seoul, Korea). A second model alignment was then performed along the closest distances of the surfaces (Artec Studio software; Artec Group, Luxembourg, Luxembourg). To assess precision, deviations between corresponding models were compared. Repeated intraoral scanning was evaluated in group 1, repeated extraoral model scanning with the iTero was assessed in group 2, and repeated model scanning with the D250 was assessed in group 3. Deviations between models were measured and expressed as maximums, means, medians, and root mean square errors for quantitative analysis. Color-coded displays of the deviations allowed qualitative visualization of the deviations. The greatest deviations and therefore the lowest precision were in group 1, with mean deviations of 50 μm, median deviations of 37 μm, and root mean square errors of 73 μm. Group 2 showed a higher precision, with mean deviations of 25 μm, median deviations of 18 μm, and root mean square errors of 51 μm. Scanning with the D250 had the highest precision, with mean deviations of 10 μm, median deviations of 5 μm, and root mean square errors of 20 μm. Intraoral and extraoral scanning with the iTero resulted in deviations at the facial surfaces of the anterior teeth and the buccal molar surfaces. Scanning with the iTero is less accurate than scanning with the D250. Intraoral scanning with the iTero is less accurate than model scanning with the iTero, suggesting that the intraoral conditions (saliva, limited spacing) contribute to the inaccuracy of a scan. For treatment planning and manufacturing of tooth-supported appliances, virtual models created with the iTero can be used. An extended scanning protocol could improve the scanning results in some regions. Copyright © 2013 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties.
Sila, Andrew M; Shepherd, Keith D; Pokhariyal, Ganesh P
2016-04-15
We propose four methods for finding local subspaces in large spectral libraries: (a) cosine angle spectral matching; (b) hit quality index spectral matching; (c) self-organizing maps; and (d) archetypal analysis. We then evaluate prediction accuracies for global and subspace calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon and clay soil properties were developed for the whole library and for the subspaces. Root mean square error of prediction was used to evaluate the predictive performance of the subspace and global models, and was computed using a one-third-holdout validation set. The effect of pretreating the spectra was tested for the 1st and 2nd derivative Savitzky-Golay algorithms, multiplicative scatter correction, standard normal variate, and standard normal variate followed by detrending. In summary, the results show that the global models outperformed the subspace models. We therefore conclude that global models are more accurate than the local models except in a few cases. For instance, sand and clay root mean square error values from local models based on the archetypal analysis method were 50% poorer than the global models, except for subspace models obtained using multiplicative scatter corrected spectra, which were 12% better. However, the subspace approach provides novel methods for discovering data patterns that may exist in large spectral libraries.
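Method (a), cosine-angle spectral matching, reduces to selecting the library spectra that subtend the smallest angle to the target spectrum; a sketch with random stand-in spectra (the library dimensions are assumptions):

```python
# Cosine-angle subspace selection from a spectral library.
import numpy as np

def cosine_angle_subset(target, library, n_keep=100):
    """Return indices of the n_keep library spectra closest in cosine angle."""
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    tgt = target / np.linalg.norm(target)
    cos_sim = lib @ tgt                      # cosine of the angle to the target
    return np.argsort(cos_sim)[::-1][:n_keep]

rng = np.random.default_rng(0)
library = rng.normal(size=(1907, 1700))      # MIR library: samples x wavenumbers
target = library[10] + rng.normal(0, 0.1, 1700)
subset = cosine_angle_subset(target, library, n_keep=50)
print("sample 10 selected:", 10 in subset)   # the near-duplicate should be kept
```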
NASA Astrophysics Data System (ADS)
Gentilucci, Matteo
2017-04-01
The end of flowering date (BBCH 69) is an important phenological stage for grapevine (Vitis vinifera L.); in fact, up to this date growth is focused on the plant and gradually passes to the berries through fruit set. The aim of this study is to develop a model to predict the end of flowering date (BBCH 69) for some grapevine varieties. This research was carried out using three cultivars of grapevine (Maceratino, Montepulciano, Sangiovese) at three different locations (Macerata, Morrovalle and Potenza Picena), each the site of a wine farm, over the time interval 2006-2013. In order to have reliable temperatures for each location, the data of 6 weather stations near these farms were interpolated using cokriging methods with elevation as the independent variable. The procedure to predict the end of flowering date starts with an investigation of the cardinal temperatures typical of each grapevine cultivar. The analysis is characterized by four temperature thresholds (cardinals): minimum activity temperature (TCmin, below which there is no growth), lower optimal temperature (TLopt, above which growth is maximal), upper optimal temperature (TUopt, below which growth is maximal) and maximum activity temperature (TCmax, above which there is no growth). The model thus takes into consideration the maximum, mean and minimum daily temperatures of each location, relating them to the four cultivar temperature thresholds. This yields a set of possible cases (32) corresponding to as many equations, depending on the position of the temperatures relative to the thresholds, used to calculate the amount of growing degree units (GDU) for each day. Several iterative tests (about 1000 for each cultivar) were performed, changing the values of the temperature thresholds and the GDU in order to find the combination that minimizes the error between observed and predicted days from budburst to end of flowering. The minimization of error for the predicted dates compared with the observed ones was assessed by calculating statistical indexes such as root mean square error, mean absolute error and coefficient of variation. The procedure led to the identification of four cardinal temperatures and the amount of GDU for each cultivar between BBCH 01 (budburst) and BBCH 69 (end of flowering). In conclusion, this research has achieved several goals, characterizing the plant response to temperature (the same cardinal temperatures for Maceratino and Sangiovese but higher ones for Montepulciano), the range of growing degree units (from 35 to 38) and the differences between observed and predicted days (ranging from 2 to 3.5) for each grape variety.
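The daily growing-degree-unit rule described above (zero growth below TCmin and above TCmax, maximal growth between TLopt and TUopt, linear ramps in between) can be sketched as follows; the threshold values, the daily unit, and the temperature series are illustrative only, not the study's calibrated values.

```python
# Piecewise daily GDU rule with four cardinal temperatures, plus accumulation.
def daily_gdu(t_mean, tc_min=10.0, tl_opt=25.0, tu_opt=30.0, tc_max=38.0,
              max_units=1.0):
    if t_mean <= tc_min or t_mean >= tc_max:
        return 0.0                                            # no growth
    if tl_opt <= t_mean <= tu_opt:
        return max_units                                      # maximal growth
    if t_mean < tl_opt:                                       # ramp up
        return max_units * (t_mean - tc_min) / (tl_opt - tc_min)
    return max_units * (tc_max - t_mean) / (tc_max - tu_opt)  # ramp down

# Accumulate GDU from budburst; a real run would span the ~2 months to flowering.
daily_means = [12.0, 15.5, 18.0, 22.0, 26.0, 28.0, 31.0, 24.0]
total, target = 0.0, 36.0            # target in the 35-38 range reported above
for day, t in enumerate(daily_means, start=1):
    total += daily_gdu(t)
    if total >= target:
        print(f"end of flowering predicted on day {day}")
        break
print("accumulated GDU so far:", round(total, 2))
```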
Skinner, Kenneth D.
2011-01-01
High-quality elevation data in riverine environments are important for fisheries management applications, and the accuracy of such data needs to be determined for proper application. The Experimental Advanced Airborne Research LiDAR (Light Detection and Ranging), or EAARL, system was used to obtain topographic and bathymetric data along the Deadwood and South Fork Boise Rivers in west-central Idaho. The EAARL data were post-processed into bare earth and bathymetric raster and point datasets. Concurrently with the EAARL surveys, real-time kinematic global positioning system surveys were made in three areas along each of the rivers to assess the accuracy of the EAARL elevation data in different hydrogeomorphic settings. The accuracies of the EAARL-derived raster elevation values, determined in open, flat terrain to provide an optimal vertical comparison surface, had root mean square errors ranging from 0.134 to 0.347 m. Accuracies in the elevation values for the stream hydrogeomorphic settings had root mean square errors ranging from 0.251 to 0.782 m. The greater root mean square errors for the latter data are the result of complex hydrogeomorphic environments within the streams, such as submerged aquatic macrophytes and air bubble entrainment, and those along the banks, such as boulders, woody debris, and steep slopes. These complex environments reduce the accuracy of EAARL bathymetric and topographic measurements. Steep banks emphasize the horizontal location discrepancies between the EAARL and ground-survey data and may not be good representations of vertical accuracy. The EAARL point to ground-survey comparisons produced results with slightly higher but similar root mean square errors than those for the EAARL raster to ground-survey comparisons, emphasizing the horizontal offset minimized by using interpolated values from the raster dataset at the exact location of the ground-survey point as opposed to an actual EAARL point within a 1-meter distance. The average error for the wetted stream channel surface areas was -0.5 percent, while the average error for the wetted stream channel volume was -8.3 percent. The volume of the wetted river channel was underestimated by an average of 31 percent in half of the survey areas, and overestimated by an average of 14 percent in the remainder of the survey areas. The EAARL system is an efficient way to obtain topographic and bathymetric data in large areas of remote terrain. The elevation accuracy of the EAARL system varies throughout the area depending upon the hydrogeomorphic setting, preventing the use of a single accuracy value to describe the EAARL system. These elevation accuracy variations should be kept in mind when using the data, for example for hydraulic modeling or aquatic habitat assessments.
Low-flow characteristics of streams in Virginia
Hayes, Donald C.
1991-01-01
Streamflow data were collected and low-flow characteristics computed for 715 gaged sites in Virginia. Annual minimum average 7-consecutive-day flows range from 0 to 2,195 cubic feet per second for a 2-year recurrence interval and from 0 to 1,423 cubic feet per second for a 10-year recurrence interval. Drainage areas range from 0.17 to 7,320 square miles. Existing and discontinued gaged sites are separated into three types: long-term continuous-record sites, short-term continuous-record sites, and partial-record sites. Low-flow characteristics for long-term continuous-record sites are determined from frequency curves of annual minimum average 7-consecutive-day flows. Low-flow characteristics for short-term continuous-record sites are estimated by relating daily mean base-flow discharge values at a short-term site to concurrent daily mean discharge values at nearby long-term continuous-record sites having similar basin characteristics. Low-flow characteristics for partial-record sites are estimated by relating base-flow measurements to daily mean discharge values at long-term continuous-record sites. Information from the continuous-record sites and partial-record sites in Virginia is used to develop two techniques for estimating low-flow characteristics at ungaged sites. A flow-routing method is developed to estimate low-flow values at ungaged sites on gaged streams, and regional regression equations are developed for estimating low-flow values at ungaged sites on ungaged streams. The flow-routing method consists of transferring low-flow characteristics from a gaged site, either upstream or downstream, to a desired ungaged site. A simple drainage-area proration is used to transfer values when there are no major tributaries between the gaged and ungaged sites. Standard errors of estimate for 108 test sites are 19 percent of the mean for estimates of low-flow characteristics having a 2-year recurrence interval and 52 percent of the mean for estimates having a 10-year recurrence interval. A more complex transfer method must be used when major tributaries enter the stream between the gaged and ungaged sites. Twenty-four stream networks are analyzed, and predictions are made for 84 sites. Standard errors of estimate are 15 percent of the mean for estimates of low-flow characteristics having a 2-year recurrence interval and 22 percent of the mean for estimates having a 10-year recurrence interval. For the regional regression equations, the State was divided into eight regions on the basis of physiography and geographic grouping of the residuals computed in the regression analyses. Basin characteristics that were significant in the regression analysis were drainage area, rock type, and strip-mined area. Standard errors of prediction range from 60 to 139 percent for estimates of low-flow characteristics having a 2-year recurrence interval and from 90 to 172 percent for estimates having a 10-year recurrence interval.
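The simple drainage-area proration transfer mentioned above amounts to scaling a gaged low-flow statistic by the drainage-area ratio; for example (values are illustrative, not from the report):

```python
# Drainage-area proration of a low-flow statistic to an ungaged site.
def prorate_low_flow(q_gaged_cfs, area_gaged_mi2, area_ungaged_mi2):
    """Transfer a low-flow statistic (e.g. 7-day, 10-year low flow) by area ratio."""
    return q_gaged_cfs * (area_ungaged_mi2 / area_gaged_mi2)

# 14.0 cfs at a 120 mi^2 gage, estimated at an 85 mi^2 ungaged site upstream.
print(round(prorate_low_flow(14.0, 120.0, 85.0), 2), "cfs")
```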
Optical pattern recognition architecture implementing the mean-square error correlation algorithm
Molley, Perry A.
1991-01-01
An optical architecture implementing the mean-square error correlation algorithm, $\mathrm{MSE} = \sum [I - R]^2$, for discriminating the presence of a reference image $R$ in an input image scene $I$ by computing the mean-square error between a time-varying reference image signal $s_1(t)$ and a time-varying input image signal $s_2(t)$, includes a laser diode light source which is temporally modulated by a double-sideband suppressed-carrier source modulation signal $I_1(t)$ having the form $I_1(t) = A_1\,[\,1 + \sqrt{2}\, m_1 s_1(t) \cos(2\pi f_o t)\,]$, and the modulated light output from the laser diode source is diffracted by an acousto-optic deflector. The resultant intensity of the +1 diffracted order from the acousto-optic device is given by $I_2(t) = A_2\,[\,1 + 2 m_2^2 s_2^2(t) - 2\sqrt{2}\, m_2 s_2(t) \cos(2\pi f_o t)\,]$. The time integration of the two signals $I_1(t)$ and $I_2(t)$ on the CCD detector plane produces the mean-square error result $R(\tau)$ of the form $R(\tau) = A_1 A_2 \,\{\, [T] + [\,2 m_2^2 \int s_2^2(t-\tau)\,dt\,] - [\,2 m_1 m_2 \cos(2\pi f_o \tau) \int s_1(t)\, s_2(t-\tau)\,dt\,] \,\}$, where $s_1(t)$ is the signal input to the diode modulation source; $s_2(t)$ is the signal input to the AOD modulation source; $A_1$ is the light intensity; $A_2$ is the diffraction efficiency; $m_1$ and $m_2$ are constants that determine the signal-to-bias ratio; $f_o$ is the frequency offset between the oscillator at $f_c$ and the modulation at $f_c + f_o$; and $a_0$ and $a_1$ are constants chosen to bias the diode source and the acousto-optic deflector into their respective linear operating regions, so that the diode source exhibits a linear intensity characteristic and the AOD exhibits a linear amplitude characteristic.
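A tiny numeric sketch of the mean-square-error correlation the architecture computes, $\mathrm{MSE}(\tau) = \sum_t [s_1(t) - s_2(t-\tau)]^2$, which is minimized when the reference aligns with its copy in the input scene; the signals below are random stand-ins:

```python
# Digital analogue of the MSE correlation: slide the reference over the scene.
import numpy as np

rng = np.random.default_rng(0)
ref = rng.normal(size=32)                      # reference signal
scene = np.concatenate([rng.normal(size=20), ref, rng.normal(size=20)])

mse = [np.sum((scene[tau:tau + ref.size] - ref) ** 2)
       for tau in range(scene.size - ref.size + 1)]
print("best alignment at tau =", int(np.argmin(mse)))   # expect 20
```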
Increasing point-count duration increases standard error
Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.
1998-01-01
We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased as point count duration increased both for cumulative number of individuals and species in both locations. Although point counts appear to yield data with standard errors proportional to means, a square root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.
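A quick numeric check of the variance-stabilizing step suggested above: for Poisson-like counts whose variance tracks the mean, the square-root transform makes the standard deviation roughly constant across mean levels.

```python
# Square-root transform stabilizes the variance of Poisson-like counts.
import numpy as np

rng = np.random.default_rng(0)
for lam in (2.0, 8.0, 32.0):
    counts = rng.poisson(lam, 5000)
    print(f"mean={lam:5.1f}  sd(raw)={counts.std():5.2f}  "
          f"sd(sqrt)={np.sqrt(counts).std():5.2f}")   # sd(sqrt) stays near 0.5
```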
NASA Astrophysics Data System (ADS)
Zhang, Yu-ying; Wang, Meng-jie; Chang, Chun-ran; Xu, Kang-zhen; Ma, Hai-xia; Zhao, Feng-qi
2018-05-01
The standard thermite reaction enthalpies (ΔrHmθ) for seven metal oxides were theoretically analyzed using density functional theory (DFT) with five different functionals, and the results were compared with experimental values. Through comparison of the linear fitting constants, mean error and root mean square error, the Perdew-Wang functional within the framework of the local density approximation (LDA-PWC) and the Perdew-Burke-Ernzerhof exchange-correlation functional within the framework of the generalized gradient approximation (GGA-PBE) were selected to further calculate the thermite reaction enthalpies for metal composite oxides (MCOs). According to the Kirchhoff formula, the standard molar reaction enthalpies for these MCOs were obtained, and their standard molar enthalpies of formation (ΔfHmθ) were finally calculated. The results indicated that GGA-PBE is the most suitable of the five methods for calculating these oxides. Tungstate crystals present the maximum deviation between the thermite reaction enthalpies of the MCOs and those of their physical metal oxide mixtures, while ferrite crystals present the minimum. The correlation coefficients are all above 0.95, meaning the linear fitting results are very precise. The molar enthalpies of formation for NiMoO4, CuMoO4, PbZrO3 (Pm-3m), PbZrO3 (Pba2), PbZrO3 (Pbam), MgZrO3, CdZrO3, MnZrO3, CuWO4 and Fe2WO6 were obtained for the first time as -1078.75, -1058.45, -1343.87, -1266.54, -1342.29, -1333.03, -1210.43, -1388.05, -1131.07 and -1860.11 kJ·mol-1, respectively.
NASA Astrophysics Data System (ADS)
Reis, D. S.; Stedinger, J. R.; Martins, E. S.
2005-10-01
This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
Estimating random errors due to shot noise in backscatter lidar observations.
Liu, Zhaoyan; Hunt, William; Vaughan, Mark; Hostetler, Chris; McGill, Matthew; Powell, Kathleen; Winker, David; Hu, Yongxiang
2006-06-20
We discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root mean square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF, uncertainties can be reliably calculated from or for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations lidar and tested using data from the Lidar In-space Technology Experiment.
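A sketch of how an NSF-style constant can be estimated and applied, under the stated mean-variance proportionality. The data are synthetic, and the fitting choice (a through-origin least-squares slope of RMS noise versus √mean) is an assumption, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
mean_signal = np.linspace(10, 1000, 50)            # hypothetical mean photocurrents
# Shot-noise-like samples: noise variance proportional to the mean (factor NSF^2)
true_nsf = 3.0
samples = [rng.normal(m, true_nsf * np.sqrt(m), 2000) for m in mean_signal]

# Estimate the NSF as the through-origin slope of RMS noise vs sqrt(mean signal)
rms_noise = np.array([s.std() for s in samples])
nsf = np.sum(rms_noise * np.sqrt(mean_signal)) / np.sum(mean_signal)
print("estimated NSF:", nsf)

# With the NSF in hand, a random-error estimate follows from a single sample
single_measurement = 400.0
print("1-sigma shot-noise error:", nsf * np.sqrt(single_measurement))
```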
Choosing the Number of Clusters in K-Means Clustering
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2011-01-01
Steinley (2007) provided a lower bound for the sum-of-squares error criterion function used in K-means clustering. In this article, on the basis of the lower bound, the authors propose a method to distinguish between 1 cluster (i.e., a single distribution) versus more than 1 cluster. Additionally, conditional on indicating there are multiple…
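The criterion at issue is the K-means sum-of-squares error (inertia). The sketch below does not implement the Steinley (2007) lower bound itself; it only illustrates, with scikit-learn and synthetic data, how the criterion behaves for one cluster versus several, which is the quantity the bound-based diagnostic reasons about:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Two well-separated Gaussian clusters in the plane
data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(6, 1, (100, 2))])

# Sum-of-squares error (inertia) for candidate k; a single distribution shows
# little improvement past k = 1, multiple clusters show a large drop at k = 2
for k in (1, 2, 3):
    sse = KMeans(n_clusters=k, n_init=10, random_state=0).fit(data).inertia_
    print(k, sse)
```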
Curran, Christopher A.; Eng, Ken; Konrad, Christopher P.
2012-01-01
Regional low-flow regression models for estimating Q7,10 at ungaged stream sites are developed from the records of daily discharge at 65 continuous gaging stations (including 22 discontinued gaging stations) for the purpose of evaluating explanatory variables. By incorporating the base-flow recession time constant τ as an explanatory variable in the regression model, the root-mean-square error for estimating Q7,10 at ungaged sites can be lowered to 72 percent (for known values of τ), which is 42 percent less than if only basin area and mean annual precipitation are used as explanatory variables. If partial-record sites are included in the regression data set, τ must be estimated from pairs of discharge measurements made during continuous periods of declining low flows. Eight measurement pairs are optimal for estimating τ at partial-record sites, and result in a lowering of the root-mean-square error by 25 percent. A low-flow survey strategy that includes paired measurements at partial-record sites requires additional effort and planning beyond a standard strategy, but could be used to enhance regional estimates of τ and potentially reduce the error of regional regression models for estimating low-flow characteristics at ungaged sites.
Eliciting Naturalistic Cortical Responses with a Sensory Prosthesis via Optimized Microstimulation
2016-08-12
error and correlation as metrics amenable to highly efficient convex optimization. This study concentrates on characterizing the neural responses to both...spiking signal. For LFP, distance measures such as the traditional mean-squared error and cross-correlation can be used, whereas distances between spike...with parameters that describe their associated temporal dynamics and relations to the observed output. A description of the model follows, but we
Predicting the random drift of MEMS gyroscope based on K-means clustering and OLS RBF Neural Network
NASA Astrophysics Data System (ADS)
Wang, Zhen-yu; Zhang, Li-jie
2017-10-01
Measurement error of a sensor can be effectively compensated by prediction. Aiming at the large random drift error of MEMS (Micro-Electro-Mechanical System) gyroscopes, an improved learning algorithm for Radial Basis Function (RBF) Neural Networks (NN) based on K-means clustering and Orthogonal Least Squares (OLS) is proposed in this paper. The algorithm first selects typical samples as the initial cluster centers of the RBF NN, then refines the candidate centers with the K-means algorithm, and finally optimizes the candidate centers with the OLS algorithm, which makes the network structure simpler and the prediction performance better. Experimental results show that the proposed K-means clustering OLS learning algorithm can predict the random drift of a MEMS gyroscope effectively, with a prediction error of 9.8019e-007 °/s and a prediction time of 2.4169e-006 s.
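A sketch of the general recipe behind such networks, assuming Gaussian basis functions: K-means supplies the RBF centers and a linear least-squares solve fits the output weights. The paper's OLS center-pruning step is not reproduced, and the drift series, kernel width, and cluster count below are hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_design(x, centers, width):
    # Gaussian RBF activations for inputs x (shape (n,)) against the centers
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

# Hypothetical drift-like series: slow wander plus noise
rng = np.random.default_rng(4)
t = np.linspace(0, 10, 500)
drift = 0.3 * np.sin(0.7 * t) + 0.05 * rng.standard_normal(500)

# One-step-ahead setup: predict drift[i] from drift[i-1]
x, y = drift[:-1], drift[1:]

# Step 1: K-means supplies the RBF centers
centers = np.sort(KMeans(n_clusters=8, n_init=10, random_state=0)
                  .fit(x.reshape(-1, 1)).cluster_centers_.ravel())

# Step 2: linear least squares for the output-layer weights
phi = rbf_design(x, centers, width=0.1)
weights, *_ = np.linalg.lstsq(phi, y, rcond=None)

pred = phi @ weights
print("RMS prediction error:", np.sqrt(np.mean((pred - y) ** 2)))
```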
Empirical State Error Covariance Matrix for Batch Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.
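One plausible reading of the construction, sketched under stated assumptions: map the observed residual outer product back through the weighted batch least squares estimator, sandwich-style, so unmodeled error sources show up in the state-error statistics. The exact formulation in the paper may differ:

```python
import numpy as np

rng = np.random.default_rng(5)
# Weighted batch least squares setup: y = H x + noise
n_obs, n_state = 40, 3
H = rng.standard_normal((n_obs, n_state))
x_true = np.array([1.0, -2.0, 0.5])
y = H @ x_true + 0.1 * rng.standard_normal(n_obs)
W = np.eye(n_obs) / 0.1 ** 2                     # weights = inverse noise variance

# Batch solution and its theoretical covariance
N = H.T @ W @ H
x_hat = np.linalg.solve(N, H.T @ W @ y)
P_theory = np.linalg.inv(N)

# Empirical covariance: propagate the observed residuals back through the
# estimator gain, so any unmodeled errors appear in the state-error statistics
A = np.linalg.solve(N, H.T @ W)                  # estimator gain: x_hat = A y
residuals = y - H @ x_hat
P_empirical = A @ np.outer(residuals, residuals) @ A.T

print(np.diag(P_theory))
print(np.diag(P_empirical))
```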
Optimal estimation of large structure model errors. [in Space Shuttle controller design
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1979-01-01
In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.
NASA Astrophysics Data System (ADS)
Ali, Mumtaz; Deo, Ravinesh C.; Downs, Nathan J.; Maraseni, Tek
2018-07-01
Forecasting drought by means of the World Meteorological Organization-approved Standardized Precipitation Index (SPI) is considered to be a fundamental task to support socio-economic initiatives and effectively mitigate climate risk. This study aims to develop a robust drought modelling strategy to forecast multi-scalar SPI in drought-rich regions of Pakistan, where statistically significant lagged combinations of antecedent SPI are used to forecast future SPI. With an ensemble Adaptive Neuro-Fuzzy Inference System ('ensemble-ANFIS') executed via a 10-fold cross-validation procedure, a model is constructed from randomly partitioned input-target data. The resulting 10-member ensemble-ANFIS outputs, judged by mean square error and correlation coefficient in the training period, yield the optimal forecasts as the averaged simulations, and the model is benchmarked with the M5 Model Tree and Minimax Probability Machine Regression (MPMR). The results show the proposed ensemble-ANFIS model's precision was notably better (in terms of the root mean square and mean absolute errors, including the Willmott's, Nash-Sutcliffe and Legates-McCabe's indices) for the 6- and 12-month forecasts compared to the 3-month forecasts, as verified by the largest proportion of errors registering in the smallest error band. Applying the 10-member simulations, the ensemble-ANFIS model was validated for its ability to forecast severity (S), duration (D) and intensity (I) of drought (including the error bound). This enabled uncertainty between multi-models to be rationalized more efficiently, leading to a reduction in forecast error caused by stochasticity in drought behaviours. Through cross-validations at diverse sites, a geographic signature in modelled uncertainties was also calculated. Considering the superiority of the ensemble-ANFIS approach and its ability to generate uncertainty-based information, the study advocates the versatility of a multi-model approach for drought-risk forecasting and its prime importance for estimating drought properties over confidence intervals to generate better information for strategic decision-making.
Automated body weight prediction of dairy cows using 3-dimensional vision.
Song, X; Bokkers, E A M; van der Tol, P P J; Groot Koerkamp, P W G; van Mourik, S
2018-05-01
The objectives of this study were to quantify the error of body weight prediction using automatically measured morphological traits in a 3-dimensional (3-D) vision system and to assess the influence of various sources of uncertainty on body weight prediction. In this case study, an image acquisition setup was created in a cow selection box equipped with a top-view 3-D camera. Morphological traits of hip height, hip width, and rump length were automatically extracted from the raw 3-D images taken of the rump area of dairy cows (n = 30). These traits combined with days in milk, age, and parity were used in multiple linear regression models to predict body weight. To find the best prediction model, an exhaustive feature selection algorithm was used to build intermediate models (n = 63). Each model was validated by leave-one-out cross-validation, giving the root mean square error and mean absolute percentage error. The model consisting of hip width (measurement variability of 0.006 m), days in milk, and parity was the best model, with the lowest errors of 41.2 kg of root mean square error and 5.2% mean absolute percentage error. Our integrated system, including the image acquisition setup, image analysis, and the best prediction model, predicted the body weights with a performance similar to that achieved using semi-automated or manual methods. Moreover, the variability of our simplified morphological trait measurement showed a negligible contribution to the uncertainty of body weight prediction. We suggest that dairy cow body weight prediction can be improved by incorporating more predictive morphological traits and by improving the prediction model structure.
Digital model of the Arikaree Aquifer near Wheatland, southeastern Wyoming
Hoxie, Dwight T.
1977-01-01
A digital model that mathematically simulates the flow of ground water, approximating the flow system as two-dimensional, has been applied to predict the long-term effects of irrigation and proposed industrial pumping from the unconfined Arikaree aquifer in a 400 square-mile area in southeastern Wyoming. Three cases that represent projected maximum, mean, and minimum combined irrigation and industrial ground-water withdrawals at annual rates of 16,176, 11,168, and 6,749 acre-feet, respectively, were considered. Water-level declines of more than 5 feet over areas of 124, 120, and 98 square miles and depletions in streamflow of 14.4, 8.9, and 7.2 cfs from the Laramie and North Laramie Rivers were predicted to occur at the end of a 40-year simulation period for these maximum, mean, and minimum withdrawal rates, respectively. A tenfold increase in the vertical hydraulic conductivity that was assumed for the streambeds results in smaller predicted drawdowns near the Laramie and North Laramie Rivers and a 36 percent increase in the predicted depletion in streamflow for the North Laramie River. (Woodard-USGS)
Structure and NMR spectra of some [2.2]paracyclophanes. The dilemma of [2.2]paracyclophane symmetry.
Dodziuk, Helena; Szymański, Sławomir; Jaźwiński, Jarosław; Ostrowski, Maciej; Demissie, Taye Beyene; Ruud, Kenneth; Kuś, Piotr; Hopf, Henning; Lin, Shaw-Tao
2011-09-29
Density functional theory (DFT) quantum chemical calculations of the structure and NMR parameters for highly strained hydrocarbon [2.2]paracyclophane 1 and its three derivatives are presented. The calculated NMR parameters are compared with the experimental ones. By least-squares fitting of the (1)H spectra, almost all J(HH) coupling constants could be obtained with high accuracy. Theoretical vicinal J(HH) couplings in the aliphatic bridges, calculated using different basis sets (6-311G(d,p) and Huz-IV), reproduce the experimental values with essentially the same root-mean-square (rms) error of about 1.3 Hz, regardless of the basis set used. These discrepancies could be in part due to a considerable impact of rovibrational effects on the observed J(HH) couplings, since the latter show a measurable dependence on temperature. Because of the lasting literature controversies concerning the symmetry of parent compound 1, D(2h) versus D(2), a critical analysis of the relevant literature data is carried out. The symmetry issue is prone to confusion because, according to some literature claims, the two hypothetical enantiomeric D(2) structures of 1 could be separated by a very low energy barrier that would explain the occurrence of rovibrational effects on the observed vicinal J(HH) couplings. However, the D(2h) symmetry of 1 with a flat energy minimum could also account for these effects.
Kriging: Understanding allays intimidation
Olea, R.A.
1996-01-01
In 1938 Daniel Gerhardus "Danie" Krige obtained an undergraduate degree in mining engineering and started a brilliant career centered on analyzing the gold and uranium mines in the Witwatersrand conglomerates of South Africa. He became interested in the disharmony between the poor reliability of reserve estimation reports and the magnitude of the economic decisions that were based on these studies. Back at the University of Witwatersrand, he wrote a master's thesis that began a revolution in mining evaluation methods. Krige was not alone in his research. Another mining engineer, Georges Matheron, a Frenchman, thought space data analysis belonged in a separate discipline, just as geophysics is a separate branch from physics. He named the new field geostatistics. Kriging is the name given in geostatistics to a collection of generalized linear regression techniques for the estimation of spatial phenomena. Pierre Carlier, another Frenchman, coined the term krigeage in the late 1950s to honor Krige's seminal work. Matheron anglicized the term to kriging when he published a paper for English-speaking readers. France dominated the development and application of geostatistics for several years. However, geostatistics in general, and kriging in particular, are employed by few and are regarded with apprehension by many. One of the possible applications of kriging is in computer mapping. Computer contouring methods can be grouped into two families: triangulation and gridding. The former is a direct procedure in which the contour lines are computed straight from the data by partitioning the sampling area into triangles with one observation per vertex. Kriging belongs in the gridding family. A grid is a regular arrangement of locations or nodes. In the gridding method the isolines are determined from interpolated values at the nodes. The difference between kriging and other weighting methods is in the calculation of the weights. Even for the simplest form of kriging, the calculations are more demanding. The kriging system of equations differs from classical regression in that the observations are allowed to be correlated and that neither the estimate nor the observations are necessarily points - they may have a volume, shape, and orientation. The mean square error is the average of the squares of the differences between the true and the estimated values. Simple kriging, the most basic form of kriging in that the system of equations has the fewest terms, requires the phenomena to have a constant and known mean. The next step up, ordinary kriging, does not require knowledge of the population mean. The external drift method, universal kriging, and intrinsic kriging go even further by allowing fluctuations in the mean. In practice, estimation by kriging is not as difficult to handle as it may look at first glance. In these days of high technology, all the details in the procedure are coded into computer programs. When properly used, kriging has several appealing attributes, the most important being that it does the work more accurately. By design, kriging provides the weights that result in the minimum mean square error. And yes, there have been people who have tested its superiority with real data. Practice has consistently confirmed theory. Kriging is also robust. Within reasonable limits, kriging tends to persist in yielding correct estimates even when the user selects the wrong model, misspecifies parameters, or both. This property should be an incentive for the novice to try the method. 
Gross misuse of kriging, though, can lead to poor results, worse even than those produced by alternative methods. Kriging has evolved and continues to expand to accommodate the estimation of increasingly demanding realities. Conclusions Theory and practice show that computer contour maps generated using kriging have the least mean square estimation error. In addition, the method provides information to assess the reliability of the maps.
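A minimal ordinary kriging sketch illustrating the points above: the weights come from a bordered covariance system in which the unbiasedness constraint replaces the known mean that simple kriging requires, and the same solve yields the minimized mean square error. The exponential covariance model and its parameters are assumptions for illustration:

```python
import numpy as np

def exp_cov(h, sill=1.0, rng_param=10.0):
    # Exponential covariance model: C(h) = sill * exp(-h / range)
    return sill * np.exp(-h / rng_param)

def ordinary_kriging(xy, z, x0, sill=1.0, rng_param=10.0):
    n = len(z)
    # Covariances among observations and between observations and the target
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    d0 = np.linalg.norm(xy - x0, axis=-1)
    # Kriging system: covariance matrix bordered by the unbiasedness constraint
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(d, sill, rng_param)
    A[n, n] = 0.0
    b = np.append(exp_cov(d0, sill, rng_param), 1.0)
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    estimate = w @ z
    # Minimized mean square error (the kriging variance) at x0
    mse = sill - w @ exp_cov(d0, sill, rng_param) - mu
    return estimate, mse

# Five observations on a plane and an estimate at an unsampled grid node
obs_xy = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0], [5.0, 5.0], [2.0, 2.0]])
obs_z = np.array([1.2, 0.8, 1.5, 0.9, 1.1])
print(ordinary_kriging(obs_xy, obs_z, np.array([2.5, 3.0])))
```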
Estimating Root Mean Square Errors in Remotely Sensed Soil Moisture over Continental Scale Domains
NASA Technical Reports Server (NTRS)
Draper, Clara S.; Reichle, Rolf; de Jeu, Richard; Naeimi, Vahid; Parinussa, Robert; Wagner, Wolfgang
2013-01-01
Root Mean Square Errors (RMSE) in the soil moisture anomaly time series obtained from the Advanced Scatterometer (ASCAT) and the Advanced Microwave Scanning Radiometer (AMSR-E; using the Land Parameter Retrieval Model) are estimated over a continental scale domain centered on North America, using two methods: triple colocation (RMSETC) and error propagation through the soil moisture retrieval models (RMSEEP). In the absence of an established consensus for the climatology of soil moisture over large domains, presenting a RMSE in soil moisture units requires that it be specified relative to a selected reference data set. To avoid the complications that arise from the use of a reference, the RMSE is presented as a fraction of the time series standard deviation (fRMSE). For both sensors, the fRMSETC and fRMSEEP show similar spatial patterns of relatively high/low errors, and the mean fRMSE for each land cover class is consistent with expectations. Triple colocation is also shown to be surprisingly robust to representativity differences between the soil moisture data sets used, and it is believed to accurately estimate the fRMSE in the remotely sensed soil moisture anomaly time series. Comparing the ASCAT and AMSR-E fRMSETC shows that both data sets have very similar accuracy across a range of land cover classes, although the AMSR-E accuracy is more directly related to vegetation cover. In general, both data sets have good skill up to moderate vegetation conditions.
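A sketch of covariance-notation triple collocation under its usual assumptions (three collocated series whose errors are mutually independent), with the fRMSE formed by dividing the estimated error standard deviation by the time series standard deviation; all data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
truth = rng.standard_normal(5000)                 # unknown soil moisture anomaly
# Three collocated products with independent errors (e.g., two sensors, a model)
x = truth + 0.3 * rng.standard_normal(5000)
y = truth + 0.5 * rng.standard_normal(5000)
z = truth + 0.4 * rng.standard_normal(5000)

def tc_error_std(a, b, c):
    # Classical triple collocation: error std of `a` from cross-covariances,
    # assuming the three error series are mutually independent
    cov = np.cov(np.vstack([a, b, c]))
    return np.sqrt(cov[0, 0] - cov[0, 1] * cov[0, 2] / cov[1, 2])

for name, series in (("x", (x, y, z)), ("y", (y, x, z)), ("z", (z, x, y))):
    err = tc_error_std(*series)
    print(name, "RMSE:", round(err, 3), "fRMSE:", round(err / series[0].std(), 3))
```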
Liu, Zifeng; Yuan, Lianxiong; Huang, Yixiang; Zhang, Lingling; Luo, Futian
2016-01-01
Objective We aimed to develop a questionnaire for quantitative evaluation of the autonomy of public hospitals in China. Method An extensive literature review was conducted to select possible items for inclusion in the questionnaire, which was then reviewed by 5 experts. After a two-round Delphi method, we distributed the questionnaire to 404 secondary and tertiary hospitals in Guangdong Province, China, and 379 completed questionnaires were collected. The final questionnaire was then developed on the basis of the results of exploratory and confirmatory factor analysis. Results Analysis suggested that all internal consistency reliabilities exceeded the minimum reliability standard of 0.70 for the α coefficient. The overall scale coefficient was 0.87, and the 6 subscale coefficients were 0.92 (strategic management), 0.81 (budget and expenditure), 0.85 (financing), 0.75 (medical management), 0.86 (human resources) and 0.86 (accountability). Correlation coefficients between and among items and their hypothesised subscales were higher than those with other subscales. The value of average variance extracted (AVE) was higher than 0.5, the value of construct reliability (CR) was higher than 0.7, and the square roots of the AVE of each subscale were larger than the correlation of the specific subscale with the other subscales, supporting the convergent and discriminant validity of the Chinese version of the Hospital Autonomy Questionnaire (CVHAQ). The model fit indices were all acceptable: χ2/df=1.73, Goodness of Fit Index (GFI) = 0.93, Adjusted Goodness of Fit Index (AGFI) = 0.91, Non-Normed Fit Index (NNFI) = 0.96, Comparative Fit Index (CFI) = 0.97, Root Mean Square Error of Approximation (RMSEA) = 0.04, Standardised Root Mean Square Residual (SRMR) = 0.07. Conclusions This study demonstrated the reliability and validity of a CVHAQ and provides a quantitative method for the assessment of hospital autonomy. PMID:26911587
Anandakrishnan, Ramu; Onufriev, Alexey
2008-03-01
In statistical mechanics, the equilibrium properties of a physical system of particles can be calculated as the statistical average over accessible microstates of the system. In general, these calculations are computationally intractable since they involve summations over an exponentially large number of microstates. Clustering algorithms are one of the methods used to numerically approximate these sums. The most basic clustering algorithms first sub-divide the system into a set of smaller subsets (clusters). Then, interactions between particles within each cluster are treated exactly, while all interactions between different clusters are ignored. These smaller clusters have far fewer microstates, making the summation over these microstates tractable. These algorithms have been previously used for biomolecular computations, but remain relatively unexplored in this context. Presented here is a theoretical analysis of the error and computational complexity for the two most basic clustering algorithms that were previously applied in the context of biomolecular electrostatics. We derive a tight, computationally inexpensive, error bound for the equilibrium state of a particle computed via these clustering algorithms. For some practical applications, it is the root mean square error, which can be significantly lower than the error bound, that may be more important. We show that there is a strong empirical relationship between the error bound and the root mean square error, suggesting that the error bound could be used as a computationally inexpensive metric for predicting the accuracy of clustering algorithms for practical applications. An example of error analysis for one such application, the computation of the average charge of ionizable amino acids in proteins, is given, demonstrating that the clustering algorithm can be accurate enough for practical purposes.
Zhang, Bing-Fang; Yuan, Li-Bo; Kong, Qing-Ming; Shen, Wei-Zheng; Zhang, Bing-Xiu; Liu, Cheng-Hai
2014-10-01
In the present study, a new method using near infrared spectroscopy combined with optical fiber sensing technology was applied to the analysis of hogwash oil in blended oil. The 50 samples were blends of frying oil and "nine three" soybean oil in certain volume ratios. The near infrared transmission spectra were collected, and quantitative analysis models for frying oil were established by partial least squares (PLS) and a BP artificial neural network. The coefficients of determination of the calibration sets were 0.908 and 0.934, respectively; the coefficients of determination of the validation sets were 0.961 and 0.952; the root mean square errors of calibration (RMSEC) were 0.184 and 0.136; and the root mean square error of prediction (RMSEP) was 0.1116 for both. These conform to the model application requirements. At the same time, frying oil and qualified edible oil were discriminated with principal component analysis (PCA), and the accuracy rate was 100%. The experiment proved that near infrared spectral technology not only can quickly and accurately identify hogwash oil, but also can quantitatively detect hogwash oil. This method has wide application prospects in the detection of oil.
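A generic PLS calibration/validation sketch of the kind of workflow described, using scikit-learn on synthetic "spectra"; the component count and data split are assumptions, and RMSEC/RMSEP are computed exactly as defined (root mean square error on the calibration and validation sets, respectively):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
# Synthetic "spectra": 50 samples x 200 wavelengths; response = mixture ratio
ratio = rng.uniform(0, 1, 50)
base = rng.standard_normal(200)
spectra = np.outer(ratio, base) + 0.05 * rng.standard_normal((50, 200))

X_cal, X_val, y_cal, y_val = train_test_split(spectra, ratio,
                                              test_size=0.3, random_state=0)

pls = PLSRegression(n_components=5).fit(X_cal, y_cal)

rmsec = np.sqrt(np.mean((pls.predict(X_cal).ravel() - y_cal) ** 2))  # calibration
rmsep = np.sqrt(np.mean((pls.predict(X_val).ravel() - y_val) ** 2))  # prediction
print("RMSEC:", rmsec, "RMSEP:", rmsep)
```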
Noncontact analysis of the fiber weight per unit area in prepreg by near-infrared spectroscopy.
Jiang, B; Huang, Y D
2008-05-26
The fiber weight per unit area in prepreg is an important factor in ensuring the quality of composite products. Near-infrared spectroscopy (NIRS) technology together with a noncontact reflectance source has been applied for quality analysis of the fiber weight per unit area. The range of the unit-area fiber weight was 13.39-14.14 mg cm⁻². The regression methods employed were partial least squares (PLS) and principal components regression (PCR). The calibration model was developed from 55 samples to determine the fiber weight per unit area in prepreg. The determination coefficient (R²), root mean square error of calibration (RMSEC), and root mean square error of prediction (RMSEP) were 0.82, 0.092, and 0.099, respectively. The predicted values of the fiber weight per unit area in prepreg measured by NIRS technology were comparable to the values obtained by the reference method. With this technology, the noncontact reflectance source focused directly on the sample with neither previous treatment nor manipulation. The results of the paired t-test revealed that there was no significant difference between the NIR method and the reference method. Besides, the prepreg could be analyzed within 20 s without sample destruction.
Prediction of valid acidity in intact apples with Fourier transform near infrared spectroscopy.
Liu, Yan-De; Ying, Yi-Bin; Fu, Xia-Ping
2005-03-01
To develop nondestructive acidity prediction for intact Fuji apples, the potential of the Fourier transform near infrared (FT-NIR) method with fiber optics in interactance mode was investigated. Interactance in the 800 nm to 2619 nm region was measured for intact apples, harvested from early to late maturity stages. Spectral data were analyzed by two multivariate calibration techniques including partial least squares (PLS) and principal component regression (PCR) methods. A total of 120 Fuji apples were tested and 80 of them were used to form a calibration data set. The influences of different data preprocessing and spectra treatments were also quantified. Calibration models based on smoothing spectra were slightly worse than those based on derivative spectra, and the best result was obtained when the segment length was 5 nm and the gap size was 10 points. Depending on data preprocessing and the PLS method, the best prediction model yielded a coefficient of determination (r²) of 0.759, a low root mean square error of prediction (RMSEP) of 0.0677, and a low root mean square error of calibration (RMSEC) of 0.0562. The results indicated the feasibility of FT-NIR spectral analysis for predicting apple valid acidity in a nondestructive way.
NASA Astrophysics Data System (ADS)
Mahmood, Rashid; JIA, Shaofeng
2017-11-01
In this study, the linear scaling method used for the downscaling of temperature was extended from monthly scaling factors to daily scaling factors (SFs) to improve the daily variations in the corrected temperature. In the original linear scaling (OLS), mean monthly SFs are used to correct the future data, but mean daily SFs are used to correct the future data in the extended linear scaling (ELS) method. The proposed method was evaluated in the Jhelum River basin for the period 1986-2000, using the observed maximum temperature (Tmax) and minimum temperature (Tmin) of 18 climate stations and the simulated Tmax and Tmin of five global climate models (GCMs) (GFDL-ESM2G, NorESM1-ME, HadGEM2-ES, MIROC5, and CanESM2), and the method was also compared with OLS to assess the improvement. Before the evaluation of ELS, these GCMs were also evaluated using their raw data against the observed data for the same period (1986-2000). Four statistical indicators, i.e., error in mean, error in standard deviation, root mean square error, and correlation coefficient, were used for the evaluation process. The evaluation results with the GCMs' raw data showed that GFDL-ESM2G and MIROC5 performed better than the other GCMs according to all the indicators, but with unsatisfactory results that confine their direct application in the basin. Nevertheless, after the correction with ELS, a noticeable improvement was observed in all the indicators except correlation coefficient, because this method only adjusts (corrects) the magnitude. It was also noticed that the daily variations of the observed data were better captured by the corrected data with ELS than with OLS. Finally, the ELS method was applied for the downscaling of the five GCMs' Tmax and Tmin for the period of 2041-2070 under RCP8.5 in the Jhelum basin. The results showed that the basin would face a hotter climate in the future relative to the present climate, which may result in increasing water requirements in the public, industrial, and agricultural sectors; changes in the hydrological cycle and monsoon pattern; and loss of glaciers in the basin.
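A sketch of the additive linear scaling idea for temperature under stated assumptions (synthetic data, additive correction): OLS applies one mean-bias correction per month, while ELS applies one per calendar day, so a bias that varies within a month survives OLS but not ELS:

```python
import numpy as np

rng = np.random.default_rng(9)
years, doy = 15, np.arange(365)
month = doy // 30                                    # crude month index, for the sketch

# Hypothetical observed and GCM daily Tmax over a historical period (years x days)
seasonal = 15 + 10 * np.sin(2 * np.pi * doy / 365)
obs = seasonal + rng.normal(0, 1, (years, 365))
bias = -2.0 + 1.5 * np.sin(4 * np.pi * doy / 365)    # day-dependent model bias
gcm = obs + bias + rng.normal(0, 0.5, (years, 365))

diff = (obs - gcm).mean(axis=0)                      # mean daily bias over the period

# OLS: one additive scaling factor per month; ELS: one per calendar day
sf_month = np.array([diff[month == m].mean() for m in range(month.max() + 1)])
sf_day = diff

# Correct an independent simulated year with each set of scaling factors
sim = seasonal + bias + rng.normal(0, 1, 365)
target = seasonal                                    # expected observed climate
print("RMSE OLS:", np.sqrt(np.mean((sim + sf_month[month] - target) ** 2)))
print("RMSE ELS:", np.sqrt(np.mean((sim + sf_day - target) ** 2)))
```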
[Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].
Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang
2016-07-12
To explore the effect of the autoregressive integrated moving average model-nonlinear auto-regressive neural network (ARIMA-NARNN) model on predicting schistosomiasis infection rates of population. The ARIMA model, NARNN model and ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China. The fitting and prediction performances of the three models were compared. Compared to the ARIMA model and NARNN model, the mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were the least, with values of 0.0111, 0.0900 and 0.2824, respectively. The ARIMA-NARNN model could effectively fit and predict schistosomiasis infection rates of population, which might have a great application value for the prevention and control of schistosomiasis.
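A hedged sketch of the hybrid idea: an ARIMA model captures the linear structure and a neural network models the remaining nonlinear structure in the residuals. A true NARNN is a recurrent network; an MLP on lagged residuals stands in for it here, and the series, model orders, and lag count are hypothetical:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(10)
# Hypothetical monthly rate series: trend + seasonality + mild nonlinearity
t = np.arange(120)
series = (5 + 0.02 * t + np.sin(2 * np.pi * t / 12)
          + 0.3 * np.tanh(np.sin(t)) + 0.1 * rng.standard_normal(120))
train, test = series[:108], series[108:]

# Stage 1: ARIMA captures the linear structure
arima = ARIMA(train, order=(2, 1, 1)).fit()
resid = arima.resid

# Stage 2: a neural network learns the nonlinear structure left in the residuals
lags = 12
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
y = resid[lags:]
nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X, y)

# Hybrid forecast = ARIMA forecast + NN correction (first step only, for simplicity)
arima_fc = arima.forecast(steps=len(test))
hybrid_fc = arima_fc.copy()
hybrid_fc[0] += nn.predict(resid[-lags:].reshape(1, -1))[0]
print("MSE ARIMA: ", np.mean((arima_fc - test) ** 2))
print("MSE hybrid:", np.mean((hybrid_fc - test) ** 2))
```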
Flood loss model transfer: on the value of additional data
NASA Astrophysics Data System (ADS)
Schröter, Kai; Lüdtke, Stefan; Vogel, Kristin; Kreibich, Heidi; Thieken, Annegret; Merz, Bruno
2017-04-01
The transfer of models across geographical regions and flood events is a key challenge in flood loss estimation. Variations in local characteristics and continuous system changes require regional adjustments and continuous updating with current evidence. However, acquiring data on damage influencing factors is expensive and therefore assessing the value of additional data in terms of model reliability and performance improvement is of high relevance. The present study utilizes empirical flood loss data on direct damage to residential buildings available from computer aided telephone interviews that were carried out after the floods in 2002, 2005, 2006, 2010, 2011 and 2013 mainly in the Elbe and Danube catchments in Germany. Flood loss model performance is assessed for incrementally increased numbers of loss data which are differentiated according to region and flood event. Two flood loss modeling approaches are considered: (i) a multi-variable flood loss model approach using Random Forests and (ii) a uni-variable stage damage function. Both model approaches are embedded in a bootstrapping process which allows evaluating the uncertainty of model predictions. Predictive performance of both models is evaluated with regard to mean bias, mean absolute and mean squared errors, as well as hit rate and sharpness. Mean bias and mean absolute error give information about the accuracy of model predictions; mean squared error and sharpness about precision and hit rate is an indicator for model reliability. The results of incremental, regional and temporal updating demonstrate the usefulness of additional data to improve model predictive performance and increase model reliability, particularly in a spatial-temporal transfer setting.
Modeling number of claims and prediction of total claim amount
NASA Astrophysics Data System (ADS)
Acar, Aslıhan Şentürk; Karabey, Uǧur
2017-07-01
In this study we focus on annual number of claims of a private health insurance data set which belongs to a local insurance company in Turkey. In addition to Poisson model and negative binomial model, zero-inflated Poisson model and zero-inflated negative binomial model are used to model the number of claims in order to take into account excess zeros. To investigate the impact of different distributional assumptions for the number of claims on the prediction of total claim amount, predictive performances of candidate models are compared by using root mean square error (RMSE) and mean absolute error (MAE) criteria.
NASA Astrophysics Data System (ADS)
Shastri, Niket; Pathak, Kamlesh
2018-05-01
The water vapor content of the atmosphere plays a very important role in climate. In this paper the application of GPS signals in meteorology is discussed, a useful technique for estimating the precipitable water vapor of the atmosphere. Various algorithms, including artificial neural networks, support vector machines, and multiple linear regression, are used to predict precipitable water vapor. Comparative studies in terms of root mean square error and mean absolute error are also carried out for all the algorithms.
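A sketch of the comparison described, with scikit-learn stand-ins for the three algorithm families and synthetic predictors (the study's actual GPS-derived inputs are not available here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(11)
# Hypothetical predictors (e.g., zenith wet delay, surface temperature, pressure)
X = rng.uniform(0, 1, (500, 3))
pwv = 40 * X[:, 0] + 5 * np.sin(3 * X[:, 1]) + 2 * X[:, 2] + rng.normal(0, 1, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, pwv, test_size=0.25, random_state=0)

models = {
    "MLR": LinearRegression(),
    "SVM": SVR(C=10.0),
    "ANN": MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    mae = mean_absolute_error(y_te, pred)
    print(f"{name}: RMSE={rmse:.2f} MAE={mae:.2f}")
```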
Smith, Erik A.; Kiesling, Richard L.; Ziegeweid, Jeffrey R.
2017-07-20
Fish habitat can degrade in many lakes due to summer blue-green algal blooms. Predictive models are needed to better manage and mitigate loss of fish habitat due to these changes. The U.S. Geological Survey (USGS), in cooperation with the Minnesota Department of Natural Resources, developed predictive water-quality models for two agricultural land-use dominated lakes in Minnesota—Madison Lake and Pearl Lake, which are part of Minnesota's sentinel lakes monitoring program—to assess algal community dynamics, water quality, and fish habitat suitability of these two lakes under recent (2014) meteorological conditions. The interactions of basin processes with these two lakes, through the delivery of nutrient loads, were simulated using CE-QUAL-W2, a carbon-based, laterally averaged, two-dimensional water-quality model that predicts distribution of temperature and oxygen from interactions between nutrient cycling, primary production, and trophic dynamics. The CE-QUAL-W2 models successfully predicted water temperature and dissolved oxygen on the basis of the two metrics of mean absolute error and root mean square error. For Madison Lake, the mean absolute error and root mean square error were 0.53 and 0.68 degree Celsius, respectively, for the vertical temperature profile comparisons; for Pearl Lake, the mean absolute error and root mean square error were 0.71 and 0.95 degree Celsius, respectively, for the vertical temperature profile comparisons. Temperature and dissolved oxygen were key metrics for calibration targets. These calibrated lake models also simulated algal community dynamics and water quality. The model simulations presented potential explanations for persistently large total phosphorus concentrations in Madison Lake, key differences in nutrient concentrations between these lakes, and summer blue-green algal bloom persistence. Fish habitat suitability simulations for cool-water and warm-water fish indicated that, in general, both lakes contained a large proportion of good-growth habitat and a sustained period of optimal growth habitat in the summer, without any periods of lethal oxythermal habitat. For Madison and Pearl Lakes, examples of important cool-water fish, particularly game fish, include northern pike (Esox lucius), walleye (Sander vitreus), and black crappie (Pomoxis nigromaculatus); examples of important warm-water fish include bluegill (Lepomis macrochirus), largemouth bass (Micropterus salmoides), and smallmouth bass (Micropterus dolomieu). Sensitivity analyses were completed to understand lake response effects through the use of controlled departures on certain calibrated model parameters and input nutrient loads. These sensitivity analyses also operated as land-use change scenarios because alterations in agricultural practices, for example, could potentially increase or decrease nutrient loads.
Parameter Estimation for Thurstone Choice Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vojnovic, Milan; Yun, Seyoung
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so called top-1 lists). This model accommodates the well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
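For the pair-comparison (Bradley-Terry) special case, the maximum likelihood estimator can be computed as logistic regression on item-indicator differences, and the mean squared error of the recovered strengths (after fixing the location convention) is the quantity characterized above. A synthetic sketch:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(12)
n_items, n_pairs = 8, 4000
strength = rng.normal(0, 1, n_items)
strength -= strength.mean()                      # fix the location convention

# Simulate Bradley-Terry pair comparisons: P(i beats j) = sigmoid(w_i - w_j)
i_idx = rng.integers(0, n_items, n_pairs)
j_idx = (i_idx + rng.integers(1, n_items, n_pairs)) % n_items
p_win = 1 / (1 + np.exp(-(strength[i_idx] - strength[j_idx])))
wins = rng.uniform(size=n_pairs) < p_win

# MLE via (essentially unregularized) logistic regression on indicator differences
X = np.zeros((n_pairs, n_items))
X[np.arange(n_pairs), i_idx] = 1.0
X[np.arange(n_pairs), j_idx] = -1.0
mle = LogisticRegression(fit_intercept=False, C=1e6, max_iter=1000).fit(X, wins)
est = mle.coef_.ravel()
est -= est.mean()

print("mean squared error:", np.mean((est - strength) ** 2))
```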
New from the Old - Measuring Coastal Cliff Change with Historical Oblique Aerial Photos
NASA Astrophysics Data System (ADS)
Warrick, J. A.; Ritchie, A.
2016-12-01
Oblique aerial photographs are commonly collected to document coastal landscapes. Here we show that these historical photographs can be used to develop topographic models with Structure-from-Motion (SfM) photogrammetric techniques if adequate photo-to-photo overlaps exist. Focusing on the 60-m high cliffs of Fort Funston, California, photographs from the California Coastal Records Project were combined with ground control points to develop topographic point clouds of the study area for five years between 2002 and 2010. Uncertainties in the results were assessed by comparing SfM-derived point clouds with airborne lidar data, and the differences between these data were related to the number and spatial distribution of ground control points used in the SfM analyses. With six or more ground control points the root mean squared error between the SfM and lidar data was less than 0.3 m (minimum = 0.18 m) and the mean systematic error was consistently less than 0.10 m. Because of the oblique orientation of the imagery, the SfM-derived point clouds provided coverage on vertical to overhanging portions of the cliff, and point densities from the SfM techniques averaged between 17 and 161 points/m2 on the cliff face. The time-series of topographic point clouds revealed many topographic changes, including landslides, rockfalls and the erosion of landslide talus along the Fort Funston beach. Thus, we concluded that historical oblique photographs, such as those generated by the California Coastal Records Project, can provide useful tools for mapping coastal topography and measuring coastal change.
Warrick, Jonathan; Ritchie, Andy; Adelman, Gabrielle; Adelman, Ken; Limber, Patrick W.
2017-01-01
Oblique aerial photograph surveys are commonly used to document coastal landscapes. Here it is shown that adequate overlap may exist in these photographic records to develop topographic models with Structure-from-Motion (SfM) photogrammetric techniques. Using photographs of Fort Funston, California, from the California Coastal Records Project, imagery was combined with ground control points in a four-dimensional analysis that produced topographic point clouds of the study area's cliffs for 5 years spanning 2002 to 2010. Uncertainty was assessed by comparing point clouds with airborne LIDAR data, and these uncertainties were related to the number and spatial distribution of ground control points used in the SfM analyses. With six or more ground control points, the root mean squared errors between the SfM and LIDAR data were less than 0.30 m (minimum = 0.18 m), and the mean systematic error was less than 0.10 m. The SfM results had several benefits over traditional airborne LIDAR in that they included point coverage on vertical-to-overhanging sections of the cliff and resulted in 10–100 times greater point densities. Time series of the SfM results revealed topographic changes, including landslides, rock falls, and the erosion of landslide talus along the Fort Funston beach. Thus, it was concluded that SfM photogrammetric techniques with historical oblique photographs allow for the extraction of useful quantitative information for mapping coastal topography and measuring coastal change. The new techniques presented here are likely applicable to many photograph collections and problems in the earth sciences.
Assessing the accuracy of ANFIS, EEMD-GRNN, PCR, and MLR models in predicting PM2.5
NASA Astrophysics Data System (ADS)
Ausati, Shadi; Amanollahi, Jamil
2016-10-01
Since Sanandaj is considered one of the polluted cities of Iran, prediction of any type of pollution, especially of suspended PM2.5 particles, which are the cause of many diseases, could contribute to the health of society through timely announcements made before PM2.5 increases. In order to predict the PM2.5 concentration in the Sanandaj air, the hybrid model consisting of ensemble empirical mode decomposition and a general regression neural network (EEMD-GRNN), the Adaptive Neuro-Fuzzy Inference System (ANFIS), principal component regression (PCR), and a linear model, multiple linear regression (MLR), were used. In these models the data of suspended PM2.5 particles were the dependent variable, and the air quality data including PM2.5, PM10, SO2, NO2, CO, and O3 and meteorological data including average minimum temperature (Min T), average maximum temperature (Max T), average atmospheric pressure (AP), daily total precipitation (TP), daily relative humidity (RH), and daily wind speed (WS) for the year 2014 in Sanandaj were the independent variables. Among the models used, the EEMD-GRNN model, with values of R2 = 0.90, root mean square error (RMSE) = 4.9218, and mean absolute error (MAE) = 3.4644 in the training phase and values of R2 = 0.79, RMSE = 5.0324, and MAE = 3.2565 in the testing phase, performed best in predicting this phenomenon. It can be concluded that the hybrid models predict PM2.5 concentration more accurately than the linear model.
Huh, S.; Dickey, D.A.; Meador, M.R.; Ruhl, K.E.
2005-01-01
A temporal analysis of the number and duration of exceedences of high- and low-flow thresholds was conducted to determine the number of years required to detect a level shift using data from Virginia, North Carolina, and South Carolina. Two methods were used - ordinary least squares assuming a known error variance and generalized least squares without a known error variance. Using ordinary least squares, the mean number of years required to detect a one standard deviation level shift in measures of low-flow variability was 57.2 (28.6 on either side of the break), compared to 40.0 years for measures of high-flow variability. These means become 57.6 and 41.6 when generalized least squares is used. No significant relations between years and elevation or drainage area were detected (P>0.05). Cluster analysis did not suggest geographic patterns in years related to physiography or major hydrologic regions. Referring to the number of observations required to detect a one standard deviation shift as 'characterizing' the variability, it appears that at least 20 years of record on either side of a shift may be necessary to adequately characterize high-flow variability. A longer streamflow record (about 30 years on either side) may be required to characterize low-flow variability.
Comparison of Sleep Models for Score Fatigue Model Integration
2015-04-01
In order to obtain sleepiness, the Karolinska Sleepiness Scale (KSS) was computed from the model output using equation (8), with a = 10.3... Abbreviations: KSS, Karolinska Sleepiness Scale; MSE, Mean Square Error; St, homeostatic sleep pressure; TPM, Three-Process Model; U, ultradian component
Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata
Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.
2012-01-01
Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
Evaluation of Fast-Time Wake Vortex Prediction Models
NASA Technical Reports Server (NTRS)
Proctor, Fred H.; Hamilton, David W.
2009-01-01
Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.
The in vivo wear resistance of 12 composite resins.
Lang, B R; Bloem, T J; Powers, J M; Wang, R F
1992-09-01
The in vivo wear resistance of 12 composite resins was compared with an amalgam control using a Latin square experimental design. Sixteen edentulous patients wearing specially designed complete dentures formed the experimental population. The Michigan Computer Graphics Measurement System was used to digitize the surfaces of the control and composite resin samples before and after 3-month test periods to obtain wear data. The 12 composite resins selected for this investigation, based on their published composite classification types, were seven fine-particle composites, three blends, and two microfilled composite resins. The Latin square experimental design was found to be valid, with the factor of material being statistically different at the 5% level of significance. Wear was computed as volume loss (mm³/mm²), and all of the composites studied had more wear than the amalgam control (P = .001). After 3 months, the mean (error) wear of the amalgam was 0.028 (0.006). Mean (error) wear values for the 12 composites were ranked from most to least by mean wear volume loss. The absence of any relationship between mean wear volume loss and the volume percentage of filler was confirmed by the correlation coefficient r = -0.158.
Past observable dynamics of a continuously monitored qubit
NASA Astrophysics Data System (ADS)
García-Pintos, Luis Pedro; Dressel, Justin
2017-12-01
Monitoring a quantum observable continuously in time produces a stochastic measurement record that noisily tracks the observable. For a classical process, such noise may be reduced to recover an average signal by minimizing the mean squared error between the noisy record and a smooth dynamical estimate. We show that for a monitored qubit, this usual procedure returns unusual results. While the record seems centered on the expectation value of the observable during causal generation, examining the collected past record reveals that it better approximates a moving-mean Gaussian stochastic process centered at a distinct (smoothed) observable estimate. We show that this shifted mean converges to the real part of a generalized weak value in the time-continuous limit without additional postselection. We verify that this smoothed estimate minimizes the mean squared error even for individual measurement realizations. We go on to show that if a second observable is weakly monitored concurrently, then that second record is consistent with the smoothed estimate of the second observable based solely on the information contained in the first observable record. Moreover, we show that such a smoothed estimate made from incomplete information can still outperform estimates made using full knowledge of the causal quantum state.
Sando, Roy; Chase, Katherine J.
2017-03-23
A common statistical procedure for estimating streamflow statistics at ungaged locations is to develop a relational model between streamflow and drainage basin characteristics at gaged locations using least squares regression analysis; however, least squares regression methods are parametric and make constraining assumptions about the data distribution. The random forest regression method provides an alternative nonparametric method for estimating streamflow characteristics at ungaged sites and requires that the data meet fewer statistical conditions than least squares regression methods.Random forest regression analysis was used to develop predictive models for 89 streamflow characteristics using Precipitation-Runoff Modeling System simulated streamflow data and drainage basin characteristics at 179 sites in central and eastern Montana. The predictive models were developed from streamflow data simulated for current (baseline, water years 1982–99) conditions and three future periods (water years 2021–38, 2046–63, and 2071–88) under three different climate-change scenarios. These predictive models were then used to predict streamflow characteristics for baseline conditions and three future periods at 1,707 fish sampling sites in central and eastern Montana. The average root mean square error for all predictive models was about 50 percent. When streamflow predictions at 23 fish sampling sites were compared to nearby locations with simulated data, the mean relative percent difference was about 43 percent. When predictions were compared to streamflow data recorded at 21 U.S. Geological Survey streamflow-gaging stations outside of the calibration basins, the average mean absolute percent error was about 73 percent.
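A sketch of the workflow with scikit-learn: a random forest maps basin characteristics to a (log-transformed) streamflow statistic and is scored by root mean square error; all variables below are hypothetical stand-ins for the report's basin characteristics:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(13)
# Hypothetical basin characteristics: drainage area, precipitation, elevation
n_sites = 179
basin = np.column_stack([
    rng.lognormal(4, 1, n_sites),       # drainage area (km^2)
    rng.uniform(250, 900, n_sites),     # mean annual precipitation (mm)
    rng.uniform(500, 2500, n_sites),    # mean elevation (m)
])
# A synthetic low-flow statistic with nonlinear dependence on the characteristics
q = (0.01 * basin[:, 0] ** 0.8 * (basin[:, 1] / 500) ** 1.5
     * np.exp(rng.normal(0, 0.3, n_sites)))

X_tr, X_te, y_tr, y_te = train_test_split(basin, np.log(q), test_size=0.25,
                                          random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

# Log-space RMSE converts to an approximate relative (percent) error
rmse_log = np.sqrt(np.mean((pred - y_te) ** 2))
print("approx. RMSE in percent:", 100 * (np.exp(rmse_log) - 1))
```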
NASA Astrophysics Data System (ADS)
Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu
2018-04-01
The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, showing a new physically based quantification of these errors, to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a recently proposed criterion by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor, and using the balance of the mean force equation. It also presents how the residual error evolves in time for a DNS of a plane channel flow, and the influence of the Reynolds number on its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data of similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.
NASA Technical Reports Server (NTRS)
Pierson, W. J., Jr.
1984-01-01
Backscatter measurements at upwind and crosswind are simulated for five incidence angles by means of the SASS-1 model function. The effects of communication noise and attitude errors are simulated by Monte Carlo methods, and the winds are recovered by both the Sum of Squares (SOS) algorithm and a Maximum Likelihood Estimator (MLE). The SOS algorithm is shown to fail for light enough winds at all incidence angles and to fail to show areas of calm because backscatter estimates that were negative or that produced incorrect values of Kp greater than one were discarded. The MLE performs well for all input backscatter estimates and returns calm when both are negative. The use of the SOS algorithm is shown to have introduced errors in the SASS-1 model function that, in part, cancel out the errors that result from using it, but that also cause disagreement with other data sources such as the AAFE circle flight data at light winds. Implications for future scatterometer systems are given.
Highly Efficient Compression Algorithms for Multichannel EEG.
Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda
2018-05-01
The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques for single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low-complexity bivariate model, have been examined and their performances compared. Furthermore, a high-compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with existing methods, and in some cases up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that data storage and transmission bandwidth can be used effectively. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean-square distortion, peak signal-to-noise ratio, root-mean-square error, and cross-correlation, show their superiority over state-of-the-art compression methods.
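The four distortion measures named above are simple to compute. The sketch below evaluates them for a toy quantized reconstruction of a synthetic signal, not a real EEG recording, under the common convention that PRD is the percent root-mean-square difference relative to the signal energy.

```python
# Minimal sketch: PRD, PSNR, RMSE, and cross-correlation between a signal
# and its reconstruction (here, a crude quantization stands in for a codec).
import numpy as np

def quality_metrics(x, x_rec):
    err = x - x_rec
    rmse = np.sqrt(np.mean(err ** 2))
    prd = 100.0 * np.sqrt(np.sum(err ** 2) / np.sum(x ** 2))  # percent RMS distortion
    psnr = 20.0 * np.log10(np.max(np.abs(x)) / rmse) if rmse > 0 else np.inf
    cc = np.corrcoef(x, x_rec)[0, 1]
    return rmse, prd, psnr, cc

x = np.sin(np.linspace(0, 20, 1000)) + 0.1 * np.random.default_rng(1).normal(size=1000)
x_rec = np.round(x, 2)  # toy "compressed" reconstruction via quantization
print(quality_metrics(x, x_rec))
```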
Improved estimation of anomalous diffusion exponents in single-particle tracking experiments
NASA Astrophysics Data System (ADS)
Kepten, Eldad; Bronshtein, Irena; Garini, Yuval
2013-05-01
The mean square displacement is a central tool in the analysis of single-particle tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time averages on single-particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short-time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence, we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time-averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. The motion of telomeres is found to be subdiffusive with an average exponent constant in time. Individual telomere exponents are normally distributed around the average exponent. The proposed methodology has the potential to improve experimental accuracy while maintaining lower experimental costs and complexity.
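As a concrete illustration of the time-averaged analysis described above, here is a minimal sketch, not the authors' corrected estimator, that computes the time-averaged MSD of a single trajectory and fits the anomalous exponent on a log-log scale; the trajectory is simulated ordinary Brownian motion, so the fitted exponent should be near 1.

```python
# Time-averaged MSD of one trajectory plus a log-log fit of MSD(t) ~ A t^alpha.
# The paper's corrections for measurement noise and heterogeneity are omitted.
import numpy as np

def time_averaged_msd(traj, max_lag):
    # traj: (T, d) array of positions; returns MSD for lags 1..max_lag
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=(5000, 2)), axis=0)  # ordinary diffusion
msd = time_averaged_msd(traj, 50)
lags = np.arange(1, 51)
alpha, log_A = np.polyfit(np.log(lags), np.log(msd), 1)
print(f"estimated anomalous exponent alpha = {alpha:.2f}")
```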
Lanchon, Cecilia; Custillon, Guillaume; Moreau-Gaudry, Alexandre; Descotes, Jean-Luc; Long, Jean-Alexandre; Fiard, Gaelle; Voros, Sandrine
2016-07-01
To guide the surgeon during laparoscopic or robot-assisted radical prostatectomy, an innovative laparoscopic/ultrasound fusion platform was developed using a motorized 3-dimensional transurethral ultrasound probe. We present what is to our knowledge the first preclinical evaluation of 3-dimensional prostate visualization using transurethral ultrasound and the preliminary results of this new augmented reality. The transurethral probe and laparoscopic/ultrasound registration were tested on realistic prostate phantoms made of standard polyvinyl chloride. The quality of transurethral ultrasound images and the detection of passive markers placed on the prostate surface were evaluated on 2-dimensional dynamic views and 3-dimensional reconstructions. The feasibility, precision and reproducibility of laparoscopic/transurethral ultrasound registration were then determined using 4, 5, 6 and 7 markers to assess the optimal number needed. The root mean square error was calculated for each registration, and the median root mean square error and IQR were calculated according to the number of markers. The transurethral ultrasound probe was easy to manipulate and the prostatic capsule was well visualized in 2 and 3 dimensions. Passive markers could be localized precisely in the volume. Laparoscopic/transurethral ultrasound registration procedures were performed on 74 phantoms of various sizes and shapes. All were successful. The median root mean square error of 1.1 mm (IQR 0.8-1.4) was significantly associated with the number of landmarks (p = 0.001). The highest accuracy was achieved using 6 markers. However, prostate volume did not affect registration precision. Transurethral ultrasound provided high quality prostate reconstruction and easy marker detection. Laparoscopic/ultrasound registration was successful with acceptable millimeter precision. Further investigations are necessary to achieve submillimeter accuracy and assess feasibility in a human model. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Soni, Kirti; Parmar, Kulwinder Singh; Kapoor, Sangeeta; Kumar, Nishant
2016-05-15
Many studies in the Aerosol Optical Depth (AOD) literature use data derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), but the accuracy of satellite data in comparison to ground data derived from the AErosol RObotic NETwork (AERONET) has always been questionable. To address this, a comparative study of comprehensive ground-based and satellite data for the period 2001-2012 is modeled. A time series model is used for accurate prediction of AOD, and statistical variability is compared to assess the performance of the model in both cases. Root mean square error (RMSE), mean absolute percentage error (MAPE), stationary R-squared, R-squared, maximum absolute percentage error (MaxAPE), normalized Bayesian information criterion (NBIC) and the Ljung-Box test are used to check the applicability and validity of the developed ARIMA models, revealing significant precision in model performance. It was found that it is possible to predict AOD by statistical modeling using time series of past MODIS and AERONET data as input. Moreover, the results show that MODIS values can be obtained from AERONET data by adding 0.251627 ± 0.133589, and vice versa by subtracting. From the forecast of AOD for the next four years (2013-2017) using the developed ARIMA model, it is concluded that the forecasted ground AOD has an increasing trend. Copyright © 2016 Elsevier B.V. All rights reserved.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing
2018-01-15
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization algorithm (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
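As an illustration of PSO-driven hyper-parameter selection for SVM regression, the sketch below runs a bare-bones PSO over (C, γ) of an SVR on synthetic data. The natural-selection and simulated-annealing refinements that distinguish NAPSO are omitted, so this shows only the baseline PSO-SVM idea.

```python
# Minimal PSO searching SVR hyper-parameters (log10 C, log10 gamma).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 4))          # stand-in for sensor error series
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=200)

def fitness(p):                                 # maximize cross-validated R^2
    return cross_val_score(SVR(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=3).mean()

n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
pos = rng.uniform(-2, 2, size=(n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_f)]
for _ in range(15):
    r1, r2 = rng.uniform(size=(2, n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -2, 2)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmax(pbest_f)]
print("best (C, gamma):", 10 ** gbest)
```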
Stevens, Antoine; Nocita, Marco; Tóth, Gergely; Montanarella, Luca; van Wesemael, Bas
2013-01-01
Soil organic carbon is a key soil property related to soil fertility, aggregate stability and the exchange of CO2 with the atmosphere. Existing soil maps and inventories can rarely be used to monitor the state and evolution in soil organic carbon content due to their poor spatial resolution, lack of consistency and high updating costs. Visible and Near Infrared diffuse reflectance spectroscopy is an alternative method to provide cheap and high-density soil data. However, there are still some uncertainties on its capacity to produce reliable predictions for areas characterized by large soil diversity. Using a large-scale EU soil survey of about 20,000 samples and covering 23 countries, we assessed the performance of reflectance spectroscopy for the prediction of soil organic carbon content. The best calibrations achieved a root mean square error ranging from 4 to 15 g C kg⁻¹ for mineral soils and a root mean square error of 50 g C kg⁻¹ for organic soil materials. Model errors are shown to be related to the levels of soil organic carbon and variations in other soil properties such as sand and clay content. Although errors are ∼5 times larger than the reproducibility error of the laboratory method, reflectance spectroscopy provides unbiased predictions of the soil organic carbon content. Such estimates could be used for assessing the mean soil organic carbon content of large geographical entities or countries. This study is a first step towards providing uniform continental-scale spectroscopic estimations of soil organic carbon, meeting an increasing demand for information on the state of the soil that can be used in biogeochemical models and the monitoring of soil degradation.
Reconstruction of regional mean temperature for East Asia since 1900s and its uncertainties
NASA Astrophysics Data System (ADS)
Hua, W.
2017-12-01
Regional average surface air temperature (SAT) is one of the key variables used to investigate climate change. Unfortunately, because of the limited observations over East Asia, there are gaps in the observational sampling available for regional mean SAT analysis, which is important for estimating past climate change. In this study, the regional average temperature of East Asia since the 1900s is calculated by an Empirical Orthogonal Function (EOF)-based optimal averaging (OA) method that takes the data errors into account. The results show that our estimate is more precise and robust than results from a simple average, which provides a better way for past climate reconstruction. In addition to the reconstructed regional average SAT anomaly time series, we also estimated the uncertainties of the reconstruction. The root mean square error (RMSE) results show that the error decreases with time and is not sufficiently large to alter the conclusions on the persistent warming in East Asia during the twenty-first century. Moreover, a test of the influence of data error on the reconstruction clearly shows the sensitivity of the reconstruction to the size of the data error.
Refractive index and birefringence of 2H silicon carbide.
NASA Technical Reports Server (NTRS)
Powell, J. A.
1972-01-01
Measurement of the refractive indices of 2H SiC over the wavelength range from 435.8 to 650.9 nm by the method of minimum deviation. A curve fit of the experimental data to the Cauchy dispersion equation yielded, for the ordinary index, n_o = 2.5513 + 25,850/λ² + 8.928 × 10⁸/λ⁴, and, for the extraordinary index, n_e = 2.6161 + 28,230/λ² + 11.490 × 10⁸/λ⁴, where λ is expressed in nm. The estimated error (standard deviation) in these values is ±0.0006 for n_o and ±0.0009 for n_e. The birefringence calculated from these expressions is about 20% less than previously published values.
Geodesy by radio interferometry - Water vapor radiometry for estimation of the wet delay
NASA Technical Reports Server (NTRS)
Elgered, G.; Davis, J. L.; Herring, T. A.; Shapiro, I. I.
1991-01-01
An important source of error in VLBI estimates of baseline length is unmodeled variations of the refractivity of the neutral atmosphere along the propagation path of the radio signals. This paper presents and discusses the method of using data from a water vapor radiometer (WVR) to correct for the propagation delay caused by atmospheric water vapor, the major cause of these variations. Data from different WVRs are compared with estimated propagation delays obtained by Kalman filtering of the VLBI data themselves. The consequences of using either WVR data or Kalman filtering to correct for atmospheric propagation delay at the Onsala VLBI site are investigated by studying the repeatability of estimated baseline lengths from Onsala to several other sites. The repeatability obtained for baseline length estimates shows that the methods of water vapor radiometry and Kalman filtering offer comparable accuracies when applied to VLBI observations obtained in the climate of the Swedish west coast. For the most frequently measured baseline in this study, the use of WVR data yielded a 13 percent smaller weighted-root-mean-square (WRMS) scatter of the baseline length estimates compared to the use of a Kalman filter. It is also clear that the 'best' minimum elevation angle for VLBI observations depends on the accuracy of the determinations of the total propagation delay to be used, since the error in this delay increases with increasing air mass.
Zhong, Xinke; Huo, Xing; Ren, Chao; Labed, Jelila; Li, Zhao-Liang
2016-01-01
Land Surface Temperature (LST) is a key parameter in climate systems. The methods for retrieving LST from hyperspectral thermal infrared data either require accurate atmospheric profile data or require thousands of continuous channels. We aim to retrieve LST for natural land surfaces from hyperspectral thermal infrared data using an adapted multi-channel method that takes Land Surface Emissivity (LSE) properly into consideration. In the adapted method, LST is retrieved by a linear function of 36 brightness temperatures at the Top of Atmosphere (TOA), using channels where LSE has high values. We evaluated the adapted method using simulation data at nadir and satellite data near nadir. The Root Mean Square Error (RMSE) of the LST retrieved from the simulation data is 0.90 K. Compared with an LST product from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on Meteosat, the error in the LST retrieved from the Infrared Atmospheric Sounding Interferometer (IASI) is approximately 1.6 K. The adapted method can be used for near-real-time production of an LST product and to provide a first-guess LST value for the physical method that simultaneously retrieves atmospheric profiles, LST, and LSE. The limitations of the adapted method are that it requires the minimum LSE in the spectral interval of 800–950 cm−1 to be larger than 0.95 and that it has not been extended to off-nadir measurements. PMID:27187408
Accommodative Behavior of Young Eyes Wearing Multifocal Contact Lenses.
Altoaimi, Basal H; Almutairi, Meznah S; Kollbaum, Pete S; Bradley, Arthur
2018-05-01
The effectiveness of multifocal contact lenses (MFCLs) at slowing myopia progression may hinge on the accommodative behavior of young eyes fit with these presbyopic style lenses. Can they remove hyperopic defocus? Convergence accommodation as well as pupil size and the zonal geometry are likely to contribute to the final accommodative responses. The aim of this study was to examine the accommodation behavior of young adult eyes wearing MFCLs and the effectiveness of these MFCLs at removing foveal hyperopic defocus when viewing near targets binocularly. Using a high-resolution Shack-Hartmann aberrometer, accommodation and pupil behavior of eight young adults (27.25 ± 2.05 years) were measured while subjects fixated a 20/40 character positioned between 2 m and 20 cm (0.50 to 5.00 diopters [D]) in 0.25-D steps. Refractive states were measured while viewing binocularly and monocularly with single-vision and both center-distance and center-near +2.00 D add MFCLs. Refractive state was defined using three criteria: the dioptric power that would (1) minimize the root mean square wavefront error, (2) focus the pupil center, and (3) provide the peak image quality. Refractive state pupil maps reveal the complex optics that exist in eyes wearing MFCLs. Reduced accommodative gain beyond the far point of the near add revealed that young subjects used the added plus power to help focus near targets. During accommodation to stimuli closer than the far point generated by the add power, a midperipheral region of the pupil was approximately focused, resulting in the smallest accommodative errors for the minimum root mean square-defined measures of refractive state. Paraxial images were always hyperopically or myopically defocused in eyes viewing binocularly with center-distance or center-near MFCLs, respectively. Because of zone geometry in the concentric MFCLs tested, the highly aberrated transition zone between the distance and near optics contributed a significant proportion and sometimes the majority of light to the resulting images. Young eyes fit with MFCLs containing significant transition zones accommodated to focus pupil regions between the near and distance optics, which resulted in less than optimal retinal image quality and myopic or hyperopic defocus in either the pupil center or pupil margins.
August Median Streamflow on Ungaged Streams in Eastern Aroostook County, Maine
Lombard, Pamela J.; Tasker, Gary D.; Nielsen, Martha G.
2003-01-01
Methods for estimating August median streamflow were developed for ungaged, unregulated streams in the eastern part of Aroostook County, Maine, with drainage areas from 0.38 to 43 square miles and mean basin elevations from 437 to 1,024 feet. Few long-term, continuous-record streamflow-gaging stations with small drainage areas were available from which to develop the equations; therefore, 24 partial-record gaging stations were established in this investigation. A mathematical technique for estimating a standard low-flow statistic, August median streamflow, at partial-record stations was applied by relating base-flow measurements at these stations to concurrent daily flows at nearby long-term, continuous-record streamflow-gaging stations (index stations). Generalized least-squares regression analysis (GLS) was used to relate estimates of August median streamflow at gaging stations to basin characteristics at these same stations to develop equations that can be applied to estimate August median streamflow on ungaged streams. GLS accounts for varying periods of record at the gaging stations and the cross correlation of concurrent streamflows among gaging stations. Twenty-three partial-record stations and one continuous-record station were used for the final regression equations. The basin characteristics of drainage area and mean basin elevation are used in the calculated regression equation for ungaged streams to estimate August median flow. The equation has an average standard error of prediction from -38 to 62 percent. A one-variable equation uses only drainage area to estimate August median streamflow when less accuracy is acceptable. This equation has an average standard error of prediction from -40 to 67 percent. Model error is larger than sampling error for both equations, indicating that additional basin characteristics could be important to improved estimates of low-flow statistics. Weighted estimates of August median streamflow, which can be used when making estimates at partial-record or continuous-record gaging stations, range from 0.03 to 11.7 cubic feet per second or from 0.1 to 0.4 cubic feet per second per square mile. Estimates of August median streamflow on ungaged streams in the eastern part of Aroostook County, within the range of acceptable explanatory variables, range from 0.03 to 30 cubic feet per second or 0.1 to 0.7 cubic feet per second per square mile. Estimates of August median streamflow per square mile of drainage area generally increase as mean elevation and drainage area increase.
Liang, Hao; Gao, Lian; Liang, Bingyu; Huang, Jiegang; Zang, Ning; Liao, Yanyan; Yu, Jun; Lai, Jingzhen; Qin, Fengxiang; Su, Jinming; Ye, Li; Chen, Hui
2016-01-01
Background: Hepatitis is a serious public health problem with increasing cases and property damage in Heng County. It is necessary to develop a model to predict the hepatitis epidemic that could be useful for preventing this disease. Methods: The autoregressive integrated moving average (ARIMA) model and the generalized regression neural network (GRNN) model were used to fit the incidence data from the Heng County CDC (Center for Disease Control and Prevention) from January 2005 to December 2012. Then, the ARIMA-GRNN hybrid model was developed. The incidence data from January 2013 to December 2013 were used to validate the models. Several measures, including mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and mean square error (MSE), were used to compare performance among the three models. Results: The morbidity of hepatitis from January 2005 to December 2012 showed seasonal variation and a slightly rising trend. The ARIMA(0,1,2)(1,1,1)₁₂ model was the most appropriate one, with the residual test showing a white noise sequence. The smoothing factors of the basic GRNN model and the combined model were 1.8 and 0.07, respectively. All four error measures of the hybrid model were lower than those of the two single models in the validation, while the GRNN model had the lowest values in the fitting of the three models. Conclusions: The hybrid ARIMA-GRNN model showed better hepatitis incidence forecasting in Heng County than the single ARIMA model and the basic GRNN model. It is a potential decision-support tool for controlling hepatitis in Heng County. PMID:27258555
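For readers who want to reproduce the seasonal ARIMA component, the sketch below fits the ARIMA(0,1,2)(1,1,1)₁₂ structure with statsmodels on a synthetic monthly series and computes the error measures used above. The GRNN residual-correction stage of the hybrid is not shown, and the data are invented, not the Heng County series.

```python
# Minimal seasonal ARIMA fit plus hold-out error measures (MAE, RMSE, MAPE).
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
t = np.arange(108)   # nine years of monthly data, synthetic
y = 5 + 0.02 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.5, size=108)

train, test = y[:96], y[96:]
model = SARIMAX(train, order=(0, 1, 2), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
forecast = model.forecast(steps=12)

mae = np.mean(np.abs(test - forecast))
rmse = np.sqrt(np.mean((test - forecast) ** 2))
mape = 100 * np.mean(np.abs((test - forecast) / test))
print(f"MAE={mae:.3f} RMSE={rmse:.3f} MAPE={mape:.2f}%")
```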
Helical tomotherapy setup variations in canine nasal tumor patients immobilized with a bite block.
Kubicek, Lyndsay N; Seo, Songwon; Chappell, Richard J; Jeraj, Robert; Forrest, Lisa J
2012-01-01
The purpose of our study was to compare setup variation in four degrees of freedom (vertical, longitudinal, lateral, and roll) between canine nasal tumor patients immobilized with a mattress and bite block versus a mattress alone. Our secondary aim was to define a clinical target volume (CTV) to planning target volume (PTV) expansion margin based on our mean systematic error values for nasal tumor patients immobilized with a mattress and bite block. We evaluated six parameters for setup corrections: systematic error, random error, patient-to-patient variation in systematic errors, the magnitude of patient-specific random errors (root mean square [RMS]), distance error, and the variation of setup corrections from zero shift. The variations in all parameters were statistically smaller in the group immobilized with a mattress and bite block. The mean setup corrections in the mattress and bite block group ranged from 0.91 mm to 1.59 mm for the translational errors and were 0.5° for roll. Although most veterinary radiation facilities do not have access to image-guided radiotherapy (IGRT), we identified a need for more rigid fixation, established the value of adding IGRT to veterinary radiation therapy, and defined the CTV-PTV setup error margin for canine nasal tumor patients immobilized in a mattress and bite block. © 2012 Veterinary Radiology & Ultrasound.
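The error components listed above follow the standard radiotherapy conventions. A minimal sketch of that decomposition for one translational axis is given below; the shift values are purely illustrative numbers, not the study's data.

```python
# Decompose per-fraction setup corrections into the group mean (M),
# patient-to-patient systematic variation (Sigma), pooled random error
# (sigma), and per-patient RMS, for one axis.
import numpy as np

# rows: patients; columns: per-fraction setup corrections (mm), illustrative
shifts = np.array([[1.2, 0.8, 1.5, 1.1],
                   [0.3, 0.9, 0.5, 0.7],
                   [2.0, 1.6, 1.9, 2.2]])

patient_means = shifts.mean(axis=1)
M = patient_means.mean()                                    # group systematic error
Sigma = patient_means.std(ddof=1)                           # SD of patient means
sigma = np.sqrt((shifts.std(axis=1, ddof=1) ** 2).mean())   # pooled random error
rms = np.sqrt((shifts ** 2).mean(axis=1))                   # patient-specific RMS
print(M, Sigma, sigma, rms)
```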
Arterial Blood Flow Measurement Using Digital Subtraction Angiography (DSA)
NASA Astrophysics Data System (ADS)
Swanson, David K.; Myerowitz, P. David; Van Lysel, Michael S.; Peppler, Walter W.; Fields, Barry L.; Watson, Kim M.; O'Connor, Julia
1984-08-01
Standard angiography demonstrates the anatomy of arterial occlusive disease but not its physiological significance. Using intravenous digital subtraction angiography (DSA), we investigated transit-time videodensitometric techniques for measuring femoral arterial flows in dogs. These methods have been successfully applied to intraarterial DSA but not to intravenous DSA. Eight 20 kg dogs were instrumented with an electromagnetic flow probe and a balloon occluder above an imaged segment of femoral artery. 20 cc of Renografin 76 was power-injected at 15 cc/sec into the right atrium. Flow in the femoral artery was varied by partial balloon occlusion or by peripheral dilatation following induced ischemia, resulting in 51 flow measurements varying from 15 to 270 cc/min. Three different transit-time techniques were studied: cross-correlation, mean square error, and two leading-edge methods. Correlation between videodensitometry and flowmeter measurements using these different techniques ranged from 0.78 to 0.88 with a mean square error of 29 to 37 cc/min. Blood flow information using several different transit-time techniques can be obtained with intravenous DSA.
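Of the transit-time techniques compared, cross-correlation is the simplest to sketch: shift one densitometric curve against the other and take the lag that maximizes their correlation. The code below does this on synthetic gamma-variate bolus curves; the frame rate and curve shapes are assumptions, not the study's data.

```python
# Estimate transit time between two contrast time-density curves via the
# cross-correlation peak; flow then follows as segment volume / transit time.
import numpy as np

fs = 30.0                        # assumed video frame rate (frames/s)
t = np.arange(0, 10, 1 / fs)

def gamma_variate(t, t0):        # simple contrast bolus shape
    s = np.clip(t - t0, 0, None)
    return s ** 2 * np.exp(-s / 0.8)

upstream = gamma_variate(t, 1.0)
downstream = gamma_variate(t, 1.8)   # true delay: 0.8 s

xc = np.correlate(downstream - downstream.mean(),
                  upstream - upstream.mean(), mode="full")
lag = np.argmax(xc) - (len(t) - 1)   # positive lag = downstream delayed
print(f"estimated transit time: {lag / fs:.2f} s")
```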
NASA Astrophysics Data System (ADS)
Lima, Aranildo R.; Hsieh, William W.; Cannon, Alex J.
2017-12-01
In situations where new data arrive continually, online learning algorithms are computationally much less costly than batch learning ones in maintaining the model up-to-date. The extreme learning machine (ELM), a single hidden layer artificial neural network with random weights in the hidden layer, is solved by linear least squares, and has an online learning version, the online sequential ELM (OSELM). As more data become available during online learning, information on the longer time scale becomes available, so ideally the model complexity should be allowed to change, but the number of hidden nodes (HN) remains fixed in OSELM. A variable complexity VC-OSELM algorithm is proposed to dynamically add or remove HN in the OSELM, allowing the model complexity to vary automatically as online learning proceeds. The performance of VC-OSELM was compared with OSELM in daily streamflow predictions at two hydrological stations in British Columbia, Canada, with VC-OSELM significantly outperforming OSELM in mean absolute error, root mean squared error and Nash-Sutcliffe efficiency at both stations.
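Since the ELM's only trained parameters are the output weights, its batch form fits in a few lines. The sketch below uses synthetic inputs as stand-ins for lagged streamflow predictors and omits the recursive least-squares update that OSELM applies as new data arrive.

```python
# Minimal batch ELM: random hidden layer, output weights by linear least squares.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))          # stand-in predictors
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.01 * rng.normal(size=500)

n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))    # random hidden weights, never trained
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                         # hidden-layer outputs
beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # linear least squares solve

y_hat = H @ beta
print("train RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```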
[Adaptability of APSIM model in Southwestern China: A case study of winter wheat in Chongqing City].
Dai, Tong; Wang, Jing; He, Di; Zhang, Jian-ping; Wang, Na
2015-04-01
Field experimental data for winter wheat and parallel daily meteorological data at four typical stations in Chongqing City were used to calibrate and validate the APSIM-wheat model and to determine the genetic parameters for 12 varieties of winter wheat. The results showed good agreement between simulated and observed growth periods from sowing to emergence, flowering and maturity of wheat. Root mean squared errors (RMSEs) between simulated and observed emergence, flowering and maturity were 0-3, 1-8, and 0-8 d, respectively. Normalized root mean squared errors (NRMSEs) between simulated and observed above-ground biomass for the 12 study varieties were less than 30%. NRMSEs between simulated and observed yields for 10 of the 12 study varieties were less than 30%. The APSIM-wheat model performed well in simulating phenology, above-ground biomass and yield of winter wheat in Chongqing City, which could provide foundational support for model-based assessment of the impact of climate change on wheat production in the study area.
Colson, B.E.
1986-01-01
In 1964 the U.S. Geological Survey in Mississippi expanded the small-stream gaging network for collection of rainfall and runoff data to 92 stations. To expedite availability of flood-frequency information, a rainfall-runoff model using available long-term rainfall data was calibrated to synthesize flood peaks. Results obtained from observed annual peak-flow data for 51 sites having 16 to 30 years of annual peaks are compared with the synthetic results. Graphical comparison of the 2, 5, 10, 25, 50, and 100-year flood discharges indicates good agreement. The root mean square error ranges from 27% to 38% and the synthetic-record bias from -9% to -18% in comparison with the observed record. The reduced variance in the synthetic results is attributed to the use of only four long-term rainfall records and to model limitations. The root mean square error and bias are within the accuracy considered to be satisfactory. (Author's abstract)
Radon-222 concentrations in ground water and soil gas on Indian reservations in Wisconsin
DeWild, John F.; Krohelski, James T.
1995-01-01
For sites with wells finished in the sand and gravel aquifer, the coefficient of determination (R²) of the regression of radon-222 concentration in ground water as a function of well depth is 0.003 and the significance level is 0.32, which indicates that there is not a statistically significant relation between radon-222 concentrations in ground water and well depth. The coefficient of determination of the regression of radon-222 in ground water on radon-222 in soil gas is 0.19 and the root mean square error of the regression line is 271 picocuries per liter. Even though the significance level (0.036) indicates a statistical relation, the root mean square error of the regression is so large that the regression equation would not give reliable predictions. Because of an inadequate number of samples, similar statistical analyses could not be performed for sites with wells finished in the crystalline and sedimentary bedrock aquifers.
Ernst, Dominique; Köhler, Jürgen
2013-01-21
We provide experimental results on the accuracy of diffusion coefficients obtained by a mean squared displacement (MSD) analysis of single-particle trajectories. We have recorded very long trajectories comprising more than 1.5 × 10⁵ data points and decomposed these long trajectories into shorter segments, providing us with ensembles of trajectories of variable lengths. This enabled a statistical analysis of the resulting MSD curves as a function of the lengths of the segments. We find that the relative error of the diffusion coefficient can be minimized by taking an optimum number of points into account when fitting the MSD curves, and that this optimum does not depend on the segment length. Yet the magnitude of the relative error for the diffusion coefficient does, and achieving an accuracy on the order of 10% requires the recording of trajectories with about 1000 data points. Finally, we compare our results with theoretical predictions and find very good qualitative and quantitative agreement between experiment and theory.
Climate Change and Its Impact on the Yield of Major Food Crops: Evidence from Pakistan
Ali, Sajjad; Liu, Ying; Ishaq, Muhammad; Shah, Tariq; Abdullah; Ilyas, Aasir; Din, Izhar Ud
2017-01-01
Pakistan is vulnerable to climate change, and extreme climatic conditions are threatening food security. This study examines the effects of climate change (e.g., maximum temperature, minimum temperature, rainfall, relative humidity, and sunshine) on the major crops of Pakistan (wheat, rice, maize, and sugarcane). The methods of feasible generalized least squares (FGLS) and heteroscedasticity- and autocorrelation-consistent (HAC) standard errors were employed using time series data for the period 1989 to 2015. The results of the study reveal that maximum temperature adversely affects wheat production, while the effect of minimum temperature is positive and significant for all crops. The effect of rainfall on the yield of the selected crops is negative, except for wheat. To cope with and mitigate the adverse effects of climate change, there is a need for the development of heat- and drought-resistant high-yielding varieties to ensure food security in the country. PMID:28538704
Segmentation-based L-filtering of speckle noise in ultrasonic images
NASA Astrophysics Data System (ADS)
Kofidis, Eleftherios; Theodoridis, Sergios; Kotropoulos, Constantine L.; Pitas, Ioannis
1994-05-01
We introduce segmentation-based L-filters, that is, filtering processes combining segmentation and (nonadaptive) optimum L-filtering, and use them for the suppression of speckle noise in ultrasonic (US) images. With the aid of a suitable modification of the learning vector quantizer self-organizing neural network, the image is segmented into regions of approximately homogeneous first-order statistics. For each such region a minimum mean-squared error L-filter is designed on the basis of a multiplicative noise model, using the histogram of grey values as an estimate of the parent distribution of the noisy observations and a suitable estimate of the original signal in the corresponding region. Thus, we obtain a bank of L-filters that correspond to and operate on different image regions. Simulation results on a simulated US B-mode image of a tissue-mimicking phantom are presented which verify the superiority of the proposed method over a number of conventional filtering strategies in terms of a suitably defined signal-to-noise ratio measure and detection-theoretic performance measures.
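An L-filter is a weighted sum of the order statistics within each sliding window. The sketch below applies one with fixed trimmed-mean weights to a signal with multiplicative noise; the paper's actual contribution, the MMSE design of region-specific weight vectors from segmented histograms, is not reproduced here.

```python
# Apply an L-filter: sort each window, then take a weighted sum of the
# order statistics. Trimmed-mean weights discard the window min and max.
import numpy as np

def l_filter(signal, weights):
    n = len(weights)
    out = signal.astype(float).copy()
    for i in range(len(signal) - n + 1):
        window = np.sort(signal[i:i + n])       # order statistics
        out[i + n // 2] = window @ weights
    return out

weights = np.array([0.0, 0.25, 0.5, 0.25, 0.0])  # illustrative trimmed mean
rng = np.random.default_rng(0)
clean = np.concatenate([np.ones(50), 2 * np.ones(50)])   # two "regions"
noisy = clean * (1.0 + 0.2 * rng.normal(size=100))       # multiplicative noise
print(l_filter(noisy, weights)[:5])
```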
Prediction of acoustic feature parameters using myoelectric signals.
Lee, Ki-Seung
2010-07-01
It is well-known that a clear relationship exists between human voices and myoelectric signals (MESs) from the area of the speaker's mouth. In this study, we utilized this information to implement a speech synthesis scheme in which MES alone was used to predict the parameters characterizing the vocal-tract transfer function of specific speech signals. Several feature parameters derived from MES were investigated to find the optimal feature for maximization of the mutual information between the acoustic and the MES features. After the optimal feature was determined, an estimation rule for the acoustic parameters was proposed, based on a minimum mean square error (MMSE) criterion. In a preliminary study, 60 isolated words were used for both objective and subjective evaluations. The results showed that the average Euclidean distance between the original and predicted acoustic parameters was reduced by about 30% compared with the average Euclidean distance of the original parameters. The intelligibility of the synthesized speech signals using the predicted features was also evaluated. A word-level identification ratio of 65.5% and a syllable-level identification ratio of 73% were obtained through a listening test.
Segmenting overlapping nano-objects in atomic force microscopy image
NASA Astrophysics Data System (ADS)
Wang, Qian; Han, Yuexing; Li, Qing; Wang, Bing; Konagaya, Akihiko
2018-01-01
Recently, techniques for nanoparticles have been developed rapidly for various fields, such as materials science, medicine, and biology. In particular, image processing methods have been widely used to analyze nanoparticles automatically. A technique to automatically segment overlapping nanoparticles with image processing and machine learning is proposed. Here, two tasks are necessary: elimination of image noise and separation of the overlapping shapes. For the first task, mean square error and the seed fill algorithm are adopted to remove noise and improve the quality of the original image. For the second task, four steps are needed to segment the overlapping nanoparticles. First, possible split lines are obtained by connecting high-curvature pixels on the contours. Second, the candidate split lines are classified with a machine learning algorithm. Third, the overlapping regions are detected with the method of density-based spatial clustering of applications with noise (DBSCAN). Finally, the best split lines are selected by a constrained minimization. We give some experimental examples and compare our technique with two other methods. The results show the effectiveness of the proposed technique.
DTM Generation with Uav Based Photogrammetric Point Cloud
NASA Astrophysics Data System (ADS)
Polat, N.; Uysal, M.
2017-11-01
Nowadays Unmanned Aerial Vehicles (UAVs) are widely used in many applications for different purposes. Their benefits, however, are not fully realized without the integration of other equipment such as a digital camera, GPS, or laser scanner. The main scope of this paper is evaluating the performance of a camera-equipped UAV for geomatics applications by way of Digital Terrain Model (DTM) generation in a small area. For this purpose, 7 ground control points were surveyed with RTK and 420 photographs were captured. Over 30 million georeferenced points were used in the DTM generation process. Accuracy of the DTM was evaluated with 5 check points; the root mean square error is calculated as 17.1 cm for an altitude of 100 m. In addition, a LiDAR-derived DTM was used as a reference in order to calculate correlation: the UAV-based DTM has a 94.5% correlation with the reference DTM. Outcomes of the study show that it is possible to use UAV photogrammetry data for map production, surveying, and other engineering applications, with the advantages of low cost, time savings, and minimal field work.
Combined optimization of image-gathering and image-processing systems for scene feature detection
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Arduini, Robert F.; Samms, Richard W.
1987-01-01
The relationship between the image gathering and image processing systems for minimum mean squared error estimation of scene characteristics is investigated. A stochastic optimization problem is formulated where the objective is to determine a spatial characteristic of the scene rather than a feature of the already blurred, sampled and noisy image data. An analytical solution for the optimal characteristic image processor is developed. The Wiener filter for the sampled image case is obtained as a special case, where the desired characteristic is scene restoration. Optimal edge detection is investigated using the Laplacian-of-Gaussian operator ∇²G as the desired characteristic, where G is a two-dimensional Gaussian distribution function. It is shown that the optimal edge detector compensates for the blurring introduced by the image gathering optics and, notably, that it is not circularly symmetric. The lack of circular symmetry is largely due to the geometric effects of the sampling lattice used in image acquisition. The optimal image gathering optical transfer function is also investigated and the results of a sensitivity analysis are shown.
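As a point of reference for the ∇²G characteristic, the sketch below applies a plain Laplacian-of-Gaussian filter to a noisy step edge with SciPy. The optimal, non-circularly-symmetric detector derived in the paper, which also compensates the image-gathering optics, is not reproduced here.

```python
# Plain Laplacian-of-Gaussian edge response on a noisy synthetic step edge.
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[:, 32:] = 1.0                                 # vertical step edge
image += 0.05 * rng.normal(size=image.shape)

log_response = gaussian_laplace(image, sigma=2.0)   # (nabla^2 G) * image
# Edges lie near zero crossings of the LoG response; the extremes bracket them.
print(np.unravel_index(np.abs(log_response).argmax(), log_response.shape))
```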
AgRISTARS: Supporting research. Spring small grains planting date distribution model
NASA Technical Reports Server (NTRS)
Hodges, T.; Artley, J. A. (Principal Investigator)
1981-01-01
A model was developed using 996 planting dates at 51 LANDSAT segments for spring wheat and spring barley in Minnesota, Montana, North Dakota, and South Dakota in 1979. Daily maximum and minimum temperatures and precipitation were obtained from the cooperative weather stations nearest to each segment. The model uses a growing degree day summation modified for daily temperature range to estimate the beginning of planting and uses a soil surface wetness variable to estimate how a fixed number of planting days are distributed after planting begins. For 1979, the model predicts first, median, and last planting dates with root mean square errors of 7.91, 6.61, and 7.09 days, respectively. The model also provides three or four dates to represent periods of planting activity within the planting season. Although the full model was not tested on an independent data set, it may be suitable in areas other than the U.S. Great Plains where spring small grains are planted as soon as soil and air temperatures become warm enough in the spring for plant growth.
Gundle, Kenneth R; White, Jedediah K; Conrad, Ernest U; Ching, Randal P
2017-01-01
Surgical navigation systems are increasingly used to aid resection and reconstruction of osseous malignancies. In the process of implementing image-based surgical navigation systems, there are numerous opportunities for error that may impact surgical outcome. This study aimed to examine modifiable sources of error in an idealized scenario when using a bidirectional infrared surgical navigation system. Accuracy and precision were assessed using a computerized-numerical-controlled (CNC) machined grid with known distances between indentations while varying: 1) the distance from the grid to the navigation camera (range 150 to 247 cm), 2) the distance from the grid to the patient tracker device (range 20 to 40 cm), and 3) whether the minimum or maximum number of bidirectional infrared markers were actively functioning. For each scenario, distances between grid points were measured at 10-mm increments between 10 and 120 mm, with twelve measurements made at each distance. The accuracy outcome was the root mean square (RMS) error between the navigation system distance and the actual grid distance. To assess precision, four indentations were recorded six times for each scenario while also varying the angle of the navigation system pointer. The outcome for precision testing was the standard deviation of the distance between each measured point and the mean three-dimensional coordinate of the six points for each cluster. Univariate and multiple linear regression revealed that as the distance from the navigation camera to the grid increased, the RMS error increased (p<0.001). The RMS error also increased when not all infrared markers were actively tracking (p=0.03), and as the measured distance increased (p<0.001). In a multivariate model, these factors accounted for 58% of the overall variance in the RMS error. Standard deviations in repeated measures also increased when not all infrared markers were active (p<0.001), and as the distance between navigation camera and physical space increased (p=0.005). Location of the patient tracker did not affect accuracy (p=0.36) or precision (p=0.97). In our model laboratory test environment, the infrared bidirectional navigation system was more accurate and precise when the distance from the navigation camera to the physical (working) space was minimized and all bidirectional markers were active. These findings may require alterations in operating room setup and software changes to improve the performance of this system.
[Near infrared spectroscopy study on water content in turbine oil].
Chen, Bin; Liu, Ge; Zhang, Xian-Ming
2013-11-01
Near infrared (NIR) spectroscopy combined with the successive projections algorithm (SPA) was investigated for determination of water content in turbine oil. Fifty-seven turbine oil samples with water contents of 0-0.156% were scanned by NIR spectroscopy. Different pretreatments of the original spectra, such as first-derivative spectra and the Savitzky-Golay (SG) polynomial least-squares smoothing filter, were applied, and SPA was used to extract effective wavelengths; the correlation coefficient (R) and root mean square error (RMSE) were used as model evaluation indices. The results indicated that the spectra were best pretreated by the combination of first derivative + SG, and the selected effective wavelengths were then used as inputs to a least squares support vector machine (LS-SVM). A total of 16 variables selected by SPA were employed to construct the SPA and least squares support vector machine (SPA-LS-SVM) model. With this model the correlation coefficient was 0.9759 and the root mean square error of the validation set was 2.6558 × 10⁻³; it is thus feasible to determine water content in oil using NIR spectroscopy and SPA-LS-SVM, and excellent prediction precision was obtained. This study supplies a new alternative approach for the further application of NIR spectroscopy to on-line monitoring of contamination such as water in oil.
Gompertzian stochastic model with delay effect to cervical cancer growth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mazlan, Mazma Syahidatul Ayuni binti; Rosli, Norhayati binti; Bahar, Arifah
2015-02-03
In this paper, a Gompertzian stochastic model with time delay is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via Levenberg-Marquardt non-linear least squares optimization. We apply the Milstein scheme to solve the stochastic model numerically. The fit of the mathematical model is assessed by comparing the simulated result with clinical data on cervical cancer growth. Low values of the Mean-Square Error (MSE) of the Gompertzian stochastic model with delay effect indicate a good fit.
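A minimal sketch of the numerical scheme is given below: a Milstein discretization of a Gompertzian SDE with a constant-history delay in the drift, dX = r X(t) ln(K / X(t − τ)) dt + σ X(t) dW. The drift form and all parameter values are illustrative assumptions, not the fitted cervical cancer model.

```python
# Milstein scheme for a delayed Gompertzian SDE; for diffusion b(X) = sigma*X,
# the Milstein correction term is 0.5 * sigma^2 * X * (dW^2 - dt).
import numpy as np

r, K, sigma, tau = 0.3, 10.0, 0.1, 1.0      # illustrative parameters
dt, T = 0.01, 30.0
n, lag = int(T / dt), int(tau / dt)

rng = np.random.default_rng(0)
X = np.empty(n + 1)
X[:lag + 1] = 0.5                            # constant history on [-tau, 0]
for i in range(lag, n):
    dW = rng.normal(scale=np.sqrt(dt))
    drift = r * X[i] * np.log(K / X[i - lag])
    X[i + 1] = (X[i] + drift * dt + sigma * X[i] * dW
                + 0.5 * sigma ** 2 * X[i] * (dW ** 2 - dt))
print(f"X(T) = {X[-1]:.3f} (deterministic carrying capacity K = {K})")
```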
Measurement System Characterization in the Presence of Measurement Errors
NASA Technical Reports Server (NTRS)
Commo, Sean A.
2012-01-01
In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
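The modified least squares method itself is not spelled out in this summary. As background on how a known variance ratio can enter a fit, the sketch below implements the classical Deming regression, where λ is the ratio of response error variance to measurement (factor) error variance; this is a related classical approach, not the paper's method.

```python
# Deming regression: straight-line fit when x is measured with error,
# given lambda = var(y-error) / var(x-error).
import numpy as np

def deming(x, y, lam):
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
             + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, y.mean() - slope * x.mean()

rng = np.random.default_rng(0)
x_true = np.linspace(0, 10, 50)
x = x_true + rng.normal(scale=0.5, size=50)      # measurement error in the factor
y = 2 * x_true + 1 + rng.normal(scale=1.0, size=50)
print(deming(x, y, lam=(1.0 / 0.5) ** 2))        # recovered (slope, intercept)
```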
da Silva, Fabiana E B; Flores, Érico M M; Parisotto, Graciele; Müller, Edson I; Ferrão, Marco F
2016-03-01
An alternative method for the quantification of sulphamethoxazole (SMZ) and trimethoprim (TMP) using diffuse reflectance infrared Fourier-transform spectroscopy (DRIFTS) and partial least squares (PLS) regression was developed. Interval partial least squares (iPLS) and synergy interval partial least squares (siPLS) were applied to select a spectral range that provided the lowest prediction error in comparison to the full-spectrum model. Fifteen commercial tablet formulations and forty-nine synthetic samples were used. The concentration ranges considered were 400 to 900 mg g⁻¹ SMZ and 80 to 240 mg g⁻¹ TMP. Spectral data were recorded between 600 and 4000 cm⁻¹ at 4 cm⁻¹ resolution. The proposed procedure was compared to high performance liquid chromatography (HPLC). The root mean square errors of prediction (RMSEP) obtained during validation of the siPLS models for sulphamethoxazole and trimethoprim demonstrate that this approach is a valid technique for quantitative analysis of pharmaceutical formulations. The interval selection algorithm allowed building regression models with smaller errors than the full-spectrum PLS model. An RMSEP of 13.03 mg g⁻¹ for SMZ and 4.88 mg g⁻¹ for TMP was obtained after selection of the best spectral regions by siPLS.
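The interval-selection idea can be sketched compactly: fit a PLS model on each spectral subinterval and keep the interval (or combination of intervals, for siPLS) with the lowest cross-validated error. The code below performs the single-interval, iPLS-style search on synthetic spectra; the band position, noise level, and interval count are invented for illustration.

```python
# iPLS-style interval selection: lowest cross-validated RMSE wins.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
wavenumbers = np.linspace(600, 4000, 850)
conc = rng.uniform(400, 900, size=60)                    # e.g., SMZ in mg/g
peak = np.exp(-((wavenumbers - 1300) / 40) ** 2)          # one informative band
spectra = np.outer(conc, peak) + rng.normal(scale=5, size=(60, 850))

n_intervals, best = 10, (np.inf, None)
for i, block in enumerate(np.array_split(np.arange(850), n_intervals)):
    pred = cross_val_predict(PLSRegression(n_components=3),
                             spectra[:, block], conc, cv=5).ravel()
    rmsecv = np.sqrt(np.mean((conc - pred) ** 2))
    if rmsecv < best[0]:
        best = (rmsecv, i)
print(f"best interval: {best[1]}, RMSECV = {best[0]:.2f}")
```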
Moss, Marshall E.; Gilroy, Edward J.
1980-01-01
This report describes the theoretical developments and illustrates the applications of techniques that recently have been assembled to analyze the cost-effectiveness of federally funded stream-gaging activities in support of the Colorado River compact and subsequent adjudications. The cost effectiveness of 19 stream gages in terms of minimizing the sum of the variances of the errors of estimation of annual mean discharge is explored by means of a sequential-search optimization scheme. The search is conducted over a set of decision variables that describes the number of times that each gaging route is traveled in a year. A gage route is defined as the most expeditious circuit that is made from a field office to visit one or more stream gages and return to the office. The error variance is defined as a function of the frequency of visits to a gage by using optimal estimation theory. Currently a minimum of 12 visits per year is made to any gage. By changing to a six-visit minimum, the same total error variance can be attained for the 19 stations with a budget of 10% less than the current one. Other strategies are also explored. (USGS)
Heddam, Salim
2014-01-01
In this study, we present an application of an artificial intelligence (AI) technique called the dynamic evolving neural-fuzzy inference system (DENFIS), based on an evolving clustering method (ECM), for modelling dissolved oxygen concentration in a river. To demonstrate the forecasting capability of DENFIS, hourly experimental water quality data collected by the United States Geological Survey (USGS Station No: 420853121505500) at Klamath River at Miller Island Boat Ramp, OR, USA, over a one-year period from 1 January 2009 to 30 December 2009, were used for model development. Two DENFIS-based models are presented and compared: (1) an offline system, named DENFIS-OF, and (2) an online system, named DENFIS-ON. The input variables used for the two models are water pH, temperature, specific conductance, and sensor depth. The performances of the models are evaluated using root mean square error (RMSE), mean absolute error (MAE), Willmott index of agreement (d) and correlation coefficient (CC) statistics. The lowest root mean square error and highest correlation coefficient values were obtained with the DENFIS-ON method. The results obtained with the DENFIS models are compared with linear (multiple linear regression, MLR) and nonlinear (multi-layer perceptron neural network, MLPNN) methods. This study demonstrates that the DENFIS-ON variant investigated herein outperforms all the other techniques tested for DO modelling.
NASA Astrophysics Data System (ADS)
Sergeev, A. P.; Tarasov, D. A.; Buevich, A. G.; Shichkin, A. V.; Tyagunov, A. G.; Medvedev, A. N.
2017-06-01
Modeling of the spatial distribution of pollutants in urbanized territories is difficult, especially if there are multiple emission sources. When monitoring such territories, it is often impossible to arrange the necessary detailed sampling. Because of this, the usual methods of analysis and forecasting based on geostatistics are often less effective. Approaches based on artificial neural networks (ANNs) demonstrate the best results under these circumstances. This study compares two models based on ANNs, the multilayer perceptron (MLP) and generalized regression neural networks (GRNNs), with the base geostatistical method, kriging. Models of the spatial dust distribution in the snow cover around an existing copper quarry and in the area of emissions of a nickel factory were created. To assess the effectiveness of the models, three indices were used: the mean absolute error (MAE), the root-mean-square error (RMSE), and the relative root-mean-square error (RRMSE). Taking all indices into account, the GRNN model, which included the coordinates of the sampling points and the distance to the likely emission source as input parameters, proved to be the most accurate. Maps of spatial dust distribution in the snow cover were created for the study area. The models based on ANNs were shown to be more accurate than kriging, particularly in the context of a limited data set.
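A GRNN is, at its core, Nadaraya-Watson kernel regression with one smoothing factor. The sketch below implements that prediction rule with inputs that mimic the coordinates-plus-distance-to-source predictors mentioned above; all values and names are synthetic and hypothetical.

```python
# Minimal GRNN prediction: Gaussian-weighted average of training targets.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 3))        # x, y coords + distance to source
y = 50 / (1 + X[:, 2]) + rng.normal(scale=1.0, size=100)  # dust falls off with distance
Xq = rng.uniform(0, 10, size=(5, 3))
print(grnn_predict(X, y, Xq, sigma=1.0))
```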
Chen, Hui; Fan, Li; Wu, Wei; Liu, Hong-Bin
2017-09-26
Soil moisture data reflect valuable information on soil properties, terrain features, and drought condition. The current study compared and assessed the performance of different interpolation methods for estimating soil moisture in an area with complex topography in southwest China. The approaches were inverse distance weighting, multifarious forms of kriging, regularized spline with tension, and thin plate spline. The 5-day soil moisture observed at 167 stations and daily temperature recorded at 33 stations during the period 2010-2014 were used in the current work. Model performance was tested with the accuracy indicators of determination coefficient (R²), mean absolute percentage error (MAPE), root mean square error (RMSE), relative root mean square error (RRMSE), and modeling efficiency (ME). The results indicated that inverse distance weighting had the best performance, with R², MAPE, RMSE, RRMSE, and ME of 0.32, 14.37, 13.02%, 0.16, and 0.30, respectively. Based on the best method, a spatial database of soil moisture was developed and used to investigate drought condition over the study area. The results showed that the distribution of drought was characterized by evident regional differences. Drought mainly occurred in August and September in the five years and tended to occur in the western and central parts rather than in the northeastern and southeastern areas.
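Inverse distance weighting, the winning method here, is simple enough to state in full. A minimal sketch, assuming planar station coordinates and a power parameter p = 2 (the study does not report its exact settings):

```python
import numpy as np

def idw(stations, values, query, p=2.0, eps=1e-12):
    """Soil moisture at `query` as a distance-weighted average of the
    station observations; nearer stations receive larger weights."""
    d = np.linalg.norm(np.asarray(stations, float) - query, axis=1)
    if np.any(d < eps):              # query coincides with a station
        return values[np.argmin(d)]
    w = 1.0 / d ** p
    return np.sum(w * values) / np.sum(w)
```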
Mapping health outcome measures from a stroke registry to EQ-5D weights.
Ghatnekar, Ola; Eriksson, Marie; Glader, Eva-Lotta
2013-03-07
To map health outcome-related variables from a national register, not part of any validated instrument, to EQ-5D weights among stroke patients, we used two cross-sectional data sets including patient characteristics, outcome variables and EQ-5D weights from the national Swedish stroke register. Three regression techniques were used on the estimation set (n=272): ordinary least squares (OLS), Tobit, and censored least absolute deviation (CLAD). The regression coefficients for "dressing", "toileting", "mobility", "mood", "general health" and "proxy-responders" were applied to the validation set (n=272), and performance was analysed with the mean absolute error (MAE) and mean square error (MSE). The number of statistically significant coefficients varied by model, but all models generated coefficients consistent in sign. Mean utility was underestimated in all models (least in OLS), with lower variation than observed (again least in OLS). The maximum attainable EQ-5D weight ranged from 0.90 (OLS) to 1.00 (Tobit and CLAD). Health states with utility weights <0.5 had greater errors than those with weights ≥0.5 (P<0.01). This study indicates that it is possible to map non-validated health outcome measures from a stroke register into preference-based utilities, in order to study the development of stroke care over time and to compare with other conditions in terms of utility.
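Of the three regressions, the OLS variant is the easiest to sketch. The snippet below shows the generic mapping workflow (fit on the estimation set, score the validation set with MAE and MSE); variable names are illustrative, and Tobit and CLAD would require dedicated censored-regression estimators not shown here.

```python
import numpy as np

def fit_ols(X, y):
    """Least-squares fit of EQ-5D weights on register indicator variables."""
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# Hypothetical usage with an estimation and a validation split:
# beta = fit_ols(est_X, est_y)
# pred = predict(beta, val_X)
# mae = np.mean(np.abs(val_y - pred))
# mse = np.mean((val_y - pred) ** 2)
```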
Prediction of the compression ratio for municipal solid waste using decision tree.
Heshmati R, Ali Akbar; Mokhtari, Maryam; Shakiba Rad, Saeed
2014-01-01
The compression ratio of municipal solid waste (MSW) is an essential parameter for evaluation of waste settlement and landfill design. However, no appropriate model has been proposed to estimate the waste compression ratio so far. In this study, a decision tree method was utilized to predict the waste compression ratio (C'c). The tree was constructed using Quinlan's M5 algorithm. A reliable database retrieved from the literature was used to develop a practical model that relates C'c to waste composition and properties, including dry density, dry weight water content, and percentage of biodegradable organic waste using the decision tree method. The performance of the developed model was examined in terms of different statistical criteria, including correlation coefficient, root mean squared error, mean absolute error and mean bias error, recommended by researchers. The obtained results demonstrate that the suggested model is able to evaluate the compression ratio of MSW effectively.
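As a rough illustration of the workflow (not the paper's model), a regression tree can be fitted to the three predictors named in the abstract. Note that scikit-learn provides CART rather than Quinlan's M5 model trees, and the numbers below are placeholders, not study data.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Columns: dry density, dry-weight water content, % biodegradable organics.
# Placeholder values for illustration only; the paper's literature-derived
# database is not reproduced here.
X = np.array([[5.0, 60.0, 55.0],
              [7.5, 45.0, 40.0],
              [6.0, 50.0, 48.0]])
y = np.array([0.35, 0.22, 0.28])      # compression ratio C'c

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(tree.predict([[6.5, 52.0, 50.0]]))   # predict C'c for a new waste sample
```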
Jeyasingh, Suganthi; Veluchamy, Malathi
2017-05-01
Early diagnosis of breast cancer is essential to save patients' lives. Medical datasets typically include a large variety of data that can lead to confusion during diagnosis. The Knowledge Discovery in Databases (KDD) process helps to improve efficiency; it requires elimination of inappropriate and repeated data from the dataset before final diagnosis, which can be done using any of the feature selection algorithms available in data mining. Feature selection is considered a vital step in increasing classification accuracy. This paper proposes a Modified Bat Algorithm (MBA) for feature selection, to eliminate irrelevant features from the original dataset. The bat algorithm was modified using simple random sampling to select random instances from the dataset, and features were ranked against the global best solution to identify the predominant features. The selected features are used to train a Random Forest (RF) classification algorithm. The MBA feature selection enhanced the classification accuracy of RF in identifying the occurrence of breast cancer. The Wisconsin Diagnostic Breast Cancer (WDBC) dataset was used to evaluate the proposed MBA feature selection algorithm, which achieved better performance in terms of Kappa statistic, Matthews Correlation Coefficient, Precision, F-measure, Recall, Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Relative Absolute Error (RAE) and Root Relative Squared Error (RRSE).
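The MBA itself is not reproduced here, but the wrapper idea (score candidate feature subsets with the downstream classifier and keep the best) can be sketched with a plain random search over WDBC-like data. Everything below is a simplified stand-in, not the paper's algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def random_subset_selection(X, y, n_trials=50, seed=0):
    """Random-search stand-in for metaheuristic feature selection:
    sample feature masks, score each with a cross-validated RF."""
    rng = np.random.default_rng(seed)
    best_mask, best_score = None, -np.inf
    for _ in range(n_trials):
        mask = rng.random(X.shape[1]) < 0.5       # random feature subset
        if not mask.any():
            continue
        rf = RandomForestClassifier(n_estimators=100, random_state=0)
        score = cross_val_score(rf, X[:, mask], y, cv=5).mean()
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask, best_score
```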
NASA Astrophysics Data System (ADS)
Kiamehr, Ramin
2016-04-01
A one arc-second high-resolution version of the SRTM model was recently published for Iran in the US Geological Survey database. Digital Elevation Models (DEMs) are widely used by geoscientists in different disciplines and applications. A DEM is essential in the geoid computation procedure, e.g., to determine the topographic, downward continuation (DWC) and atmospheric corrections, and it is also used in road location and design in civil engineering and in hydrological analysis. However, a DEM is only a model of the elevation surface and is subject to errors, the most important of which can come from bias in the height datum. Moreover, the accuracy of a DEM is usually published in a global sense, so it is important to estimate its accuracy over the area of interest before use. One of the best ways to obtain a reasonable indication of DEM accuracy is to compare its heights against precise national GPS/levelling data, determining the root-mean-square (RMS) of the fit between the DEM and levelling heights. The errors in the DEM can be approximated by different kinds of functions fitted to a set of GPS/levelling data by least-squares adjustment. In the current study, several models ranging from a simple linear regression to a seven-parameter similarity transformation are used in the fitting procedure; the seven-parameter model gives the best fit, with minimum standard deviation, for all selected DEMs in the study area. Based on 35 precise GPS/levelling points, the RMS of the seven-parameter fit for the SRTM DEM is 5.5 m. A corrective surface model was generated from the transformation parameters and applied to the original SRTM model, and the fit of the combined model was assessed again with independent GPS/levelling data. The result shows a great improvement in the absolute accuracy of the model, with a standard deviation of 3.4 m.
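The fitting idea is ordinary least squares on the DEM-minus-GPS/levelling height differences. The sketch below uses a simple planar trend surface for brevity; the paper's best model is the seven-parameter similarity transformation, which would replace the design matrix with the similarity-transformation terms.

```python
import numpy as np

def corrective_surface(lon, lat, dh):
    """Least-squares fit of height differences dh = h_DEM - h_GPS/lev
    to a planar trend a + b*lon + c*lat over the control points."""
    A = np.column_stack([np.ones_like(lon), lon, lat])
    coef, *_ = np.linalg.lstsq(A, dh, rcond=None)
    return coef

def apply_correction(coef, lon, lat, dem_h):
    """Subtract the fitted corrective surface from the DEM heights."""
    A = np.column_stack([np.ones_like(lon), lon, lat])
    return dem_h - A @ coef
```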
Estimation of population mean under systematic sampling
NASA Astrophysics Data System (ADS)
Noor-ul-amin, Muhammad; Javaid, Amjad
2017-11-01
In this study, we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators as special cases of the generalized estimator, using different combinations of the coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are derived to establish the efficiency of the proposed estimators. A numerical illustration using three populations supports the results.
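For orientation, the classical ratio estimator of the population mean and its first-order MSE for a sample of size n from N units are reproduced below; the paper's generalized non-response form extends this and is not restated here.

```latex
% Classical ratio estimator (auxiliary variable x with known mean \bar{X}):
\hat{\bar{Y}}_R = \bar{y}\,\frac{\bar{X}}{\bar{x}},
\qquad
\mathrm{MSE}\!\left(\hat{\bar{Y}}_R\right) \approx
\frac{1-f}{n}\left(S_y^2 + R^2 S_x^2 - 2R\rho S_x S_y\right),
% where f = n/N, R = \bar{Y}/\bar{X}, \rho is the correlation between
% x and y, and S_x^2, S_y^2 are the population variances.
```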
Zhang, Xuan; Li, Wei; Yin, Bin; Chen, Weizhong; Kelly, Declan P; Wang, Xiaoxin; Zheng, Kaiyi; Du, Yiping
2013-10-01
Coffee is the most heavily consumed beverage in the world after water, and quality is a key consideration in its commercial trade. Caffeine content, which has a significant effect on the final quality of coffee products, therefore needs to be determined rapidly and reliably by new analytical techniques. The main purpose of this work was to establish a powerful and practical analytical method based on near infrared spectroscopy (NIRS) and chemometrics for quantitative determination of the caffeine content of roasted Arabica coffees. Ground coffee samples spanning a wide range of roast levels were analyzed by NIR, while their caffeine contents were determined quantitatively by the commonly used HPLC-UV method to provide reference values. Calibration models were then developed from chemometric analysis of the NIR spectral data and the reference concentrations, using partial least squares (PLS) regression. Diverse spectral pretreatment and variable selection techniques were applied in order to obtain robust and reliable reduced-spectrum regression models. Comparing the quality of the different models constructed, second derivative pretreatment combined with stability competitive adaptive reweighted sampling (SCARS) variable selection provided a notably improved regression model, with a root mean square error of cross validation (RMSECV) of 0.375 mg/g and a correlation coefficient (R) of 0.918 at 7 PLS factors. An independent test set was used to assess the model, giving a root mean square error of prediction (RMSEP) of 0.378 mg/g, a mean relative error of 1.976% and a mean relative standard deviation (RSD) of 1.707%. The results provided by this high-quality calibration model thus demonstrate the feasibility of NIR spectroscopy for at-line prediction of the caffeine content of unknown roasted coffee samples, thanks to an analysis time of a few seconds and the non-destructive nature of NIRS.
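The calibration step, PLS with the number of latent variables chosen by RMSECV, is easy to sketch with scikit-learn. Spectral pretreatment (second derivative) and SCARS variable selection are omitted; `spectra` (samples x wavelengths) and `caffeine` are assumed inputs.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def rmsecv_curve(spectra, caffeine, max_factors=10, cv=10):
    """RMSECV for 1..max_factors PLS components; returns the best
    factor count and the full error curve."""
    errors = []
    for k in range(1, max_factors + 1):
        pred = cross_val_predict(PLSRegression(n_components=k),
                                 spectra, caffeine, cv=cv)
        errors.append(np.sqrt(np.mean((caffeine - pred.ravel()) ** 2)))
    return int(np.argmin(errors)) + 1, errors
```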
NASA Astrophysics Data System (ADS)
Robbins, Dennis; Ford, S.
2014-01-01
The NSF-supported "AstroCom NYC" program, a collaboration of the City University of New York, the American Museum of Natural History (AMNH), and Columbia University, has the explicit goal of increasing the participation of underrepresented minorities in astronomy and astrophysics by providing pedagogical mentoring and research experiences to undergraduate students. To supplement AstroCom scholars' undergraduate course work, and as a gateway to summer astrophysics research opportunities, we implemented a course called "Methods of Scientific Research" (MSR). The semester-long MSR course emphasizes the study of data using computers and other digital tools in a laboratory environment that encourages collaborative and active learning. We enroll early physical science majors and deliberately seek to inculcate habits of mind needed for science research, including assigning physical meaning to variables and measurements; engaging in mathematical modeling; quantifying error; eliminating bias; proposing hypotheses; creating predictions; and testing predictions. Using laptop computers interfaced with probeware, students collect and analyze data using graphing software. Students study concepts such as motion, temperature, magnetism, electricity, gas pressure, and force in open-ended investigations where large data sets can be readily collected and replicated during a course meeting. Students are guided to examine data for patterns and trends; to interpret descriptive statistics such as means, standard deviations, maximum and minimum values, correlation coefficients and root mean square error values; and in general to understand, judge, and describe the studied phenomena on the basis of data. A secondary goal of the course is to familiarize students with the facilities at AMNH, where they will do summer research as part of AstroCom NYC, in an effort to build a sense of belonging and to help them begin to self-identify as scientists. We will discuss some of our activities and present our ideas on encouraging the habits of mind needed in astrophysics research.
Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian
2016-08-01
In this paper, we study the problem of cardiac conduction velocity (CCV) estimation for sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used for the CCV estimation. We assume that the extracted ATs include estimation errors, which we model as zero-mean white Gaussian noise with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator for the case in which the synchronization times between the various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
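Under the planar-wavefront and Gaussian-noise assumptions, the maximum likelihood fit reduces to (weighted) least squares of activation time against electrode position, with the speed given by the reciprocal norm of the fitted slowness vector. The sketch below illustrates that core step only; the paper's handling of unknown synchronization times is not reproduced.

```python
import numpy as np

def planar_ccv(positions, at, var=None):
    """positions: (n, 2) electrode coordinates (mm); at: activation
    times (ms); var: optional known AT noise variances for weighting.
    Returns the conduction speed in mm/ms."""
    positions = np.asarray(positions, float)
    at = np.asarray(at, float)
    w = np.ones(len(at)) if var is None else 1.0 / np.asarray(var, float)
    sw = np.sqrt(w)
    A = np.column_stack([np.ones(len(at)), positions])   # [t0, slowness]
    coef, *_ = np.linalg.lstsq(A * sw[:, None], at * sw, rcond=None)
    slowness = coef[1:]                                  # ms per mm
    return 1.0 / np.linalg.norm(slowness)
```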
Dupas, Laura; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre; Boulant, Nicolas
2015-06-01
The spokes method combined with parallel transmission is a promising technique for mitigating B1+ inhomogeneity at ultra-high field in 2D imaging. To date, however, spokes placement optimization combined with magnitude least squares pulse design has never been done in direct conjunction with the explicit Specific Absorption Rate (SAR) and hardware constraints. In this work, the joint optimization of 2-spoke trajectories and RF subpulse weights is performed under these explicit constraints in the small tip angle regime. The problem is first considerably simplified by observing that only the vector between the 2 spokes is relevant in the magnitude least squares cost function, thereby reducing the size of the parameter space and allowing a more exhaustive search. The algorithm starts from a set of initial k-space candidates and, for all of them in parallel, optimizes the RF subpulse weights and the k-space locations simultaneously, under explicit SAR and power constraints, using an active-set algorithm. The dimensionality of the spoke-placement parameter space being low, the RF pulse performance is computed for every location in k-space to study the robustness of the proposed approach with respect to initialization, by looking at the probability of converging towards a possible global minimum. Moreover, the optimization of the spoke placement is repeated with an increased pulse bandwidth in order to investigate the impact of the constraints on the result. Bloch simulations and in vivo T2*-weighted images acquired at 7 T validate the approach. The algorithm returns simulated normalized root mean square errors systematically smaller than 5% in 10 s.
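The magnitude least squares subproblem at the heart of such designs is commonly solved by variable exchange: alternately adopt the current phase of the achieved profile as the target phase, then re-solve a regularized least squares for the RF weights. The sketch below shows that unconstrained core only; the paper's explicit SAR, power, and spoke-placement handling via an active-set method is not reproduced.

```python
import numpy as np

def mls_variable_exchange(A, target_mag, lam=1e-2, n_iter=50):
    """A: (voxels x channels) complex small-tip system matrix;
    target_mag: desired flip-angle magnitude per voxel (>= 0).
    Returns complex RF weights x approximating |A x| = target_mag."""
    phase = np.zeros(A.shape[0])
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        b = target_mag * np.exp(1j * phase)      # phase-adopted target
        x = np.linalg.solve(A.conj().T @ A + lam * np.eye(A.shape[1]),
                            A.conj().T @ b)      # Tikhonov LS update
        phase = np.angle(A @ x)                  # exchange the phase
    return x
```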