Sample records for square error sense

  1. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate, completely model-free error estimation is impossible. This paper provides analytic, asymptotically exact, finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first and second moments of the Bayesian MMSE error estimator, its cross moment with the true error of LDA, and therefore the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions for the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
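
As a minimal illustration of how an estimator's RMS follows from its first, second, and cross moments, the identity E[(ε̂ − ε)²] = E[ε̂²] − 2E[ε̂ε] + E[ε²] can be checked numerically. The distributions below are synthetic stand-ins, not the Bayesian MMSE estimator of the record:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for a true classification error and a noisy estimator
# of it (illustrative only; not the Bayesian MMSE estimator of the record).
true_err = rng.normal(0.20, 0.02, size=100_000)
est_err = true_err + rng.normal(0.01, 0.03, size=true_err.size)

# RMS from the moment decomposition: E[eps_hat^2] - 2 E[eps_hat*eps] + E[eps^2]
m2_est = np.mean(est_err**2)
m2_true = np.mean(true_err**2)
cross = np.mean(est_err * true_err)
rms_from_moments = np.sqrt(m2_est - 2 * cross + m2_true)

# Direct definition for comparison
rms_direct = np.sqrt(np.mean((est_err - true_err)**2))

assert np.isclose(rms_from_moments, rms_direct)
```

This is why knowing the second and cross moments of an error estimator is enough to determine its RMS, and hence the sample size needed to reach a target RMS.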

  2. Model-based mean square error estimators for k-nearest neighbour predictions and applications using remotely sensed data for forest inventories

    Treesearch

    Steen Magnussen; Ronald E. McRoberts; Erkki O. Tomppo

    2009-01-01

    New model-based estimators of the uncertainty of pixel-level and areal k-nearest neighbour (knn) predictions of attribute Y from remotely-sensed ancillary data X are presented. Non-parametric functions predict Y from scalar 'Single Index Model' transformations of X. Variance functions generated...

  3. On-board error correction improves IR earth sensor accuracy

    NASA Astrophysics Data System (ADS)

    Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.

    1989-10-01

    Infrared earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of error in a scanning infrared earth sensor are analyzed in this paper. The systematic errors arising from seasonal variation of infrared radiation, the oblate shape of the earth, the ambient temperature of the sensor, and changes in scan/spin rates are analyzed. Simple relations are derived using least squares curve fitting for on-board correction of these errors. Random errors arising from detector and amplifier noise, alignment instability, and localized radiance anomalies are analyzed and possible correction methods are suggested. Sun and Moon interference with earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on board. It is possible to obtain an eightfold improvement in sensing accuracy, which is comparable with ground-based post facto attitude refinement.
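
A least-squares curve fit of the kind used for on-board correction of a systematic error can be sketched as follows. The seasonal error model and all numbers here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical systematic error: a smooth seasonal function of day-of-year,
# fitted with a low-order polynomial by least squares (np.polyfit).
day = np.linspace(0, 365, 200)
t = day / 365.0                                   # normalize for conditioning
systematic = 0.05 * np.sin(2 * np.pi * t)         # seasonal radiance effect (toy)
measured_error = systematic + rng.normal(0, 0.005, day.size)

coeffs = np.polyfit(t, measured_error, deg=6)     # on-board correction curve
correction = np.polyval(coeffs, t)

residual_rms = np.sqrt(np.mean((measured_error - correction)**2))
raw_rms = np.sqrt(np.mean(measured_error**2))
assert residual_rms < raw_rms   # the fit removes most of the systematic part
```

The fitted polynomial plays the role of the "simple relations" stored on board: evaluating it at the current day gives the correction to subtract.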

  4. Ambiguity resolution for satellite Doppler positioning systems

    NASA Technical Reports Server (NTRS)

    Argentiero, P. D.; Marini, J. W.

    1977-01-01

    A test for ambiguity resolution was derived which was the most powerful in the sense that it maximized the probability of a correct decision. When systematic error sources were properly included in the least squares reduction process to yield an optimal solution, the test reduced to choosing the solution which provided the smaller valuation of the least squares loss function. When systematic error sources were ignored in the least squares reduction, the most powerful test was a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudo-inverse of a reduced rank square matrix. A formula is presented for computing the power of the most powerful test. A numerical example is included in which the power of the test is computed for a situation which may occur during an actual satellite aided search and rescue mission.

  5. [Locally weighted least squares estimation of DPOAE evoked by continuously sweeping primaries].

    PubMed

    Han, Xiaoli; Fu, Xinxing; Cui, Jie; Xiao, Ling

    2013-12-01

    Distortion product otoacoustic emission (DPOAE) signals can be used for diagnosis of hearing loss, so they have important clinical value. Using continuously sweeping primaries to measure DPOAE provides an efficient tool for recording DPOAE data rapidly when DPOAE is measured over a large frequency range. In this paper, locally weighted least squares estimation (LWLSE) of 2f1-f2 DPOAE is presented based on the least-squares-fit (LSF) algorithm, in which DPOAE is evoked by continuously sweeping tones. In our study, we used a weighted error function as the loss function, with weighting matrices chosen in a local sense, to obtain a smaller estimation variance. First, an ordinary least squares estimate of the DPOAE parameters was obtained. Then the error vectors were grouped and a different local weighting matrix was calculated for each group. Finally, the parameters of the DPOAE signal were estimated on the least squares principle using the local weighting matrices. The simulation results showed that the estimation variance and fluctuation errors were reduced, so the method estimates DPOAE and stimuli more accurately and stably, which facilitates extraction of clearer DPOAE fine structure.
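
The LSF idea of fitting a tone's amplitude and phase by least squares, followed by a residual-based reweighting step, can be sketched as below. The sample rate, tone frequency, and the simple residual-based weighting are illustrative assumptions, not the grouped weighting matrices of the record:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy tone-in-noise signal (assumed sample rate and frequency).
fs, f0 = 16000.0, 1000.0
t = np.arange(2048) / fs
amp, phase = 0.5, 0.3
y = amp * np.cos(2 * np.pi * f0 * t + phase) + rng.normal(0, 0.2, t.size)

# LSF design matrix:  y ≈ a*cos(w t) + b*sin(w t)
X = np.column_stack([np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)])

# Step 1: ordinary least squares estimate
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Step 2: down-weight samples with large OLS residuals (a simple stand-in
# for the locally computed weighting matrices), then refit.
resid = y - X @ beta_ols
w = 1.0 / (1.0 + (resid / resid.std())**2)
sw = np.sqrt(w)
beta_wls, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)

est_amp = np.hypot(beta_wls[0], beta_wls[1])
assert abs(est_amp - amp) < 0.05
```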

  6. Per-pixel bias-variance decomposition of continuous errors in data-driven geospatial modeling: A case study in environmental remote sensing

    NASA Astrophysics Data System (ADS)

    Gao, Jing; Burt, James E.

    2017-12-01

    This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method originating from machine learning that has not previously been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging) and inform efficient training sample allocation: training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
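
The per-location decomposition MSE = bias² + variance can be demonstrated with an ensemble of deliberately under-fit models; here 1-D evaluation points stand in for pixels, not the imperviousness data of the record:

```python
import numpy as np

rng = np.random.default_rng(3)

# Per-"pixel" bias-variance decomposition of a simple regression estimator.
# Many models are trained on resampled data; at each evaluation point,
# bias^2 = (mean prediction - truth)^2, variance = spread of predictions.
def truth(x):
    return np.sin(x)

x_eval = np.linspace(0, np.pi, 50)           # "pixels"
n_models, n_train = 200, 40
preds = np.empty((n_models, x_eval.size))

for m in range(n_models):
    x_tr = rng.uniform(0, np.pi, n_train)
    y_tr = truth(x_tr) + rng.normal(0, 0.3, n_train)
    coeffs = np.polyfit(x_tr, y_tr, deg=2)   # deliberately under-fit model
    preds[m] = np.polyval(coeffs, x_eval)

bias2 = (preds.mean(axis=0) - truth(x_eval))**2
var = preds.var(axis=0)
mse = ((preds - truth(x_eval))**2).mean(axis=0)

# The squared-error decomposition holds exactly at every pixel
assert np.allclose(mse, bias2 + var)
```

Mapping `bias2` and `var` separately over the pixels is what lets bias maps flag model non-stationarity while variance maps flag where ensembling or more training samples would help.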

  7. The theory precision analyse of RFM localization of satellite remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Jianqing; Xv, Biao

    2009-11-01

    The traditional method of assessing the precision of the Rational Function Model (RFM) uses a large number of check points: it computes the mean square error by comparing calculated coordinates with known coordinates. This method comes from probability theory, in that the mean square error is estimated statistically from a large number of samples, and the estimate can be considered to approach the true value when the sample is large enough. This paper instead approaches the problem from the standpoint of survey adjustment, taking the law of propagation of error as its theoretical basis, and calculates the theoretical precision of RFM localization. Using SPOT5 three-line-array imagery as experimental data, the results of the traditional method and of the method described in this paper are compared; the comparison confirms that the traditional method is feasible and answers the question of its theoretical precision from the survey-adjustment perspective.
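
The law of propagation of error underlying the theoretical precision is Σ_y = J Σ_x Jᵀ, with J the Jacobian of the mapping. A small numerical check, using a made-up two-parameter function rather than the actual RFM:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy mapping:  y1 = x1 + 2*x2,   y2 = x1 * x2   (not the actual RFM)
x0 = np.array([1.0, 3.0])
J = np.array([[1.0, 2.0],
              [x0[1], x0[0]]])       # analytic Jacobian at x0
Sigma_x = np.diag([0.01, 0.04])      # input covariance

Sigma_y = J @ Sigma_x @ J.T          # law of propagation of error
theory_var = np.diag(Sigma_y)

# Monte Carlo check of the theoretical precision (the "many check points" view)
samples = rng.multivariate_normal(x0, Sigma_x, size=200_000)
ys = np.column_stack([samples[:, 0] + 2.0 * samples[:, 1],
                      samples[:, 0] * samples[:, 1]])
mc_var = ys.var(axis=0)
assert np.allclose(mc_var, theory_var, rtol=0.05)
```

The two estimates agree to first order, which is exactly the sense in which the adjustment-based theoretical precision and the check-point statistic answer the same question.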

  8. Fundamental Analysis of the Linear Multiple Regression Technique for Quantification of Water Quality Parameters from Remote Sensing Data. Ph.D. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H., III

    1977-01-01

    Constituents whose radiance gradients are linear with concentration may be quantified from signals that contain nonlinear atmospheric and surface reflection effects, for both homogeneous and non-homogeneous water bodies, provided accurate data can be obtained and the nonlinearities are constant with wavelength. Statistical parameters must be used which give an indication of bias as well as total squared error, to ensure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least squares fitting process and to increase the number of points required to obtain a satisfactory fit. The problem of obtaining a multiple regression equation that is extremely sensitive to error is discussed.

  9. The mean-square error optimal linear discriminant function and its application to incomplete data vectors

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1979-01-01

    In many pattern recognition problems, data vectors are classified although one or more of the data vector elements are missing. This problem occurs in remote sensing when the ground is obscured by clouds. Optimal linear discrimination procedures for classifying incomplete data vectors are discussed.

  10. Aerodynamic coefficient identification package dynamic data accuracy determinations: Lessons learned

    NASA Technical Reports Server (NTRS)

    Heck, M. L.; Findlay, J. T.; Compton, H. R.

    1983-01-01

    The errors in the dynamic data output from the Aerodynamic Coefficient Identification Packages (ACIP) flown on Shuttle flights 1, 3, 4, and 5 were determined using the output from the Inertial Measurement Units (IMU). A weighted least-squares batch algorithm was employed. Using an averaging technique, signal detection was enhanced; this allowed improved calibration solutions. Global errors as large as 0.04 deg/sec for the ACIP gyros, 30 mg for the linear accelerometers, and 0.5 deg/sec squared in the angular accelerometer channels were detected and removed with a combination of bias, scale factor, misalignment, and g-sensitive calibration constants. No attempt was made to minimize local ACIP dynamic data deviations representing sensed high-frequency vibration or instrument noise. The resulting 1-sigma calibrated ACIP global accuracies were within 0.003 deg/sec, 1.0 mg, and 0.05 deg/sec squared for the gyros, linear accelerometers, and angular accelerometers, respectively.
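
Batch least-squares calibration of a sensor's bias and scale factor against a reference signal can be sketched as follows; the rate-sensor model and numbers are hypothetical, not the ACIP/IMU data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Calibration model:  measured = scale * true + bias + noise,
# solved for (scale, bias) by batch least squares against a reference.
true_rate = rng.uniform(-2.0, 2.0, 500)                    # deg/sec, reference
measured = 1.02 * true_rate + 0.04 + rng.normal(0, 0.01, true_rate.size)

A = np.column_stack([true_rate, np.ones_like(true_rate)])
(scale, bias), *_ = np.linalg.lstsq(A, measured, rcond=None)

calibrated = (measured - bias) / scale
rms_before = np.sqrt(np.mean((measured - true_rate)**2))
rms_after = np.sqrt(np.mean((calibrated - true_rate)**2))

assert rms_after < rms_before
assert abs(scale - 1.02) < 0.01 and abs(bias - 0.04) < 0.01
```

A full batch calibration would add misalignment and g-sensitive columns to the design matrix in the same way.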

  11. RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing

    NASA Astrophysics Data System (ADS)

    Gui, Guan; Xu, Li; Adachi, Fumiyuki

    2014-12-01

    Nonlinear sparse sensing (NSS) techniques have been adopted for realizing compressive sensing in many applications such as radar imaging. Unlike the NSS, in this paper we propose an adaptive sparse sensing (ASS) approach using the reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm, which depends on several given parameters, i.e., the reweighting factor, the regularization parameter, and the initial step size. First, based on an independence assumption, the Cramer-Rao lower bound (CRLB) is derived for performance comparison. In addition, a reweighting-factor selection method is proposed for achieving robust estimation performance. Finally, to verify the algorithm, Monte Carlo based computer simulations are given to show that the ASS achieves much better mean square error (MSE) performance than the NSS.
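
A sketch of a zero-attracting, normalized least-mean-fourth style update for a sparse channel is shown below. The step normalization and the reweighted attractor follow one common form from the sparse-LMS family; the exact RZA-NLMF recursion and parameter choices in the record may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(6)

# Sparse "channel" to identify adaptively.
N, taps = 3000, 16
w_true = np.zeros(taps)
w_true[[2, 9]] = [1.0, -0.5]

mu, rho, eps_w, eps_n = 0.5, 1e-4, 10.0, 1e-3
w = np.zeros(taps)
for n in range(N):
    x = rng.normal(0, 1, taps)                # input regressor
    d = x @ w_true + rng.normal(0, 0.05)      # noisy desired output
    e = d - x @ w
    # LMF-type cubic-error gradient with a normalized step (assumed form)
    denom = (x @ x) * ((e**2) * (x @ x) + eps_n)
    w += mu * (e**3) * x / denom
    # Reweighted zero attractor: pulls small taps toward zero
    w -= rho * np.sign(w) / (1.0 + eps_w * np.abs(w))

mse = np.mean((w - w_true)**2)
assert mse < 5e-3
```

The attractor term is what exploits sparsity: inactive taps are driven to zero while large taps feel only a weak pull, which is the "reweighted zero-attracting" idea.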

  12. Estimation of Soil Moisture Profile using a Simple Hydrology Model and Passive Microwave Remote Sensing

    NASA Technical Reports Server (NTRS)

    Soman, Vishwas V.; Crosson, William L.; Laymon, Charles; Tsegaye, Teferi

    1998-01-01

    Soil moisture is an important component of analysis in many Earth science disciplines. Soil moisture information can be obtained either by using microwave remote sensing or by using a hydrologic model. In this study, we combined these two approaches to increase the accuracy of profile soil moisture estimation. A hydrologic model was used to analyze the errors in the estimation of soil moisture using the data collected during Huntsville '96 microwave remote sensing experiment in Huntsville, Alabama. Root mean square errors (RMSE) in soil moisture estimation increase by 22% with increase in the model input interval from 6 hr to 12 hr for the grass-covered plot. RMSEs were reduced for given model time step by 20-50% when model soil moisture estimates were updated using remotely-sensed data. This methodology has a potential to be employed in soil moisture estimation using rainfall data collected by a space-borne sensor, such as the Tropical Rainfall Measuring Mission (TRMM) satellite, if remotely-sensed data are available to update the model estimates.

  13. Ambiguity resolution for satellite Doppler positioning systems

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Marini, J.

    1979-01-01

    The implementation of satellite-based Doppler positioning systems frequently requires the recovery of transmitter position from a single pass of Doppler data. The least-squares approach to the problem yields conjugate solutions on either side of the satellite subtrack. It is important to develop a procedure for choosing the proper solution which is correct in a high percentage of cases. A test for ambiguity resolution which is the most powerful in the sense that it maximizes the probability of a correct decision is derived. When systematic error sources are properly included in the least-squares reduction process to yield an optimal solution the test reduces to choosing the solution which provides the smaller valuation of the least-squares loss function. When systematic error sources are ignored in the least-squares reduction, the most powerful test is a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudoinverse of a reduced-rank square matrix. A formula for computing the power of the most powerful test is provided. Numerical examples are included in which the power of the test is computed for situations that are relevant to the design of a satellite-aided search and rescue system.
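
When systematic errors are properly modeled, the record's most powerful test reduces to comparing the least-squares loss of the two conjugate solutions. A toy geometric sketch (range-style residuals stand in for the Doppler observations, and the track is slightly asymmetric so the ambiguity is resolvable):

```python
import numpy as np

rng = np.random.default_rng(7)

def loss(pos, sat_track, observed_ranges):
    """Least-squares loss of a candidate transmitter position."""
    predicted = np.linalg.norm(sat_track - pos, axis=1)
    return np.sum((observed_ranges - predicted)**2)

# Toy satellite subtrack (km), slightly offset in y to break perfect symmetry.
sat_track = np.column_stack([np.linspace(-500, 500, 40),
                             np.linspace(-5, 5, 40),
                             np.full(40, 800.0)])
true_pos = np.array([0.0, 120.0, 0.0])
mirror_pos = np.array([0.0, -120.0, 0.0])   # conjugate across the subtrack

observed = np.linalg.norm(sat_track - true_pos, axis=1) + rng.normal(0, 0.1, 40)

# Decision rule: pick the candidate with the smaller loss
chosen = (true_pos
          if loss(true_pos, sat_track, observed) < loss(mirror_pos, sat_track, observed)
          else mirror_pos)
assert np.array_equal(chosen, true_pos)
```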

  14. Optimal wavefront control for adaptive segmented mirrors

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Goodman, Joseph W.

    1989-01-01

    A ground-based astronomical telescope with a segmented primary mirror will suffer image-degrading wavefront aberrations from at least two sources: (1) atmospheric turbulence and (2) segment misalignment or figure errors of the mirror itself. This paper describes the derivation of a mirror control feedback matrix that assumes the presence of both types of aberration and is optimum in the sense that it minimizes the mean-squared residual wavefront error. Assumptions of the statistical nature of the wavefront measurement errors, atmospheric phase aberrations, and segment misalignment errors are made in the process of derivation. Examples of the degree of correction are presented for three different types of wavefront measurement data and compared to results of simple corrections.

  15. Estimating Root Mean Square Errors in Remotely Sensed Soil Moisture over Continental Scale Domains

    NASA Technical Reports Server (NTRS)

    Draper, Clara S.; Reichle, Rolf; de Jeu, Richard; Naeimi, Vahid; Parinussa, Robert; Wagner, Wolfgang

    2013-01-01

    Root Mean Square Errors (RMSE) in the soil moisture anomaly time series obtained from the Advanced Scatterometer (ASCAT) and the Advanced Microwave Scanning Radiometer (AMSR-E; using the Land Parameter Retrieval Model) are estimated over a continental-scale domain centered on North America, using two methods: triple collocation (RMSE_TC) and error propagation through the soil moisture retrieval models (RMSE_EP). In the absence of an established consensus for the climatology of soil moisture over large domains, presenting an RMSE in soil moisture units requires that it be specified relative to a selected reference data set. To avoid the complications that arise from the use of a reference, the RMSE is presented as a fraction of the time series standard deviation (fRMSE). For both sensors, the fRMSE_TC and fRMSE_EP show similar spatial patterns of relatively high and low errors, and the mean fRMSE for each land cover class is consistent with expectations. Triple collocation is also shown to be surprisingly robust to representativity differences between the soil moisture data sets used, and it is believed to accurately estimate the fRMSE in the remotely sensed soil moisture anomaly time series. Comparing the ASCAT and AMSR-E fRMSE_TC shows that both data sets have very similar accuracy across a range of land cover classes, although the AMSR-E accuracy is more directly related to vegetation cover. In general, both data sets have good skill up to moderate vegetation conditions.
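
The covariance-based form of triple collocation can be sketched with synthetic data: given three estimates of the same signal with mutually independent errors, the error variance of the first is Q_xx − Q_xy·Q_xz/Q_yz. The three toy products below are stand-ins for ASCAT, AMSR-E, and a model:

```python
import numpy as np

rng = np.random.default_rng(8)

n = 200_000
truth = rng.normal(0, 1.0, n)            # soil moisture anomaly (unit std)
x = truth + rng.normal(0, 0.5, n)        # e.g. scatterometer-like product
y = truth + rng.normal(0, 0.4, n)        # e.g. radiometer-like product
z = truth + rng.normal(0, 0.3, n)        # e.g. model product

Q = np.cov(np.vstack([x, y, z]))
err_var_x = Q[0, 0] - Q[0, 1] * Q[0, 2] / Q[1, 2]

# Fractional RMSE: error std as a fraction of the time series' own std
frmse_x = np.sqrt(err_var_x) / np.sqrt(Q[0, 0])

assert abs(err_var_x - 0.25) < 0.02      # recovers the true 0.5^2 error variance
```

Note that no product is treated as the reference, which is exactly why fRMSE avoids the reference-climatology problem described above.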

  16. Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kookjin; Carlberg, Kevin; Elman, Howard C.

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted ℓ²-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted ℓ²-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.

  17. An agreement coefficient for image comparison

    USGS Publications Warehouse

    Ji, Lei; Gallo, Kevin

    2006-01-01

    Combination of datasets acquired from different sensor systems is necessary to construct a long time-series dataset for remotely sensed land-surface variables. Assessment of the agreement of the data derived from various sources is an important issue in understanding the data continuity through the time-series. Some traditional measures, including correlation coefficient, coefficient of determination, mean absolute error, and root mean square error, are not always optimal for evaluating the data agreement. For this reason, we developed a new agreement coefficient for comparing two different images. The agreement coefficient has the following properties: non-dimensional, bounded, symmetric, and distinguishable between systematic and unsystematic differences. The paper provides examples of agreement analyses for hypothetical data and actual remotely sensed data. The results demonstrate that the agreement coefficient does include the above properties, and therefore is a useful tool for image comparison.

  18. [Study on predicting sugar content and valid acidity of apples by near infrared diffuse reflectance technique].

    PubMed

    Liu, Yan-de; Ying, Yi-bin; Fu, Xia-ping

    2005-11-01

    A nondestructive method for quantifying the sugar content (SC) and available acid (VA) of intact apples using diffuse near infrared reflectance and optical fiber sensing techniques was explored in the present research. The standard sample sets and prediction models were established by partial least squares (PLS) analysis. A total of 120 Shandong Fuji apples were tested in the wave number range of 12,500-4,000 cm⁻¹ using Fourier transform near infrared spectroscopy. The results indicated that the nondestructive quantification of SC and VA gave high correlation coefficients of 0.970 and 0.906, low root mean square errors of prediction (RMSEP) of 0.272 and 0.0562, low root mean square errors of calibration (RMSEC) of 0.261 and 0.0677, and small differences between RMSEP and RMSEC of 0.011 and 0.0115, respectively. It is suggested that the diffuse near infrared reflectance technique is feasible for nondestructive determination of apple sugar content in the wave number range of 10,341-5,461 cm⁻¹ and for available acid in the wave number range of 10,341-3,818 cm⁻¹.
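
RMSEC is the root mean square error on the calibration set and RMSEP the error on an independent prediction set; a small RMSEP−RMSEC gap indicates a stable model. A sketch with a toy single-predictor calibration standing in for the PLS model (all data simulated):

```python
import numpy as np

rng = np.random.default_rng(9)

def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat)**2))

# Toy calibration: one spectral feature predicting a quality attribute.
x_cal = rng.uniform(8, 16, 80)
y_cal = 0.9 * x_cal + 1.0 + rng.normal(0, 0.25, 80)
x_pred = rng.uniform(8, 16, 40)                       # independent set
y_pred = 0.9 * x_pred + 1.0 + rng.normal(0, 0.25, 40)

slope, intercept = np.polyfit(x_cal, y_cal, 1)
rmsec = rmse(y_cal, slope * x_cal + intercept)        # calibration error
rmsep = rmse(y_pred, slope * x_pred + intercept)      # prediction error

assert rmsep < 0.5 and abs(rmsep - rmsec) < 0.2
```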

  19. Spectral estimates of net radiation and soil heat flux

    USGS Publications Warehouse

    Daughtry, C.S.T.; Kustas, William P.; Moran, M.S.; Pinter, P. J.; Jackson, R. D.; Brown, P.W.; Nichols, W.D.; Gay, L.W.

    1990-01-01

    Conventional methods of measuring surface energy balance are point measurements and represent only a small area. Remote sensing offers a potential means of measuring outgoing fluxes over large areas at the spatial resolution of the sensor. The objective of this study was to estimate net radiation (Rn) and soil heat flux (G) using remotely sensed multispectral data acquired from an aircraft over large agricultural fields. Ground-based instruments measured Rn and G at nine locations along the flight lines. Incoming fluxes were also measured by ground-based instruments. Outgoing fluxes were estimated using remotely sensed data. Remote Rn, estimated as the algebraic sum of incoming and outgoing fluxes, slightly underestimated Rn measured by the ground-based net radiometers. The mean absolute errors for remote Rn minus measured Rn were less than 7%. Remote G, estimated as a function of a spectral vegetation index and remote Rn, slightly overestimated measured G; however, the mean absolute error for remote G was 13%. Some of the differences between measured and remote values of Rn and G are associated with differences in instrument designs and measurement techniques. The root mean square error for available energy (Rn - G) was 12%. Thus, methods using both ground-based and remotely sensed data can provide reliable estimates of the available energy which can be partitioned into sensible and latent heat under nonadvective conditions.

  20. A Novel Low-Cost, Large Curvature Bend Sensor Based on a Bowden-Cable

    PubMed Central

    Jeong, Useok; Cho, Kyu-Jin

    2016-01-01

    Bend sensors have been developed based on conductive ink, optical fiber, and electronic textiles. Each type has advantages and disadvantages in terms of performance, ease of use, and cost. This study proposes a new and low-cost bend sensor that can measure a wide range of accumulated bend angles with large curvatures. This bend sensor utilizes a Bowden-cable, which consists of a coil sheath and an inner wire. Displacement changes of the Bowden-cable’s inner wire, when the shape of the sheath changes, have been considered to be a position error in previous studies. However, this study takes advantage of this position error to detect the bend angle of the sheath. The bend angle of the sensor can be calculated from the displacement measurement of the sensing wire using a Hall-effect sensor or a potentiometer. Simulations and experiments have shown that the accumulated bend angle of the sensor is linearly related to the sensor signal, with an R-square value up to 0.9969 and a root mean square error of 2% of the full sensing range. The proposed sensor is not affected by a bend curvature of up to 80.0 m⁻¹, unlike previous bend sensors. The proposed sensor is expected to be useful for various applications, including motion capture devices, wearable robots, surgical devices, or generally any device that requires an affordable and low-cost bend sensor. PMID:27347959
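
The two figures of merit quoted above, R-square of a linear fit and RMSE as a percent of the full sensing range, can be computed as follows; the linear angle-displacement model and its numbers are toy values, not the Bowden-cable data:

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy linear calibration: accumulated bend angle vs. inner-wire displacement.
angle = np.linspace(0, 360, 100)                               # deg
displacement = 0.01 * angle + rng.normal(0, 0.05, angle.size)  # mm (assumed)

slope, intercept = np.polyfit(displacement, angle, 1)
angle_hat = slope * displacement + intercept

ss_res = np.sum((angle - angle_hat)**2)
ss_tot = np.sum((angle - angle.mean())**2)
r_square = 1.0 - ss_res / ss_tot

rmse_pct = 100.0 * np.sqrt(np.mean((angle - angle_hat)**2)) \
           / (angle.max() - angle.min())   # RMSE as % of full range

assert r_square > 0.99 and rmse_pct < 5.0
```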

  1. Channel estimation based on quantized MMP for FDD massive MIMO downlink

    NASA Astrophysics Data System (ADS)

    Guo, Yao-ting; Wang, Bing-he; Qu, Yi; Cai, Hua-jie

    2016-10-01

    In this paper, we consider channel estimation for massive MIMO systems operating in frequency division duplexing mode. By exploiting the sparsity of propagation paths in the massive MIMO channel, we develop a compressed sensing (CS) based channel estimator which can reduce the pilot overhead. Compared with conventional least squares (LS) and linear minimum mean square error (LMMSE) estimation, the proposed algorithm, based on quantized multipath matching pursuit (MMP), reduces the pilot overhead and performs better than other CS algorithms. The simulation results demonstrate the advantage of the proposed algorithm over various existing methods including the LS, LMMSE, CoSaMP and conventional MMP estimators.
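
Greedy sparse recovery of the matching-pursuit family can be sketched with plain orthogonal matching pursuit (OMP); MMP generalizes this with a tree search over candidate supports, so the code below is the base idea, not the record's algorithm:

```python
import numpy as np

rng = np.random.default_rng(11)

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: greedy support selection + refit."""
    residual, support = y.copy(), []
    coef = np.array([])
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(idx)
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None) # refit on current support
        residual = y - sub @ coef
    h = np.zeros(A.shape[1])
    h[support] = coef
    return h

m, n, k = 40, 128, 3                         # pilots, channel taps, sparsity
A = rng.normal(0, 1 / np.sqrt(m), (m, n))    # pilot/measurement matrix
h_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
h_true[idx] = rng.choice([-1.0, 1.0], k) * rng.uniform(0.5, 1.5, k)
y = A @ h_true + rng.normal(0, 0.01, m)      # received pilot observations

h_hat = omp(A, y, k)
rel_err = np.linalg.norm(h_hat - h_true) / np.linalg.norm(h_true)
assert rel_err < 0.1
```

The pilot saving comes from m ≪ n: only 40 pilot measurements recover a 128-tap channel because just k = 3 taps are active.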

  2. Sensitivity of thermal inertia calculations to variations in environmental factors. [in mapping of Earth's surface by remote sensing

    NASA Technical Reports Server (NTRS)

    Kahle, A. B.; Alley, R. E.; Schieldge, J. P.

    1984-01-01

    The sensitivity of thermal inertia (TI) calculations to errors in the measurement or parameterization of a number of environmental factors is considered here. The factors include effects of radiative transfer in the atmosphere, surface albedo and emissivity, variations in surface turbulent heat flux density, cloud cover, vegetative cover, and topography. The error analysis is based upon data from the Heat Capacity Mapping Mission (HCMM) satellite for July 1978 at three separate test sites in the deserts of the western United States. Results show that typical errors in atmospheric radiative transfer, cloud cover, and vegetative cover can individually cause root-mean-square (RMS) errors of about 10 percent (with atmospheric effects sometimes as large as 30-40 percent) in HCMM-derived thermal inertia images of 20,000-200,000 pixels.

  3. Efficient parallel reconstruction for high resolution multishot spiral diffusion data with low rank constraint.

    PubMed

    Liao, Congyu; Chen, Ying; Cao, Xiaozhi; Chen, Song; He, Hongjian; Mani, Merry; Jacob, Mathews; Magnotta, Vincent; Zhong, Jianhui

    2017-03-01

    To propose a novel reconstruction method using parallel imaging with a low rank constraint to accelerate high resolution multishot spiral diffusion imaging. The undersampled high resolution diffusion data were reconstructed based on a low rank (LR) constraint using similarities between the data of different interleaves from a multishot spiral acquisition. Self-navigated phase compensation using the low resolution phase data in the center of k-space was applied to correct shot-to-shot phase variations induced by motion artifacts. The low rank reconstruction was combined with sensitivity encoding (SENSE) for further acceleration. The efficiency of the proposed joint reconstruction framework, dubbed LR-SENSE, was evaluated through error quantification and compared with an ℓ1-regularized compressed sensing method and the conventional iterative SENSE method using the same datasets. It was shown that with the same acceleration factor, the proposed LR-SENSE method had the smallest normalized sum-of-squares errors among all the compared methods in all diffusion weighted images and DTI-derived index maps, when evaluated with different acceleration factors (R = 2, 3, 4) and for all the acquired diffusion directions. Robust high resolution diffusion weighted images can be efficiently reconstructed from highly undersampled multishot spiral data with the proposed LR-SENSE method. Magn Reson Med 77:1359-1366, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  4. Monitoring Method of Cutting Force by Using Additional Spindle Sensors

    NASA Astrophysics Data System (ADS)

    Sarhan, Ahmed Aly Diaa; Matsubara, Atsushi; Sugihara, Motoyuki; Saraie, Hidenori; Ibaraki, Soichi; Kakino, Yoshiaki

    This paper describes a monitoring method of cutting forces for the end milling process by using displacement sensors. Four eddy-current displacement sensors are installed on the spindle housing of a machining center so that they can detect the radial motion of the rotating spindle. Thermocouples are also attached to the spindle structure in order to examine the thermal effect in the displacement sensing. The change in the spindle stiffness due to the spindle temperature and the speed is investigated as well. Finally, the estimation performance of cutting forces using the spindle displacement sensors is experimentally investigated by machining tests on carbon steel in end milling operations under different cutting conditions. It is found that the monitoring errors are attributable to the thermal displacement of the spindle, the time lag of the sensing system, and the modeling error of the spindle stiffness. It is also shown that the root mean square errors between estimated and measured amplitudes of cutting forces are reduced to less than 20 N with proper selection of the linear stiffness.

  5. Super-resolution algorithm based on sparse representation and wavelet preprocessing for remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin

    2017-04-01

    An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.
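
Two of the quality metrics named in this record, root-mean-square error and peak signal-to-noise ratio, can be computed as follows; the toy 8-bit images are stand-ins for the MODIS or Jilin-1 data:

```python
import numpy as np

rng = np.random.default_rng(12)

# Reference image and a degraded version of it (toy 8-bit data).
reference = rng.integers(0, 256, (64, 64)).astype(float)
degraded = np.clip(reference + rng.normal(0, 5.0, reference.shape), 0, 255)

mse = np.mean((reference - degraded)**2)
rmse = np.sqrt(mse)
psnr = 10.0 * np.log10(255.0**2 / mse)   # peak value 255 for 8-bit images

assert 3.0 < rmse < 7.0
assert 30.0 < psnr < 40.0
```

Metrics such as SSIM, spectral angle mapper, or ERGAS add structural and spectral sensitivity that pixelwise RMSE/PSNR lack, which is why the record reports several of them together.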

  6. Construction and Application of Enhanced Remote Sensing Ecological Index

    NASA Astrophysics Data System (ADS)

    Wang, X.; Liu, C.; Fu, Q.; Yin, B.

    2018-04-01

    To monitor changes in regional ecological environment quality, this paper uses MODIS and DMSP/OLS remote sensing data to construct an enhanced remote sensing ecological index. From the three main factors affecting ecosystem quality, namely production capacity, external disturbance, and human socio-economic development, three indicators are selected: net primary productivity, a vegetation index, and a nighttime-light index. Principal component analysis is used to determine the weight coefficients automatically, and the resulting index is applied to monitor and analyze the ecological environment quality of Hainan Island from 2001 to 2013. The enhanced remote sensing ecological index combines the effects of the natural environment and human activities on ecosystems, and because the weight coefficients are determined automatically from the contribution of each principal component, it avoids the human error introduced by manually designed weights, providing a new method for operational monitoring of regional macro-scale ecological environment quality. From 2001 to 2013, the ecological environment quality of Hainan Island first declined and then rose; in 2005 the ecological environment was affected by severe natural disasters and its quality dropped sharply. Compared with 2001, by 2013 the ecological environment quality had improved over about 20,000 square kilometers, remained relatively stable over about 8,760 square kilometers, and declined over about 5,272 square kilometers. On the whole, the ecological environment quality of the study area is good, although frequent natural disasters affect it to a certain extent.
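
    The PCA-based automatic weighting can be illustrated as follows; the indicator columns and their ordering are hypothetical, and the weights are taken from the leading principal component of the standardized indicators:

```python
import numpy as np

def pca_index(indicators):
    """Weight standardized indicators by the first principal component.

    `indicators` is an (n_pixels, n_indicators) array, e.g. columns for
    net primary productivity, a vegetation index, and a night-light
    index (the column order here is an assumption).
    """
    z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
    cov = np.cov(z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    pc1 = eigvecs[:, -1]                    # leading eigenvector
    if pc1.sum() < 0:                       # fix an arbitrary sign convention
        pc1 = -pc1
    weights = pc1 / np.abs(pc1).sum()       # normalized weight coefficients
    return z @ weights, weights
```

    Because the weights come from the data's own covariance structure, no manual weight design is needed, which is the point the abstract emphasizes.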

  7. [Rapid discriminating hogwash oil and edible vegetable oil using near infrared optical fiber spectrometer technique].

    PubMed

    Zhang, Bing-Fang; Yuan, Li-Bo; Kong, Qing-Ming; Shen, Wei-Zheng; Zhang, Bing-Xiu; Liu, Cheng-Hai

    2014-10-01

    In the present study, a new method combining near infrared spectroscopy with optical fiber sensing technology was applied to the analysis of hogwash oil in blended oil. Fifty samples were prepared by blending frying oil with "nine three" soybean oil at certain volume ratios. Near infrared transmission spectra were collected, and quantitative analysis models for frying oil were established by partial least squares (PLS) and a BP artificial neural network. The coefficients of determination of the calibration sets were 0.908 and 0.934, respectively, and those of the validation sets were 0.961 and 0.952; the root mean square errors of calibration (RMSEC) were 0.184 and 0.136, and the root mean square errors of prediction (RMSEP) were both 0.1116. These values meet the model application requirements. In addition, frying oil and qualified edible oil were discriminated by principal component analysis (PCA) with an accuracy of 100%. The experiments proved that near infrared spectral technology can not only quickly and accurately identify hogwash oil, but also quantitatively detect it. This method has broad application prospects in oil detection.
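
    As a rough sketch of how RMSEC and RMSEP are computed on calibration and validation sets, the following uses synthetic data and an ordinary least-squares model as a stand-in for the paper's PLS and BP network models:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between reference and predicted values."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Synthetic "spectra": 50 samples x 10 wavelengths, response = blend ratio.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.05 * rng.normal(size=50)

X_cal, y_cal = X[:35], y[:35]      # calibration set
X_val, y_val = X[35:], y[35:]      # validation set

# OLS stand-in for the calibrated multivariate model.
coef, *_ = np.linalg.lstsq(X_cal, y_cal, rcond=None)
rmsec = rmse(y_cal, X_cal @ coef)  # error on the calibration set (RMSEC)
rmsep = rmse(y_val, X_val @ coef)  # error on held-out predictions (RMSEP)
```

    RMSEC measures fit on the data used to build the model, while RMSEP on held-out samples is the fairer indicator of predictive ability.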

  8. Influence of hemoglobin on non-invasive optical bilirubin sensing

    NASA Astrophysics Data System (ADS)

    Jiang, Jingying; Gong, Qiliang; Zou, Da; Xu, Kexin

    2012-03-01

    Abnormal metabolism of bilirubin can lead to diseases in the human body, especially jaundice, which is harmful to neonates. Traditional invasive measurements are poorly accepted because of pain and the risk of infection, so real-time, non-invasive measurement of bilirubin is of great significance. However, the accuracy of current transcutaneous bilirubinometry (TcB) is generally not high enough, and it is affected by many factors in the human skin, most notably hemoglobin. In this talk, absorption spectra of hemoglobin and bilirubin were collected and analyzed, and Partial Least Squares (PLS) models were built. Analysis and comparison of the correlation and the Root Mean Square Error of Prediction (RMSEP) show that the correlation of the bilirubin-solution model is larger than that of the mixture solution containing hemoglobin, and its RMSEP is smaller. Therefore, hemoglobin influences non-invasive optical bilirubin sensing, and the next step is to investigate how to eliminate this influence.

  9. The use of compressive sensing and peak detection in the reconstruction of microtubules length time series in the process of dynamic instability.

    PubMed

    Mahrooghy, Majid; Yarahmadian, Shantia; Menon, Vineetha; Rezania, Vahid; Tuszynski, Jack A

    2015-10-01

    Microtubules (MTs) are intra-cellular cylindrical protein filaments. They exhibit a unique phenomenon of stochastic growth and shrinkage, called dynamic instability. In this paper, we introduce a theoretical framework for applying Compressive Sensing (CS) to sampled data of microtubule length in the process of dynamic instability. To reduce data density and reconstruct the original signal at relatively low sampling rates, we applied CS to experimental MT filament length time series modeled as a Dichotomous Markov Noise (DMN). The results show that using CS along with the wavelet transform significantly reduces the recovery errors compared with CS without the wavelet transform, especially at low and medium sampling rates. For sampling rates between 0.2 and 0.5, the Root-Mean-Squared Error (RMSE) decreases by approximately a factor of three, and between 0.5 and 1 the RMSE is small. We also apply a peak detection technique to the wavelet coefficients to detect and closely approximate the growth and shrinkage of MTs, in order to compute the essential dynamic instability parameters, i.e., transition frequencies and especially growth and shrinkage rates. The results show that combining compressed sensing with the peak detection technique and the wavelet transform reduces the recovery errors of these parameters. Copyright © 2015 Elsevier Ltd. All rights reserved.
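
    A minimal sketch of sparse recovery, using Orthogonal Matching Pursuit as one common CS solver (the paper's exact recovery algorithm is not specified here):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x with y ≈ A x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the dictionary column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit restricted to the selected support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

    In the paper's setting the signal is sparse in a wavelet basis, so the dictionary would combine the measurement operator with a wavelet synthesis matrix; the greedy recovery step is the same.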

  10. Finite-error metrological bounds on multiparameter Hamiltonian estimation

    NASA Astrophysics Data System (ADS)

    Kura, Naoto; Ueda, Masahito

    2018-01-01

    Estimation of multiple parameters in an unknown Hamiltonian is investigated. We present upper and lower bounds on the time required to complete the estimation within a prescribed error tolerance δ. The lower bound is given on the basis of the Cramér-Rao inequality, where the quantum Fisher information is bounded by the squared evolution time. The upper bound is obtained by an explicit construction of estimation procedures. By comparing the cases with different numbers of Hamiltonian channels, we also find that the few-channel procedure with adaptive feedback and the many-channel procedure with entanglement are equivalent in the sense that they require the same amount of time resource up to a constant factor.

  11. Improved error estimates of a discharge algorithm for remotely sensed river measurements: Test cases on Sacramento and Garonne Rivers

    NASA Astrophysics Data System (ADS)

    Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward

    2016-01-01

    We present an improvement to a previously presented algorithm that used a Bayesian Markov Chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope. We also present an error budget for discharge calculations from the algorithm. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers, which have differing hydraulic characteristics. The algorithm is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. Algorithm discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River for the AirSWOT case using expected observation error. The discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter to correct errors of discharge estimates was shown to improve algorithm performance. From the error budget, the primary source of uncertainty was the a priori uncertainty of bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics. For example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.

  12. Detection of terrain indices related to soil salinity and mapping salt-affected soils using remote sensing and geostatistical techniques.

    PubMed

    Triki Fourati, Hela; Bouaziz, Moncef; Benzina, Mourad; Bouaziz, Samir

    2017-04-01

    Traditional methods of surveying soil properties over landscapes are dramatically costly and time-consuming; remote sensing is therefore a proper choice for monitoring environmental problems. This research aims to study the effect of environmental factors on soil salinity and to map the spatial distribution of salinity over the southeastern part of Tunisia by means of remote sensing and geostatistical techniques. For this purpose, we used Advanced Spaceborne Thermal Emission and Reflection Radiometer data to derive geomorphological parameters: elevation, slope, plan curvature (PLC), profile curvature (PRC), and aspect. Pearson correlation between these parameters and soil electrical conductivity (ECsoil) showed that mainly slope and elevation affect the concentration of salt in soil. Moreover, spectral analysis illustrated the high potential of short-wave infrared (SWIR) bands to identify saline soils. To map soil salinity in southern Tunisia, ordinary kriging (OK), minimum distance (MD) classification, and simple regression (SR) were used. The findings showed that the ordinary kriging technique provides the most reliable performance for identifying and classifying saline soils over the study area, with a root mean square error of 1.83 and a mean error of 0.018.

  13. Particle size distribution of river-suspended sediments determined by in situ measured remote-sensing reflectance.

    PubMed

    Zhang, Yuanzhi; Huang, Zhaojun; Chen, Chuqun; He, Yijun; Jiang, Tingchen

    2015-07-10

    Suspended sediments in water bodies are classified into organic and inorganic matter and have been investigated by remote-sensing technology for years. Focusing on inorganic matter, however, detailed information such as grain size has not yet been provided. In this study, we present a new solution for estimating the size distribution of inorganic suspended sediments in highly complex Case 2 waters by using a simple spectrometer sensor rather than a backscattering sensor. An experiment was carried out in the Pearl River Estuary (PRE) in the dry season to collect the remote-sensing reflectance (Rrs) and particle size distribution (PSD) of inorganic suspended sediments. Based on Mie theory, PSDs in the PRE waters were retrieved from Rrs, colored dissolved organic matter, and phytoplankton. The retrieved median diameters at 12 stations show good agreement with those of laboratory analysis, with a root mean square error of 2.604 μm (27.63%), bias of 1.924 μm (20.42%), and mean absolute error of 2.298 μm (24.37%). The retrieved PSDs were compared with previously reported PSDs, and the features of PSDs in the PRE waters were characterized.

  14. Time series forecasting using ERNN and QR based on Bayesian model averaging

    NASA Astrophysics Data System (ADS)

    Pwasong, Augustine; Sathasivam, Saratha

    2017-08-01

    The Bayesian model averaging technique is a multi-model combination technique. It was employed here to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique, producing a hybrid known as the ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean square error sense.
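
    One simple way to realize the weighting idea behind Bayesian model averaging is to weight member models by a Gaussian likelihood of their validation residuals; this is a simplification of full BMA, and the residual values below are hypothetical:

```python
import numpy as np

def bma_weights(errors_a, errors_b):
    """Posterior-style model weights from validation residuals,
    assuming Gaussian errors with each model's own variance."""
    def log_lik(e):
        s2 = np.mean(e ** 2)
        return (-0.5 * len(e) * np.log(2 * np.pi * s2)
                - 0.5 * np.sum(e ** 2) / s2)
    la, lb = log_lik(errors_a), log_lik(errors_b)
    m = max(la, lb)                      # stabilize the exponentials
    wa, wb = np.exp(la - m), np.exp(lb - m)
    total = wa + wb
    return wa / total, wb / total

# Hypothetical validation residuals for the two member models.
e_ernn = np.array([0.1, -0.2, 0.15, -0.05])
e_qr = np.array([0.4, -0.5, 0.45, -0.35])
w_ernn, w_qr = bma_weights(e_ernn, e_qr)
# The combined forecast is the weighted average of member forecasts.
```

    The better-fitting member receives the larger weight, so the combination leans toward whichever model has been more reliable on held-out data.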

  15. Predicting the composition of red wine blends using an array of multicomponent Peptide-based sensors.

    PubMed

    Ghanem, Eman; Hopfer, Helene; Navarro, Andrea; Ritzer, Maxwell S; Mahmood, Lina; Fredell, Morgan; Cubley, Ashley; Bolen, Jessica; Fattah, Rabia; Teasdale, Katherine; Lieu, Linh; Chua, Tedmund; Marini, Federico; Heymann, Hildegarde; Anslyn, Eric V

    2015-05-20

    Differential sensing using synthetic receptors as mimics of the mammalian senses of taste and smell is a powerful approach for the analysis of complex mixtures. Herein, we report on the effectiveness of a cross-reactive, supramolecular, peptide-based sensing array in differentiating and predicting the composition of red wine blends. Fifteen blends of Cabernet Sauvignon, Merlot and Cabernet Franc, in addition to the mono varietals, were used in this investigation. Linear Discriminant Analysis (LDA) showed a clear differentiation of blends based on tannin concentration and composition, where certain mono varietals like Cabernet Sauvignon seemed to contribute less to the overall characteristics of the blend. Partial Least Squares (PLS) regression and cross validation were used to build a predictive model for the responses of the receptors to eleven binary blends and the three mono varietals. The optimized model was later used to predict the percentage of each mono varietal in an independent test set composed of four tri-blends, with a 15% average error. A partial least squares regression model using the mouth-feel and taste descriptive sensory attributes of the wine blends revealed a strong correlation of the receptors to perceived astringency, which is indicative of selective binding to polyphenols in wine.

  16. iHWG-μNIR: a miniaturised near-infrared gas sensor based on substrate-integrated hollow waveguides coupled to a micro-NIR-spectrophotometer.

    PubMed

    Rohwedder, J J R; Pasquini, C; Fortes, P R; Raimundo, I M; Wilk, A; Mizaikoff, B

    2014-07-21

    A miniaturised gas analyser based on a substrate-integrated hollow waveguide (iHWG) coupled to a micro-sized near-infrared spectrophotometer, comprising a linear variable filter and an array of InGaAs detectors, is described and evaluated. This gas sensing system was applied to analyse surrogate samples of natural fuel gas containing methane, ethane, propane and butane, quantified using multivariate regression models based on partial least squares (PLS) algorithms with Savitzky-Golay 1st-derivative data preprocessing. External validation of the obtained models reveals root mean square errors of prediction of 0.37, 0.36, 0.67 and 0.37% (v/v) for methane, ethane, propane and butane, respectively. The developed sensing system provides particularly rapid response times upon composition changes of the gaseous sample (approximately 2 s) due to the minute volume of the iHWG-based measurement cell. The sensing system is fully portable with a hand-held-sized analyser footprint, and thus ideally suited for field analysis. Last but not least, the obtained results corroborate the potential of NIR-iHWG analysers for monitoring the quality of natural gas and petrochemical gaseous products.
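
    The Savitzky-Golay first-derivative preprocessing named in the abstract can be sketched with SciPy; the window length, polynomial order, and wavelength grid below are assumed values, not the paper's settings:

```python
import numpy as np
from scipy.signal import savgol_filter

# Simulated NIR absorbance spectrum over an arbitrary wavelength grid.
wavelengths = np.linspace(1000.0, 1600.0, 301)            # nm (illustrative)
spectrum = np.exp(-((wavelengths - 1200.0) / 60.0) ** 2)  # a single band

# Savitzky-Golay smoothing + 1st derivative, as applied before PLS modeling.
# Window length (11 points) and polynomial order (2) are assumed values.
step = wavelengths[1] - wavelengths[0]
d1 = savgol_filter(spectrum, window_length=11, polyorder=2,
                   deriv=1, delta=step)
```

    Derivative preprocessing suppresses baseline offsets while preserving band-shape information, which is why it is a common companion to PLS calibration.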

  17. Skin Cooling and Force Replication at the Ankle in Healthy Individuals: A Crossover Randomized Controlled Trial

    PubMed Central

    Haupenthal, Daniela Pacheco dos Santos; de Noronha, Marcos; Haupenthal, Alessandro; Ruschel, Caroline; Nunes, Guilherme S.

    2015-01-01

    Context: Proprioception of the ankle is determined by the ability to perceive the sense of position of the ankle structures, as well as the speed and direction of movement. Few researchers have investigated proprioception by force-replication ability, particularly after skin cooling. Objective: To analyze the ability of the ankle-dorsiflexor muscles to replicate isometric force after a period of skin cooling. Design: Randomized controlled clinical trial. Setting: Laboratory. Patients or Other Participants: Twenty healthy individuals (10 men, 10 women; age = 26.8 ± 5.2 years, height = 171 ± 7 cm, mass = 66.8 ± 10.5 kg). Intervention(s): Skin cooling was carried out using 2 ice applications: (1) after maximal voluntary isometric contraction (MVIC) performance and before data collection for the first target force, maintained for 20 minutes; and (2) before data collection for the second target force, maintained for 10 minutes. We measured skin temperature before and after ice applications to ensure skin cooling. Main Outcome Measure(s): A load cell was placed under an inclined board for data collection, and 10 attempts of force replication were carried out for 2 values of MVIC (20%, 50%) in each condition (ice, no ice). We assessed force sense with absolute and root mean square errors (the difference between the force developed by the dorsiflexors and the target force, measured with the raw data and after root mean square analysis, respectively) and variable error (the variance around the mean absolute error score). A repeated-measures multivariate analysis of variance was used for statistical analysis. Results: The absolute error was greater for the ice than for the no-ice condition (F1,19 = 9.05, P = .007) and for the target force at 50% of MVIC than at 20% of MVIC (F1,19 = 26.01, P < .001). Conclusions: The error was greater in the ice condition and at 50% of MVIC. Skin cooling reduced the proprioceptive ability of the ankle-dorsiflexor muscles to replicate isometric force. PMID:25761136
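
    The three error measures defined in the abstract can be computed directly from repeated trials; the force values below are hypothetical:

```python
import numpy as np

def force_errors(forces, target):
    """Absolute, RMS, and variable error for force-replication trials.

    `forces` are the forces produced on repeated attempts; `target` is
    the target force (e.g. 20% or 50% of MVIC). Variable error is taken
    here as the variance around the mean absolute error, following the
    abstract's definition.
    """
    forces = np.asarray(forces, dtype=float)
    abs_err = np.abs(forces - target)
    absolute_error = abs_err.mean()
    rms_error = np.sqrt(np.mean((forces - target) ** 2))
    variable_error = np.mean((abs_err - absolute_error) ** 2)
    return absolute_error, rms_error, variable_error
```

    Absolute error captures average accuracy, RMS error penalizes large misses more heavily, and variable error captures trial-to-trial consistency.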

  18. System identification of a small low-cost unmanned aerial vehicle using flight data from low-cost sensors

    NASA Astrophysics Data System (ADS)

    Hoffer, Nathan Von

    Remote sensing has traditionally been done with satellites and manned aircraft. While these methods can yield useful scientific data, satellites and manned aircraft have limitations in data frequency, processing time, and real-time re-tasking. Small low-cost unmanned aerial vehicles (UAVs) provide greater possibilities for personal scientific research than traditional remote sensing platforms. Precision aerial data requires an accurate vehicle dynamics model for controller development, robust flight characteristics, and fault tolerance. One method of developing such a model is system identification (system ID). In this thesis, system ID of a small low-cost fixed-wing T-tail UAV is conducted. The linearized longitudinal equations of motion are derived from first principles. Foundations of Recursive Least Squares (RLS) are presented, along with RLS with an Error Filtering Online Learning (EFOL) scheme. Sensors, data collection, data consistency checking, and data processing are described. Batch least squares (BLS) and BLS with EFOL are used to identify aerodynamic coefficients of the UAV. Results of these two methods with flight data are discussed.
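
    The standard recursive least squares update underlying the thesis's approach can be sketched as follows (this is plain RLS, not the EFOL variant):

```python
import numpy as np

def rls_identify(phi, y, lam=1.0, p0=1e6):
    """Standard recursive least squares: estimate theta with y ≈ phi·theta.

    phi: (n_samples, n_params) regressor matrix, y: (n_samples,) outputs,
    lam: forgetting factor (1.0 = no forgetting), p0: initial covariance.
    """
    n_params = phi.shape[1]
    theta = np.zeros(n_params)
    P = p0 * np.eye(n_params)
    for x, yi in zip(phi, y):
        Px = P @ x
        k = Px / (lam + x @ Px)          # gain vector
        theta = theta + k * (yi - x @ theta)
        P = (P - np.outer(k, Px)) / lam  # covariance update
    return theta
```

    For aerodynamic coefficient identification, each row of `phi` would hold the states and inputs of the linearized longitudinal model at one flight-data sample.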

  19. Hyperspectral Analysis of Soil Total Nitrogen in Subsided Land Using the Local Correlation Maximization-Complementary Superiority (LCMCS) Method.

    PubMed

    Lin, Lixin; Wang, Yunjia; Teng, Jiyao; Xi, Xiuxiu

    2015-07-23

    The measurement of soil total nitrogen (TN) by hyperspectral remote sensing provides an important tool for soil restoration programs in areas with land subsidence caused by the extraction of natural resources. This study used the local correlation maximization-complementary superiority (LCMCS) method to establish TN prediction models by considering the relationship between spectral reflectance (measured by an ASD FieldSpec 3 spectroradiometer) and TN, based on spectral reflectance curves of soil samples collected from subsided land identified by synthetic aperture radar interferometry (InSAR) technology. Based on the 1655 selected effective bands of the optimal spectrum (OSP) of the first derivative of the reciprocal logarithm ([log{1/R}]') (correlation coefficients, p < 0.01), the optimal LCMCS model was obtained. It produced lower prediction errors (root mean square error of validation [RMSEV] = 0.89, mean relative error of validation [MREV] = 5.93%) than models built by the local correlation maximization (LCM), complementary superiority (CS) and partial least squares regression (PLS) methods. The predictive effect of the LCMCS model was optimal in Cangzhou, Renqiu and Fengfeng District. Results indicate that the LCMCS method has great potential for monitoring TN in land subsided by the extraction of natural resources, including groundwater, oil and coal.

  20. Stereo pair design for cameras with a fovea

    NASA Technical Reports Server (NTRS)

    Chettri, Samir R.; Keefe, Michael; Zimmerman, John R.

    1992-01-01

    We describe the methodology for the design and selection of a stereo pair when the cameras have a greater concentration of sensing elements in the center of the image plane (fovea). Binocular vision is important for the purpose of depth estimation, which in turn is important in a variety of applications such as gaging and autonomous vehicle guidance. We assume that one camera has square pixels of size dv and the other has pixels of size rdv, where r is between 0 and 1. We then derive results for the average error, the maximum error, and the error distribution in the depth determination of a point. These results can be shown to be a general form of the results for the case when the cameras have equal sized pixels. We discuss the behavior of the depth estimation error as we vary r and the tradeoffs between the extra processing time and increased accuracy. Knowing these results makes it possible to study the case when we have a pair of cameras with a fovea.
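
    The effect of pixel size on depth error can be illustrated with the pinhole stereo relation z = f·b/d, where a one-pixel disparity error maps to a depth error; all numbers below are hypothetical:

```python
import numpy as np

# Hypothetical stereo geometry (all values illustrative).
f = 0.05       # focal length, m
b = 0.20       # baseline, m
z = 5.0        # true depth of the point, m
d = f * b / z  # corresponding disparity on the image plane

def depth_error(pixel, d=d):
    """Worst-case depth error when disparity is off by one pixel."""
    return abs(f * b / (d - pixel) - f * b / d)

dv = 10e-6                     # full-size pixel, m
r = 0.5                        # finer fovea pixel of size r * dv
err_coarse = depth_error(dv)
err_fine = depth_error(r * dv)
```

    For small pixel sizes the depth error scales roughly linearly with the pixel pitch, which is the tradeoff the paper quantifies when one camera's pixels are a factor r smaller.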

  1. Using Least Squares for Error Propagation

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2015-01-01

    The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
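
    The covariance-diagonal recipe, and the extension to a propagated error of a derived quantity via its gradient (the delta method), can be sketched with NumPy on synthetic data; the derived quantity y(5) is a hypothetical example:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=x.size)  # noisy straight line

# Least-squares fit; cov is the covariance matrix of the fit parameters.
coeffs, cov = np.polyfit(x, y, 1, cov=True)
se = np.sqrt(np.diag(cov))   # standard errors of slope and intercept

# Propagated error of the derived quantity y(5) = a*5 + b via its gradient.
g = np.array([5.0, 1.0])     # d(y(5))/da, d(y(5))/db
se_y5 = np.sqrt(g @ cov @ g)
```

    The square roots of the covariance diagonal give the parameter SEs directly, and the quadratic form g·cov·g propagates them, correlations included, to any smooth function of the parameters.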

  2. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  3. Comparison of laser ray-tracing and skiascopic ocular wavefront-sensing devices

    PubMed Central

    Bartsch, D-UG; Bessho, K; Gomez, L; Freeman, WR

    2009-01-01

    Purpose To compare two wavefront-sensing devices based on different principles. Methods Thirty-eight healthy eyes of 19 patients were measured five times in the reproducibility study. Twenty eyes of 10 patients were measured in the comparison study. The Tracey Visual Function Analyzer (VFA), based on the ray-tracing principle and the Nidek optical pathway difference (OPD)-Scan, based on the dynamic skiascopy principle were compared. Standard deviation (SD) of root mean square (RMS) errors was compared to verify the reproducibility. We evaluated RMS errors, Zernike terms and conventional refractive indexes (Sph, Cyl, Ax, and spherical equivalent). Results In RMS errors reading, both devices showed similar ratios of SD to the mean measurement value (VFA: 57.5±11.7%, OPD-Scan: 53.9±10.9%). Comparison on the same eye showed that almost all terms were significantly greater using the VFA than using the OPD-Scan. However, certain high spatial frequency aberrations (tetrafoil, pentafoil, and hexafoil) were consistently measured near zero with the OPD-Scan. Conclusion Both devices showed similar level of reproducibility; however, there was considerable difference in the wavefront reading between machines when measuring the same eye. Differences in the number of sample points, centration, and measurement algorithms between the two instruments may explain our results. PMID:17571088

  4. An Improved Compressive Sensing and Received Signal Strength-Based Target Localization Algorithm with Unknown Target Population for Wireless Local Area Networks.

    PubMed

    Yan, Jun; Yu, Kegen; Chen, Ruizhi; Chen, Liang

    2017-05-30

    In this paper a two-phase compressive sensing (CS) and received signal strength (RSS)-based target localization approach is proposed to improve position accuracy by dealing with the unknown target population and the effect of grid dimensions on position error. In the coarse localization phase, by formulating target localization as a sparse signal recovery problem, grids with recovery vector components greater than a threshold are chosen as the candidate target grids. In the fine localization phase, by partitioning each candidate grid, the target position in a grid is iteratively refined by using the minimum residual error rule and the least-squares technique. When all the candidate target grids are iteratively partitioned and the measurement matrix is updated, the recovery vector is re-estimated. Threshold-based detection is employed again to determine the target grids and hence the target population. As a consequence, both the target population and the position estimation accuracy can be significantly improved. Simulation results demonstrate that the proposed approach achieves the best accuracy among all the algorithms compared.

  5. An analytical model for regular respiratory signals derived from the probability density function of Rayleigh distribution.

    PubMed

    Li, Xin; Li, Ye

    2015-01-01

    Regular respiratory signals (RRSs) acquired with physiological sensing systems (e.g., the life-detection radar system) can be used to locate survivors trapped in debris in disaster rescue, or predict the breathing motion to allow beam delivery under free breathing conditions in external beam radiotherapy. Among the existing analytical models for RRSs, the harmonic-based random model (HRM) is shown to be the most accurate, which, however, is found to be subject to considerable error if the RRS has a slowly descending end-of-exhale (EOE) phase. The defect of the HRM motivates us to construct a more accurate analytical model for the RRS. In this paper, we derive a new analytical RRS model from the probability density function of Rayleigh distribution. We evaluate the derived RRS model by using it to fit a real-life RRS in the sense of least squares, and the evaluation result shows that, our presented model exhibits lower error and fits the slowly descending EOE phases of the real-life RRS better than the HRM.
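
    The Rayleigh probability density from which the model is derived can itself be fit by least squares; this sketch fits only the density's scale parameter on synthetic values (sigma = 1.3 is an arbitrary choice), not the paper's full RRS model:

```python
import numpy as np
from scipy.optimize import curve_fit

def rayleigh_pdf(t, sigma):
    """Probability density function of the Rayleigh distribution."""
    return (t / sigma**2) * np.exp(-t**2 / (2.0 * sigma**2))

# Synthetic samples of the density (sigma = 1.3 is an arbitrary choice).
t = np.linspace(0.01, 6.0, 200)
samples = rayleigh_pdf(t, 1.3)

# Least-squares fit recovers the scale parameter.
popt, pcov = curve_fit(rayleigh_pdf, t, samples, p0=[1.0])
```

    The same least-squares machinery applies when fitting a Rayleigh-shaped template to a measured breathing cycle, which is the spirit of the paper's evaluation.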

  6. i-LOVE: ISS-JEM lidar for observation of vegetation environment

    NASA Astrophysics Data System (ADS)

    Asai, Kazuhiro; Sawada, Haruo; Sugimoto, Nobuo; Mizutani, Kohei; Ishii, Shoken; Nishizawa, Tomoaki; Shimoda, Haruhisa; Honda, Yoshiaki; Kajiwara, Koji; Takao, Gen; Hirata, Yasumasa; Saigusa, Nobuko; Hayashi, Masatomo; Oguma, Hiroyuki; Saito, Hideki; Awaya, Yoshio; Endo, Takahiro; Imai, Tadashi; Murooka, Jumpei; Kobatashi, Takashi; Suzuki, Keiko; Sato, Ryota

    2012-11-01

    It is very important to watch the spatial distribution of vegetation biomass and changes in biomass over time, representing invaluable information to improve present assessments and future projections of the terrestrial carbon cycle. A space lidar is well known as a powerful remote sensing technology for measuring the canopy height accurately. This paper describes the ISS (International Space Station) JEM (Japanese Experimental Module) EF (Exposed Facility)-borne vegetation lidar, which uses a two-dimensional array detector in order to reduce the root mean square error (RMSE) of tree height due to sloped surfaces.

  7. Validation plays the role of a "bridge" in connecting remote sensing research and applications

    NASA Astrophysics Data System (ADS)

    Wang, Zhiqiang; Deng, Ying; Fan, Yida

    2018-07-01

    Remote sensing products contribute to improving earth observations over space and time. Uncertainties exist in products of different levels; thus, validation of these products before and during their applications is critical. This study discusses the meaning of validation in depth and proposes a new definition of reliability for use with such products. In this context, validation should include three aspects: a description of the relevant uncertainties, quantitative measurement results and a qualitative judgment that considers the needs of users. A literature overview is then presented evidencing improvements in the concepts associated with validation. It shows that the root mean squared error (RMSE) is widely used to express accuracy; increasing numbers of remote sensing products have been validated; research institutes contribute most validation efforts; and sufficient validation studies encourage the application of remote sensing products. Validation plays a connecting role in the distribution and application of remote sensing products. Validation connects simple remote sensing subjects with other disciplines, and it connects primary research with practical applications. Based on the above findings, it is suggested that validation efforts that include wider cooperation among research institutes and full consideration of the needs of users should be promoted.

  8. Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures

    DTIC Science & Technology

    2016-06-01

    Keywords: inventory management improvement plan, mean of absolute scaled error, lead time adjusted squared error, forecast accuracy, benchmarking, naïve method. From the report's acronym list: JASA, Journal of the American Statistical Association; LASE, Lead-time Adjusted Squared Error; LCI, Life Cycle Indicator; MA, Moving Average; MAE, Mean Absolute Error; MSE, Mean Squared Error; NAVSUP, Naval Supply Systems Command; NDAA, National Defense Authorization Act; NIIN, National Individual Identification Number.

  9. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
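The paper's modified least squares estimator is not reproduced here, but the key quantity it names, the variance ratio, also drives classical Deming regression, which accounts for error in both variables; a sketch under the assumption of a known error-variance ratio:

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming regression: fit y = b0 + b1*x when both variables carry error.
    delta = Var(error in y) / Var(error in x); delta -> infinity recovers
    ordinary least squares of y on x (i.e., error-free x)."""
    mx, my = x.mean(), y.mean()
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    # closed-form slope for the errors-in-variables model with known ratio
    b1 = (syy - delta * sxx
          + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return my - b1 * mx, b1

# synthetic data: true line y = 1 + 2x, equal noise in x and y
rng = np.random.default_rng(0)
x_true = np.linspace(0.0, 10.0, 2000)
x = x_true + rng.normal(0.0, 0.5, x_true.size)
y = 1.0 + 2.0 * x_true + rng.normal(0.0, 0.5, x_true.size)
b0, b1 = deming_fit(x, y, delta=1.0)
```

Unlike ordinary least squares, which attenuates the slope when the factor contains error, the variance-ratio-aware fit recovers the true slope.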

  10. A prototype upper-atmospheric data assimilation scheme based on optimal interpolation: 2. Numerical experiments

    NASA Astrophysics Data System (ADS)

Akmaev, R. A.

    1999-04-01

In Part 1 of this work (Akmaev, 1999), an overview is presented of the theory of optimal interpolation (OI) (Gandin, 1963) and related data assimilation techniques based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995). The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and to obtain estimates of the true state, optimal in some sense, from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing, suppressing noise, i.e., the spectral components that are presumably not present in the true signal. Usually the criterion of optimality is minimum variance of the expected errors, so the whole approach may be considered constrained least squares, or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating such additional information are potentially superior to techniques that have no access to it, such as conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995).
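The minimum-variance analysis described above can be sketched in a few lines of numpy; the grid, covariances, and observation values below are invented for illustration:

```python
import numpy as np

def oi_analysis(xb, B, y, H, R):
    """Minimum-variance (optimal interpolation) update:
        xa = xb + K (y - H xb),  K = B H^T (H B H^T + R)^(-1)
    xb: background state, B: background error covariance,
    y: observations, H: observation operator, R: observation error covariance."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb), K

# three grid points with exponentially decaying background correlations;
# points 0 and 2 are observed, point 1 is a data void filled by OI
B = np.exp(-np.abs(np.subtract.outer(np.arange(3), np.arange(3))) / 2.0)
H = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
xa, K = oi_analysis(np.zeros(3), B, np.array([1.0, 1.0]), H, 1e-6 * np.eye(2))
```

With near-perfect observations the analysis reproduces them at the observed points, while the a priori correlations spread the information into the unobserved gap.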

  11. Identification and Severity Determination of Wheat Stripe Rust and Wheat Leaf Rust Based on Hyperspectral Data Acquired Using a Black-Paper-Based Measuring Method.

    PubMed

    Wang, Hui; Qin, Feng; Ruan, Liu; Wang, Rui; Liu, Qi; Ma, Zhanhong; Li, Xiaolong; Cheng, Pei; Wang, Haiguang

    2016-01-01

    It is important to implement detection and assessment of plant diseases based on remotely sensed data for disease monitoring and control. Hyperspectral data of healthy leaves, leaves in incubation period and leaves in diseased period of wheat stripe rust and wheat leaf rust were collected under in-field conditions using a black-paper-based measuring method developed in this study. After data preprocessing, the models to identify the diseases were built using distinguished partial least squares (DPLS) and support vector machine (SVM), and the disease severity inversion models of stripe rust and the disease severity inversion models of leaf rust were built using quantitative partial least squares (QPLS) and support vector regression (SVR). All the models were validated by using leave-one-out cross validation and external validation. The diseases could be discriminated using both distinguished partial least squares and support vector machine with the accuracies of more than 99%. For each wheat rust, disease severity levels were accurately retrieved using both the optimal QPLS models and the optimal SVR models with the coefficients of determination (R2) of more than 0.90 and the root mean square errors (RMSE) of less than 0.15. The results demonstrated that identification and severity evaluation of stripe rust and leaf rust at the leaf level could be implemented based on the hyperspectral data acquired using the developed method. A scientific basis was provided for implementing disease monitoring by using aerial and space remote sensing technologies.

  12. Identification and Severity Determination of Wheat Stripe Rust and Wheat Leaf Rust Based on Hyperspectral Data Acquired Using a Black-Paper-Based Measuring Method

    PubMed Central

    Ruan, Liu; Wang, Rui; Liu, Qi; Ma, Zhanhong; Li, Xiaolong; Cheng, Pei; Wang, Haiguang

    2016-01-01

    It is important to implement detection and assessment of plant diseases based on remotely sensed data for disease monitoring and control. Hyperspectral data of healthy leaves, leaves in incubation period and leaves in diseased period of wheat stripe rust and wheat leaf rust were collected under in-field conditions using a black-paper-based measuring method developed in this study. After data preprocessing, the models to identify the diseases were built using distinguished partial least squares (DPLS) and support vector machine (SVM), and the disease severity inversion models of stripe rust and the disease severity inversion models of leaf rust were built using quantitative partial least squares (QPLS) and support vector regression (SVR). All the models were validated by using leave-one-out cross validation and external validation. The diseases could be discriminated using both distinguished partial least squares and support vector machine with the accuracies of more than 99%. For each wheat rust, disease severity levels were accurately retrieved using both the optimal QPLS models and the optimal SVR models with the coefficients of determination (R2) of more than 0.90 and the root mean square errors (RMSE) of less than 0.15. The results demonstrated that identification and severity evaluation of stripe rust and leaf rust at the leaf level could be implemented based on the hyperspectral data acquired using the developed method. A scientific basis was provided for implementing disease monitoring by using aerial and space remote sensing technologies. PMID:27128464

  13. Greedy algorithms for diffuse optical tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Dileep, B. P. V.; Das, Tapan; Dutta, Pranab K.

    2018-03-01

Diffuse optical tomography (DOT) is a noninvasive imaging modality that reconstructs the optical parameters of a highly scattering medium. However, the inverse problem of DOT is ill-posed and highly nonlinear because of the zig-zag propagation of photons diffusing through the tissue cross section. Conventional DOT imaging methods iteratively invoke a forward diffusion equation solver, which makes the problem computationally expensive, and they fail when the geometry is complex. Recently, the theory of compressive sensing (CS) has received considerable attention because of its efficient use in biomedical imaging applications. The objective of this paper is to solve a given DOT inverse problem within the compressive sensing framework; greedy algorithms, namely orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), stagewise orthogonal matching pursuit (StOMP), regularized orthogonal matching pursuit (ROMP), and simultaneous orthogonal matching pursuit (S-OMP), are studied to reconstruct the change in the absorption parameter, i.e., Δα, from the boundary data. The greedy algorithms are also validated experimentally on a paraffin wax rectangular phantom using a well-designed experimental setup. Conventional DOT methods, namely the least-squares method and truncated singular value decomposition (TSVD), are also studied for comparison. One of the main features of this work is the use of fewer source-detector pairs, which can facilitate the use of DOT in routine screening applications. Performance metrics such as mean square error (MSE), normalized mean square error (NMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) are used to evaluate the algorithms considered in this paper. Extensive simulation results confirm that CS-based DOT reconstruction outperforms conventional DOT imaging methods in terms of computational efficiency. The main advantage of this approach is that the forward diffusion equation solver need not be solved repeatedly.
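Of the greedy algorithms listed, orthogonal matching pursuit is the simplest to sketch; here is a generic OMP on a synthetic sparse-recovery problem (not the DOT forward model used in the paper):

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: greedily grow a support set, re-solving
    a least-squares problem restricted to that support at every step."""
    x = np.zeros(A.shape[1])
    support, residual = [], b.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x

# recover a 3-sparse vector from 100 random measurements (synthetic demo)
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 200))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x0 = np.zeros(200)
x0[[5, 17, 42]] = [1.0, -2.0, 1.5]
x_hat = omp(A, A @ x0, k=3)
```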

  14. A New Take on an Old Square

    ERIC Educational Resources Information Center

    Richardson, Janessa; Bachman, Rachel M.

    2017-01-01

    This article describes a preservice teacher's imaginative exploration of completing the square through a process of reasoning and sense making. She recounts historical perspectives and her own discoveries in the process of completing the square. Through this process of sense making, she engaged with the content standard of completing the square to…

  15. Two Enhancements of the Logarithmic Least-Squares Method for Analyzing Subjective Comparisons

    DTIC Science & Technology

    1989-03-25

error term. For this model, the total sum of squares (SSTO), defined as SSTO = sum_{i=1..n} (y_i - ybar)^2, can be partitioned into error and regression sums ... of the regression line around the mean value. Mathematically, for the model given by equation A.4, SSTO = SSE + SSR (A.6), where SSTO is the total sum of squares (i.e., the variance of the y_i's), SSE is the error sum of squares, and SSR is the regression sum of squares. SSTO, SSE, and SSR are given
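The partition quoted above is easy to verify numerically; a small check with arbitrary illustration data:

```python
import numpy as np

# Numerical check of the identity SSTO = SSE + SSR for a least-squares line
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

b1, b0 = np.polyfit(x, y, 1)            # slope, intercept
yhat = b0 + b1 * x

ssto = np.sum((y - y.mean()) ** 2)      # total sum of squares
sse = np.sum((y - yhat) ** 2)           # error (residual) sum of squares
ssr = np.sum((yhat - y.mean()) ** 2)    # regression sum of squares
```

The identity holds exactly for any ordinary least-squares fit that includes an intercept.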

  16. Hyperspectral Analysis of Soil Total Nitrogen in Subsided Land Using the Local Correlation Maximization-Complementary Superiority (LCMCS) Method

    PubMed Central

    Lin, Lixin; Wang, Yunjia; Teng, Jiyao; Xi, Xiuxiu

    2015-01-01

The measurement of soil total nitrogen (TN) by hyperspectral remote sensing provides an important tool for soil restoration programs in areas with subsided land caused by the extraction of natural resources. This study used the local correlation maximization-complementary superiority (LCMCS) method to establish TN prediction models by considering the relationship between spectral reflectance (measured by an ASD FieldSpec 3 spectroradiometer) and TN, based on spectral reflectance curves of soil samples collected from subsided land delineated by synthetic aperture radar interferometry (InSAR) technology. Based on the 1655 effective bands selected from the optimal spectrum (OSP) of the first derivative of the reciprocal logarithm ([log{1/R}]′) (correlation coefficients, p < 0.01), the optimal LCMCS model was obtained as the final model, which produced lower prediction errors (root mean square error of validation [RMSEV] = 0.89, mean relative error of validation [MREV] = 5.93%) than models built by the local correlation maximization (LCM), complementary superiority (CS) and partial least squares regression (PLS) methods. The predictive effect of the LCMCS model was optimal in Cangzhou, Renqiu and Fengfeng District. Results indicate that the LCMCS method has great potential to monitor TN in subsided lands caused by the extraction of natural resources including groundwater, oil and coal. PMID:26213935

  17. AKLSQF - LEAST SQUARES CURVE FITTING

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

The Least Squares Curve Fitting program, AKLSQF, computes the polynomial that least-squares fits uniformly spaced data easily and efficiently. The program allows the user either to specify the tolerable least-squares error of the fit or to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least-squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least-squares polynomial in two steps. First, the data points are least-squares fitted using orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least-squares fit error. The degree of the fitting polynomial is then increased successively until the error criterion specified by the user is met. At every step, the polynomial as well as the least-squares fitting error is printed to the screen. In general, the program can produce a curve fit up to a 100-degree polynomial. All computations in the program are carried out in double-precision format for real numbers and long-integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
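AKLSQF's error-tolerance mode, raising the degree until the fit error meets the tolerance, can be mimicked with modern numpy (a sketch, not the original orthogonal-polynomial Quick Basic implementation):

```python
import numpy as np

def fit_to_tolerance(x, y, tol, max_degree=100):
    """Increase the polynomial degree until the RMS least-squares fit error
    drops below tol, echoing AKLSQF's error-tolerance mode."""
    for degree in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, degree)
        err = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
        if err <= tol:
            return coeffs, degree, err
    return coeffs, max_degree, err

x = np.linspace(0.0, 1.0, 20)   # uniformly spaced data, as AKLSQF expects
y = 2 * x ** 3 - x + 0.5        # exactly cubic test data
coeffs, degree, err = fit_to_tolerance(x, y, tol=1e-8)
```

On exactly cubic data the search stops at degree 3, where the fit error drops to machine precision.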

  18. Estimation of Finger Joint Angles Based on Electromechanical Sensing of Wrist Shape.

    PubMed

    Kawaguchi, Junki; Yoshimoto, Shunsuke; Kuroda, Yoshihiro; Oshiro, Osamu

    2017-09-01

    An approach to finger motion capture that places fewer restrictions on the usage environment and actions of the user is an important research topic in biomechanics and human-computer interaction. We proposed a system that electrically detects finger motion from the associated deformation of the wrist and estimates the finger joint angles using multiple regression models. A wrist-mounted sensing device with 16 electrodes detects deformation of the wrist from changes in electrical contact resistance at the skin. In this study, we experimentally investigated the accuracy of finger joint angle estimation, the adequacy of two multiple regression models, and the resolution of the estimation of total finger joint angles. In experiments, both the finger joint angles and the system output voltage were recorded as subjects performed flexion/extension of the fingers. These data were used for calibration using the least-squares method. The system was found to be capable of estimating the total finger joint angle with a root-mean-square error of 29-34 degrees. A multiple regression model with a second-order polynomial basis function was shown to be suitable for the estimation of all total finger joint angles, but not those of the thumb.
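The calibration step described, least squares over a second-order polynomial basis, can be sketched as follows; the three-channel synthetic data stand in for the 16-electrode device, and cross terms between channels are omitted for brevity:

```python
import numpy as np

def quad_basis(V):
    """Second-order polynomial basis [1, v_i, v_i^2] for each channel
    (cross terms omitted in this sketch)."""
    V = np.atleast_2d(V)
    return np.hstack([np.ones((V.shape[0], 1)), V, V ** 2])

def calibrate(V, angles):
    """Least-squares calibration of joint angle against sensor voltages."""
    w, *_ = np.linalg.lstsq(quad_basis(V), angles, rcond=None)
    return w

# synthetic stand-in for wrist-sensor data: 3 channels, quadratic truth
rng = np.random.default_rng(0)
V = rng.uniform(0.0, 1.0, (50, 3))
angles = 20.0 + V @ [5.0, -3.0, 2.0] + (V ** 2) @ [1.0, 0.5, -0.8]
w = calibrate(V, angles)
pred = quad_basis(V) @ w
```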

  19. Linking Field and Satellite Observations to Reveal Differences in Single vs. Double-Cropped Soybean Yields in Central Brazil

    NASA Astrophysics Data System (ADS)

    Jeffries, G. R.; Cohn, A.

    2016-12-01

Soy-corn double cropping (DC) has been widely adopted in Central Brazil alongside single-cropped (SC) soybean production. DC involves different cropping calendars and soy varieties, and may be associated with different crop yield patterns and volatility than SC. Study of the performance of the region's agriculture in a changing climate depends on tracking differences in the productivity of SC vs. DC, but has been limited by crop yield data that conflate the two systems. We predicted SC and DC yields across Central Brazil, drawing on field observations and remotely sensed data. We first modeled field yield estimates as a function of remotely sensed DC status, vegetation index (VI) metrics, and other management and biophysical factors. We then used the estimated statistical model to predict SC and DC soybean yields at each 500 m grid cell of Central Brazil for harvest years 2001-2015. The yield estimation model was constructed using 1) a repeated cross-sectional survey of soybean yields and management factors for years 2007-2015, 2) a custom agricultural land cover classification dataset which assimilates earlier datasets for the region, and 3) 500 m, 8-day MODIS image composites used to calculate the wide dynamic range vegetation index (WDRVI) and derived metrics such as the area under the WDRVI curve during critical crop development periods. A statistical yield estimation model comprising primarily WDRVI metrics, DC status, and spatial fixed effects was developed on a subset of the yield dataset. Model validation was conducted by predicting previously withheld yield records and assessing error and goodness of fit for the predicted values with metrics including root mean squared error (RMSE), mean squared error (MSE), and R2. We found that a statistical yield estimation model incorporating WDRVI and DC status is an effective way to estimate crop yields over the region. 
Statistical properties of the resulting gridded yield dataset may be valuable for understanding linkages between crop yields, farm management factors, and climate.

  20. Construction of an unmanned aerial vehicle remote sensing system for crop monitoring

    NASA Astrophysics Data System (ADS)

    Jeong, Seungtaek; Ko, Jonghan; Kim, Mijeong; Kim, Jongkwon

    2016-04-01

    We constructed a lightweight unmanned aerial vehicle (UAV) remote sensing system and determined the ideal method for equipment setup, image acquisition, and image processing. Fields of rice paddy (Oryza sativa cv. Unkwang) grown under three different nitrogen (N) treatments of 0, 50, or 115 kg/ha were monitored at Chonnam National University, Gwangju, Republic of Korea, in 2013. A multispectral camera was used to acquire UAV images from the study site. Atmospheric correction of these images was completed using the empirical line method, and three-point (black, gray, and white) calibration boards were used as pseudo references. Evaluation of our corrected UAV-based remote sensing data revealed that correction efficiency and root mean square errors ranged from 0.77 to 0.95 and 0.01 to 0.05, respectively. The time series maps of simulated normalized difference vegetation index (NDVI) produced using the UAV images reproduced field variations of NDVI reasonably well, both within and between the different N treatments. We concluded that the UAV-based remote sensing technology utilized in this study is potentially an easy and simple way to quantitatively obtain reliable two-dimensional remote sensing information on crop growth.
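The empirical line method used for atmospheric correction amounts to a linear fit through the reference boards; a sketch with made-up panel values (the actual board reflectances and digital numbers are not given in the abstract):

```python
import numpy as np

def empirical_line(dn, dn_refs, rho_refs):
    """Empirical line method: fit reflectance = a*DN + b through reference
    panels of known reflectance, then map scene digital numbers (DN)
    to surface reflectance."""
    a, b = np.polyfit(dn_refs, rho_refs, 1)
    return a * np.asarray(dn, dtype=float) + b

# three-point calibration: black, gray, and white boards (illustrative values)
dn_refs = np.array([12.0, 95.0, 205.0])
rho_refs = np.array([0.04, 0.35, 0.80])
rho = empirical_line([50.0, 150.0], dn_refs, rho_refs)
```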

  1. Retrieving background surface reflectance of Himawari-8/AHI using BRDF modeling

    NASA Astrophysics Data System (ADS)

    Choi, Sungwon; Seo, Minji; Lee, Kyeong-sang; Han, Kyung-soo

    2017-04-01

Remote sensing is more important today than in the past, and retrieving surface reflectance is a central task within it; many countries retrieve surface reflectance with both polar-orbiting and geostationary satellites. We studied the Bidirectional Reflectance Distribution Function (BRDF), which is used to retrieve surface reflectance. In the BRDF equation, surface reflectance is calculated from the BRDF components and angular data; the BRDF components describe three kinds of scattering: isotropic, geometric, and volumetric. To produce the Background Surface Reflectance (BSR) of Himawari-8/AHI, we applied the BRDF model to five bands (bands 1-5) and generated a BSR for each channel. For validation, we compared the BSR with the top-of-canopy (TOC) reflectance of AHI. As a result, biases range from -0.00223 to 0.008328 and root mean square errors (RMSE) range from 0.045 to 0.049. We consider that the BSR can replace TOC reflectance in remote sensing applications, mitigating the weaknesses of TOC reflectance.
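The linear kernel-driven BRDF model referred to above can be fitted by ordinary least squares; in this sketch the kernel values are synthetic placeholders rather than true Ross-Li kernels computed from sun/view geometry:

```python
import numpy as np

def fit_brdf_kernels(kvol, kgeo, reflectance):
    """Fit the linear kernel-driven BRDF model
        R = f_iso + f_vol * Kvol + f_geo * Kgeo
    by least squares over multi-angle observations. Kvol and Kgeo stand in
    for the volumetric and geometric scattering kernels."""
    A = np.column_stack([np.ones_like(kvol), kvol, kgeo])
    f, *_ = np.linalg.lstsq(A, reflectance, rcond=None)
    return f  # (f_iso, f_vol, f_geo)

# synthetic multi-angle observations generated from known coefficients
rng = np.random.default_rng(2)
kvol = rng.uniform(-0.5, 0.5, 12)
kgeo = rng.uniform(-1.0, 0.0, 12)
f_true = np.array([0.30, 0.10, -0.05])
refl = f_true[0] + f_true[1] * kvol + f_true[2] * kgeo
f_hat = fit_brdf_kernels(kvol, kgeo, refl)
```

Once the three coefficients are fitted, reflectance can be reconstructed for any viewing geometry, which is the basis of a background surface reflectance product.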

  2. Improving Multidimensional Wireless Sensor Network Lifetime Using Pearson Correlation and Fractal Clustering

    PubMed Central

    Almeida, Fernando R.; Brayner, Angelo; Rodrigues, Joel J. P. C.; Maia, Jose E. Bessa

    2017-01-01

    An efficient strategy for reducing message transmission in a wireless sensor network (WSN) is to group sensors by means of an abstraction denoted cluster. The key idea behind the cluster formation process is to identify a set of sensors whose sensed values present some data correlation. Nowadays, sensors are able to simultaneously sense multiple different physical phenomena, yielding in this way multidimensional data. This paper presents three methods for clustering sensors in WSNs whose sensors collect multidimensional data. The proposed approaches implement the concept of multidimensional behavioral clustering. To show the benefits introduced by the proposed methods, a prototype has been implemented and experiments have been carried out on real data. The results prove that the proposed methods decrease the amount of data flowing in the network and present low root-mean-square error (RMSE). PMID:28590450

  3. Improving Multidimensional Wireless Sensor Network Lifetime Using Pearson Correlation and Fractal Clustering.

    PubMed

    Almeida, Fernando R; Brayner, Angelo; Rodrigues, Joel J P C; Maia, Jose E Bessa

    2017-06-07

An efficient strategy for reducing message transmission in a wireless sensor network (WSN) is to group sensors by means of an abstraction denoted cluster. The key idea behind the cluster formation process is to identify a set of sensors whose sensed values present some data correlation. Nowadays, sensors are able to simultaneously sense multiple different physical phenomena, yielding in this way multidimensional data. This paper presents three methods for clustering sensors in WSNs whose sensors collect multidimensional data. The proposed approaches implement the concept of multidimensional behavioral clustering. To show the benefits introduced by the proposed methods, a prototype has been implemented and experiments have been carried out on real data. The results prove that the proposed methods decrease the amount of data flowing in the network and present low root-mean-square error (RMSE).

  4. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on Voyager 2 measurement distribution.

  5. Aerodynamic influence coefficient method using singularity splines.

    NASA Technical Reports Server (NTRS)

    Mercer, J. E.; Weber, J. A.; Lesferd, E. P.

    1973-01-01

A new numerical formulation, with computed results, is presented. This formulation combines the adaptability to complex shapes offered by paneling schemes with the smoothness and accuracy of loading-function methods. It employs a continuous distribution of singularity strength over a set of panels on a paneled wing. The basic distributions are independent, and each satisfies all of the continuity conditions required of the final solution. These distributions are overlapped both spanwise and chordwise (termed 'splines'). Boundary conditions are satisfied in a least-squares error sense over the surface using a finite summing technique to approximate the integral.

  6. Space-Time Data Fusion

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Nguyen, Hai; Olsen, Edward; Cressie, Noel

    2011-01-01

Space-Time Data Fusion (STDF) is a methodology for combining heterogeneous remote sensing data to optimally estimate the true values of a geophysical field of interest and to obtain uncertainties for those estimates. The input data sets may have different observing characteristics, including different footprints, spatial resolutions and fields of view, orbit cycles, biases, and noise characteristics. Despite these differences, all observed data can be linked to the underlying field, and therefore to each other, by a statistical model. Differences in footprints and other geometric characteristics are accounted for by parameterizing pixel-level remote sensing observations as spatial integrals of true field values lying within pixel boundaries, plus measurement error. Both spatial and temporal correlations, in the true field and in the observations, are estimated and incorporated through the use of a space-time random effects (STRE) model. Once the model's parameters are estimated, we use it to derive expressions for optimal (minimum mean squared error and unbiased) estimates of the true field at any arbitrary location of interest, computed from the observations. Standard errors of these estimates are also produced, allowing confidence intervals to be constructed. The procedure is carried out on a fine spatial grid to approximate a continuous field. We demonstrate STDF by applying it to the problem of estimating CO2 concentration in the lower atmosphere using data from the Atmospheric Infrared Sounder (AIRS) and the Japanese Greenhouse Gases Observing Satellite (GOSAT) over one year for the continental US.

  7. Long-term surface EMG monitoring using K-means clustering and compressive sensing

    NASA Astrophysics Data System (ADS)

    Balouchestani, Mohammadreza; Krishnan, Sridhar

    2015-05-01

In this work, we present an advanced K-means clustering algorithm based on compressed sensing (CS) theory in combination with the K-Singular Value Decomposition (K-SVD) method for clustering long-term recordings of surface electromyography (sEMG) signals. Long-term monitoring of sEMG signals records the electrical activity produced by muscles, a very useful procedure for treatment and diagnostic purposes as well as for the detection of various pathologies. The proposed algorithm is examined for three scenarios of sEMG signals: a healthy person (sEMG-Healthy), a patient with myopathy (sEMG-Myopathy), and a patient with neuropathy (sEMG-Neuropathy). The proposed algorithm can easily scan large datasets of long-term sEMG recordings. We test the proposed algorithm with Principal Component Analysis (PCA) and Linear Correlation Coefficient (LCC) dimensionality reduction methods. The output of the proposed algorithm is then fed to K-Nearest Neighbours (K-NN) and Probabilistic Neural Network (PNN) classifiers in order to calculate the clustering performance. The proposed algorithm achieves a classification accuracy of 99.22%, reducing the Average Classification Error (ACE) by 17%, the Training Error (TE) by 9%, and the Root Mean Square Error (RMSE) by 18%. The proposed algorithm also reduces clustering energy consumption by 14% compared to the existing K-means clustering algorithm.

  8. How Well Will MODIS Measure Top of Atmosphere Aerosol Direct Radiative Forcing?

    NASA Technical Reports Server (NTRS)

    Remer, Lorraine A.; Kaufman, Yoram J.; Levin, Zev; Ghan, Stephen; Einaudi, Franco (Technical Monitor)

    2000-01-01

The new generation of satellite sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS) will be able to detect and characterize global aerosols with an unprecedented accuracy. The question remains whether this accuracy will be sufficient to narrow the uncertainties in our estimates of aerosol radiative forcing at the top of the atmosphere. Satellite remote sensing detects aerosol optical thickness with the least amount of relative error when aerosol loading is high. Satellites are less effective when aerosol loading is low. We use the monthly mean results of two global aerosol transport models to simulate the spatial distribution of smoke aerosol in the Southern Hemisphere during the tropical biomass burning season. This spatial distribution allows us to determine that 87-94% of the smoke aerosol forcing at the top of the atmosphere occurs in grid squares with sufficient signal to noise ratio to be detectable from space. The uncertainty of quantifying the smoke aerosol forcing in the Southern Hemisphere depends on the uncertainty introduced by errors in estimating the background aerosol, errors resulting from uncertainties in surface properties, and errors resulting from uncertainties in assumptions of aerosol properties. These three errors combine to give overall uncertainties of 1.5 to 2.2 W m-2 (21-56%) in determining the Southern Hemisphere smoke aerosol forcing at the top of the atmosphere. The range of values depends on which estimate of MODIS retrieval uncertainty is used, either the theoretical calculation (upper bound) or the empirical estimate (lower bound). Strategies that use the satellite data to derive flux directly, or use the data in conjunction with ground-based remote sensing and aerosol transport models, can reduce these uncertainties.

  9. Quadratic correlation filters for optical correlators

    NASA Astrophysics Data System (ADS)

    Mahalanobis, Abhijit; Muise, Robert R.; Vijaya Kumar, Bhagavatula V. K.

    2003-08-01

Linear correlation filters have been implemented in optical correlators and successfully used for a variety of applications. The output of an optical correlator is usually sensed using a square-law device (such as a CCD array), which forces the output to be the squared magnitude of the desired correlation. It is, however, not traditional practice to factor the effect of the square-law detector into the design of the linear correlation filters. In fact, the input-output relationship of an optical correlator is more accurately modeled as a quadratic operation than a linear operation. Quadratic correlation filters (QCFs) operate directly on the image data without the need for feature extraction or segmentation. In this sense, the QCFs retain the main advantages of conventional linear correlation filters while offering significant improvements in other respects. Not only is more processing required to detect peaks in the outputs of multiple linear filters, but choosing a winner among them is an error-prone task. In contrast, all channels in a QCF work together to optimize the same performance metric and produce a combined output that leads to considerable simplification of the post-processing. In this paper, we propose a novel approach to the design of quadratic correlation filters based on the Fukunaga-Koontz transform. Although quadratic filters are known to be optimum when the data is Gaussian, it is expected that they will perform as well as or better than linear filters in general. Preliminary performance results are provided that show that quadratic correlation filters perform better than their linear counterparts.

  10. Gyro Drift Correction for An Indirect Kalman Filter Based Sensor Fusion Driver.

    PubMed

    Lee, Chan-Gun; Dao, Nhu-Ngoc; Jang, Seonmin; Kim, Deokhwan; Kim, Yonghun; Cho, Sungrae

    2016-06-11

Sensor fusion techniques have made a significant contribution to the success of the recently emerging mobile applications era because a variety of mobile applications operate based on multi-sensing information from the surrounding environment, such as navigation systems, fitness trackers, interactive virtual reality games, etc. For these applications, the accuracy of sensing information plays an important role in improving the user experience (UX) quality, especially with gyroscopes and accelerometers. Therefore, in this paper, we propose a novel mechanism to resolve the gyro drift problem, which negatively affects the accuracy of orientation computations in indirect Kalman filter based sensor fusion. Our mechanism focuses on addressing the issues of external feedback loops and non-gyro error elements contained in the state vectors of an indirect Kalman filter. Moreover, the mechanism is implemented in the device-driver layer, providing lower process latency and transparency capabilities for the upper applications. These advances are relevant to millions of legacy applications since utilizing our mechanism does not require the existing applications to be re-programmed. The experimental results show that the root mean square error (RMSE) is significantly reduced from 6.3 × 10^-1 before applying our mechanism to 5.3 × 10^-7 after.

  11. Look-up-table approach for leaf area index retrieval from remotely sensed data based on scale information

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaohua; Li, Chuanrong; Tang, Lingli

    2018-03-01

    Leaf area index (LAI) is a key structural characteristic of vegetation and plays a significant role in global change research. Several methods and sources of remotely sensed data have been evaluated for LAI estimation. This study aimed to evaluate the suitability of the look-up-table (LUT) approach for crop LAI retrieval from Satellite Pour l'Observation de la Terre (SPOT)-5 data and to establish an LUT approach for LAI inversion based on scale information. The LAI inversion result was validated by in situ LAI measurements, indicating that the LUT generated from the PROSAIL (PROSPECT + SAIL: leaf optical properties spectra + scattering by arbitrarily inclined leaves) model was suitable for crop LAI estimation, with a root mean square error (RMSE) of ~0.31 m²/m² and a coefficient of determination (R²) of 0.65. The scale effect of crop LAI was analyzed based on Taylor expansion theory, indicating that when the SPOT data were aggregated to 200 × 200 pixels, the relative error became significant, at 13.7%. Finally, an LUT method integrated with scale information is proposed in this article, improving the inversion accuracy to an RMSE of 0.20 m²/m² and an R² of 0.83.
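
    An LUT inversion of this kind reduces to simulating reflectance over a grid of candidate LAI values and picking the entry closest to the observation. A minimal sketch, with a toy saturating reflectance model standing in for PROSAIL (the model form and coefficients are illustrative, not from the paper):

```python
import math

def toy_canopy_reflectance(lai):
    """Stand-in for the PROSAIL simulation: reflectance saturates with LAI."""
    return 0.05 + 0.45 * (1.0 - math.exp(-0.6 * lai))

# Look-up table over a grid of candidate LAI values (0 to 8, step 0.01).
LUT = [(i / 100.0, toy_canopy_reflectance(i / 100.0)) for i in range(801)]

def invert_lai(observed_reflectance):
    """Return the LAI whose simulated reflectance best matches the observation."""
    return min(LUT, key=lambda entry: abs(entry[1] - observed_reflectance))[0]
```

    A real implementation would match multi-band reflectances (minimizing an RMSE cost across bands) rather than a single scalar, but the table-search structure is the same.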

  12. Response Surface Analysis of Experiments with Random Blocks

    DTIC Science & Technology

    1988-09-01

    partitioned into a lack-of-fit sum of squares, SSLOF, and a pure error sum of squares, SSPE. The latter is obtained by pooling the pure error sums of squares...from the blocks. Tests concerning the polynomial effects can then proceed using SSPE as the error term in the denominators of the F test statistics. 3.2...the center point in each of the three blocks is equal to SSPE = 2.0127 with 5 degrees of freedom. Hence, the lack-of-fit sum of squares is SSLOF

  13. Least-Squares Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1990-01-01

    Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler.
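
    The behaviour described — fit a polynomial to uniformly spaced data and report the least-squares error actually incurred — is easy to sketch with NumPy. This is an illustration of the idea, not the AKLSQF source:

```python
import numpy as np

def least_squares_polyfit(x, y, degree):
    """Fit a polynomial by least squares and report the RMS fit error,
    mirroring what a routine like AKLSQF returns."""
    coeffs = np.polyfit(x, y, degree)          # highest-degree coefficient first
    residuals = y - np.polyval(coeffs, x)
    rms_error = float(np.sqrt(np.mean(residuals ** 2)))
    return coeffs, rms_error

x = np.linspace(0.0, 1.0, 11)                  # uniformly spaced data
y = 2.0 * x ** 2 - 3.0 * x + 1.0               # exact quadratic, so fit error ~ 0
coeffs, err = least_squares_polyfit(x, y, 2)
```

    Raising the degree until `err` falls below a tolerance reproduces the "specify tolerable least-squares error" mode the abstract mentions.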

  14. Least-squares dual characterization for ROI assessment in emission tomography

    NASA Astrophysics Data System (ADS)

    Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.

    2013-06-01

    Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing upon the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performance of LSD characterization is at least as good as that of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD using appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff.

  15. Vision-Based Leader Vehicle Trajectory Tracking for Multiple Agricultural Vehicles

    PubMed Central

    Zhang, Linhuan; Ahamed, Tofael; Zhang, Yan; Gao, Pengbo; Takigawa, Tomohiro

    2016-01-01

    The aim of this study was to design a navigation system composed of a human-controlled leader vehicle and a follower vehicle. The follower vehicle automatically tracks the leader vehicle. With such a system, a human driver can control two vehicles efficiently in agricultural operations. The tracking system was developed for the leader and the follower vehicle, and control of the follower was performed using a camera vision system. A stable and accurate monocular vision-based sensing system was designed, consisting of a camera and rectangular markers. Noise in the data acquisition was reduced by using the least-squares method. A feedback control algorithm was used to allow the follower vehicle to track the trajectory of the leader vehicle. A proportional–integral–derivative (PID) controller was introduced to maintain the required distance between the leader and the follower vehicle. Field experiments were conducted to evaluate the sensing and tracking performances of the leader-follower system while the leader vehicle was driven at an average speed of 0.3 m/s. In the case of linear trajectory tracking, the RMS errors were 6.5 cm, 8.9 cm and 16.4 cm for straight, turning and zigzag paths, respectively. Again, for parallel trajectory tracking, the root mean square (RMS) errors were found to be 7.1 cm, 14.6 cm and 14.0 cm for straight, turning and zigzag paths, respectively. The navigation performances indicated that the autonomous follower vehicle was able to follow the leader vehicle, and the tracking accuracy was found to be satisfactory. Therefore, the developed leader-follower system can be implemented for the harvesting of grains, using a combine as the leader and an unloader as the autonomous follower vehicle. PMID:27110793

  16. Vision-Based Leader Vehicle Trajectory Tracking for Multiple Agricultural Vehicles.

    PubMed

    Zhang, Linhuan; Ahamed, Tofael; Zhang, Yan; Gao, Pengbo; Takigawa, Tomohiro

    2016-04-22

    The aim of this study was to design a navigation system composed of a human-controlled leader vehicle and a follower vehicle. The follower vehicle automatically tracks the leader vehicle. With such a system, a human driver can control two vehicles efficiently in agricultural operations. The tracking system was developed for the leader and the follower vehicle, and control of the follower was performed using a camera vision system. A stable and accurate monocular vision-based sensing system was designed, consisting of a camera and rectangular markers. Noise in the data acquisition was reduced by using the least-squares method. A feedback control algorithm was used to allow the follower vehicle to track the trajectory of the leader vehicle. A proportional-integral-derivative (PID) controller was introduced to maintain the required distance between the leader and the follower vehicle. Field experiments were conducted to evaluate the sensing and tracking performances of the leader-follower system while the leader vehicle was driven at an average speed of 0.3 m/s. In the case of linear trajectory tracking, the RMS errors were 6.5 cm, 8.9 cm and 16.4 cm for straight, turning and zigzag paths, respectively. Again, for parallel trajectory tracking, the root mean square (RMS) errors were found to be 7.1 cm, 14.6 cm and 14.0 cm for straight, turning and zigzag paths, respectively. The navigation performances indicated that the autonomous follower vehicle was able to follow the leader vehicle, and the tracking accuracy was found to be satisfactory. Therefore, the developed leader-follower system can be implemented for the harvesting of grains, using a combine as the leader and an unloader as the autonomous follower vehicle.
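
    For reference, the RMS figures quoted for each path type are root-mean-square lateral tracking errors, computed as:

```python
import math

def rms_error(errors_cm):
    """Root-mean-square of the lateral tracking errors for one path type."""
    return math.sqrt(sum(e * e for e in errors_cm) / len(errors_cm))
```

    Each reported value (e.g. 6.5 cm for the straight path) pools the per-sample deviations between the follower's actual and commanded trajectory.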

  17. A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Louis A; Mason, John J.

    We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled, with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution, and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for the baseline (iterative) and proposed approaches is given in tables.
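
    The weighted least-squares step described here — measurement rows plus a soft altitude row, each weighted by its own noise level — has the standard normal-equations form. A toy numerical sketch (the matrix, values, and 2-unknown geometry are hypothetical, not the paper's 4-unknown TOA setup):

```python
import numpy as np

# Four "measurement" rows for two unknowns; each row carries its own noise
# level, combined in the weighted least-squares sense, the way TOA rows and
# the soft altitude constraint are combined.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
b = np.array([1.0, 2.0, 3.1, -0.9])
sigma = np.array([0.1, 0.1, 0.2, 0.2])      # per-row noise levels

W = np.diag(1.0 / sigma ** 2)               # weight = inverse variance
x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```

    Down-weighting the last two rows (larger sigma) keeps their inconsistent values from pulling the solution far from the precisely measured components.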

  18. Error propagation of partial least squares for parameters optimization in NIR modeling.

    PubMed

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-05

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameters optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables, and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was presented in terms of both type I and type II error. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS), and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied to 65%, 55%, and 15%, respectively, compared with 5% for synergy interval partial least squares (SiPLS). The results demonstrated how, and to what extent, the different modeling parameters affect the error propagation of PLS for parameters optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a rigorous process of developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters for other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.

  19. Error propagation of partial least squares for parameters optimization in NIR modeling

    NASA Astrophysics Data System (ADS)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-01

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameters optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables, and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters, and the error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was presented in terms of both type I and type II error. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS), and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied to 65%, 55%, and 15%, respectively, compared with 5% for synergy interval partial least squares (SiPLS). The results demonstrated how, and to what extent, the different modeling parameters affect the error propagation of PLS for parameters optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a rigorous process of developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters for other multivariate calibration models.

  20. Design and Test of a Soil Profile Moisture Sensor Based on Sensitive Soil Layers

    PubMed Central

    Liu, Cheng; Qian, Hongzhou; Cao, Weixing; Ni, Jun

    2018-01-01

    To meet the demand of intelligent irrigation for accurate moisture sensing in the soil vertical profile, a soil profile moisture sensor was designed based on the principle of high-frequency capacitance. The sensor consists of five groups of sensing probes, a data processor, and some accessory components. Low-resistivity copper rings were used as components of the sensing probes. Composable simulation of the sensor’s sensing probes was carried out using a high-frequency structure simulator. According to the effective radiation range of the electric field intensity, the width and spacing of the copper rings were set to 30 mm and 40 mm, respectively. A parallel resonance circuit combining a voltage-controlled oscillator and a high-frequency inductance-capacitance (LC) network was designed for signal frequency division and conditioning. A data processor was used to process the moisture-related frequency signals for soil profile moisture sensing. The sensor was able to detect real-time soil moisture at depths of 20, 30, and 50 cm and conduct online inversion of moisture in the soil layer between 0–100 cm. According to the calibration results, the degree of fitting (R²) between the sensor’s measuring frequency and the volumetric moisture content of the soil sample was 0.99, and the relative error of the sensor consistency test was 0–1.17%. Field tests in different loam soils showed that measured soil moisture from our sensor reproduced the observed soil moisture dynamics well, with an R² of 0.96 and a root mean square error of 0.04. In a sensor accuracy test, the R² between the measured value of the proposed sensor and that of the Diviner2000 portable soil moisture monitoring system was higher than 0.85, with a relative error smaller than 5%. The R² between measured and inversed soil moisture values for the other soil layers was consistently higher than 0.8. According to the calibration and field tests, this sensor, which features low cost, good operability, and high integration, is qualified for precise agricultural irrigation with stable performance and high accuracy. PMID:29883420

  1. Design and Test of a Soil Profile Moisture Sensor Based on Sensitive Soil Layers.

    PubMed

    Gao, Zhenran; Zhu, Yan; Liu, Cheng; Qian, Hongzhou; Cao, Weixing; Ni, Jun

    2018-05-21

    To meet the demand of intelligent irrigation for accurate moisture sensing in the soil vertical profile, a soil profile moisture sensor was designed based on the principle of high-frequency capacitance. The sensor consists of five groups of sensing probes, a data processor, and some accessory components. Low-resistivity copper rings were used as components of the sensing probes. Composable simulation of the sensor’s sensing probes was carried out using a high-frequency structure simulator. According to the effective radiation range of electric field intensity, width and spacing of copper ring were set to 30 mm and 40 mm, respectively. A parallel resonance circuit of voltage-controlled oscillator and high-frequency inductance-capacitance (LC) was designed for signal frequency division and conditioning. A data processor was used to process moisture-related frequency signals for soil profile moisture sensing. The sensor was able to detect real-time soil moisture at the depths of 20, 30, and 50 cm and conduct online inversion of moisture in the soil layer between 0–100 cm. According to the calibration results, the degree of fitting (R²) between the sensor’s measuring frequency and the volumetric moisture content of soil sample was 0.99 and the relative error of the sensor consistency test was 0–1.17%. Field tests in different loam soils showed that measured soil moisture from our sensor reproduced the observed soil moisture dynamic well, with an R² of 0.96 and a root mean square error of 0.04. In a sensor accuracy test, the R² between the measured value of the proposed sensor and that of the Diviner2000 portable soil moisture monitoring system was higher than 0.85, with a relative error smaller than 5%. The R² between measured values and inversed soil moisture values for other soil layers were consistently higher than 0.8. According to the calibration and field tests, this sensor, which features low cost, good operability, and high integration, is qualified for precise agricultural irrigation with stable performance and high accuracy.
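
    The sensing principle rests on the resonant frequency of the LC tank shifting as soil moisture changes the probe capacitance. The governing relation, with illustrative (not the sensor's actual) component values:

```python
import math

def lc_resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency f = 1 / (2*pi*sqrt(L*C)) of a parallel LC tank.

    Moisture raises the soil's permittivity, which raises the probe
    capacitance C and therefore lowers the measured frequency f.
    """
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))
```

    Calibration then amounts to fitting the measured-frequency-to-volumetric-moisture relation, which is where the reported R² of 0.99 comes from.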

  2. Quantitative analysis of the radiation error for aerial coiled-fiber-optic distributed temperature sensing deployments using reinforcing fabric as support structure

    NASA Astrophysics Data System (ADS)

    Sigmund, Armin; Pfister, Lena; Sayde, Chadi; Thomas, Christoph K.

    2017-06-01

    In recent years, the spatial resolution of fiber-optic distributed temperature sensing (DTS) has been enhanced in various studies by helically coiling the fiber around a support structure. While solid polyvinyl chloride tubes are an appropriate support structure under water, they can produce considerable errors in aerial deployments due to the radiative heating or cooling. We used meshed reinforcing fabric as a novel support structure to measure high-resolution vertical temperature profiles with a height of several meters above a meadow and within and above a small lake. This study aimed at quantifying the radiation error for the coiled DTS system and the contribution caused by the novel support structure via heat conduction. A quantitative and comprehensive energy balance model is proposed and tested, which includes the shortwave radiative, longwave radiative, convective, and conductive heat transfers and allows for modeling fiber temperatures as well as quantifying the radiation error. The sensitivity of the energy balance model to the conduction error caused by the reinforcing fabric is discussed in terms of its albedo, emissivity, and thermal conductivity. Modeled radiation errors amounted to -1.0 and 1.3 K at 2 m height but ranged up to 2.8 K for very high incoming shortwave radiation (1000 J s⁻¹ m⁻²) and very weak winds (0.1 m s⁻¹). After correcting for the radiation error by means of the presented energy balance, the root mean square error between DTS and reference air temperatures from an aspirated resistance thermometer or an ultrasonic anemometer was 0.42 and 0.26 K above the meadow and the lake, respectively. Conduction between reinforcing fabric and fiber cable had a small effect on fiber temperatures (< 0.18 K). Only for locations where the plastic rings that supported the reinforcing fabric touched the fiber-optic cable were significant temperature artifacts of up to 2.5 K observed.
Overall, the reinforcing fabric offers several advantages over conventional support structures published to date in the literature as it minimizes both radiation and conduction errors.

  3. Single image non-uniformity correction using compressive sensing

    NASA Astrophysics Data System (ADS)

    Jian, Xian-zhong; Lu, Rui-zhi; Guo, Qiang; Wang, Gui-pu

    2016-05-01

    A non-uniformity correction (NUC) method for an infrared focal plane array imaging system was proposed. The algorithm, based on compressive sensing (CS) of a single image, overcame the disadvantages of "ghost artifacts" and the heavy computational cost of traditional NUC algorithms. A point-sampling matrix was designed to validate the CS measurements in the time domain. The measurements were corrected using the midway infrared equalization algorithm, and the missing pixels were recovered with the regularized orthogonal matching pursuit algorithm. Experimental results showed that the proposed method can reconstruct the entire image from only 25% of the pixels. A small difference was found between the correction results using 100% of the pixels and the reconstruction results using 40% of the pixels. Evaluation of the proposed method on the basis of the root-mean-square error, peak signal-to-noise ratio, and roughness index (ρ) proved the method to be robust and highly applicable.

  4. Position error compensation via a variable reluctance sensor applied to a Hybrid Vehicle Electric machine.

    PubMed

    Bucak, Ihsan Ömür

    2010-01-01

    In the automotive industry, electromagnetic variable reluctance (VR) sensors have been extensively used to measure engine position and speed through a toothed wheel mounted on the crankshaft. In this work, an application that already uses the VR sensing unit for the engine and/or transmission has been chosen to infer, this time, the position of the electric machine indirectly in a parallel Hybrid Electric Vehicle (HEV) system. A VR sensor has been chosen to correct the position of the electric machine, mainly because position accuracy may still become critical in the operation of HEVs to avoid possible vehicle failures during start-up and on the road, especially when the machine is used with an internal combustion engine. The proposed method uses a Chi-square test and is adaptive in the sense that it derives the compensation factors during shaft operation and updates them in a timely fashion.

  5. Position Error Compensation via a Variable Reluctance Sensor Applied to a Hybrid Vehicle Electric Machine

    PubMed Central

    Bucak, İhsan Ömür

    2010-01-01

    In the automotive industry, electromagnetic variable reluctance (VR) sensors have been extensively used to measure engine position and speed through a toothed wheel mounted on the crankshaft. In this work, an application that already uses the VR sensing unit for the engine and/or transmission has been chosen to infer, this time, the position of the electric machine indirectly in a parallel Hybrid Electric Vehicle (HEV) system. A VR sensor has been chosen to correct the position of the electric machine, mainly because position accuracy may still become critical in the operation of HEVs to avoid possible vehicle failures during start-up and on the road, especially when the machine is used with an internal combustion engine. The proposed method uses a Chi-square test and is adaptive in the sense that it derives the compensation factors during shaft operation and updates them in a timely fashion. PMID:22294906
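
    The Chi-square test the method relies on reduces to the standard Pearson statistic over observed versus expected counts (here, a generic sketch; the binning of tooth-interval measurements is hypothetical, not taken from the paper):

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic: sum of (O - E)^2 / E over the bins.

    A large value signals that the measured distribution deviates from the
    expected pattern, triggering an update of the compensation factors.
    """
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

    The statistic would be compared against a chi-square critical value for the chosen significance level and degrees of freedom.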

  6. Compressed-sensing wavenumber-scanning interferometry

    NASA Astrophysics Data System (ADS)

    Bai, Yulei; Zhou, Yanzhou; He, Zhaoshui; Ye, Shuangli; Dong, Bo; Xie, Shengli

    2018-01-01

    The Fourier transform (FT), the nonlinear least-squares algorithm (NLSA), and eigenvalue decomposition algorithm (EDA) are used to evaluate the phase field in depth-resolved wavenumber-scanning interferometry (DRWSI). However, because the wavenumber series of the laser's output is usually accompanied by nonlinearity and mode-hop, FT, NLSA, and EDA, which are only suitable for equidistant interference data, often lead to non-negligible phase errors. In this work, a compressed-sensing method for DRWSI (CS-DRWSI) is proposed to resolve this problem. By using the randomly spaced inverse Fourier matrix and solving the underdetermined equation in the wavenumber domain, CS-DRWSI determines the nonuniform sampling and spectral leakage of the interference spectrum. Furthermore, it can evaluate interference data without prior knowledge of the object. The experimental results show that CS-DRWSI improves the depth resolution and suppresses sidelobes. It can replace the FT as a standard algorithm for DRWSI.

  7. Bathymetric mapping of shallow water surrounding Dongsha Island using QuickBird image

    NASA Astrophysics Data System (ADS)

    Li, Dongling; Zhang, Huaguo; Lou, Xiulin

    2018-03-01

    This article presents an experiment on water depth inversion using the band ratio method in the shallow water surrounding Dongsha Island. The remote sensing data are from the QuickBird satellite, acquired on April 19, 2004. The bathymetry result shows extensive agreement with the charted depths. 129 points from the chart depth data were chosen to evaluate the accuracy of the inversion depth. The results show that when the water depth is less than 20 m, the inversion depth accords with the chart, whereas when the water depth is more than 20 m, the inversion depth remains in the 15–25 m range. Therefore, the remote sensing method is only effective for depth inversion down to 20 m in the shallow water around Dongsha Island, not in the deeper water area. A total of 109 depth points shallower than 20 m were used to evaluate the accuracy; the root mean square error is 2.2 m.
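
    Band-ratio depth inversion of this kind typically regresses charted depth against the log of a blue/green band ratio (the Stumpf-style ratio transform). A minimal fit sketch with synthetic values rather than the QuickBird data (the two-band linear model is an assumption about the method's general form):

```python
import math

def fit_band_ratio_depth(blue, green, depths):
    """Fit depth = m0 + m1 * ln(blue/green) by ordinary least squares."""
    x = [math.log(b / g) for b, g in zip(blue, green)]
    n = len(x)
    x_mean, d_mean = sum(x) / n, sum(depths) / n
    m1 = sum((xi - x_mean) * (di - d_mean) for xi, di in zip(x, depths)) / \
         sum((xi - x_mean) ** 2 for xi in x)
    m0 = d_mean - m1 * x_mean
    return m0, m1
```

    Once m0 and m1 are calibrated against chart points, the same transform applied per pixel yields the bathymetry map, and held-out chart points give the RMSE.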

  8. Temperature-Dependence of Air-Broadened Line Widths and Shifts in the nu3 Band of Ozone

    NASA Technical Reports Server (NTRS)

    Smith, Mary A. H.; Rinsland, Curtis P.; Devi, V. Malathy; Benner, D. Chris; Cox, A. M.

    2006-01-01

    The 9.6-micron bands of O3 are used by many remote-sensing experiments for retrievals of terrestrial atmospheric ozone concentration profiles. Line parameter errors can contribute significantly to the total errors in these retrievals, particularly for nadir viewing. The McMath-Pierce Fourier transform spectrometer at the National Solar Observatory on Kitt Peak was used to record numerous high-resolution infrared absorption spectra of O3 broadened by various gases at temperatures between 160 and 300 K. Over 30 spectra were analyzed simultaneously using a multispectrum nonlinear least squares fitting technique to determine Lorentz air-broadening and pressure-induced shift coefficients, along with their temperature dependences, for selected transitions in the nu3 fundamental band of (16)O3. We compare the present results with other measurements reported in the literature and with the ozone parameters in the 2000 and 2004 editions of the HITRAN database.

  9. Remote estimation of colored dissolved organic matter and chlorophyll-a in Lake Huron using Sentinel-2 measurements

    NASA Astrophysics Data System (ADS)

    Chen, Jiang; Zhu, Weining; Tian, Yong Q.; Yu, Qian; Zheng, Yuhan; Huang, Litong

    2017-07-01

    Colored dissolved organic matter (CDOM) and chlorophyll-a (Chla) are important water quality parameters and play crucial roles in the aquatic environment. Remote sensing of CDOM and Chla concentrations for inland lakes is often limited by low spatial resolution. The newly launched Sentinel-2 satellite is equipped with high spatial resolution (10, 20, and 60 m). Empirical band ratio models were developed to derive CDOM and Chla concentrations in Lake Huron. The leave-one-out cross-validation method was used for model calibration and validation. The best CDOM retrieval algorithm is a B3/B5 model with coefficient of determination (R²) = 0.884, root-mean-squared error (RMSE) = 0.731 m⁻¹, relative root-mean-squared error (RRMSE) = 28.02%, and bias = -0.1 m⁻¹. The best Chla retrieval algorithm is a B5/B4 model with R² = 0.49, RMSE = 9.972 mg/m³, RRMSE = 48.47%, and bias = -0.116 mg/m³. Neural network models were further implemented to improve inversion accuracy. The applications of the two best band ratio models to Sentinel-2 imagery with 10 m × 10 m pixel size demonstrated the high potential of the sensor for monitoring the water quality of inland lakes.
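
    The leave-one-out cross-validation used for calibration here generalizes to any fit/predict pair: each sample is held out once, predicted from a model trained on the rest, and the held-out errors are pooled into an RMSE. A model-agnostic sketch (the fit/predict interface is an illustrative assumption):

```python
import math

def loocv_rmse(xs, ys, fit, predict):
    """Leave-one-out CV: hold each sample out, refit, pool the errors."""
    errors = []
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        model = fit(train_x, train_y)
        errors.append(predict(model, xs[i]) - ys[i])
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

    With a trivial "predict the training mean" model, for instance, `fit = lambda xs, ys: sum(ys) / len(ys)` and `predict = lambda m, x: m`; a band-ratio regression would slot into the same interface.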

  10. On a stronger-than-best property for best prediction

    NASA Astrophysics Data System (ADS)

    Teunissen, P. J. G.

    2008-03-01

    The minimum mean squared error (MMSE) criterion is a popular criterion for devising best predictors. In the case of linear predictors, it has the advantage that no further distributional assumptions need to be made, other than about the first- and second-order moments. In the spatial and Earth sciences, it is the best linear unbiased predictor (BLUP) that is used most often. Despite the fact that in this case only the first- and second-order moments need to be known, one often still makes statements about the complete distribution, in particular when statistical testing is involved. For such cases, one can do better than the BLUP, as shown in Teunissen (J Geod. doi: 10.1007/s00190-007-0140-6, 2006), and thus devise predictors that have a smaller MMSE than the BLUP. Hence, these predictors are to be preferred over the BLUP, if one really values the MMSE criterion. In the present contribution, we will show, however, that the BLUP has another optimality property than the MMSE property, provided that the distribution is Gaussian. It will be shown that in the Gaussian case, the prediction error of the BLUP has the highest possible probability of all linear unbiased predictors of being bounded in the weighted squared norm sense. This is a stronger property than the often advertised MMSE property of the BLUP.
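
    For reference, in the simplest case of known means the best linear predictor has the familiar closed form below (standard notation, not the paper's; the BLUP replaces the known means with their generalized least-squares estimates):

```latex
% Best linear predictor of y_0 from observables x (known means assumed)
\hat{y}_0 = \mu_y + \Sigma_{yx}\,\Sigma_{xx}^{-1}\,(x - \mu_x),
\qquad
\mathrm{MSE}(\hat{y}_0) = \Sigma_{yy} - \Sigma_{yx}\,\Sigma_{xx}^{-1}\,\Sigma_{xy}.
```

    Only the first- and second-order moments ($\mu$, $\Sigma$) enter, which is the distribution-free advantage the abstract refers to.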

  11. Discordance between net analyte signal theory and practical multivariate calibration.

    PubMed

    Brown, Christopher D

    2004-08-01

    Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.

  12. Crop area estimation based on remotely-sensed data with an accurate but costly subsample

    NASA Technical Reports Server (NTRS)

    Gunst, R. F.

    1985-01-01

    Research activities conducted under the auspices of National Aeronautics and Space Administration Cooperative Agreement NCC 9-9 are discussed. During this contract period, research efforts were concentrated in two primary areas. The first area is an investigation of the use of measurement error models as alternatives to least squares regression estimators of crop production or timber biomass. The second primary area of investigation is the estimation of the mixing proportion of two-component mixture models. This report lists publications, technical reports, submitted manuscripts, and oral presentations generated by these research efforts. Possible areas of future research are mentioned.

  13. Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visible light communications

    NASA Astrophysics Data System (ADS)

    Qian, Xuewen; Deng, Honggui; He, Hailang

    2017-10-01

    Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and heavily deteriorate communication performance. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visual light communication (VLC) systems based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.

  14. Neuro-evolutionary computing paradigm for Painlevé equation-II in nonlinear optics

    NASA Astrophysics Data System (ADS)

    Ahmad, Iftikhar; Ahmad, Sufyan; Awais, Muhammad; Ul Islam Ahmad, Siraj; Asif Zahoor Raja, Muhammad

    2018-05-01

    The aim of this study is to investigate the numerical treatment of the Painlevé equation-II arising in physical models of nonlinear optics through artificial intelligence procedures, incorporating a single-layer neural network structure optimized with genetic algorithms, sequential quadratic programming and active set techniques. We constructed a mathematical model for the nonlinear Painlevé equation-II with the help of neural networks by defining an error-based cost function in the mean-square sense. The performance of the proposed technique is validated through statistical analyses by means of the one-way ANOVA test conducted on a dataset generated by a large number of independent runs.

  15. Accelerated 1 H MRSI using randomly undersampled spiral-based k-space trajectories.

    PubMed

    Chatnuntawech, Itthi; Gagoski, Borjan; Bilgic, Berkin; Cauley, Stephen F; Setsompop, Kawin; Adalsteinsson, Elfar

    2014-07-30

    To develop and evaluate the performance of an acquisition and reconstruction method for accelerated MR spectroscopic imaging (MRSI) through undersampling of spiral trajectories. A randomly undersampled spiral acquisition and sensitivity encoding (SENSE) with total variation (TV) regularization, random SENSE+TV, is developed and evaluated on single-slice numerical phantom, in vivo single-slice MRSI, and in vivo three-dimensional (3D)-MRSI at 3 Tesla. Random SENSE+TV was compared with five alternative methods for accelerated MRSI. For the in vivo single-slice MRSI, random SENSE+TV yields up to 2.7 and 2 times reduction in root-mean-square error (RMSE) of reconstructed N-acetyl aspartate (NAA), creatine, and choline maps, compared with the denoised fully sampled and uniformly undersampled SENSE+TV methods with the same acquisition time, respectively. For the in vivo 3D-MRSI, random SENSE+TV yields up to 1.6 times reduction in RMSE, compared with uniform SENSE+TV. Furthermore, by using random SENSE+TV, we have demonstrated on the in vivo single-slice and 3D-MRSI that acceleration factors of 4.5 and 4 are achievable with the same quality as the fully sampled data, as measured by RMSE of reconstructed NAA map, respectively. With the same scan time, random SENSE+TV yields lower RMSEs of metabolite maps than other methods evaluated. Random SENSE+TV achieves up to 4.5-fold acceleration with comparable data quality as the fully sampled acquisition. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.

  16. [INVITED] Luminescent QR codes for smart labelling and sensing

    NASA Astrophysics Data System (ADS)

    Ramalho, João F. C. B.; António, L. C. F.; Correia, S. F. H.; Fu, L. S.; Pinho, A. S.; Brites, C. D. S.; Carlos, L. D.; André, P. S.; Ferreira, R. A. S.

    2018-05-01

    QR (Quick Response) codes are two-dimensional barcodes composed of special geometric patterns of black modules in a white square background that can encode different types of information with high density and robustness, correcting errors and tolerating physical damage, thus keeping the stored information protected. Recently, these codes have gained increased attention as they offer a simple physical tool for quick access to Web sites for advertising and social interaction. Challenges include increasing the storage capacity limit, even though QR codes can already store approximately 350 times more information than common barcodes and encode different types of characters (e.g., numeric, alphanumeric, kanji and kana). In this work, we fabricate luminescent QR codes based on a poly(methyl methacrylate) substrate coated with organic-inorganic hybrid materials doped with trivalent terbium (Tb3+) and europium (Eu3+) ions, demonstrating a twofold increase of storage capacity per unit area through colour multiplexing, compared to conventional QR codes. A novel methodology to decode the multiplexed QR codes is developed based on a colour separation threshold, where a decision level is calculated through a maximum-likelihood criterion to minimize the error probability of the demultiplexed modules, maximizing the foreseen total storage capacity. Moreover, the thermal dependence of the emission colour coordinates of the Eu3+/Tb3+-based hybrids enables simultaneous QR code colour multiplexing and temperature sensing (reproducibility higher than 93%), opening new fields of application for QR codes as smart labels for sensing.
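
    The decision-level idea described above (a threshold minimizing the error probability of the demultiplexed modules) can be sketched for one colour channel modelled, hypothetically, as two Gaussian intensity distributions with equal priors; the function names and the Gaussian assumption are illustrative, not taken from the paper:

```python
import numpy as np

def ml_threshold(mu0, sigma0, mu1, sigma1):
    """Decision level where the two class likelihoods intersect, which
    minimises the error probability for equal priors (hypothetical
    Gaussian model of the two colour-channel intensities)."""
    if np.isclose(sigma0, sigma1):
        return 0.5 * (mu0 + mu1)
    # Solve N(t; mu0, s0) == N(t; mu1, s1): a quadratic in t.
    a = 1.0 / (2 * sigma0**2) - 1.0 / (2 * sigma1**2)
    b = mu1 / sigma1**2 - mu0 / sigma0**2
    c = (mu0**2 / (2 * sigma0**2) - mu1**2 / (2 * sigma1**2)
         + np.log(sigma0 / sigma1))
    lo, hi = sorted((mu0, mu1))
    # Keep the root lying between the two class means.
    return next(r.real for r in np.roots([a, b, c]) if lo <= r.real <= hi)

def demultiplex(intensities, threshold):
    """Assign each module to class 0 or 1 by the decision level."""
    return (np.asarray(intensities) > threshold).astype(int)
```

    For equal class variances the decision level reduces to the midpoint of the two means, the familiar minimum-distance classifier.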

  17. Principal components of wrist circumduction from electromagnetic surgical tracking.

    PubMed

    Rasquinha, Brian J; Rainbow, Michael J; Zec, Michelle L; Pichora, David R; Ellis, Randy E

    2017-02-01

    An electromagnetic (EM) surgical tracking system was used for a functionally calibrated kinematic analysis of wrist motion. Circumduction motions were tested for differences in subject gender and for differences in the sense of the circumduction as clockwise or counter-clockwise motion. Twenty subjects were instrumented for EM tracking. Flexion-extension motion was used to identify the functional axis. Subjects performed unconstrained wrist circumduction in a clockwise and counter-clockwise sense. Data were decomposed into orthogonal flexion-extension motions and radial-ulnar deviation motions. PCA was used to concisely represent motions. Nonparametric Wilcoxon tests were used to distinguish the groups. Flexion-extension motions were projected onto a direction axis with a root-mean-square error of [Formula: see text]. Using the first three principal components, there was no statistically significant difference in gender (all [Formula: see text]). For motion sense, radial-ulnar deviation distinguished the sense of circumduction in the first principal component ([Formula: see text]) and in the third principal component ([Formula: see text]); flexion-extension distinguished the sense in the second principal component ([Formula: see text]). The clockwise sense of circumduction could be distinguished by a multifactorial combination of components; there were no gender differences in this small population. These data constitute a baseline for normal wrist circumduction. The multifactorial PCA findings suggest that a higher-dimensional method, such as manifold analysis, may be a more concise way of representing circumduction in human joints.

  18. Assimilation of remote sensing observations into a sediment transport model of China's largest freshwater lake: spatial and temporal effects.

    PubMed

    Zhang, Peng; Chen, Xiaoling; Lu, Jianzhong; Zhang, Wei

    2015-12-01

    Numerical models are important tools that are used in studies of sediment dynamics in inland and coastal waters, and these models can now benefit from the use of integrated remote sensing observations. This study explores a scheme for assimilating remotely sensed suspended sediment (from charge-coupled device (CCD) images obtained from the Huanjing (HJ) satellite) into a two-dimensional sediment transport model of Poyang Lake, the largest freshwater lake in China. Optimal interpolation is used as the assimilation method, and model predictions are obtained by combining four remote sensing images. The parameters for optimal interpolation are determined through a series of assimilation experiments evaluating the sediment predictions based on field measurements. The model with assimilation of remotely sensed sediment reduces the root-mean-square error of the predicted sediment concentrations by 39.4% relative to the model without assimilation, demonstrating the effectiveness of the assimilation scheme. The spatial effect of assimilation is explored by comparing model predictions with remotely sensed sediment, revealing that the model with assimilation generates reasonable spatial distribution patterns of suspended sediment. The temporal effect of assimilation on the model's predictive capabilities varies spatially, with an average temporal effect of approximately 10.8 days. The current velocities which dominate the rate and direction of sediment transport most likely result in spatial differences in the temporal effect of assimilation on model predictions.
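
    The optimal-interpolation analysis step used for the assimilation can be written generically; this is the standard textbook formulation with illustrative matrices, not the Poyang Lake model's actual configuration:

```python
import numpy as np

def optimal_interpolation(xb, B, y, R, H):
    """Optimal-interpolation (OI) analysis step.
    xb: background state; B: background error covariance;
    y: observations; R: observation error covariance;
    H: linear observation operator."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
    xa = xb + K @ (y - H @ xb)                    # analysis state
    A = (np.eye(len(xb)) - K @ H) @ B             # analysis error covariance
    return xa, A
```

    The gain weights the innovation (observation minus background) by the relative confidence in the background and the observations, which is what lets remotely sensed sediment reduce the model's RMSE.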

  19. Gait Phase Estimation Based on Noncontact Capacitive Sensing and Adaptive Oscillators.

    PubMed

    Zheng, Enhao; Manca, Silvia; Yan, Tingfang; Parri, Andrea; Vitiello, Nicola; Wang, Qining

    2017-10-01

    This paper presents a novel strategy aiming to acquire an accurate and walking-speed-adaptive estimation of the gait phase through noncontact capacitive sensing and adaptive oscillators (AOs). The capacitive sensing system is designed with two sensing cuffs that measure leg muscle shape changes during walking. The system can be worn over clothing, freeing the skin from contact with electrodes. In order to track the capacitance signals, the gait phase estimator is designed on an AO dynamic system, owing to its ability to synchronize with quasi-periodic signals. After implementing the whole system, we first evaluated offline estimation performance in experiments with 12 healthy subjects walking on a treadmill at changing speeds. The strategy achieved an accurate and consistent gait phase estimation with only one channel of capacitance signal. The average root-mean-square errors in one stride were 0.19 rad (3.0% of one gait cycle) for constant walking speeds and 0.31 rad (4.9% of one gait cycle) for speed transitions, even after the subjects rewore the sensing cuffs. We then validated our strategy in a real-time gait phase estimation task with three subjects walking at changing speeds. Our study indicates that the strategy based on capacitive sensing and AOs is a promising alternative for the control of exoskeletons/orthoses.

  20. [Retrieval of crown closure of moso bamboo forest using unmanned aerial vehicle (UAV) remotely sensed imagery based on geometric-optical model].

    PubMed

    Wang, Cong; Du, Hua-qiang; Zhou, Guo-mo; Xu, Xiao-jun; Sun, Shao-bo; Gao, Guo-long

    2015-05-01

    This research focused on the application of remotely sensed imagery from an unmanned aerial vehicle (UAV) with high spatial resolution for the estimation of crown closure of moso bamboo forest based on the geometric-optical model, and analyzed the influence of unconstrained and fully constrained linear spectral mixture analysis (SMA) on the accuracy of the estimated results. The results demonstrated that the combination of UAV remotely sensed imagery and the geometric-optical model could, to some degree, achieve the estimation of crown closure. However, the different SMA methods led to significant differences in estimation accuracy. Compared with unconstrained SMA, the fully constrained linear SMA method resulted in higher accuracy of the estimated values, with a coefficient of determination (R2) of 0.63 (significant at the 0.01 level) against the measured values acquired during the field survey. The root mean square error (RMSE) of approximately 0.04 was low, indicating that the use of fully constrained linear SMA could bring about better results in crown closure estimation, closer to the actual condition of the moso bamboo forest.
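
    For the special case of two endmembers, fully constrained (sum-to-one, non-negative) linear SMA has a closed form: project the pixel onto the line between the endmembers and clip the fraction to [0, 1]. This is a simplified sketch of the idea, not the multi-endmember solver used in the study:

```python
import numpy as np

def fcls_two_endmembers(pixel, e1, e2):
    """Fully constrained linear unmixing for exactly two endmembers.
    Returns fractions (f1, f2) with f1 + f2 = 1 and both in [0, 1]."""
    d = e1 - e2
    f = float(np.dot(pixel - e2, d) / np.dot(d, d))  # sum-to-one built in
    f = min(max(f, 0.0), 1.0)                        # non-negativity by clipping
    return f, 1.0 - f
```

    The clipping is what distinguishes the fully constrained result from unconstrained SMA, which can return fractions outside [0, 1] for noisy pixels.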

  1. Exception handling for sensor fusion

    NASA Astrophysics Data System (ADS)

    Chavez, G. T.; Murphy, Robin R.

    1993-08-01

    This paper presents a control scheme for handling sensing failures (sensor malfunctions, significant degradations in performance due to changes in the environment, and errant expectations) in sensor fusion for autonomous mobile robots. The advantages of the exception handling mechanism are that it emphasizes a fast response to sensing failures, is able to use only a partial causal model of sensing failure, and leads to a graceful degradation of sensing if the sensing failure cannot be compensated for. The exception handling mechanism consists of two modules: error classification and error recovery. The error classification module in the exception handler attempts to classify the type and source(s) of the error using a modified generate-and-test procedure. If the source of the error is isolated, the error recovery module examines its cache of recovery schemes, which either repair or replace the current sensing configuration. If the failure is due to an error in expectation or cannot be identified, the planner is alerted. Experiments using actual sensor data collected by the CSM Mobile Robotics/Machine Perception Laboratory's Denning mobile robot demonstrate the operation of the exception handling mechanism.

  2. Adaptive elimination of optical fiber transmission noise in fiber ocean bottom seismic system

    NASA Astrophysics Data System (ADS)

    Zhong, Qiuwen; Hu, Zhengliang; Cao, Chunyan; Dong, Hongsheng

    2017-10-01

    In this paper, a pressure- and acceleration-insensitive reference interferometer is used to capture the laser and common noise introduced by the transmission fiber and the laser. Using direct subtraction and adaptive filtering, this paper attempts to estimate and eliminate the transmission noise of the sensing probe. Four methods are compared: direct subtraction (DS), least-mean-square (LMS) adaptive cancellation, normalized least-mean-square (NLMS) adaptive cancellation, and recursive least squares (RLS) adaptive filtering. The experimental results show that the noise reduction of RLS and NLMS is almost the same, better than LMS and DS, and can reach 8 dB (@100 Hz). However, considering the computational load, RLS is not well suited to a real-time operating system; for the same performance, NLMS is more practical than RLS. The noise reduction of LMS is slightly worse than that of RLS and NLMS, about 6 dB (@100 Hz), but its computational complexity is small, which benefits real-time implementation. The DS method has the least computational complexity, but its noise suppression is worse than that of the adaptive filters because of the difference in noise amplitude between the reference interferometer (RI) and the sensing interferometer (SI); only 4 dB (@100 Hz) can be reached. The adaptive filters essentially eliminate the influence of the transmission noise while keeping the sensor's signal intact.
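
    The NLMS cancellation scheme compared above can be sketched as a standard tap-delay adaptive filter: the reference-interferometer signal is filtered to match the correlated noise in the sensing channel and subtracted, so the error signal is the cleaned output. Tap count and step size here are illustrative, not the paper's settings:

```python
import numpy as np

def nlms_cancel(primary, reference, n_taps=8, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive noise canceller.
    primary: sensing-channel samples containing correlated noise;
    reference: reference-interferometer samples of that noise."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]  # tap-delay line
        e = primary[n] - w @ x                     # cancellation error = output
        w += mu * e * x / (x @ x + eps)            # power-normalized update
        out[n] = e
    return out
```

    The power normalization is what makes NLMS robust to the fluctuating amplitude of the reference signal, at essentially LMS cost, which matches the practicality argument made in the abstract.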

  3. Ultrasonic tracking of shear waves using a particle filter.

    PubMed

    Ingle, Atul N; Ma, Chi; Varghese, Tomy

    2015-11-01

    This paper discusses an application of particle filtering for estimating shear wave velocity in tissue using ultrasound elastography data. Shear wave velocity estimates are of significant clinical value as they help differentiate stiffer areas from softer areas, which is an indicator of potential pathology. Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model with the hidden states being the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing out the time-to-peak curve to obtain a fit that is optimal in a minimum mean squared error sense. Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance as compared to standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard least-squares fit. Estimates of area obtained from the particle filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. The particle filtering approach can be used for producing visually appealing shear wave velocity (SWV) reconstructions by effectively delineating various areas of the phantom with good image quality properties comparable to existing techniques.
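
    The hidden-Markov-model denoising idea above can be sketched as a bootstrap particle filter for a 1-D curve: a random-walk hidden state with Gaussian observation noise, estimated in the minimum mean squared error sense by the weighted particle mean. The process and observation noise levels are illustrative, not the paper's tuned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, q=0.05, r=0.5):
    """Bootstrap particle filter smoothing a noisy 1-D time-to-peak curve.
    Hidden state: random walk with std q; observation noise: std r."""
    particles = observations[0] + rng.normal(0.0, r, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, q, n_particles)  # propagate
        weights *= np.exp(-0.5 * ((z - particles) / r) ** 2)     # likelihood
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))            # MMSE estimate
        # resample when the effective sample size drops too low
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)
```

    Unlike a linear smoother, the particle representation keeps multimodal hypotheses alive across a few samples, which is how this family of filters can preserve sharp boundaries while still averaging out noise.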

  4. Chemical library subset selection algorithms: a unified derivation using spatial statistics.

    PubMed

    Hamprecht, Fred A; Thiel, Walter; van Gunsteren, Wilfred F

    2002-01-01

    If similar compounds have similar activity, rational subset selection becomes superior to random selection in screening for pharmacological lead discovery programs. Traditional approaches to this experimental design problem fall into two classes: (i) a linear or quadratic response function is assumed; (ii) some space-filling criterion is optimized. The assumptions underlying the first approach are clear but not always defensible; the second approach yields more intuitive designs but lacks a clear theoretical foundation. We model activity in a bioassay as a realization of a stochastic process and use the best linear unbiased estimator to construct spatial sampling designs that optimize the integrated mean square prediction error, the maximum mean square prediction error, or the entropy. We argue that our approach constitutes a unifying framework encompassing most proposed techniques as limiting cases and sheds light on their underlying assumptions. In particular, vector quantization is obtained, in dimensions up to eight, in the limiting case of very smooth response surfaces for the integrated mean square error criterion. Closest packing is obtained for very rough surfaces under the integrated mean square error and entropy criteria. We suggest using either the integrated mean square prediction error or the entropy as optimization criteria rather than approximations thereof, and propose a scheme for direct iterative minimization of the integrated mean square prediction error. Finally, we discuss how the quality of chemical descriptors manifests itself and clarify the assumptions underlying the selection of diverse or representative subsets.
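
    The integrated mean square prediction error criterion can be sketched with simple kriging (the best linear unbiased predictor) in one dimension; the squared-exponential covariance and its length scale are illustrative assumptions, not the paper's model:

```python
import numpy as np

def kernel(a, b, length=0.3):
    """Squared-exponential covariance between point sets (assumed model)."""
    d = np.abs(np.asarray(a)[:, None] - np.asarray(b)[None, :])
    return np.exp(-(d / length) ** 2)

def imse(design, grid, noise=1e-6):
    """Integrated mean-square prediction error of simple kriging for a
    candidate design, averaged over a dense grid of descriptor values."""
    K = kernel(design, design) + noise * np.eye(len(design))
    k = kernel(design, grid)
    # pointwise kriging variance: C(0) - k^T K^{-1} k
    var = 1.0 - np.einsum('ij,ij->j', k, np.linalg.solve(K, k))
    return float(var.mean())
```

    Comparing designs by this criterion reproduces the intuition of space filling: a design spread over the grid leaves a lower integrated prediction variance than one with all points clustered in a corner.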

  5. Pan Sharpening Quality Investigation of Turkish In-Operation Remote Sensing Satellites: Applications with Rasat and GÖKTÜRK-2 Images

    NASA Astrophysics Data System (ADS)

    Ozendi, Mustafa; Topan, Hüseyin; Cam, Ali; Bayık, Çağlar

    2016-10-01

    Recently, two optical remote sensing satellites, RASAT and GÖKTÜRK-2, were successfully launched by the Republic of Turkey. RASAT has a 7.5 m panchromatic band and 15 m visible bands, whereas GÖKTÜRK-2 has a 2.5 m panchromatic band and 5 m VNIR (Visible and Near Infrared) bands. These bands with various resolutions can be fused by pan-sharpening methods, an important application area of optical remote sensing imagery, merging the high geometric resolution of the panchromatic band with the high spectral resolution of the VNIR bands. There are many pan-sharpening methods in the literature; however, there is no standard framework for quality investigation of pan-sharpened imagery. The aim of this study is to investigate the pan-sharpening performance of RASAT and GÖKTÜRK-2 images. For this purpose, pan-sharpened images are first generated using the most popular pan-sharpening methods: IHS, Brovey and PCA. This is followed by quantitative evaluation of the pan-sharpened images using the Correlation Coefficient (CC), Root Mean Square Error (RMSE), Relative Average Spectral Error (RASE), Spectral Angle Mapper (SAM) and Erreur Relative Globale Adimensionnelle de Synthése (ERGAS) metrics. For generating the pan-sharpened images and computing the metrics, the SharpQ tool, developed in MATLAB, is used. According to the metrics, the PCA-derived pan-sharpened image is the most similar to the multispectral image for RASAT, and the Brovey-derived pan-sharpened image is the most similar for GÖKTÜRK-2. Finally, the pan-sharpened images are evaluated qualitatively in terms of object availability and completeness for various land covers (such as urban, forest and flat areas) by a group of operators experienced in remote sensing imagery.
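
    Two of the metrics used, RMSE and ERGAS, can be sketched directly from their standard definitions; the `ratio` argument is the pan-to-multispectral ground-sample-distance ratio (an input the caller must supply):

```python
import numpy as np

def rmse(ref, test):
    """Root-mean-square error between a reference and a test band."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    return float(np.sqrt(np.mean((ref - test) ** 2)))

def ergas(ref_bands, test_bands, ratio):
    """ERGAS = 100 * ratio * sqrt(mean over bands of (RMSE_b / mean_b)^2),
    with ratio = (pan GSD) / (multispectral GSD)."""
    terms = [(rmse(r, t) / np.asarray(r, dtype=float).mean()) ** 2
             for r, t in zip(ref_bands, test_bands)]
    return 100.0 * ratio * float(np.sqrt(np.mean(terms)))
```

    Lower values are better for both metrics; ERGAS additionally normalizes each band's error by its mean radiance so bright and dark bands contribute comparably.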

  6. Creation of a Digital Surface Model and Extraction of Coarse Woody Debris from Terrestrial Laser Scans in an Open Eucalypt Woodland

    NASA Astrophysics Data System (ADS)

    Muir, J.; Phinn, S. R.; Armston, J.; Scarth, P.; Eyre, T.

    2014-12-01

    Coarse woody debris (CWD) provides important habitat for many species and plays a vital role in nutrient cycling within an ecosystem. In addition, CWD makes an important contribution to forest biomass and fuel loads. Airborne or space based remote sensing instruments typically do not detect CWD beneath the forest canopy. Terrestrial laser scanning (TLS) provides a ground based method for three-dimensional (3-D) reconstruction of surface features and CWD. This research produced a 3-D reconstruction of the ground surface and automatically classified coarse woody debris from registered TLS scans. The outputs will be used to inform the development of a site-based index for the assessment of forest condition, and quantitative assessments of biomass and fuel loads. A survey grade terrestrial laser scanner (Riegl VZ400) was used to scan 13 positions, in an open eucalypt woodland site at Karawatha Forest Park, near Brisbane, Australia. Scans were registered, and a digital surface model (DSM) produced using an intensity threshold and an iterative morphological filter. The DSMs produced from single scans were compared to the registered multi-scan point cloud using standard error metrics including: Root Mean Squared Error (RMSE), Mean Squared Error (MSE), range, absolute error and signed error. In addition the DSM was compared to a Digital Elevation Model (DEM) produced from Airborne Laser Scanning (ALS). Coarse woody debris was subsequently classified from the DSM using laser pulse properties, including: width and amplitude, as well as point spatial relationships (e.g. nearest neighbour slope vectors). Validation of the coarse woody debris classification was completed using true-colour photographs co-registered to the TLS point cloud. The volume and length of the coarse woody debris was calculated from the classified point cloud. A representative network of TLS sites will allow for up-scaling to large area assessment using airborne or space based sensors to monitor forest condition, biomass and fuel loads.

  7. Noise-Enhanced Eversion Force Sense in Ankles With or Without Functional Instability.

    PubMed

    Ross, Scott E; Linens, Shelley W; Wright, Cynthia J; Arnold, Brent L

    2015-08-01

    Force sense impairments are associated with functional ankle instability. Stochastic resonance stimulation (SRS) may have implications for correcting these force sense deficits. To determine if SRS improved force sense. Case-control study. Research laboratory. Twelve people with functional ankle instability (age = 23 ± 3 years, height = 174 ± 8 cm, mass = 69 ± 10 kg) and 12 people with stable ankles (age = 22 ± 2 years, height = 170 ± 7 cm, mass = 64 ± 10 kg). The eversion force sense protocol required participants to reproduce a targeted muscle tension (10% of maximum voluntary isometric contraction). This protocol was assessed under SRSon and SRSoff (control) conditions. During SRSon, random subsensory mechanical noise was applied to the lower leg at a customized optimal intensity for each participant. Constant error, absolute error, and variable error measures quantified accuracy, overall performance, and consistency of force reproduction, respectively. With SRS, we observed main effects for force sense absolute error (SRSoff = 1.01 ± 0.67 N, SRSon = 0.69 ± 0.42 N) and variable error (SRSoff = 1.11 ± 0.64 N, SRSon = 0.78 ± 0.56 N) (P < .05). No other main effects or treatment-by-group interactions were found (P > .05). Although SRS reduced the overall magnitude (absolute error) and variability (variable error) of force sense errors, it had no effect on the directionality (constant error). Clinically, SRS may enhance the ability to reproduce muscle tension, which could have treatment implications for ankle stability.
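
    The three error measures reported (constant, absolute, and variable error) follow standard motor-control definitions and can be sketched from a set of repeated force-reproduction trials:

```python
import numpy as np

def force_sense_errors(produced, target):
    """Constant error (directional bias), absolute error (overall
    magnitude), and variable error (consistency) of repeated
    force-reproduction trials against a target force."""
    err = np.asarray(produced, dtype=float) - target
    ce = float(err.mean())        # constant error: signed bias
    ae = float(np.abs(err).mean())  # absolute error: mean magnitude
    ve = float(err.std(ddof=0))   # variable error: spread about the bias
    return ce, ae, ve
```

    The decomposition explains the paper's finding pattern: a treatment can shrink absolute and variable error (magnitude and spread) while leaving constant error (the signed bias) unchanged.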

  8. Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.

    PubMed

    Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia

    2017-06-01

    Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error in using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that: (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra and methane/toluene mixture gas spectra, as measured using FT-IR spectrometry, with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In methane/toluene mixture gas analysis, a modification of SWLS has been presented to tackle the bias error from other components. The unmodified SWLS presents the lowest SEP in all cases, but not the lowest bias and RSS. The modified SWLS reduced the bias and showed a lower RSS than CLS, especially for small components.
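
    The selection rule in SWLS (unit CLS weights below an absorbance threshold, inverse-variance WLS weights above it) can be folded into one weighted normal-equations solve. This is a simplified sketch of the idea with a caller-supplied per-wavenumber noise variance, not the paper's full heteroscedastic noise model:

```python
import numpy as np

def swls_fit(K, a, noise_var, threshold):
    """Selective weighted least squares sketch.
    K: pure-component spectra (n_wavenumbers x n_components);
    a: measured absorbance per wavenumber;
    noise_var: estimated noise variance per wavenumber;
    threshold: absorbance level switching from CLS to WLS weighting."""
    a = np.asarray(a, dtype=float)
    # unit (CLS) weights at low absorbance, inverse-variance (WLS) above
    w = np.where(a > threshold, 1.0 / np.asarray(noise_var, dtype=float), 1.0)
    KtW = K.T * w
    # weighted normal equations: (K^T W K) c = K^T W a
    return np.linalg.solve(KtW @ K, KtW @ a)
```

    With noiseless data any positive weighting recovers the exact concentrations; the weighting only changes how noisy and biased wavenumbers trade off, which is the point of the selection criterion.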

  9. Microwave Photonic Architecture for Direction Finding of LPI Emitters: Post-Processing for Angle of Arrival Estimation

    DTIC Science & Technology

    2016-09-01

    For an FMCW signal, it was demonstrated that the system is capable of estimating the AOA with a root-mean-square (RMS) error of 0.29° at 1° resolution. For a P4 coded signal, the RMS error in estimating the AOA is 0.32° at 1° resolution.

  10. Estimating Aboveground Biomass in Tropical Forests: Field Methods and Error Analysis for the Calibration of Remote Sensing Observations

    DOE PAGES

    Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly; ...

    2017-01-07

    Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.

  11. Inferring river properties with SWOT like data

    NASA Astrophysics Data System (ADS)

    Garambois, Pierre-André; Monnier, Jérôme; Roux, Hélène

    2014-05-01

    Inverse problems in hydraulics are still open questions such as the estimation of river discharges. Remotely sensed measurements of hydrosystems can provide valuable information but adequate methods are still required to exploit it. The future Surface Water and Ocean Topography (SWOT) mission would provide new cartographic measurements of inland water surfaces. The highlight of SWOT will be its almost global coverage and temporal revisits on the order of 1 to 4 times per 22 days repeat cycle [1]. Lots of studies have shown the possibility of retrieving discharge given the river bathymetry or roughness and/or in situ time series. The new challenge is to use SWOT type data to inverse the triplet formed by the roughness, the bathymetry and the discharge. The method presented here is composed of two steps: following an inverse formulation from [2], the first step consists in retrieving an equivalent bathymetry profile of a river given one in situ depth measurement and SWOT like data of the water surface, that is to say water elevation, free surface slope and width. From this equivalent bathymetry, the second step consists in solving mass and Manning equation in the least square sense [3]. Nevertheless, for cases where no in situ measurement of water depth is available, it is still possible to solve a system formed by mass and Manning equations in the least square sense (or with other methods such as Bayesian ones, see e.g. [4]). We show that a good a priori knowledge of bathymetry and roughness is compulsory for such methods. Depending on this a priori knowledge, the inversion of the triplet (roughness, bathymetry, discharge) in SWOT context was evaluated on the Garonne River [5, 6]. The results are presented on 80 km of the Garonne River downstream of Toulouse in France [7]. An equivalent bathymetry is retrieved with less than 10% relative error with SWOT like observations. 
    Encouraging results are then obtained, with less than 10% relative error on the identified discharge. References [1] E. Rodriguez, SWOT science requirements document, JPL document, JPL, 2012. [2] A. Gessese, K. Wa, and M. Sellier, Bathymetry reconstruction based on the zero-inertia shallow water approximation, Theoretical and Computational Fluid Dynamics, vol. 27, no. 5, pp. 721-732, 2013. [3] P. A. Garambois and J. Monnier, Inference of river properties from remotely sensed observations of water surface, in final preparation for HESS, 2014. [4] M. Durand, Sacramento river airswot discharge estimation scenario. http://swotdawg.wordpress.com/2013/04/18/sacramento-river-airswot-discharge-estimation-scenario/, 2013. [5] P. A. Garambois and H. Roux, Garonne River discharge estimation. http://swotdawg.wordpress.com/2013/07/01/garonne-river-discharge-estimation/, 2013. [6] P. A. Garambois and H. Roux, Sensitivity of discharge uncertainty to measurement errors, case of the Garonne River. http://swotdawg.wordpress.com/2013/07/01/sensitivity-of-discharge-uncertainty-to-measurement-errors-case-of-the-garonne-river/, 2013. [7] H. Roux and P. A. Garambois, Tests of reach averaging and manning equation on the Garonne River. http://swotdawg.wordpress.com/2013/07/01/tests-of-reach-averaging-and-manning-equation-on-the-garonne-river/, 2013.
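
    The second inversion step, solving the mass and Manning equations in the least-square sense, can be sketched as follows. All numbers (channel geometry, slope, the roughness n = 0.03) are hypothetical, and with a single unknown reach-averaged discharge the least-squares fit reduces to averaging the per-reach Manning discharges:

```python
import numpy as np

def manning_discharge(n, width, depth, slope):
    """Steady-flow Manning discharge for a rectangular cross section."""
    area = width * depth
    radius = area / (width + 2.0 * depth)        # hydraulic radius
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * np.sqrt(slope)

# SWOT-like observables on three reaches at one overpass (hypothetical
# values); depth would come from the first, bathymetry-retrieval step.
width = np.array([120.0, 115.0, 130.0])          # m
depth = np.array([3.2, 3.4, 3.0])                # m
slope = np.array([2.0e-4, 1.8e-4, 2.2e-4])       # free-surface slope

# Minimizing sum_i (Q - q_i)^2 over a single unknown Q (the least-square
# sense) gives the mean of the per-reach discharges.
q_i = manning_discharge(0.03, width, depth, slope)
q_hat = q_i.mean()
```

    In the full method the roughness is itself part of the unknown triplet, so the system is solved jointly rather than with a fixed n as here.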

  12. Calibration of remotely sensed proportion or area estimates for misclassification error

    Treesearch

    Raymond L. Czaplewski; Glenn P. Catts

    1992-01-01

    Classifications of remotely sensed data contain misclassification errors that bias areal estimates. Monte Carlo techniques were used to compare two statistical methods that correct or calibrate remotely sensed areal estimates for misclassification bias using reference data from an error matrix. The inverse calibration estimator was consistently superior to the...

  13. Determination of suitable drying curve model for bread moisture loss during baking

    NASA Astrophysics Data System (ADS)

    Soleimani Pour-Damanab, A. R.; Jafary, A.; Rafiee, S.

    2013-03-01

    This study presents mathematical modelling of bread moisture loss, or drying, during a conventional bread baking process. In order to estimate and select the appropriate moisture-loss curve equation, 11 different semi-theoretical and empirical models were fitted to the experimental data and compared according to their correlation coefficients, chi-squared test values and root mean square errors, obtained by nonlinear regression analysis. Of all the drying models, the Page model was selected as the best, according to the correlation coefficient, chi-squared and root mean square error values and its simplicity. The mean absolute estimation error of the proposed model by linear regression analysis was 2.43% and 4.74% for the natural and forced convection modes, respectively.
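
    Fitting the selected Page model by nonlinear least squares and scoring it with the criteria above can be sketched on synthetic data; the rate and shape parameters below are invented for illustration, not the bread-baking estimates:

```python
import numpy as np
from scipy.optimize import curve_fit

def page(t, k, n):
    """Page thin-layer drying model: moisture ratio MR = exp(-k * t**n)."""
    return np.exp(-k * t ** n)

# Synthetic moisture-ratio curve with measurement noise (illustrative)
t = np.linspace(1.0, 40.0, 20)                  # baking time, min
rng = np.random.default_rng(0)
mr_obs = page(t, 0.05, 1.2) + rng.normal(0.0, 0.01, t.size)

# Nonlinear regression, then the three selection criteria from the study
(k, n), _ = curve_fit(page, t, mr_obs, p0=(0.1, 1.0))
resid = mr_obs - page(t, k, n)
rmse = np.sqrt(np.mean(resid ** 2))             # root mean square error
chi2 = np.sum(resid ** 2) / (t.size - 2)        # reduced chi-squared
r = np.corrcoef(mr_obs, page(t, k, n))[0, 1]    # correlation coefficient
```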

  14. Remote sensing of ocean currents

    NASA Technical Reports Server (NTRS)

    Goldstein, R. M.; Zebker, H. A.; Barnett, T. P.

    1989-01-01

    A method of remotely measuring near-surface ocean currents with a synthetic aperture radar (SAR) is described. The apparatus consists of a single SAR transmitter and two receiving antennas. The phase difference between SAR image scenes obtained from the antennas forms an interferogram that is directly proportional to the surface current. The first field test of this technique against conventional measurements gives estimates of mean currents accurate to order 20 percent, that is, root-mean-square errors of 5 to 10 centimeters per second in mean flows of 27 to 56 centimeters per second. If the full potential of the method could be realized with spacecraft, then it might be possible to routinely monitor the surface currents of the world's oceans.
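
    The measurement geometry can be sketched numerically: the interferometric phase between the two receiving antennas, divided by the effective time lag between their views of the same scene, is proportional to the line-of-sight surface velocity. All constants below (wavelength, baseline, platform speed, phase) are assumed for illustration only:

```python
import numpy as np

# Along-track SAR interferometry: phase difference -> surface velocity.
wavelength = 0.24      # m, L-band radar wavelength (assumed)
baseline = 19.0        # m, along-track antenna separation (assumed)
platform_v = 220.0     # m/s, platform ground speed (assumed)

# Effective time lag between the two antennas viewing the same patch
time_lag = baseline / (2.0 * platform_v)

# Measured interferogram phase at one pixel (assumed value)
phase = np.deg2rad(12.0)

# Line-of-sight (radial) surface current velocity
v_radial = wavelength * phase / (4.0 * np.pi * time_lag)
```

    With these assumed constants the result lands in the 5 to 10 cm/s regime quoted in the abstract.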

  15. EKF-Based Enhanced Performance Controller Design for Nonlinear Stochastic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yuyang; Zhang, Qichun; Wang, Hong

    In this paper, a novel control algorithm is presented to enhance tracking performance for a class of non-linear dynamic stochastic systems with unmeasurable variables. To minimize the entropy of tracking errors without changing the existing closed loop with a PI controller, an enhanced performance loop is constructed based on state estimation by an extended Kalman filter, and the new controller is designed by full state feedback following the presented control algorithm. In addition, conditions are obtained for stability analysis in the mean-square sense. Finally, comparative simulation results are given to illustrate the effectiveness of the proposed control algorithm.

  16. Analysis of randomly time varying systems by gaussian closure technique

    NASA Astrophysics Data System (ADS)

    Dash, P. K.; Iyengar, R. N.

    1982-07-01

    The Gaussian probability closure technique is applied to study the random response of multidegree of freedom stochastically time varying systems under non-Gaussian excitations. Under the assumption that the response, the coefficient and the excitation processes are jointly Gaussian, deterministic equations are derived for the first two response moments. It is further shown that this technique leads to the best Gaussian estimate in a minimum mean square error sense. An example problem is solved which demonstrates the capability of this technique for handling non-linearity, stochastic system parameters and amplitude limited responses in a unified manner. Numerical results obtained through the Gaussian closure technique compare well with the exact solutions.

  17. Correcting Four Similar Correlational Measures for Attenuation Due to Errors of Measurement in the Dependent Variable: Eta, Epsilon, Omega, and Intraclass r.

    ERIC Educational Resources Information Center

    Stanley, Julian C.; Livingston, Samuel A.

    Besides the ubiquitous Pearson product-moment r, there are a number of other measures of relationship that are attenuated by errors of measurement and for which the relationship between true measures can be estimated. Among these are the correlation ratio (eta squared), Kelley's unbiased correlation ratio (epsilon squared), Hays' omega squared,…

  18. Correlation between Trunk Posture and Neck Reposition Sense among Subjects with Forward Head Neck Postures

    PubMed Central

    Lee, Han Suk; Chung, Hyung Kuk; Park, Sun Wook

    2015-01-01

    Objective. To assess the correlation of abnormal trunk postures and reposition sense of subjects with forward head neck posture (FHP). Methods. In all, postures of 41 subjects were evaluated, and the FHP and trunk posture, including shoulder, scapular level, pelvic side, and anterior tilting degrees, were analyzed. We used the head repositioning accuracy (HRA) test to evaluate neck position sense for neck flexion, neck extension, neck right and left side flexion, and neck right and left rotation, and calculated the root mean square error over trials for each subject. Spearman's rank correlation coefficients and regression analysis were used to assess the degree of correlation between trunk posture and HRA value, with a significance level of α = 0.05. Results. There were significant correlations between the HRA value of right side neck flexion and pelvic side tilt angle (p < 0.05). If the pelvic side tilting angle increases by 1 degree, the right side neck flexion HRA increases by 0.76 degrees (p = 0.026). However, there were no significant correlations between other neck motions and trunk postures. Conclusion. Verifying pelvic postures should be prioritized when movement is limited due to the vitiation of the proprioceptive sense of the neck caused by FHP. PMID:26583125
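
    The HRA computation described above, the root mean square error over repeated repositioning trials for one motion, is a one-liner; the per-trial angular errors below are hypothetical:

```python
import math

def hra_rmse(errors_deg):
    """Head repositioning accuracy: RMS of per-trial repositioning errors."""
    return math.sqrt(sum(e ** 2 for e in errors_deg) / len(errors_deg))

# Hypothetical signed repositioning errors (degrees) over six trials
# of one neck motion; sign cancels in the mean but not in the RMS.
trials = [2.1, -1.4, 3.0, -0.8, 1.7, -2.2]
hra = hra_rmse(trials)
```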

  19. Study on the Rationality and Validity of Probit Models of Domino Effect to Chemical Process Equipment caused by Overpressure

    NASA Astrophysics Data System (ADS)

    Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong

    2013-04-01

    Overpressure is one important cause of the domino effect in chemical process equipment accidents. Previous studies have proposed models for the propagation probability and threshold values of the domino effect caused by overpressure. To test the rationality and validity of the models reported in the literature, the two boundary values separating the three damage degrees were treated as random variables in the interval [0, 100%]. Based on the reported overpressure data for equipment damage and damage states, and the calculation method reported in the references, the mean square errors of the four categories of overpressure damage probability models were calculated with random boundary values. This yielded a relationship between the mean square error and the two boundary values, and hence its minimum; compared with the result of the present work, the mean square error decreases by about 3%. This error is within the acceptable range for engineering applications, so the reported models can be considered reasonable and valid.

  20. Comparison of support vector machine classification to partial least squares dimension reduction with logistic discrimination of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Wilson, Machelle; Ustin, Susan L.; Rocke, David

    2003-03-01

    Remote sensing technologies with high spatial and spectral resolution show a great deal of promise in addressing critical environmental monitoring issues, but the ability to analyze and interpret the data lags behind the technology. Robust analytical methods are required before the wealth of data available through remote sensing can be applied to the wide range of environmental problems for which remote detection is the best method. In this study we compare the classification effectiveness of two relatively new techniques on data consisting of leaf-level reflectance from plants that have been exposed to varying levels of heavy metal toxicity. If these methodologies work well on leaf-level data, then there is some hope that they will also work well on data from airborne and space-borne platforms. The classification methods compared were support vector machine classification of exposed and non-exposed plants based on the reflectance data, and partial least squares compression of the reflectance data followed by classification using logistic discrimination (PLS/LD). PLS/LD was performed in two ways: using the continuous concentration data as the response during compression, followed by the binary response required for logistic discrimination, and using the binary response during compression as well. The statistic we used to compare the effectiveness of the methodologies was the leave-one-out cross-validation estimate of the prediction error.

  1. Continuous glucose determination using fiber-based tunable mid-infrared laser spectroscopy

    NASA Astrophysics Data System (ADS)

    Yu, Songlin; Li, Dachao; Chong, Hao; Sun, Changyue; Xu, Kexin

    2014-04-01

    Wavelength-tunable laser spectroscopy in combination with a small-sized fiber-optic attenuated total reflection (ATR) sensor (fiber-based evanescent field analysis, FEFA) is reported for the continuous measurement of the glucose level. We propose a method of controlling and stabilizing the wavelength and power of laser emission and present a newly developed mid-infrared wavelength-tunable laser with a broad emission band of 9.19-9.77 μm (1024-1088 cm-1). The novel small-sized flow-through fiber-optic ATR sensor with long optical sensing length was used for glucose determination. The experimental results indicate that the noise-equivalent concentration of this laser measurement system is as low as 3.8 mg/dL, which is among the most precise glucose measurements using mid-infrared spectroscopy. A sensitivity three times that of a conventional Fourier transform infrared spectrometer was achieved because of the higher laser power and higher spectral resolution. The best prediction of the glucose concentration in phosphate buffered saline solution was achieved using the five-variable partial least-squares model, yielding a root-mean-square error of prediction as small as 3.5 mg/dL. The high sensitivity, multiple tunable wavelengths and small fiber-based sensor with long optical sensing length make glucose determination possible in blood or interstitial fluid in vivo.

  2. The Effectiveness of Using Limited Gauge Measurements for Bias Adjustment of Satellite-Based Precipitation Estimation over Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Alharbi, Raied; Hsu, Kuolin; Sorooshian, Soroosh; Braithwaite, Dan

    2018-01-01

    Precipitation is a key input variable for hydrological and climate studies. Rain gauges are capable of providing reliable precipitation measurements at the point scale, but the uncertainty of rain measurements increases when the rain gauge network is sparse. Satellite-based precipitation estimates appear to be an alternative source of precipitation measurements, but they are affected by systematic bias. In this study, a method for removing the bias from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) over a region with a sparse rain gauge network is investigated. The method consists of monthly empirical quantile mapping, climate classification, and inverse-distance weighting. Daily PERSIANN-CCS is selected to test the capability of the method over Saudi Arabia during the period 2010 to 2016. The first six years (2010-2015) are used for calibration and 2016 for validation. The results show that, over the validation year, the yearly correlation coefficient was improved by 12%, the yearly mean bias was reduced by 93%, and the root mean square error was reduced by 73%. The correlation coefficient, mean bias, and root mean square error show that the proposed method effectively removes the bias in PERSIANN-CCS, and that the method can be applied to other regions where the rain gauge network is sparse.
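
    The core of the bias-removal step, empirical quantile mapping, can be sketched on synthetic data; the 20% multiplicative bias below is hypothetical, and the paper applies the mapping monthly with climate classification and inverse-distance weighting on top:

```python
import numpy as np

def quantile_map(sat, sat_clim, gauge_clim):
    """Empirical quantile mapping: replace each satellite value with the
    gauge value at the same empirical quantile."""
    q = np.interp(sat, np.sort(sat_clim),
                  np.linspace(0.0, 1.0, sat_clim.size))
    return np.interp(q, np.linspace(0.0, 1.0, gauge_clim.size),
                     np.sort(gauge_clim))

# Illustrative data: satellite systematically overestimates rain by 20%
rng = np.random.default_rng(1)
gauge = rng.gamma(2.0, 5.0, 1000)      # "true" daily rain, mm (synthetic)
sat = 1.2 * gauge                      # biased satellite estimate

corrected = quantile_map(sat, sat_clim=sat, gauge_clim=gauge)
bias_before = sat.mean() - gauge.mean()
bias_after = corrected.mean() - gauge.mean()
```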

  3. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    NASA Astrophysics Data System (ADS)

    Marinkovic, Slavica; Guillemot, Christine

    2006-12-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure- and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, over a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information into the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  4. Quantitative susceptibility mapping: Report from the 2016 reconstruction challenge.

    PubMed

    Langkammer, Christian; Schweser, Ferdinand; Shmueli, Karin; Kames, Christian; Li, Xu; Guo, Li; Milovic, Carlos; Kim, Jinsuh; Wei, Hongjiang; Bredies, Kristian; Buch, Sagar; Guo, Yihao; Liu, Zhe; Meineke, Jakob; Rauscher, Alexander; Marques, José P; Bilgic, Berkin

    2018-03-01

    The aim of the 2016 quantitative susceptibility mapping (QSM) reconstruction challenge was to test the ability of various QSM algorithms to recover the underlying susceptibility from phase data faithfully. Gradient-echo images of a healthy volunteer were acquired at 3T in a single orientation with 1.06 mm isotropic resolution. A reference susceptibility map was provided, computed using the susceptibility tensor imaging algorithm on data acquired at 12 head orientations. Susceptibility maps calculated from the single-orientation data were compared against the reference susceptibility map. Deviations were quantified using the following metrics: root mean squared error (RMSE), structure similarity index (SSIM), high-frequency error norm (HFEN), and the error in selected white and gray matter regions. Twenty-seven submissions were evaluated. Most of the best-scoring approaches estimated the spatial frequency content in the ill-conditioned domain of the dipole kernel using compressed sensing strategies. The top 10 maps in each category had similar error metrics but substantially different visual appearance. Because QSM algorithms were optimized to minimize error metrics, the resulting susceptibility maps suffered from over-smoothing and conspicuity loss in fine features such as vessels. As such, the challenge highlighted the need for better numerical image quality criteria. Magn Reson Med 79:1661-1673, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  5. Estimation of Biomass and Canopy Height in Bermudagrass, Alfalfa, and Wheat Using Ultrasonic, Laser, and Spectral Sensors

    PubMed Central

    Pittman, Jeremy Joshua; Arnall, Daryl Brian; Interrante, Sindy M.; Moffet, Corey A.; Butler, Twain J.

    2015-01-01

    Non-destructive biomass estimation of vegetation has been performed via remote sensing as well as physical measurements. An effective method for estimating biomass must have accuracy comparable to the accepted standard of destructive removal. Estimation or measurement of height is commonly employed to create a relationship between height and mass. This study examined several types of ground-based mobile sensing strategies for forage biomass estimation. Forage production experiments consisting of alfalfa (Medicago sativa L.), bermudagrass [Cynodon dactylon (L.) Pers.], and wheat (Triticum aestivum L.) were employed to compare sensor biomass estimation (laser, ultrasonic, and spectral) with physical measurements (plate meter and meter stick) and the traditional harvest method (clipping). Predictive models were constructed via partial least squares regression, and modeled estimates were compared to the physically measured biomass. Least-significant-difference-separated mean estimates were examined to evaluate differences in the physical measurements and sensor estimates of canopy height and biomass. Differences between methods were minimal, with an average error of 11.2% between predicted and machine- and quadrat-harvested biomass values (1.64 and 4.91 t·ha−1, respectively), except at the lowest measured biomass (average error of 89% for harvester- and quadrat-harvested biomass < 0.79 t·ha−1) and the greatest measured biomass (average error of 18% for harvester- and quadrat-harvested biomass > 6.4 t·ha−1). These data suggest that mobile sensor-based biomass estimation models could be an effective alternative to the traditional clipping method for rapid, accurate in-field biomass estimation. PMID:25635415

  6. Scalable L-infinite coding of meshes.

    PubMed

    Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter

    2010-01-01

    The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in the L-infinite sense, that is, any decoding of the input stream corresponds to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, a scalable 3D object encoding system that is part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, enables a fast real-time implementation of the rate allocation, and preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
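
    The contrast between the L-2 (mean-square) and L-infinite target distortion metrics can be shown on a toy vertex-coordinate array; the values are made up. A single badly coded vertex barely moves the mean-square error but is fully exposed by the L-infinite metric:

```python
import numpy as np

# One coordinate of five mesh vertices, original vs decoded (illustrative)
orig = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
deco = np.array([0.0, 1.0, 2.0, 3.0, 4.5])   # one vertex off by 0.5

mse = np.mean((orig - deco) ** 2)   # 0.05: the average hides the defect
linf = np.max(np.abs(orig - deco))  # 0.5: the guaranteed local error bound
```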

  7. Triple collocation-based estimation of spatially correlated observation error covariance in remote sensing soil moisture data assimilation

    NASA Astrophysics Data System (ADS)

    Wu, Kai; Shu, Hong; Nie, Lei; Jiao, Zhenhang

    2018-01-01

    Spatially correlated errors are typically ignored in data assimilation, degenerating the observation error covariance R to a diagonal matrix. We argue that a nondiagonal R carries more observation information, making assimilation results more accurate. A method, denoted TC_Cov, is proposed for soil moisture data assimilation to estimate spatially correlated observation error covariance based on triple collocation (TC). Assimilation experiments were carried out to test the performance of TC_Cov: AMSR-E soil moisture was assimilated using a diagonal R matrix computed by TC, and using a nondiagonal R matrix estimated by the proposed TC_Cov, with the ensemble Kalman filter as the assimilation method. The assimilation results were validated against climate change initiative data and ground-based soil moisture measurements using the Pearson correlation coefficient and unbiased root mean square difference (ubRMSD) metrics. These experiments confirmed that diagonal R assimilation results deteriorate when the model simulation is more accurate than the observation data. Furthermore, nondiagonal R achieved higher correlation coefficients and lower ubRMSD values than diagonal R, demonstrating the effectiveness of TC_Cov for estimating a richly structured R in data assimilation. In sum, compared with diagonal R, nondiagonal R may relieve the detrimental effects of assimilation when simulated model results outperform observation data.
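
    A minimal sketch of triple collocation on synthetic data, estimating only error variances (the diagonal of R; the paper's TC_Cov extends this to spatially correlated covariances): three collocated estimates with independent errors, whose error variances are recovered from cross products of pairwise differences.

```python
import numpy as np

def tc_error_variances(x, y, z):
    """Covariance-notation triple collocation: error variances of three
    collocated estimates of the same signal with independent errors."""
    x, y, z = (a - a.mean() for a in (x, y, z))
    ex2 = np.mean((x - y) * (x - z))   # truth cancels in each difference
    ey2 = np.mean((y - x) * (y - z))
    ez2 = np.mean((z - x) * (z - y))
    return ex2, ey2, ez2

# Synthetic soil moisture triplet (all numbers illustrative)
rng = np.random.default_rng(2)
truth = rng.normal(0.25, 0.05, 20000)
x = truth + rng.normal(0.0, 0.02, truth.size)  # e.g., satellite retrieval
y = truth + rng.normal(0.0, 0.03, truth.size)  # e.g., model simulation
z = truth + rng.normal(0.0, 0.04, truth.size)  # e.g., in situ network

ex2, ey2, ez2 = tc_error_variances(x, y, z)
```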

  8. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.

  9. A robust nonlinear filter for image restoration.

    PubMed

    Koivunen, V

    1995-01-01

    A class of nonlinear regression filters based on robust estimation theory is introduced. The goal of the filtering is to recover a high-quality image from degraded observations. Models for desired image structures and contaminating processes are employed, but deviations from strict assumptions are allowed since the assumptions on signal and noise are typically only approximately true. The robustness of filters is usually addressed only in a distributional sense, i.e., the actual error distribution deviates from the nominal one. In this paper, the robustness is considered in a broad sense since the outliers may also be due to inappropriate signal model, or there may be more than one statistical population present in the processing window, causing biased estimates. Two filtering algorithms minimizing a least trimmed squares criterion are provided. The design of the filters is simple since no scale parameters or context-dependent threshold values are required. Experimental results using both real and simulated data are presented. The filters effectively attenuate both impulsive and nonimpulsive noise while recovering the signal structure and preserving interesting details.
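
    A least-trimmed-squares location estimate for a single filter window can be sketched as follows; the trimming fraction and pixel values are illustrative. Unlike the window mean, the estimate ignores the impulse:

```python
import numpy as np

def lts_location(window, trim=0.25):
    """Least-trimmed-squares location estimate for one filter window:
    the mean of the h samples with the smallest squared residuals."""
    x = np.sort(np.asarray(window, dtype=float))
    h = int(np.ceil((1.0 - trim) * x.size))
    # The optimal h-subset of sorted data is contiguous; scan all of them.
    best, best_ss = None, np.inf
    for i in range(x.size - h + 1):
        sub = x[i:i + h]
        ss = np.sum((sub - sub.mean()) ** 2)
        if ss < best_ss:
            best, best_ss = sub.mean(), ss
    return best

# A 3x3 window with one "salt" impulse (illustrative pixel values)
window = [10, 11, 9, 10, 12, 255, 10, 11, 9]
est = lts_location(window)          # the impulse is trimmed away
```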

  10. Ultrasonic tracking of shear waves using a particle filter

    PubMed Central

    Ingle, Atul N.; Ma, Chi; Varghese, Tomy

    2015-01-01

    Purpose: This paper discusses an application of particle filtering for estimating shear wave velocity in tissue using ultrasound elastography data. Shear wave velocity estimates are of significant clinical value as they help differentiate stiffer areas from softer areas which is an indicator of potential pathology. Methods: Radio-frequency ultrasound echo signals are used for tracking axial displacements and obtaining the time-to-peak displacement at different lateral locations. These time-to-peak data are usually very noisy and cannot be used directly for computing velocity. In this paper, the denoising problem is tackled using a hidden Markov model with the hidden states being the unknown (noiseless) time-to-peak values. A particle filter is then used for smoothing out the time-to-peak curve to obtain a fit that is optimal in a minimum mean squared error sense. Results: Simulation results from synthetic data and finite element modeling suggest that the particle filter provides lower mean squared reconstruction error with smaller variance as compared to standard filtering methods, while preserving sharp boundary detail. Results from phantom experiments show that the shear wave velocity estimates in the stiff regions of the phantoms were within 20% of those obtained from a commercial ultrasound scanner and agree with estimates obtained using a standard method using least-squares fit. Estimates of area obtained from the particle filtered shear wave velocity maps were within 10% of those obtained from B-mode ultrasound images. Conclusions: The particle filtering approach can be used for producing visually appealing SWV reconstructions by effectively delineating various areas of the phantom with good image quality properties comparable to existing techniques. PMID:26520761

  11. Optimal secondary source position in exterior spherical acoustical holophony

    NASA Astrophysics Data System (ADS)

    Pasqual, A. M.; Martin, V.

    2012-02-01

    Exterior spherical acoustical holophony is a branch of spatial audio reproduction that deals with the rendering of a given free-field radiation pattern (the primary field) by a compact spherical loudspeaker array (the secondary source). More precisely, the primary field is known on a spherical surface surrounding the primary and secondary sources and, since the acoustic fields are described in spherical coordinates, they are naturally subjected to spherical harmonic analysis. The inverse problem of deriving optimal driving signals from a known primary field is ill-posed because the secondary source cannot radiate high-order spherical harmonics efficiently, especially in the low-frequency range. As a consequence, a standard least-squares solution will overload the transducers if the primary field contains such harmonics. Here, this is avoided by discarding the strongly decaying spherical waves, which are identified through inspection of the radiation efficiency curves of the secondary source. However, this unavoidable regularization procedure increases the least-squares error, which also depends on the position of the secondary source. This paper deals with the above questions in the context of far-field directivity reproduction at low and medium frequencies. In particular, an optimal secondary source position is sought that leads to the lowest reproduction error in the least-squares sense without overloading the transducers. To address this issue, a regularization quality factor is introduced to evaluate the amount of regularization required. It is shown that the optimal position significantly improves the holophonic reconstruction and maximizes the regularization quality factor (i.e., minimizes the amount of regularization), which is the main general contribution of this paper. This factor can therefore also be used as a cost function to obtain the optimal secondary source position.

  12. Analysis of tractable distortion metrics for EEG compression applications.

    PubMed

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando

    2012-07-01

    Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio.
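
    The paper's argument, that the percentage root-mean-square difference (PRD) is relative while the RMSE stays in the signal's own units, can be shown on a synthetic signal; the amplitudes and the constant 2 μV coding error below are illustrative:

```python
import numpy as np

def prd_percent(x, x_rec):
    """Percentage root-mean-square difference: global and relative."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def rmse(x, x_rec):
    """Root-mean-square error, in the signal's own units (uV for EEG)."""
    return np.sqrt(np.mean((x - x_rec) ** 2))

# Synthetic "EEG" at two amplitudes with the same absolute coding error:
# RMSE reads the same 2 uV in both cases, while PRD scales with amplitude
# and so cannot be compared against clinical allowable-noise limits.
t = np.linspace(0.0, 1.0, 500)
big = 50.0 * np.sin(2.0 * np.pi * 10.0 * t)     # 50 uV amplitude
small = 10.0 * np.sin(2.0 * np.pi * 10.0 * t)   # 10 uV amplitude
err = 2.0                                       # constant 2 uV error

rmse_big, rmse_small = rmse(big, big + err), rmse(small, small + err)
prd_big, prd_small = prd_percent(big, big + err), prd_percent(small, small + err)
```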

  13. Modeling a historical mountain pine beetle outbreak using Landsat MSS and multiple lines of evidence

    USGS Publications Warehouse

    Assal, Timothy J.; Sibold, Jason; Reich, Robin M.

    2014-01-01

    Mountain pine beetles are significant forest disturbance agents, capable of inducing widespread mortality in coniferous forests in western North America. Various remote sensing approaches have assessed the impacts of beetle outbreaks over the last two decades. However, few studies have addressed the impacts of historical mountain pine beetle outbreaks, including the 1970s event that impacted Glacier National Park. The lack of spatially explicit data on this disturbance represents both a major data gap and a critical research challenge in that wildfire has removed some of the evidence from the landscape. We utilized multiple lines of evidence to model forest canopy mortality as a proxy for outbreak severity. We incorporate historical aerial and landscape photos, aerial detection survey data, a nine-year collection of satellite imagery and abiotic data. This study presents a remote sensing based framework to (1) relate measurements of canopy mortality from fine-scale aerial photography to coarse-scale multispectral imagery and (2) classify the severity of mountain pine beetle affected areas using a temporal sequence of Landsat data and other landscape variables. We sampled canopy mortality in 261 plots from aerial photos and found that insect effects on mortality were evident in changes to the Normalized Difference Vegetation Index (NDVI) over time. We tested multiple spectral indices and found that a combination of NDVI and the green band resulted in the strongest model. We report a two-step process where we utilize a generalized least squares model to account for the large-scale variability in the data and a binary regression tree to describe the small-scale variability. The final model had a root mean square error estimate of 9.8% canopy mortality, a mean absolute error of 7.6% and an R2 of 0.82. 
The results demonstrate that a model of percent canopy mortality as a continuous variable can be developed to identify a gradient of mountain pine beetle severity on the landscape.
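
    The three fit statistics reported above (root mean square error, mean absolute error, and R²) can be computed as in the following sketch; the plot values here are synthetic and purely illustrative, not data from the study.

```python
import numpy as np

def fit_metrics(observed, predicted):
    """Return RMSE, MAE, and R^2 between observed and predicted values."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    resid = observed - predicted
    rmse = float(np.sqrt(np.mean(resid ** 2)))            # root mean square error
    mae = float(np.mean(np.abs(resid)))                   # mean absolute error
    ss_res = float(np.sum(resid ** 2))                    # residual sum of squares
    ss_tot = float(np.sum((observed - observed.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot                            # coefficient of determination
    return rmse, mae, r2

# Hypothetical percent canopy mortality in five plots (observed vs. modeled)
obs = [10.0, 25.0, 40.0, 60.0, 80.0]
pred = [12.0, 22.0, 45.0, 58.0, 83.0]
rmse, mae, r2 = fit_metrics(obs, pred)
```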

  14. An Accurately Controlled Antagonistic Shape Memory Alloy Actuator with Self-Sensing

    PubMed Central

    Wang, Tian-Miao; Shi, Zhen-Yun; Liu, Da; Ma, Chen; Zhang, Zhen-Hua

    2012-01-01

    With the progress of miniaturization, shape memory alloy (SMA) actuators exhibit high energy density, self-sensing ability and ease of fabrication, which make them well suited for practical applications. This paper presents a self-sensing controlled actuator drive that was designed using antagonistic pairs of SMA wires. Under a certain pre-strain and duty cycle, the stress between the two wires becomes constant. Meanwhile, the strain-to-resistance curve can minimize the hysteresis gap between the heating and the cooling paths. The curves of both wires are then modeled by fitting polynomials such that the measured resistance can be used directly to determine the difference between the measured values and the target strain. The hysteresis model of strain to duty-cycle difference has been used as compensation. Accurate control is demonstrated through step response and sinusoidal tracking. The experimental results show that, under a combination control program, the root-mean-square error can be reduced to 1.093%. The bandwidth is estimated to be limited to 0.15 Hz. Two sets of instruments with three degrees of freedom are illustrated to show how this type of actuator could potentially be implemented. PMID:22969368

  15. A Survey of Terrain Modeling Technologies and Techniques

    DTIC Science & Technology

    2007-09-01

    Washington, DC 20314-1000. ERDC/TEC TR-08-2. Abstract: Test planning, rehearsal, and distributed test events for Future Combat Systems (FCS) require... distance) for all five lines of control points. Blue circles are errors of DSM (original data), red squares are DTM (bare Earth, processed by Intermap)... Distribution of errors for line No. 729: blue circles are DSM, red squares are DTM.

  16. An approach for land suitability evaluation using geostatistics, remote sensing, and geographic information system in arid and semiarid ecosystems.

    PubMed

    Emadi, Mostafa; Baghernejad, Majid; Pakparvar, Mojtaba; Kowsar, Sayyed Ahang

    2010-05-01

    This study was undertaken to incorporate geostatistics, remote sensing, and geographic information system (GIS) technologies to improve the qualitative land suitability assessment in arid and semiarid ecosystems of the Arsanjan plain, southern Iran. The primary data were obtained from 85 soil samples collected from three depths (0-30, 30-60, and 60-90 cm); the secondary information was acquired from the remotely sensed data from the linear imaging self-scanner (LISS-III) receiver of the IRS-P6 satellite. Ordinary kriging and simple kriging with varying local means (SKVLM) methods were used to identify the spatial dependency of important soil parameters. Using the spectral values of band 1 of the LISS-III receiver as the secondary variable in the SKVLM method resulted in the lowest mean square error for mapping the pH and electrical conductivity (ECe) in the 0-30-cm depth. On the other hand, the ordinary kriging method gave reliable accuracy for interpolating the other soil properties, which showed moderate to strong spatial dependency in the study area, at unsampled points. The parametric land suitability evaluation method was applied to the dense grid of points (150 x 150 m) obtained by the kriging or SKVLM methods, instead of to the limited representative profiles used conventionally. The information layers were then overlaid in the GIS to prepare the final land suitability evaluation. Therefore, changes in land characteristics could be identified within the same uniform soil mapping units over a very short distance. In general, this new method can easily present the areas and limiting factors of the different land suitability classes with considerable accuracy in arbitrary land indices.

  17. Developing the remote sensing-based early warning system for monitoring TSS concentrations in Lake Mead.

    PubMed

    Imen, Sanaz; Chang, Ni-Bin; Yang, Y Jeffrey

    2015-09-01

    Adjustment of the water treatment process to changes in water quality is a focus area for engineers and managers of water treatment plants. The desired and preferred capability depends on timely and quantitative knowledge of water quality monitoring in terms of total suspended solids (TSS) concentrations. This paper presents the development of a suite of nowcasting and forecasting methods by using high-resolution remote-sensing-based monitoring techniques on a daily basis. First, the integrated data fusion and mining (IDFM) technique was applied to develop a near real-time monitoring system for daily nowcasting of the TSS concentrations. Then a nonlinear autoregressive neural network with external input (NARXNET) model was selected and applied for forecasting analysis of the changes in TSS concentrations over time on a rolling basis using the IDFM technique. The implementation of such an integrated forecasting and nowcasting approach was assessed by a case study at Lake Mead, which hosts the water intake for Las Vegas, Nevada, in the water-stressed western U.S. Long-term monthly averaged results showed no simultaneous impact from forest fire events on accelerating the rise of TSS concentration. However, the results showed a probable impact of a decade of drought on increasing TSS concentration in the Colorado River Arm and Overton Arm. Results of the forecasting model highlight the reservoir water level as a significant parameter in predicting TSS in Lake Mead. In addition, an R-squared value of 0.98 and a root mean square error of 0.5 between the observed and predicted TSS values demonstrate the reliability and application potential of this remote sensing-based early warning system in terms of TSS projections at a drinking water intake. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Space-Time Joint Interference Cancellation Using Fuzzy-Inference-Based Adaptive Filtering Techniques in Frequency-Selective Multipath Channels

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng

    2006-12-01

    An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and the squared-error variation, denoted by (e²(n), Δe²(n)), into a forgetting factor λ(n). For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using a fuzzy-inference-controlled step size μ(n). This receiver provides both fast convergence/tracking capability and small steady-state misadjustment compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB in bit-error rate (BER), respectively, for multipath fading channels.
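
    The fuzzy inference machinery is beyond a short example, but the underlying idea of adapting the LMS step size from the instantaneous error can be sketched as follows. This is a crude stand-in for the paper's fuzzy controller, applied here to a generic system-identification task; the error-to-step mapping and all parameter values are illustrative assumptions.

```python
import numpy as np

def vss_lms(x, d, n_taps=4, mu_min=0.01, mu_max=0.5):
    """Variable step-size LMS: the step grows with the magnitude of the
    a-priori error (a rough analogue of fuzzy-inference step control)."""
    w = np.zeros(n_taps)
    errs = []
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1 : n + 1][::-1]     # tap-delay input vector
        e = d[n] - w @ u                        # a-priori estimation error
        mu = np.clip(abs(e), mu_min, mu_max)    # error-driven step size
        w = w + mu * e * u / (u @ u + 1e-8)     # normalized LMS update
        errs.append(e)
    return w, np.array(errs)

# Identify a hypothetical 4-tap channel from noiseless input/output data
rng = np.random.default_rng(1)
h_true = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(3000)
d = np.convolve(x, h_true)[: len(x)]
w, errs = vss_lms(x, d)
```

    A large error keeps the step near its upper bound for fast convergence; a small error shrinks it, reducing steady-state misadjustment, which is the trade-off the fuzzy controller in the paper negotiates.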

  19. Majority-voted logic fail-sense circuit

    NASA Technical Reports Server (NTRS)

    Mclyman, W. T.

    1977-01-01

    Fail-sense circuit has majority-voted logic component which receives three error voltage signals that are sensed at single point by three error amplifiers. If transistor shorts, only one signal is required to operate; if transistor opens, two signals are required.

  20. Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera

    NASA Astrophysics Data System (ADS)

    Rahman, Samiur; Ullah, Sana; Ullah, Sehat

    2018-01-01

    Obstacle detection can improve the mobility as well as the safety of visually impaired people. In this paper, we present a system for visually impaired people that uses a mobile camera. The proposed algorithm works in indoor environments and relies on a simple technique based on a few pre-stored floor images: all unique floor types in the environment are considered, and a single reference image is stored for each unique floor type. The algorithm acquires an input image frame, selects a region of interest, and scans it for obstacles using the pre-stored floor images. It compares the present frame with the next frame and computes the mean square error between the two. If the mean square error is less than a threshold value α, there is no obstacle in the next frame. If the mean square error is greater than α, there are two possibilities: either there is an obstacle, or the floor type has changed. To check whether the floor has changed, the algorithm computes the mean square error between the next frame and all stored floor types. If the minimum of these mean square errors is less than α, the floor has changed; otherwise, an obstacle exists. The proposed algorithm works in real time, and 96% accuracy has been achieved.
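
    The frame-comparison logic described above can be sketched directly; the threshold α, the patch sizes, and the reference images below are illustrative stand-ins for the paper's pre-stored floor images.

```python
import numpy as np

def mse(a, b):
    """Mean square error between two equally sized grayscale frames."""
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def classify_next_frame(current, nxt, floor_refs, alpha):
    """Return 'clear', 'floor_changed', or 'obstacle' for the next frame."""
    if mse(current, nxt) < alpha:
        return "clear"                # next frame still matches the current view
    # Large change: either the floor type changed or an obstacle appeared
    if min(mse(nxt, ref) for ref in floor_refs) < alpha:
        return "floor_changed"        # next frame matches a stored floor type
    return "obstacle"
```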

  1. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    NASA Astrophysics Data System (ADS)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities for an organization. The selection of an appropriate forecasting method is important, but the percentage error of a method matters more if decision makers are to adopt the right course of action. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least squares method resulted in a percentage error of 9.77%, and it was concluded that the least squares method is suitable for time series and trend data.
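
    The two error measures are straightforward to compute; the demand and forecast series below are hypothetical, chosen only to exercise the formulas.

```python
import numpy as np

def mad(actual, forecast):
    """Mean Absolute Deviation of the forecast errors."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs(actual - forecast)))

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100.0)

# Hypothetical observations and least-squares trend forecasts
actual   = [100.0, 110.0, 120.0, 130.0]
forecast = [ 95.0, 112.0, 118.0, 133.0]
```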

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly

    Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.

  3. Least Square Approach for Estimating of Land Surface Temperature from LANDSAT-8 Satellite Data Using Radiative Transfer Equation

    NASA Astrophysics Data System (ADS)

    Jouybari-Moghaddam, Y.; Saradjian, M. R.; Forati, A. M.

    2017-09-01

    Land Surface Temperature (LST) is one of the significant variables measured by remotely sensed data, and it is applied in many environmental and geoscience studies. The main aim of this study is to develop an algorithm to retrieve LST from Landsat-8 satellite data using the Radiative Transfer Equation (RTE). Although LST can be retrieved from the RTE, the RTE contains two unknown parameters, LST and surface emissivity, so estimating LST from it alone is an underdetermined problem. To solve this problem, an equation set is proposed that includes two RTEs based on the Landsat-8 thermal bands (bands 10 and 11) and two additional equations based on the relation between the Normalized Difference Vegetation Index (NDVI) and the emissivity of the Landsat-8 thermal bands, established using simulated data for those bands. An iterative least squares approach was used to solve the equation set. The LST derived from the proposed algorithm was evaluated against a simulated dataset built with MODTRAN. The results show a Root Mean Squared Error (RMSE) of less than 1.18 K. The proposed algorithm is therefore a suitable and robust method for retrieving LST from Landsat-8 satellite data.
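
    The iterative least squares approach for solving an overdetermined nonlinear equation set can be illustrated with a generic Gauss-Newton iteration. The toy exponential system below (four equations, two unknowns) stands in for the actual RTE/NDVI equations, which are not reproduced in the abstract.

```python
import numpy as np

def gauss_newton(residual, x0, n_iter=25):
    """Iteratively minimize ||residual(x)||^2 for an overdetermined
    nonlinear system (Gauss-Newton with a forward-difference Jacobian)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        eps = 1e-7
        J = np.empty((r.size, x.size))
        for j in range(x.size):          # numerical Jacobian, column by column
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (residual(xp) - r) / eps
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)   # least-squares update
        x = x + dx
    return x

# Four observations, two unknowns: fit y = a * exp(b * t)
t = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * np.exp(0.5 * t)            # exact data generated with a=2, b=0.5
params = gauss_newton(lambda p: p[0] * np.exp(p[1] * t) - y, x0=[1.0, 0.3])
```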

  4. Enhanced Performance Controller Design for Stochastic Systems by Adding Extra State Estimation onto the Existing Closed Loop Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yuyang; Zhang, Qichun; Wang, Hong

    To enhance tracking performance, this paper presents a novel control algorithm for a class of linear dynamic stochastic systems with unmeasurable states, where a performance enhancement loop is established based on a Kalman filter. Without changing the existing closed loop with the PI controller, a compensative controller is designed to minimize the variances of the tracking errors using the estimated states and the propagation of state variances. Moreover, the stability of the closed-loop system has been analyzed in the mean-square sense. A simulated example is included to show the effectiveness of the presented control algorithm, where encouraging results have been obtained.

  5. Analysis of contact zones from whole field isochromatics using reflection photoelasticity

    NASA Astrophysics Data System (ADS)

    Hariprasad, M. P.; Ramesh, K.

    2018-06-01

    This paper discusses the method for evaluating the unknown contact parameters by post processing the whole field fringe order data obtained from reflection photoelasticity in a nonlinear least squares sense. Recent developments in Twelve Fringe Photoelasticity (TFP) for fringe order evaluation from single isochromatics is utilized for the whole field fringe order evaluation. One of the issues in using TFP for reflection photoelasticity is the smudging of isochromatic data at the contact zone. This leads to error in identifying the origin of contact, which is successfully addressed by implementing a semi-automatic contact point refinement algorithm. The methodologies are initially verified for benchmark problems and demonstrated for two application problems of turbine blade and sheet pile contacting interfaces.

  6. Influence of stimulated Brillouin scattering on positioning accuracy of long-range dual Mach-Zehnder interferometric vibration sensors

    NASA Astrophysics Data System (ADS)

    He, Xiangge; Xie, Shangran; Cao, Shan; Liu, Fei; Zheng, Xiaoping; Zhang, Min; Yan, Han; Chen, Guocai

    2016-11-01

    The properties of noise induced by stimulated Brillouin scattering (SBS) in long-range interferometers and their influences on the positioning accuracy of dual Mach-Zehnder interferometric (DMZI) vibration sensing systems are studied. The SBS noise is found to be white and incoherent between the two arms of the interferometer in a 1-MHz bandwidth range. Experiments on 25-km long fibers show that the root mean square error (RMSE) of the positioning accuracy is consistent with the additive noise model for the time delay estimation theory. A low-pass filter can be properly designed to suppress the SBS noise and further achieve a maximum RMSE reduction of 6.7 dB.

  7. High-resolution three-dimensional imaging with compressive sensing

    NASA Astrophysics Data System (ADS)

    Wang, Jingyi; Ke, Jun

    2016-10-01

    LIDAR three-dimensional imaging technology has been used in many fields, such as military detection. However, LIDAR requires extremely fast data acquisition, which makes manufacturing detector arrays for LIDAR systems very difficult. To solve this problem, we consider using compressive sensing, which can greatly decrease the data acquisition and relax the requirements on the detection device. To apply the compressive sensing idea, a spatial light modulator (SLM) is used to modulate the pulsed light source, and a photodetector receives the reflected light. A convex optimization problem is solved to reconstruct the 2D depth map of the object. To improve the resolution in the transversal direction, we use multiframe image restoration: for each 2D piecewise-planar scene, we move the SLM half a pixel at a time, so the position illuminated by the modulated light changes accordingly. Repeating the movement in four different directions yields four low-resolution depth maps with different details of the same planar scene. Using all of the measurements obtained by the subpixel movements, we can reconstruct a high-resolution depth map of the scene; a linear minimum-mean-square error algorithm is used for the reconstruction. By combining compressive sensing and multiframe image restoration, we reduce the data-analysis burden, improve the efficiency of detection and, more importantly, obtain high-resolution depth maps of a 3D scene.
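
    The linear minimum-mean-square-error (LMMSE) reconstruction step mentioned above can be sketched in a toy setting. All dimensions, covariances, and the random measurement matrix below are illustrative assumptions, not the paper's SLM patterns.

```python
import numpy as np

# Toy LMMSE reconstruction: y = A x + n, estimate x from compressed y
rng = np.random.default_rng(0)
n, m = 16, 8                        # signal length, number of measurements
A = rng.standard_normal((m, n))     # measurement (modulation) matrix
Cx = np.eye(n)                      # prior signal covariance (assumed white)
Cn = 0.01 * np.eye(m)               # measurement-noise covariance

x = rng.standard_normal(n)          # unknown signal
noise = 0.1 * rng.standard_normal(m)
y = A @ x + noise                   # compressed, noisy measurements

# LMMSE (Wiener) estimator: x_hat = Cx A^T (A Cx A^T + Cn)^-1 y
W = Cx @ A.T @ np.linalg.inv(A @ Cx @ A.T + Cn)
x_hat = W @ y
```

    The estimator is linear in the measurements, which is what makes it attractive for fusing the four subpixel-shifted low-resolution maps; sparsity-promoting convex solvers, as used elsewhere in the paper's pipeline, are nonlinear and costlier.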

  8. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  9. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  10. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  11. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  12. Estimating Error in SRTM Derived Planform of a River in Data-poor Region and Subsequent Impact on Inundation Modeling

    NASA Astrophysics Data System (ADS)

    Bhuyian, M. N. M.; Kalyanapu, A. J.

    2017-12-01

    Accurate representation of river planform is critical for hydrodynamic modeling. Digital elevation models (DEMs) often fall short in accurately representing river planform because they show the ground as it was during data acquisition, whereas water bodies (i.e., rivers) change their size and shape over time. River planforms are more dynamic in undisturbed riverine systems (mostly located in data-poor regions), where remote sensing is the most convenient source of data. For many such regions, the Shuttle Radar Topography Mission (SRTM) is the best available source of DEMs. Therefore, the objective of this study is to estimate the error in the SRTM-derived planform of a river in a data-poor region and the subsequent impact on inundation modeling. Analysis of Landsat imagery, the SRTM DEM, and remotely sensed soil data was used to classify the planform activity in a 185 km stretch of the Kushiyara River in Bangladesh. Over the last 15 years, the river eroded about 4.65 square km and deposited 7.55 square km of area. Therefore, the current (year 2017) river planform is significantly different from the SRTM water body data, which represent the time of SRTM data acquisition (the year 2000). The rate of planform shifting increased significantly as the river traveled downstream, so the study area was divided into three reaches (R1, R2, and R3) from upstream to downstream. Channel slope and meandering ratio changed from 2x10^-7 and 1.64 in R1 to 1x10^-4 and 1.45 in R3. However, more than 60% of the erosion-deposition occurred in R3, where a high percentage of Fluvisols (98%) and coarse particles (21%) were present in the vicinity of the river. This indicates that errors in SRTM water body data (due to planform shifting) could be correlated with the physical properties (i.e., slope, soil type, meandering ratio, etc.) of the riverine system. The correlations would help in zoning the activity of a riverine system and in determining a timeline for updating the DEM for a given region.
Additionally, to estimate the impact of planform shifting on inundation modeling, a hydrodynamic model using an SRTM DEM and a modified SRTM DEM (representing most recent planform) for R3 would be set up. This research would highlight the need for considering planform dynamics in DEM based hydrodynamic modeling.

  13. Predicting tree species presence and basal area in Utah: A comparison of stochastic gradient boosting, generalized additive models, and tree-based methods

    USGS Publications Warehouse

    Moisen, Gretchen G.; Freeman, E.A.; Blackard, J.A.; Frescino, T.S.; Zimmermann, N.E.; Edwards, T.C.

    2006-01-01

    Many efforts are underway to produce broad-scale forest attribute maps by modelling forest class and structure variables collected in forest inventories as functions of satellite-based and biophysical information. Typically, variants of classification and regression trees implemented in RuleQuest's® See5 and Cubist (for binary and continuous responses, respectively) are the tools of choice in many of these applications. These tools are widely used in large remote sensing applications, but are not easily interpretable, do not have ties with survey estimation methods, and use proprietary unpublished algorithms. Consequently, three alternative modelling techniques were compared for mapping presence and basal area of 13 species located in the mountain ranges of Utah, USA. The modelling techniques compared included the widely used See5/Cubist, generalized additive models (GAMs), and stochastic gradient boosting (SGB). Model performance was evaluated using independent test data sets. Evaluation criteria for mapping species presence included specificity, sensitivity, Kappa, and area under the curve (AUC). Evaluation criteria for the continuous basal area variables included correlation and relative mean squared error. For predicting species presence (setting thresholds to maximize Kappa), SGB had higher values for the majority of the species for specificity and Kappa, while GAMs had higher values for the majority of the species for sensitivity. In evaluating resultant AUC values, GAM and/or SGB models had significantly better results than the See5 models where significant differences could be detected between models. For nine out of 13 species, basal area prediction results for all modelling techniques were poor (correlations less than 0.5 and relative mean squared errors greater than 0.8), but SGB provided the most stable predictions in these instances. 
SGB and Cubist performed equally well for modelling basal area for three species with moderate prediction success, while all three modelling tools produced comparably good predictions (correlation of 0.68 and relative mean squared error of 0.56) for one species. © 2006 Elsevier B.V. All rights reserved.

  14. Horizon sensors attitude errors simulation for the Brazilian Remote Sensing Satellite

    NASA Astrophysics Data System (ADS)

    Vicente de Brum, Antonio Gil; Ricci, Mario Cesar

    Remote sensing, meteorological, and other types of satellites require increasingly accurate Earth-related positioning. From past experience it is well known that the thermal horizon in the 15 micrometer band allows the local vertical to be determined at any time. This detection is done by horizon sensors, which are accurate instruments for Earth-referred attitude sensing and control whose performance is limited by systematic and random errors amounting to about 0.5 deg. Using the computer programs OBLATE, SEASON, ELECTRO and MISALIGN, developed at INPE to simulate four distinct facets of conical scanning horizon sensors, attitude errors are obtained for the Brazilian Remote Sensing Satellite (the first one, SSR-1, is scheduled to fly in 1996). These errors are due to the oblate shape of the Earth, seasonal and latitudinal variations of the 15 micrometer infrared radiation, electronic processing time delay, and misalignment of the sensor axis. The sensor-related attitude errors are thus properly quantified in this work and will, together with other systematic errors (for instance, ambient temperature variation), take part in the pre-launch analysis of the Brazilian Remote Sensing Satellite with respect to the horizon sensor performance.

  15. Estimating Evaporative Fraction From Readily Obtainable Variables in Mangrove Forests of the Everglades, U.S.A.

    NASA Technical Reports Server (NTRS)

    Yagci, Ali Levent; Santanello, Joseph A.; Jones, John; Barr, Jordan

    2017-01-01

    A remote-sensing-based model to estimate evaporative fraction (EF), the ratio of latent heat (LE; the energy equivalent of evapotranspiration, ET) to total available energy, from easily obtainable remotely sensed and meteorological parameters is presented. This research specifically addresses the shortcomings of existing ET retrieval methods such as calibration requirements of extensive accurate in situ micro-meteorological and flux tower observations, or of a large set of coarse-resolution or model-derived input datasets. The trapezoid model is capable of generating spatially varying EF maps from standard products such as land surface temperature (Ts), normalized difference vegetation index (NDVI), and daily maximum air temperature (Ta). The 2009 model results were validated at an eddy-covariance tower (Fluxnet ID: US-Skr) in the Everglades using Ts and NDVI products from Landsat as well as the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. Results indicate that the model accuracy is within the range of instrument uncertainty, and is dependent on the spatial resolution and selection of end-members (i.e. wet/dry edge). The most accurate results were achieved with the Ts from Landsat relative to the Ts from the MODIS flown on the Terra and Aqua platforms due to the fine spatial resolution of Landsat (30 m). The bias, mean absolute percentage error and root mean square percentage error were as low as 2.9% (3.0%), 9.8% (13.3%), and 12.1% (16.1%) for Landsat-based (MODIS-based) EF estimates, respectively. Overall, this methodology shows promise for bridging the gap between temporally limited ET estimates at Landsat scales and more complex and difficult to constrain global ET remote-sensing models.

  16. Estimating evaporative fraction from readily obtainable variables in mangrove forests of the Everglades, U.S.A.

    USGS Publications Warehouse

    Yagci, Ali Levent; Santanello, Joseph A.; Jones, John W.; Barr, Jordan G.

    2017-01-01

    A remote-sensing-based model to estimate evaporative fraction (EF) – the ratio of latent heat (LE; energy equivalent of evapotranspiration –ET–) to total available energy – from easily obtainable remotely-sensed and meteorological parameters is presented. This research specifically addresses the shortcomings of existing ET retrieval methods such as calibration requirements of extensive accurate in situ micrometeorological and flux tower observations or of a large set of coarse-resolution or model-derived input datasets. The trapezoid model is capable of generating spatially varying EF maps from standard products such as land surface temperature (Ts) normalized difference vegetation index (NDVI) and daily maximum air temperature (Ta). The 2009 model results were validated at an eddy-covariance tower (Fluxnet ID: US-Skr) in the Everglades using Ts and NDVI products from Landsat as well as the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. Results indicate that the model accuracy is within the range of instrument uncertainty, and is dependent on the spatial resolution and selection of end-members (i.e. wet/dry edge). The most accurate results were achieved with the Ts from Landsat relative to the Ts from the MODIS flown on the Terra and Aqua platforms due to the fine spatial resolution of Landsat (30 m). The bias, mean absolute percentage error and root mean square percentage error were as low as 2.9% (3.0%), 9.8% (13.3%), and 12.1% (16.1%) for Landsat-based (MODIS-based) EF estimates, respectively. Overall, this methodology shows promise for bridging the gap between temporally limited ET estimates at Landsat scales and more complex and difficult to constrain global ET remote-sensing models.

  17. Accuracy Dimensions in Remote Sensing

    NASA Astrophysics Data System (ADS)

    Barsi, Á.; Kugler, Zs.; László, I.; Szabó, Gy.; Abdulmutalib, H. M.

    2018-04-01

    The technological developments in remote sensing (RS) during the past decade has contributed to a significant increase in the size of data user community. For this reason data quality issues in remote sensing face a significant increase in importance, particularly in the era of Big Earth data. Dozens of available sensors, hundreds of sophisticated data processing techniques, countless software tools assist the processing of RS data and contributes to a major increase in applications and users. In the past decades, scientific and technological community of spatial data environment were focusing on the evaluation of data quality elements computed for point, line, area geometry of vector and raster data. Stakeholders of data production commonly use standardised parameters to characterise the quality of their datasets. Yet their efforts to estimate the quality did not reach the general end-user community running heterogeneous applications who assume that their spatial data is error-free and best fitted to the specification standards. The non-specialist, general user group has very limited knowledge how spatial data meets their needs. These parameters forming the external quality dimensions implies that the same data system can be of different quality to different users. The large collection of the observed information is uncertain in a level that can decry the reliability of the applications. Based on prior paper of the authors (in cooperation within the Remote Sensing Data Quality working group of ISPRS), which established a taxonomy on the dimensions of data quality in GIS and remote sensing domains, this paper is aiming at focusing on measures of uncertainty in remote sensing data lifecycle, focusing on land cover mapping issues. In the paper we try to introduce how quality of the various combination of data and procedures can be summarized and how services fit the users' needs. 
The present paper gives a theoretical overview of the issue; selected, practice-oriented approaches are evaluated as well, and widely used metrics such as Root Mean Squared Error (RMSE) and the confusion matrix are discussed. The authors present data quality features of well-defined and poorly defined objects. The central part of the study is land cover mapping: its accuracy management model is described, and the relevance and uncertainty measures of its influencing quality dimensions are presented. The theory is supported by a case study in which remote sensing technology supports the area-based agricultural subsidies of the European Union in the Hungarian administration.
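    The two metrics named in this abstract can be illustrated with a short sketch (pure NumPy; the toy land-cover classes and values below are invented for illustration, not taken from the paper):

```python
import numpy as np

def rmse(predicted, reference):
    """Root Mean Squared Error between predicted and reference values."""
    p, r = np.asarray(predicted, float), np.asarray(reference, float)
    return float(np.sqrt(np.mean((p - r) ** 2)))

def confusion_matrix(predicted, reference, n_classes):
    """n_classes x n_classes matrix; rows = reference class, cols = predicted."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for r, p in zip(reference, predicted):
        cm[r, p] += 1
    return cm

def overall_accuracy(cm):
    """Fraction of samples on the confusion-matrix diagonal."""
    return cm.trace() / cm.sum()

# Toy land-cover example: 0 = water, 1 = forest, 2 = crop
ref  = [0, 0, 1, 1, 1, 2, 2, 2]
pred = [0, 1, 1, 1, 2, 2, 2, 0]
cm = confusion_matrix(pred, ref, 3)
print(cm)                      # per-class agreement
print(overall_accuracy(cm))    # 5/8 = 0.625
print(rmse([1.2, 2.8], [1.0, 3.0]))  # continuous-valued RMSE, e.g. elevation
```

    The confusion matrix serves categorical products such as land cover maps, while RMSE serves continuous ones; the paper discusses both as external quality measures.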

  18. Georeferencing CAMS data: Polynomial rectification and beyond

    NASA Astrophysics Data System (ADS)

    Yang, Xinghe

    The Calibrated Airborne Multispectral Scanner (CAMS) is a sensor used in the commercial remote sensing program at NASA Stennis Space Center. In geographic applications of the CAMS data, accurate geometric rectification is essential for the analysis of the remotely sensed data and for the integration of the data into Geographic Information Systems (GIS). The commonly used rectification techniques such as the polynomial transformation and ortho rectification have been very successful in the field of remote sensing and GIS for most remote sensing data such as Landsat imagery, SPOT imagery and aerial photos. However, due to the geometric nature of the airborne line scanner which has high spatial frequency distortions, the polynomial model and the ortho rectification technique in current commercial software packages such as Erdas Imagine are not adequate for obtaining sufficient geometric accuracy. In this research, the geometric nature, especially the major distortions, of the CAMS data has been described. An analytical step-by-step geometric preprocessing has been utilized to deal with the potential high frequency distortions of the CAMS data. A generic sensor-independent photogrammetric model has been developed for the ortho-rectification of the CAMS data. Three generalized kernel classes and directional elliptical basis have been formulated into a rectification model of summation of multisurface functions, which is a significant extension to the traditional radial basis functions. The preprocessing mechanism has been fully incorporated into the polynomial, the triangle-based finite element analysis as well as the summation of multisurface functions. While the multisurface functions and the finite element analysis have the characteristics of localization, piecewise logic has been applied to the polynomial and photogrammetric methods, which can produce significant accuracy improvement over the global approach. 
A software module has been implemented with full integration of the data preprocessing and rectification techniques under the Erdas Imagine development environment. The final root mean square (RMS) errors for the test CAMS data are about two pixels, which is comparable to the random RMS errors present in the reference map coordinates.
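    A minimal sketch of the baseline polynomial rectification discussed in this record: fit a 2-D polynomial mapping from image to map coordinates at ground control points (GCPs) by least squares and report the RMS residual. The GCPs and warp below are synthetic assumptions, not CAMS data:

```python
import numpy as np

def poly_design(u, v, order=2):
    """Design matrix of 2-D polynomial terms u^i * v^j with i + j <= order."""
    cols = [u**i * v**j for i in range(order + 1) for j in range(order + 1 - i)]
    return np.column_stack(cols)

def fit_rectification(uv, xy, order=2):
    """Least-squares polynomial mapping from image to map coordinates at GCPs."""
    A = poly_design(uv[:, 0], uv[:, 1], order)
    coef, *_ = np.linalg.lstsq(A, xy, rcond=None)
    resid = xy - A @ coef
    rms = float(np.sqrt(np.mean(np.sum(resid**2, axis=1))))
    return coef, rms

# Synthetic GCPs in normalized image coordinates (normalizing aids conditioning)
rng = np.random.default_rng(0)
uv = rng.uniform(0, 10, size=(20, 2))
xy = np.column_stack([
    2.0 * uv[:, 0] + 0.3 * uv[:, 1] + 0.05 * uv[:, 0]**2,   # mild nonlinear warp
    -0.2 * uv[:, 0] + 1.5 * uv[:, 1],
])
coef, rms = fit_rectification(uv, xy, order=2)
print(rms)   # ~0 here, since an order-2 polynomial captures this warp exactly
```

    The record's point is that such a global polynomial cannot follow the high-frequency distortions of an airborne line scanner, which motivates the piecewise and multisurface extensions.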

  19. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.

  20. A High Performance LIA-Based Interface for Battery Powered Sensing Devices

    PubMed Central

    García-Romeo, Daniel; Valero, María R.; Medrano, Nicolás; Calvo, Belén; Celma, Santiago

    2015-01-01

    This paper proposes a battery-compatible electronic interface based on a general purpose lock-in amplifier (LIA) capable of recovering input signals up to the MHz range. The core is a novel ASIC fabricated in 1.8 V 0.18 µm CMOS technology, which contains a dual-phase analog lock-in amplifier consisting of carefully designed building blocks to allow configurability over a wide frequency range while maintaining low power consumption. It operates using square input signals. Hence, for battery-operated microcontrolled systems, where square reference and exciting signals can be generated by the embedded microcontroller, the system benefits from intrinsic advantages such as simplicity, versatility and reduction in power and size. Experimental results confirm the signal recovery capability with signal-to-noise power ratios down to −39 dB with relative errors below 0.07% up to 1 MHz. Furthermore, the system has been successfully tested measuring the response of a microcantilever-based resonant sensor, achieving similar results with better power-bandwidth trade-off compared to other LIAs based on commercial off-the-shelf (COTS) components and commercial LIA equipment. PMID:26437408

  2. Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malinowski, Kathleen T.; Fischell Department of Bioengineering, University of Maryland, College Park, MD; McAvoy, Thomas J.

    2012-04-01

Purpose: To investigate the effect of tumor site, measurement precision, tumor-surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor-surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased during the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor-surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3-3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.

  3. Low Power Operation of Temperature-Modulated Metal Oxide Semiconductor Gas Sensors.

    PubMed

    Burgués, Javier; Marco, Santiago

    2018-01-25

Mobile applications based on gas sensing present new opportunities for low-cost air quality monitoring, safety, and healthcare. Metal oxide semiconductor (MOX) gas sensors represent the most prominent technology for integration into portable devices, such as smartphones and wearables. Traditionally, MOX sensors have been continuously powered to increase the stability of the sensing layer. However, continuous power is not feasible in many battery-operated applications due to power consumption limitations or the intended intermittent device operation. This work benchmarks two low-power modes, duty-cycling and on-demand, against the continuous-power mode. The duty-cycling mode periodically turns the sensors on and off and represents a trade-off between power consumption and stability. On-demand operation achieves the lowest power consumption by powering the sensors only while taking a measurement. Twelve thermally modulated SB-500-12 (FIS Inc. Jacksonville, FL, USA) sensors were exposed to low concentrations of carbon monoxide (0-9 ppm) with environmental conditions, such as ambient humidity (15-75% relative humidity) and temperature (21-27 °C), varying within the indicated ranges. Partial Least Squares (PLS) models were built using calibration data, and the prediction error in external validation samples was evaluated during the two weeks following calibration. We found that on-demand operation produced a deformation of the sensor conductance patterns, which led to an increase in the prediction error by almost a factor of 5 as compared to continuous operation (2.2 versus 0.45 ppm). Applying a 10% duty-cycling operation of 10-min periods reduced this prediction error to a factor of 2 (0.9 versus 0.45 ppm). The proposed duty-cycling powering scheme saved up to 90% energy as compared to the continuous operating mode.
This low-power mode may be advantageous for applications that do not require continuous and periodic measurements, and which can tolerate slightly higher prediction errors.

  4. Three filters for visualization of phase objects with large variations of phase gradients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sagan, Arkadiusz; Antosiewicz, Tomasz J.; Szoplik, Tomasz

    2009-02-20

We propose three amplitude filters for visualization of phase objects. They interact with the spectra of pure-phase objects in the frequency plane and are based on tangent and error functions as well as an antisymmetric combination of square roots. The error function is a normalized form of the Gaussian function. The antisymmetric square-root filter is composed of two square-root filters to widen its spatial frequency spectral range. Their advantage over other known amplitude frequency-domain filters, such as linear or square-root graded ones, is that they allow high-contrast visualization of objects with large variations of phase gradients.

  5. Limb darkening and exoplanets - II. Choosing the best law for optimal retrieval of transit parameters

    NASA Astrophysics Data System (ADS)

    Espinoza, Néstor; Jordán, Andrés

    2016-04-01

Very precise measurements of exoplanet transit light curves from both ground- and space-based observatories now make it possible to fit the limb-darkening coefficients in the transit-fitting procedure rather than fix them to theoretical values. This strategy has been shown to give better results, as fixing the coefficients to theoretical values can give rise to important systematic errors that directly impact the physical properties of the system derived from such light curves, such as the planetary radius. However, studies of the effect of limb-darkening assumptions on the retrieved parameters have mostly focused on the widely used quadratic limb-darkening law, leaving out other proposed laws that are either simpler or better descriptions of model intensity profiles. In this work, we show that laws such as the logarithmic, square-root and three-parameter laws do a better job than the quadratic and linear laws when deriving parameters from transit light curves, both in terms of bias and precision, for a wide range of situations. We therefore recommend studying which law to use on a case-by-case basis. We provide code to guide the decision of when to use each of these laws and select the optimal one in a mean-square error sense, which we note has a dependence on both stellar and transit parameters. Finally, we demonstrate that the so-called exponential law is non-physical, as it typically produces negative intensities close to the limb and should therefore not be used.
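    Comparing limb-darkening laws in a mean-square error sense can be sketched as below. The "true" intensity profile here is an invented stand-in for a model-atmosphere profile, deliberately chosen so that the square-root law can represent it exactly; only the fitting logic follows the record:

```python
import numpy as np

mu = np.linspace(0.05, 1.0, 200)   # mu = cosine of the angle from disc centre

# Synthetic "model atmosphere" intensity profile (assumed, for illustration):
# it lies exactly in the span of the square-root law's terms.
I_true = 1.0 - 0.5 * (1 - mu) - 0.25 * (1 - np.sqrt(mu))

def law_mse(terms):
    """Least-squares fit of I(mu) = 1 - sum_k c_k f_k(mu); returns the MSE."""
    A = np.column_stack(terms)
    c, *_ = np.linalg.lstsq(A, 1.0 - I_true, rcond=None)
    return float(np.mean((A @ c - (1.0 - I_true)) ** 2))

mse_linear    = law_mse([1 - mu])                    # linear law
mse_quadratic = law_mse([1 - mu, (1 - mu) ** 2])     # quadratic law
mse_sqrt      = law_mse([1 - mu, 1 - np.sqrt(mu)])   # square-root law
print(mse_linear, mse_quadratic, mse_sqrt)
```

    For this assumed profile the square-root law attains essentially zero MSE while the linear and quadratic laws leave a residual, mirroring the paper's point that the best law depends on the intensity profile at hand.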

  6. Prior-knowledge Fitting of Accelerated Five-dimensional Echo Planar J-resolved Spectroscopic Imaging: Effect of Nonlinear Reconstruction on Quantitation.

    PubMed

    Iqbal, Zohaib; Wilson, Neil E; Thomas, M Albert

    2017-07-24

1H Magnetic Resonance Spectroscopic Imaging (SI) is a powerful tool capable of investigating metabolism in vivo from multiple regions. However, SI techniques are time consuming, and are therefore difficult to implement clinically. By applying non-uniform sampling (NUS) and compressed sensing (CS) reconstruction, it is possible to accelerate these scans while retaining key spectral information. One recently developed method that utilizes this type of acceleration is the five-dimensional echo planar J-resolved spectroscopic imaging (5D EP-JRESI) sequence, which is capable of obtaining two-dimensional (2D) spectra from three spatial dimensions. The prior-knowledge fitting (ProFit) algorithm is typically used to quantify 2D spectra in vivo; however, the effects of NUS and CS reconstruction on the quantitation results are unknown. This study utilized a simulated brain phantom to investigate the errors introduced through the acceleration methods. Errors (normalized root mean square error >15%) were found between metabolite concentrations after twelve-fold acceleration for several low concentration (<2 mM) metabolites. The Cramér Rao lower bound% (CRLB%) values, which are typically used for quality control, were not reflective of the increased quantitation error arising from acceleration. Finally, occipital white (OWM) and gray (OGM) human brain matter were quantified in vivo using the 5D EP-JRESI sequence with eight-fold acceleration.

  7. Spiral tracing on a touchscreen is influenced by age, hand, implement, and friction.

    PubMed

    Heintz, Brittany D; Keenan, Kevin G

    2018-01-01

    Dexterity impairments are well documented in older adults, though it is unclear how these influence touchscreen manipulation. This study examined age-related differences while tracing on high- and low-friction touchscreens using the finger or stylus. 26 young and 24 older adults completed an Archimedes spiral tracing task on a touchscreen mounted on a force sensor. Root mean square error was calculated to quantify performance. Root mean square error increased by 29.9% for older vs. young adults using the fingertip, but was similar to young adults when using the stylus. Although other variables (e.g., touchscreen usage, sensation, and reaction time) differed between age groups, these variables were not related to increased error in older adults while using their fingertip. Root mean square error also increased on the low-friction surface for all subjects. These findings suggest that utilizing a stylus and increasing surface friction may improve touchscreen use in older adults.

  8. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    NASA Technical Reports Server (NTRS)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
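    The distinction between the two criteria can be illustrated with a toy simulation. The one-parameter linear "crop model" below is invented for illustration, and only parameter uncertainty is simulated (the paper also averages over model structure and inputs); the decomposition into squared bias, model variance and observation noise follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-parameter "crop model": y = a * x + noise, with true a = 2.0
true_a, sigma_y = 2.0, 0.5
x = 3.0                                   # a single prediction situation
y_obs = true_a * x + rng.normal(0, sigma_y, 100_000)

# MSEP_fixed: one model with fixed structure/parameters (here a_hat = 1.8)
a_fixed = 1.8
msep_fixed = float(np.mean((y_obs - a_fixed * x) ** 2))

# MSEP_uncertain(X): average over the parameter's uncertainty distribution
a_draws = rng.normal(1.8, 0.1, 100_000)   # assumed parameter uncertainty
preds = a_draws * x
msep_uncertain = float(np.mean((y_obs - preds) ** 2))

# Decomposition: squared bias + model variance + observation-noise variance
sq_bias = (preds.mean() - true_a * x) ** 2
model_var = preds.var()
print(msep_fixed, msep_uncertain, sq_bias + model_var + sigma_y**2)
```

    Here MSEP_uncertain(X) exceeds MSEP_fixed by exactly the model-variance term, which is the component a fixed-model evaluation cannot see.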

  9. The least-squares mixing models to generate fraction images derived from remote sensing multispectral data

    NASA Technical Reports Server (NTRS)

    Shimabukuro, Yosio Edemir; Smith, James A.

    1991-01-01

Constrained-least-squares and weighted-least-squares mixing models for generating fraction images derived from remote sensing multispectral data are presented. An experiment considering three components within the pixels (eucalyptus, soil understory, and shade) was performed. The generated fraction images for shade (shade image) derived from these two methods were compared in terms of performance and computing time. The derived shade images are related to the observed variation in forest structure, i.e., the fraction of inferred shade in the pixel is related to different eucalyptus ages.
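    A minimal sketch of least-squares linear unmixing in the spirit of this record: per-pixel fractions are solved from endmember spectra, with the sum-to-one constraint imposed by a heavily weighted extra equation (a common trick; the paper's constrained and weighted formulations differ in detail). The endmember values are invented:

```python
import numpy as np

# Assumed endmember signatures (rows: 4 spectral bands; columns: eucalyptus,
# soil/understory, shade). Values are invented for illustration.
E = np.array([[0.30, 0.55, 0.05],
              [0.45, 0.50, 0.04],
              [0.60, 0.45, 0.03],
              [0.35, 0.60, 0.02]])

true_f = np.array([0.5, 0.3, 0.2])   # true fractions, summing to one
pixel = E @ true_f                   # noise-free mixed-pixel spectrum

def unmix(E, pixel, weight=1e3):
    """Least-squares unmixing with a heavily weighted sum-to-one equation."""
    A = np.vstack([E, weight * np.ones(E.shape[1])])
    b = np.append(pixel, weight)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

f_hat = unmix(E, pixel)
shade_image_value = f_hat[2]   # the "shade" fraction for this pixel
print(f_hat)
```

    Repeating this per pixel and keeping the shade component yields a fraction ("shade") image of the kind the paper compares across methods.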

  10. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative to SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  11. Some Results on Mean Square Error for Factor Score Prediction

    ERIC Educational Resources Information Center

    Krijnen, Wim P.

    2006-01-01

For the confirmatory factor model a series of inequalities is given with respect to the mean square error (MSE) of three main factor score predictors. The eigenvalues of these MSE matrices are a monotonic function of the eigenvalues of the matrix Γρ = Θ^(1/2) Λρ′ Ψρ…

  12. Weighted linear regression using D2H and D2 as the independent variables

    Treesearch

    Hans T. Schreuder; Michael S. Williams

    1998-01-01

    Several error structures for weighted regression equations used for predicting volume were examined for 2 large data sets of felled and standing loblolly pine trees (Pinus taeda L.). The generally accepted model with variance of error proportional to the value of the covariate squared ( D2H = diameter squared times height or D...

  13. The Relationship between Root Mean Square Error of Approximation and Model Misspecification in Confirmatory Factor Analysis Models

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2012-01-01

    The fit index root mean square error of approximation (RMSEA) is extremely popular in structural equation modeling. However, its behavior under different scenarios remains poorly understood. The present study generates continuous curves where possible to capture the full relationship between RMSEA and various "incidental parameters," such as…

  14. A method of bias correction for maximal reliability with dichotomous measures.

    PubMed

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  15. Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values

    DTIC Science & Technology

    2016-12-01

MMSE (minimum mean-square error) target sample estimation using non-orthogonal basis... orthogonal, they can still be used in a minimum mean-square error (MMSE) estimator that models the object echo as a weighted sum of the multi-aspect basis... problem. Introduction: Minimum mean-square error (MMSE) estimation is applied to target imaging with synthetic aperture
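    A hedged sketch of MMSE estimation with a non-orthogonal basis, as in this report's abstract: overlapping Gaussian templates stand in for the multi-aspect echo basis, and the estimator is the standard linear-Gaussian (Wiener) solution. All signal parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Non-orthogonal basis: 8 overlapping Gaussian "multi-aspect" echo templates
n, k = 64, 8
t = np.arange(n)
B = np.column_stack([np.exp(-0.5 * ((t - 8*j - 4) / 4.0) ** 2) for j in range(k)])

w_true = rng.normal(0, 1, k)     # true basis weights (unit-variance prior)
sigma = 0.1                      # measurement-noise standard deviation
y = B @ w_true + rng.normal(0, sigma, n)

# Linear MMSE (Wiener) estimate under the unit-variance Gaussian prior:
#   w_hat = (B^T B + sigma^2 I)^-1 B^T y
# valid even though the columns of B overlap (are non-orthogonal).
w_hat = np.linalg.solve(B.T @ B + sigma**2 * np.eye(k), B.T @ y)

rel_err = float(np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true))
print(rel_err)   # small relative error despite the overlapping basis
```

    Non-orthogonality only couples the normal equations; it does not prevent the MMSE estimator from resolving the weights, which is the point the snippet above makes.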

  16. On the appropriateness of applying chi-square distribution based confidence intervals to spectral estimates of helicopter flyover data

    NASA Technical Reports Server (NTRS)

    Rutledge, Charles K.

    1988-01-01

The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The appropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
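    The chi-square confidence interval at issue here can be computed as follows (stdlib only; the chi-square quantile uses the Wilson-Hilferty approximation, and the "2 degrees of freedom per averaged segment" rule is the usual approximation for nontonal, bias-free estimates):

```python
from statistics import NormalDist

def chi2_ppf(p, dof):
    """Chi-square quantile via the Wilson-Hilferty approximation (stdlib only)."""
    z = NormalDist().inv_cdf(p)
    return dof * (1 - 2 / (9 * dof) + z * (2 / (9 * dof)) ** 0.5) ** 3

def psd_confidence_interval(p_hat, dof, alpha=0.05):
    """(1 - alpha) confidence interval for a spectral estimate with `dof`
    equivalent degrees of freedom."""
    lo = dof * p_hat / chi2_ppf(1 - alpha / 2, dof)
    hi = dof * p_hat / chi2_ppf(alpha / 2, dof)
    return lo, hi

# An estimate averaged over 16 independent segments has about 2 * 16 = 32 dof
lo, hi = psd_confidence_interval(p_hat=1.0, dof=32, alpha=0.05)
print(lo, hi)   # roughly (0.65, 1.75): asymmetric about the estimate
```

    The paper's finding is that this construction fails at tonally associated frequencies, where bias error makes the estimate distribution depart from chi-square.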

  17. Spatial Statistical Data Fusion (SSDF)

    NASA Technical Reports Server (NTRS)

    Braverman, Amy J.; Nguyen, Hai M.; Cressie, Noel

    2013-01-01

As remote sensing for scientific purposes has transitioned from an experimental technology to an operational one, the selection of instruments has become more coordinated, so that the scientific community can exploit complementary measurements. However, technological and scientific heterogeneity across devices means that the statistical characteristics of the data they collect are different. The challenge addressed here is how to combine heterogeneous remote sensing data sets in a way that yields optimal statistical estimates of the underlying geophysical field, and provides rigorous uncertainty measures for those estimates. Different remote sensing data sets may have different spatial resolutions, different measurement error biases and variances, and other disparate characteristics. A state-of-the-art spatial statistical model was used to relate the true, but not directly observed, geophysical field to noisy, spatial aggregates observed by remote sensing instruments. The spatial covariances of the true field and the covariances of the true field with the observations were modeled. The observations are spatial averages of the true field values, over pixels, with different measurement noise superimposed. A kriging framework is used to infer optimal (minimum mean squared error and unbiased) estimates of the true field at point locations from pixel-level, noisy observations. A key feature of the spatial statistical model is the spatial mixed effects model that underlies it. The approach models the spatial covariance function of the underlying field using linear combinations of basis functions of fixed size. Approaches based on kriging require the inversion of very large spatial covariance matrices, and this is usually done by making simplifying assumptions about spatial covariance structure that simply do not hold for geophysical variables. In contrast, this method does not require these assumptions, and is also computationally much faster.
This method is fundamentally different from other approaches to data fusion for remote sensing data because it is inferential rather than merely descriptive. All approaches combine data in a way that minimizes some specified loss function; most of these loss functions are more or less ad hoc criteria, based on what looks good to the eye or relating only to the data at hand.
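    The kriging framework mentioned in this record can be sketched minimally as below. This uses an assumed exponential covariance, point observations rather than true pixel averages, and simple kriging rather than the paper's spatial mixed effects model, so it illustrates only the MMSE-with-uncertainty idea:

```python
import numpy as np

rng = np.random.default_rng(4)

def cov(d, sill=1.0, length=2.0):
    """Assumed exponential covariance model of the underlying field."""
    return sill * np.exp(-d / length)

pts = rng.uniform(0, 10, size=(30, 2))   # observation ("pixel centre") locations
x0 = np.array([5.0, 5.0])                # point at which to predict

D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
C = cov(D)                                     # obs-obs covariances
c0 = cov(np.linalg.norm(pts - x0, axis=1))     # obs-target covariances

noise_var = 0.05                               # measurement-error variance
z = np.linalg.cholesky(C + 1e-10 * np.eye(30)) @ rng.normal(0, 1, 30)
z_obs = z + rng.normal(0, noise_var**0.5, 30)  # noisy observations of the field

w = np.linalg.solve(C + noise_var * np.eye(30), c0)   # kriging weights
z_hat = w @ z_obs                                     # MMSE estimate at x0
krig_var = cov(0.0) - w @ c0                          # its error variance
print(z_hat, krig_var)
```

    The kriging variance is the rigorous uncertainty measure the abstract refers to; the paper's contribution is making the covariance inversion tractable at remote-sensing scale via fixed-size basis functions.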

  18. Eta Squared, Partial Eta Squared, and Misreporting of Effect Size in Communication Research.

    ERIC Educational Resources Information Center

    Levine, Timothy R.; Hullett, Craig R.

    2002-01-01

    Alerts communication researchers to potential errors stemming from the use of SPSS (Statistical Package for the Social Sciences) to obtain estimates of eta squared in analysis of variance (ANOVA). Strives to clarify issues concerning the development and appropriate use of eta squared and partial eta squared in ANOVA. Discusses the reporting of…
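    The difference between the two effect sizes discussed in this record reduces to which sums of squares appear in the denominator. A sketch with hypothetical ANOVA sums of squares:

```python
def eta_squared(ss_effect, ss_total):
    """Classical eta squared: effect variance over *total* variance."""
    return ss_effect / ss_total

def partial_eta_squared(ss_effect, ss_error):
    """Partial eta squared: effect variance over effect + error variance only."""
    return ss_effect / (ss_effect + ss_error)

# Hypothetical two-factor ANOVA sums of squares
ss_a, ss_b, ss_error = 20.0, 30.0, 50.0
ss_total = ss_a + ss_b + ss_error

print(eta_squared(ss_a, ss_total))           # 0.2
print(partial_eta_squared(ss_a, ss_error))   # 20 / 70, about 0.286
```

    In a one-way design the two coincide; once other factors absorb variance, partial eta squared is at least as large, so reporting it under the label "eta squared" inflates apparent effect sizes, which is the misreporting at issue.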

  19. PIXELS: Using field-based learning to investigate students' concepts of pixels and sense of scale

    NASA Astrophysics Data System (ADS)

    Pope, A.; Tinigin, L.; Petcovic, H. L.; Ormand, C. J.; LaDue, N.

    2015-12-01

Empirical work over the past decade supports the notion that a high level of spatial thinking skill is critical to success in the geosciences. Spatial thinking incorporates a host of sub-skills such as mentally rotating an object, imagining the inside of a 3D object based on outside patterns, unfolding a landscape, and disembedding critical patterns from background noise. In this study, we focus on sense of scale, which refers to how an individual quantifies space and is thought to develop through kinesthetic experiences. Remote sensing data are increasingly being used for wide-reaching and high impact research. A sense of scale is critical to many areas of the geosciences, including understanding and interpreting remotely sensed imagery. In this exploratory study, students (N=17) attending the Juneau Icefield Research Program participated in a 3-hour exercise designed to study how a field-based activity might impact their sense of scale and their conceptions of pixels in remotely sensed imagery. Prior to the activity, students had an introductory remote sensing lecture and completed the Sense of Scale inventory. Students walked and/or skied the perimeter of several pixel types, including a 1 m square (representing a WorldView sensor's pixel), a 30 m square (a Landsat pixel) and a 500 m square (a MODIS pixel). The group took reflectance measurements using a field radiometer as they physically traced out the pixel. The exercise was repeated in two different areas, one with homogeneous reflectance, and another with heterogeneous reflectance. After the exercise, students again completed the Sense of Scale instrument and a demographic survey. This presentation will share the effects and efficacy of the field-based intervention to teach remote sensing concepts and to investigate potential relationships between students' concepts of pixels and sense of scale.

  20. Proprioceptive deficit in individuals with unilateral tearing of the anterior cruciate ligament after active evaluation of the sense of joint position.

    PubMed

    Cossich, Victor; Mallrich, Frédéric; Titonelli, Victor; de Sousa, Eduardo Branco; Velasques, Bruna; Salles, José Inácio

    2014-01-01

    To ascertain whether the proprioceptive deficit in the sense of joint position continues to be present when patients with a limb presenting a deficient anterior cruciate ligament (ACL) are assessed by testing their active reproduction of joint position, in comparison with the contralateral limb. Twenty patients with unilateral ACL tearing participated in the study. Their active reproduction of joint position in the limb with the deficient ACL and in the healthy contralateral limb was tested. Meta-positions of 20% and 50% of the maximum joint range of motion were used. Proprioceptive performance was determined through the values of the absolute error, variable error and constant error. Significant differences in absolute error were found at both of the positions evaluated, and in constant error at 50% of the maximum joint range of motion. When evaluated in terms of absolute error, the proprioceptive deficit continues to be present even when an active evaluation of the sense of joint position is made. Consequently, this sense involves activity of both intramuscular and tendon receptors.
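    The three proprioception measures used in this record can be sketched directly (the trial values below are hypothetical):

```python
import numpy as np

def joint_position_errors(reproduced, target):
    """Absolute, constant, and variable error of repeated position reproductions."""
    d = np.asarray(reproduced, float) - target
    absolute_error = float(np.mean(np.abs(d)))  # overall accuracy
    constant_error = float(np.mean(d))          # signed bias (over-/undershoot)
    variable_error = float(np.std(d))           # trial-to-trial consistency
    return absolute_error, constant_error, variable_error

# Hypothetical trials: target knee angle 30 degrees, five active reproductions
ae, ce, ve = joint_position_errors([32.0, 28.5, 31.0, 33.0, 29.5], 30.0)
print(ae, ce, ve)   # 1.6, 0.8, ~1.63
```

    Absolute error can stay large even when constant error is near zero, which is why the study reports all three when comparing the ACL-deficient and contralateral limbs.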

  1. Estimating spatially distributed soil texture using time series of thermal remote sensing - a case study in central Europe

    NASA Astrophysics Data System (ADS)

    Müller, Benjamin; Bernhardt, Matthias; Jackisch, Conrad; Schulz, Karsten

    2016-09-01

    For understanding water and solute transport processes, knowledge about the respective hydraulic properties is necessary. Commonly, hydraulic parameters are estimated via pedo-transfer functions using soil texture data to avoid cost-intensive measurements of hydraulic parameters in the laboratory. However, current soil texture information is typically only available at a coarse spatial resolution of 250 to 1000 m. Here, a method is presented to derive high-resolution (15 m) spatial topsoil texture patterns for the meso-scale Attert catchment (Luxembourg, 288 km2) from 28 images of ASTER (advanced spaceborne thermal emission and reflection radiometer) thermal remote sensing. A principal component analysis of the images reveals the most dominant thermal patterns (principal components, PCs), which are related to 212 fractional soil texture samples. Within a multiple linear regression framework, distributed soil texture information is estimated and related uncertainties are assessed. An overall root mean squared error (RMSE) of 12.7 percentage points (pp) lies well within and even below the range of recent studies on soil texture estimation, while requiring sparser sample setups and a less diverse set of basic spatial input. This approach will improve the generation of spatially distributed topsoil maps, particularly for hydrologic modeling purposes, and will expand the usage of thermal remote sensing products.
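    The core of this approach (dominant patterns from a PCA of the image stack, related to sparse texture samples by multiple linear regression) can be sketched as follows; all data here are synthetic stand-ins for the ASTER scenes and the 212 texture samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 28 "thermal images" over 500 pixels.
n_img, n_pix = 28, 500
images = rng.normal(size=(n_img, n_pix))

# PCA of the image stack: the rows of Vt are the dominant thermal patterns.
X = images - images.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
scores = Vt[:3].T                    # per-pixel scores on the 3 leading PCs

# Hypothetical sand-fraction samples (%) at 212 sampled pixels, generated
# here so that they depend linearly on the PC scores plus noise.
idx = rng.choice(n_pix, size=212, replace=False)
sand = 40.0 + scores[idx] @ np.array([8.0, -5.0, 3.0]) + rng.normal(0, 2.0, 212)

# Multiple linear regression of the samples on the PC scores, then
# prediction of texture at every pixel and an RMSE at the sample sites.
A = np.column_stack([np.ones(len(idx)), scores[idx]])
coef, *_ = np.linalg.lstsq(A, sand, rcond=None)
pred = np.column_stack([np.ones(n_pix), scores]) @ coef

rmse = np.sqrt(np.mean((pred[idx] - sand) ** 2))
print(f"in-sample RMSE: {rmse:.2f} percentage points")
```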

  2. Mapping of Polar Areas Based on High-Resolution Satellite Images: The Example of the Henryk Arctowski Polish Antarctic Station

    NASA Astrophysics Data System (ADS)

    Kurczyński, Zdzisław; Różycki, Sebastian; Bylina, Paweł

    2017-12-01

    To produce orthophotomaps or digital elevation models, the most commonly used method is photogrammetric measurement. However, the use of aerial images is not easy in polar regions for logistical reasons. In these areas, remote sensing data acquired from satellite systems are much more useful. This paper presents the basic technical requirements of the different products that can be obtained (in particular orthoimages and digital elevation models (DEMs)) using Very-High-Resolution Satellite (VHRS) images. The study area was situated in the vicinity of the Henryk Arctowski Polish Antarctic Station on the Western Shore of Admiralty Bay, King George Island, Western Antarctic. Image processing was applied to two triplets of images acquired by Pléiades 1A and 1B in March 2013. The results of generating orthoimages from the Pléiades systems without control points showed that the proposed method can achieve a Root Mean Squared Error (RMSE) of 3-9 m. The presented Pléiades images are useful for thematic remote sensing analysis and processing of measurements. Using satellite images to produce remote sensing products for polar regions is highly beneficial and reliable and compares well with more expensive airborne photographs or field surveys.

  3. An Extended Kriging Method to Interpolate Near-Surface Soil Moisture Data Measured by Wireless Sensor Networks

    PubMed Central

    Zhang, Jialin; Li, Xiuhong; Yang, Rongjin; Liu, Qiang; Zhao, Long; Dou, Baocheng

    2017-01-01

    In the practice of interpolating near-surface soil moisture measured by a wireless sensor network (WSN) grid, traditional Kriging methods with auxiliary variables, such as Co-kriging and Kriging with external drift (KED), cannot achieve satisfactory results because of the heterogeneity of soil moisture and its low correlation with the auxiliary variables. This study developed an Extended Kriging method to interpolate with the aid of remote sensing images. The underlying idea is to extend traditional Kriging by introducing spectral variables and operating on the combined spatial-spectral space. The algorithm has been applied to WSN-measured soil moisture data in the HiWATER campaign to generate daily maps from 10 June to 15 July 2012. For comparison, three traditional Kriging methods were applied: Ordinary Kriging (OK), which uses WSN data only, and Co-kriging and KED, both of which integrate remote sensing data as a covariate. Visual inspection indicates that the result from Extended Kriging shows more spatial detail than those of OK, Co-kriging, and KED. The Root Mean Square Error (RMSE) of Extended Kriging was found to be the smallest among the four interpolation results. This indicates that the proposed method has advantages in combining remote sensing information and ground measurements in soil moisture interpolation. PMID:28617351
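    The underlying idea, interpolating in a combined spatial-spectral space, can be illustrated with a simplified stand-in: inverse-distance weighting in that combined space rather than the actual Extended Kriging system. The node locations, spectral values, and the spectral scaling weight below are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical WSN nodes: spatial coordinates (km) plus one spectral
# variable per node taken from a remote sensing image (all made up).
n_nodes = 40
xy = rng.uniform(0.0, 10.0, size=(n_nodes, 2))
spectral = rng.uniform(0.0, 1.0, size=(n_nodes, 1))
moisture = 0.2 + 0.1 * spectral[:, 0] + rng.normal(0, 0.01, n_nodes)

# Combined spatial-spectral coordinates: the spectral axis is scaled so
# that spectral similarity and spatial proximity both drive the weights.
w_spec = 5.0                                  # assumed scaling weight
coords = np.hstack([xy, w_spec * spectral])

def interpolate(target_xy, target_spec):
    """Inverse-distance weighting in the combined space (a simplified
    stand-in for the Extended Kriging of the abstract)."""
    target = np.append(target_xy, w_spec * target_spec)
    d = np.linalg.norm(coords - target, axis=1)
    w = 1.0 / (d + 1e-9) ** 2
    return float(np.sum(w * moisture) / np.sum(w))

est = interpolate([5.0, 5.0], 0.5)
print(f"estimated soil moisture: {est:.3f}")
```

    The design point is that two pixels with similar reflectance borrow strength from each other even when they are not spatial neighbors, which is what lets the remote sensing image add information beyond plain OK.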

  4. Field Calibration of Wind Direction Sensor to the True North and Its Application to the Daegwanryung Wind Turbine Test Sites

    PubMed Central

    Lee, Jeong Wan

    2008-01-01

    This paper proposes a field calibration technique for aligning a wind direction sensor to the true north. The proposed technique uses the synchronized measurements of captured images by a camera, and the output voltage of a wind direction sensor. The true wind direction was evaluated through image processing techniques from the captured picture of the sensor in a least-squares sense. Then, the evaluated true value was compared with the measured output voltage of the sensor. This technique solves the discordance problem of the wind direction sensor that arises when installing a meteorological mast. For this proposed technique, some uncertainty analyses are presented and the calibration accuracy is discussed. Finally, the proposed technique was applied to the real meteorological mast at the Daegwanryung test site, and statistical analysis of the experimental tests estimated the values of the stable misalignment and the uncertainty level. In a strict sense, it is confirmed that the error range of the misalignment from true north can be expected to decrease within the credibility level. PMID:27873957
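    The offset between camera-derived true directions and vane readings can be estimated in a least-squares sense. A minimal sketch with simulated paired observations (the 12-degree misalignment and noise level are assumptions), using a circular mean so the 0/360-degree wrap is handled correctly:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated paired observations: camera-derived true wind directions and
# vane readings offset by a fixed (unknown) misalignment, in degrees.
true_offset = 12.0                            # assumed misalignment
true_dir = rng.uniform(0.0, 360.0, size=50)
measured = (true_dir + true_offset + rng.normal(0, 1.5, size=50)) % 360

# Least-squares estimate of the offset; averaging is done on the unit
# circle so that the 0/360-degree wrap-around is handled correctly.
diff = np.deg2rad(measured - true_dir)
offset_hat = np.rad2deg(np.arctan2(np.sin(diff).mean(), np.cos(diff).mean()))

print(f"estimated misalignment from true north: {offset_hat:.2f} deg")
```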

  5. Quantum-classical boundary for precision optical phase estimation

    NASA Astrophysics Data System (ADS)

    Birchall, Patrick M.; O'Brien, Jeremy L.; Matthews, Jonathan C. F.; Cable, Hugo

    2017-12-01

    Understanding the fundamental limits on the precision to which an optical phase can be estimated is of key interest for many investigative techniques utilized across science and technology. We study the estimation of a fixed optical phase shift due to a sample which has an associated optical loss, and compare phase estimation strategies using classical and nonclassical probe states. These comparisons are based on the attainable (quantum) Fisher information calculated per number of photons absorbed or scattered by the sample throughout the sensing process. We find that for a given number of incident photons upon the unknown phase, nonclassical techniques in principle provide less than a 20% reduction in root-mean-square error (RMSE) in comparison with ideal classical techniques in multipass optical setups. Using classical techniques in a different optical setup that we analyze, which incorporates additional stages of interference during the sensing process, the achievable reduction in RMSE afforded by nonclassical techniques falls to only ≃4%. We explain how these conclusions change when nonclassical techniques are compared to classical probe states in nonideal multipass optical setups, with additional photon losses due to the measurement apparatus.

  6. A Fabry-Perot Interferometry Based MRI-Compatible Miniature Uniaxial Force Sensor for Percutaneous Needle Placement

    PubMed Central

    Shang, Weijian; Su, Hao; Li, Gang; Furlong, Cosme; Fischer, Gregory S.

    2014-01-01

    Robot-assisted surgical procedures, taking advantage of the high soft tissue contrast and real-time imaging of magnetic resonance imaging (MRI), are developing rapidly. However, it is crucial to maintain tactile force feedback in MRI-guided needle-based procedures. This paper presents a Fabry-Perot interference (FPI) based system of an MRI-compatible fiber optic sensor which has been integrated into a piezoelectrically actuated robot for prostate cancer biopsy and brachytherapy in a 3T MRI scanner. The opto-electronic sensing system design was miniaturized to fit inside an MRI-compatible robot controller enclosure. A flexure mechanism was designed that integrates the FPI sensor fiber for measuring needle insertion force, and finite element analysis was performed to optimize the force-deformation relationship. The compact, low-cost FPI sensing system was integrated into the robot and calibration was conducted. The root mean square (RMS) error of the calibration over the range of 0–10 Newtons was 0.318 Newtons compared to the theoretical model, which has been proven sufficient for robot control and teleoperation. PMID:25126153
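    A calibration of this kind reduces to fitting a linear reading-to-force map and reporting the RMS residual. The sketch below uses synthetic calibration data; the gain, offset, and noise level are assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical calibration data: applied force (N) vs. sensor output, with
# an assumed gain, offset, and noise level (not the paper's values).
force = np.linspace(0.0, 10.0, 21)
reading = 0.85 * force + 0.1 + rng.normal(0, 0.25, size=force.size)

# Least-squares linear calibration: map the sensor reading back to force.
A = np.column_stack([reading, np.ones_like(reading)])
(gain, bias), *_ = np.linalg.lstsq(A, force, rcond=None)
force_hat = gain * reading + bias

rms = np.sqrt(np.mean((force_hat - force) ** 2))
print(f"calibration RMS error over 0-10 N: {rms:.3f} N")
```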

  7. Reconstructing spatial-temporal continuous MODIS land surface temperature using the DINEOF method

    NASA Astrophysics Data System (ADS)

    Zhou, Wang; Peng, Bin; Shi, Jiancheng

    2017-10-01

    Land surface temperature (LST) is one of the key states of the Earth surface system. Remote sensing has the capability to obtain high-frequency LST observations with global coverage. However, mainly due to cloud cover, there are always gaps in remotely sensed LST products, which hampers the application of satellite-based LST in data-driven modeling of surface energy and water exchange processes. We explored the suitability of the data interpolating empirical orthogonal functions (DINEOF) method for Moderate Resolution Imaging Spectroradiometer (MODIS) LST reconstruction around Ali on the Tibetan Plateau. To validate the reconstruction accuracy, synthetic clouds during both daytime and nighttime are created. With DINEOF reconstruction, the root mean square error and bias under synthetic clouds in daytime are 4.57 and -0.0472 K, respectively, and during the nighttime are 2.30 and 0.0045 K, respectively. The DINEOF method recovers the spatial pattern of LST well. Time-series analysis of LST before and after DINEOF reconstruction from 2002 to 2016 shows that the annual and interannual variabilities of LST can be well reconstructed by the DINEOF method.
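    The DINEOF idea (fill gaps with a first guess, compute a truncated EOF/SVD reconstruction, replace only the gap values, and iterate to convergence) can be sketched on a synthetic low-rank temperature field with artificial cloud gaps; all data and the number of retained modes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "LST" matrix (time x pixels): a seasonal cycle with per-pixel
# amplitude plus noise, i.e., an essentially low-rank field.
t = np.linspace(0, 2 * np.pi, 60)[:, None]
amp = rng.uniform(5.0, 15.0, size=(1, 80))
field = 290.0 + np.sin(t) @ amp + rng.normal(0, 0.3, size=(60, 80))

# Punch synthetic "cloud" gaps into about 20% of the entries.
gaps = rng.random(field.shape) < 0.2
data = np.where(gaps, np.nan, field)

# DINEOF-style loop: initialize gaps with the global mean, then repeat a
# truncated-SVD (EOF) reconstruction until the gap values converge.
filled = np.where(gaps, np.nanmean(data), data)
for _ in range(100):
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    recon = (U[:, :2] * s[:2]) @ Vt[:2]          # keep 2 leading EOF modes
    updated = np.where(gaps, recon, data)
    if np.max(np.abs(updated - filled)) < 1e-6:
        filled = updated
        break
    filled = updated

rmse = np.sqrt(np.mean((filled[gaps] - field[gaps]) ** 2))
print(f"gap-reconstruction RMSE: {rmse:.2f} K")
```

    Validating against the known values under the synthetic gaps, as done here, mirrors the synthetic-cloud validation described in the abstract.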

  8. POOLMS: A computer program for fitting and model selection for two level factorial replication-free experiments

    NASA Technical Reports Server (NTRS)

    Amling, G. E.; Holms, A. G.

    1973-01-01

    A computer program is described that performs a statistical multiple-decision procedure called chain pooling. The number of mean squares assigned to error variance is conditioned on the relative magnitudes of the mean squares. The model selection is done according to user-specified levels of type 1 or type 2 error probabilities.

  9. Validating Clusters with the Lower Bound for Sum-of-Squares Error

    ERIC Educational Resources Information Center

    Steinley, Douglas

    2007-01-01

    Given that a minor condition holds (e.g., the number of variables is greater than the number of clusters), a nontrivial lower bound for the sum-of-squares error criterion in K-means clustering is derived. By calculating the lower bound for several different situations, a method is developed to determine the adequacy of cluster solution based on…

  10. A suggestion for computing objective function in model calibration

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang

    2014-01-01

    A parameter-optimization process (model calibration) is usually required for numerical model applications, which involves the use of an objective function to determine the model cost (model-data errors). The sum of square errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, the ‘square error’ calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies (a hydrological model calibration and a biogeochemical model calibration) to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that the ‘absolute error’ metrics (SAR and SARD) are superior to the ‘square error’ metrics (SSR and SSRD) in calculating the objective function for model calibration, with SAR performing best (least error and highest efficiency). This study suggests that SSR may be overused in real applications, and that SAR is a reasonable choice in common optimization implementations that do not emphasize either high or low values (e.g., modeling for supporting resources management).
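    The four candidate objective functions are simple to compute, and a single outlier shows why the 'square error' variants are the more sensitive choice. A minimal sketch with made-up observations:

```python
import numpy as np

def objectives(obs, sim):
    """The four candidate objective functions compared in the study."""
    resid = sim - obs
    return {
        "SSR":  np.sum(resid ** 2),            # sum of squared errors
        "SAR":  np.sum(np.abs(resid)),         # sum of absolute errors
        "SSRD": np.sum((resid / obs) ** 2),    # sum of squared relative deviation
        "SARD": np.sum(np.abs(resid / obs)),   # sum of absolute relative deviation
    }

obs = np.array([10.0, 12.0, 11.0, 13.0, 12.0])
uniform = obs + 1.0                                   # error of 1 everywhere
spiky = obs + np.array([0.0, 0.0, 5.0, 0.0, 0.0])     # one extreme error

# Both simulations have the same total absolute error, but the single
# outlier inflates the squared-error objective fivefold.
g, s = objectives(obs, uniform), objectives(obs, spiky)
print(g["SAR"], s["SAR"], g["SSR"], s["SSR"])
```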

  11. Evaluation of Cartosat-1 Multi-Scale Digital Surface Modelling Over France

    PubMed Central

    Gianinetto, Marco

    2009-01-01

    On 5 May 2005, the Indian Space Research Organization launched Cartosat-1, the eleventh satellite of its constellation, dedicated to the stereo viewing of the Earth's surface for terrain modeling and large-scale mapping, from the Satish Dhawan Space Centre (India). In early 2006, the Indian Space Research Organization started the Cartosat-1 Scientific Assessment Programme, jointly established with the International Society for Photogrammetry and Remote Sensing. Within this framework, this study evaluated the capabilities of digital surface modeling from Cartosat-1 stereo data for the French test sites of Mausanne les Alpilles and Salon de Provence. The investigation pointed out that for hilly territories it is possible to produce high-resolution digital surface models with a root mean square error less than 7.1 m and a linear error at 90% confidence level less than 9.5 m. The accuracy of the generated digital surface models also fulfilled the requirements of the French Reference 3D®, so Cartosat-1 data may be used to produce or update such kinds of products. PMID:22412311

  12. Bandwidth efficient channel estimation method for airborne hyperspectral data transmission in sparse doubly selective communication channels

    NASA Astrophysics Data System (ADS)

    Vahidi, Vahid; Saberinia, Ebrahim; Regentova, Emma E.

    2017-10-01

    A channel estimation (CE) method based on compressed sensing (CS) is proposed to estimate the sparse and doubly selective (DS) channel for hyperspectral image transmission from unmanned aircraft vehicles to ground stations. The proposed method contains three steps: (1) a priori estimation of the channel by orthogonal matching pursuit (OMP), (2) calculation of the linear minimum mean square error (LMMSE) estimate of the received pilots given the estimated channel, and (3) estimation of the complex amplitudes and Doppler shifts of the channel using the enhanced received pilot data in a second round of a CS algorithm. The proposed method is named DS-LMMSE-OMP, and its performance is evaluated by simulating transmission of AVIRIS hyperspectral data via the communication channel and assessing their fidelity for automated analysis after demodulation. The performance of the DS-LMMSE-OMP approach is compared with that of two other state-of-the-art CE methods. The simulation results show up to an 8-dB improvement in the bit error rate figure of merit and a 50% improvement in hyperspectral image classification accuracy.
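    Step (1), the OMP estimate, is the compressed-sensing core of the method. A generic OMP sketch on a synthetic sparse channel; the dimensions, pilot matrix, and noise level are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(5)

# Sparse channel: 64 taps, only 4 of them nonzero.
n, k = 64, 4
h = np.zeros(n)
taps = rng.choice(n, size=k, replace=False)
h[taps] = rng.uniform(1.0, 2.0, size=k) * rng.choice([-1.0, 1.0], size=k)

# Pilot observations y = A h + noise with a random measurement matrix A.
m = 32
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ h + rng.normal(0, 0.01, size=m)

# Orthogonal matching pursuit: greedily add the column most correlated with
# the residual, then re-fit by least squares on the selected support.
support, resid = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ resid)))
    if j not in support:
        support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    resid = y - A[:, support] @ coef

h_hat = np.zeros(n)
h_hat[support] = coef
print(f"residual norm after OMP: {np.linalg.norm(resid):.4f}")
```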

  13. Lunar gravitational field estimation and the effects of mismodeling upon lunar satellite orbit prediction. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Davis, John H.

    1993-01-01

    Lunar spherical harmonic gravity coefficients are estimated from simulated observations of a near-circular low altitude polar orbiter disturbed by lunar mascons. Lunar gravity sensing missions using earth-based nearside observations with and without satellite-based far-side observations are simulated and least squares maximum likelihood estimates are developed for spherical harmonic expansion fit models. Simulations and parameter estimations are performed by a modified version of the Smithsonian Astrophysical Observatory's Planetary Ephemeris Program. Two different lunar spacecraft mission phases are simulated to evaluate the estimated fit models. Results for predicting state covariances one orbit ahead are presented along with the state errors resulting from the mismodeled gravity field. The position errors from planning a lunar landing maneuver with a mismodeled gravity field are also presented. These simulations clearly demonstrate the need to include observations of satellite motion over the far side in estimating the lunar gravity field. The simulations also illustrate that the eighth degree and order expansions used in the simulated fits were unable to adequately model lunar mascons.

  14. A novel technique using a force-sensing resistor for immobilization-device quality assurance: A feasibility study

    NASA Astrophysics Data System (ADS)

    Cho, Min-Seok; Kim, Tae-Ho; Kang, Seong-Hee; Kim, Dong-Su; Kim, Kyeong-Hyeon; Shin, Dong-Seok; Noh, Yu-Yun; Koo, Hyun-Jae; Cheon, Geum Seong; Suh, Tae Suk; Kim, Siyong

    2016-03-01

    Many studies have reported that a patient can move even when an immobilization device is used. Researchers have developed an immobilization-device quality-assurance (QA) system that evaluates the validity of immobilization devices. The QA system consists of force-sensing-resistor (FSR) sensor units, an electric circuit, a signal conditioning device, and a control personal computer (PC) with in-house software. The QA system is designed to measure the force between an immobilization device and a patient's skin by using the FSR sensor unit. This preliminary study aimed to evaluate the feasibility of using the QA system in radiation-exposure situations. When the FSR sensor unit was irradiated with a computed tomography (CT) beam and a treatment beam from a linear accelerator (LINAC), the stability of the output signal, the image artifact on the CT image, and the change in the patient's dose were tested. The results of this study demonstrate that this system is promising in that it performed within the error range (signal variation on CT beam < 0.30 kPa, root-mean-square error (RMSE) of the two CT images according to presence or absence of the FSR sensor unit < 15 HU, signal variation on the treatment beam < 0.15 kPa, and dose difference between the presence and the absence of the FSR sensor unit < 0.02%). Based on the obtained results, we will conduct volunteer tests to investigate the clinical feasibility of the QA system.

  15. The analytical design of spectral measurements for multispectral remote sensor systems

    NASA Technical Reports Server (NTRS)

    Wiersma, D. J.; Landgrebe, D. A. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. In order to choose a design which will be optimal for the largest class of remote sensing problems, a method was developed which attempted to represent the spectral response function from a scene as accurately as possible. The performance of the overall recognition system was studied relative to the accuracy of the spectral representation. The spectral representation was only one of a set of five interrelated parameter categories which also included the spatial representation parameter, the signal to noise ratio, ancillary data, and information classes. The spectral response functions observed from a stratum were modeled as a stochastic process with a Gaussian probability measure. The criterion for spectral representation was defined by the minimum expected mean-square error.

  16. Signal processing methodologies for an acoustic fetal heart rate monitor

    NASA Technical Reports Server (NTRS)

    Pretlow, Robert A., III; Stoughton, John W.

    1992-01-01

    Research and development is presented of real time signal processing methodologies for the detection of fetal heart tones within a noise-contaminated signal from a passive acoustic sensor. A linear predictor algorithm is utilized for detection of the heart tone event and additional processing derives heart rate. The linear predictor is adaptively 'trained' in a least mean square error sense on generic fetal heart tones recorded from patients. A real time monitor system is described which outputs to a strip chart recorder for plotting the time history of the fetal heart rate. The system is validated in the context of the fetal nonstress test. Comparisons are made with ultrasonic nonstress tests on a series of patients. Comparative data provides favorable indications of the feasibility of the acoustic monitor for clinical use.
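    An LMS-adapted linear predictor of the kind described, trained in a least-mean-square-error sense, can be sketched on a synthetic tone-plus-noise signal; the filter order, step size, and signal model are assumptions, not the monitor's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in for the acoustic signal: a periodic tone buried in noise.
n = 4000
t = np.arange(n)
x = np.sin(2 * np.pi * t / 40) + rng.normal(0, 0.5, size=n)

# LMS linear predictor: predict x[k] from the previous `order` samples,
# adapting the weights in the least-mean-square-error sense.
order, mu = 16, 0.005
w = np.zeros(order)
err = np.zeros(n)
for k in range(order, n):
    past = x[k - order:k][::-1]          # most recent sample first
    err[k] = x[k] - w @ past             # prediction error drives adaptation
    w += 2 * mu * err[k] * past          # LMS weight update

# After adaptation, the predictable tone is tracked and the prediction-error
# variance falls toward the noise floor.
early, late = err[order:500].var(), err[-500:].var()
print(f"prediction-error variance: early {early:.3f} -> late {late:.3f}")
```

    The detection idea is that a periodic heart tone is predictable while broadband noise is not, so a rise in predictability (a drop in prediction error) flags the tone event.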

  17. [Formula: see text]-regularized recursive total least squares based sparse system identification for the error-in-variables.

    PubMed

    Lim, Jun-Seok; Pang, Hee-Suk

    2016-01-01

    In this paper an [Formula: see text]-regularized recursive total least squares (RTLS) algorithm is considered for sparse system identification. Although recursive least squares (RLS) has been successfully applied in sparse system identification, the estimation performance of RLS-based algorithms becomes worse when both input and output are contaminated by noise (the error-in-variables problem). We propose an algorithm to handle the error-in-variables problem. The proposed [Formula: see text]-RTLS algorithm is an RLS-like iteration using [Formula: see text] regularization. The proposed algorithm not only gives excellent performance but also reduces the required complexity through effective handling of the matrix inversion. Simulations demonstrate the superiority of the proposed [Formula: see text]-regularized RTLS in the sparse system identification setting.

  18. Compressed sensing for high-resolution nonlipid suppressed 1 H FID MRSI of the human brain at 9.4T.

    PubMed

    Nassirpour, Sahar; Chang, Paul; Avdievitch, Nikolai; Henning, Anke

    2018-04-29

    The aim of this study was to apply compressed sensing to accelerate the acquisition of high resolution metabolite maps of the human brain using a nonlipid suppressed ultra-short TR and TE 1 H FID MRSI sequence at 9.4T. X-t sparse compressed sensing reconstruction was optimized for nonlipid suppressed 1 H FID MRSI data. Coil-by-coil x-t sparse reconstruction was compared with SENSE x-t sparse and low rank reconstruction. The effect of matrix size and spatial resolution on the achievable acceleration factor was studied. Finally, in vivo metabolite maps with different acceleration factors of 2, 4, 5, and 10 were acquired and compared. Coil-by-coil x-t sparse compressed sensing reconstruction was not able to reliably recover the nonlipid suppressed data, rather a combination of parallel and sparse reconstruction was necessary (SENSE x-t sparse). For acceleration factors of up to 5, both the low-rank and the compressed sensing methods were able to reconstruct the data comparably well (root mean squared errors [RMSEs] ≤ 10.5% for Cre). However, the reconstruction time of the low rank algorithm was drastically longer than compressed sensing. Using the optimized compressed sensing reconstruction, acceleration factors of 4 or 5 could be reached for the MRSI data with a matrix size of 64 × 64. For lower spatial resolutions, an acceleration factor of up to R∼4 was successfully achieved. By tailoring the reconstruction scheme to the nonlipid suppressed data through parameter optimization and performance evaluation, we present high resolution (97 µL voxel size) accelerated in vivo metabolite maps of the human brain acquired at 9.4T within scan times of 3 to 3.75 min. © 2018 International Society for Magnetic Resonance in Medicine.

  19. An algorithm for propagating the square-root covariance matrix in triangular form

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Choe, C. Y.

    1976-01-01

    A method for propagating the square root of the state error covariance matrix in lower triangular form is described. The algorithm can be combined with any triangular square-root measurement update algorithm to obtain a triangular square-root sequential estimation algorithm. The triangular square-root algorithm compares favorably with the conventional sequential estimation algorithm with regard to computation time.
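    One standard way to propagate a triangular square-root factor (sketched here; not necessarily the paper's exact algorithm) is to stack the propagated factor with the process-noise factor and re-triangularize with a QR decomposition:

```python
import numpy as np

def propagate_sqrt_cov(S, F, Q):
    """Propagate a lower-triangular square-root factor S (with P = S S^T)
    through dynamics F and process noise Q, returning a lower-triangular
    factor of F P F^T + Q.  Re-triangularization by QR is one standard
    approach; it is a sketch, not necessarily the paper's algorithm."""
    M = np.hstack([F @ S, np.linalg.cholesky(Q)])   # P' = M M^T
    _, R = np.linalg.qr(M.T)                        # M M^T = R^T R
    S_new = R.T                                     # lower triangular
    return S_new * np.sign(np.diag(S_new))          # make diagonal positive

# Small example: a 2-state constant-velocity model.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = np.diag([0.01, 0.01])
P0 = np.diag([1.0, 0.5])
S0 = np.linalg.cholesky(P0)

S1 = propagate_sqrt_cov(S0, F, Q)
print(S1 @ S1.T)         # reproduces F P0 F^T + Q
```

    Working with the factor rather than the covariance itself keeps the propagated matrix symmetric positive semidefinite by construction, which is the numerical advantage square-root filters are built around.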

  20. Fitting a function to time-dependent ensemble averaged data.

    PubMed

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general-purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.
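    The key ingredient, weighting a least-squares fit with the full covariance matrix so that the parameter error bars honor temporal correlations, can be sketched on synthetic AR(1)-correlated data (the fit model, noise parameters, and data are illustrative, not the paper's WLS-ICE formula):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic ensemble-average-like data: a linear trend plus AR(1) noise,
# so successive residuals are temporally correlated.
n, rho, sig = 200, 0.9, 0.1
t = np.linspace(0, 10, n)
noise = np.zeros(n)
for k in range(1, n):
    noise[k] = rho * noise[k - 1] + rng.normal(0, sig)
y = 2.0 * t + 1.0 + noise

# Full covariance of the AR(1) noise: C_ij = sig^2 rho^|i-j| / (1 - rho^2).
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
C = sig ** 2 * rho ** lags / (1 - rho ** 2)

# Least squares weighted by the full covariance matrix: the parameter
# covariance (X^T C^-1 X)^-1 yields error bars that honor the correlations.
X = np.column_stack([t, np.ones(n)])
Ci = np.linalg.inv(C)
cov_p = np.linalg.inv(X.T @ Ci @ X)
params = cov_p @ X.T @ Ci @ y

print(f"slope = {params[0]:.3f} +/- {np.sqrt(cov_p[0, 0]):.3f}")
```

    Neglecting the off-diagonal terms of C (plain weighted least squares) would report error bars that are far too small here, which is the failure mode the abstract describes.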

  1. Quality assessment of gasoline using comprehensive two-dimensional gas chromatography combined with unfolded partial least squares: A reliable approach for the detection of gasoline adulteration.

    PubMed

    Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan

    2016-01-01

    Comprehensive two-dimensional gas chromatography and flame ionization detection combined with unfolded-partial least squares is proposed as a simple, fast and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline corrected using a two-dimensional asymmetric least squares algorithm. The number of significant partial least squares components to build the model is determined using the minimum value of root-mean square error of leave-one out cross validation, which was 4. In this regard, blends of gasoline with kerosene, white spirit and paint thinner as frequently used adulterants are used to make calibration samples. Appropriate statistical parameters of regression coefficient of 0.996-0.998, root-mean square error of prediction of 0.005-0.010 and relative error of prediction of 1.54-3.82% for the calibration set show the reliability of the developed method. In addition, the developed method is externally validated with three samples in validation set (with a relative error of prediction below 10.0%). Finally, to test the applicability of the proposed strategy for the analysis of real samples, five real gasoline samples collected from gas stations are used for this purpose and the gasoline proportions were in range of 70-85%. Also, the relative standard deviations were below 8.5% for different samples in the prediction set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. A systematic framework for Monte Carlo simulation of remote sensing errors map in carbon assessments

    Treesearch

    S. Healey; P. Patterson; S. Urbanski

    2014-01-01

    Remotely sensed observations can provide unique perspective on how management and natural disturbance affect carbon stocks in forests. However, integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential remote sensing errors...

  3. Derivation of formulas for root-mean-square errors in location, orientation, and shape in triangulation solution of an elongated object in space

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1974-01-01

    Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.

  4. Least-squares model-based halftoning

    NASA Astrophysics Data System (ADS)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction, by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be implemented with the Viterbi algorithm. Unfortunately, no closed form solution can be found in two dimensions. The two-dimensional least squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in transmission of high quality documents using high fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach permits the halftoner to be tuned to the individual printer, whose characteristics may vary considerably from those of other printers, for example, write-black vs. write-white laser printers.

  5. Does the sensorimotor system minimize prediction error or select the most likely prediction during object lifting?

    PubMed Central

    McGregor, Heather R.; Pun, Henry C. H.; Buckingham, Gavin; Gribble, Paul L.

    2016-01-01

    The human sensorimotor system is routinely capable of making accurate predictions about an object's weight, which allows for energetically efficient lifts and prevents objects from being dropped. Often, however, poor predictions arise when the weight of an object can vary and sensory cues about object weight are sparse (e.g., picking up an opaque water bottle). The question arises, what strategies does the sensorimotor system use to make weight predictions when one is dealing with an object whose weight may vary? For example, does the sensorimotor system use a strategy that minimizes prediction error (minimal squared error) or one that selects the weight that is most likely to be correct (maximum a posteriori)? In this study we dissociated the predictions of these two strategies by having participants lift an object whose weight varied according to a skewed probability distribution. We found, using a small range of weight uncertainty, that four indexes of sensorimotor prediction (grip force rate, grip force, load force rate, and load force) were consistent with a feedforward strategy that minimizes the square of prediction errors. These findings match research in the visuomotor system, suggesting parallels in underlying processes. We interpret our findings within a Bayesian framework and discuss the potential benefits of using a minimal squared error strategy. NEW & NOTEWORTHY Using a novel experimental model of object lifting, we tested whether the sensorimotor system models the weight of objects by minimizing lifting errors or by selecting the statistically most likely weight. We found that the sensorimotor system minimizes the square of prediction errors for object lifting. This parallels the results of studies that investigated visually guided reaching, suggesting an overlap in the underlying mechanisms between tasks that involve different sensory systems. PMID:27760821
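The dissociation between the two strategies can be checked numerically: under squared-error loss the optimal point prediction is the probability-weighted mean of the weight distribution, whereas a maximum-a-posteriori strategy picks the mode, and for a skewed distribution these differ. A minimal sketch (the weights and probabilities below are invented for illustration, not the study's stimuli):

```python
import numpy as np

# Hypothetical skewed weight distribution: three object weights (grams)
# with unequal probabilities, so the mean and the mode disagree.
weights = np.array([100.0, 300.0, 500.0])
probs   = np.array([0.6,   0.3,   0.1])

# Expected squared prediction error for a candidate prediction w_hat.
def expected_sq_error(w_hat):
    return np.sum(probs * (weights - w_hat) ** 2)

# Minimal-squared-error strategy: predict the probability-weighted mean.
mse_prediction = np.sum(probs * weights)          # 200.0 g

# Maximum-a-posteriori strategy: predict the most likely weight.
map_prediction = weights[np.argmax(probs)]        # 100.0 g

# Under squared-error loss, the mean beats the mode.
assert expected_sq_error(mse_prediction) < expected_sq_error(map_prediction)
```

The experiment exploits exactly this gap: lifting forces scaled to the mean indicate a minimal-squared-error strategy, forces scaled to the mode would indicate MAP.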

  6. Variable forgetting factor mechanisms for diffusion recursive least squares algorithm in sensor networks

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.

    2017-12-01

    In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses of the mean and mean-square performance of the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve superior performance to the existing DRLS algorithm with a fixed forgetting factor when applied to distributed parameter and spectrum estimation scenarios. The simulation results also demonstrate a good match with our analytical expressions.
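A full DRLS implementation is beyond a short sketch, but the single-node core, an RLS update whose forgetting factor is adapted from the a posteriori error, can be illustrated. The adaptation rule below is an invented placeholder, not the paper's VFF mechanism:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical system to identify: d_k = w_true . x_k + noise.
w_true = np.array([0.5, -1.0, 2.0])
n, p = 400, 3

w = np.zeros(p)
P = np.eye(p) * 1e3          # inverse-correlation-matrix estimate
lam = 0.95                   # forgetting factor (adapted below)

for _ in range(n):
    x = rng.standard_normal(p)
    d = w_true @ x + 0.01 * rng.standard_normal()
    e_prior = d - w @ x                       # a priori error
    k = P @ x / (lam + x @ P @ x)             # RLS gain vector
    w = w + k * e_prior
    P = (P - np.outer(k, x @ P)) / lam
    e_post = d - w @ x                        # a posteriori error
    # Illustrative VFF rule (an assumption, not the paper's scheme):
    # forget less when the a posteriori error is small.
    lam = min(0.999, max(0.90, 1.0 - abs(e_post)))

assert np.allclose(w, w_true, atol=0.1)
```

In the diffusion setting, each node would additionally combine its estimate with its neighbors' estimates after every update.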

  7. An Inequality Constrained Least-Squares Approach as an Alternative Estimation Procedure for Atmospheric Parameters from VLBI Observations

    NASA Astrophysics Data System (ADS)

    Halsig, Sebastian; Artz, Thomas; Iddink, Andreas; Nothnagel, Axel

    2016-12-01

    On their way through the atmosphere, radio signals are delayed and affected by bending and attenuation effects relative to a theoretical path in vacuum. In particular, the neutral part of the atmosphere contributes considerably to the error budget of space-geodetic observations. At the same time, space-geodetic techniques are becoming increasingly important for understanding the Earth's atmosphere, because atmospheric parameters can be linked to the water vapor content of the atmosphere. The tropospheric delay is usually taken into account by applying an adequate model for the hydrostatic component and additionally estimating zenith wet delays for the highly variable wet component. Sometimes, the Ordinary Least Squares (OLS) approach leads to negative estimates, which would be equivalent to negative water vapor in the atmosphere and, of course, does not reflect meteorological and physical conditions in a plausible way. To cope with this phenomenon, we introduce an Inequality Constrained Least Squares (ICLS) method from the field of convex optimization and use inequality constraints to force the tropospheric parameters to be non-negative, allowing for a more realistic tropospheric parameter estimation in a meteorological sense. Because deficiencies in the a priori hydrostatic modeling are almost fully compensated by the tropospheric estimates, the ICLS approach urgently requires suitable a priori hydrostatic delays. In this paper, we briefly describe the ICLS method and validate its impact with regard to station positions.
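The non-negativity constraint at the heart of the ICLS idea can be sketched with `scipy.optimize.nnls`, which solves min ||Ax - y|| subject to x >= 0. The toy system below is invented and merely stands in for the zenith-wet-delay estimation:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

# Toy analogue of the problem: the true parameters are non-negative
# (like water vapor), but noise can drive an unconstrained fit negative.
A = rng.standard_normal((30, 3))
x_true = np.array([0.2, 0.0, 1.5])        # e.g. zenith wet delays, >= 0
y = A @ x_true + 0.3 * rng.standard_normal(30)

x_ols = np.linalg.lstsq(A, y, rcond=None)[0]   # unconstrained; may go negative
x_icls, _ = nnls(A, y)                         # inequality-constrained: x >= 0

assert np.all(x_icls >= 0)
```

The constrained solution can never report "negative water vapor", which is exactly the physical plausibility argument made in the abstract.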

  8. Voltammetric determination of tartaric acid in wines by electrocatalytic oxidation on a cobalt(II)-phthalocyanine-modified electrode associated with multiway calibration.

    PubMed

    Lourenço, Anabel S; Nascimento, Raphael F; Silva, Amanda C; Ribeiro, Williame F; Araujo, Mario C U; Oliveira, Severino C B; Nascimento, Valberes B

    2018-05-30

    The electrocatalytic oxidation of tartaric acid on a carbon paste electrode modified with cobalt(II)-phthalocyanine was demonstrated and applied to the development of a highly sensitive, simple, fast and inexpensive voltammetric sensor to determine tartaric acid. The electrochemical behavior of the modified electrode was investigated by cyclic and square wave voltammetry, and the effect of experimental variables, such as dispersion and loading of cobalt(II)-phthalocyanine, together with optimum conditions for sensing the analyte by square wave voltammetry, were assessed. In addition, the absence of a significant memory effect combined with the ease of electrode preparation led to the development of a sensitive and direct method to determine tartaric acid in wines. Interferences from other low molecular weight organic acids commonly present in wines were circumvented by using a multiway calibration technique, successfully obtaining the second-order advantage by modeling voltammetric data with unfolded partial least squares with residual bilinearization (U-PLS/RBL). A linear response range between 10 and 100 μmol L⁻¹ (r = 0.9991), a relative prediction error of 4.55% and a recovery range from 96.41 to 102.43% were obtained. The proposed method is non-laborious, since it does not use sample pretreatment such as filtration, extraction, pre-concentration or cleanup procedures. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Application of Rapid Visco Analyser (RVA) viscograms and chemometrics for maize hardness characterisation.

    PubMed

    Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena

    2015-04-15

    It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. Therefore, the RVA can complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulting prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower than, or of the same order as, the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.
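Both error measures are root-mean-square errors computed over cross-validated (RMSECV) or held-out (RMSEP) predictions. A minimal helper (the hardness values below are invented for illustration):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error, as used for RMSECV/RMSEP."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Hypothetical reference hardness values vs. RVA-based predictions.
ref  = [54.2, 61.0, 47.5, 58.3]
pred = [55.0, 60.1, 48.9, 57.6]
print(round(rmse(ref, pred), 3))  # 0.987
```

Comparing this figure against the laboratory error of the reference method, as the study does, indicates whether the model is as precise as the assay it replaces.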

  10. The Relationship between Mean Square Differences and Standard Error of Measurement: Comment on Barchard (2012)

    ERIC Educational Resources Information Center

    Pan, Tianshu; Yin, Yue

    2012-01-01

    In the discussion of mean square difference (MSD) and standard error of measurement (SEM), Barchard (2012) concluded that the MSD between 2 sets of test scores is greater than 2(SEM)² and SEM underestimates the score difference between 2 tests when the 2 tests are not parallel. This conclusion has limitations for 2 reasons. First,…
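The parallel-test baseline behind this discussion can be checked numerically: for parallel forms X1 = T + e1 and X2 = T + e2 with independent errors of variance (SEM)², the expected MSD is exactly 2(SEM)². A simulation sketch (all values invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

sem = 3.0                                  # standard error of measurement
true_score = rng.normal(50, 10, n)         # common true scores
x1 = true_score + rng.normal(0, sem, n)    # parallel form 1
x2 = true_score + rng.normal(0, sem, n)    # parallel form 2

msd = np.mean((x1 - x2) ** 2)              # mean square difference

# For parallel tests, E[MSD] = 2 * SEM^2 = 18.
assert abs(msd - 2 * sem**2) < 0.5
```

When the forms are not parallel (different true-score components), the difference picks up extra variance, which is why MSD then exceeds 2(SEM)².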

  11. Quantified Choice of Root-Mean-Square Errors of Approximation for Evaluation and Power Analysis of Small Differences between Structural Equation Models

    ERIC Educational Resources Information Center

    Li, Libo; Bentler, Peter M.

    2011-01-01

    MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of…

  12. Analysis of surface-water data network in Kansas for effectiveness in providing regional streamflow information; with a section on theory and application of generalized least squares

    USGS Publications Warehouse

    Medina, K.D.; Tasker, Gary D.

    1987-01-01

    This report documents the results of an analysis of the surface-water data network in Kansas for its effectiveness in providing regional streamflow information. The network was analyzed using generalized least squares regression. The correlation and time-sampling error of the streamflow characteristic are considered in the generalized least squares method. Unregulated medium-, low-, and high-flow characteristics were selected to be representative of the regional information that can be obtained from streamflow-gaging-station records for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and (or) adding new stations. The analysis used streamflow records for all currently operated stations that were not affected by regulation and for discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The State was divided into three network areas, western, northeastern, and southeastern Kansas, and analysis was made for the three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean-square error for each cost level could be obtained by adding new stations and discontinuing some current network stations. Large reductions in sampling mean-square error for low-flow information could be achieved in all three network areas, the reduction in western Kansas being the most dramatic. The addition of new stations would be most beneficial for mean-flow information in western Kansas. The reduction of sampling mean-square error for high-flow information would benefit most from the addition of new stations in western Kansas. Southeastern Kansas showed the smallest error reduction in high-flow information. 
A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas.
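The generalized-least-squares estimator used in this style of network analysis weights each station's record by the inverse of its error covariance, so short-record (high time-sampling-error) stations count for less. A minimal sketch with an invented diagonal covariance standing in for unequal record lengths:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy regional regression: streamflow characteristic ~ basin attributes,
# with a known error covariance Omega combining model error and
# time-sampling error (all values invented).
n = 50
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([1.0, 0.5])
sampling_var = rng.uniform(0.01, 0.2, n)       # unequal record lengths
Omega = np.diag(0.05 + sampling_var)
y = X @ beta_true + rng.multivariate_normal(np.zeros(n), Omega)

# GLS: beta = (X' W X)^{-1} X' W y with W = Omega^{-1}.
W = np.linalg.inv(Omega)
beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

assert np.allclose(beta_gls, beta_true, atol=0.3)
```

Network-design questions like those in the report then reduce to asking how the sampling mean-square error of `beta_gls` changes as stations are added or discontinued.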

  13. An Application of Interactive Computer Graphics to the Study of Inferential Statistics and the General Linear Model

    DTIC Science & Technology

    1991-09-01

    matrix, the Regression Sum of Squares (SSR) and Error Sum of Squares (SSE) are also displayed as a percentage of the Total Sum of Squares (SSTO)…vector when the student compares the SSR to the SSE. In addition to the plot, the actual values of SSR, SSE, and SSTO are also provided. Figure 3 gives the… [figure residue: projection of Y onto the estimation and error spaces, with SSR, SSE, and SSTO labeled]
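The decomposition the tool displays can be reproduced in a few lines: with an intercept in the model, the total sum of squares splits exactly into regression and error parts (toy data invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simple linear regression on toy data.
x = rng.standard_normal(40)
y = 2.0 + 1.5 * x + rng.standard_normal(40)

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
y_hat = X @ beta

ssto = np.sum((y - y.mean()) ** 2)      # total sum of squares
sse  = np.sum((y - y_hat) ** 2)         # error sum of squares
ssr  = np.sum((y_hat - y.mean()) ** 2)  # regression sum of squares

# With an intercept in the model, SSTO = SSR + SSE.
assert np.isclose(ssto, ssr + sse)
```

Displaying SSR and SSE as percentages of SSTO, as the teaching tool does, is equivalent to reporting R² and 1 - R².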

  14. Engineering a Large Scale Indium Nanodot Array for Refractive Index Sensing.

    PubMed

    Xu, Xiaoqing; Hu, Xiaolin; Chen, Xiaoshu; Kang, Yangsen; Zhang, Zhiping; B Parizi, Kokab; Wong, H-S Philip

    2016-11-23

    In this work, we developed a simple method to fabricate 12 × 4 mm² large-scale nanostructure arrays and investigated the feasibility of indium nanodot (ND) arrays with different diameters and periods for refractive index sensing. Absorption resonances at multiple wavelengths from the visible to the near-infrared range were observed for various incident angles in a variety of media. By engineering the ND array with a centered square lattice, we enhanced the sensitivity by 60% and improved the figure of merit (FOM) by 190%. The evolution of the resonance dips in the reflection spectra of the square lattice and the centered square lattice, from air to water, matches well with the results of Lumerical FDTD simulations. The improvement in sensitivity is due to the enhancement of the local electromagnetic field (E-field) near the NDs with the centered square lattice, as revealed by E-field simulation at the resonance wavelengths. The E-field is enhanced by coupling between the two square ND arrays with [Formula: see text]x period at phase matching. This work illustrates an effective way to engineer and fabricate a refractive index sensor at a large scale. It is the first experimental demonstration of a poor-metal (indium) nanostructure array for refractive index sensing, and it shows that a centered square lattice yields higher sensitivity and serves as a better platform for more complex sensor designs.

  15. Efficient DV-HOP Localization for Wireless Cyber-Physical Social Sensing System: A Correntropy-Based Neural Network Learning Scheme

    PubMed Central

    Xu, Yang; Luo, Xiong; Wang, Weiping; Zhao, Wenbing

    2017-01-01

    Integrating wireless sensor networks (WSNs) into emerging computing paradigms, e.g., cyber-physical social sensing (CPSS), has attracted growing interest, and a WSN can serve as a social network while receiving more attention from the social computing research field. The localization of sensor nodes has thus become an essential requirement for many applications over WSNs, and the localization information of unknown nodes strongly affects the performance of the WSN. The received signal strength indication (RSSI), a typical range-based algorithm for positioning sensor nodes in a WSN, can achieve accurate locations with hardware savings, but is sensitive to environmental noise. Moreover, the original distance vector hop (DV-HOP), an important range-free localization algorithm, is simple, inexpensive and not related to environmental factors, but performs poorly when anchor nodes are lacking. Motivated by these observations, various improved DV-HOP schemes with RSSI have been introduced, and we present a new neural network (NN)-based node localization scheme, named RHOP-ELM-RCC, through the use of DV-HOP, RSSI and a regularized correntropy criterion (RCC)-based extreme learning machine (ELM) algorithm (ELM-RCC). First, the proposed scheme employs both RSSI and DV-HOP to evaluate the distances between nodes to enhance the accuracy of distance estimation at a reasonable cost. Then, with the help of ELM, featured by a fast learning speed, good generalization performance and minimal human intervention, a single hidden layer feedforward network (SLFN) based on ELM-RCC is used to implement the optimization task of obtaining the locations of unknown nodes. Since the RSSI may be influenced by environmental noise and may introduce estimation error, the RCC, instead of the noise-sensitive mean square error (MSE) estimation, is exploited in ELM. Hence, it makes the estimation more robust against outliers.
Additionally, the least square estimation (LSE) in ELM is replaced by the half-quadratic optimization technique. Simulation results show that our proposed scheme outperforms other traditional localization schemes. PMID:28085084
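The ELM building block that the scheme modifies can be sketched in its plain least-squares form. The data, target function, and network size below are invented; the paper replaces this LSE step with a correntropy criterion solved by half-quadratic optimization:

```python
import numpy as np

rng = np.random.default_rng(8)

# Basic ELM: a random, fixed hidden layer plus output weights fitted by
# least squares (the MSE/LSE variant the paper improves upon).
X = rng.uniform(-1, 1, (300, 2))            # e.g. estimated node distances
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]         # stand-in target (node location)

L = 50                                       # hidden neurons
W = rng.standard_normal((2, L))              # random input weights (never trained)
b = rng.standard_normal(L)
H = np.tanh(X @ W + b)                       # hidden-layer output matrix

beta = np.linalg.lstsq(H, y, rcond=None)[0]  # output weights via LSE
y_hat = H @ beta

rmse = np.sqrt(np.mean((y - y_hat) ** 2))
assert rmse < 0.05
```

Because only `beta` is solved for, training is a single linear solve, which is the "fast learning speed" the abstract refers to; swapping the squared loss for a correntropy loss changes that solve into a robust iterative one.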

  16. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    NASA Astrophysics Data System (ADS)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
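The sensitivity of the least-squares step to round-off can be illustrated with an ill-conditioned toy system: forming the normal equations squares the condition number, while a QR/SVD-based solve does not. The matrix below is invented and far smaller than a gravity-field system, but shows the mechanism:

```python
import numpy as np

# A nearly collinear design matrix mimics the ill-conditioning that makes
# double-precision round-off a visible error source in least squares.
eps = 1e-6
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + eps],
              [1.0, 1.0 + 2 * eps]])
x_true = np.array([1.0, 2.0])
b = A @ x_true

# Orthogonal-factorization solve (what np.linalg.lstsq does internally)
# is backward stable; the normal equations square the condition number.
x_qr = np.linalg.lstsq(A, b, rcond=None)[0]
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

print(np.abs(x_qr - x_true).max(), np.abs(x_ne - x_true).max())
```

The normal-equations answer typically loses roughly half the significant digits here, which is the same order-of-magnitude argument for being careful with QR factorization and precision choices in gravity field recovery.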

  17. Identification and compensation of the temperature influences in a miniature three-axial accelerometer based on the least squares method

    NASA Astrophysics Data System (ADS)

    Grigorie, Teodor Lucian; Corcau, Ileana Jenica; Tudosie, Alexandru Nicolae

    2017-06-01

    The paper presents a way to obtain an intelligent miniaturized three-axial accelerometric sensor, based on on-line estimation and compensation of the sensor errors generated by environmental temperature variation. Because this error is a strongly nonlinear function of both the environmental temperature and the acceleration exciting the sensor, it cannot be corrected off-line and requires an additional temperature sensor. The proposed identification methodology for the error model is based on the least squares method, which processes off-line the numerical values obtained from experimental testing of the accelerometer for different values of acceleration applied to its axes of sensitivity and for different operating temperatures. A final analysis of the error level after compensation highlights the best variant of the matrix in the error model. The paper presents the results of the experimental testing of the accelerometer on all three sensitivity axes, the identification of the error models on each axis using the least squares method, and the validation of the obtained models against experimental values. For all three detection channels, a reduction by almost two orders of magnitude of the maximum absolute acceleration error due to environmental temperature variation was obtained.
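The identify-then-compensate workflow can be sketched with an assumed polynomial error model in temperature and acceleration. The model form and coefficients below are invented, not the paper's matrices:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical accelerometer channel: the output error is modeled as a
# low-order surface in temperature T and applied acceleration a.
T = rng.uniform(-20, 60, 200)        # bench temperatures (deg C)
a = rng.uniform(-10, 10, 200)        # applied accelerations (m/s^2)
c_true = np.array([0.05, 0.002, -0.0004, 0.0001])
err = (c_true[0] + c_true[1] * T + c_true[2] * a + c_true[3] * T * a
       + 0.001 * rng.standard_normal(200))

# Identify the coefficients by least squares from test-bench data ...
M = np.column_stack([np.ones_like(T), T, a, T * a])
c_hat = np.linalg.lstsq(M, err, rcond=None)[0]

# ... then compensate on-line: subtract the predicted temperature error.
residual = err - M @ c_hat
assert np.max(np.abs(residual)) < np.max(np.abs(err)) / 10
```

In the sensor itself, only the identified coefficients and the temperature reading are needed at run time, which is what makes the on-line correction cheap.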

  18. Accurate phase extraction algorithm based on Gram–Schmidt orthonormalization and least square ellipse fitting method

    NASA Astrophysics Data System (ADS)

    Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong

    2018-06-01

    An accurate algorithm combining Gram-Schmidt orthonormalization and the least squares ellipse fitting method is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. By performing Gram-Schmidt orthonormalization on the pre-processed interferograms, the phase shift error is corrected and a general ellipse form is derived. The background intensity error and the residual error can then be compensated by the least squares ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm can cope with two or three interferograms with environmental disturbance, low fringe number or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
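The orthonormalization step can be sketched for the two-frame case under an idealized signal model (unit amplitude, DC already removed; the phase map and shift below are invented, and the ellipse-fitting refinement is omitted):

```python
import numpy as np

# Two-frame sketch: after DC removal the interferograms are modeled as
# I1 = cos(phi), I2 = cos(phi + delta) with an unknown phase shift delta.
N = 1000
phi = np.linspace(0.0, 4 * np.pi, N)   # "true" phase, 1-D for illustration
delta = 1.3                            # unknown, non-quarter-wave shift
I1 = np.cos(phi)
I2 = np.cos(phi + delta)

# Gram-Schmidt: orthogonalize I2 against I1, then normalize both.
u1 = I1 / np.linalg.norm(I1)
I2_orth = I2 - (I2 @ u1) * u1
u2 = I2_orth / np.linalg.norm(I2_orth)

# Rescale to unit amplitude; u2 is -sin(phi) here because sin(delta) > 0.
c = u1 * np.sqrt(N / 2)
s = u2 * np.sqrt(N / 2)
phi_hat = np.unwrap(np.arctan2(-s, c))

assert np.max(np.abs(phi_hat - phi)) < 0.05
```

Orthonormalization removes the dependence on the actual phase-shift value, which is why the method tolerates small or uncalibrated shifts; the least-squares ellipse fit then mops up residual background and amplitude errors.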

  19. Contribution Of The SWOT Mission To Large-Scale Hydrological Modeling Using Data Assimilation

    NASA Astrophysics Data System (ADS)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Rochoux, M. C.; Garambois, P. A.; Paris, A.; Calmant, S.

    2016-12-01

    The purpose of this work is to improve the estimation of water fluxes over the continental surfaces at interannual and interseasonal scales (from a few years to decadal time periods). More specifically, it studies the contribution of the upcoming SWOT satellite mission to improving global-scale hydrological modeling, using the land surface model ISBA-TRIP. This model is the continental component of the climate model of CNRM (the French meteorological research centre). This study explores the potential of satellite data to correct either input parameters of the river routing scheme TRIP or its state variables. To do so, a data assimilation platform (using an Ensemble Kalman Filter, EnKF) has been implemented to assimilate SWOT virtual observations as well as discharges estimated from real nadir altimetry data. A series of twin experiments is used to test and validate the parameter estimation module of the platform. SWOT virtual observations of water heights along SWOT tracks (with a 10 cm white-noise model error) are assimilated to correct the river routing model parameters. To begin with, we chose to focus exclusively on the river Manning coefficient, with the possibility of easily extending to other parameters such as the river widths. First results show that the platform is able to recover the "true" Manning distribution by assimilating SWOT-like water heights. The error on the coefficients drops from 35% before assimilation to 9% after four SWOT orbit repeat periods of 21 days. In the state estimation mode, daily assimilation cycles are performed to correct the TRIP river water storage initial state by assimilating ENVISAT-based discharge. These observations are derived from ENVISAT water elevation measurements, using rating curves from the MGB-IPH hydrological model (calibrated over the Amazon using in situ gauge discharges). 
    Using this kind of observation goes beyond idealized twin experiments and also tests the contribution of a remotely sensed discharge product, which could prefigure the SWOT discharge product. The results show that discharges after assimilation are globally improved: the root-mean-square error between the analysis discharge ensemble mean and in situ discharges is reduced by 30% compared to the root-mean-square error between the free run and in situ discharges.
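The EnKF analysis step at the core of such a platform can be sketched for a directly observed scalar state. The ensemble size, forecast bias, and observation error below are illustrative only; the real platform assimilates water heights and discharges through TRIP:

```python
import numpy as np

rng = np.random.default_rng(9)

# Stochastic EnKF analysis step for a scalar state observed directly.
n_ens = 100
truth = 5.0
obs_err = 0.1                                  # ~10 cm white noise
y_obs = truth + obs_err * rng.standard_normal()

ensemble = rng.normal(3.0, 1.0, n_ens)         # biased forecast ensemble

# Kalman gain from ensemble statistics: K = P / (P + R).
P = np.var(ensemble, ddof=1)
K = P / (P + obs_err**2)

# Update each member against a perturbed observation.
perturbed = y_obs + obs_err * rng.standard_normal(n_ens)
analysis = ensemble + K * (perturbed - ensemble)

# The analysis mean moves from the biased forecast toward the truth.
assert abs(analysis.mean() - truth) < abs(ensemble.mean() - truth)
```

For parameter estimation, the same update is applied to an ensemble of Manning coefficients through their modeled water heights rather than to the state directly.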

  20. Intrinsic Raman spectroscopy for quantitative biological spectroscopy Part II

    PubMed Central

    Bechtel, Kate L.; Shih, Wei-Chuan; Feld, Michael S.

    2009-01-01

    We demonstrate the effectiveness of intrinsic Raman spectroscopy (IRS) at reducing errors caused by absorption and scattering. Physical tissue models, solutions of varying absorption and scattering coefficients with known concentrations of Raman scatterers, are studied. We show significant improvement in prediction error by implementing IRS to predict concentrations of Raman scatterers using both ordinary least squares regression (OLS) and partial least squares regression (PLS). In particular, we show that IRS provides a robust calibration model that does not increase in error when applied to samples with optical properties outside the range of calibration. PMID:18711512

  1. Intelligent sensing sensory quality of Chinese rice wine using near infrared spectroscopy and nonlinear tools

    NASA Astrophysics Data System (ADS)

    Ouyang, Qin; Chen, Quansheng; Zhao, Jiewen

    2016-02-01

    The approach presented herein reports the application of near infrared (NIR) spectroscopy, in contrast with a human sensory panel, as a tool for estimating Chinese rice wine quality; concretely, for predicting the overall sensory scores assigned by the trained sensory panel. Back propagation artificial neural network (BPANN) combined with the adaptive boosting (AdaBoost) algorithm, namely BP-AdaBoost, is proposed as a novel nonlinear modeling algorithm. First, the optimal spectral intervals were selected by synergy interval partial least squares (Si-PLS). Then, a BP-AdaBoost model based on the optimal spectral intervals was established, called the Si-BP-AdaBoost model. These models were optimized by cross validation, and the performance of each final model was evaluated according to the correlation coefficient (Rp) and root mean square error of prediction (RMSEP) in the prediction set. Si-BP-AdaBoost showed excellent performance in comparison with the other models. The best Si-BP-AdaBoost model achieved Rp = 0.9180 and RMSEP = 2.23 in the prediction set. It was concluded that NIR spectroscopy combined with Si-BP-AdaBoost is an appropriate method for predicting the sensory quality of Chinese rice wine.

  2. A Least-Squares Commutator in the Iterative Subspace Method for Accelerating Self-Consistent Field Convergence.

    PubMed

    Li, Haichen; Yaron, David J

    2016-11-08

    A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing a sum of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
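The coefficient step shared by DIIS-family methods can be sketched: choose mixing coefficients summing to one that minimize a quadratic error norm, via the standard bordered (Lagrange) linear system. LCIIS differs in building its objective from the commutator of the density and Fock matrices, giving a quartic problem solved by constrained Newton iterations; the sketch below shows only the quadratic DIIS-style step, with invented error vectors:

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy error vectors from past SCF iterations.
errors = [rng.standard_normal(6) for _ in range(4)]

m = len(errors)
B = np.array([[e1 @ e2 for e2 in errors] for e1 in errors])  # Gram matrix

# Minimize c' B c subject to sum(c) = 1 via the bordered system
# [[B, 1], [1', 0]] [c; mu] = [0; 1].
M = np.zeros((m + 1, m + 1))
M[:m, :m] = B
M[:m, m] = 1.0
M[m, :m] = 1.0
rhs = np.zeros(m + 1)
rhs[m] = 1.0

c = np.linalg.solve(M, rhs)[:m]
combined = sum(ci * ei for ci, ei in zip(c, errors))

assert np.isclose(c.sum(), 1.0)
# The optimal combination is at least as good as any single iterate.
assert np.linalg.norm(combined) <= min(np.linalg.norm(e) for e in errors)
```

The next density matrix is then built as the same linear combination of past density matrices using the coefficients `c`.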

  3. Error Analyses of the North Alabama Lightning Mapping Array (LMA)

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.

    2003-01-01

    Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA-MSFC and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.

  4. Synthesis and optimization of four bar mechanism with six design parameters

    NASA Astrophysics Data System (ADS)

    Jaiswal, Ankur; Jawale, H. P.

    2018-04-01

    Function generation is the synthesis of a mechanism for a specific task; it becomes complex especially when more than five precision points of the coupler are synthesized, and it then entails large structural error. The methodology for arriving at a better-precision solution is to use an optimization technique. The work presented herein considers methods of optimizing the structural error in a closed kinematic chain with a single degree of freedom, for generating functions like log(x), e^x, tan(x) and sin(x) with five precision points. The Freudenstein-Chebyshev equation is used to develop the five-point synthesis of the mechanism. The extended formulation is proposed and results are obtained to verify existing results in the literature. Optimization of the structural error is carried out using a least squares approach. A comparative structural error analysis is presented for the error optimized through the least squares method and through the extended Freudenstein-Chebyshev method.
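One common textbook form of Freudenstein's equation, K1·cos(θ4) - K2·cos(θ2) + K3 = cos(θ2 - θ4), is linear in the three link-ratio parameters, so precision-point synthesis reduces to a (possibly overdetermined) linear least-squares problem. A minimal sketch with synthetic, self-consistent data; the angles and parameter values are invented, and this is the basic formulation rather than necessarily the paper's extended one:

```python
import numpy as np

rng = np.random.default_rng(7)

# Freudenstein's equation is linear in K1, K2, K3: for each precision
# point (theta2, theta4) we get one row of a linear system.
K_true = np.array([1.2, 0.8, 0.5])
theta2 = rng.uniform(0, 2 * np.pi, 5)       # five precision points
theta4 = rng.uniform(0, 2 * np.pi, 5)

A = np.column_stack([np.cos(theta4), -np.cos(theta2), np.ones(5)])

# Synthetic consistent right-hand side (b = A @ K_true) so an exact
# solution exists; real precision points generally leave a residual,
# and that residual is the structural error being minimized.
b = A @ K_true

K_hat = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(K_hat, K_true, atol=1e-6)
```

With real precision points the residual of this least-squares solve does not vanish, and minimizing it over the design parameters is the structural-error optimization the paper compares.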

  5. Dissipative quantum error correction and application to quantum sensing with trapped ions.

    PubMed

    Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A

    2017-11-28

    Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  6. Theoretical and experimental studies of error in square-law detector circuits

    NASA Technical Reports Server (NTRS)

    Stanley, W. D.; Hearn, C. P.; Williams, J. B.

    1984-01-01

    Square-law detector circuits were investigated to determine errors relative to the ideal input/output characteristic function. The nonlinear circuit response is analyzed by a power series expansion containing terms through the fourth degree, from which the significant deviation from square law can be predicted. Both fixed bias current and flexible bias current configurations are considered. The latter case corresponds to the situation where the mean current can change with the application of a signal. Experimental investigations of the circuit arrangements are described. Agreement between the analytical models and the experimental results is established. Factors which contribute to differences under certain conditions are outlined.
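The power-series view can be made concrete: with coefficients through the fourth degree, the cubic and quartic terms quantify the departure from ideal square-law response, and the departure grows with drive level. The coefficients below are invented for illustration:

```python
import numpy as np

# Fourth-degree power-series model of a detector response. The a3 and a4
# terms represent the deviation from an ideal square-law characteristic.
a2, a3, a4 = 1.0, 0.02, 0.005

x = np.linspace(0.0, 1.0, 101)          # input signal amplitude
y = a2 * x**2 + a3 * x**3 + a4 * x**4   # modeled detector response
y_ideal = a2 * x**2                     # ideal square law

# Relative deviation from square law grows with drive level.
dev = np.abs(y - y_ideal) / np.where(y_ideal > 0, y_ideal, 1.0)
assert dev[-1] > dev[50]                # larger at full drive than at half
```

This is why such detectors are characterized over a stated input range: the square-law approximation holds only up to a drive level where the higher-order terms stay acceptably small.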

  7. Asynchronous State Estimation for Discrete-Time Switched Complex Networks With Communication Constraints.

    PubMed

    Zhang, Dan; Wang, Qing-Guo; Srinivasan, Dipti; Li, Hongyi; Yu, Li

    2018-05-01

This paper is concerned with the asynchronous state estimation for a class of discrete-time switched complex networks with communication constraints. An asynchronous estimator is designed to overcome the difficulty that each node cannot access the topology/coupling information. Event-based communication, signal quantization, and random packet dropout are also studied owing to the limited communication resources. With the help of switched system theory and by resorting to stochastic system analysis methods, a sufficient condition is proposed to guarantee the exponential stability of the estimation error system in the mean-square sense, and a prescribed performance level is also ensured. The characterization of the desired estimator gains is derived in terms of the solution to a convex optimization problem. Finally, the effectiveness of the proposed design approach is demonstrated by a simulation example.

  8. Calibration and compensation method of three-axis geomagnetic sensor based on pre-processing total least square iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Zhang, X.; Xiao, W.

    2018-04-01

As the geomagnetic sensor is susceptible to interference, a pre-processing total least square iteration method is proposed for calibration compensation. Firstly, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated by combining the Newton iteration method with least squares estimation. A sifting algorithm is used to filter the initial value of the iteration so that the initial error is as small as possible. The experimental results show that this method requires no additional equipment or devices, can continuously update the calibration parameters, and compensates geomagnetic sensor error better than the two-step estimation method.
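The paper's nine-parameter fit combines Newton iteration with least squares. As a hedged, simplified sketch (not the paper's algorithm), the same idea can be shown in a 2-D analogue: fitting a circle to magnetometer-like samples by Gauss-Newton, where the centre plays the role of the hard-iron bias and the radius the field magnitude. All data below are simulated:

```python
import math, random

# Hedged sketch: 2-D analogue of an iterative least-squares magnetometer
# calibration. Gauss-Newton fits (cx, cy, r) to noisy points on a circle;
# the "true" bias and field magnitude below are simulated assumptions.

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda k: abs(M[k][c]))
        M[c], M[piv] = M[piv], M[c]
        for k in range(c + 1, 3):
            f = M[k][c] / M[c][c]
            for j in range(c, 4):
                M[k][j] -= f * M[c][j]
    x = [0.0] * 3
    for c in (2, 1, 0):
        x[c] = (M[c][3] - sum(M[c][j] * x[j] for j in range(c + 1, 3))) / M[c][c]
    return x

def fit_circle(points, iters=20):
    """Gauss-Newton fit of centre (cx, cy) and radius r; residual = |p - c| - r."""
    cx = sum(p[0] for p in points) / len(points)   # initial guess: centroid
    cy = sum(p[1] for p in points) / len(points)
    r = sum(math.hypot(p[0] - cx, p[1] - cy) for p in points) / len(points)
    for _ in range(iters):
        A = [[0.0] * 3 for _ in range(3)]          # normal equations J'J d = -J'res
        b = [0.0] * 3
        for x, y in points:
            d = math.hypot(x - cx, y - cy)
            res = d - r
            J = [-(x - cx) / d, -(y - cy) / d, -1.0]
            for i in range(3):
                b[i] -= J[i] * res
                for j in range(3):
                    A[i][j] += J[i] * J[j]
        step = solve3(A, b)
        cx += step[0]; cy += step[1]; r += step[2]
    return cx, cy, r

random.seed(1)
true_cx, true_cy, true_r = 0.3, -0.2, 1.0          # simulated bias and |B|
pts = [(true_cx + true_r * math.cos(t) + random.gauss(0, 0.01),
        true_cy + true_r * math.sin(t) + random.gauss(0, 0.01))
       for t in [i * 0.3 for i in range(21)]]
cx, cy, r = fit_circle(pts)
print(f"estimated bias = ({cx:.3f}, {cy:.3f}), |B| = {r:.3f}")
```

The real method fits nine parameters (an ellipsoid with misalignment terms) rather than three, but the Newton/least-squares iteration loop has the same structure.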

  9. Peelle's pertinent puzzle using the Monte Carlo technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawano, Toshihiko; Talou, Patrick; Burr, Thomas

    2009-01-01

We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take arbitrary forms in order to assess the impact of the assumed distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least-squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and proportional to the measured values. The least-squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
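The correlated least-squares average that produces the puzzling low answer can be reproduced in a few lines. This sketch uses the classic illustrative PPP numbers (measurements 1.5 and 1.0 with 10% statistical and 20% fully correlated common error); these inputs are assumptions for illustration, not the paper's simulation setup:

```python
# Hedged sketch: the correlated least-squares estimate behind Peelle's
# Pertinent Puzzle. The measurement values and error fractions are the
# classic textbook illustration, not data from this paper.

def pp_estimate(m, stat_frac, common_frac):
    """LS average x = (1' C^-1 m) / (1' C^-1 1) for two correlated measurements."""
    s = [stat_frac * mi for mi in m]
    # Covariance: independent statistical part + fully correlated common part
    C = [[(s[i]**2 if i == j else 0.0) + common_frac**2 * m[i] * m[j]
          for j in range(2)] for i in range(2)]
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    inv = [[ C[1][1] / det, -C[0][1] / det],
           [-C[1][0] / det,  C[0][0] / det]]
    num = sum(inv[i][j] * m[j] for i in range(2) for j in range(2))
    den = sum(inv[i][j] for i in range(2) for j in range(2))
    return num / den

x = pp_estimate([1.5, 1.0], 0.10, 0.20)
print(f"correlated LS estimate: {x:.3f}")  # falls below both measurements
```

With these inputs the estimate comes out near 0.88, below both measurements, which is exactly the counter-intuitive behaviour the abstract traces to the form of the common error.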

  10. An active co-phasing imaging testbed with segmented mirrors

    NASA Astrophysics Data System (ADS)

    Zhao, Weirui; Cao, Genrui

    2011-06-01

An active co-phasing imaging testbed with highly accurate optical adjustment and control at the nanometer scale was set up to validate algorithms for piston and tip-tilt error sensing and real-time adjustment. A modular design was adopted. The primary mirror was spherical and divided into three sub-mirrors. One of them was fixed and served as the reference segment; the others were each adjustable relative to the fixed segment in three degrees of freedom (piston, tip, and tilt) using sensitive micro-displacement actuators with a range of 15 mm and a resolution of 3 nm. Two-dimensional dispersed fringe analysis was used to sense the piston error between adjacent segments over a range of 200 μm with a repeatability of 2 nm, and the tip-tilt error was obtained by centroid sensing. Co-phased imaging could be realized by correcting the measured errors with the micro-displacement actuators driven by a computer. The process of co-phasing error sensing and correction could be monitored in real time by a monitoring module built into the testbed. A FISBA interferometer was introduced to evaluate the co-phasing performance, and finally a total residual surface error of about 50 nm rms was achieved.

  11. Did I Do That? Expectancy Effects of Brain Stimulation on Error-related Negativity and Sense of Agency.

    PubMed

    Hoogeveen, Suzanne; Schjoedt, Uffe; van Elk, Michiel

    2018-06-19

This study examines the effects of expected transcranial stimulation on the error(-related) negativity (Ne or ERN) and the sense of agency in participants who perform a cognitive control task. Placebo transcranial direct current stimulation was used to elicit expectations of transcranially induced cognitive improvement or impairment. The improvement/impairment manipulation affected both the Ne/ERN and the sense of agency (i.e., whether participants attributed errors to themselves or to the brain stimulation device): Expected improvement increased the ERN in response to errors compared with both impairment and control conditions. Expected impairment made participants falsely attribute errors to the transcranial stimulation. This decrease in sense of agency was correlated with a reduced ERN amplitude. These results show that expectations about transcranial stimulation impact users' neural response to self-generated errors and the attribution of responsibility, especially when actions lead to negative outcomes. We discuss our findings in relation to predictive processing theory, according to which the effect of prior expectations on the ERN reflects the brain's attempt to generate predictive models of incoming information. By demonstrating that induced expectations about transcranial stimulation can have effects at a neural level, that is, beyond mere demand characteristics, our findings highlight the potential for placebo brain stimulation as a promising tool for research.

  12. Quantifying vegetation distribution and structure using high resolution drone-based structure-from-motion photogrammetry

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Okin, G.

    2017-12-01

Vegetation is one of the most important driving factors of ecosystem processes in drylands. The structure of vegetation controls the spatial distribution of moisture and heat in the canopy and the surrounding area, and influences both airflow and boundary-layer resistance above the land surface. Multispectral satellite remote sensing has been widely used to monitor vegetation coverage and its change; however, it captures only 2D images, which lack vertical information about the vegetation. In situ observation uses various methods to measure vegetation structure, and its results are accurate; however, these methods are laborious, time-consuming, and susceptible to undersampling in spatially heterogeneous landscapes. Drylands are sparsely covered by short plants, which allows a drone to fly at a relatively low height and obtain ultra-high-resolution images. Structure-from-motion (SfM) is a photogrammetric method that has been proven to produce 3D models from 2D images. Drone-based remote sensing can obtain multi-angle images of an object, which can be used to construct 3D models of vegetation in drylands. From the images captured by the drone, orthomosaics and a digital surface model (DSM) can be built. In this study, drone-based remote sensing was conducted in the Jornada Basin, New Mexico, in the spring of 2016 and 2017, and three derived vegetation parameters (canopy size, bare-soil gap size, and plant height) were compared with field measurements. The correlation coefficients of canopy size, bare-soil gap size, and plant height between drone images and field data are 0.91, 0.96, and 0.84, respectively. The two-year averaged root-mean-square errors (RMSE) are 0.61 m, 1.21 m, and 0.25 cm, respectively, and the two-year averaged mean errors (ME) are 0.02 m, -0.03 m, and -0.1 m, respectively. These results indicate good agreement between drone-based remote sensing and field measurement.
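The two accuracy measures reported above (RMSE and ME) are straightforward to compute. A minimal sketch, with made-up sample values standing in for the drone-derived and field-measured canopy sizes:

```python
import math

# Hedged sketch: RMSE and mean error (ME) between drone-derived and
# field-measured values. The sample data below are invented for illustration.

def rmse(pred, obs):
    """Root-mean-square error between paired predictions and observations."""
    return math.sqrt(sum((p - o)**2 for p, o in zip(pred, obs)) / len(pred))

def mean_error(pred, obs):
    """Signed mean error; near zero means no systematic bias."""
    return sum(p - o for p, o in zip(pred, obs)) / len(pred)

drone = [2.1, 3.4, 1.8, 2.9]   # e.g. canopy sizes from the orthomosaic (m)
field = [2.0, 3.6, 1.7, 3.0]   # matching field measurements (m)
print(f"RMSE = {rmse(drone, field):.3f} m, ME = {mean_error(drone, field):+.3f} m")
```

RMSE summarizes the magnitude of disagreement, while ME reveals any systematic over- or under-estimation, which is why the abstract reports both.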

  13. An Energy-Efficient Spectrum-Aware Reinforcement Learning-Based Clustering Algorithm for Cognitive Radio Sensor Networks

    PubMed Central

    Mustapha, Ibrahim; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A.; Sali, Aduwati; Mohamad, Hafizal

    2015-01-01

It is well known that clustering partitions a network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While energy efficiency has been well investigated in conventional wireless sensor networks, it has been far less explored in cognitive radio sensor networks. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs of neighboring clusters to achieve an optimal solution. Each member node selects an optimal cluster that satisfies pairwise constraints, minimizes network energy consumption, and enhances channel sensing performance through an exploration technique. We first model the network energy consumption and then determine the optimal number of clusters for the network. The problem of selecting an optimal cluster is formulated as a Markov Decision Process (MDP) in the algorithm, and the obtained simulation results show convergence, learning, and adaptability of the algorithm to a dynamic environment towards achieving an optimal solution. Performance comparisons of our algorithm with the Groupwise Spectrum Aware (GWSA)-based algorithm in terms of Sum of Square Error (SSE), complexity, network energy consumption, and probability of detection indicate improved performance from the proposed approach. The results further reveal that an energy savings of 9% and a significant Primary User (PU) detection improvement can be achieved with the proposed approach. PMID:26287191

  14. Remote sensing of key grassland nutrients using hyperspectral techniques in KwaZulu-Natal, South Africa

    NASA Astrophysics Data System (ADS)

    Singh, Leeth; Mutanga, Onisimo; Mafongoya, Paramu; Peerbhay, Kabir

    2017-07-01

The concentration of forage fiber content is critical in explaining the palatability of forage quality for livestock grazers in tropical grasslands. Traditional methods of determining forage fiber content are usually time-consuming, costly, and require specialized laboratory analysis. With remote sensing technologies, key fiber attributes can potentially be determined more accurately. This study aims to determine the effectiveness of known absorption wavelengths for detecting the forage fiber biochemicals neutral detergent fiber, acid detergent fiber, and lignin using hyperspectral data. Hyperspectral reflectance measurements (350 to 2500 nm) of grass were collected and implemented within the random forest (RF) ensemble. Results show successful correlations between the known absorption features and the biochemicals, with coefficients of determination (R2) ranging from 0.57 to 0.81 and root mean square errors ranging from 3.03 to 6.97 g/kg. In comparison, using the entire dataset, the study identified additional wavelengths for detecting fiber biochemicals, which contributes to the accurate determination of forage quality in a grassland environment. Overall, the results showed that hyperspectral remote sensing in conjunction with the RF ensemble could discriminate each key biochemical evaluated. This study shows the potential to upscale the methodology to a space-borne multispectral platform with similar spectral configurations for accurate and cost-effective mapping of forage quality.

  15. An Energy-Efficient Spectrum-Aware Reinforcement Learning-Based Clustering Algorithm for Cognitive Radio Sensor Networks.

    PubMed

    Mustapha, Ibrahim; Mohd Ali, Borhanuddin; Rasid, Mohd Fadlee A; Sali, Aduwati; Mohamad, Hafizal

    2015-08-13

It is well known that clustering partitions a network into logical groups of nodes in order to achieve energy efficiency and to enhance dynamic channel access in cognitive radio through cooperative sensing. While energy efficiency has been well investigated in conventional wireless sensor networks, it has been far less explored in cognitive radio sensor networks. In this paper, we propose a reinforcement learning-based spectrum-aware clustering algorithm that allows a member node to learn the energy and cooperative sensing costs of neighboring clusters to achieve an optimal solution. Each member node selects an optimal cluster that satisfies pairwise constraints, minimizes network energy consumption, and enhances channel sensing performance through an exploration technique. We first model the network energy consumption and then determine the optimal number of clusters for the network. The problem of selecting an optimal cluster is formulated as a Markov Decision Process (MDP) in the algorithm, and the obtained simulation results show convergence, learning, and adaptability of the algorithm to a dynamic environment towards achieving an optimal solution. Performance comparisons of our algorithm with the Groupwise Spectrum Aware (GWSA)-based algorithm in terms of Sum of Square Error (SSE), complexity, network energy consumption, and probability of detection indicate improved performance from the proposed approach. The results further reveal that an energy savings of 9% and a significant Primary User (PU) detection improvement can be achieved with the proposed approach.

  16. Smartphone-based simultaneous pH and nitrite colorimetric determination for paper microfluidic devices.

    PubMed

    Lopez-Ruiz, Nuria; Curto, Vincenzo F; Erenas, Miguel M; Benito-Lopez, Fernando; Diamond, Dermot; Palma, Alberto J; Capitan-Vallvey, Luis F

    2014-10-07

In this work, an Android application for measuring nitrite concentration and determining pH, in combination with a low-cost paper-based microfluidic device, is presented. The application uses seven sensing areas, containing the corresponding immobilized reagents, to produce selective color changes when a sample solution is placed in the sampling area. Under controlled light conditions, using the flash of the smartphone as the light source, the image captured with the built-in camera is processed using a customized algorithm for multidetection of the colored sensing areas. The developed image processing reduces the influence of the light source and of the positioning of the microfluidic device in the picture. Then, the H (hue) and S (saturation) coordinates of the HSV color space are extracted and related to pH and nitrite concentration, respectively. A complete characterization of the sensing elements has been carried out, as well as a full description of the image analysis for detection. The results demonstrate the viability of a mobile phone as an analytical instrument. For pH, a resolution of 0.04 pH units was obtained, with an accuracy of 0.09 and a mean squared error of 0.167. With regard to nitrite, a resolution of 0.51% at 4.0 mg L(-1) and a limit of detection of 0.52 mg L(-1) were achieved.
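The H/S extraction step described above can be sketched with the standard library's colorsys module. The RGB value below is an invented example of a sensing-area mean color, and the mapping to pH or nitrite would come from a calibration the paper describes but whose coefficients are not given here:

```python
import colorsys

# Hedged sketch: extracting the H and S coordinates of the HSV color space,
# which the paper relates to pH (H) and nitrite concentration (S).
# The RGB value below is an illustrative stand-in for a sensing-area mean.

def hue_saturation(r, g, b):
    """Mean RGB of a sensing area (0-255 per channel) -> H in degrees, S in 0-1."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h * 360, s

h, s = hue_saturation(180, 120, 60)   # an orange-ish patch
print(f"H = {h:.1f} deg, S = {s:.2f}")
```

Working in HSV rather than raw RGB is what makes the readout less sensitive to illumination level, since brightness changes mostly affect the V channel.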

  17. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods.

    PubMed

    Schmidt, Johannes F M; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis, where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components, and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering, and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-square error (RMSE) reveals improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering, and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging.

  18. Airborne remote sensing in precision viticulture: assessment of quality and quantity vineyard production using multispectral imagery: a case study in Velletri, Rome surroundings (central Italy)

    NASA Astrophysics Data System (ADS)

    Tramontana, Gianluca; Papale, Dario; Girard, Filippo; Belli, Claudio; Pietromarchi, Paolo; Tiberi, Domenico; Comandini, Maria C.

    2009-09-01

During 2008, an experimental study was conducted to investigate the capabilities of a new airborne remote sensing platform as an aid in precision viticulture. The study was carried out on two areas located in the town of Velletri, near Rome; the acquisitions were conducted on 07-08-2008 and 09-09-2008 using ASPIS (Advanced Spectroscopic Imager System), a new airborne multispectral sensor capable of acquiring 12 narrow spectral bands (10 nm) in the visible and near-infrared region. Several vegetation indices, for a total of 22 independent variables, were tested for the estimation of different oenological parameters. An ANOVA test showed that several oenochemical parameters, such as sugars and acidity, differ according to the variety under consideration. The remotely sensed data were significantly correlated with the following oenochemical parameters: exposed leaf surface (SFE) (R2 ~ 0.8), pruning wood (R2 ~ 0.8), reducing sugars (R2 ~ 0.6, Root Mean Square Error ~ 5 g/l), total acidity (R2 ~ 0.6, RMSE ~ 0.5 g/l), polyphenols (R2 ~ 0.9), and anthocyanin content (R2 ~ 0.89). In order to provide "prescriptive" thematic maps of the oenological variables of interest, the relationships derived above were applied to the vegetation indices.

  19. An Alternating Least Squares Method for the Weighted Approximation of a Symmetric Matrix.

    ERIC Educational Resources Information Center

    ten Berge, Jos M. F.; Kiers, Henk A. L.

    1993-01-01

    R. A. Bailey and J. C. Gower explored approximating a symmetric matrix "B" by another, "C," in the least squares sense when the squared discrepancies for diagonal elements receive specific nonunit weights. A solution is proposed where "C" is constrained to be positive semidefinite and of a fixed rank. (SLD)

  20. Electromagnetic tracking system with reduced distortion using quadratic excitation.

    PubMed

    Bien, Tomasz; Li, Mengfei; Salah, Zein; Rose, Georg

    2014-03-01

Electromagnetic tracking systems, frequently used in minimally invasive surgery, are affected by conductive distorters. The influence of conductive distorters on electromagnetic tracking system accuracy can be reduced through magnetic field modifications; this approach was developed and tested here. The voltage induced directly by the emitting coil in the sensing coil, without additional influence from the conductive distorter, depends on the first derivative of the voltage on the emitting coil. The voltage induced indirectly by the emitting coil across the conductive distorter in the sensing coil, however, depends on the second derivative of the voltage on the emitting coil. The electromagnetic tracking system takes advantage of this difference by supplying the emitting coil with a quadratic excitation voltage. The method is adaptive relative to the amount of distortion caused by the conductive distorters. This approach is evaluated with an experimental setup of the electromagnetic tracking system. In vitro testing showed that the maximal error decreased from 10.9 to 3.8 mm when the quadratic voltage was used to excite the emitting coil instead of the sinusoidal voltage. Furthermore, the root mean square error in the proximity of the aluminum disk used as a conductive distorter was reduced from 3.5 to 1.6 mm when the electromagnetic tracking system used the quadratic instead of the sinusoidal excitation. Electromagnetic tracking with quadratic excitation is immune to the effects of a conductive distorter, especially compared with sinusoidal excitation of the emitting coil. Quadratic excitation of electromagnetic tracking for computer-assisted surgery is promising for clinical applications.
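The first- vs. second-derivative distinction that motivates the quadratic excitation can be illustrated numerically. This is a sketch of the principle only, with an idealized quadratic waveform and simple finite differences, not a model of the actual coil physics:

```python
# Hedged sketch of the principle: the direct coupling follows the first
# derivative of the emitter voltage, while the conductive-distorter term
# follows the second derivative. With a quadratic excitation V(t) = t^2,
# the distorter term is a constant that is easy to separate out.

def first_deriv(f, t, h=1e-5):
    """Central finite difference for df/dt."""
    return (f(t + h) - f(t - h)) / (2 * h)

def second_deriv(f, t, h=1e-4):
    """Central finite difference for d2f/dt2."""
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

quadratic = lambda t: t**2

for t in (0.1, 0.2, 0.3):
    print(f"t = {t}: direct ~ {first_deriv(quadratic, t):.3f}, "
          f"distorter ~ {second_deriv(quadratic, t):.3f}")
```

For the quadratic waveform the "distorter" term stays constant (2.0) while the "direct" term ramps as 2t; for a sinusoid, both derivatives would be sinusoids at the same frequency and could not be separated this way.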

  1. Smart Braid Feedback for the Closed-Loop Control of Soft Robotic Systems.

    PubMed

    Felt, Wyatt; Chin, Khai Yi; Remy, C David

    2017-09-01

This article experimentally investigates the potential of using flexible, inductance-based contraction sensors in the closed-loop motion control of soft robots. Accurate motion control remains a highly challenging task for soft robotic systems. Precise models of the actuation dynamics and environmental interactions are often unavailable. This renders open-loop control impossible, while closed-loop control suffers from a lack of suitable feedback. Conventional motion sensors, such as linear or rotary encoders, are difficult to adapt to robots that lack discrete mechanical joints. The rigid nature of these sensors runs contrary to the aspirational benefits of soft systems. As truly soft sensor solutions are still in their infancy, motion control of soft robots has so far relied on laboratory-based sensing systems such as motion capture, electromagnetic (EM) tracking, or Fiber Bragg Gratings. In this article, we used embedded flexible sensors known as Smart Braids to sense the contraction of McKibben muscles through changes in inductance. We evaluated closed-loop control on two systems: a revolute joint and a planar, one degree of freedom continuum manipulator. In the revolute joint, our proposed controller compensated for elasticity in the actuator connections. The Smart Braid feedback allowed motion control with a steady-state root-mean-square (RMS) error of 1.5°. In the continuum manipulator, Smart Braid feedback enabled tracking of the desired tip angle with a steady-state RMS error of 1.25°. This work demonstrates that Smart Braid sensors can provide accurate position feedback in closed-loop motion control suitable for field applications of soft robotic systems.

  2. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy

    PubMed Central

    Cohen, E. A. K.; Ober, R. J.

    2014-01-01

We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise, a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs, this is an errors-in-variables problem and linear least squares is inappropriate; the correct method is generalized least squares. To allow for point-dependent errors, the equivalence of a generalized maximum likelihood and heteroscedastic generalized least squares model is established, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise, where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity), we provide closed-form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE), believed to be useful especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions of the TRE and LRE are themselves Gaussian, and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show the asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573
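The heteroscedastic weighting at the heart of the paper's registration model can be shown in its simplest 1-D form: a weighted least-squares fit where better-localized points carry more weight. This is only a sketch of the weighting idea, not the paper's full errors-in-variables affine solution, and the data and variances below are invented:

```python
# Hedged sketch: weighted least squares with per-point (heteroscedastic)
# variances, the simplest 1-D analogue of variance-weighted registration.
# Data and variances are made up for illustration.

def wls_affine(x, y, var):
    """Fit y = a*x + b minimizing sum((y_i - a*x_i - b)^2 / var_i)."""
    w = [1.0 / v for v in var]
    W = sum(w)
    xm = sum(wi * xi for wi, xi in zip(w, x)) / W    # weighted means
    ym = sum(wi * yi for wi, yi in zip(w, y)) / W
    a = (sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xm)**2 for wi, xi in zip(w, x)))
    return a, ym - a * xm

x = [0.0, 1.0, 2.0, 3.0]
y = [0.1, 2.0, 4.1, 5.9]            # roughly y = 2x with noise
var = [0.01, 0.04, 0.04, 0.25]      # well-localized points get more weight
a, b = wls_affine(x, y, var)
print(f"scale = {a:.3f}, offset = {b:.3f}")
```

In the paper the same principle is applied in 2-D with full covariance matrices per control point (generalized least squares); the 1-D diagonal case above is the special case where each covariance is a scalar.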

  3. The coronagraphic Modal Wavefront Sensor: a hybrid focal-plane sensor for the high-contrast imaging of circumstellar environments

    NASA Astrophysics Data System (ADS)

    Wilby, M. J.; Keller, C. U.; Snik, F.; Korkiakoski, V.; Pietrow, A. G. M.

    2017-01-01

The raw coronagraphic performance of current high-contrast imaging instruments is limited by the presence of a quasi-static speckle (QSS) background resulting from instrumental Non-Common Path Errors (NCPEs). Rapid development of efficient speckle subtraction techniques in data reduction has enabled final contrasts of up to 10-6 to be obtained; however, it remains preferable to eliminate the underlying NCPEs at the source. In this work we introduce the coronagraphic Modal Wavefront Sensor (cMWS), a new wavefront sensor suitable for real-time NCPE correction. This combines the Apodizing Phase Plate (APP) coronagraph with a holographic modal wavefront sensor to provide simultaneous coronagraphic imaging and focal-plane wavefront sensing with the science point-spread function. We first characterise the baseline performance of the cMWS via idealised closed-loop simulations, showing that the sensor is able to successfully recover diffraction-limited coronagraph performance over an effective dynamic range of ±2.5 radians root-mean-square (rms) wavefront error within 2-10 iterations, with performance independent of the specific choice of mode basis. We then present the results of initial on-sky testing at the William Herschel Telescope, which demonstrate that the sensor is capable of NCPE sensing under realistic seeing conditions via the recovery of known static aberrations to an accuracy of 10 nm (0.1 radians) rms error in the presence of a dominant atmospheric speckle foreground. We also find that the sensor is capable of real-time measurement of broadband atmospheric wavefront variance (50% bandwidth, 158 nm rms wavefront error) at a cadence of 50 Hz over an uncorrected telescope sub-aperture. When combined with a suitable closed-loop adaptive optics system, the cMWS holds the potential to deliver an improvement of up to two orders of magnitude over the uncorrected QSS floor. Such a sensor would be eminently suitable for the direct imaging and spectroscopy of exoplanets with both existing and future instruments, including EPICS and METIS for the E-ELT.

  4. Application of Remote Sensing in Building Damages Assessment after Moderate and Strong Earthquake

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Zhang, J.; Dou, A.

    2003-04-01

Earthquakes are a major natural disaster in modern society, yet we still cannot accurately predict the time and place of their occurrence. It is therefore important to survey damage information when an earthquake occurs, which helps mitigate losses and enables fast damage evaluation. In this paper, we use remote sensing techniques for this purpose. Remotely sensed satellite images often cover a large area of land at a time. Several kinds of satellite images are available, with different spatial and spectral resolutions: the Landsat-4/5 TM sensor views the ground at 30 m resolution, while Landsat-7 ETM+ has a resolution of 15 m in its panchromatic band, and SPOT satellites provide images with higher resolutions. Images obtained pre- and post-earthquake can help greatly in identifying damage to moderate and large buildings. In this paper, we put forward a method for quick damage assessment by analyzing both pre- and post-earthquake satellite images. First, the images are geographically registered together with low RMS (Root Mean Square) error. Then, residential areas are clipped out by overlaying the images with existing vector layers in Geographic Information System (GIS) software. We present a new change detection algorithm to quantitatively identify the degree of damage. An empirical or semi-empirical model is then established by analyzing the real damage degree and the changes in pixel values of the same ground objects. Experimental results show a good linear relationship between changes in pixel values and ground damage, which proves the potential of remote sensing in post-quake fast damage assessment. Keywords: Damage Assessment, Earthquake Hazard, Remote Sensing
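The pre-/post-image differencing and linear damage model described above can be sketched on tiny made-up data. The 1-D pixel lists stand in for co-registered satellite scenes, and the linear coefficients are illustrative assumptions, not the paper's fitted model:

```python
# Hedged sketch: pixel-difference change detection between co-registered
# pre- and post-event images, plus an assumed linear damage model.
# The pixel values and model coefficients are invented for illustration.

def change_index(pre, post):
    """Absolute per-pixel change between two co-registered images."""
    return [abs(a - b) for a, b in zip(pre, post)]

def damage_degree(change, slope=0.8, intercept=0.0):
    """Empirical linear model: damage ~ slope * pixel change + intercept.
    The coefficients are hypothetical, not the paper's fitted values."""
    return [slope * c + intercept for c in change]

pre  = [120, 118, 95, 200]   # pre-event pixel values over a residential area
post = [119, 90, 96, 140]    # post-event values; big drops suggest damage
ch = change_index(pre, post)
print("change:", ch)
print("damage:", damage_degree(ch))
```

In the paper, the slope and intercept of such a model would be fitted empirically against ground-surveyed damage degrees; here they are placeholders to show the pipeline shape.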

  5. A feed-forward Hopfield neural network algorithm (FHNNA) with a colour satellite image for water quality mapping

    NASA Astrophysics Data System (ADS)

    Asal Kzar, Ahmed; Mat Jafri, M. Z.; Hwee San, Lim; Al-Zuky, Ali A.; Mutter, Kussay N.; Hassan Al-Saleh, Anwar

    2016-06-01

Many techniques have been proposed for the water quality problem, but remote sensing techniques have proven successful, especially when artificial neural networks are used as mathematical models alongside them. The Hopfield neural network (HNN) is a common, fast, simple, and efficient type of artificial neural network, but it struggles with images that have more than two colours, such as remote sensing images. This work attempts to solve this problem by modifying the network to handle colour remote sensing images for water quality mapping. A Feed-forward Hopfield Neural Network Algorithm (FHNNA) was developed and used with a colour satellite image from the Thailand Earth Observation System (THEOS) for TSS mapping in the Penang Strait, Malaysia, through the classification of TSS concentrations. The new algorithm is based on three modifications: using the HNN as a feed-forward network, considering the weights of bitplanes, and a non-self architecture (zero diagonal of the weight matrix); in addition, it depends on validation data. The resulting map was colour-coded for visual interpretation. The efficiency of the new algorithm is demonstrated by a high correlation coefficient (R = 0.979) and a low root mean square error (RMSE = 4.301) against the validation data, which were divided into two groups: one used for the algorithm and the other for validating the results. The comparison was made with the minimum distance classifier. Therefore, TSS mapping of polluted water in the Penang Strait, Malaysia, can be performed using FHNNA with remote sensing (THEOS). This is a new and useful application of the HNN, and thus a new model for water quality mapping, an important environmental problem, with remote sensing techniques.

  6. Estimation of Spatiotemporal Sensitivity Using Band-limited Signals with No Additional Acquisitions for k-t Parallel Imaging.

    PubMed

    Takeshima, Hidenori; Saitoh, Kanako; Nitta, Shuhei; Shiodera, Taichiro; Takeguchi, Tomoyuki; Bannae, Shuhei; Kuhara, Shigehide

    2018-03-13

    Dynamic MR techniques, such as cardiac cine imaging, benefit from shorter acquisition times. The goal of the present study was to develop a method for dynamic MRI that achieves short acquisition times while maintaining a cost-effective reconstruction. k-t sensitivity encoding (SENSE) was identified as the base method to be enhanced to meet these two requirements. The proposed method achieves a reduction in acquisition time by estimating the spatiotemporal (x-f) sensitivity without requiring the acquisition of the alias-free signals typical of the k-t SENSE technique. The cost-effective reconstruction, in turn, is achieved by a computationally efficient estimation of the x-f sensitivity from band-limited signals of the aliased inputs. Such band-limited signals are suitable for sensitivity estimation because the strongly aliased signals have been removed. For the same nominal reduction factor of 4, the net reduction factor of 4 for the proposed method was significantly higher than the factor of 2.29 achieved by k-t SENSE. The processing time was reduced from 4.1 s for k-t SENSE to 1.7 s for the proposed method. The image quality obtained using the proposed method proved to be superior (mean squared error [MSE] ± standard deviation [SD] = 6.85 ± 2.73) compared to the k-t SENSE case (MSE ± SD = 12.73 ± 3.60) for the vertical long-axis (VLA) view, as well as for other views. In the present study, k-t SENSE was identified as a suitable base method for achieving both short acquisition times and a cost-effective reconstruction. To enhance these characteristics of the base method, a novel implementation is proposed that estimates the x-f sensitivity without the need for an explicit scan of the reference signals. Experimental results showed that the acquisition times, computational times, and image quality of the proposed method were improved compared to the standard k-t SENSE method.

  7. High-resolution spatial databases of monthly climate variables (1961-2010) over a complex terrain region in southwestern China

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Xu, An-Ding; Liu, Hong-Bin

    2015-01-01

    Climate data in gridded format are critical for understanding climate change and its impact on the eco-environment. The aim of the current study is to develop spatial databases for three climate variables (maximum temperature, minimum temperature, and relative humidity) over a large region with complex topography in southwestern China. Five widely used approaches, including inverse distance weighting, ordinary kriging, universal kriging, co-kriging, and thin-plate smoothing spline, were tested. Root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) showed that thin-plate smoothing spline with latitude, longitude, and elevation outperformed the other models. Average RMSE, MAE, and MAPE of the best models were 1.16 °C, 0.74 °C, and 7.38 % for maximum temperature; 0.826 °C, 0.58 °C, and 6.41 % for minimum temperature; and 3.44, 2.28, and 3.21 % for relative humidity, respectively. Spatial datasets of annual and monthly climate variables with 1-km resolution covering the period 1961-2010 were then obtained using the best-performing methods. A comparative study showed that the current outcomes were in good agreement with public datasets. Based on the gridded datasets, changes in the temperature variables were investigated across the study area. Future study might be needed to capture the uncertainty induced by environmental conditions through remote sensing and knowledge-based methods.
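
    The three agreement measures used above to rank the interpolation methods can be written compactly. A minimal sketch (the station values below are illustrative, not the study's data):

```python
import math

def rmse(obs, pred):
    # Root mean square error
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    # Mean absolute error
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def mape(obs, pred):
    # Mean absolute percentage error; observations must be nonzero
    return 100.0 * sum(abs((o - p) / o) for o, p in zip(obs, pred)) / len(obs)

# Hypothetical observed vs. interpolated maximum temperatures (degrees C)
obs = [20.1, 22.4, 25.0, 27.3]
pred = [19.8, 23.0, 24.5, 27.9]
```

    RMSE penalizes large errors more heavily than MAE, which is why the two can rank methods differently; MAPE expresses the error relative to the observed magnitude.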

  8. A Canonical Ensemble Correlation Prediction Model for Seasonal Precipitation Anomaly

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Guilong

    2001-01-01

    This report describes an optimal ensemble forecasting model for seasonal precipitation and its error estimation. Each individual forecast is based on canonical correlation analysis (CCA) in spectral spaces whose bases are empirical orthogonal functions (EOFs). The optimal weights in the ensemble forecast crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto the EOFs of the predictand and decreases linearly with the correlation between the predictor and predictand. This new CCA model includes the following features: (1) the use of an area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States precipitation field, with sea surface temperature as the predictor.
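
    The abstract notes that the optimal ensemble weights depend crucially on each member's mean square error. Under the standard assumption of independent, unbiased member forecasts, the variance-minimizing weights are proportional to the inverse MSEs. A toy sketch of that weighting (illustrative, not the paper's exact scheme):

```python
def ensemble_weights(mse_list):
    # For independent, unbiased members, variance-minimizing weights
    # are proportional to 1/MSE and normalized to sum to one.
    inv = [1.0 / m for m in mse_list]
    s = sum(inv)
    return [w / s for w in inv]

def ensemble_forecast(forecasts, mse_list):
    # MSE-weighted combination of member forecasts
    w = ensemble_weights(mse_list)
    return sum(wi * fi for wi, fi in zip(w, forecasts))
```

    A member with three times the MSE of another receives a third of its weight, so poor forecasts are down-weighted rather than discarded.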

  9. Remotely-sensed near real-time monitoring of reservoir storage in India

    NASA Astrophysics Data System (ADS)

    Tiwari, A. D.; Mishra, V.

    2017-12-01

    Real-time reservoir storage information at a high temporal resolution is crucial to mitigate the influence of extreme events like floods and droughts. Despite large implications of near real-time reservoir monitoring in India for water resources and irrigation, remotely sensed monitoring systems have been lacking. Here we develop remotely sensed real-time monitoring systems for 91 large reservoirs in India for the period from 2000 to 2017. For the reservoir storage estimation, we combined Moderate Resolution Imaging Spectroradiometer (MODIS) 8-day 250 m Enhanced Vegetation Index (EVI), and Geoscience Laser Altimeter System (GLAS) onboard the Ice, Cloud, and land Elevation Satellite (ICESat) ICESat/GLAS elevation data. Vegetation data with the highest temporal resolution available from the MODIS is at 16 days. To increase the temporal resolution to 8 days, we developed the 8-day composite of near infrared, red, and blue band surface reflectance. Surface reflectance 8-Day L3 Global 250m only have NIR band and Red band, therefore, surface reflectance of 8-Day L3 Global at 500m is used for the blue band, which was regridded to 250m spatial resolution. An area-elevation relationship was derived using area from an unsupervised classification of MODIS image followed by an image enhancement and elevation data from ICESat/GLAS. A trial and error method was used to obtain the area-elevation relationship for those reservoirs for which ICESat/GLAS data is not available. The reservoir storages results were compared with the gauge storage data from 2002 to 2009 (training period), which were then evaluated for the period of 2010 to 2016. Our storage estimates were highly correlated with observations (R2 = 0.6 to 0.96), and the normalized root mean square error (NRMSE) ranged between 10% and 50%. We also developed a relationship between precipitation and reservoir storage that can be used for prediction of storage during the dry season.

  10. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.
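
    Minimizing mean square error, as in the bias-reduced estimator above, trades bias against variance through the identity MSE = variance + bias². A numerical check of that decomposition (hypothetical estimator outputs):

```python
def mse_decomposition(estimates, true_value):
    # Empirical MSE of an estimator decomposes as variance + squared bias.
    n = len(estimates)
    mean_est = sum(estimates) / n
    bias = mean_est - true_value
    variance = sum((e - mean_est) ** 2 for e in estimates) / n
    mse = sum((e - true_value) ** 2 for e in estimates) / n
    return mse, variance, bias
```

    A bias-reduced estimator lowers the bias term, which only lowers the MSE if the variance does not grow by more than the squared bias removed.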

  11. An algorithm for hyperspectral remote sensing of aerosols: 1. Development of theoretical framework

    NASA Astrophysics Data System (ADS)

    Hou, Weizhen; Wang, Jun; Xu, Xiaoguang; Reid, Jeffrey S.; Han, Dong

    2016-07-01

    This paper describes the first part of a series of investigations to develop algorithms for simultaneous retrieval of aerosol parameters and surface reflectance from a newly developed hyperspectral instrument, the GEOstationary Trace gas and Aerosol Sensor Optimization (GEO-TASO), by taking full advantage of available hyperspectral measurement information in the visible bands. We describe the theoretical framework of an inversion algorithm for the hyperspectral remote sensing of aerosol optical properties, in which the major principal components (PCs) of surface reflectance are assumed known, and the spectrally dependent aerosol refractive indices are assumed to follow a power-law approximation with four unknown parameters (two for the real and two for the imaginary part of the refractive index). New capabilities for computing the Jacobians of the four Stokes parameters of reflected solar radiation at the top of the atmosphere with respect to these unknown aerosol parameters and to the weighting coefficients for each PC of surface reflectance are added to the UNified Linearized Vector Radiative Transfer Model (UNL-VRTM), which in turn facilitates the optimization in the inversion process. Theoretical derivations of the formulas for these new capabilities are provided, and the analytical Jacobians are validated against finite-difference calculations with relative errors of less than 0.2%. Finally, a self-consistency check of the inversion algorithm is conducted for idealized green-vegetation and rangeland surfaces that were spectrally characterized by the U.S. Geological Survey digital spectral library. It shows that the first six PCs can reconstruct the spectral surface reflectance with errors of less than 1%. Assuming that aerosol properties can be accurately characterized, the inversion yields a retrieval of hyperspectral surface reflectance with an uncertainty of 2% (and a root-mean-square error of less than 0.003), which suggests self-consistency in the inversion framework. The next step, using this framework to study the aerosol information content in GEO-TASO measurements, is also discussed.
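
    The Jacobian validation mentioned above compares analytical derivatives with finite-difference calculations. A generic central-difference check of that kind (illustrative only, not UNL-VRTM code):

```python
import math

def central_diff(f, x, h=1e-6):
    # Central finite-difference approximation of df/dx, accurate to O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

def relative_error(analytic, numeric):
    # Relative discrepancy used to validate an analytical Jacobian entry
    return abs(analytic - numeric) / abs(analytic)
```

    For a well-implemented analytical derivative, the relative error against the central difference should be far below the 0.2% threshold quoted in the abstract.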

  12. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

    NASA Astrophysics Data System (ADS)

    Yu, Lean; Wang, Shouyang; Lai, K. K.

    Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling, and that more weight should be given to the classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to those of unimportant classes, while the regularization terms keep their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.
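
    The modification described amounts to replacing the single regularization parameter with a class-dependent one, so squared errors on important classes (e.g., defaulters) cost more. A toy version of such a class-weighted loss (names are illustrative, not from the paper):

```python
def weighted_ls_loss(errors, labels, c):
    # Class-weighted least-squares classification loss: the squared error
    # of each sample is penalized by its class's regularization weight c[k].
    return sum(c[y] * e ** 2 for e, y in zip(errors, labels))
```

    With c = {0: 1.0, 1: 2.0}, a unit error on a class-1 sample costs twice as much as the same error on a class-0 sample, which biases the fitted classifier toward getting the important class right.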

  13. Error analysis on squareness of multi-sensor integrated CMM for the multistep registration method

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Wang, Yiwen; Ye, Xiuling; Wang, Zhong; Fu, Luhua

    2018-01-01

    The multistep registration (MSR) method in [1] registers two different classes of sensors deployed on the z-arm of a CMM (coordinate measuring machine): a video camera and a tactile probe sensor. In general, it is difficult to obtain a very precise registration result with a single common standard; instead, this method measures two different standards, fixed on a steel plate at a constant distance from each other. Although many factors have been considered, such as the measuring ability of the sensors, the uncertainty of the machine, and the number of data pairs, there has been no exact analysis of the squareness between the x-axis and the y-axis in the xy plane. Therefore, an error analysis of the squareness of the multi-sensor integrated CMM is carried out for the multistep registration method, to examine the validity of the MSR method. Synthetic experiments on the squareness in the xy plane for the simplified MSR with an inclination rotation are simulated, which lead to a regular result. Experiments have been carried out with the multi-standard device also designed in [1]; meanwhile, inspections of the xy plane with a laser interferometer have been performed. The final results conform to the simulations, and the squareness errors of the MSR method are similar to the results of the interferometer. In other words, the MSR method can also be utilized to verify the squareness of a CMM.

  14. Random errors in interferometry with the least-squares method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Qi

    2011-01-20

    This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noise is present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one for the case when only intensity noise is present, and the other for the case when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and the standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships between random error and the wavelength of the light source, and between random error and the amplitude of the interference fringe, have also been discussed.
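
    The least-squares method in phase-shifting interferometry fits I_k = A + B·cos(φ + δ_k) to the sampled fringe intensities; for equally spaced phase shifts the phase estimate has a closed form. A noise-free sketch of that estimator for orientation (a generic textbook form, not necessarily the exact algorithm analyzed in the paper):

```python
import math

def ls_phase(intensities):
    # Least-squares phase estimate from N equally spaced phase shifts
    # delta_k = 2*pi*k/N, assuming I_k = A + B*cos(phi + delta_k).
    n = len(intensities)
    deltas = [2 * math.pi * k / n for k in range(n)]
    s = sum(i * math.sin(d) for i, d in zip(intensities, deltas))
    c = sum(i * math.cos(d) for i, d in zip(intensities, deltas))
    return math.atan2(-s, c)
```

    Intensity noise perturbs the I_k values and position noise perturbs the effective δ_k values; propagating either through this estimator yields the standard deviations the paper derives.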

  15. Integrated modeling environment for systems-level performance analysis of the Next-Generation Space Telescope

    NASA Astrophysics Data System (ADS)

    Mosier, Gary E.; Femiano, Michael; Ha, Kong; Bely, Pierre Y.; Burg, Richard; Redding, David C.; Kissil, Andrew; Rakoczy, John; Craig, Larry

    1998-08-01

    All current concepts for the NGST are innovative designs which present unique systems-level challenges. The goal is to outperform existing observatories at a fraction of the current price/performance ratio. Standard practices for developing systems error budgets, such as the 'root-sum-of-squares' error tree, are insufficient for designs of this complexity. Simulation and optimization are the tools needed for this project, in particular tools that integrate controls, optics, thermal and structural analysis, and design optimization. This paper describes such an environment, which allows sub-system performance specifications to be analyzed parametrically and includes optimization metrics that capture the science requirements. The resulting systems-level design trades are greatly facilitated, and significant cost savings can be realized. This modeling environment, built around a tightly integrated combination of commercial off-the-shelf and in-house-developed codes, provides the foundation for linear and non-linear analysis in both the time and frequency domains, statistical analysis, and design optimization. It features an interactive user interface and integrated graphics that allow highly effective, real-time work by multidisciplinary design teams. For the NGST, it has been applied to issues such as pointing control, dynamic isolation of spacecraft disturbances, wavefront sensing and control, on-orbit thermal stability of the optics, and development of systems-level error budgets. In this paper, results are presented from parametric trade studies that assess requirements for pointing control, structural dynamics, reaction wheel dynamic disturbances, and vibration isolation. These studies attempt to define requirement bounds such that the resulting design is optimized at the systems level, without attempting to optimize each subsystem individually. The performance metrics are defined in terms of image quality, specifically centroiding error and RMS wavefront error, which link directly to the science requirements.

  16. Least Squares Metric, Unidimensional Scaling of Multivariate Linear Models.

    ERIC Educational Resources Information Center

    Poole, Keith T.

    1990-01-01

    A general approach to least-squares unidimensional scaling is presented. Ordering information contained in the parameters is used to transform the standard squared error loss function into a discrete rather than continuous form. Monte Carlo tests with 38,094 ratings of 261 senators, and 1,258 representatives demonstrate the procedure's…

  17. Attenuation of the Squared Canonical Correlation Coefficient under Varying Estimates of Score Reliability

    ERIC Educational Resources Information Center

    Wilson, Celia M.

    2010-01-01

    Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability.…

  18. Particle drag history in a subcritical post-shock flow - data analysis method and uncertainty

    NASA Astrophysics Data System (ADS)

    Ding, Liuyang; Bordoloi, Ankur; Adrian, Ronald; Prestridge, Kathy; Arizona State University Team; Los Alamos National Laboratory Team

    2017-11-01

    A novel data analysis method for measuring particle drag in an 8-pulse particle tracking velocimetry-accelerometry (PTVA) experiment is described. We represent the particle drag history, CD(t), using polynomials up to third order. An analytical model for the continuous particle position history was derived by integrating an equation relating CD(t) to particle velocity and acceleration. The coefficients of CD(t) were then calculated by fitting the position history model to eight measured particle locations in the least-squares sense. A preliminary test with experimental data showed that the new method yielded physically more reasonable particle velocity and acceleration histories than conventionally adopted polynomial fitting. To fully assess and optimize the performance of the new method, we performed a PTVA simulation by assuming a ground truth of particle motion based on an ensemble of experimental data. The results indicated a significant reduction in the RMS error of CD. We also found that for particle locating noise between 0.1 and 3 pixels, a range encountered in our experiment, the lowest RMS error was achieved by using the quadratic CD(t) model. Furthermore, we will also discuss the optimization of the pulse timing configuration.
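
    Fitting a low-order polynomial to a handful of measured positions in the least-squares sense, as the PTVA analysis above does, can be sketched generically via the normal equations (this is a plain polynomial fit for illustration, not the paper's integrated CD(t) position model):

```python
def polyfit_ls(ts, xs, degree):
    # Least-squares polynomial fit via the normal equations (A^T A) c = A^T x,
    # solved with Gaussian elimination. Suitable only for low degrees.
    m = degree + 1
    ata = [[sum(t ** (i + j) for t in ts) for j in range(m)] for i in range(m)]
    atx = [sum(x * t ** i for t, x in zip(ts, xs)) for i in range(m)]
    for col in range(m):  # forward elimination with partial pivoting
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atx[col], atx[piv] = atx[piv], atx[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            atx[r] -= f * atx[col]
    coef = [0.0] * m
    for i in range(m - 1, -1, -1):  # back substitution
        coef[i] = (atx[i] - sum(ata[i][j] * coef[j]
                                for j in range(i + 1, m))) / ata[i][i]
    return coef  # x(t) ~ coef[0] + coef[1]*t + coef[2]*t^2 + ...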

  19. Sound-field reproduction in-room using optimal control techniques: simulations in the frequency domain.

    PubMed

    Gauthier, Philippe-Aubert; Berry, Alain; Woszczyk, Wieslaw

    2005-02-01

    This paper describes the simulations and results obtained when applying optimal control to progressive sound-field reproduction (mainly for audio applications) over an area using multiple monopole loudspeakers. The model simulates a reproduction system that operates either in free field or in a closed space approaching a typical listening room, and is based on optimal control in the frequency domain. This rather simple approach is chosen for the purpose of physical investigation, especially in terms of sensing microphone and reproduction loudspeaker configurations. Other issues of interest concern the comparison with wave-field synthesis and the control mechanisms. The results suggest that in-room reproduction of a sound field using active control can be achieved with a residual normalized squared error significantly lower than open-loop wave-field synthesis in the same situation. Active reproduction techniques have the advantage of automatically compensating for the room's natural dynamics. For the considered cases, the simulations show that optimal control results are not sensitive (in terms of reproduction error) to wall absorption in the reproduction room. A special surrounding configuration of sensors is introduced for a sensor-free listening area in free field.

  20. Joint retrievals of cloud and drizzle in marine boundary layer clouds using ground-based radar, lidar and zenith radiances

    DOE PAGES

    Fielding, M. D.; Chiu, J. C.; Hogan, R. J.; ...

    2015-02-16

    Active remote sensing of marine boundary-layer clouds is challenging as drizzle drops often dominate the observed radar reflectivity. We present a new method to simultaneously retrieve cloud and drizzle vertical profiles in drizzling boundary-layer cloud using surface-based observations of radar reflectivity, lidar attenuated backscatter, and zenith radiances. Specifically, the vertical structure of droplet size and water content of both cloud and drizzle is characterised throughout the cloud. An ensemble optimal estimation approach provides full error statistics given the uncertainty in the observations. To evaluate the new method, we first perform retrievals using synthetic measurements from large-eddy simulation snapshots of cumulus under stratocumulus, where cloud water path is retrieved with an error of 31 g m−2. The method also performs well in non-drizzling clouds where no assumption of the cloud profile is required. We then apply the method to observations of marine stratocumulus obtained during the Atmospheric Radiation Measurement MAGIC deployment in the northeast Pacific. Here, retrieved cloud water path agrees well with independent 3-channel microwave radiometer retrievals, with a root mean square difference of 10-20 g m−2.

  1. Common-Path Wavefront Sensing for Advanced Coronagraphs

    NASA Technical Reports Server (NTRS)

    Wallace, J. Kent; Serabyn, Eugene; Mawet, Dimitri

    2012-01-01

    Imaging of faint companions around nearby stars is limited neither by the intrinsic resolution of the coronagraph/telescope system nor strictly by photon noise. Typically, it is both the magnitude and the temporal variation of small phase and amplitude errors imparted to the electric field by elements in the optical system that limit ultimate performance. Adaptive optics systems, particularly those with multiple deformable mirrors, can remove these errors, but the errors need to be sensed in the final image plane. If the sensing system is before the final image plane, as is typical for most systems, then the non-common-path optics between the wavefront sensor and the science image plane will lead to un-sensed errors. However, a new generation of high-performance coronagraphs naturally lend themselves to wavefront sensing in the final image plane. These coronagraphs and the associated wavefront sensing are discussed, as well as plans for demonstrating the approach with a high-contrast system on the ground. Such a system will be a key system-level proof for a future space-based coronagraph mission, which is also discussed.

  2. Some Insights of Spectral Optimization in Ocean Color Inversion

    NASA Technical Reports Server (NTRS)

    Lee, Zhongping; Franz, Bryan; Shang, Shaoling; Dong, Qiang; Arnone, Robert

    2011-01-01

    In the past decades, various algorithms have been developed for the retrieval of water constituents from measurements of ocean color radiometry, and one of these approaches is spectral optimization. This approach defines an error target (or error function) between the input remote sensing reflectance and the output remote sensing reflectance, with the latter modeled with a few variables that represent the optically active properties (such as the absorption coefficient of phytoplankton and the backscattering coefficient of particles). The values of the variables when the error reaches a minimum (optimization is achieved) are considered the properties that form the input remote sensing reflectance; in other words, the equations are solved numerically. Applications of this approach implicitly assume that the error is a monotonic function of the various variables. Here, with data from numerical simulation and field measurements, we show the shape of the error surface, in order to assess the possibility of finding a solution for the various variables. In addition, because the spectral properties could be modeled differently, the impacts of such differences on the error surface as well as on the retrievals are also presented.
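
    The spectral-optimization idea above reduces, in its simplest form, to minimizing an error function between measured and modeled reflectance spectra over the model variables. A deliberately minimal one-variable sketch (the error form and model are illustrative, not the operational algorithm):

```python
def spectral_error(meas, model):
    # Normalized residual between modeled and measured reflectance spectra
    num = sum((a - b) ** 2 for a, b in zip(model, meas))
    den = sum(b ** 2 for b in meas)
    return (num / den) ** 0.5

def optimize_1d(f, lo, hi, n=1000):
    # Brute-force 1-D grid minimization; real retrievals use gradient-based
    # or simplex optimizers over several variables simultaneously.
    return min((lo + (hi - lo) * k / n for k in range(n + 1)), key=f)
```

    If the error surface is monotonic toward a single minimum in each variable, as the approach implicitly assumes, such a search recovers the generating parameters; the paper examines how well that assumption holds.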

  3. Analysis of surface-water data network in Kansas for effectiveness in providing regional streamflow information

    USGS Publications Warehouse

    Medina, K.D.; Tasker, Gary D.

    1985-01-01

    The surface-water data network in Kansas was analyzed using generalized least squares regression for its effectiveness in providing regional streamflow information. The correlation and time-sampling error of the streamflow characteristics are considered in the generalized least squares method. Unregulated medium-flow, low-flow, and high-flow characteristics were selected as representative of the regional information that can be obtained from streamflow gaging station records, for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and/or adding new stations. The analysis used streamflow records for all currently operated stations not affected by regulation, and for discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The state was divided into three network areas, western, northeastern, and southeastern Kansas, and an analysis was made for three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean square error for each cost level could be obtained by adding new stations and discontinuing some of the present network stations. Large reductions in sampling mean square error for low-flow information could be accomplished in all three network areas, with western Kansas having the most dramatic reduction. The addition of new stations would be most beneficial for medium-flow information in western Kansas, and to lesser degrees in the other two areas. The reduction of sampling mean square error for high-flow information would benefit most from the addition of new stations in western Kansas, with the effect diminishing to lesser degrees in the other two areas. Southeastern Kansas showed the smallest error reduction in high-flow information. A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas. (Author's abstract)

  4. Failure detection and fault management techniques for flush airdata sensing systems

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.; Leondes, Cornelius T.

    1992-01-01

    Methods based on chi-squared analysis are presented for detecting system and individual-port failures in the high-angle-of-attack flush airdata sensing (HI-FADS) system on the NASA F-18 High Alpha Research Vehicle. The HI-FADS hardware is introduced, and the aerodynamic model describes measured pressure in terms of dynamic pressure, angle of attack, angle of sideslip, and static pressure. Chi-squared analysis is described in the presentation of the concept for failure detection and fault management, which includes nominal, iteration, and fault-management modes. A matrix of pressure orifices arranged in concentric circles on the nose of the aircraft supplies the pressures that are applied to the regression algorithms. The sensing techniques are applied to F-18 flight data, and two examples are given of the computed angle-of-attack time histories. The failure-detection and fault-management techniques permit the matrix to be multiply redundant, and the chi-squared analysis is shown to be useful in the detection of failures.
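
    At its core, chi-squared failure detection compares noise-normalized residuals between measured and model-predicted port pressures against a threshold. A schematic sketch of that test (the residuals, noise levels, and threshold are hypothetical, not HI-FADS values):

```python
def chi_squared_stat(residuals, sigmas):
    # Sum of squared, noise-normalized residuals between measured and
    # model-predicted pressures; large values indicate a likely failure.
    return sum((r / s) ** 2 for r, s in zip(residuals, sigmas))

def detect_failure(residuals, sigmas, threshold):
    # Flag a failure when the statistic exceeds the chosen threshold
    return chi_squared_stat(residuals, sigmas) > threshold
```

    In a multiply redundant matrix of ports, the test can also be repeated with each port excluded in turn to isolate which individual port has failed.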

  5. Leaf aging of Amazonian canopy trees as revealed by spectral and physiochemical measurements.

    PubMed

    Chavana-Bryant, Cecilia; Malhi, Yadvinder; Wu, Jin; Asner, Gregory P; Anastasiou, Athanasios; Enquist, Brian J; Cosio Caravasi, Eric G; Doughty, Christopher E; Saleska, Scott R; Martin, Roberta E; Gerard, France F

    2017-05-01

    Leaf aging is a fundamental driver of changes in leaf traits, thereby regulating ecosystem processes and remotely sensed canopy dynamics. We explore leaf reflectance as a tool to monitor leaf age and develop a spectra-based partial least squares regression (PLSR) model to predict age using data from a phenological study of 1099 leaves from 12 lowland Amazonian canopy trees in southern Peru. Results demonstrated monotonic decreases in leaf water (LWC) and phosphorus (Pmass) contents and an increase in leaf mass per unit area (LMA) with age across trees; leaf nitrogen (Nmass) and carbon (Cmass) contents showed monotonic but tree-specific age responses. We observed large age-related variation in leaf spectra across trees. A spectra-based model was more accurate in predicting leaf age (R2 = 0.86; percent root mean square error (%RMSE) = 33) compared with trait-based models using single (R2 = 0.07-0.73; %RMSE = 7-38) and multiple (R2 = 0.76; %RMSE = 28) predictors. Spectra- and trait-based models established a physiochemical basis for the spectral age model. Vegetation indices (VIs) including the normalized difference vegetation index (NDVI), enhanced vegetation index 2 (EVI2), normalized difference water index (NDWI) and photosynthetic reflectance index (PRI) were all age-dependent. This study highlights the importance of leaf age as a mediator of leaf traits, provides evidence of age-related leaf reflectance changes that have important impacts on VIs used to monitor canopy dynamics and productivity and proposes a new approach to predicting and monitoring leaf age with important implications for remote sensing. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.

  6. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms.

    PubMed

    Tang, Jie; Nett, Brian E; Chen, Guang-Hong

    2009-10-07

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy, as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor that accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms at several dose levels for a constant undersampling factor. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because, while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  7. Modelling of the batch biosorption system: study on exchange of protons with cell wall-bound mineral ions.

    PubMed

    Mishra, Vishal

    2015-01-01

    The interchange of the protons with the cell wall-bound calcium and magnesium ions at the interface of solution/bacterial cell surface in the biosorption system at various concentrations of protons has been studied in the present work. A mathematical model for establishing the correlation between concentration of protons and active sites was developed and optimized. The sporadic limited residence time reactor was used to titrate the calcium and magnesium ions at the individual data point. The accuracy of the proposed mathematical model was estimated using error functions such as nonlinear regression, adjusted nonlinear regression coefficient, the chi-square test, P-test and F-test. The values of the chi-square test (0.042-0.017), P-test (<0.001-0.04), sum of square errors (0.061-0.016), root mean square error (0.01-0.04) and F-test (2.22-19.92) reported in the present research indicated the suitability of the model over a wide range of proton concentrations. The zeta potential of the bacterium surface at various concentrations of protons was observed to validate the denaturation of active sites.
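
    The error functions listed above are straightforward to compute once model predictions are in hand. A minimal sketch in Python, using made-up observed/predicted values rather than the paper's biosorption data (the chi-square form Σ(obs−pred)²/pred is the one commonly used in the biosorption literature, assumed here):

```python
import numpy as np

# Hypothetical observed vs. model-predicted values (illustrative only,
# not the paper's proton-exchange data).
observed = np.array([0.12, 0.25, 0.41, 0.55, 0.68])
predicted = np.array([0.10, 0.27, 0.40, 0.57, 0.66])

residuals = observed - predicted
sse = float(np.sum(residuals**2))                 # sum of square errors
rmse = float(np.sqrt(np.mean(residuals**2)))      # root mean square error
chi_sq = float(np.sum(residuals**2 / predicted))  # chi-square statistic
# Nonlinear regression coefficient of determination
r2 = 1.0 - sse / float(np.sum((observed - observed.mean())**2))
```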

  8. Simple Forest Canopy Thermal Exitance Model

    NASA Technical Reports Server (NTRS)

    Smith, J. A.; Goltz, S. M.

    1999-01-01

    We describe a model to calculate brightness temperature and surface energy balance for a forest canopy system. The model extends an earlier vegetation-only model by including a simple soil layer. The root mean square error in brightness temperature for a dense forest canopy was 2.5 °C. Surface energy balance predictions were also in good agreement: the corresponding root mean square errors for net radiation, latent heat, and sensible heat were 38.9, 30.7, and 41.4 W/sq m, respectively.

  9. A critical analysis of the accuracy of several numerical techniques for combustion kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1993-01-01

    A detailed analysis of the accuracy of several techniques recently developed for integrating stiff ordinary differential equations is presented. The techniques include two general-purpose codes EPISODE and LSODE developed for an arbitrary system of ordinary differential equations, and three specialized codes CHEMEQ, CREK1D, and GCKP4 developed specifically to solve chemical kinetic rate equations. The accuracy study is made by application of these codes to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas-phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. To illustrate the error variation in the different combustion regimes the species are divided into three types (reactants, intermediates, and products), and error versus time plots are presented for each species type and the temperature. These plots show that CHEMEQ is the most accurate code during induction and early heat release. During late heat release and equilibration, however, the other codes are more accurate. A single global quantity, a mean integrated root-mean-square error, that measures the average error incurred in solving the complete problem is used to compare the accuracy of the codes. Among the codes examined, LSODE is the most accurate for solving chemical kinetics problems. It is also the most efficient code, in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that use of the algebraic enthalpy conservation equation to compute the temperature can be more accurate and efficient than integrating the temperature differential equation.
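
    The mean integrated root-mean-square error idea, averaging a solver's pointwise relative error over the whole trajectory, can be illustrated on a single linear "kinetic" equation with a known exact solution (a toy stand-in for the paper's full combustion mechanisms; all values assumed):

```python
import numpy as np

# Solve dy/dt = -k*y with backward (implicit) Euler, the kind of scheme
# stiff codes such as LSODE build on: y_{n+1} = y_n / (1 + k*dt).
k, y0, dt, nsteps = 50.0, 1.0, 0.001, 200

y = np.empty(nsteps + 1)
y[0] = y0
for n in range(nsteps):
    y[n + 1] = y[n] / (1.0 + k * dt)

t = dt * np.arange(nsteps + 1)
exact = y0 * np.exp(-k * t)

# Root-mean-square of the pointwise relative error over the trajectory
rel_err = (y - exact) / exact
rms_error = float(np.sqrt(np.mean(rel_err**2)))
```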

  10. Feature Orientation and Positional Accuracy Assessment of Digital Orthophoto and Line Map for Large Scale Mapping: the Case Study on Bahir Dar Town, Ethiopia

    NASA Astrophysics Data System (ADS)

    Sisay, Z. G.; Besha, T.; Gessesse, B.

    2017-05-01

    This study used in-situ GPS data to validate the horizontal coordinate accuracy and linear-feature orientation of an orthophoto and line map of Bahir Dar city. GPS data were processed with GAMIT/GLOBK and Leica Geo Office (LGO) in a least-squares sense, with ties to local and regional GPS reference stations, to estimate horizontal coordinates at five checkpoints. Real-time kinematic GPS measurements of a road centerline were used to test the orientation accuracy of the photogrammetric line map. Compared with the in-situ GPS coordinates from GAMIT/GLOBK, the orthophoto is in good agreement, with root mean square errors (RMSE) of 12.45 cm in x and 13.97 cm in y, and 6.06 cm at the 95 % confidence level. When the GPS data are processed by LGO with a tie to the local GPS network, the orthophoto coordinates agree with the in-situ GPS coordinates to 16.71 cm and 18.98 cm in the x and y directions, respectively, and 11.07 cm at the 95 % confidence level. Similarly, the linear features fit the in-situ GPS measurements well: the GPS coordinates of the road centerline deviate from the corresponding line-map coordinates by mean values of 9.18 cm in the x-direction and -14.96 cm in the y-direction. It can therefore be concluded that the accuracy of the orthophoto and line map is within the national standard of error budget (25 cm).

  11. Direct estimation of tracer-kinetic parameter maps from highly undersampled brain dynamic contrast enhanced MRI.

    PubMed

    Guo, Yi; Lingala, Sajan Goud; Zhu, Yinghua; Lebel, R Marc; Nayak, Krishna S

    2017-10-01

    The purpose of this work was to develop and evaluate a T1-weighted dynamic contrast enhanced (DCE) MRI methodology where tracer-kinetic (TK) parameter maps are directly estimated from undersampled (k,t)-space data. The proposed reconstruction involves solving a nonlinear least squares optimization problem that includes explicit use of a full forward model to convert parameter maps to (k,t)-space, utilizing the Patlak TK model. The proposed scheme is compared against an indirect method that creates intermediate images by parallel imaging and compressed sensing prior to TK modeling. Thirteen fully sampled brain tumor DCE-MRI scans with 5-second temporal resolution were retrospectively undersampled at rates R = 20, 40, 60, 80, and 100 for each dynamic frame. TK maps are quantitatively compared based on root mean-squared-error (rMSE) and Bland-Altman analysis. The approach is also applied to four prospectively R = 30 undersampled whole-brain DCE-MRI data sets. In the retrospective study, the proposed method performed statistically better than the indirect method at R ≥ 80 for all 13 cases. This approach restored TK parameter values with smaller errors in tumor regions of interest, an improvement over a state-of-the-art indirect method. Applied prospectively, the proposed method provided whole-brain, high-resolution TK maps with good image quality. Model-based direct estimation of TK maps from (k,t)-space DCE-MRI data is feasible at up to 100-fold undersampling. Magn Reson Med 78:1566-1578, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
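
    Since the Patlak model is linear in its two tracer-kinetic parameters, Ct(t) = Ktrans·∫Cp dτ + vp·Cp(t), fitting it to a tissue concentration curve reduces to a linear least-squares problem. A sketch on synthetic, noiseless curves (the input function and parameter values are assumptions, and the paper fits directly in (k,t)-space rather than from image-domain curves as done here):

```python
import numpy as np

t = np.linspace(0.0, 300.0, 61)          # seconds
cp = np.exp(-t / 120.0) * (t / 60.0)     # hypothetical plasma input Cp(t)
ktrans_true, vp_true = 0.002, 0.05       # assumed "true" TK parameters

# Trapezoidal cumulative integral of Cp
cp_int = np.concatenate(
    ([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t)))
)
ct = ktrans_true * cp_int + vp_true * cp  # noiseless tissue curve

# Patlak is linear in (Ktrans, vp): solve the least-squares problem directly
A = np.column_stack([cp_int, cp])
ktrans_est, vp_est = np.linalg.lstsq(A, ct, rcond=None)[0]
```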

  12. Differential Geometry Applied To Least-Square Error Surface Approximations

    NASA Astrophysics Data System (ADS)

    Bolle, Ruud M.; Sabbah, Daniel

    1987-08-01

    This paper focuses on extraction of the parameters of individual surfaces from noisy depth maps. The basis for this is a set of least-square error polynomial approximations to the range data and the curvature properties that can be computed from these approximations. The curvature properties are derived using the invariants of the Weingarten Map evaluated at the origin of local coordinate systems centered at the range points. The Weingarten Map is a well-known concept in differential geometry; a brief treatment of the differential geometry pertinent to surface curvature is given. We use the curvature properties of the approximations to extract certain surface parameters. We then show that curvature properties alone are not enough to obtain all the parameters of the surfaces; higher-order properties (information about change of curvature) are needed to obtain full parametric descriptions. This surface parameter estimation problem arises in the design of a vision system to recognize 3D objects whose surfaces are composed of planar patches and patches of quadrics of revolution (quadrics that are surfaces of revolution). A significant portion of man-made objects can be modeled using these surfaces. The actual process of recognition and parameter extraction is framed as a set of stacked parameter space transforms. The transforms are "stacked" in the sense that any one transform computes only a partial geometric description that forms the input to the next transform. Readers interested in the organization and control of the recognition and parameter extraction process are referred to [Sabbah86]; this paper briefly touches upon the organization but concentrates mainly on the geometrical aspects of parameter extraction.

  13. Patient safety education to change medical students' attitudes and sense of responsibility.

    PubMed

    Roh, Hyerin; Park, Seok Ju; Kim, Taekjoong

    2015-01-01

    This study examined changes in the perceptions and attitudes as well as the sense of individual and collective responsibility in medical students after they received patient safety education. A three-day patient safety curriculum was implemented for third-year medical students shortly before entering their clerkship. Before and after training, we administered a questionnaire, which was analysed quantitatively. Additionally, we asked students to answer questions about their expected behaviours in response to two case vignettes. Their answers were analysed qualitatively. There was improvement in students' concepts of patient safety after training. Before training, they showed good comprehension of the inevitability of error, but most students blamed individuals for errors and expressed a strong sense of individual responsibility. After training, students increasingly attributed errors to system dysfunction and reported more self-confidence in speaking up about colleagues' errors. However, due to the hierarchical culture, students still described difficulties communicating with senior doctors. Patient safety education effectively shifted students' attitudes towards systems-based thinking and increased their sense of collective responsibility. Strategies for improving superior-subordinate communication within a hierarchical culture should be added to the patient safety curriculum.

  14. Measuring Dispersion Effects of Factors in Factorial Experiments.

    DTIC Science & Technology

    1988-01-01

    The mean square of error is MSE = SSE/(N-p); the sum of squares of pure error is SSPE = Σ(i=1..n) Σ(j=1..r) (y_ij - ȳ_i)², and the mean square of pure error is MSPE = SSPE/(n(r-1)). Let δ_i be 1 if the level of the factor in the ith run is 1, and 0 if it is 0. 3.1. First Measure. We have SSPE = Σ(i=1..n) Σ(j=1..r) δ_i (y_ij - ȳ_i)² + Σ(i=1..n) Σ(j=1..r) (1-δ_i)(y_ij - ȳ_i)². The first component in SSPE corresponds to level 1 of the factor and has (Σ(i=1..n) δ_i)(r-1) degrees of freedom; the second component corresponds to level 0.

  15. Smooth empirical Bayes estimation of observation error variances in linear systems

    NASA Technical Reports Server (NTRS)

    Martz, H. F., Jr.; Lian, M. W.

    1972-01-01

    A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.

  16. Modeling Multiplicative Error Variance: An Example Predicting Tree Diameter from Stump Dimensions in Baldcypress

    Treesearch

    Bernard R. Parresol

    1993-01-01

    In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...

  17. Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Nagchaudhuri, Abhijit

    1998-01-01

    This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrumental pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
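
    The core LMS update used to tune transversal filter weights to a transfer function can be sketched in a few lines. Here the unknown "plant" is a toy three-tap FIR system, not the spacecraft servo model; signal, step size, and tap count are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.1])     # unknown FIR system to identify
x = rng.standard_normal(5000)           # excitation signal
d = np.convolve(x, h_true)[: len(x)]    # desired (measured) response

mu, n_taps = 0.01, 3
w = np.zeros(n_taps)                    # adaptive transversal filter weights
buf = np.zeros(n_taps)                  # delay line of recent inputs
for n in range(len(x)):
    buf = np.roll(buf, 1)
    buf[0] = x[n]
    e = d[n] - w @ buf                  # instantaneous error
    w += mu * e * buf                   # LMS weight update
```

    After enough iterations the weights converge to the true impulse response; in the Filtered-X variant, the same update is applied to the input filtered through a model of the secondary path.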

  18. Automatic weld torch guidance control system

    NASA Technical Reports Server (NTRS)

    Smaith, H. E.; Wall, W. A.; Burns, M. R., Jr.

    1982-01-01

    A highly reliable, fully digital, closed-circuit television optical type automatic weld seam tracking control system was developed. This automatic tracking equipment is used to reduce weld tooling costs and increase overall automatic welding reliability. The system utilizes a charge injection device digital camera which has 60,512 individual pixels as the light sensing elements. Through conventional scanning means, each pixel in the focal plane is sequentially scanned, the light level signal digitized, and an 8-bit word transmitted to scratch pad memory. From memory, the microprocessor performs an analysis of the digital signal and computes the tracking error. Lastly, the corrective signal is transmitted to a cross seam actuator digital drive motor controller to complete the closed loop, feedback, tracking system. This weld seam tracking control system is capable of a tracking accuracy of ±0.2 mm, or better. As configured, the system is applicable to square butt, V-groove, and lap joint weldments.

  19. Interval Predictor Models for Data with Measurement Uncertainty

    NASA Technical Reports Server (NTRS)

    Lacerda, Marcio J.; Crespo, Luis G.

    2017-01-01

    An interval predictor model (IPM) is a computational model that predicts the range of an output variable given input-output data. This paper proposes strategies for constructing IPMs based on semidefinite programming and sum of squares (SOS). The models are optimal in the sense that they yield an interval valued function of minimal spread containing all the observations. Two different scenarios are considered. The first one is applicable to situations where the data is measured precisely whereas the second one is applicable to data subject to known biases and measurement error. In the latter case, the IPMs are designed to fully contain regions in the input-output space where the data is expected to fall. Moreover, we propose a strategy for reducing the computational cost associated with generating IPMs as well as means to simulate them. Numerical examples illustrate the usage and performance of the proposed formulations.

  20. Fitting aerodynamic forces in the Laplace domain: An application of a nonlinear nongradient technique to multilevel constrained optimization

    NASA Technical Reports Server (NTRS)

    Tiffany, S. H.; Adams, W. M., Jr.

    1984-01-01

    A technique which employs both linear and nonlinear methods in a multilevel optimization structure to best approximate generalized unsteady aerodynamic forces for arbitrary motion is described. Optimum selection of free parameters is made in a rational function approximation of the aerodynamic forces in the Laplace domain such that a best fit is obtained, in a least squares sense, to tabular data for purely oscillatory motion. The multilevel structure and the corresponding formulation of the objective models are presented which separate the reduction of the fit error into linear and nonlinear problems, thus enabling the use of linear methods where practical. Certain equality and inequality constraints that may be imposed are identified; a brief description of the nongradient, nonlinear optimizer which is used is given; and results which illustrate application of the method are presented.

  1. Theoretical and experimental investigations of sensor location for optimal aeroelastic system state estimation

    NASA Technical Reports Server (NTRS)

    Liu, G.

    1985-01-01

    One of the major concerns in the design of an active control system is obtaining the information needed for effective feedback, which involves the combination of sensing and estimation. A sensor location index is defined as the weighted sum of the mean square estimation errors, in which the sensor locations can be regarded as estimator design parameters. The design goal is to choose these locations to minimize the sensor location index. The choice of the number of sensors is a tradeoff between the estimation quality, based upon the same performance index, and the total cost of installing and maintaining extra sensors. An experimental study for choosing the sensor location was conducted on an aeroelastic system. The system model, which includes the unsteady aerodynamic model developed by Stephen Rock, was improved. Experimental results verify the trend of the theoretical predictions of the sensor location index for different sensor locations at various wind speeds.

  2. Performance Evaluation of a Biometric System Based on Acoustic Images

    PubMed Central

    Izquierdo-Fuente, Alberto; del Val, Lara; Jiménez, María I.; Villacorta, Juan J.

    2011-01-01

    An acoustic electronic scanning array for acquiring images from a person using a biometric application is developed. Based on pulse-echo techniques, multifrequency acoustic images are obtained for a set of positions of a person (front, front with arms outstretched, back and side). Two Uniform Linear Arrays (ULA) with 15 λ/2-equispaced sensors have been employed, using different spatial apertures in order to reduce sidelobe levels. Working frequencies have been designed on the basis of the main lobe width, the grating lobe levels and the frequency responses of people and sensors. For a case-study with 10 people, the acoustic profiles, formed by all images acquired, are evaluated and compared in a mean square error sense. Finally, system performance, using False Match Rate (FMR)/False Non-Match Rate (FNMR) parameters and the Receiver Operating Characteristic (ROC) curve, is evaluated. On the basis of the obtained results, this system could be used for biometric applications. PMID:22163708

  3. A partial least squares based spectrum normalization method for uncertainty reduction for laser-induced breakdown spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou

    2013-10-01

    A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
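
    For contrast, the "generally applied" area normalization that the PLS method is benchmarked against can be written down directly. In this toy sketch the shot-to-shot fluctuation is purely multiplicative (a simplifying assumption), so dividing each spectrum by its total area removes the variation entirely; real LIBS fluctuations are only partly multiplicative, which is what motivates the plasma-parameter-aware PLS approach:

```python
import numpy as np

rng = np.random.default_rng(1)
# One synthetic emission line (Gaussian profile on a 100-pixel axis)
true_line = np.exp(-0.5 * ((np.arange(100) - 50) / 3.0) ** 2)

shots = []
for _ in range(200):
    gain = 1.0 + 0.2 * rng.standard_normal()  # multiplicative shot-to-shot drift
    shots.append(gain * true_line)
shots = np.array(shots)

peak_raw = shots[:, 50]                                    # raw peak intensities
peak_norm = (shots / shots.sum(axis=1, keepdims=True))[:, 50]  # area-normalized

def rsd(v):
    # relative standard deviation, the precision metric used in the paper
    return float(np.std(v) / np.mean(v))
```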

  4. Super-linear Precision in Simple Neural Population Codes

    NASA Astrophysics Data System (ADS)

    Schwab, David; Fiete, Ila

    2015-03-01

    A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher Information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arises because it gives, through the Cramer-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well-known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly when considering the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
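
    For independent Poisson neurons, the Fisher information is FI(s) = Σ_i f_i'(s)² / f_i(s), and 1/FI is the Cramér-Rao bound on unbiased mean-squared error. A sketch with assumed Gaussian tuning curves (population size, rates, and widths are illustrative, not the paper's model):

```python
import numpy as np

centers = np.linspace(-10.0, 10.0, 21)  # preferred stimuli, unit spacing
rmax, s0 = 10.0, 0.3                    # peak firing rate, true stimulus

def fisher_info(sigma):
    # FI(s0) = sum_i f_i'(s0)^2 / f_i(s0) for independent Poisson neurons
    f = rmax * np.exp(-0.5 * ((s0 - centers) / sigma) ** 2)
    fprime = f * (centers - s0) / sigma**2
    return float(np.sum(fprime**2 / f))

fi_narrow, fi_wide = fisher_info(1.0), fisher_info(4.0)
cr_narrow, cr_wide = 1.0 / fi_narrow, 1.0 / fi_wide  # Cramér-Rao MSE bounds
```

    In this 1D setting narrower tuning yields larger Fisher information (roughly FI ∝ 1/σ for fixed neuron density), illustrating why FI-optimal codes can favor sparse, sharply tuned responses even when the resulting MSE is poor.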

  5. Effect of different head-neck-jaw postures on cervicocephalic kinesthetic sense.

    PubMed

    Zafar, H; Alghadir, A H; Iqbal, Z A

    2017-12-01

    To investigate the effect of different induced head-neck-jaw postures on head-neck relocation error among healthy subjects. 30 healthy adult male subjects participated in this study. Cervicocephalic kinesthetic sense was measured while standing, habitual sitting, habitual sitting with clenched jaw and habitual sitting with forward head posture during right rotation, left rotation, flexion and extension using kinesthetic sensibility test. Head-neck relocation error was least while standing, followed by habitual sitting, habitual sitting with forward head posture and habitual sitting with jaw clenched. However, there was no significant difference in error between different tested postures during all the movements. To the best of our knowledge, this is the first study to see the effect of different induced head-neck-jaw postures on head-neck position sense among healthy subjects. Assuming a posture for a short duration of time doesn't affect head-neck relocation error in normal healthy subjects.

  6. Dual-mass vibratory rate gyroscope with suppressed translational acceleration response and quadrature-error correction capability

    NASA Technical Reports Server (NTRS)

    Clark, William A. (Inventor); Juneau, Thor N. (Inventor); Lemkin, Mark A. (Inventor); Roessig, Allen W. (Inventor)

    2001-01-01

    A microfabricated vibratory rate gyroscope to measure rotation includes two proof-masses mounted in a suspension system anchored to a substrate. The suspension has two principal modes of compliance, one of which is driven into oscillation. The driven oscillation combined with rotation of the substrate about an axis perpendicular to the substrate results in Coriolis acceleration along the other mode of compliance, the sense-mode. The sense-mode is designed to respond to Coriolis acceleration while suppressing the response to translational acceleration. This is accomplished using one or more rigid levers connecting the two proof-masses. The lever allows the proof-masses to move in opposite directions in response to Coriolis acceleration. The invention includes a means for canceling errors, termed quadrature error, due to imperfections in implementation of the sensor. Quadrature-error cancellation utilizes electrostatic forces to cancel out undesired sense-axis motion in phase with drive-mode position.

  7. Geospatial distribution modeling and determining suitability of groundwater quality for irrigation purpose using geospatial methods and water quality index (WQI) in Northern Ethiopia

    NASA Astrophysics Data System (ADS)

    Gidey, Amanuel

    2018-06-01

    Determining the suitability and vulnerability of groundwater quality for irrigation use is a first step toward careful management of groundwater resources and toward diminishing impacts on irrigation. This study was conducted to determine the overall suitability of groundwater quality for irrigation use and to generate spatial distribution maps for the Elala catchment, Northern Ethiopia. Thirty-nine groundwater samples were collected to analyze and map the water quality variables. Atomic absorption spectrophotometry, ultraviolet spectrophotometry, titration and calculation methods were used for laboratory groundwater quality analysis. ArcGIS geospatial analysis tools, semivariogram model types and interpolation methods were used to generate the geospatial distribution maps. Twelve and eight water quality variables were used to produce the weighted overlay and irrigation water quality index models, respectively. Root-mean-square error, mean square error, absolute square error, mean error, root-mean-square standardized error, and measured versus predicted values were used for cross-validation. The weighted overlay model showed that 146 km2 of the catchment is highly suitable, 135 km2 moderately suitable and 60 km2 unsuitable for irrigation use. The irrigation water quality index indicates 10.26% of samples with no restriction, 23.08% with low restriction, 20.51% with moderate restriction, 15.38% with high restriction and 30.76% with severe restriction for irrigation use. GIS and the irrigation water quality index are effective methods for irrigation water resource management: they support full-yield irrigation production, improve food security, sustain production over the long term, and help avoid growing environmental problems for future generations.

  8. Methods for estimating the magnitude and frequency of peak streamflows at ungaged sites in and near the Oklahoma Panhandle

    USGS Publications Warehouse

    Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.

    2015-09-28

    Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all percentage of annual exceedance probabilities and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use for stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.
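
    Regional regression equations of this kind typically take a power-law form, Qp = b0·A^b1, fit in log space with drainage area A as the single explanatory variable. A sketch using ordinary least squares on synthetic data (the report itself uses generalized least squares on real gage records; every value below is made up):

```python
import numpy as np

rng = np.random.default_rng(7)
area = 10 ** rng.uniform(0.5, 3.3, 40)   # drainage areas, square miles
b0_true, b1_true = 2.0, 0.6              # assumed "true" regression parameters
# Synthetic peak-flow statistics: log10(Qp) = b0 + b1*log10(A) + noise
log_q = b0_true + b1_true * np.log10(area) + 0.1 * rng.standard_normal(40)

# Ordinary least squares in log space
X = np.column_stack([np.ones(area.size), np.log10(area)])
b0, b1 = np.linalg.lstsq(X, log_q, rcond=None)[0]

def predict_peak(a_sq_mi):
    # Estimated peak streamflow (cfs) for an ungaged site of given area
    return 10 ** (b0 + b1 * np.log10(a_sq_mi))
```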

  9. Capacitive touch sensing : signal and image processing algorithms

    NASA Astrophysics Data System (ADS)

    Baharav, Zachi; Kakarala, Ramakrishna

    2011-03-01

    Capacitive touch sensors have been in use for many years and recently gained center stage with their ubiquitous use in smartphones. In this work we analyze the most common method of projected capacitive sensing, absolute capacitive sensing, together with the most common sensing pattern, the diamond-shaped sensor. After a brief introduction to the problem and the reasons behind its popularity, we formulate the problem as a reconstruction from projections. We derive analytic solutions for two simple cases: a circular finger on a wire grid, and a square finger on a square grid. The solutions give insight into the ambiguities of finding finger location from sensor readings. The main contribution of our paper is the discussion of interpolation algorithms, including simple linear interpolation, curve fitting (parabolic and Gaussian), filtering, general look-up tables, and combinations thereof. We conclude with observations on the limits of the present algorithmic methods, and point to possible future research.
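
    The parabolic curve-fitting option reduces to the classic three-point vertex interpolation around the peak sensor. A sketch with assumed sensor readings and electrode pitch (toy values, not from the paper):

```python
import numpy as np

# Readings along one sensing axis; the finger sits near electrode index 2
readings = np.array([2.0, 10.0, 30.0, 24.0, 5.0])
pitch_mm = 5.0                       # assumed electrode spacing

k = int(np.argmax(readings))         # peak sensor index
ym, y0, yp = readings[k - 1], readings[k], readings[k + 1]
# Vertex offset of the parabola through (k-1, ym), (k, y0), (k+1, yp);
# delta lies in (-0.5, 0.5) electrodes when y0 is the strict maximum
delta = 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)
finger_pos_mm = (k + delta) * pitch_mm
```

    Gaussian fitting uses the same three points on log-intensities; both break down when the finger straddles the edge of the grid, which is one of the ambiguities discussed above.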

  10. Self-Assembled Molecular Squares Containing Metal-Based Donor: Synthesis and Application in the Sensing of Nitro-aromatics†

    PubMed Central

    Vajpayee, Vaishali; Kim, Hyunuk; Mishra, Anurag; Mukherjee, Partha Sarathi; Lee, Min Hyung; Kim, Hwan Kyu

    2012-01-01

    Self-assemblies between a linear Pt-based donor and ferrocene-chelated metallic acceptors produce novel heterometallic squares 4 and 5, which show fluorescence quenching upon addition of nitro-aromatics. PMID:21321785

  11. [NIR Assignment of Magnolol by 2D-COS Technology and Model Application in Huoxiangzhengqi Oral Liquid].

    PubMed

    Pei, Yan-ling; Wu, Zhi-sheng; Shi, Xin-yuan; Pan, Xiao-ning; Peng, Yan-fang; Qiao, Yan-jiang

    2015-08-01

    Near infrared (NIR) spectroscopy assignment of Magnolol was performed using deuterated chloroform as the solvent and two-dimensional correlation spectroscopy (2D-COS) technology. According to the synchronous spectra of deuterated chloroform and Magnolol, 1365~1455, 1600~1720, 2000~2181 and 2275~2465 nm are the characteristic absorption regions of Magnolol. Relating these bands to the structure of Magnolol, 1440 nm corresponds to the stretching vibration of the phenolic O-H group; 1679 nm to the stretching vibration of the aryl group and of the methyl group attached to it; 2117, 2304, 2339 and 2370 nm to combinations of the stretching, bending and deformation vibrations of aryl C-H; and 2445 nm to the bending vibration of the methyl group linked to the aryl group. These bands are attributed to the characteristics of Magnolol. Huoxiangzhengqi Oral Liquid was used to study Magnolol: the characteristic band from spectral assignment and the bands selected by interval Partial Least Squares (iPLS) and Synergy interval Partial Least Squares (SiPLS) were used to establish Partial Least Squares (PLS) quantitative models. The coefficients of determination Rcal(2) and Rpre(2) were greater than 0.99, and the Root Mean Square Error of Calibration (RMSEC), Root Mean Square Error of Cross Validation (RMSECV) and Root Mean Square Error of Prediction (RMSEP) were very small. This indicates that the characteristic band from spectral assignment gives the same results as chemometric band selection in the PLS model. The work provides a reference for NIR spectral assignment of chemical components in Chinese Materia Medica and for interpreting NIR band selection.

  12. Clover: Compiler directed lightweight soft error resilience

    DOE PAGES

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoints. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUEs (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, a 75% reduction compared to that of the state-of-the-art soft error resilience technique.

  13. Virtual sensors for active noise control in acoustic-structural coupled enclosures using structural sensing: part II--Optimization of structural sensor placement.

    PubMed

    Halim, Dunant; Cheng, Li; Su, Zhongqing

    2011-04-01

    This work proposed an optimization approach for structural sensor placement to improve the performance of a vibro-acoustic virtual sensor for active noise control applications. The vibro-acoustic virtual sensor was designed to estimate the interior sound pressure of an acoustic-structural coupled enclosure using structural sensors. A spectral-spatial performance metric was proposed, which was used to quantify the averaged structural sensor output energy of a vibro-acoustic system excited by a spatially varying point source. It was shown that (i) the overall virtual sensing error energy was contributed additively by the modal virtual sensing error and the measurement noise energy; (ii) each modal virtual sensing error was determined by the modal observability levels of both the structural sensing and the target acoustic virtual sensing; and (iii) the strength of each modal observability level was influenced by the modal coupling and resonance frequencies of the associated uncoupled structural/cavity modes. An optimal design of structural sensor placement was proposed to achieve sufficiently high modal observability levels for certain important panel- and cavity-controlled modes. Numerical analysis on a panel-cavity system demonstrated the importance of structural sensor placement for virtual sensing and active noise control performance, particularly for cavity-controlled modes.

  14. Sex differences in the shoulder joint position sense acuity: a cross-sectional study.

    PubMed

    Vafadar, Amir K; Côté, Julie N; Archambault, Philippe S

    2015-09-30

    Work-related musculoskeletal disorders (WMSD) are the most expensive form of work disability. Female sex has been considered an individual risk factor for the development of WMSD, specifically in the neck and shoulder region. One of the factors that might contribute to the higher injury rate in women is possible differences in neuromuscular control. Accordingly, the purpose of this study was to estimate the effect of sex on shoulder joint position sense (JPS) acuity, as a part of shoulder neuromuscular control, in healthy individuals. Twenty-eight healthy participants, 14 females and 14 males, were recruited. To test position sense acuity, subjects were asked to flex their dominant shoulder to one of three pre-defined angle ranges (low, mid and high) with eyes closed, hold their arm in that position for three seconds, return to the starting position, and then immediately replicate the same joint flexion angle; the difference between the reproduced and original angle was taken as the measure of position sense error. The errors were measured using a Vicon motion capture system. Subjects reproduced nine positions in total (3 ranges × 3 trials each). Calculation of absolute repositioning error (magnitude of error) showed no significant difference between men and women (p-value ≥ 0.05). However, analysis of the direction of error (constant error) showed a significant difference between the sexes: women tended mostly to overestimate the target, whereas men tended both to overestimate and to underestimate it (p-value ≤ 0.01, observed power = 0.79). The results also showed that men had significantly more variable error, indicating more variability in their position sense compared to women (p-value ≤ 0.05, observed power = 0.78). The differences observed in the constant JPS error suggest that men and women might use different neuromuscular control strategies in the upper limb. In addition, the higher JPS variability observed in men might be one of the factors contributing to their lower rate of musculoskeletal disorders compared to women. This study showed that shoulder position sense, as part of the neuromuscular control system, differs between men and women. This finding can help us better understand the reasons behind the higher rate of musculoskeletal disorders in women, especially in working environments.
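The three error measures used in this study are standard summaries of signed repositioning errors: absolute error (magnitude), constant error (bias/direction), and variable error (trial-to-trial variability). A minimal illustration in Python, with hypothetical trial values:

```python
import numpy as np

# Signed repositioning errors (degrees): reproduced minus target angle.
# Hypothetical trials; positive = overshoot, negative = undershoot.
errors = np.array([2.1, -1.4, 3.0, 0.5, -2.2, 1.8])

absolute_error = np.mean(np.abs(errors))   # magnitude of error
constant_error = np.mean(errors)           # directional bias of error
variable_error = np.std(errors, ddof=1)    # trial-to-trial variability
```

Note how a sample can have a near-zero constant error (overshoots and undershoots cancelling, as reported for the men) while still showing a large absolute and variable error.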

  15. Evaluation of joint position sense measured by inversion angle replication error in patients with an osteochondral lesion of the talus.

    PubMed

    Nakasa, Tomoyuki; Adachi, Nobuo; Shibuya, Hayatoshi; Okuhara, Atsushi; Ochi, Mitsuo

    2013-01-01

    The etiology of the osteochondral lesion of the talar dome (OLT) remains unclear. A joint position sense deficit of the ankle is reported to be a possible cause of ankle disorder. Repeated contact of the articular surface of the talar dome with the plafond during inversion might be a cause of OLT. The aim of the present study was to evaluate the joint position sense deficit by measuring the replication error of the inversion angle in patients with OLT. The replication error, which is the difference between the index angle and replication angle in inversion, was measured in 15 patients with OLT. The replication error in 15 healthy volunteers was evaluated as a control group. The side-to-side differences of the replication errors between the patients with OLT and healthy volunteers, and the replication errors at each angle between the involved and uninvolved ankle in the patients with OLT, were investigated. Finally, the side-to-side differences of the replication errors between the patients with OLT with a traumatic and nontraumatic history were compared. The side-to-side difference in the patients with OLT (1.3° ± 0.2°) was significantly greater than that in the healthy subjects (0.4° ± 0.7°) (p ≤ .05). Significant differences were found between the involved and uninvolved sides at 10°, 15°, 20°, and 25° in the patients with OLT. No significant difference (p > .05) was found between the patients with traumatic and nontraumatic OLT. The present study found that the patients with OLT have a joint position sense deficit during inversion movement, regardless of a traumatic history. Although various factors for the etiology of OLT have been reported, the joint position sense deficit in inversion might be a cause of OLT. Copyright © 2013 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  16. Intelligent sensing sensory quality of Chinese rice wine using near infrared spectroscopy and nonlinear tools.

    PubMed

    Ouyang, Qin; Chen, Quansheng; Zhao, Jiewen

    2016-02-05

    The approach presented herein reports the application of near infrared (NIR) spectroscopy, in contrast with a human sensory panel, as a tool for estimating Chinese rice wine quality; concretely, to achieve the prediction of the overall sensory scores assigned by the trained sensory panel. A back propagation artificial neural network (BPANN) combined with the adaptive boosting (AdaBoost) algorithm, namely BP-AdaBoost, was proposed as a novel nonlinear modeling algorithm. First, the optimal spectral intervals were selected by synergy interval partial least squares (Si-PLS). Then, the BP-AdaBoost model based on the optimal spectral intervals was established, called the Si-BP-AdaBoost model. These models were optimized by cross validation, and the performance of each final model was evaluated according to the correlation coefficient (Rp) and root mean square error of prediction (RMSEP) in the prediction set. Si-BP-AdaBoost showed excellent performance in comparison with the other models; the best Si-BP-AdaBoost model achieved Rp=0.9180 and RMSEP=2.23 in the prediction set. It was concluded that NIR spectroscopy combined with Si-BP-AdaBoost is an appropriate method for the prediction of sensory quality in Chinese rice wine. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. An empirical model for estimating solar radiation in the Algerian Sahara

    NASA Astrophysics Data System (ADS)

    Benatiallah, Djelloul; Benatiallah, Ali; Bouchouicha, Kada; Hamouda, Messaoud; Nasri, Bahous

    2018-05-01

    The present work aims to apply the empirical model R.sun to evaluate the solar radiation fluxes on a horizontal plane under clear-sky conditions at Adrar (27°18 N, 0°11 W), Algeria, and to compare the results with measurements at the site. The expected results of this comparison are of importance for investment studies of solar systems (solar power plants for electricity production, CSP) and also for the design and performance analysis of any system using solar energy. The statistical indicators used to evaluate the accuracy of the model were the mean bias error (MBE), root mean square error (RMSE) and coefficient of determination. The results show that for global radiation, the daily correlation coefficient is 0.9984, the mean absolute percentage error is 9.44 %, the daily mean bias error is -7.94 %, and the daily root mean square error is 12.31 %.
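The MBE/RMSE pair reported above is the usual validation summary for radiation models: MBE captures systematic over- or underestimation, RMSE the overall scatter. A minimal Python sketch with entirely hypothetical irradiation values (not the Adrar data):

```python
import numpy as np

def validation_stats(measured, modeled):
    """MBE and RMSE (both as % of the measured mean) and R^2."""
    m = np.asarray(measured, dtype=float)
    p = np.asarray(modeled, dtype=float)
    mbe = np.mean(p - m)                         # signed bias
    rmse = np.sqrt(np.mean((p - m) ** 2))        # overall scatter
    ss_res = np.sum((m - p) ** 2)
    ss_tot = np.sum((m - m.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                   # coefficient of determination
    scale = 100.0 / m.mean()                     # express errors as percentages
    return mbe * scale, rmse * scale, r2

# Hypothetical daily global irradiation values (Wh/m^2).
measured = np.array([6100, 6400, 5900, 6800, 7000, 6600])
modeled  = np.array([5900, 6200, 5800, 6500, 6900, 6400])
mbe_pct, rmse_pct, r2 = validation_stats(measured, modeled)
```

A negative MBE, as in the paper's -7.94 %, means the model systematically underestimates the measurements; RMSE is always at least as large as |MBE|.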

  18. Response Surface Modeling Using Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; DeLoach, Richard

    2001-01-01

    A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one factor at a time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.

  19. Effective Algorithm for Detection and Correction of the Wave Reconstruction Errors Caused by the Tilt of Reference Wave in Phase-shifting Interferometry

    NASA Astrophysics Data System (ADS)

    Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying

    2010-04-01

    In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. In practice, however, a slight tilt of the reference wave often occurs, and this tilt introduces unexpected errors into the reconstructed object wavefront. Usually the least-squares method with iterations, which is time consuming, is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple, effective algorithm is suggested to detect and then correct this kind of error. The method uses only simple mathematical operations, avoiding the least-squares equations needed in most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames, for both smooth and diffusing objects, and its excellent performance has been verified by computer simulations. The numerical simulations show that the wave reconstruction errors can be reduced by two orders of magnitude.

  20. Validation of Core Temperature Estimation Algorithm

    DTIC Science & Technology

    2016-01-29

    plot of observed versus estimated core temperature with the line of identity (dashed) and the least squares regression line (solid) and line equation...estimated PSI with the line of identity (dashed) and the least squares regression line (solid) and line equation in the top left corner. (b) Bland...for comparison. The root mean squared error (RMSE) was also computed, as given by Equation 2.
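The validation tools named in these fragments (a least squares regression line against the line of identity, RMSE, and what the truncated "Bland…" presumably denotes, a Bland-Altman comparison) can be sketched as follows; the temperature pairs are hypothetical, not data from the report:

```python
import numpy as np

# Observed vs. estimated core temperature (degrees C), hypothetical pairs.
observed  = np.array([37.0, 37.4, 37.9, 38.3, 38.8, 39.1])
estimated = np.array([37.1, 37.3, 38.1, 38.2, 38.9, 39.3])

# Least squares regression line of estimated on observed; a perfect
# estimator would give slope 1, intercept 0 (the line of identity).
slope, intercept = np.polyfit(observed, estimated, 1)

# Root mean squared error between the two series.
rmse = np.sqrt(np.mean((estimated - observed) ** 2))

# Bland-Altman summary: mean difference (bias) and 95% limits of agreement.
diff = estimated - observed
bias = float(diff.mean())
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
```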

  1. Determination and error analysis of emittance and spectral emittance measurements by remote sensing

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Kumar, R.

    1977-01-01

    The author has identified the following significant results. From the theory of remote sensing of surface temperatures, an equation of the upper bound of absolute error of emittance was determined. It showed that the absolute error decreased with an increase in contact temperature, whereas, it increased with an increase in environmental integrated radiant flux density. Change in emittance had little effect on the absolute error. A plot of the difference between temperature and band radiance temperature vs. emittance was provided for the wavelength intervals: 4.5 to 5.5 microns, 8 to 13.5 microns, and 10.2 to 12.5 microns.

  2. Systematic Error Modeling and Bias Estimation

    PubMed Central

    Zhang, Feihu; Knoll, Alois

    2016-01-01

    This paper analyzes the statistic properties of the systematic error in terms of range and bearing during the transformation process. Furthermore, we rely on a weighted nonlinear least square method to calculate the biases based on the proposed models. The results show the high performance of the proposed approach for error modeling and bias estimation. PMID:27213386

  3. Equalization and detection for digital communication over nonlinear bandlimited satellite communication channels. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gutierrez, Alberto, Jr.

    1995-01-01

    This dissertation evaluates receiver-based methods for mitigating the effects of the nonlinear bandlimited signal distortion present in high data rate satellite channels. The effects of the nonlinear bandlimited distortion are illustrated for digitally modulated signals. A lucid development of the low-pass Volterra discrete-time model for a nonlinear communication channel is presented. In addition, finite-state machine models are explicitly developed for a nonlinear bandlimited satellite channel. A nonlinear fixed equalizer based on Volterra series has previously been studied for compensation of noiseless signal distortion due to a nonlinear satellite channel. This dissertation studies adaptive Volterra equalizers on a downlink-limited nonlinear bandlimited satellite channel. We employ as figures of merit performance in the mean-square error and probability-of-error senses. In addition, a receiver consisting of a fractionally-spaced equalizer (FSE) followed by a Volterra equalizer (FSE-Volterra) is found to give improvement beyond that gained by the Volterra equalizer. Significant probability-of-error performance improvement is found for multilevel modulation schemes. Also, the probability-of-error improvement is more significant for modulation schemes, constant amplitude and multilevel, which require higher signal-to-noise ratios (i.e., higher modulation orders) for reliable operation. The maximum likelihood sequence detection (MLSD) receiver for a nonlinear satellite channel, a bank of matched filters followed by a Viterbi detector, serves as a probability-of-error lower bound for the Volterra and FSE-Volterra equalizers. However, this receiver had not previously been evaluated for a specific satellite channel. In this work, an MLSD receiver is evaluated for a specific downlink-limited satellite channel. Because of the bank of matched filters, the MLSD receiver may be high in complexity. Consequently, the probability-of-error performance of a more practical suboptimal MLSD receiver, requiring only a single receive filter, is also evaluated.
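The discrete-time Volterra model mentioned above expands the channel output in polynomial kernels of past inputs. A toy second-order instance with memory length 2, using made-up kernel values rather than anything from the dissertation:

```python
import numpy as np

# Hypothetical 2nd-order discrete Volterra model with memory length 2:
# y[n] = sum_i h1[i] x[n-i] + sum_{i,j} h2[i,j] x[n-i] x[n-j]
h1 = np.array([1.0, 0.4])                 # linear kernel
h2 = np.array([[0.05, 0.02],              # quadratic kernel (symmetric)
               [0.02, 0.01]])

def volterra(x):
    """Evaluate the model with a zero initial condition."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    xp = np.concatenate([[0.0], x])       # prepend x[-1] = 0
    for n in range(len(x)):
        v = np.array([xp[n + 1], xp[n]])  # [x[n], x[n-1]]
        y[n] = h1 @ v + v @ h2 @ v        # linear + quadratic terms
    return y

y = volterra([1.0, -1.0, 0.5])
```

The quadratic term is what models the memoryful nonlinearity (e.g., a saturating amplifier followed by filtering); a Volterra equalizer fits kernels of this form adaptively to invert the channel.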

  4. Photogrammetric Method and Software for Stream Planform Identification

    NASA Astrophysics Data System (ADS)

    Stonedahl, S. H.; Stonedahl, F.; Lohberg, M. M.; Lusk, K.; Miller, D.

    2013-12-01

    Accurately characterizing the planform of a stream is important for many purposes, including recording measurement and sampling locations, monitoring change due to erosion or volumetric discharge, and spatial modeling of stream processes. While expensive surveying equipment or high resolution aerial photography can be used to obtain planform data, our research focused on developing a close-range photogrammetric method (and accompanying free/open-source software) to serve as a cost-effective alternative. This method involves securing and floating a wooden square frame on the stream surface at several locations, taking photographs from numerous angles at each location, and then post-processing and merging data from these photos using the corners of the square for reference points, unit scale, and perspective correction. For our test field site we chose a ~35m reach along Black Hawk Creek in Sunderbruch Park (Davenport, IA), a small, slow-moving stream with overhanging trees. To quantify error we measured 88 distances between 30 marked control points along the reach. We calculated error by comparing these 'ground truth' distances to the corresponding distances extracted from our photogrammetric method. We placed the square at three locations along our reach and photographed it from multiple angles. The square corners, visible control points, and visible stream outline were hand-marked in these photos using the GIMP (open-source image editor). We wrote an open-source GUI in Java (hosted on GitHub), which allows the user to load marked-up photos, designate square corners and label control points. The GUI also extracts the marked pixel coordinates from the images. 
We also wrote several scripts (currently in MATLAB) that correct the pixel coordinates for radial distortion using Brown's lens distortion model, correct for perspective by forcing the four square corner pixels to form a parallelogram in 3-space, and rotate the points in order to correctly orient all photos of the same square location. Planform data from multiple photos (and multiple square locations) are combined using weighting functions that mitigate the error stemming from the markup process, imperfect camera calibration, etc. We have used our (beta) software to mark and process over 100 photos, yielding an average error of only 1.5% relative to our 88 measured lengths. Next we plan to translate the MATLAB scripts into Python and release their source code, at which point only free software, consumer-grade digital cameras, and inexpensive building materials will be needed for others to replicate this method at new field sites. (Figure: three sample photographs of the square with the created planform and control points.)
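One way to realize the perspective-correction step, since the four marked corners of the floating frame are known to form a square of known side length, is a plane homography estimated by the direct linear transform (DLT). This is a sketch in Python with made-up pixel coordinates, not the authors' MATLAB implementation (which also corrects radial distortion first):

```python
import numpy as np

def homography_from_square(corners_px, side_m=1.0):
    """Homography mapping the image pixels of a square's four corners
    (ordered TL, TR, BR, BL) onto a metric plane, via the DLT."""
    world = np.array([[0, 0], [side_m, 0],
                      [side_m, side_m], [0, side_m]], dtype=float)
    A = []
    for (x, y), (X, Y) in zip(corners_px, world):
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    # Null vector of A (last right singular vector) gives the 9 entries of H.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    """Map pixel coordinates to plane coordinates (homogeneous divide)."""
    pts = np.asarray(pts, dtype=float)
    q = (H @ np.hstack([pts, np.ones((len(pts), 1))]).T).T
    return q[:, :2] / q[:, 2:3]

# A square photographed obliquely (hypothetical pixel coordinates).
corners = [(100, 120), (420, 90), (470, 380), (80, 400)]
H = homography_from_square(corners, side_m=0.5)
plane_pts = apply_h(H, corners)   # corners map back to the 0.5 m square
```

Other marked points (control points, stream outline) can then be mapped through the same `H` to get planform coordinates in metres.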

  5. Using Multiple Endmember Spectral Mixture Analysis of MODIS Data for Computing the Fire Potential Index in Southern California

    NASA Astrophysics Data System (ADS)

    Schneider, P.; Roberts, D. A.

    2007-12-01

    The Fire Potential Index (FPI) is currently the only operationally used wildfire susceptibility index in the United States that incorporates remote sensing data in addition to meteorological information. Its remote sensing component utilizes relative greenness (RG) derived from an NDVI time series as a proxy for computing the ratio of live to dead vegetation. This study investigates the potential of Multiple Endmember Spectral Mixture Analysis (MESMA) as a more direct and physically reasonable way of computing the live ratio and applying it to the computation of the FPI. A time series of 16-day reflectance composites of Moderate Resolution Imaging Spectroradiometer (MODIS) data was used to perform the analysis. Endmember selection for green vegetation (GV), non-photosynthetic vegetation (NPV) and soil was performed in two stages. First, a subset of suitable endmembers was selected from an extensive library of reference and image spectra for each class using Endmember Average Root Mean Square Error (EAR), Minimum Average Spectral Angle (MASA) and a count-based technique. Second, the most appropriate endmembers for the specific data set were selected from the subset by running a series of 2-endmember models on representative images and choosing the ones that modeled the majority of pixels. The final set of endmembers was used for running MESMA on southern California MODIS composites from 2000 to 2006. 3- and 4-endmember models were considered, and the best model was chosen on a per-pixel basis according to the minimum root mean square error of the models at each level of complexity. Endmember fractions were normalized by the shade endmember to generate realistic fractions of GV and NPV. To validate the MESMA-derived GV fractions, they were compared against live ratio estimates from RG. A significant spatial and temporal relationship between the two measures was found, indicating that GV fraction has the potential to substitute for RG in computing the FPI. To further test this hypothesis, the live ratio estimates obtained from MESMA were used to compute daily FPI maps for southern California from 2001 to 2006. A validation with historical wildfire data from the MODIS Active Fire product was carried out over the same time period using logistic regression. Initial results show that MESMA-derived GV fraction can be used successfully for generating FPI maps of southern California.

  6. Finding Productive Talk around Errors in Intelligent Tutoring Systems

    ERIC Educational Resources Information Center

    Olsen, Jennifer K.; Rummel, Nikol; Aleven, Vincent

    2015-01-01

    To learn from an error, students must correct the error by engaging in sense-making activities around the error. Past work has looked at how supporting collaboration around errors affects learning. This paper attempts to shed further light on the role that collaboration can play in the process of overcoming an error. We found that good…

  7. VLSI Design of Trusted Virtual Sensors.

    PubMed

    Martínez-Rodríguez, Macarena C; Prada-Delgado, Miguel A; Brox, Piedad; Baturone, Iluminada

    2018-01-25

    This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).

  8. VLSI Design of Trusted Virtual Sensors

    PubMed Central

    2018-01-01

    This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm2 and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time). PMID:29370141

  9. Modeling Forest Biomass and Growth: Coupling Long-Term Inventory and Lidar Data

    NASA Technical Reports Server (NTRS)

    Babcock, Chad; Finley, Andrew O.; Cook, Bruce D.; Weiskittel, Andrew; Woodall, Christopher W.

    2016-01-01

    Combining spatially-explicit long-term forest inventory and remotely sensed information from Light Detection and Ranging (LiDAR) datasets through statistical models can be a powerful tool for predicting and mapping above-ground biomass (AGB) at a range of geographic scales. We present and examine a novel modeling approach to improve prediction of AGB and estimate AGB growth using LiDAR data. The proposed model accommodates temporal misalignment between field measurements and remotely sensed data, a problem pervasive in such settings, by including multiple time-indexed measurements at plot locations to estimate AGB growth. We pursue a Bayesian modeling framework that allows for appropriately complex parameter associations and uncertainty propagation through to prediction. Specifically, we identify a space-varying coefficients model to predict and map AGB and its associated growth simultaneously. The proposed model is assessed using LiDAR data acquired from NASA Goddard's LiDAR, Hyper-spectral & Thermal imager and field inventory data from the Penobscot Experimental Forest in Bradley, Maine. The proposed model outperformed the time-invariant counterpart models in predictive performance as indicated by a substantial reduction in root mean squared error. The proposed model adequately accounts for temporal misalignment through the estimation of forest AGB growth and accommodates residual spatial dependence. Results from this analysis suggest that future AGB models informed using remotely sensed data, such as LiDAR, may be improved by adapting traditional modeling frameworks to account for temporal misalignment and spatial dependence using random effects.

  10. Cursor Control Device Test Battery

    NASA Technical Reports Server (NTRS)

    Holden, Kritina; Sandor, Aniko; Pace, John; Thompson, Shelby

    2013-01-01

    The test battery was developed to provide a standard procedure for cursor control device evaluation. The software was built in Visual Basic and consists of nine tasks and a main menu that integrates the set-up of the tasks. The tasks can be used individually, or in a series defined in the main menu. Task 1, the Unidirectional Pointing Task, tests the speed and accuracy of clicking on targets. Two rectangles with an adjustable width and adjustable center-to-center distance are presented. The task is to click back and forth between the two rectangles. Clicks outside of the rectangles are recorded as errors. Task 2, Multidirectional Pointing Task, measures speed and accuracy of clicking on targets approached from different angles. Twenty-five numbered squares of adjustable width are arranged around an adjustable diameter circle. The task is to point and click on the numbered squares (placed on opposite sides of the circle) in consecutive order. Clicks outside of the squares are recorded as errors. Task 3, Unidirectional (horizontal) Dragging Task, is similar to dragging a file into a folder on a computer desktop. Task 3 requires dragging a square of adjustable width from one rectangle and dropping it into another. The width of each rectangle is adjustable, as well as the distance between the two rectangles. Dropping the square outside of the rectangles is recorded as an error. Task 4, Unidirectional Path Following, is similar to Task 3. The task is to drag a square through a tunnel consisting of two lines. The size of the square and the width of the tunnel are adjustable. If the square touches any of the lines, it is counted as an error and the task is restarted. Task 5, Text Selection, involves clicking on a Start button, and then moving directly to the underlined portion of the displayed text and highlighting it. The pointing distance to the text is adjustable, as well as the to-be-selected font size and the underlined character length.
If the selection does not include all of the underlined characters, or includes non-underlined characters, it is recorded as an error. Task 6, Multi-size and Multi-distance Pointing, presents the participant with 24 consecutively numbered buttons of different sizes (63 to 163 pixels), and at different distances (60 to 80 pixels) from the Start button. The task is to click on the Start button, and then move directly to, and click on, each numbered target button in consecutive order. Clicks outside of the target area are errors. Task 7, Standard Interface Elements Task, involves interacting with standard interface elements as instructed in written procedures, including: drop-down menus, sliders, text boxes, radio buttons, and check boxes. Task completion time is recorded. In Task 8, a circular track is presented with a disc in it at the top. Track width and disc size are adjustable. The task is to move the disc with circular motion within the path without touching the boundaries of the track. Time and errors are recorded. Task 9 is a discrete task that allows evaluation of discrete cursor control devices that tab from target to target, such as a castle switch. The task is to follow a predefined path and to click on the yellow targets along the path.

  11. A Bayesian approach to parameter and reliability estimation in the Poisson distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
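The Monte Carlo comparison described above can be reproduced in miniature: draw repeated Poisson samples, form the maximum likelihood estimate and the Bayes posterior mean under a gamma prior, and compare empirical mean-squared errors. A hedged Python sketch with hypothetical parameter choices (note the prior here is centered on the true intensity, which favors the Bayes estimator, consistent with the abstract's conclusion):

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true = 2.0          # hypothetical Poisson intensity
a, b = 2.0, 1.0         # gamma prior: shape a, rate b (prior mean a/b = 2)
n, reps = 5, 20000      # small sample size, many Monte Carlo replications

x = rng.poisson(lam_true, size=(reps, n))
s = x.sum(axis=1)

mle = s / n                     # maximum likelihood estimator
bayes = (a + s) / (b + n)       # posterior mean under the gamma prior

# Empirical mean-squared errors over the replications.
mse_mle = np.mean((mle - lam_true) ** 2)
mse_bayes = np.mean((bayes - lam_true) ** 2)
```

Analytically, the MLE's MSE is lam_true/n = 0.4 here, while the Bayes estimator shrinks toward the prior mean and attains a smaller MSE, mirroring the comparison reported in the abstract.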

  12. Insights into the Earth System mass variability from CSR-RL05 GRACE gravity fields

    NASA Astrophysics Data System (ADS)

    Bettadpur, S.

    2012-04-01

    The next-generation Release-05 GRACE gravity field data products are the result of extensive improvements to the GRACE Level-1 (tracking) data products, to the background gravity models, and to the processing methodology. As a result, the squared-error upper bound of the RL05 fields is at most half that of the RL04 fields. The CSR-RL05 field release consists of unconstrained gravity fields as well as a regularized gravity field time-series that can be used for several applications without any post-processing error reduction. This paper will describe the background and the nature of these improvements in the data products, and provide an error characterization. We will describe the insights these new series offer in measuring the mass flux due to diverse hydrologic, oceanographic and cryospheric processes.

  13. Retrieval of the aerosol optical thickness from UV global irradiance measurements

    NASA Astrophysics Data System (ADS)

    Costa, M. J.; Salgueiro, V.; Bortoli, D.; Obregón, M. A.; Antón, M.; Silva, A. M.

    2015-12-01

    UV irradiance has been measured at Évora for several years; a CIMEL sunphotometer integrated in AERONET is also installed there. In the present work, measurements of UVA (315-400 nm) irradiance taken with Kipp&Zonen radiometers, together with satellite data of total column ozone, are used in combination with radiative transfer calculations to estimate the aerosol optical thickness (AOT) in the UV. The retrieved UV AOT at Évora is compared with AERONET AOT (at 340 and 380 nm), and fairly good agreement is found, with a root mean square error of 0.05 (normalized root mean square error of 8.3%) and a mean absolute error of 0.04 (mean percentage error of 2.9%). The methodology is then used to estimate the UV AOT at Sines, an industrialized site on the western Atlantic coast, where UV irradiance has been monitored since 2013 but no aerosol information is available.
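
The three error measures quoted above can be reproduced in a few lines; normalizing the RMSE by the mean reference value is an assumed convention, and the inputs here are made up for illustration.

```python
import math

def error_stats(retrieved, reference):
    # RMSE, normalized RMSE (as a percent of the mean reference value), and MAE
    n = len(retrieved)
    diffs = [r - t for r, t in zip(retrieved, reference)]
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    nrmse = 100.0 * rmse / (sum(reference) / n)
    mae = sum(abs(d) for d in diffs) / n
    return rmse, nrmse, mae

# toy retrieved-vs-reference AOT pairs
rmse, nrmse, mae = error_stats([2.0, 2.0], [1.0, 3.0])
```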

  14. Uncertainty based pressure reconstruction from velocity measurement with generalized least squares

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacheng; Scalo, Carlo; Vlachos, Pavlos

    2017-11-01

    A method using generalized least squares to reconstruct the instantaneous pressure field from velocity measurements and velocity uncertainty is introduced and applied to both planar and volumetric flow data. Pressure gradients are computed on a staggered grid from the flow acceleration. The variance-covariance matrix of the pressure gradients is evaluated from the velocity uncertainty by approximating the pressure gradient error as a linear combination of velocity errors. An overdetermined system of linear equations relating the pressure to the computed pressure gradients is formulated and then solved using generalized least squares weighted by the variance-covariance matrix of the pressure gradients. Compared against other methods, such as solving the pressure Poisson equation, omni-directional integration, and ordinary least squares reconstruction, the generalized least squares method is found to be more robust to noise in the velocity measurement. The improvement in the reconstructed pressure becomes more pronounced as the velocity measurement becomes less accurate and more heteroscedastic. The uncertainty of the reconstructed pressure field is also quantified and compared across the different methods.
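
The core GLS step — weighting the normal equations by the inverse of the gradient error covariance — can be sketched as follows; the small system below is purely illustrative.

```python
import numpy as np

def gls_solve(A, b, Sigma):
    # Generalized least squares: x = (A^T W A)^{-1} A^T W b, with W = Sigma^{-1},
    # where Sigma is the variance-covariance matrix of the measured gradients.
    W = np.linalg.inv(Sigma)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# toy overdetermined system: three "gradient" equations, two unknown pressures,
# with heteroscedastic (unequal-variance) measurement errors assumed known
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([2.0, 3.0, 5.0])            # consistent right-hand side
Sigma = np.diag([0.1, 0.4, 0.9])         # per-equation error variances
x_hat = gls_solve(A, b, Sigma)           # recovers [2, 3]
```

When the measurements are noisy rather than consistent, the weighting downplays the high-variance equations, which is where GLS gains robustness over ordinary least squares.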

  15. Effectiveness of compressed sensing and transmission in wireless sensor networks for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Fujiwara, Takahiro; Uchiito, Haruki; Tokairin, Tomoya; Kawai, Hiroyuki

    2017-04-01

    For Structural Health Monitoring (SHM) of seismic acceleration, Wireless Sensor Networks (WSN) are a promising tool for low-cost monitoring. Compressed sensing and transmission schemes have been drawing attention as a means to achieve effective data collection in WSN. In particular, SHM systems with many WSN nodes require efficient data transmission due to restricted communications capability. The dominant frequency band of seismic acceleration lies within 100 Hz or less. In addition, the response motions on upper floors of a structure are excited at a natural frequency, resulting in induced shaking in a specified narrow band. Focusing on these vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmitted data. We carry out a compressed sensing and transmission scheme based on band-pass filtering of the seismic acceleration data. The algorithm executes the discrete Fourier transform to move to the frequency domain and applies band-pass filtering for the compressed transmission. Assuming that the compressed data are transmitted through computer networks, the data are restored by the inverse Fourier transform in the receiving node. This paper discusses the evaluation of compressed sensing for seismic acceleration in terms of an average error. The results show that the average error was 0.06 or less for the horizontal acceleration when the acceleration was compressed to 1/32. In particular, the average error on the 4th floor was as small as 0.02. These results indicate that the compressed sensing and transmission technique is effective in reducing the amount of data while maintaining a small average error.
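
The compress-transmit-restore cycle described above can be sketched with an FFT band-pass. The sampling rate, signal content, and normalization of the average error below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

fs = n = 1024                           # 1 s record at 1024 Hz (assumed)
t = np.arange(n) / fs
# structural response concentrated at a low natural frequency (5 Hz here),
# plus a small high-frequency component outside the transmitted band
accel = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 200 * t)

# sensing node: DFT, then keep only the lowest 1/32 of the spectrum
spec = np.fft.rfft(accel)
keep = len(spec) // 32                  # compressed payload size
payload = spec[:keep]

# receiving node: zero-pad the missing band and inverse-transform
full = np.zeros(len(spec), dtype=complex)
full[:keep] = payload
restored = np.fft.irfft(full, n)

# average error, normalized here by the peak amplitude (assumed convention)
avg_err = np.mean(np.abs(restored - accel)) / np.max(np.abs(accel))
```

The in-band 5 Hz component survives the round trip essentially unchanged, while the discarded high-frequency component accounts for the small residual average error.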

  16. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix that fully accounts for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next, it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, a standard statistical analysis approach shows directly how to determine the empirical state error covariance matrix. 
This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations, and mixed degrees of freedom for an observation set are allowed. As with any simple empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple-degree-of-freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The statistical behavior of the off-diagonal or covariance terms is less clear, but these elements still lend themselves to standard confidence interval error analysis; the distributional forms associated with them are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two-observer triangulation problem with range-only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
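
A standard-statistics analogue of the idea — rescaling the theoretical covariance by the average weighted residual so that unmodeled errors show up in the reported uncertainty — can be sketched as follows. This is a simplified illustration, not the paper's exact formulation.

```python
import numpy as np

def wls_empirical_cov(A, y, W):
    # Batch weighted least squares with an empirically rescaled covariance.
    N = A.T @ W @ A
    x = np.linalg.solve(N, A.T @ W @ y)
    r = y - A @ x                      # actual measurement residuals
    m, n = A.shape
    P_theory = np.linalg.inv(N)        # maps only the assumed obs. errors
    s2 = (r @ W @ r) / (m - n)         # average weighted residual variance
    return x, P_theory, s2 * P_theory  # empirical covariance

# the weights assume unit-variance noise, but the true noise std is 2, so
# the empirical covariance should inflate by roughly a factor of four
rng = np.random.default_rng(0)
A = rng.normal(size=(500, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true + rng.normal(scale=2.0, size=500)
x_hat, P_theory, P_emp = wls_empirical_cov(A, y, np.eye(500))
```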

  17. Proposing a new iterative learning control algorithm based on a non-linear least square formulation - Minimising draw-in errors

    NASA Astrophysics Data System (ADS)

    Endelt, B.

    2017-09-01

    Forming operations are subject to external disturbances and changing operating conditions, e.g. a new material batch or increasing tool temperature due to plastic work; material properties and lubrication are sensitive to tool temperature. It is generally accepted that forming operations are not stable over time, and it is not uncommon to adjust the process parameters during the first half hour of production, indicating that process instability develops gradually. Thus, an in-process feedback control scheme may not be necessary to stabilize the process; an alternative approach is to apply an iterative learning algorithm that can learn from previously produced parts, i.e. a self-learning system which gradually reduces the error based on historical process information. The paper proposes a simple algorithm which can be applied to a wide range of sheet-metal forming processes. The input to the algorithm is the final flange edge geometry, and the basic idea is to reduce the least-square error between the current flange geometry and a reference geometry using a non-linear least square algorithm. The ILC scheme is applied to a square deep-drawing and the Numisheet’08 S-rail benchmark problem; the numerical tests show that the proposed control scheme is able to control and stabilise both processes.
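
A minimal part-to-part learning loop of the kind described — correct the process input after each produced part in proportion to the measured flange-edge error — might look like this. The linear toy process and the gain are assumptions for illustration, not the paper's non-linear formulation.

```python
def run_ilc(process, ref, u0, gain, parts=20):
    # Part-to-part iterative learning control: after each produced part,
    # correct the process input in proportion to the measured geometry error.
    u, errors = u0, []
    for _ in range(parts):
        e = ref - process(u)          # flange-edge error of the current part
        errors.append(abs(e))
        u = u + gain * e              # learning update for the next part
    return errors

# toy process: measured draw-in responds linearly to the input (an assumption)
errs = run_ilc(process=lambda u: 0.8 * u + 1.0, ref=5.0, u0=0.0, gain=0.5)
# errors shrink geometrically, by a factor |1 - gain * 0.8| = 0.6 per part
```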

  18. Online measurement of urea concentration in spent dialysate during hemodialysis.

    PubMed

    Olesberg, Jonathon T; Arnold, Mark A; Flanigan, Michael J

    2004-01-01

    We describe online optical measurements of urea in the effluent dialysate line during regular hemodialysis treatment of several patients. Monitoring urea removal can provide valuable information about dialysis efficiency. Spectral measurements were performed with a Fourier-transform infrared spectrometer equipped with a flow-through cell. Spectra were recorded across the 5000-4000 cm(-1) (2.0-2.5 microm) wavelength range at 1-min intervals. Savitzky-Golay filtering was used to remove baseline variations attributable to the temperature dependence of the water absorption spectrum. Urea concentrations were extracted from the filtered spectra by use of partial least-squares regression and the net analyte signal of urea. Urea concentrations predicted by partial least-squares regression matched concentrations obtained from standard chemical assays with a root mean square error of 0.30 mmol/L (0.84 mg/dL urea nitrogen) over an observed concentration range of 0-11 mmol/L. The root mean square error obtained with the net analyte signal of urea was 0.43 mmol/L with a calibration based only on a set of pure-component spectra. The error decreased to 0.23 mmol/L when a slope and offset correction were used. Urea concentrations can be continuously monitored during hemodialysis by near-infrared spectroscopy. Calibrations based on the net analyte signal of urea are particularly appealing because they do not require a training step, as do statistical multivariate calibration procedures such as partial least-squares regression.
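
The slope-and-offset correction mentioned above amounts to an ordinary least squares fit of reference against predicted concentrations; the concentration values below are hypothetical, chosen only to show the RMSE dropping after correction.

```python
import math

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def slope_offset_correct(pred, ref):
    # Fit ref ≈ m * pred + c by ordinary least squares, then apply it.
    n = len(pred)
    mp, mr = sum(pred) / n, sum(ref) / n
    m = (sum((p - mp) * (r - mr) for p, r in zip(pred, ref))
         / sum((p - mp) ** 2 for p in pred))
    c = mr - m * mp
    return [m * p + c for p in pred]

# hypothetical urea concentrations (mmol/L): predictions carrying a
# systematic slope/offset bias relative to the reference assay
ref = [1.0, 2.0, 3.0, 4.0, 5.0]
pred = [1.3, 2.1, 2.9, 3.7, 4.5]
rmse_before = rmse(pred, ref)
rmse_after = rmse(slope_offset_correct(pred, ref), ref)
```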

  19. Climatological Modeling of Monthly Air Temperature and Precipitation in Egypt through GIS Techniques

    NASA Astrophysics Data System (ADS)

    El Kenawy, A.

    2009-09-01

    This paper describes a method for modeling and mapping four climatic variables (maximum temperature, minimum temperature, mean temperature and total precipitation) in Egypt using a multiple regression approach implemented in a GIS environment. In this model, a set of variables including latitude, longitude, elevation within a distance of 5, 10 and 15 km, slope, aspect, distance to the Mediterranean Sea, distance to the Red Sea, distance to the Nile, the ratio between land and water masses within a radius of 5, 10 and 15 km, the Normalized Difference Vegetation Index (NDVI), the Normalized Difference Water Index (NDWI), the Normalized Difference Temperature Index (NDTI) and reflectance are included as independent variables. These variables were integrated as raster layers in MiraMon software at a spatial resolution of 1 km. The climatic variables were treated as dependent variables and averaged from 39 quality-controlled and homogenized series distributed across the entire country over the period 1957-2006. For each climatic variable, digital and objective maps were obtained using the multiple regression coefficients at monthly, seasonal and annual timescales. The accuracy of these maps was assessed through cross-validation between predicted and observed values using a set of statistics including the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), mean bias error (MBE) and Willmott's D statistic. These maps are valuable in terms of both spatial resolution and the number of observatories involved in the analysis.
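
The cross-validation statistics listed above can be computed as follows. R2 is taken here as 1 − SSres/SStot and Willmott's D as the standard index of agreement — both common conventions, assumed rather than confirmed by the abstract.

```python
import math

def validation_stats(obs, pred):
    # cross-validation statistics: R2, RMSE, MAE, MBE, and Willmott's D
    n = len(obs)
    mo = sum(obs) / n
    mbe = sum(p - o for p, o in zip(pred, obs)) / n            # mean bias error
    mae = sum(abs(p - o) for p, o in zip(pred, obs)) / n
    ss_res = sum((p - o) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mo) ** 2 for o in obs)
    rmse = math.sqrt(ss_res / n)
    r2 = 1.0 - ss_res / ss_tot                                 # one common R2 form
    d = 1.0 - ss_res / sum((abs(p - mo) + abs(o - mo)) ** 2
                           for p, o in zip(pred, obs))         # Willmott's D
    return {"R2": r2, "RMSE": rmse, "MAE": mae, "MBE": mbe, "D": d}

perfect = validation_stats([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
flat = validation_stats([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])   # predicts the mean
```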

  20. Convergence analysis of surrogate-based methods for Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Yan, Liang; Zhang, Yuan-Xiang

    2017-12-01

    The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least twice as fast in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples of surrogate-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate Bayesian inference. Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.

  1. Assessment of Satellite Precipitation Products in the Philippine Archipelago

    NASA Astrophysics Data System (ADS)

    Ramos, M. D.; Tendencia, E.; Espana, K.; Sabido, J.; Bagtasa, G.

    2016-06-01

    Precipitation is the most important weather parameter in the Philippines. Made up of more than 7100 islands, the Philippine archipelago is an agricultural country that depends on rain-fed crops. Located on the western rim of the Northwest Pacific Ocean, this tropical island country is very vulnerable to tropical cyclones that lead to severe flooding events. Recently, satellite-based precipitation estimates have improved significantly and can serve as alternatives to ground-based observations. These data can be used to fill data gaps not only for climatic studies, but also for disaster risk reduction and management activities. This study characterized the statistical errors of daily precipitation from four satellite-based rainfall products: (1) the Tropical Rainfall Measuring Mission (TRMM), (2) the CPC Morphing technique (CMORPH) of NOAA, (3) the Global Satellite Mapping of Precipitation (GSMaP) and (4) Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN). Precipitation data were compared to 52 synoptic weather stations located all over the Philippines. Results show that GSMaP has the lowest overall bias, while CMORPH has the lowest Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). In addition, a dichotomous rainfall test reveals that GSMaP and CMORPH have low Proportion Correct (PC) for convective and stratiform rainclouds, respectively. TRMM consistently showed high PC for almost all raincloud types. Moreover, all four satellite precipitation products showed high Correct Negatives (CN) values for the north-western part of the country during the North-East monsoon and spring monsoonal transition periods.
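
A dichotomous (rain/no-rain) verification of the kind described reduces each station-day to a 2x2 contingency table; the 0.1 mm wet/dry threshold below is an assumed choice, not one stated in the abstract.

```python
def dichotomous_scores(obs, fcst, threshold=0.1):
    # 2x2 contingency table of rain occurrence: hits, false alarms,
    # misses, and correct negatives, plus Proportion Correct (PC)
    hits = fa = miss = cn = 0
    for o, f in zip(obs, fcst):
        wet_o, wet_f = o >= threshold, f >= threshold
        if wet_o and wet_f:
            hits += 1
        elif wet_f:
            fa += 1
        elif wet_o:
            miss += 1
        else:
            cn += 1
    pc = (hits + cn) / (hits + fa + miss + cn)
    return pc, hits, fa, miss, cn

# station vs. satellite daily totals (mm): one of each contingency outcome
pc, hits, fa, miss, cn = dichotomous_scores([0.0, 0.0, 5.0, 5.0],
                                            [0.0, 3.0, 4.0, 0.0])
```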

  2. Criterion Predictability: Identifying Differences Between [r-squares]

    ERIC Educational Resources Information Center

    Malgady, Robert G.

    1976-01-01

    An analysis of variance procedure for testing differences in r-squared, the coefficient of determination, across independent samples is proposed and briefly discussed. The principal advantage of the procedure is to minimize Type I error for follow-up tests of pairwise differences. (Author/JKS)

  3. Long-term neuromuscular training and ankle joint position sense.

    PubMed

    Kynsburg, A; Pánics, G; Halasi, T

    2010-06-01

    The preventive effect of proprioceptive training is proven by decreasing injury incidence, but its proprioceptive mechanism is not. The major hypothesis is that the training has a positive long-term effect on ankle joint position sense in athletes of a high-risk sport (handball). Ten elite-level female handball players represented the intervention group (training group), and 10 healthy athletes of other sports formed the control group. Proprioceptive training was incorporated into the regular training regimen of the training group. Ankle joint position sense function was measured with the "slope-box" test, first described by Robbins et al. Testing was performed one day before the intervention and 20 months later. Mean absolute estimate errors were processed for statistical analysis. Proprioceptive sensory function improved in all four directions with high significance (p<0.0001; average mean estimate error improvement: 1.77 degrees). The improvement was also highly significant (p<=0.0002) in each single direction, with average mean estimate error improvements between 1.59 degrees (posterior) and 2.03 degrees (anterior). Mean absolute estimate errors at follow-up (2.24 degrees +/-0.88 degrees) were significantly lower than in uninjured controls (3.29 degrees +/-1.15 degrees) (p<0.0001). Long-term neuromuscular training improved ankle joint position sense function in the investigated athletes. This joint position sense improvement may be one explanation for the injury-rate reduction effect of neuromuscular training.

  4. Dual-wavelengths photoacoustic temperature measurement

    NASA Astrophysics Data System (ADS)

    Liao, Yu; Jian, Xiaohua; Dong, Fenglin; Cui, Yaoyao

    2017-02-01

    Thermal therapy is an approach applied in cancer treatment that kills tumor cells by heating local tissue, which requires highly sensitive temperature monitoring during therapy. Current clinical methods for temperature measurement, such as fMRI, near-infrared sensing or ultrasound, still have limitations in penetration depth or sensitivity. Photoacoustic temperature sensing is a newly developed method with potential application in thermal therapy; it usually employs a single-wavelength laser for signal generation and temperature detection. Because of system disturbances, including laser intensity, ambient temperature and the complexity of the target, accidental measurement errors are unavoidable. To address these problems, we propose in this paper a new method of photoacoustic temperature sensing that uses two wavelengths to reduce random error and increase measurement accuracy. First, a brief theoretical analysis is presented. In the experiment, a temperature measurement resolution of about 1° in the range of 23-48° in ex vivo pig blood was achieved, and a clear decrease in absolute error was observed: about 1.7° on average in the single-wavelength mode versus nearly 1° in the dual-wavelength mode. The results indicate that dual-wavelength photoacoustic temperature sensing reduces random error and improves measurement accuracy, and could be a more efficient method for photoacoustic temperature sensing in thermal therapy of tumors.

  5. Bilateral Proprioceptive Evaluation in Individuals With Unilateral Chronic Ankle Instability

    PubMed Central

    Sousa, Andreia S. P.; Leite, João; Costa, Bianca; Santos, Rubim

    2017-01-01

    Context: Despite extensive research on chronic ankle instability, the findings regarding proprioception have been conflicting and focused only on the injured limb. Also, the different components of proprioception have been evaluated in isolation. Objective: To evaluate bilateral ankle proprioception in individuals with unilateral ankle instability. Design: Cohort study. Setting: Research laboratory center in a university. Patients or Other Participants: Twenty-four individuals with a history of unilateral ankle sprain and chronic ankle instability (mechanical ankle instability group, n = 10; functional ankle instability [FAI] group, n = 14) and 20 controls. Main Outcome Measure(s): Ankle active and passive joint position sense, kinesthesia, and force sense. Results: We observed a significant interaction between the effects of limb and group for kinesthesia (F = 3.27, P = .049). Increased error values were observed in the injured limb of the FAI group compared with the control group (P = .031, Cohen d = 0.47). Differences were also evident for force sense (F = 9.31, P < .001): the FAI group demonstrated increased error versus the control group (injured limb: P < .001, Cohen d = 1.28; uninjured limb: P = .009, Cohen d = 0.89) and the mechanical ankle instability group (uninjured limb: P = .023, Cohen d = 0.76). Conclusions: Individuals with unilateral FAI had increased error ipsilaterally (injured limb) for inversion movement detection (kinesthesia) and evertor force sense and increased error contralaterally (uninjured limb) for evertor force sense. PMID:28318316

  6. Performance of the Generalized S-X[squared] Item Fit Index for the Graded Response Model

    ERIC Educational Resources Information Center

    Kang, Taehoon; Chen, Troy T.

    2011-01-01

    The utility of Orlando and Thissen's ("2000", "2003") S-X[squared] fit index was extended to the model-fit analysis of the graded response model (GRM). The performance of a modified S-X[squared] in assessing item-fit of the GRM was investigated in light of empirical Type I error rates and power with a simulation study having…

  7. Prototyping an Early-warning System for Rainfall-triggered Landslides on a Regional Scale Using a Physically-based Model and Remote Sensing Datasets

    NASA Astrophysics Data System (ADS)

    Liao, Z.; Hong, Y.; Kirschbaum, D. B.; Fukuoka, H.; Sassa, K.; Karnawati, D.; Fathani, F.

    2010-12-01

    Recent advancements in the availability of remotely sensed datasets provide an opportunity to advance the predictability of rainfall-triggered landslides at larger spatial scales. An early-warning system based on a physical landslide model and remote sensing information is used to simulate the dynamical response of the soil water content to the spatiotemporal variability of rainfall in complex terrain. The system utilizes geomorphologic datasets including a 30-meter ASTER DEM, a 1-km downscaled FAO soil map, and satellite-based Tropical Rainfall Measuring Mission (TRMM) precipitation. The applied physical model SLIDE (SLope-Infiltration-Distributed Equilibrium) defines a direct relationship between a factor of safety and the rainfall depth on an infinite slope. This prototype model is applied to a case study in Honduras during Hurricane Mitch in 1998 and a secondary case of typhoon-induced shallow landslides over Java Island, Indonesia. In Honduras, two study areas were selected which cover approximately 1,200 square kilometers and where a high density of shallow landslides occurred. The results were quantitatively evaluated using landslide inventory data compiled by the United States Geological Survey (USGS) following Hurricane Mitch, and show a good agreement between the modeling results and observations. The success rate for accurately estimating slope failure locations reached as high as 78% and 75%, while the error indices were 35% and 49%, respectively for each of the two selected study areas. Advantages and limitations of this application are discussed with respect to future assessment and challenges of performing a slope-stability estimation using coarse data at 1200 square kilometers. In Indonesia, the system has been applied over the whole Java Island. The prototyped early-warning system has been enhanced by integration of a susceptibility mapping and a precipitation forecasting model (i.e. Weather Research Forecast). 
The performance has been evaluated using a local landslide inventory, and results show that the system successfully predicted landslides in correspondence to the time of occurrence of the real landslide events in this case.

  8. Chemometrics resolution and quantification power evaluation: Application on pharmaceutical quaternary mixture of Paracetamol, Guaifenesin, Phenylephrine and p-aminophenol

    NASA Astrophysics Data System (ADS)

    Yehia, Ali M.; Mohamed, Heba M.

    2016-01-01

    Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly, without any preliminary separation step, and were successfully applied to pharmaceutical formulation analysis, showing no interference from excipients.

  9. An Interactive Computer Package for Use with Simulation Models Which Performs Multidimensional Sensitivity Analysis by Employing the Techniques of Response Surface Methodology.

    DTIC Science & Technology

    1984-12-01

    total sum of squares at the center points minus the correction factor for the mean at the center points (SSpe = Y'Y - n1*Ybar^2), where n1 is the number of... SSlac = SSres - SSpe). The sum of squares due to pure error estimates σ², and the sum of squares due to lack-of-fit estimates σ² plus a bias term if... ANOVA table for Response Surface Methodology: Source | d.f. | SS | MS; Regression | p | b'X'Y | b'X'Y/p; Residual | n - p | Y'Y - b'X'Y | (Y'Y - b'X'Y)/(n - p); Pure Error | n1 - 1 | Y'Y - n1*Ybar^2 | SSpe/(n1 -...

  10. RM2: rms error comparisons

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1976-01-01

    The root-mean-square error performance measure is used to compare the relative performance of several widely known source coding algorithms with the RM2 image data compression system. The results demonstrate that RM2 has a uniformly significant performance advantage.

  11. M-estimation for robust sparse unmixing of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Toomik, Maria; Lu, Shijian; Nelson, James D. B.

    2016-10-01

    Hyperspectral unmixing methods often use a conventional least squares based lasso which assumes that the data follow the Gaussian distribution. The normality assumption is an approximation which is generally invalid for real imagery data. We consider a robust (non-Gaussian) approach to sparse spectral unmixing of remotely sensed imagery which reduces the sensitivity of the estimator to outliers and relaxes the linearity assumption. The method combines several penalty terms. We propose to use an lp norm with 0 < p < 1 in the sparse regression problem, which induces more sparsity in the results but makes the problem non-convex. The problem can nevertheless be solved quite straightforwardly with an extensible algorithm based on iteratively reweighted least squares. To deal with the huge size of modern spectral libraries we introduce a library reduction step, similar to the multiple signal classification (MUSIC) array processing algorithm, which not only speeds up unmixing but also yields superior results. In the hyperspectral setting we extend the traditional least squares method to the robust heavy-tailed case and propose a generalised M-lasso solution. M-estimation replaces the Gaussian likelihood with a fixed function ρ(e) that restrains outliers. The M-estimate function reduces the effect of errors with large amplitudes or even assigns the outliers zero weights. Our experimental results on real hyperspectral data show that noise with large amplitudes (outliers) often exists in the data. The ability to mitigate the influence of such outliers can therefore offer greater robustness. Qualitative hyperspectral unmixing results on real hyperspectral image data corroborate the efficacy of the proposed method.
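
The iteratively reweighted least squares scheme for an lp penalty can be sketched as follows; the regularization weight, the choice p = 0.5, and the tiny synthetic "library" are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def irls_lp(A, y, lam=0.01, p=0.5, iters=50, eps=1e-6):
    # Sparse regression min ||Ax - y||^2 + lam * sum(|x_i|^p), 0 < p < 1,
    # via iteratively reweighted least squares: each pass solves a ridge
    # system whose per-coefficient weights majorize the lp penalty.
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        w = (x ** 2 + eps) ** ((p - 2) / 2)       # small |x_i| -> large weight
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))       # stand-in for a (reduced) spectral library
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.0, -0.5    # two active endmembers
y = A @ x_true                      # noiseless mixed pixel
x_hat = irls_lp(A, y)
```

Because small coefficients receive ever larger weights, the inactive abundances are driven toward zero while the two active ones are barely shrunk.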

  12. Cervicocephalic kinesthetic sensibility in young and middle-aged adults with or without a history of mild neck pain.

    PubMed

    Teng, C-C; Chai, H; Lai, D-M; Wang, S-F

    2007-02-01

    Previous research has shown that there is no significant relationship between the degree of structural degeneration of the cervical spine and neck pain. We therefore sought to investigate the potential role of sensory dysfunction in chronic neck pain. Cervicocephalic kinesthetic sensibility, expressed by how accurately an individual can reposition the head, was studied in three groups of individuals, a control group of 20 asymptomatic young adults and two groups of middle-aged adults (20 subjects in each group) with or without a history of mild neck pain. An ultrasound-based three-dimensional coordinate measuring system was used to measure the position of the head and to test the accuracy of repositioning. Constant error (indicating that the subject overshot or undershot the intended position) and root mean square errors (representing total errors of accuracy and variability) were measured during repositioning of the head to the neutral head position (Head-to-NHP) and repositioning of the head to the target (Head-to-Target) in three cardinal planes (sagittal, transverse, and frontal). Analysis of covariance (ANCOVA) was used to test the group effect, with age used as a covariate. The constant errors during repositioning from a flexed position and from an extended position to the NHP were significantly greater in the middle-aged subjects than in the control group (beta=0.30 and beta=0.60, respectively; P<0.05 for both). In addition, the root mean square errors during repositioning from a flexed or extended position to the NHP were greater in the middle-aged subjects than in the control group (beta=0.27 and beta=0.49, respectively; P<0.05 for both). The root mean square errors also increased during Head-to-Target in left rotation (beta=0.24;P<0.05), but there was no difference in the constant errors or root mean square errors during Head-to-NHP repositioning from other target positions (P>0.05). 
The results indicate that, after controlling for age as a covariate, there was no group effect. Thus, age appears to have a profound effect on an individual's ability to accurately reposition the head toward the neutral position in the sagittal plane and to reposition the head toward left rotation. A history of mild chronic neck pain alone had no significant effect on cervicocephalic kinesthetic sensibility.
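The two accuracy measures used in this record can be stated in a few lines of code. The sketch below (with hypothetical repositioning angles, in degrees, not the study's data) computes the constant error (signed bias: overshoot vs. undershoot), the variable error (trial-to-trial variability), and the root mean square error, and checks the standard decomposition RMSE² = CE² + VE².

```python
import math

def constant_error(trials, target):
    """Signed mean deviation from the target: >0 overshoot, <0 undershoot."""
    return sum(t - target for t in trials) / len(trials)

def variable_error(trials, target):
    """Variability of responses around the subject's own mean response."""
    ce = constant_error(trials, target)
    return math.sqrt(sum((t - target - ce) ** 2 for t in trials) / len(trials))

def rms_error(trials, target):
    """Root mean square error: total error combining bias and variability."""
    return math.sqrt(sum((t - target) ** 2 for t in trials) / len(trials))

# Hypothetical head-repositioning trials toward a 0-degree neutral position
trials = [1.8, 2.4, 1.2, 3.0, 2.1]
ce = constant_error(trials, 0.0)
ve = variable_error(trials, 0.0)
rmse = rms_error(trials, 0.0)
# RMSE decomposes exactly into bias and variability: RMSE^2 = CE^2 + VE^2
print(ce, ve, rmse)
```

This is why the study reports both quantities: a subject can be consistent (small variable error) yet systematically biased (large constant error), and only the RMS error captures both at once.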

  13. Multi-scale remote sensing sagebrush characterization with regression trees over Wyoming, USA: laying a foundation for monitoring

    USGS Publications Warehouse

    Homer, Collin G.; Aldridge, Cameron L.; Meyer, Debra K.; Schell, Spencer J.

    2012-01-01

Sagebrush ecosystems in North America have experienced extensive degradation since European settlement. Further degradation continues from exotic invasive plants, altered fire frequency, intensive grazing practices, oil and gas development, and climate change – adding urgency to the need for ecosystem-wide understanding. Remote sensing is often identified as a key information source to facilitate ecosystem-wide characterization, monitoring, and analysis; however, approaches that characterize sagebrush with sufficient and accurate local detail across large enough areas to support this paradigm are unavailable. We describe the development of a new remote sensing sagebrush characterization approach for the state of Wyoming, U.S.A. This approach integrates 2.4 m QuickBird, 30 m Landsat TM, and 56 m AWiFS imagery into the characterization of four primary continuous field components including percent bare ground, percent herbaceous cover, percent litter, and percent shrub, and four secondary components including percent sagebrush (Artemisia spp.), percent big sagebrush (Artemisia tridentata), percent Wyoming sagebrush (Artemisia tridentata ssp. wyomingensis), and shrub height using a regression tree. According to an independent accuracy assessment, primary component root mean square error (RMSE) values ranged from 4.90 to 10.16 for 2.4 m QuickBird, 6.01 to 15.54 for 30 m Landsat, and 6.97 to 16.14 for 56 m AWiFS. Shrub and herbaceous components outperformed the current data standard called LANDFIRE, with a shrub RMSE value of 6.04 versus 12.64 and a herbaceous component RMSE value of 12.89 versus 14.63. This approach offers new advancements in sagebrush characterization from remote sensing and provides a foundation to quantitatively monitor these components into the future.

  14. Joint 6D k-q Space Compressed Sensing for Accelerated High Angular Resolution Diffusion MRI.

    PubMed

    Cheng, Jian; Shen, Dinggang; Basser, Peter J; Yap, Pew-Thian

    2015-01-01

High Angular Resolution Diffusion Imaging (HARDI) avoids the Gaussian diffusion assumption that is inherent in Diffusion Tensor Imaging (DTI), and is capable of characterizing complex white matter micro-structure with greater precision. However, HARDI methods such as Diffusion Spectrum Imaging (DSI) typically require significantly more signal measurements than DTI, resulting in prohibitively long scanning times. One of the goals in HARDI research is therefore to improve estimation of quantities such as the Ensemble Average Propagator (EAP) and the Orientation Distribution Function (ODF) with a limited number of diffusion-weighted measurements. A popular approach to this problem, Compressed Sensing (CS), affords highly accurate signal reconstruction using significantly fewer (sub-Nyquist) data points than required traditionally. Existing approaches to CS diffusion MRI (CS-dMRI) mainly focus on applying CS in the q-space of diffusion signal measurements and fail to take into consideration information redundancy in the k-space. In this paper, we propose a framework, called 6-Dimensional Compressed Sensing diffusion MRI (6D-CS-dMRI), for reconstruction of the diffusion signal and the EAP from data sub-sampled in both 3D k-space and 3D q-space. To our knowledge, 6D-CS-dMRI is the first work that applies compressed sensing in the full 6D k-q space and reconstructs the diffusion signal in the full continuous q-space and the EAP in continuous displacement space. Experimental results on synthetic and real data demonstrate that, compared with full DSI sampling in k-q space, 6D-CS-dMRI yields excellent diffusion signal and EAP reconstruction with low root-mean-square error (RMSE) using 11 times fewer samples (3-fold reduction in k-space and 3.7-fold reduction in q-space).

  15. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  16. Quantitative Modelling of Trace Elements in Hard Coal.

    PubMed

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

The significance of coal in the world economy has remained unquestionable for decades, and coal is expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for the contaminated data set and the correct identification of outlying objects based on robust scales were required. These enabled the development of correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all but one of the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross-validation) exceeded 10% for only three of the models. The study is of both cognitive and applicative importance. It presents a unique application of chemometric methods of data exploration in modeling the content of trace elements in coal, and in this way contributes to the development of useful tools for coal quality assessment.
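The distinction the abstract draws between fit error and prediction error is generic. The sketch below computes the root mean square error of calibration (RMSEC) and of leave-one-out cross-validation (RMSECV) for a simple one-variable least-squares model; the data are hypothetical, not the coal data set, and a single predictor stands in for the full Partial Least Squares model.

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def rmse(pairs):
    """Root mean square error over (predicted, observed) pairs."""
    return math.sqrt(sum((p - o) ** 2 for p, o in pairs) / len(pairs))

# Hypothetical calibration data: one predictor vs. a trace-element concentration
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8, 12.3]

# RMSEC: error of the model on the data it was fitted to
a, b = fit_line(xs, ys)
rmsec = rmse([(a + b * x, y) for x, y in zip(xs, ys)])

# RMSECV: each sample is predicted by a model fitted without it (leave-one-out)
preds = []
for i in range(len(xs)):
    xt, yt = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
    ai, bi = fit_line(xt, yt)
    preds.append((ai + bi * xs[i], ys[i]))
rmsecv = rmse(preds)

print(rmsec, rmsecv)  # cross-validated error exceeds the calibration error
```

For least squares the leave-one-out residual equals the fitted residual inflated by 1/(1 − leverage), so RMSECV is systematically larger than RMSEC; reporting both, as the record does, separates fit quality from predictive ability.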

  17. On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard; Felus, Yaron A.

    2008-06-01

    The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model ( Y- E Y = ( X- E X ) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E Y and E X . Two special cases of the MTLS approach include the standard multivariate least-squares approach where only the observation matrix, Y, is perturbed by random errors and, on the other hand, the data least-squares approach where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
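In the simplest univariate case, the contrast between standard least squares (errors confined to the y observations) and total least squares (errors in both coordinates) admits a closed form; a minimal sketch with hypothetical coordinate pairs, not the Israeli cadastral data:

```python
import math

def ols_slope(xs, ys):
    """Standard least squares: all error assigned to the y observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

def tls_slope(xs, ys):
    """Total least squares (orthogonal regression): errors in x and y alike.
    Closed form from the 2x2 moment matrix; equivalent to taking the singular
    vector of the centred data matrix with the smallest singular value."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)

# Hypothetical coordinate pairs with noise in both axes
xs = [1.0, 2.1, 2.9, 4.2, 5.0]
ys = [1.2, 1.9, 3.1, 3.8, 5.1]
s_ols = ols_slope(xs, ys)
s_tls = tls_slope(xs, ys)
print(s_ols, s_tls)  # TLS slope is steeper: OLS attenuates under errors in x
```

The same "symmetry" in the treatment of the two coordinate sets is what distinguishes the MTLS approach above, where both observation matrices carry random errors.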

  18. Quantitative Modelling of Trace Elements in Hard Coal

    PubMed Central

    Smoliński, Adam; Howaniec, Natalia

    2016-01-01

    The significance of coal in the world economy remains unquestionable for decades. It is also expected to be the dominant fossil fuel in the foreseeable future. The increased awareness of sustainable development reflected in the relevant regulations implies, however, the need for the development and implementation of clean coal technologies on the one hand, and adequate analytical tools on the other. The paper presents the application of the quantitative Partial Least Squares method in modeling the concentrations of trace elements (As, Ba, Cd, Co, Cr, Cu, Mn, Ni, Pb, Rb, Sr, V and Zn) in hard coal based on the physical and chemical parameters of coal, and coal ash components. The study was focused on trace elements potentially hazardous to the environment when emitted from coal processing systems. The studied data included 24 parameters determined for 132 coal samples provided by 17 coal mines of the Upper Silesian Coal Basin, Poland. Since the data set contained outliers, the construction of robust Partial Least Squares models for contaminated data set and the correct identification of outlying objects based on the robust scales were required. These enabled the development of the correct Partial Least Squares models, characterized by good fit and prediction abilities. The root mean square error was below 10% for all except for one the final Partial Least Squares models constructed, and the prediction error (root mean square error of cross–validation) exceeded 10% only for three models constructed. The study is of both cognitive and applicative importance. It presents the unique application of the chemometric methods of data exploration in modeling the content of trace elements in coal. In this way it contributes to the development of useful tools of coal quality assessment. PMID:27438794

  19. Upper Kalamazoo watershed land cover inventory. [based on remote sensing

    NASA Technical Reports Server (NTRS)

    Richason, B., III; Enslin, W.

    1973-01-01

    Approximately 1000 square miles of the eastern portion of the watershed were inventoried based on remote sensing imagery. The classification scheme, imagery and interpretation procedures, and a cost analysis are discussed. The distributions of land cover within the area are tabulated.

  20. Evaluating Remotely-Sensed Surface Soil Moisture Estimates Using Triple Collocation

    USDA-ARS?s Scientific Manuscript database

    Recent work has demonstrated the potential of enhancing remotely-sensed surface soil moisture validation activities through the application of triple collocation techniques which compare time series of three mutually independent geophysical variable estimates in order to acquire the root-mean-square...

  1. A Combined SRTM Digital Elevation Model for Zanjan State of Iran Based on the Corrective Surface Idea

    NASA Astrophysics Data System (ADS)

    Kiamehr, Ramin

    2016-04-01

A one-arc-second, high-resolution version of the SRTM model was recently published for Iran in the US Geological Survey database. Digital Elevation Models (DEMs) are widely used by geoscientists across many disciplines and applications. A DEM is essential in the geoid computation procedure, e.g., to determine the topographic, downward-continuation (DWC), and atmospheric corrections; it is also used in road location and design in civil engineering and in hydrological analysis. However, a DEM is only a model of the elevation surface and is subject to errors, the most important of which may come from bias in the height datum. Moreover, the accuracy of a DEM is usually published in a global sense, so it is important to estimate its accuracy over the area of interest before using it. One of the best ways to obtain a reasonable indication of the accuracy of a DEM is to compare its heights against precise national GPS/levelling data, by determining the Root-Mean-Square (RMS) error of the fit between the DEM and the levelling heights. The errors in the DEM can be approximated by different kinds of functions in order to fit the DEM to a set of GPS/levelling data using least-squares adjustment. In the current study, several models, ranging from a simple linear regression to a seven-parameter similarity transformation, are used in the fitting procedure. The seven-parameter model gives the best fit, with the minimum standard deviation, for all selected DEMs in the study area. Based on 35 precise GPS/levelling points, we obtain an RMS of the seven-parameter fit for the SRTM DEM of 5.5 m. The corrective surface model is generated from the transformation parameters and added to the original SRTM model. The fit of the combined model is then re-estimated using independent GPS/levelling data. The result shows a great improvement in the absolute accuracy of the model, with a standard deviation of 3.4 m.
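A drastically simplified version of that corrective-surface idea can be sketched as fitting a planar trend to the DEM-minus-GPS/levelling height differences and subtracting it. This uses a three-parameter plane rather than the record's seven-parameter similarity transformation, and all coordinates and height differences below are hypothetical.

```python
import math

def solve3(M, v):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    A = [row[:] + [t] for row, t in zip(M, v)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= f * A[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (A[r][3] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

def fit_plane(pts):
    """Least-squares plane d = a + b*x + c*y through (x, y, d) points."""
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y, d in pts:
        row = [1.0, x, y]
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            v[i] += row[i] * d
    return solve3(M, v)

def rms(vals):
    return math.sqrt(sum(w * w for w in vals) / len(vals))

# Hypothetical DEM-minus-levelling differences (m) containing a tilted-plane bias
pts = [(0.0, 0.0, 4.1), (1.0, 0.0, 5.2), (0.0, 1.0, 3.2),
       (1.0, 1.0, 4.3), (0.5, 0.5, 4.0), (0.2, 0.8, 3.6)]
a, b, c = fit_plane(pts)
before = rms([d for _, _, d in pts])
after = rms([d - (a + b * x + c * y) for x, y, d in pts])
print(before, after)  # the corrective surface removes most of the datum bias
```

The record's much larger RMS improvement (5.5 m to 3.4 m) comes from the same mechanism: the fitted surface absorbs the systematic datum-related part of the error, leaving mostly random residuals.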

  2. Effect of different head-neck-jaw postures on cervicocephalic kinesthetic sense

    PubMed Central

    Zafar, Hamayun; Alghadir, Ahmad H.; Iqbal, Zaheen A.

    2017-01-01

Objectives: To investigate the effect of different induced head-neck-jaw postures on head-neck relocation error among healthy subjects. Methods: 30 healthy adult male subjects participated in this study. Cervicocephalic kinesthetic sense was measured while standing, in habitual sitting, in habitual sitting with clenched jaw, and in habitual sitting with forward head posture, during right rotation, left rotation, flexion, and extension, using the kinesthetic sensibility test. Results: Head-neck relocation error was least while standing, followed by habitual sitting, habitual sitting with forward head posture, and habitual sitting with clenched jaw. However, there was no significant difference in error between the tested postures for any of the movements. Conclusions: To the best of our knowledge, this is the first study to examine the effect of different induced head-neck-jaw postures on head-neck position sense among healthy subjects. Assuming a posture for a short duration does not affect head-neck relocation error in normal healthy subjects. PMID:29199196

  3. Selection of neural network structure for system error correction of electro-optical tracker system with horizontal gimbal

    NASA Astrophysics Data System (ADS)

    Liu, Xing-fa; Cen, Ming

    2007-12-01

The neural network system error correction method is more precise than the least-squares and spherical-harmonics-function system error correction methods. Its accuracy depends mainly on the architecture of the neural network. Analysis and simulation show that both the BP and the RBF neural network system error correction methods achieve high correction accuracy; for small training sample sets, the RBF network method is preferable to the BP network method when training speed and network scale are taken into account.

  4. Determining particle size and water content by near-infrared spectroscopy in the granulation of naproxen sodium.

    PubMed

    Bär, David; Debus, Heiko; Brzenczek, Sina; Fischer, Wolfgang; Imming, Peter

    2018-03-20

Near-infrared spectroscopy is frequently used by the pharmaceutical industry to monitor and optimize several production processes. In combination with chemometrics, a mathematical-statistical technique, near-infrared spectroscopy offers the following advantages: it is a fast, non-destructive, non-invasive, and economical analytical method. One of the most advanced and popular chemometric techniques is the partial least squares algorithm, owing to its suitability for routine use and the quality of its results. The required reference analytics enable the analysis of various parameters of interest, for example, moisture content, particle size, and many others. Parameters such as the correlation coefficient and the root mean square errors of prediction, calibration, and validation have been used to evaluate the applicability and robustness of the analytical methods developed. This study investigates a Naproxen Sodium granulation process using near-infrared spectroscopy and develops methods for water content and particle size. For the water content method, a maximum water content of about 21% in the granulation process should be considered, which must be confirmed by loss on drying. Further influences to be considered are the constantly changing product temperature, rising to about 54 °C, the formation of hydrated states of Naproxen Sodium at a maximum of about 21% water content, and the large proportion, about 87%, of Naproxen Sodium in the formulation. These influences were considered in combination when developing the near-infrared spectroscopy method for the water content of Naproxen Sodium granules. The root mean square error was 0.25% for the calibration dataset and 0.30% for the validation dataset, obtained after different stages of optimization by multiplicative scatter correction and the first derivative.
Using laser diffraction, the granules were analyzed for particle size, yielding cumulative sieve fractions of >63 μm and >100 μm. The following influences should be considered for application in routine production: constant changes in water content up to 21% and a product temperature up to 54 °C. The different stages of optimization result in a root mean square error of 2.54% for the calibration data set and 3.53% for the validation set, using the Kubelka-Munk conversion and first derivative, for the near-infrared spectroscopy method for particle size >63 μm. For the method for particle size >100 μm, the root mean square error was 3.47% for the calibration data set and 4.51% for the validation set, using the same pre-treatments. The robustness and suitability of this methodology have already been demonstrated by its recent successful implementation in a routine granulate production process. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. The Influence of Dimensionality on Estimation in the Partial Credit Model.

    ERIC Educational Resources Information Center

    De Ayala, R. J.

    1995-01-01

The effect of multidimensionality on partial credit model parameter estimation was studied with noncompensatory and compensatory data. Analysis results, consisting of root mean square error, bias, Pearson product-moment correlations, standardized root mean squared differences, standardized differences between means, and descriptive statistics…

  6. Smart Sound Processing for Defect Sizing in Pipelines Using EMAT Actuator Based Multi-Frequency Lamb Waves

    PubMed Central

    García-Gómez, Joaquín; Rosa-Zurera, Manuel; Romero-Camacho, Antonio; Jiménez-Garrido, Jesús Antonio; García-Benavides, Víctor

    2018-01-01

Pipeline inspection is a topic of particular interest to companies, and defect sizing is especially important because it allows them to avoid subsequent costly repairs to their equipment. A solution for this issue is to use ultrasonic waves sensed through Electro-Magnetic Acoustic Transducer (EMAT) actuators. The main advantage of this technology is that it does not require direct contact with the surface of the material under investigation, which must be a conductive one. Of specific interest is meander-line-coil based Lamb wave generation, since the directivity of the waves allows a study based on the circumferential wrap-around received signal. However, the variety of defect sizes changes the behavior of the signal as it passes through the pipeline. Because of that, it is necessary to apply advanced techniques based on Smart Sound Processing (SSP). These methods involve extracting useful information from the signals sensed with EMAT at different frequencies to obtain nonlinear estimates of the depth of the defect, and to select the features that best estimate the profile of the pipeline. The proposed technique has been tested using both simulated and real signals in steel pipelines, obtaining good results in terms of Root Mean Square Error (RMSE). PMID:29518927

  7. The Effects of Cryotherapy on Knee Joint Position Sense and Force Production Sense in Healthy Individuals

    PubMed Central

    Furmanek, Mariusz P.; Słomka, Kajetan J.; Sobiesiak, Andrzej; Rzepko, Marian; Juras, Grzegorz

    2018-01-01

    Abstract The proprioceptive information received from mechanoreceptors is potentially responsible for controlling the joint position and force differentiation. However, it is unknown whether cryotherapy influences this complex mechanism. Previously reported results are not universally conclusive and sometimes even contradictory. The main objective of this study was to investigate the impact of local cryotherapy on knee joint position sense (JPS) and force production sense (FPS). The study group consisted of 55 healthy participants (age: 21 ± 2 years, body height: 171.2 ± 9 cm, body mass: 63.3 ± 12 kg, BMI: 21.5 ± 2.6). Local cooling was achieved with the use of gel-packs cooled to -2 ± 2.5°C and applied simultaneously over the knee joint and the quadriceps femoris muscle for 20 minutes. JPS and FPS were evaluated using the Biodex System 4 Pro apparatus. Repeated measures analysis of variance (ANOVA) did not show any statistically significant changes of the JPS and FPS under application of cryotherapy for all analyzed variables: the JPS’s absolute error (p = 0.976), its relative error (p = 0.295), and its variable error (p = 0.489); the FPS’s absolute error (p = 0.688), its relative error (p = 0.193), and its variable error (p = 0.123). The results indicate that local cooling does not affect proprioceptive acuity of the healthy knee joint. They also suggest that local limited cooling before physical activity at low velocity did not present health or injury risk in this particular study group. PMID:29599858

  8. Modeling Water Temperature in the Yakima River, Washington, from Roza Diversion Dam to Prosser Dam, 2005-06

    USGS Publications Warehouse

    Voss, Frank D.; Curran, Christopher A.; Mastin, Mark C.

    2008-01-01

A mechanistic water-temperature model was constructed by the U.S. Geological Survey for use by the Bureau of Reclamation for studying the effect of potential water management decisions on water temperature in the Yakima River between Roza and Prosser, Washington. Flow and water temperature data for model input were obtained from the Bureau of Reclamation Hydromet database and from measurements collected by the U.S. Geological Survey during field trips in autumn 2005. Shading data for the model were collected by the U.S. Geological Survey in autumn 2006. The model was calibrated with data collected from April 1 through October 31, 2005, and tested with data collected from April 1 through October 31, 2006. Sensitivity analysis results showed that for the parameters tested, daily maximum water temperature was most sensitive to changes in air temperature and solar radiation. Root mean squared error for the five sites used for model calibration ranged from 1.3 to 1.9 degrees Celsius (°C) and mean error ranged from −1.3 to 1.6 °C. The root mean squared error for the five sites used for the testing simulation ranged from 1.6 to 2.2 °C and mean error ranged from 0.1 to 1.3 °C. The accuracy of the stream temperatures estimated by the model is limited by four sources of error (model error, data error, parameter error, and user error).

  9. The modelling of lead removal from water by deep eutectic solvents functionalized CNTs: artificial neural network (ANN) approach.

    PubMed

    Fiyadh, Seef Saadi; AlSaadi, Mohammed Abdulhakim; AlOmar, Mohamed Khalid; Fayaed, Sabah Saadi; Hama, Ako R; Bee, Sharifah; El-Shafie, Ahmed

    2017-11-01

The main challenge in simulating lead removal is the non-linear relationships between the process parameters. Conventional modelling techniques usually treat this problem linearly. An alternative modelling technique is an artificial neural network (ANN) system, selected here to reflect the non-linearity in the interaction among the variables in the function. Herein, synthesized deep eutectic solvents were used as a functionalizing agent with carbon nanotubes as adsorbents of Pb2+. Different parameters were used in the adsorption study, including pH (2.7 to 7), adsorbent dosage (5 to 20 mg), contact time (3 to 900 min) and Pb2+ initial concentration (3 to 60 mg/l). The number of experimental trials to feed and train the system was 158 runs carried out at laboratory scale. Two ANN types were designed in this work, the feed-forward back-propagation and the layer-recurrent network; both methods are compared based on their predictive proficiency in terms of the mean square error (MSE), root mean square error, relative root mean square error, mean absolute percentage error and determination coefficient (R2) on the testing dataset. The ANN model of lead removal was subjected to accuracy determination and the results showed an R2 of 0.9956 with an MSE of 1.66 × 10−4. The maximum relative error is 14.93% for the feed-forward back-propagation neural network model.
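The error metrics used to compare the two network types are generic and easy to state in code. A minimal sketch with hypothetical observed and predicted values (not the lead-adsorption data):

```python
import math

def metrics(obs, pred):
    """MSE, RMSE, relative RMSE, MAPE and R^2 for paired observations."""
    n = len(obs)
    mse = sum((o - p) ** 2 for o, p in zip(obs, pred)) / n
    rmse = math.sqrt(mse)
    mean_obs = sum(obs) / n
    rrmse = rmse / mean_obs                      # relative RMSE
    mape = 100.0 * sum(abs((o - p) / o) for o, p in zip(obs, pred)) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    r2 = 1.0 - ss_res / ss_tot                   # determination coefficient
    return {"MSE": mse, "RMSE": rmse, "RRMSE": rrmse, "MAPE": mape, "R2": r2}

# Hypothetical removal values: measured vs. model-predicted
obs = [10.0, 20.0, 30.0, 40.0, 50.0]
pred = [11.0, 19.0, 31.5, 38.0, 51.0]
m = metrics(obs, pred)
print(m)
```

Reporting several of these together, as the record does, guards against any single metric hiding a failure mode: MAPE penalizes relative misses on small values, while R² measures variance explained regardless of scale.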

  10. Modeling Aboveground Biomass in Hulunber Grassland Ecosystem by Using Unmanned Aerial Vehicle Discrete Lidar

    PubMed Central

    Wang, Dongliang; Xin, Xiaoping; Shao, Quanqin; Brolly, Matthew; Zhu, Zhiliang; Chen, Jin

    2017-01-01

Accurate canopy structure datasets, including canopy height and fractional cover, are required to monitor aboveground biomass as well as to provide validation data for satellite remote sensing products. In this study, the ability of unmanned aerial vehicle (UAV) discrete light detection and ranging (lidar) to model both canopy height and fractional cover in the Hulunber grassland ecosystem was investigated. The extracted mean canopy height, maximum canopy height, and fractional cover were used to estimate aboveground biomass. The influence of flight height on the lidar estimates was also analyzed. The main findings are: (1) the lidar-derived mean canopy height is the most reasonable predictor of aboveground biomass (R2 = 0.340, root-mean-square error (RMSE) = 81.89 g·m−2, and relative error of 14.1%); adding fractional cover to the regression brings little improvement in R2 and RMSE, since mean canopy height and fractional cover are highly correlated; (2) flight height has a pronounced effect on the derived fractional cover and on the level of detail of the lidar data, but an insignificant effect on the derived canopy height when the flight height is within range (<100 m). These findings are helpful for building stable regressions to estimate grassland biomass using lidar returns. PMID:28106819

  11. Modeling Aboveground Biomass in Hulunber Grassland Ecosystem by Using Unmanned Aerial Vehicle Discrete Lidar.

    PubMed

    Wang, Dongliang; Xin, Xiaoping; Shao, Quanqin; Brolly, Matthew; Zhu, Zhiliang; Chen, Jin

    2017-01-19

Accurate canopy structure datasets, including canopy height and fractional cover, are required to monitor aboveground biomass as well as to provide validation data for satellite remote sensing products. In this study, the ability of unmanned aerial vehicle (UAV) discrete light detection and ranging (lidar) to model both canopy height and fractional cover in the Hulunber grassland ecosystem was investigated. The extracted mean canopy height, maximum canopy height, and fractional cover were used to estimate aboveground biomass. The influence of flight height on the lidar estimates was also analyzed. The main findings are: (1) the lidar-derived mean canopy height is the most reasonable predictor of aboveground biomass (R² = 0.340, root-mean-square error (RMSE) = 81.89 g·m−2, and relative error of 14.1%); adding fractional cover to the regression brings little improvement in R² and RMSE, since mean canopy height and fractional cover are highly correlated; (2) flight height has a pronounced effect on the derived fractional cover and on the level of detail of the lidar data, but an insignificant effect on the derived canopy height when the flight height is within range (<100 m). These findings are helpful for building stable regressions to estimate grassland biomass using lidar returns.

  12. Improved parallel image reconstruction using feature refinement.

    PubMed

    Cheng, Jing; Jia, Sen; Ying, Leslie; Liu, Yuanyuan; Wang, Shanshan; Zhu, Yanjie; Li, Ye; Zou, Chao; Liu, Xin; Liang, Dong

    2018-07-01

The aim of this study was to develop a novel feature refinement MR reconstruction method from highly undersampled multichannel acquisitions, improving image quality and preserving more detailed information. The feature refinement technique, which uses a feature descriptor to pick up useful features from the residual image discarded by sparsity constraints, is applied to preserve the details of the image in compressed sensing and parallel imaging in MRI (CS-pMRI). Texture and structure descriptors recognizing different types of features are required to form the feature descriptor. Feasibility of the feature refinement was validated using three different multicoil reconstruction methods on in vivo data. Experimental results show that reconstruction methods with feature refinement improve the quality of the reconstructed image and restore image details more accurately than the original methods, which is also verified by lower values of the root mean square error and high frequency error norm. A simple and effective way to preserve more useful detailed information in CS-pMRI is proposed. This technique can effectively improve reconstruction quality and has superior performance in terms of detail preservation compared with the original version without feature refinement. Magn Reson Med 80:211-223, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  13. Integrating seasonal optical and thermal infrared spectra to characterize urban impervious surfaces with extreme spectral complexity: a Shanghai case study

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Yao, Xinfeng; Ji, Minhe

    2016-01-01

    Despite recent rapid advancement in remote sensing technology, accurate mapping of the urban landscape in China still faces a great challenge due to unusually high spectral complexity in many big cities. Much of this complication comes from severe spectral confusion of impervious surfaces with polluted water bodies and bright bare soils. This paper proposes a two-step land cover decomposition method, which combines optical and thermal spectra from different seasons to cope with the issue of urban spectral complexity. First, a linear spectral mixture analysis was employed to generate fraction images for three preliminary endmembers (high albedo, low albedo, and vegetation). Seasonal change analysis on land surface temperature induced from thermal infrared spectra and coarse component fractions obtained from the first step was then used to reduce the confusion between impervious surfaces and nonimpervious materials. This method was tested with two-date Landsat multispectral data in Shanghai, one of China's megacities. The results showed that the method was capable of consistently estimating impervious surfaces in highly complex urban environments with an accuracy of R2 greater than 0.70 and both root mean square error and mean average error less than 0.20 for all test sites. This strategy seemed very promising for landscape mapping of complex urban areas.
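The first step described above, linear spectral mixture analysis, solves for per-pixel endmember fractions. The toy sketch below enforces the sum-to-one constraint by substitution, reducing three fractions to a two-parameter least-squares fit; the endmember spectra and pixel are hypothetical, with four bands rather than Landsat's six reflective bands, and non-negativity is not enforced.

```python
def unmix(pixel, e1, e2, e3):
    """Sum-to-one linear unmixing of one pixel against three endmembers.
    Substituting f3 = 1 - f1 - f2 turns the problem into an ordinary
    2-unknown least-squares fit solved via its normal equations."""
    a = [x - z for x, z in zip(e1, e3)]       # e1 - e3
    b = [y - z for y, z in zip(e2, e3)]       # e2 - e3
    d = [p - z for p, z in zip(pixel, e3)]    # pixel - e3
    aa = sum(x * x for x in a)
    bb = sum(y * y for y in b)
    ab = sum(x * y for x, y in zip(a, b))
    ad = sum(x * v for x, v in zip(a, d))
    bd = sum(y * v for y, v in zip(b, d))
    det = aa * bb - ab * ab
    f1 = (bb * ad - ab * bd) / det
    f2 = (aa * bd - ab * ad) / det
    return f1, f2, 1.0 - f1 - f2

# Hypothetical 4-band endmember spectra: high albedo, low albedo, vegetation
high_albedo = [0.60, 0.65, 0.70, 0.72]
low_albedo  = [0.08, 0.10, 0.12, 0.14]
vegetation  = [0.05, 0.09, 0.45, 0.50]

# Pixel synthesised as 50% high albedo, 30% low albedo, 20% vegetation
pixel = [0.5 * h + 0.3 * l + 0.2 * v
         for h, l, v in zip(high_albedo, low_albedo, vegetation)]
f = unmix(pixel, high_albedo, low_albedo, vegetation)
print(f)  # recovers fractions close to (0.5, 0.3, 0.2)
```

The confusion problem the paper addresses arises exactly here: when impervious surfaces, polluted water, and bright soils have similar spectra, the fraction estimates become ambiguous, which is why the second step adds seasonal thermal information.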

  14. Evaluation of Light Detection and Ranging (LIDAR) for measuring river corridor topography

    USGS Publications Warehouse

    Bowen, Z.H.; Waltermire, R.G.

    2002-01-01

    LIDAR is relatively new in the commercial market for remote sensing of topography and it is difficult to find objective reporting on the accuracy of LIDAR measurements in an applied context. Accuracy specifications for LIDAR data in published evaluations range from 1 to 2 m root mean square error (RMSEx,y) and 15 to 20 cm RMSEz. Most of these estimates are based on measurements over relatively flat, homogeneous terrain. This study evaluated the accuracy of one LIDAR data set over a range of terrain types in a western river corridor. Elevation errors based on measurements over all terrain types were larger (RMSEz equals 43 cm) than values typically reported. This result is largely attributable to horizontal positioning limitations (1 to 2 m RMSEx,y) in areas with variable terrain and large topographic relief. Cross-sectional profiles indicated algorithms that were effective for removing vegetation in relatively flat terrain were less effective near the active channel where dense vegetation was found in a narrow band along a low terrace. LIDAR provides relatively accurate data at densities (50,000 to 100,000 points per km2) not feasible with other survey technologies. Other options for projects requiring higher accuracy include low-altitude aerial photography and intensive ground surveying.
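    The vertical-accuracy statistic quoted above reduces to a one-line computation once LIDAR elevations are paired with surveyed check points; the numbers below are invented for illustration, not from the study.

```python
# Vertical accuracy (RMSE_z) of LIDAR elevations against surveyed check points.
import numpy as np

lidar_z  = np.array([101.12, 99.80, 103.45, 98.90, 100.55])   # metres
survey_z = np.array([101.00, 100.10, 103.00, 99.20, 100.50])  # metres

rmse_z = np.sqrt(np.mean((lidar_z - survey_z) ** 2))
print(f"RMSE_z = {rmse_z * 100:.1f} cm")
```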

  15. Joint retrievals of cloud and drizzle in marine boundary layer clouds using ground-based radar, lidar and zenith radiances

    DOE PAGES

    Fielding, M. D.; Chiu, J. C.; Hogan, R. J.; ...

    2015-07-02

    Active remote sensing of marine boundary-layer clouds is challenging as drizzle drops often dominate the observed radar reflectivity. We present a new method to simultaneously retrieve cloud and drizzle vertical profiles in drizzling boundary-layer clouds using surface-based observations of radar reflectivity, lidar attenuated backscatter, and zenith radiances under conditions when precipitation does not reach the surface. Specifically, the vertical structure of droplet size and water content of both cloud and drizzle is characterised throughout the cloud. An ensemble optimal estimation approach provides full error statistics given the uncertainty in the observations. To evaluate the new method, we first perform retrievals using synthetic measurements from large-eddy simulation snapshots of cumulus under stratocumulus, where cloud water path is retrieved with an error of 31 g m-2. The method also performs well in non-drizzling clouds where no assumption of the cloud profile is required. We then apply the method to observations of marine stratocumulus obtained during the Atmospheric Radiation Measurement MAGIC deployment in the Northeast Pacific. Here, retrieved cloud water path agrees well with independent three-channel microwave radiometer retrievals, with a root mean square difference of 10–20 g m-2.

  16. Bathymetric mapping of submarine sand waves using multiangle sun glitter imagery: a case of the Taiwan Banks with ASTER stereo imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Hua-guo; Yang, Kang; Lou, Xiu-lin; Li, Dong-ling; Shi, Ai-qin; Fu, Bin

    2015-01-01

    Submarine sand waves are visible in optical sun glitter remote sensing images and multiangle observations can provide valuable information. We present a method for bathymetric mapping of submarine sand waves using multiangle sun glitter information from Advanced Spaceborne Thermal Emission and Reflection Radiometer stereo imagery. Based on a multiangle image geometry model and a sun glitter radiance transfer model, sea surface roughness is derived using multiangle sun glitter images. These results are then used for water depth inversions based on the Alpers-Hennings model, supported by a few true depth data points (sounding data). Case study results show that the inversion and true depths match well, with high correlation coefficients, root-mean-square errors from 1.45 to 2.46 m, and relative errors from 5.48% to 8.12%. The proposed method has some advantages over previous methods in that it requires fewer true depth data points, it does not require environmental parameters or knowledge of sand-wave morphology, and it is relatively simple to operate. On this basis, we conclude that this method is effective in mapping submarine sand waves and we anticipate that it will also be applicable to other similar topography types.

  17. Magnetic-field sensing with quantum error detection under the effect of energy relaxation

    NASA Astrophysics Data System (ADS)

    Matsuzaki, Yuichiro; Benjamin, Simon

    2017-03-01

    A solid state spin is an attractive system with which to realize an ultrasensitive magnetic field sensor. A spin superposition state will acquire a phase induced by the target field, and we can estimate the field strength from this phase. Recent studies have aimed at improving sensitivity through the use of quantum error correction (QEC) to detect and correct any bit-flip errors that may occur during the sensing period. Here we investigate the performance of a two-qubit sensor employing QEC under the effect of energy relaxation. Surprisingly, we find that the standard QEC technique of detecting and recovering from an error does not improve the sensitivity compared with single-qubit sensors. This is a consequence of the fact that energy relaxation induces both phase-flip and bit-flip noise, and the former cannot be distinguished from the relative phase induced by the target field. However, we find that we can improve the sensitivity if we adopt postselection to discard the state when an error is detected. Even when quantum error detection is moderately noisy, and allowing for the cost of the postselection technique, this two-qubit system shows an advantage in sensing over a single qubit under the same conditions.

  18. Rapid Detection of Volatile Oil in Mentha haplocalyx by Near-Infrared Spectroscopy and Chemometrics.

    PubMed

    Yan, Hui; Guo, Cheng; Shao, Yang; Ouyang, Zhen

    2017-01-01

    Near-infrared spectroscopy combined with partial least squares regression (PLSR) and support vector machine (SVM) was applied for the rapid determination of the volatile oil content in Mentha haplocalyx. The effects of data pre-processing methods on the accuracy of the PLSR calibration models were investigated. The performance of the final model was evaluated according to the correlation coefficient (R) and root mean square error of prediction (RMSEP). For the PLSR model, the best preprocessing combination was first-order derivative, standard normal variate transformation (SNV), and mean centering, which gave calibration and prediction correlation coefficients of 0.8805 and 0.8719, with an RMSEC of 0.091 and an RMSEP of 0.097, respectively. The wavenumber variables linked to volatile oil lie between 5500 and 4000 cm-1, as shown by analyzing the loading weights and variable importance in projection (VIP) scores. For the SVM model, six LVs (fewer than the seven LVs in the PLSR model) were adopted, and the result was better than the PLSR model: the calibration and prediction correlation coefficients were 0.9232 and 0.9202, with RMSEC and RMSEP of 0.084 and 0.082, respectively, which indicated that the predicted values were accurate and reliable. This work demonstrated that near-infrared reflectance spectroscopy with chemometrics can be used to rapidly determine the volatile oil content in M. haplocalyx. Since the quality of a medicine is directly linked to its clinical efficacy, it is important to control the quality of Mentha haplocalyx.
Abbreviations used: 1st der: First-order derivative; 2nd der: Second-order derivative; LOO: Leave-one-out; LVs: Latent variables; MC: Mean centering; NIR: Near-infrared; NIRS: Near-infrared spectroscopy; PCR: Principal component regression; PLSR: Partial least squares regression; RBF: Radial basis function; RMSECV: Root mean square error of cross validation; RMSEC: Root mean square error of calibration; RMSEP: Root mean square error of prediction; SNV: Standard normal variate transformation; SVM: Support vector machine; VIP: Variable importance in projection.

  19. Evaluation of snow cover and snow depth on the Qinghai-Tibetan Plateau derived from passive microwave remote sensing

    NASA Astrophysics Data System (ADS)

    Dai, Liyun; Che, Tao; Ding, Yongjian; Hao, Xiaohua

    2017-08-01

    Snow cover on the Qinghai-Tibetan Plateau (QTP) plays a significant role in the global climate system and is an important water resource for rivers in the high-elevation region of Asia. At present, passive microwave (PMW) remote sensing data are the only efficient way to monitor temporal and spatial variations in snow depth at large scale. However, existing snow depth products show the largest uncertainties across the QTP. In this study, MODIS fractional snow cover product, point, line and intensive sampling data are synthesized to evaluate the accuracy of snow cover and snow depth derived from PMW remote sensing data and to analyze the possible causes of uncertainties. The results show that the accuracy of snow cover extents varies spatially and depends on the fraction of snow cover. Based on the assumption that grids with MODIS snow cover fraction > 10 % are regarded as snow cover, the overall accuracy in snow cover is 66.7 %, overestimation error is 56.1 %, underestimation error is 21.1 %, commission error is 27.6 % and omission error is 47.4 %. The commission and overestimation errors of snow cover primarily occur in the northwest and southeast areas with low ground temperature. Omission error primarily occurs in cold desert areas with shallow snow, and underestimation error mainly occurs in glacier and lake areas. With the increase of snow cover fraction, the overestimation error decreases and the omission error increases. A comparison between snow depths measured in field experiments, measured at meteorological stations and estimated across the QTP shows that agreement between observation and retrieval improves with an increasing number of observation points in a PMW grid. The misclassification and errors between observed and retrieved snow depth are associated with the relatively coarse resolution of PMW remote sensing, ground temperature, snow characteristics and topography. 
To accurately understand the variation in snow depth across the QTP, new algorithms should be developed to retrieve snow depth with higher spatial resolution and should consider the variation in brightness temperatures at different frequencies emitted from ground with changing ground features.

  20. Number Sense Made Simple Using Number Patterns

    ERIC Educational Resources Information Center

    Su, Hui Fang Huang; Marinas, Carol; Furner, Joseph

    2011-01-01

    This article highlights investigating intriguing number patterns utilising an emerging technology called the Square Tool. Mathematics teachers of grades K-12 will find the Square Tool useful in making connections and bridging the gap from the concrete to the abstract. Pattern recognition helps students discover various mathematical concepts. With…

  1. Navy Fuel Composition and Screening Tool (FCAST) v2.8

    DTIC Science & Technology

    2016-05-10

    allowed us to develop partial least squares (PLS) models based on gas chromatography–mass spectrometry (GC-MS) data that predict fuel properties. The... Index terms: chemometric property modeling; partial least squares (PLS); compositional profiler; Naval Air Systems Command Air-4.4.5; Patuxent River Naval Air Station, Patuxent... Glossary fragments: cumulative predicted residual error sum of squares; DiEGME, diethylene glycol monomethyl ether; FCAST, Fuel Composition and Screening Tool; FFP, Fit for

  2. Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2000-01-01

    Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)

  3. Latin-square three-dimensional gage master

    DOEpatents

    Jones, L.

    1981-05-12

    A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.

  4. Latin square three dimensional gage master

    DOEpatents

    Jones, Lynn L.

    1982-01-01

    A gage master for coordinate measuring machines has an nxn array of objects distributed in the Z coordinate utilizing the concept of a Latin square experimental design. Using analysis of variance techniques, the invention may be used to identify sources of error in machine geometry and quantify machine accuracy.

  5. Effect of nonideal square-law detection on static calibration in noise-injection radiometers

    NASA Technical Reports Server (NTRS)

    Hearn, C. P.

    1984-01-01

    The effect of nonideal square-law detection on the static calibration for a class of Dicke radiometers is examined. It is shown that fourth-order curvature in the detection characteristic adds a nonlinear term to the linear calibration relationship normally ascribed to noise-injection, balanced Dicke radiometers. The minimum error, based on an optimum straight-line fit to the calibration curve, is derived in terms of the power series coefficients describing the input-output characteristics of the detector. These coefficients can be determined by simple measurements, and detection nonlinearity is, therefore, quantitatively related to radiometric measurement error.

  6. VizieR Online Data Catalog: delta Cep VEGA/CHARA observing log (Nardetto+, 2016)

    NASA Astrophysics Data System (ADS)

    Nardetto, N.; Merand, A.; Mourard, D.; Storm, J.; Gieren, W.; Fouque, P.; Gallenne, A.; Graczyk, D.; Kervella, P.; Neilson, H.; Pietrzynski, G.; Pilecki, B.; Breitfelder, J.; Berio, P.; Challouf, M.; Clausse, J.-M.; Ligi, R.; Mathias, P.; Meilland, A.; Perraut, K.; Poretti, E.; Rainer, M.; Spang, A.; Stee, P.; Tallon-Bosc, I.; Ten Brummelaar, T.

    2016-07-01

    The columns give, respectively, the date, the RJD, the hour angle (HA), the minimum and maximum wavelengths over which the squared visibility is calculated, the projected baseline length Bp and its orientation PA, and the signal-to-noise ratio on the fringe peak; the last column provides the calibrated squared visibility V2 together with the statistical error on V2 and the systematic error on V2 (see text for details). The data are available on the Jean-Marie Mariotti Center OiDB service (available at http://oidb.jmmc.fr). (1 data file).

  7. A network application for modeling a centrifugal compressor performance map

    NASA Astrophysics Data System (ADS)

    Nikiforov, A.; Popova, D.; Soldatova, K.

    2017-08-01

    The approximation of aerodynamic performance of a centrifugal compressor stage and vaneless diffuser by neural networks is presented. Advantages, difficulties and specific features of the method are described. An example of a neural network and its structure is shown. The performances in terms of efficiency, pressure ratio and work coefficient of 39 model stages within the range of flow coefficient from 0.01 to 0.08 were modeled with mean squared error 1.5 %. In addition, the loss and friction coefficients of vaneless diffusers of relative widths 0.014-0.10 are modeled with mean squared error 2.45 %.

  8. Tissue resistivity estimation in the presence of positional and geometrical uncertainties.

    PubMed

    Baysal, U; Eyüboğlu, B M

    2000-08-01

    Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with the results of conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3% compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.

  9. A nonlinear model of gold production in Malaysia

    NASA Astrophysics Data System (ADS)

    Ramli, Norashikin; Muda, Nora; Umor, Mohd Rozi

    2014-06-01

    Malaysia is a country rich in natural resources, and one of them is gold. Gold has become an important national commodity. This study was conducted to determine a model that fits the gold production in Malaysia over the years 1995-2010. Five nonlinear models are presented in this study: the Logistic, Gompertz, Richards, Weibull and Chapman-Richards models. These models are used to fit the cumulative gold production in Malaysia, and the best model is then selected based on model performance. The performance of the fitted models is measured by the sum of squared errors, root mean square error, coefficient of determination, mean relative error, mean absolute error and mean absolute percentage error. This study found that the Weibull model significantly outperforms the other models. To confirm that Weibull is the best model, the latest data were fitted to the model; once again, the Weibull model gave the lowest readings on all error measures. We conclude that future gold production in Malaysia can be predicted with the Weibull model, and this could be an important finding for Malaysia in planning its economic activities.
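    One of the abstract's candidates can be sketched directly: fit a four-parameter Weibull growth curve to cumulative production by nonlinear least squares and score it by RMSE. The data and parameterization below are illustrative assumptions, not the Malaysian production series.

```python
# Fit a Weibull growth model to synthetic cumulative production and report RMSE.
import numpy as np
from scipy.optimize import curve_fit

def weibull(t, a, b, c, d):
    # a: asymptote, b: initial offset, c: rate, d: shape
    return a - b * np.exp(-c * t ** d)

t = np.arange(1, 17)                                 # 16 annual observations
rng = np.random.default_rng(2)
y = weibull(t, 100.0, 90.0, 0.05, 1.6) + rng.normal(0, 1.0, 16)

p, _ = curve_fit(weibull, t, y, p0=[100, 90, 0.05, 1.5], maxfev=10000)
rmse = np.sqrt(np.mean((weibull(t, *p) - y) ** 2))
print(p.round(3), round(rmse, 3))
```

Repeating the fit for the Logistic, Gompertz, Richards and Chapman-Richards forms and comparing the same error measures reproduces the selection procedure described.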

  10. Comparison of structural and least-squares lines for estimating geologic relations

    USGS Publications Warehouse

    Williams, G.P.; Troutman, B.M.

    1990-01-01

    Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. Number of data points, slope and intercept of the true relation, and variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as error in X becomes small relative to error in Y. Regarding the second goal, predicting the dependent variable, OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
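    The contrast can be demonstrated numerically. Below, the structural fit is implemented as Deming regression with a known error-variance ratio, one standard errors-in-variables estimator (a sketch of the idea, not the paper's exact SA procedure): with error in X, the OLS slope is attenuated toward zero while the structural estimate recovers the true slope.

```python
# OLS vs. a structural (Deming) line fit when both X and Y carry error.
import numpy as np

rng = np.random.default_rng(3)
n, true_slope, true_icept = 1000, 2.0, 1.0
x_true = rng.uniform(0, 10, n)
x = x_true + rng.normal(0, 1.0, n)               # measurement error in X
y = true_icept + true_slope * x_true + rng.normal(0, 1.0, n)

sxx = np.var(x)
syy = np.var(y)
sxy = np.cov(x, y, bias=True)[0, 1]

b_ols = sxy / sxx                                # attenuated toward zero
delta = 1.0                                      # var(err_y)/var(err_x), assumed known
b_deming = (syy - delta * sxx
            + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
print(round(b_ols, 3), round(b_deming, 3))
```

For predicting Y, the attenuated OLS slope is actually preferable, matching the abstract's second conclusion.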

  11. Vibration-Induced Errors in MEMS Tuning Fork Gyroscopes with Imbalance.

    PubMed

    Fang, Xiang; Dong, Linxi; Zhao, Wen-Sheng; Yan, Haixia; Teh, Kwok Siong; Wang, Gaofeng

    2018-05-29

    This paper discusses the vibration-induced error in non-ideal MEMS tuning fork gyroscopes (TFGs). Ideal TFGs, which would be immune to vibrations, do not exist, and imbalance between the two gyros of a TFG is inevitable. Three types of fabrication imperfection (i.e., stiffness imbalance, mass imbalance, and damping imbalance) are studied, considering different imbalance ratios. We focus on the coupling types of the two gyros of TFGs in both the drive and sense directions, and the vibration sensitivities of four TFG designs with imbalance are simulated and compared. It is found that non-ideal TFGs with two gyros coupled in both the drive and sense directions (type CC TFGs) are the most insensitive to vibrations with frequencies close to the TFG operating frequencies. However, sense-axis vibrations at the in-phase resonant frequencies of the coupled-gyro system result in severe error outputs for TFGs with two gyros coupled in the sense direction, which is mainly attributed to sense-capacitance nonlinearity. With increasing stiffness coupling ratio of the coupled-gyro system, the sensitivity to vibrations at the operating frequencies is reduced, yet the sensitivity to vibrations at the in-phase frequencies is amplified.

  12. Wavefront error sensing

    NASA Technical Reports Server (NTRS)

    Tubbs, Eldred F.

    1986-01-01

    A two-step approach to wavefront sensing for the Large Deployable Reflector (LDR) was examined as part of an effort to define wavefront-sensing requirements and to determine particular areas for more detailed study. A Hartmann test for coarse alignment, particularly segment tilt, seems feasible if LDR can operate at 5 microns or less. The direct measurement of the point spread function in the diffraction limited region may be a way to determine piston error, but this can only be answered by a detailed software model of the optical system. The question of suitable astronomical sources for either test must also be addressed.

  13. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at 95 % confidence limits, and the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural or industrial use.

  14. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
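    For the ordinary-polynomial case mentioned, both the parameter standard errors and the standard error of the fitted function follow from the parameter covariance matrix; a sketch with synthetic straight-line data (a first-degree polynomial):

```python
# Standard errors of fitted parameters and of the fitted curve for a linear fit.
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 50)
y = 1.0 + 0.5 * x + rng.normal(0, 0.2, 50)       # true line plus noise

coeffs, cov = np.polyfit(x, y, deg=1, cov=True)  # coeffs = [slope, intercept]
se_params = np.sqrt(np.diag(cov))                # standard errors of the parameters

# Standard error of the fit at each x: sqrt(diag(J Cov J^T)) with J = [x, 1].
J = np.vander(x, 2)
se_fit = np.sqrt(np.sum((J @ cov) * J, axis=1))
print(coeffs.round(3), se_params.round(4), round(se_fit.max(), 4))
```

As the closed-form expressions in the paper predict, the standard error of the fit is smallest near the centroid of the data and grows toward the ends of the x range.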

  15. Methods of automatic nucleotide-sequence analysis. Multicomponent spectrophotometric analysis of mixtures of nucleic acid components by a least-squares procedure

    PubMed Central

    Lee, Sheila; McMullen, D.; Brown, G. L.; Stokes, A. R.

    1965-01-01

    1. A theoretical analysis of the errors in multicomponent spectrophotometric analysis of nucleoside mixtures, by a least-squares procedure, has been made to obtain an expression for the error coefficient, relating the error in calculated concentration to the error in extinction measurements. 2. The error coefficients, which depend only on the `library' of spectra used to fit the experimental curves, have been computed for a number of `libraries' containing the following nucleosides found in s-RNA: adenosine, guanosine, cytidine, uridine, 5-ribosyluracil, 7-methylguanosine, 6-dimethylaminopurine riboside, 6-methylaminopurine riboside and thymine riboside. 3. The error coefficients have been used to determine the best conditions for maximum accuracy in the determination of the compositions of nucleoside mixtures. 4. Experimental determinations of the compositions of nucleoside mixtures have been made and the errors found to be consistent with those predicted by the theoretical analysis. 5. It has been demonstrated that, with certain precautions, the multicomponent spectrophotometric method described is suitable as a basis for automatic nucleotide-composition analysis of oligonucleotides containing nine nucleotides. Used in conjunction with continuous chromatography and flow chemical techniques, this method can be applied to the study of the sequence of s-RNA. PMID:14346087
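    The error-coefficient idea can be sketched concretely: for the least-squares solution c = A⁺E, each concentration's sensitivity to extinction errors depends only on the spectral library A, as stated in point 2 above. The 3-component library below is an invented placeholder, not the paper's nucleoside spectra.

```python
# Error coefficients of a multicomponent least-squares spectral analysis.
import numpy as np

# Library of extinction coefficients: rows = wavelengths, columns = components.
A = np.array([[1.00, 0.30, 0.10],
              [0.40, 0.90, 0.20],
              [0.20, 0.50, 0.80],
              [0.10, 0.20, 0.60]])

A_pinv = np.linalg.pinv(A)
# RMS amplification of independent unit-variance extinction errors into each
# concentration: the row norms of the pseudoinverse.
err_coeff = np.sqrt(np.sum(A_pinv ** 2, axis=1))

c_true = np.array([1.0, 2.0, 0.5])
E = A @ c_true + 0.005 * np.random.default_rng(5).normal(size=4)  # noisy spectrum
c_est = A_pinv @ E
print(err_coeff.round(3), c_est.round(3))
```

Libraries whose component spectra overlap strongly give large error coefficients, which is exactly the criterion used to choose the best conditions for maximum accuracy.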

  16. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.

  17. Chemometrics resolution and quantification power evaluation: Application on pharmaceutical quaternary mixture of Paracetamol, Guaifenesin, Phenylephrine and p-aminophenol.

    PubMed

    Yehia, Ali M; Mohamed, Heba M

    2016-01-05

    Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly, without any preliminary separation step, and were successfully applied to pharmaceutical formulation analysis, showing no interference from excipients. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. CARS Spectral Fitting with Multiple Resonant Species using Sparse Libraries

    NASA Technical Reports Server (NTRS)

    Cutler, Andrew D.; Magnotti, Gaetano

    2010-01-01

    The dual pump CARS technique is often used in the study of turbulent flames. Fast and accurate algorithms are needed for fitting dual-pump CARS spectra for temperature and multiple chemical species. This paper describes the development of such an algorithm. The algorithm employs sparse libraries, whose size grows much more slowly with number of species than a conventional library. The method was demonstrated by fitting synthetic "experimental" spectra containing 4 resonant species (N2, O2, H2 and CO2), both with noise and without it, and by fitting experimental spectra from a H2-air flame produced by a Hencken burner. In both studies, weighted least squares fitting of signal, as opposed to least squares fitting signal or square-root signal, was shown to produce the least random error and minimize bias error in the fitted parameters.

  19. Nondestructive quantification of the soluble-solids content and the available acidity of apples by Fourier-transform near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Ying, Yibin; Liu, Yande; Tao, Yang

    2005-09-01

    This research evaluated the feasibility of using Fourier-transform near-infrared (FT-NIR) spectroscopy to quantify the soluble-solids content (SSC) and the available acidity (VA) in intact apples. Partial least-squares calibration models obtained with several preprocessing techniques (smoothing, derivative, etc.) over several wave-number ranges were compared. The best models gave a high coefficient of determination (r) of 0.940 for the SSC and a moderate r of 0.801 for the VA, with root-mean-square errors of prediction of 0.272% and 0.053% and root-mean-square errors of calibration of 0.261% and 0.046%, respectively. The results indicate that FT-NIR spectroscopy yields good predictions of the SSC and also show the feasibility of using it to predict the VA of apples.

  20. Self-Evaluation of PANDA-FBG Based Sensing System for Dynamic Distributed Strain and Temperature Measurement.

    PubMed

    Zhu, Mengshi; Murayama, Hideaki; Wada, Daichi

    2017-10-12

    A novel method is introduced in this work for effectively evaluating the performance of the PANDA-type polarization-maintaining fiber Bragg grating (PANDA-FBG) distributed dynamic strain and temperature sensing system. Conventionally, the errors during measurement are unknown or are evaluated using other sensors, such as strain gauges and thermocouples. This makes the sensing system complicated and decreases its efficiency, since more than one kind of sensor is applied to the same measurand. In this study, we used the approximately constant ratio of primary errors in strain and temperature measurement to realize self-evaluation of the sensing system, which can significantly enhance its applicability as well as its reliability in strategy making.

  1. Rovibrational spectra of ammonia. I. Unprecedented accuracy of a potential energy surface used with nonadiabatic corrections.

    PubMed

    Huang, Xinchuan; Schwenke, David W; Lee, Timothy J

    2011-01-28

    In this work, we build upon our previous work on the theoretical spectroscopy of ammonia, NH(3). Compared to our 2008 study, we include more physics in our rovibrational calculations and more experimental data in the refinement procedure, and these enable us to produce a potential energy surface (PES) of unprecedented accuracy. We call this the HSL-2 PES. The additional physics we include is a second-order correction for the breakdown of the Born-Oppenheimer approximation, and we find it to be critical for improved results. By including experimental data for higher rotational levels in the refinement procedure, we were able to greatly reduce our systematic errors for the rotational dependence of our predictions. These additions together lead to a significantly improved total angular momentum (J) dependence in our computed rovibrational energies. The root-mean-square error between our predictions using the HSL-2 PES and the reliable energy levels from the HITRAN database for J = 0-6 and J = 7∕8 for (14)NH(3) is only 0.015 cm(-1) and 0.020∕0.023 cm(-1), respectively. The root-mean-square errors for the characteristic inversion splittings are approximately 1∕3 smaller than those for energy levels. The root-mean-square error for the 6002 J = 0-8 transition energies is 0.020 cm(-1). Overall, for J = 0-8, the spectroscopic data computed with HSL-2 is roughly an order of magnitude more accurate relative to our previous best ammonia PES (denoted HSL-1). These impressive numbers are eclipsed only by the root-mean-square error between our predictions for purely rotational transition energies of (15)NH(3) and the highly accurate Cologne database (CDMS): 0.00034 cm(-1) (10 MHz), in other words, 2 orders of magnitude smaller. In addition, we identify a deficiency in the (15)NH(3) energy levels determined from a model of the experimental data.

  2. Extrapolation of in situ data from 1-km squares to adjacent squares using remote sensed imagery and airborne lidar data for the assessment of habitat diversity and extent.

    PubMed

    Lang, M; Vain, A; Bunce, R G H; Jongman, R H G; Raet, J; Sepp, K; Kuusemets, V; Kikas, T; Liba, N

    2015-03-01

    Habitat surveillance and subsequent monitoring at a national level is usually carried out by recording data from in situ sample sites located according to predefined strata. This paper describes the application of remote sensing to the extension of such field data recorded in 1-km squares to adjacent squares, in order to increase sample number without further field visits. Habitats were mapped in eight central squares in northeast Estonia in 2010 using a standardized recording procedure. Around one of the squares, a special study site was established which consisted of the central square and eight surrounding squares. A Landsat-7 Enhanced Thematic Mapper Plus (ETM+) image was used for correlation with in situ data. An airborne light detection and ranging (lidar) vegetation height map was also included in the classification. A series of tests were carried out by including the lidar data and contrasting analytical techniques, which are described in detail in the paper. Training accuracy in the central square varied from 75 to 100 %. In the extrapolation procedure to the surrounding squares, accuracy varied from 53.1 to 63.1 %, which improved by 10 % with the inclusion of lidar data. The reasons for this relatively low classification accuracy were mainly inherent variability in the spectral signatures of habitats but also differences between the dates of imagery acquisition and field sampling. Improvements could therefore be made by better synchronization of the field survey and image acquisition as well as by dividing general habitat categories (GHCs) into units which are more likely to have similar spectral signatures. However, the increase in the number of sample kilometre squares compensates for the loss of accuracy in the measurements of individual squares. The methodology can be applied in other studies as the procedures used are readily available.

  3. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    NASA Astrophysics Data System (ADS)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as a test statistic that can be applied whether the noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence on the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51% and 59%, respectively. In addition, the RMS of the LSC prediction error at the data points and the RMS of the estimated observation noise are decreased by 39% and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is reduced by only about 4%, a consequence of the sparse distribution of data points in this case study. The influence of gross errors on the LSC prediction results is also investigated by lower cutoff CVEs; after elimination of outliers, the RMS of this type of error is reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classifying the dataset into three groups with presumed different noise variances. The noise variance components for each group are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for the groups with fewer noisy data points.
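
    For ordinary linear least squares there is an analogous direct formula for the whole vector of leave-one-out cross-validation errors, e_i = r_i / (1 - H_ii), with H the hat matrix. This is not the authors' LSC formula, but it illustrates the same idea of computing all CVEs at once instead of refitting element-wise:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear model standing in for the collocation predictor.
n, p = 30, 3
A = rng.normal(size=(n, p))
y = A @ np.array([1.0, -0.5, 2.0]) + rng.normal(0.0, 0.1, n)

# Hat matrix of the least-squares predictor: y_hat = H @ y.
H = A @ np.linalg.solve(A.T @ A, A.T)
r = y - H @ y                        # ordinary residuals

# Direct vector of leave-one-out (cross-validation) errors.
cve_fast = r / (1.0 - np.diag(H))

# Brute-force check: refit n times, each time deleting one observation.
cve_slow = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    coef, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
    cve_slow[i] = y[i] - A[i] @ coef

print("max difference:", np.max(np.abs(cve_fast - cve_slow)))
```

    The two vectors agree to machine precision, while the direct formula costs a single fit rather than n of them.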

  4. Grazing Incidence Wavefront Sensing and Verification of X-Ray Optics Performance

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Rohrbach, Scott; Zhang, William W.

    2011-01-01

    Evaluation of interferometrically measured mirror metrology data and characterization of a telescope wavefront can be powerful tools in understanding the image characteristics of an x-ray optical system. In the development of the soft x-ray telescope for the International X-Ray Observatory (IXO), we have developed new approaches to support the telescope development process. Interferometric measurement of the optical components over all relevant spatial frequencies can be used to evaluate and predict the performance of an x-ray telescope. Typically, the mirrors are measured using a mount that minimizes mount- and gravity-induced errors. In the assembly and mounting process, the shape of the mirror segments can change dramatically. We have developed wavefront sensing techniques suitable for x-ray optical components to aid us in the characterization and evaluation of these changes. Hartmann sensing of a telescope and its components is a simple method that can be used to evaluate low-order mirror surface errors and alignment errors. Phase retrieval techniques can also be used to assess and estimate the low-order axial errors of the primary and secondary mirror segments. In this paper we describe the mathematical foundation of our Hartmann and phase retrieval sensing techniques and show how these techniques can be used in the evaluation and performance prediction of x-ray telescopes.

  5. Red Wine Age Estimation by the Alteration of Its Color Parameters: Fourier Transform Infrared Spectroscopy as a Tool to Monitor Wine Maturation Time

    PubMed Central

    Basalekou, M.; Pappas, C.; Kotseridis, Y.; Tarantilis, P. A.; Kontaxakis, E.

    2017-01-01

    Color, phenolic content, and chemical age values of red wines made from Cretan grape varieties (Kotsifali, Mandilari) were evaluated over nine months of maturation in different containers for two vintages. The wines differed greatly in their anthocyanin profiles. Mid-IR spectra were also recorded with the use of a Fourier Transform Infrared Spectrophotometer in ZnSe disk mode. Analysis of Variance was used to explore the parameters' dependency on time. Determination models were developed for the chemical age indexes using Partial Least Squares (PLS) (TQ Analyst software) considering the spectral region 1830–1500 cm−1. The correlation coefficients (r) for chemical age index i were 0.86 for Kotsifali (Root Mean Square Error of Calibration (RMSEC) = 0.067, Root Mean Square Error of Prediction (RMSEP) = 0.115, and Root Mean Square Error of Cross-Validation (RMSECV) = 0.164) and 0.90 for Mandilari (RMSEC = 0.050, RMSEP = 0.040, and RMSECV = 0.089). For chemical age index ii the correlation coefficients (r) were 0.86 and 0.97 for Kotsifali (RMSEC = 0.044, RMSEP = 0.087, and RMSECV = 0.214) and Mandilari (RMSEC = 0.024, RMSEP = 0.033, and RMSECV = 0.078), respectively. The proposed method is simpler, less time consuming, and more economical, and does not require chemical reagents. PMID:29225994

  6. Application of near-infrared spectroscopy for the rapid quality assessment of Radix Paeoniae Rubra

    NASA Astrophysics Data System (ADS)

    Zhan, Hao; Fang, Jing; Tang, Liying; Yang, Hongjun; Li, Hua; Wang, Zhuju; Yang, Bin; Wu, Hongwei; Fu, Meihong

    2017-08-01

    Near-infrared (NIR) spectroscopy with multivariate analysis was used to quantify gallic acid, catechin, albiflorin, and paeoniflorin in Radix Paeoniae Rubra, and the feasibility of classifying samples originating from different areas was investigated. A new high-performance liquid chromatography method was developed and validated to analyze gallic acid, catechin, albiflorin, and paeoniflorin in Radix Paeoniae Rubra as the reference. Partial least squares (PLS), principal component regression (PCR), and stepwise multivariate linear regression (SMLR) were performed to calibrate the regression model. Different data pretreatments such as derivatives (1st and 2nd), multiplicative scatter correction, standard normal variate, Savitzky-Golay filter, and Norris derivative filter were applied to remove the systematic errors. The performance of the model was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of prediction (RMSEP), root mean square error of cross-validation (RMSECV), and correlation coefficient (r). The results show that compared to PCR and SMLR, PLS had lower RMSEC, RMSECV, and RMSEP values and a higher r for all four analytes. PLS coupled with proper pretreatments showed good performance in both the fitting and predicting results. Furthermore, the original areas of Radix Paeoniae Rubra samples were partly distinguished by principal component analysis. This study shows that NIR with PLS is a reliable, inexpensive, and rapid tool for the quality assessment of Radix Paeoniae Rubra.

  7. Quick method (FT-NIR) for the determination of oil and major fatty acids content in whole achenes of milk thistle (Silybum marianum (L.) Gaertn.).

    PubMed

    Koláčková, Pavla; Růžičková, Gabriela; Gregor, Tomáš; Šišperová, Eliška

    2015-08-30

    Calibration models for the Fourier transform-near infrared (FT-NIR) instrument were developed for quick and non-destructive determination of oil and fatty acids in whole achenes of milk thistle. Samples with a range of oil and fatty acid levels were collected and their transmittance spectra were obtained by the FT-NIR instrument. Based on these spectra and on data gained by means of the reference methods - Soxhlet extraction and gas chromatography (GC) - calibration models were created by means of partial least squares (PLS) regression analysis. The precision and accuracy of the calibration models were verified via cross-validation with validation samples whose spectra were not part of the calibration model, and also according to the root mean square error of prediction (RMSEP), root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV), and the validation coefficient of determination (R(2)). The R(2) values for whole seeds were 0.96, 0.96, 0.83, and 0.67, and the RMSEP values were 0.76, 1.68, 1.24, and 0.54 for oil, linoleic (C18:2), oleic (C18:1), and palmitic (C16:0) acids, respectively. The calibration models are appropriate for the non-destructive determination of oil and fatty acid levels in whole seeds of milk thistle. © 2014 Society of Chemical Industry.

  8. Green method by diffuse reflectance infrared spectroscopy and spectral region selection for the quantification of sulphamethoxazole and trimethoprim in pharmaceutical formulations.

    PubMed

    da Silva, Fabiana E B; Flores, Érico M M; Parisotto, Graciele; Müller, Edson I; Ferrão, Marco F

    2016-03-01

    An alternative method for the quantification of sulphamethoxazole (SMZ) and trimethoprim (TMP) using diffuse reflectance infrared Fourier-transform spectroscopy (DRIFTS) and partial least squares regression (PLS) was developed. Interval Partial Least Squares (iPLS) and Synergy Interval Partial Least Squares (siPLS) were applied to select a spectral range that provided the lowest prediction error in comparison to the full-spectrum model. Fifteen commercial tablet formulations and forty-nine synthetic samples were used. The concentration ranges considered were 400 to 900 mg g-1 SMZ and 80 to 240 mg g-1 TMP. Spectral data were recorded between 600 and 4000 cm-1 with a 4 cm-1 resolution by DRIFTS. The proposed procedure was compared to high performance liquid chromatography (HPLC). The results obtained from the root mean square error of prediction (RMSEP) during the validation of the models for samples of SMZ and TMP using siPLS demonstrate that this approach is a valid technique for use in the quantitative analysis of pharmaceutical formulations. The selected-interval algorithm allowed building regression models with smaller errors than the full-spectrum PLS model. An RMSEP of 13.03 mg g-1 for SMZ and 4.88 mg g-1 for TMP was obtained after the selection of the best spectral regions by siPLS.

  9. New method for propagating the square root covariance matrix in triangular form [using Kalman-Bucy filter]

    NASA Technical Reports Server (NTRS)

    Choe, C. Y.; Tapley, B. D.

    1975-01-01

    A method proposed by Potter for applying the Kalman-Bucy filter to the problem of estimating the state of a dynamic system is described, in which the square root of the state error covariance matrix is used to process the observations. A new technique that propagates the covariance square root matrix in lower triangular form is given for the discrete observation case. The technique is faster than previously proposed algorithms and is well adapted for use with the Carlson square root measurement algorithm.
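
    A schematic of the underlying idea (this is a generic QR-based sketch, not Choe and Tapley's algorithm): propagate a lower-triangular factor S with P = S Sᵀ through a linear time update, using a QR factorization to restore triangular form without ever forming P.

```python
import numpy as np

def sqrt_time_update(S, F, Q_sqrt):
    """Propagate a lower-triangular covariance square root S (P = S @ S.T)
    through x' = F x + w, cov(w) = Q_sqrt @ Q_sqrt.T, returning a new
    lower-triangular factor.  Working on factors instead of P itself is
    numerically more stable."""
    M = np.hstack([F @ S, Q_sqrt])       # P' = M @ M.T
    # QR of M.T gives M.T = Q R, so P' = R.T @ R with R upper triangular.
    _, R = np.linalg.qr(M.T)
    S_new = R.T                          # lower triangular (up to column signs)
    # Flip column signs so the diagonal is non-negative.
    S_new = S_new * np.sign(np.diag(S_new))[None, :]
    return S_new

# Check against the conventional covariance update P' = F P F.T + Q.
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
P = np.array([[2.0, 0.3],
              [0.3, 1.0]])
Q = np.diag([0.01, 0.02])

S = np.linalg.cholesky(P)
S_new = sqrt_time_update(S, F, np.linalg.cholesky(Q))

print(np.allclose(S_new @ S_new.T, F @ P @ F.T + Q))  # True
```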

  10. A Comparison of Normal and Elliptical Estimation Methods in Structural Equation Models.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.; Cheevatanarak, Suchittra

    Monte Carlo simulation compared chi-square statistics, parameter estimates, and root mean square error of approximation values using normal and elliptical estimation methods. Three research conditions were imposed on the simulated data: sample size, population contamination percent, and kurtosis. A Bentler-Weeks structural model established the…

  11. Phase modulation for reduced vibration sensitivity in laser-cooled clocks in space

    NASA Technical Reports Server (NTRS)

    Klipstein, W.; Dick, G.; Jefferts, S.; Walls, F.

    2001-01-01

    The standard interrogation technique in atomic beam clocks is square-wave frequency modulation (SWFM), which suffers from a first-order sensitivity to vibrations, as changes in the transit time of the atoms translate into perceived frequency errors. Square-wave phase modulation (SWPM) interrogation eliminates sensitivity to this noise.

  12. An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models

    ERIC Educational Resources Information Center

    Prindle, John J.; McArdle, John J.

    2012-01-01

    This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…

  13. Parameter estimation of Monod model by the Least-Squares method for microalgae Botryococcus Braunii sp

    NASA Astrophysics Data System (ADS)

    See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.

    2018-04-01

    This research aims to estimate the parameters of the Monod model for the growth of the microalga Botryococcus braunii sp. by the Least-Squares method. The Monod equation is non-linear but can be transformed into a linear form and solved by the Least-Squares linear regression method. Alternatively, the Gauss-Newton method solves the non-linear Least-Squares problem directly, obtaining the parameter values of the Monod model by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for Botryococcus braunii sp. can be estimated by the Least-Squares method; however, the parameter values obtained by the non-linear Least-Squares method are more accurate than those from the linear method, since the SSE of the non-linear fit is smaller.
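
    The two approaches can be sketched on hypothetical synthetic data (the values below are not the paper's dataset). The linearized fit uses the Lineweaver-Burk transformation 1/mu = (Ks/mu_max)(1/S) + 1/mu_max; the Gauss-Newton iteration minimizes the untransformed SSE directly.

```python
import numpy as np

rng = np.random.default_rng(3)

def monod(S, mu_max, Ks):
    """Monod growth rate: mu = mu_max * S / (Ks + S)."""
    return mu_max * S / (Ks + S)

# Synthetic substrate/growth-rate data with hypothetical true parameters.
mu_max_true, Ks_true = 1.2, 0.5
S = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
mu = monod(S, mu_max_true, Ks_true) * (1.0 + rng.normal(0.0, 0.03, S.size))

# Linearized least squares (Lineweaver-Burk):
#   1/mu = (Ks/mu_max) * (1/S) + 1/mu_max,  a straight line in 1/S.
a, b = np.polyfit(1.0 / S, 1.0 / mu, 1)
mu_max_lin, Ks_lin = 1.0 / b, a / b

# Gauss-Newton on the untransformed residuals r = mu - monod(S, p),
# warm-started from the linearized estimates.
p = np.array([mu_max_lin, Ks_lin])
for _ in range(20):
    r = mu - monod(S, *p)
    J = np.column_stack([S / (p[1] + S),                  # d mu / d mu_max
                         -p[0] * S / (p[1] + S) ** 2])    # d mu / d Ks
    p = p + np.linalg.solve(J.T @ J, J.T @ r)

sse_lin = np.sum((mu - monod(S, mu_max_lin, Ks_lin)) ** 2)
sse_gn = np.sum((mu - monod(S, *p)) ** 2)
print(f"linearized:   mu_max={mu_max_lin:.3f}, Ks={Ks_lin:.3f}, SSE={sse_lin:.2e}")
print(f"Gauss-Newton: mu_max={p[0]:.3f}, Ks={p[1]:.3f}, SSE={sse_gn:.2e}")
```

    Because Gauss-Newton minimizes the SSE of the original (untransformed) model, its SSE is no larger than that of the linearized fit, mirroring the paper's conclusion.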

  14. Understanding Scaling Relations in Fracture and Mechanical Deformation of Single Crystal and Polycrystalline Silicon by Performing Atomistic Simulations at Mesoscale

    DTIC Science & Technology

    2009-07-16

    Statistical analysis (coefficient of determination): R^2 = SSR/SSTO = 1 - SSE/SSTO, where SSR = Σ_i (Ŷ_i - Ȳ)^2 is the regression sum of squares, SSE = Σ_i (Y_i - Ŷ_i)^2 is the error sum of squares, and SSTO = SSE + SSR is the total sum of squares (Ȳ: mean value; Ŷ_i: value from the fitted line).
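
    The sums of squares above can be sketched as follows (note that the decomposition SSTO = SSR + SSE holds exactly only for least-squares fits that include an intercept; the data here are illustrative):

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination from the sums of squares:
    SSR = sum((y_hat - mean(y))^2), SSE = sum((y - y_hat)^2),
    SSTO = SSR + SSE, R^2 = SSR / SSTO."""
    y = np.asarray(y, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    ssr = np.sum((y_hat - y.mean()) ** 2)   # regression sum of squares
    sse = np.sum((y - y_hat) ** 2)          # error sum of squares
    ssto = ssr + sse                        # total sum of squares
    return ssr / ssto

# Example: a straight-line fit to nearly linear data.
x = np.arange(6, dtype=float)
y = 2.0 * x + 1.0 + np.array([0.1, -0.1, 0.0, 0.1, -0.1, 0.0])
slope, intercept = np.polyfit(x, y, 1)
print(round(r_squared(y, slope * x + intercept), 4))
```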

  15. Synthesis of hover autopilots for rotary-wing VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Hall, W. E.; Bryson, A. E., Jr.

    1972-01-01

    The practical situation is considered where imperfect information on only a few rotor and fuselage state variables is available. Filters are designed to estimate all the state variables from noisy measurements of fuselage pitch/roll angles and from noisy measurements of both fuselage and rotor pitch/roll angles. The mean square response of the vehicle to a very gusty, random wind is computed using various filter/controllers and is found to be quite satisfactory although, of course, not so good as when one has perfect information (idealized case). The second part of the report considers precision hover over a point on the ground. A vehicle model without rotor dynamics is used and feedback signals in position and integral of position error are added. The mean square response of the vehicle to a very gusty, random wind is computed, assuming perfect information feedback, and is found to be excellent. The integral error feedback gives zero position error for a steady wind, and smaller position error for a random wind.

  16. Bayesian generalized least squares regression with application to log Pearson type 3 regional skew estimation

    NASA Astrophysics Data System (ADS)

    Reis, D. S.; Stedinger, J. R.; Martins, E. S.

    2005-10-01

    This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.

  17. Robustness study of the pseudo open-loop controller for multiconjugate adaptive optics.

    PubMed

    Piatrou, Piotr; Gilles, Luc

    2005-02-20

    The robustness of the recently proposed "pseudo open-loop control" (POLC) algorithm against various system errors has been investigated for the representative example of the Gemini-South 8-m telescope multiconjugate adaptive-optics system. The existing model representing the adaptive-optics system with pseudo open-loop control has been modified to account for misalignments, noise, and calibration errors in the deformable mirrors and wave-front sensors, and a comparison with the conventional least-squares control model has been made. We show, with the aid of both transfer-function pole-placement analysis and Monte Carlo simulations, that POLC remains remarkably stable and robust against very large levels of system errors and outperforms least-squares control in this respect. Approximate stability margins, as well as performance metrics such as Strehl ratios and rms wave-front residuals averaged over a 1-arc min field of view, have been computed for different types and levels of system errors to quantify the expected performance degradation.

  18. Reciprocally-Benefited Secure Transmission for Spectrum Sensing-Based Cognitive Radio Sensor Networks

    PubMed Central

    Wang, Dawei; Ren, Pinyi; Du, Qinghe; Sun, Li; Wang, Yichen

    2016-01-01

    The rapid proliferation of independently designed and deployed wireless sensor networks severely crowds the wireless spectrum and promotes the emergence of cognitive radio sensor networks (CRSN). In a CRSN, the sensor node (SN) can make full use of unutilized licensed spectrum, and the spectrum efficiency is greatly improved. However, inevitable spectrum sensing errors will adversely interfere with the primary transmission, which may result in primary transmission outage. To compensate for the adverse effect of spectrum sensing errors, we propose a reciprocally-benefited secure transmission strategy, in which the SN's interference to the eavesdropper is employed to protect the primary confidential messages while the CRSN is rewarded with a loose spectrum sensing error probability constraint. Specifically, according to the spectrum sensing results and primary users' activities, there are four system states in this strategy. For each state, we analyze the primary secrecy rate and the SN's transmission rate, taking into account the spectrum sensing errors. Then, the SN's transmit power is optimally allocated for each state so that the average transmission rate of the CRSN is maximized under the constraint of the primary maximum permitted secrecy outage probability. In addition, the performance tradeoff between the transmission rate of the CRSN and the primary secrecy outage probability is investigated. Moreover, we analyze the primary secrecy rate for the asymptotic scenarios and derive the closed-form expression of the SN's transmission outage probability. Simulation results show that: (1) the SN's average throughput in the proposed strategy outperforms the conventional overlay strategy; (2) both the primary network and the CRSN benefit from the proposed strategy. PMID:27897988

  19. A Comprehensive Study of Gridding Methods for GPS Horizontal Velocity Fields

    NASA Astrophysics Data System (ADS)

    Wu, Yanqiang; Jiang, Zaisen; Liu, Xiaoxia; Wei, Wenxin; Zhu, Shuang; Zhang, Long; Zou, Zhenyu; Xiong, Xiaohui; Wang, Qixin; Du, Jiliang

    2017-03-01

    Four gridding methods for GPS velocities are compared in terms of their precision, applicability and robustness by analyzing simulated data with uncertainties from 0.0 to ±3.0 mm/a. When the input data are 1° × 1° grid sampled and the uncertainty of the additional error is greater than ±1.0 mm/a, the gridding results show that the least-squares collocation method is highly robust while the robustness of the Kriging method is low. In contrast, the spherical harmonics and the multi-surface function are moderately robust, and the regional singular values for the multi-surface function method and the edge effects for the spherical harmonics method become more significant with increasing uncertainty of the input data. When the input data (with additional errors of ±2.0 mm/a) are decimated by 50% from the 1° × 1° grid data and then erased in three 6° × 12° regions, the gridding results in these three regions indicate that the least-squares collocation and the spherical harmonics methods have good performances, while the multi-surface function and the Kriging methods may lead to singular values. The gridding techniques are also applied to GPS horizontal velocities with an average error of ±0.8 mm/a over the Chinese mainland and the surrounding areas, and the results show that the least-squares collocation method has the best performance, followed by the Kriging and multi-surface function methods. Furthermore, the edge effects of the spherical harmonics method are significantly affected by the sparseness and geometric distribution of the input data. In general, the least-squares collocation method is superior in terms of its robustness, edge effect, error distribution and stability, while the other methods have several positive features.

  20. Analysis of S-box in Image Encryption Using Root Mean Square Error Method

    NASA Astrophysics Data System (ADS)

    Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan

    2012-07-01

    The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component for creating confusion and randomness. The S-box is evolving, and many variants appear in the literature, including the advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence for distinguishing an S-box for an application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to the existing algebraic and statistical analyses already used for image encryption applications, we propose an application of the root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of root mean square error analysis in statistics has proven effective in determining the difference between original data and processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of the S-boxes.
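
    The RMSE measure itself is simple to compute on images. A minimal sketch, with a hypothetical random plaintext image and uniformly random bytes standing in for a well-encrypted image (no actual S-box cipher is implemented here):

```python
import numpy as np

def image_rmse(original, processed):
    """Root-mean-square error between two equally-sized grayscale images."""
    a = np.asarray(original, dtype=float)
    b = np.asarray(processed, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

rng = np.random.default_rng(4)

# Hypothetical 8-bit grayscale "plaintext" image.
img = rng.integers(0, 256, size=(64, 64))

# A strong cipher should make the ciphertext statistically unrelated to the
# plaintext; uniform random bytes stand in for the encrypted image here.
cipher = rng.integers(0, 256, size=(64, 64))

print(f"RMSE(plain, plain)  = {image_rmse(img, img):.1f}")
print(f"RMSE(plain, cipher) = {image_rmse(img, cipher):.1f}")
```

    A large RMSE between plaintext and ciphertext indicates that the encryption has substantially altered pixel values; identical images give an RMSE of zero.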

  1. An Error-Entropy Minimization Algorithm for Tracking Control of Nonlinear Stochastic Systems with Non-Gaussian Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yunlong; Wang, Aiping; Guo, Lei

    This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error is minimized. The performance of the error-entropy minimization criterion is compared with mean-square-error minimization in the simulation results.
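
    A common Parzen-window error-entropy estimator in this literature is the quadratic Renyi entropy (used here as a generic stand-in for the paper's exact criterion, which is not specified in the abstract). The sketch below shows two error samples with nearly equal MSE but clearly different entropies, which is exactly the distinction MSE alone cannot make:

```python
import numpy as np

def quadratic_renyi_entropy(errors, sigma=0.3):
    """Parzen-window estimate of the quadratic Renyi entropy,
    H2 = -log( (1/N^2) * sum_ij G(e_i - e_j; sqrt(2)*sigma) ),
    with a Gaussian kernel of width sigma."""
    e = np.asarray(errors, dtype=float)
    d = e[:, None] - e[None, :]
    var = 2.0 * sigma ** 2               # variance of the pairwise kernel
    kernel = np.exp(-d ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    return float(-np.log(np.mean(kernel)))

rng = np.random.default_rng(5)

# Two error samples with nearly identical mean square error: one Gaussian,
# one bimodal (two narrow peaks at +/- 1).
e_gauss = rng.normal(0.0, 1.0, 500)
e_bimodal = np.concatenate([rng.normal(-1.0, 0.1, 250),
                            rng.normal(1.0, 0.1, 250)])

print("MSE:    ", np.mean(e_gauss ** 2), np.mean(e_bimodal ** 2))
print("entropy:", quadratic_renyi_entropy(e_gauss),
      quadratic_renyi_entropy(e_bimodal))
```

    The bimodal errors are more concentrated (lower entropy) even though their MSE matches the Gaussian sample, illustrating why an entropy criterion carries information that a second moment does not.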

  2. Error-Based Design Space Windowing

    NASA Technical Reports Server (NTRS)

    Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman

    2002-01-01

    Windowing of design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) uses a region of interest by setting a requirement on response level and checks it by a global RS predictions over the design space. This approach, however, is vulnerable since RS modeling errors may lead to the wrong region to zoom on. The approach is modified by introducing an eigenvalue error measure based on point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.

  3. Credit Assignment in a Motor Decision Making Task Is Influenced by Agency and Not Sensory Prediction Errors.

    PubMed

    Parvin, Darius E; McDougle, Samuel D; Taylor, Jordan A; Ivry, Richard B

    2018-05-09

    Failures to obtain reward can occur from errors in action selection or action execution. Recently, we observed marked differences in choice behavior when the failure to obtain a reward was attributed to errors in action execution compared with errors in action selection (McDougle et al., 2016). Specifically, participants appeared to solve this credit assignment problem by discounting outcomes in which the absence of reward was attributed to errors in action execution. Building on recent evidence indicating relatively direct communication between the cerebellum and basal ganglia, we hypothesized that cerebellar-dependent sensory prediction errors (SPEs), a signal indicating execution failure, could attenuate value updating within a basal ganglia-dependent reinforcement learning system. Here we compared the SPE hypothesis to an alternative, "top-down" hypothesis in which changes in choice behavior reflect participants' sense of agency. In two experiments with male and female human participants, we manipulated the strength of SPEs, along with the participants' sense of agency in the second experiment. The results showed that, whereas the strength of SPE had no effect on choice behavior, participants were much more likely to discount the absence of rewards under conditions in which they believed the reward outcome depended on their ability to produce accurate movements. These results provide strong evidence that SPEs do not directly influence reinforcement learning. Instead, a participant's sense of agency appears to play a significant role in modulating choice behavior when unexpected outcomes can arise from errors in action execution. SIGNIFICANCE STATEMENT When learning from the outcome of actions, the brain faces a credit assignment problem: Failures of reward can be attributed to poor choice selection or poor action execution. Here, we test a specific hypothesis that execution errors are implicitly signaled by cerebellar-based sensory prediction errors. 
We evaluate this hypothesis and compare it with a more "top-down" hypothesis in which the modulation of choice behavior from execution errors reflects participants' sense of agency. We find that sensory prediction errors have no significant effect on reinforcement learning. Instead, instructions influencing participants' belief of causal outcomes appear to be the main factor influencing their choice behavior. Copyright © 2018 the authors 0270-6474/18/384521-10$15.00/0.

  4. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the stochastic character of modeling errors in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to describe the modeling errors completely on both time and space scales. On this basis, a novel wavelet neural network (WNN) modeling method is proposed that minimizes the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using a data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is used as the performance index to optimize the WNN model parameters by a gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all parameters are well optimized by the proposed method, the developed WNN model can eventually make the modeling error PDF track the target PDF. A simulation example and an application to a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability than conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method yields a more desirable modeling error PDF, approximating a narrow Gaussian distribution.
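    The core of the PDF-shaping criterion can be sketched in a few lines: estimate the modeling-error PDF with a Gaussian kernel density estimate, then score the quadratic deviation from a target PDF. This is a minimal univariate sketch (the paper shapes a 2D PDF over time and space); the bandwidth, grid, and error samples below are illustrative, not the paper's.

    ```python
    import numpy as np

    def kde_pdf(errors, grid, bandwidth=0.1):
        """Gaussian kernel density estimate of the modeling-error PDF on a grid."""
        u = (grid[:, None] - errors[None, :]) / bandwidth
        return np.exp(-0.5 * u**2).sum(axis=1) / (len(errors) * bandwidth * np.sqrt(2 * np.pi))

    def pdf_shaping_loss(errors, grid, target_pdf, bandwidth=0.1):
        """Quadratic deviation between the estimated error PDF and a target PDF."""
        est = kde_pdf(errors, grid, bandwidth)
        return np.trapz((est - target_pdf)**2, grid)

    # Target: a high, narrow zero-mean Gaussian error distribution.
    grid = np.linspace(-1, 1, 401)
    target = np.exp(-0.5 * (grid / 0.05)**2) / (0.05 * np.sqrt(2 * np.pi))

    rng = np.random.default_rng(0)
    tight = rng.normal(0.0, 0.05, 500)   # errors already close to the target
    loose = rng.normal(0.0, 0.30, 500)   # broader error distribution
    ```

    A model whose residuals concentrate near zero scores a lower shaping loss, which is the quantity the gradient descent drives down.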

  7. Systematic changes in position sense accompany normal aging across adulthood.

    PubMed

    Herter, Troy M; Scott, Stephen H; Dukelow, Sean P

    2014-03-25

    Development of clinical neurological assessments aimed at separating normal from abnormal capabilities requires a comprehensive understanding of how basic neurological functions change (or do not change) with increasing age across adulthood. In the case of proprioception, the research literature has failed to conclusively determine whether or not position sense in the upper limb deteriorates in elderly individuals. The present study was conducted (a) to quantify whether upper limb position sense deteriorates with increasing age, and (b) to generate a set of normative data that can be used for future comparisons with clinical populations. We examined position sense in 209 healthy males and females between the ages of 18 and 90 using a robotic arm position-matching task that is both objective and reliable. In this task, the robot moved one arm to one of nine positions and subjects attempted to mirror-match that position with the opposite limb. Measures of position sense were recorded by the robotic apparatus in hand- and joint-based coordinates, and linear regressions were used to quantify age-related changes and percentile boundaries of normal behaviour. For clinical comparisons, we also examined influences of sex (male versus female) and test-hand (dominant versus non-dominant) on all measures of position sense. Analyses of hand-based parameters identified several measures of position sense (Variability, Shift, Spatial Contraction, Absolute Error) with significant effects of age, sex, and test-hand. Joint-based parameters at the shoulder (Absolute Error) and elbow (Variability, Shift, Absolute Error) also exhibited significant effects of age and test-hand. The present study provides strong evidence that several measures of upper extremity position sense decline with age. Furthermore, these data provide a basis for determining whether changes in position sense are related to normal aging or, alternatively, to pathology.
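    The normative-modelling step described above — a linear regression of a position-sense measure on age, with percentile boundaries marking "normal" performance — can be sketched as follows. The data here are synthetic stand-ins, not the study's measurements, and the 5th-95th percentile band assumes roughly Gaussian residuals.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-in: absolute matching error (degrees) drifting up with age.
    age = rng.uniform(18, 90, 209)
    abs_error = 2.0 + 0.03 * age + rng.normal(0.0, 0.8, 209)

    # Linear regression of the error measure on age (np.polyfit returns
    # coefficients highest degree first).
    slope, intercept = np.polyfit(age, abs_error, 1)
    pred = intercept + slope * age
    resid_sd = np.std(abs_error - pred, ddof=2)

    def normal_bounds(a, z=1.645):
        """Approximate 5th-95th percentile boundaries of normal behaviour at age a."""
        centre = intercept + slope * a
        return centre - z * resid_sd, centre + z * resid_sd
    ```

    A clinical score falling outside the band returned by `normal_bounds` would then be flagged as outside the normative range for that age.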

  8. Error in telemetry studies: Effects of animal movement on triangulation

    USGS Publications Warehouse

    Schmutz, Joel A.; White, Gary C.

    1990-01-01

    We used Monte Carlo simulations to investigate the effects of animal movement on error of estimated animal locations derived from radio-telemetry triangulation of sequentially obtained bearings. Simulated movements of 0-534 m resulted in up to 10-fold increases in average location error but <10% decreases in location precision when observer-to-animal distances were <1,000 m. Location error and precision were minimally affected by censorship of poor locations with Chi-square goodness-of-fit tests. Location error caused by animal movement can only be eliminated by taking simultaneous bearings.
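    The effect described — animal movement between sequentially obtained bearings inflating triangulated location error — can be reproduced with a small Monte Carlo sketch. The observer geometry, bearing noise, and movement distances below are hypothetical choices, not those of the original study.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def triangulate(p1, b1, p2, b2):
        """Intersect two bearing lines (observer position, bearing clockwise from north)."""
        d1 = np.array([np.sin(b1), np.cos(b1)])
        d2 = np.array([np.sin(b2), np.cos(b2)])
        A = np.column_stack([d1, -d2])
        t = np.linalg.solve(A, p2 - p1)
        return p1 + t[0] * d1

    def location_error(move_dist, n=2000, bearing_sd=np.radians(2.0)):
        """Mean location error when the animal moves between the two bearings."""
        errs = []
        obs1, obs2 = np.array([0.0, 0.0]), np.array([800.0, 0.0])
        for _ in range(n):
            start = np.array([400.0, 600.0])
            theta = rng.uniform(0, 2 * np.pi)
            end = start + move_dist * np.array([np.cos(theta), np.sin(theta)])
            b1 = np.arctan2(start[0] - obs1[0], start[1] - obs1[1]) + rng.normal(0, bearing_sd)
            b2 = np.arctan2(end[0] - obs2[0], end[1] - obs2[1]) + rng.normal(0, bearing_sd)
            est = triangulate(obs1, b1, obs2, b2)
            errs.append(np.linalg.norm(est - start))
        return float(np.mean(errs))
    ```

    Comparing `location_error(500.0)` with `location_error(0.0)` reproduces the qualitative finding: movement between bearings inflates the average error well beyond what bearing noise alone produces, and only simultaneous bearings (zero movement) remove it.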

  9. Sensing Strategies for Disambiguating among Multiple Objects in Known Poses.

    DTIC Science & Technology

    1985-08-01

    Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139. A.I. Memo 855, August 1985: Sensing Strategies for Disambiguating among Multiple Objects in Known Poses.

  10. Diffuse-flow conceptualization and simulation of the Edwards aquifer, San Antonio region, Texas

    USGS Publications Warehouse

    Lindgren, R.J.

    2006-01-01

    A numerical ground-water-flow model (hereinafter, the conduit-flow Edwards aquifer model) of the karstic Edwards aquifer in south-central Texas was developed for a previous study on the basis of a conceptualization emphasizing conduit development and conduit flow, and included simulating conduits as one-cell-wide, continuously connected features. Uncertainties regarding the degree to which conduits pervade the Edwards aquifer and influence ground-water flow, as well as other uncertainties inherent in simulating conduits, raised the question of whether a model based on the conduit-flow conceptualization was the optimum model for the Edwards aquifer. Accordingly, a model with an alternative hydraulic conductivity distribution without conduits was developed in a study conducted during 2004-05 by the U.S. Geological Survey, in cooperation with the San Antonio Water System. The hydraulic conductivity distribution for the modified Edwards aquifer model (hereinafter, the diffuse-flow Edwards aquifer model), based primarily on a conceptualization in which flow in the aquifer predominantly is through a network of numerous small fractures and openings, includes 38 zones, with hydraulic conductivities ranging from 3 to 50,000 feet per day. Revision of model input data for the diffuse-flow Edwards aquifer model was limited to changes in the simulated hydraulic conductivity distribution. The root-mean-square error for 144 target wells for the calibrated steady-state simulation for the diffuse-flow Edwards aquifer model is 20.9 feet. This error represents about 3 percent of the total head difference across the model area. The simulated springflows for Comal and San Marcos Springs for the calibrated steady-state simulation were within 2.4 and 15 percent of the median springflows for the two springs, respectively. 
The transient calibration period for the diffuse-flow Edwards aquifer model was 1947-2000, with 648 monthly stress periods, the same as for the conduit-flow Edwards aquifer model. The root-mean-square error for a period of drought (May-November 1956) for the calibrated transient simulation for 171 target wells is 33.4 feet, which represents about 5 percent of the total head difference across the model area. The root-mean-square error for a period of above-normal rainfall (November 1974-July 1975) for the calibrated transient simulation for 169 target wells is 25.8 feet, which represents about 4 percent of the total head difference across the model area. The root-mean-square error ranged from 6.3 to 30.4 feet in 12 target wells with long-term water-level measurements for varying periods during 1947-2000 for the calibrated transient simulation for the diffuse-flow Edwards aquifer model, and these errors represent 5.0 to 31.3 percent of the range in water-level fluctuations of each of those wells. The root-mean-square errors for the five major springs in the San Antonio segment of the aquifer for the calibrated transient simulation, as a percentage of the range of discharge fluctuations measured at the springs, varied from 7.2 percent for San Marcos Springs and 8.1 percent for Comal Springs to 28.8 percent for Leona Springs. The root-mean-square errors for hydraulic heads for the conduit-flow Edwards aquifer model are 27, 76, and 30 percent greater than those for the diffuse-flow Edwards aquifer model for the steady-state, drought, and above-normal rainfall synoptic time periods, respectively. The goodness-of-fit between measured and simulated springflows is similar for Comal, San Marcos, and Leona Springs for the diffuse-flow Edwards aquifer model and the conduit-flow Edwards aquifer model. 
The root-mean-square errors for Comal and Leona Springs were 15.6 and 21.3 percent less, respectively, whereas the root-mean-square error for San Marcos Springs was 3.3 percent greater for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. The root-mean-square errors for San Antonio and San Pedro Springs were appreciably greater, 80.2 and 51.0 percent, respectively, for the diffuse-flow Edwards aquifer model. The simulated water budgets for the diffuse-flow Edwards aquifer model are similar to those for the conduit-flow Edwards aquifer model. Differences in percentage of total sources or discharges for a budget component are 2.0 percent or less for all budget components for the steady-state and transient simulations. The largest difference in terms of the magnitude of water budget components for the transient simulation for 1956 was a decrease of about 10,730 acre-feet per year (about 2 percent) in springflow for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. This decrease in springflow (a water budget discharge) was largely offset by the decreased net loss of water from storage (a water budget source) of about 10,500 acre-feet per year.
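    The report's recurring statistic — root-mean-square error of simulated heads, expressed as a percentage of the total head difference across the model area — is straightforward to compute. The well values below are hypothetical, not the model's.

    ```python
    import numpy as np

    def rmse(simulated, measured):
        """Root-mean-square error between simulated and measured heads."""
        d = np.asarray(simulated) - np.asarray(measured)
        return float(np.sqrt(np.mean(d**2)))

    # Hypothetical heads (feet) at a handful of target wells.
    measured = np.array([650.0, 700.0, 720.0, 810.0, 900.0, 1250.0])
    simulated = np.array([655.0, 690.0, 735.0, 800.0, 915.0, 1240.0])

    err = rmse(simulated, measured)
    # Error as a percentage of the total head difference across the model area.
    pct_of_range = 100.0 * err / (measured.max() - measured.min())
    ```

    Normalizing by the head range is what lets the report compare calibration quality across steady-state, drought, and wet-period simulations with different absolute head spreads.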

  11. Actualities and Development of Heavy-Duty CNC Machine Tool Thermal Error Monitoring Technology

    NASA Astrophysics Data System (ADS)

    Zhou, Zu-De; Gui, Lin; Tan, Yue-Gang; Liu, Ming-Yao; Liu, Yi; Li, Rui-Ya

    2017-09-01

    Thermal error monitoring technology is the key technological support for solving the thermal error problem of heavy-duty CNC (computer numerical control) machine tools. Many review articles introduce thermal error research on CNC machine tools, but they mainly focus on thermal issues in small and medium-sized machines and seldom cover thermal error monitoring technologies. This paper gives an overview of research on the thermal error of CNC machine tools and emphasizes the study of thermal error in heavy-duty CNC machine tools in three areas: the causes of thermal error in heavy-duty CNC machine tools, temperature monitoring technology, and thermal deformation monitoring technology. A new optical measurement technology for heavy-duty CNC machine tools, fiber Bragg grating (FBG) distributed sensing, is introduced in detail; it forms an intelligent sensing and monitoring system for heavy-duty CNC machine tools. This paper fills a gap in the review literature, offering guidance for this industry field and opening up new areas of research on heavy-duty CNC machine tool thermal error.

  12. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2015-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
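    The exhaustive-search sensor selection for performance estimation can be sketched as below for a maximum a posteriori estimator of a linear model: the selection metric is the theoretical sum of squared estimation errors, i.e. the trace of the posterior error covariance, minimized over candidate sensor subsets. The influence matrix, prior, and noise variances here are random stand-ins, not an engine model.

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(3)

    # Hypothetical linear model y = H x + v relating sensor readings to health
    # parameters x, with prior covariance P0 and per-sensor noise variances r.
    n_params, n_sensors = 3, 6
    H = rng.normal(size=(n_sensors, n_params))
    P0 = np.eye(n_params)
    r = rng.uniform(0.05, 0.5, n_sensors)

    def map_error(subset):
        """Theoretical sum of squared estimation errors (trace of the posterior
        covariance) for a MAP estimator using only the chosen sensors."""
        Hs = H[list(subset)]
        Rs_inv = np.diag(1.0 / r[list(subset)])
        post_cov = np.linalg.inv(np.linalg.inv(P0) + Hs.T @ Rs_inv @ Hs)
        return float(np.trace(post_cov))

    # Exhaustive search over all 3-sensor suites.
    best = min(combinations(range(n_sensors), 3), key=map_error)
    ```

    Adding sensors can only shrink the posterior covariance, so the full suite always scores at least as well as `best`; the interesting trade-off the paper studies is how close a small suite gets.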

  14. Sensitivity of geographic information system outputs to errors in remotely sensed data

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.; Boyd, R. K.; Gunther, F. J.; Lu, Y. C.

    1981-01-01

    The sensitivity of the outputs of a geographic information system (GIS) to errors in inputs derived from remotely sensed data (RSD) is investigated using a suitability model with per-cell decisions and a gridded geographic data base whose cells are larger than the RSD pixels. The process of preparing RSD as input to a GIS is analyzed, and the errors associated with classification and registration are examined. In the case of the model considered, it is found that the errors caused during classification and registration are partially compensated by the aggregation of pixels. The compensation is quantified by means of an analytical model, a Monte Carlo simulation, and experiments with Landsat data. The results show that error reductions of the order of 50% occur because of aggregation when 25 pixels of RSD are used per cell in the geographic data base.
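    The compensation-by-aggregation effect can be illustrated with a simple Monte Carlo sketch: if each pixel is classified correctly with some probability and each GIS cell takes the majority label of its 25 pixels, the per-cell error rate falls far below the per-pixel rate. This binary majority-vote model is a simplification of the paper's analytical model, and all probabilities are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def cell_error_rate(p_correct, pixels_per_cell=25, n_cells=20000):
        """Fraction of cells whose majority label is wrong, given per-pixel accuracy."""
        correct = rng.random((n_cells, pixels_per_cell)) < p_correct
        majority_right = correct.sum(axis=1) > pixels_per_cell // 2
        return 1.0 - majority_right.mean()

    pixel_error = 0.15
    cell_error = cell_error_rate(1.0 - pixel_error)
    ```

    With a 15% per-pixel error rate, a wrong cell label requires at least 13 of 25 pixels to be misclassified simultaneously, so aggregation absorbs most of the classification and registration error, in line with the ~50% reductions reported.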

  15. The influence of action-outcome delay and arousal on sense of agency and the intentional binding effect.

    PubMed

    Wen, Wen; Yamashita, Atsushi; Asama, Hajime

    2015-11-01

    The sense of agency refers to the feeling of being able to initiate and control events through one's actions. The "intentional binding" effect (Haggard, Clark, & Kalogeras, 2002) refers to a subjective compression of the temporal interval between actions and their effects. The present study examined the influence of action-outcome delays and arousal on both the subjective judgment of agency and the intentional binding effect. In the experiment, participants pressed a key to trigger a central square to jump after various delays. A red central square was used in the high-arousal condition. Results showed that a longer interval between actions and their effects was associated with a lower sense of agency but a stronger intentional binding effect. Furthermore, although arousal enhanced the intentional binding effect, it did not influence the judgment of agency. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Cognitive Radios Exploiting Gray Spaces via Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Wieruch, Dennis; Jung, Peter; Wirth, Thomas; Dekorsy, Armin; Haustein, Thomas

    2016-07-01

    We suggest an interweave cognitive radio system with a gray space detector that identifies the small fraction of unused resources within an active band of a primary-user system such as 3GPP LTE. The gray space detector can thereby cope with frequency fading holes and distinguish them from inactive resources. Different approaches to the gray space detector are investigated: the conventional reduced-rank least squares method as well as the compressed sensing-based orthogonal matching pursuit and basis pursuit denoising algorithms. In addition, the gray space detector is compared with the classical energy detector. Simulation results present the receiver operating characteristic at several SNRs and the detection performance over further aspects, such as base station system load, at practical false alarm rates. The results show that, especially at practical false alarm rates, the compressed sensing algorithms are more suitable than the classical energy detector and the reduced-rank least squares approach.
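    Of the detectors compared, orthogonal matching pursuit is the simplest to sketch: greedily pick the dictionary column most correlated with the residual, re-fit the selected columns by least squares, and repeat. The measurement matrix and sparse occupancy vector below are generic illustrations, not the LTE-specific model of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def omp(A, y, n_nonzero):
        """Orthogonal matching pursuit: greedy sparse recovery of x from y = A x."""
        residual, support, coef = y.copy(), [], np.zeros(0)
        for _ in range(n_nonzero):
            # Column most correlated with the current residual.
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            # Least-squares re-fit on the selected support, then update the residual.
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    # A few active "gray space" resources out of 64, seen through 32 projections.
    A = rng.normal(size=(32, 64)) / np.sqrt(32)
    x_true = np.zeros(64)
    x_true[[3, 17, 40]] = [1.0, -0.8, 0.6]
    y = A @ x_true
    x_hat = omp(A, y, n_nonzero=3)
    ```

    Each iteration projects out the contribution of the chosen columns, so the residual norm is non-increasing and the returned estimate has at most `n_nonzero` active entries.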

  17. Position sense at the human elbow joint measured by arm matching or pointing.

    PubMed

    Tsay, Anthony; Allen, Trevor J; Proske, Uwe

    2016-10-01

    Position sense at the human elbow joint has traditionally been measured in blindfolded subjects using a forearm matching task. Here we compare position errors in a matching task with errors generated when the subject uses a pointer to indicate the position of a hidden arm. Evidence from muscle vibration during forearm matching supports a role for muscle spindles in position sense. We have recently shown, using vibration as well as muscle conditioning, which takes advantage of muscle's thixotropic property, that position errors generated in a forearm pointing task were not consistent with a role for muscle spindles. In the present study we used a form of muscle conditioning, in which elbow muscles are co-contracted at the test angle, to further explore differences in position sense measured by matching and pointing. For fourteen subjects, in a matching task where the reference arm had elbow flexor and extensor muscles contracted at the test angle and the indicator arm had its flexors conditioned at 90°, matching errors lay in the direction of flexion by 6.2°. After the same conditioning of the reference arm and extension conditioning of the indicator at 0°, matching errors lay in the direction of extension (5.7°). These errors were consistent with predictions based on a role for muscle spindles in determining forearm matching outcomes. In the pointing task subjects moved a pointer to align it with the perceived position of the hidden arm. After conditioning of the reference arm as before, pointing errors all lay in a more extended direction than the actual position of the arm, by 2.9°-7.3°, a distribution not consistent with a role for muscle spindles. We propose that, in pointing, muscle spindles do not play the major role in signalling limb position that they do in matching, and that other sources of sensory input should be given consideration, including afferents from skin and joints.

  18. Investigation of adaptive filtering and MDL mitigation based on space-time block-coding for spatial division multiplexed coherent receivers

    NASA Astrophysics Data System (ADS)

    Weng, Yi; He, Xuan; Yao, Wang; Pacheco, Michelle C.; Wang, Junyi; Pan, Zhongqi

    2017-07-01

    In this paper, we explore the performance of a space-time block-coding (STBC) assisted multiple-input multiple-output (MIMO) scheme for modal dispersion and mode-dependent loss (MDL) mitigation in spatial-division multiplexed optical communication systems, in which the weight matrices for frequency-domain equalization (FDE) are updated heuristically using a decision-directed recursive least squares (RLS) algorithm for convergence and channel estimation. The proposed STBC-RLS algorithm achieves a 43.6% improvement in convergence rate over conventional least mean squares (LMS) for quadrature phase-shift keying (QPSK) signals with merely a 16.2% increase in hardware complexity. The overall optical signal-to-noise ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16-quadrature amplitude modulation (QAM), and 64-QAM at the respective bit-error rates (BER) with minimum-mean-square-error (MMSE) equalization.

  19. Quantitative determination of additive Chlorantraniliprole in Abamectin preparation: Investigation of bootstrapping soft shrinkage approach by mid-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Yan, Hong; Song, Xiangzhong; Tian, Kuangda; Chen, Yilin; Xiong, Yanmei; Min, Shungeng

    2018-02-01

    A novel method, mid-infrared (MIR) spectroscopy, which enables the determination of Chlorantraniliprole in Abamectin within minutes, is proposed. We further evaluate the prediction ability of four wavelength selection methods: the bootstrapping soft shrinkage approach (BOSS), Monte Carlo uninformative variable elimination (MCUVE), genetic algorithm partial least squares (GA-PLS), and competitive adaptive reweighted sampling (CARS). The results showed that the BOSS method obtained the lowest root mean squared error of cross validation (RMSECV) (0.0245) and root mean squared error of prediction (RMSEP) (0.0271), as well as the highest coefficient of determination of cross-validation (Qcv2) (0.9998) and coefficient of determination of the test set (Q2test) (0.9989), which demonstrates that mid-infrared spectroscopy can be used to detect Chlorantraniliprole in Abamectin conveniently. Meanwhile, a suitable wavelength selection method (BOSS) is essential to conducting a component spectral analysis.
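    The figures of merit quoted above (root mean squared error of prediction and the coefficient of determination) are computed as follows; the reference concentrations and predictions below are hypothetical stand-ins for a test set.

    ```python
    import numpy as np

    def rmse(y_true, y_pred):
        """Root mean squared error between reference and predicted values."""
        return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred))**2)))

    def q2(y_true, y_pred):
        """Coefficient of determination used to score calibration/prediction sets."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        ss_res = np.sum((y_true - y_pred)**2)
        ss_tot = np.sum((y_true - y_true.mean())**2)
        return float(1.0 - ss_res / ss_tot)

    # Hypothetical reference concentrations vs. PLS predictions on a test set.
    y_ref = np.array([0.10, 0.20, 0.30, 0.40, 0.50])
    y_hat = np.array([0.11, 0.19, 0.31, 0.39, 0.52])
    rmsep = rmse(y_ref, y_hat)
    q2_test = q2(y_ref, y_hat)
    ```

    RMSECV is the same RMSE computed on cross-validation predictions rather than an independent test set, which is why wavelength selection methods are ranked on both.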

  20. RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond that possible with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) scheme based on the MMSE criterion. Using MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using a one-tap recursive least squares (RLS) algorithm, where the forgetting factor is adapted to the changing channel condition by the least mean square (LMS) algorithm, for DS-CDMA with FDE. We evaluate the BER performance using RLS-CE with an adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.
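    A one-tap RLS channel estimate with exponential forgetting can be sketched as below. For brevity the forgetting factor is fixed here, whereas the proposed scheme additionally adapts it with an LMS rule; the channel model, pilot sequence, and parameters are all illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def rls_one_tap(pilots, received, lam=0.9):
        """One-tap RLS channel estimate with forgetting factor lam.
        (The paper adapts lam by LMS; a fixed lam is used in this sketch.)"""
        num = 0.0 + 0.0j
        den = 1e-12
        estimates = []
        for p, y in zip(pilots, received):
            num = lam * num + np.conj(p) * y   # exponentially weighted cross term
            den = lam * den + abs(p)**2        # exponentially weighted pilot energy
            estimates.append(num / den)
        return np.array(estimates)

    # Hypothetical slowly fading channel observed through noisy pilot chips.
    n = 200
    h_true = np.exp(1j * 0.01 * np.arange(n))      # slowly rotating channel gain
    pilots = rng.choice([1, -1, 1j, -1j], size=n)  # QPSK pilot chips
    noise = 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
    received = h_true * pilots + noise

    h_hat = rls_one_tap(pilots, received)
    final_err = abs(h_hat[-1] - h_true[-1])
    ```

    The forgetting factor trades noise averaging against tracking lag, which is exactly why adapting it to the fading rate (as the paper does with LMS) pays off in fast Rayleigh fading.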

  1. Accuracy of least-squares methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bochev, Pavel B.; Gunzburger, Max D.

    1993-01-01

    Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is that the resulting discrete equations can be solved with algorithms that require only symmetric, positive definite systems of algebraic equations. On the other hand, it is well documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. Despite the failure of standard techniques for deriving error estimates, the computational evidence suggests that these methods are, at the least, nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.

  2. Nondestructive quantification of the soluble-solids content and the available acidity of apples by Fourier-transform near-infrared spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ying Yibin; Liu Yande; Tao Yang

    2005-09-01

    This research evaluated the feasibility of using Fourier-transform near-infrared (FT-NIR) spectroscopy to quantify the soluble-solids content (SSC) and the available acidity (VA) in intact apples. Partial least-squares calibration models obtained from several preprocessing techniques (smoothing, derivative, etc.) in several wave-number ranges were compared. The best models achieved a high coefficient of determination (r²) of 0.940 for the SSC and a moderate r² of 0.801 for the VA, with root-mean-square errors of prediction of 0.272% and 0.053%, and root-mean-square errors of calibration of 0.261% and 0.046%, respectively. The results indicate that FT-NIR spectroscopy yields good predictions of the SSC and also show the feasibility of using it to predict the VA of apples.

  3. Validation of Satellite Precipitation (trmm 3B43) in Ecuadorian Coastal Plains, Andean Highlands and Amazonian Rainforest

    NASA Astrophysics Data System (ADS)

    Ballari, D.; Castro, E.; Campozano, L.

    2016-06-01

    Precipitation monitoring is of utmost importance for water resource management. However, in regions of complex terrain such as Ecuador, the high spatio-temporal precipitation variability and the scarcity of rain gauges make it difficult to obtain accurate estimations of precipitation. Remotely sensed precipitation estimates, such as the Multi-satellite Precipitation Analysis TRMM, can cope with this problem after a validation process, which must be representative in space and time. In this work we validate monthly estimates from TRMM 3B43 satellite precipitation (0.25° x 0.25° resolution) using ground data from 14 rain gauges in Ecuador. The stations are located in the 3 most differentiated regions of the country: the Pacific coastal plains, the Andean highlands, and the Amazon rainforest. Time series, between 1998 and 2010, of imagery and rain gauges were compared using statistical error metrics such as bias, root mean square error, and Pearson correlation, and with detection indexes such as probability of detection, equitable threat score, false alarm rate, and frequency bias index. The results showed that precipitation seasonality is well represented and that TRMM 3B43 acceptably estimates the monthly precipitation in the three regions of the country. According to both the statistical error metrics and the detection indexes, the coastal and Amazon regions are estimated better than the Andean highlands. Additionally, estimates were better for light precipitation rates. The present validation of TRMM 3B43 provides important results to support further studies on calibration and bias correction of precipitation in ungauged watershed basins.
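    The error metrics and detection indexes used in this kind of satellite-gauge validation can be computed as below; the rainfall values and the 1 mm wet/dry threshold are illustrative, not those of the study.

    ```python
    import numpy as np

    def validation_stats(sat, gauge, wet_threshold=1.0):
        """Error metrics and detection indexes for satellite vs. gauge rainfall."""
        sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
        bias = float(np.mean(sat - gauge))
        rmse = float(np.sqrt(np.mean((sat - gauge)**2)))
        r = float(np.corrcoef(sat, gauge)[0, 1])
        sat_wet, gauge_wet = sat >= wet_threshold, gauge >= wet_threshold
        hits = np.sum(sat_wet & gauge_wet)
        misses = np.sum(~sat_wet & gauge_wet)
        false_alarms = np.sum(sat_wet & ~gauge_wet)
        pod = float(hits / (hits + misses))                # probability of detection
        far = float(false_alarms / (hits + false_alarms))  # false alarm ratio
        return bias, rmse, r, pod, far

    # Hypothetical monthly totals (mm) at matched satellite pixels and gauges.
    sat = [0.0, 2.0, 5.0, 0.5, 10.0, 3.0]
    gauge = [0.0, 1.5, 6.0, 2.0, 9.0, 0.2]
    bias, rmse, r, pod, far = validation_stats(sat, gauge)
    ```

    The continuous metrics score how closely the amounts agree, while the categorical indexes score whether the satellite detects rain events at all, which is why both families are reported.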

  4. Retrieving the Polar Mixed-Phase Cloud Liquid Water Path by Combining CALIOP and IIR Measurements

    NASA Astrophysics Data System (ADS)

    Luo, Tao; Wang, Zhien; Li, Xuebin; Deng, Shumei; Huang, Yong; Wang, Yingjian

    2018-02-01

    Mixed-phase cloud (MC) is the dominant cloud type over the polar regions, which present challenging conditions for remote sensing and in situ measurements. In this study, a new methodology for retrieving the stratiform MC liquid water path (LWP) by combining Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and infrared imaging radiometer (IIR) measurements was developed and evaluated. The new methodology takes advantage of reliable cloud-phase discrimination from combined lidar and radar measurements. An improved multiple-scattering correction method for lidar signals was implemented to provide reliable cloud extinction near cloud top. Then, under the adiabatic cloud assumption, the MC LWP can be retrieved by a lookup-table-based method. Simulations with error-free inputs showed that the mean bias and root mean squared error of the LWP derived from the new method are -0.23 ± 2.63 g/m2, with a mean absolute relative error of 4%. Simulations with erroneous inputs suggested that the new methodology can provide LWP retrievals reliable enough to support statistical or climatological analyses. Two months of A-train satellite retrievals over the Arctic showed that the new method produces a cloud top temperature (CTT) dependence of LWP very similar to that of ground-based microwave radiometer measurements, with a bias of -0.78 g/m2 and a correlation coefficient of 0.95 between the two mean CTT-LWP relationships. The new approach also produces a reasonable spatial pattern and magnitude of LWP over the Arctic region.

  5. Unmanned aircraft systems image collection and computer vision image processing for surveying and mapping that meets professional needs

    NASA Astrophysics Data System (ADS)

    Peterson, James Preston, II

    Unmanned Aerial Systems (UAS) are rapidly blurring the lines between traditional and close range photogrammetry, and between surveying and photogrammetry. UAS are providing an economic platform for performing aerial surveying on small projects. The focus of this research was to describe traditional photogrammetric imagery and Light Detection and Ranging (LiDAR) geospatial products, describe close range photogrammetry (CRP), introduce UAS and computer vision (CV), and investigate whether industry mapping standards for accuracy can be met using UAS collection and CV processing. A 120-acre site was selected and 97 aerial targets were surveyed for evaluation purposes. Four UAS flights of varying heights above ground level (AGL) were executed, and three different target patterns of varying distances between targets were analyzed for compliance with American Society for Photogrammetry and Remote Sensing (ASPRS) and National Standard for Spatial Data Accuracy (NSSDA) mapping standards. This analysis resulted in twelve datasets. Error patterns were evaluated and reasons for these errors were determined. The relationship between the AGL, ground sample distance, target spacing and the root mean square error of the targets is exploited by this research to develop guidelines that use the ASPRS and NSSDA map standard as the template. These guidelines allow the user to select the desired mapping accuracy and determine what target spacing and AGL is required to produce the desired accuracy. These guidelines also address how UAS/CV phenomena affect map accuracy. General guidelines and recommendations are presented that give the user helpful information for planning a UAS flight using CV technology.
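
    The link between checkpoint RMSE and the NSSDA reporting standard used in this research can be sketched as follows (a minimal sketch using the published NSSDA 95%-confidence multipliers; the checkpoint error arrays are illustrative):

```python
import numpy as np

def nssda_horizontal_accuracy(dx, dy):
    """NSSDA horizontal accuracy at 95% confidence from checkpoint errors.

    dx, dy: arrays of (mapped - surveyed) coordinate differences at the
    check points. Uses the NSSDA approximation for the common case
    RMSE_x ~= RMSE_y: Accuracy_r = 1.7308 * RMSE_r.
    """
    dx, dy = np.asarray(dx, float), np.asarray(dy, float)
    rmse_x = np.sqrt(np.mean(dx ** 2))
    rmse_y = np.sqrt(np.mean(dy ** 2))
    rmse_r = np.sqrt(rmse_x ** 2 + rmse_y ** 2)  # radial RMSE
    return 1.7308 * rmse_r

def nssda_vertical_accuracy(dz):
    """NSSDA vertical accuracy at 95% confidence: 1.9600 * RMSE_z."""
    dz = np.asarray(dz, float)
    return 1.9600 * np.sqrt(np.mean(dz ** 2))
```

    Guidelines of the kind developed here would then map a desired accuracy value back to the AGL and target spacing that keep checkpoint RMSE below the corresponding limit.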

  6. Combating speckle in SAR images - Vector filtering and sequential classification based on a multiplicative noise model

    NASA Technical Reports Server (NTRS)

    Lin, Qian; Allebach, Jan P.

    1990-01-01

    An adaptive vector linear minimum mean-squared error (LMMSE) filter for multichannel images with multiplicative noise is presented. It is shown theoretically that the mean-squared error in the filter output is reduced by making use of the correlation between image bands. The vector and conventional scalar LMMSE filters are applied to a three-band SIR-B SAR image, and their performance is compared. Based on a multiplicative noise model, the per-pel maximum likelihood classifier is derived. The authors extend this to the design of sequential and robust classifiers. These classifiers are also applied to the three-band SIR-B SAR image.
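
    The scalar LMMSE baseline referred to above can be illustrated with a Lee-style filter for multiplicative speckle (a simplified single-band sketch, not the paper's vector filter; the 7x7 window size is an assumption):

```python
import numpy as np

def box_mean(img, size):
    """Local mean over a size x size window (reflect padding)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    out = np.zeros((h, w), float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (size * size)

def scalar_lmmse_filter(img, noise_var, size=7):
    """Scalar LMMSE (Lee-type) filter for multiplicative noise.

    Model: y = x * v with E[v] = 1 and Var[v] = noise_var.
    Estimate: x_hat = mean + gain * (y - mean), gain = Var[x] / Var[y].
    """
    img = np.asarray(img, float)
    mean = box_mean(img, size)
    var = np.maximum(box_mean(img ** 2, size) - mean ** 2, 0.0)
    # Estimated variance of the noise-free signal, clipped at zero.
    signal_var = np.maximum(var - (mean ** 2) * noise_var, 0.0)
    gain = np.where(var > 0.0, signal_var / np.maximum(var, 1e-12), 0.0)
    return mean + gain * (img - mean)
```

    The vector filter in the paper generalizes the scalar gain to a matrix acting across bands, which is where the inter-band correlation enters.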

  7. An empirical Bayes approach for the Poisson life distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1973-01-01

    A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.

  8. Reliable and accurate extraction of Hamaker constants from surface force measurements.

    PubMed

    Miklavcic, S J

    2018-08-15

    A simple and accurate closed-form expression for the Hamaker constant that best represents experimental surface force data is presented. Numerical comparisons are made with the current standard least squares approach, which falsely assumes error-free separation measurements, and with a nonlinear version that assumes independent measurements of force and separation are subject to error. The comparisons demonstrate that not only is the proposed formula easily implemented, it is also considerably more accurate. This option is appropriate for any value of the Hamaker constant, high or low, and certainly for any interacting system exhibiting an inverse-square distance-dependent van der Waals force. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. [Fractional vegetation cover of invasive Spartina alterniflora in coastal wetland using unmanned aerial vehicle (UAV)remote sensing].

    PubMed

    Zhou, Zai Ming; Yang, Yan Ming; Chen, Ben Qing

    2016-12-01

    The effective management and utilization of the resources and ecological environment of coastal wetlands require high-precision investigation and analysis of the fractional vegetation cover of the invasive species Spartina alterniflora. In this study, Sansha Bay was selected as the experimental region, and visible and multi-spectral images obtained by low-altitude UAV in the region were used to monitor the fractional vegetation cover of S. alterniflora. Fractional vegetation cover parameters in the multi-spectral images were then estimated by an NDVI index model, and the accuracy was tested against the visible images as references. Results showed that vegetation cover of S. alterniflora in the image area was mainly at medium-high (40%-60%) and high (60%-80%) levels. The root mean square error (RMSE) between the NDVI model estimates and the true values was 0.06, while the coefficient of determination R² was 0.92, indicating good consistency between the estimated and true values.
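
    NDVI-based fractional vegetation cover of this kind is commonly estimated with the dimidiate pixel model; the sketch below is a minimal illustration (the bare-soil and full-vegetation NDVI endmembers are user-supplied assumptions, and the paper's exact model form may differ):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def fractional_cover(ndvi_img, ndvi_soil, ndvi_veg):
    """Dimidiate pixel model: each pixel is a linear mix of bare soil
    and full vegetation.

    ndvi_soil, ndvi_veg: NDVI of pure bare-soil and pure-vegetation
    pixels (often taken from image histogram percentiles; here they are
    user-supplied for illustration).
    """
    fvc = (np.asarray(ndvi_img, float) - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)
```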

  10. Estimation of proportions in mixed pixels through their region characterization

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B. (Principal Investigator)

    1981-01-01

    A region of mixed pixels can be characterized through the probability density function of the proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed-pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of the probability density functions of the proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squared errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.

  11. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    PubMed

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at the low energy consumption requirements of the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparsity, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
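
    Once the projection matrix is fixed, the real-time linear decoder described above reduces to a single matrix-vector product. A minimal sketch of the closed-form linear MMSE projection for a Gaussian signal model (the paper learns its matrix from training data, so this closed form is an illustrative stand-in):

```python
import numpy as np

def mmse_projection(Phi, Cx, noise_var):
    """Linear MMSE reconstruction matrix for CS measurements y = Phi x + n.

    P = Cx Phi^T (Phi Cx Phi^T + noise_var I)^{-1}, so decoding is the
    single product x_hat = P @ y, cheap enough for a real-time decoder.
    Cx is the (assumed known) covariance of the signal blocks.
    """
    G = Phi @ Cx @ Phi.T
    G[np.diag_indices_from(G)] += noise_var
    return Cx @ Phi.T @ np.linalg.inv(G)
```

    Decoding a measurement vector is then just `x_hat = P @ y`, with all the expensive linear algebra done once, offline.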

  12. Prolongation of SMAP to Spatiotemporally Seamless Coverage of Continental U.S. Using a Deep Learning Neural Network

    NASA Astrophysics Data System (ADS)

    Fang, Kuai; Shen, Chaopeng; Kifer, Daniel; Yang, Xiao

    2017-11-01

    The Soil Moisture Active Passive (SMAP) mission has delivered valuable sensing of surface soil moisture since 2015. However, it has a short time span and irregular revisit schedules. Utilizing a state-of-the-art time series deep learning neural network, Long Short-Term Memory (LSTM), we created a system that predicts the SMAP level-3 moisture product with atmospheric forcings, model-simulated moisture, and static physiographic attributes as inputs. The system removes most of the bias of the model simulations and improves the predicted moisture climatology, achieving small test root-mean-square errors (<0.035) and high correlation coefficients (>0.87) for over 75% of the Continental United States, including the forested southeast. As the first application of LSTM in hydrology, we show the proposed network avoids overfitting and is robust in both temporal and spatial extrapolation tests. LSTM generalizes well across regions with distinct climates and environmental settings. With high fidelity to SMAP, LSTM shows great potential for hindcasting, data assimilation, and weather forecasting.

  13. A high-resolution line sensor-based photostereometric system for measuring jaw movements in 6 degrees of freedom.

    PubMed

    Hayashi, T; Kurokawa, M; Miyakawa, M; Aizawa, T; Kanaki, A; Saitoh, A; Ishioka, K

    1994-01-01

    Photostereometry has been widely applied to the measurement of mandibular movements in 6 degrees of freedom. In order to improve the accuracy of this measurement, we developed a system utilizing small LEDs mounted on the jaws in redundant numbers and a 5000 pixel linear charge-coupled device (CCD) as the photo-sensor. A total of eight LEDs are mounted on the jaws, in two sets of four, by means of connecting facebows, each weighing approximately 55 g. The positions of the LEDs are detected in three dimensions by two sets of three CCD cameras, located bilaterally. The position and orientation of the mandible are estimated from the positions of all LEDs in the least-squares sense, thereby effectively reducing the measurement errors. Based on various accuracy verification tests, the static overall accuracy at tooth and condylar points was estimated to lie within 0.19 and 0.34 mm, respectively.

  14. Delay-distribution-dependent H∞ state estimation for delayed neural networks with (x,v)-dependent noises and fading channels.

    PubMed

    Sheng, Li; Wang, Zidong; Tian, Engang; Alsaadi, Fuad E

    2016-12-01

    This paper deals with the H ∞ state estimation problem for a class of discrete-time neural networks with stochastic delays subject to state- and disturbance-dependent noises (also called (x,v)-dependent noises) and fading channels. The time-varying stochastic delay takes values on certain intervals with known probability distributions. The system measurement is transmitted through fading channels described by the Rice fading model. The aim of the addressed problem is to design a state estimator such that the estimation performance is guaranteed in the mean-square sense against admissible stochastic time-delays, stochastic noises as well as stochastic fading signals. By employing the stochastic analysis approach combined with the Kronecker product, several delay-distribution-dependent conditions are derived to ensure that the error dynamics of the neuron states is stochastically stable with prescribed H ∞ performance. Finally, a numerical example is provided to illustrate the effectiveness of the obtained results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Tracking an Oil Tanker Collision and Spilled Oils in the East China Sea Using Multisensor Day and Night Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Sun, Shaojie; Lu, Yingcheng; Liu, Yongxue; Wang, Mengqiu; Hu, Chuanmin

    2018-04-01

    Satellite remote sensing is well known to play a critical role in monitoring marine accidents such as oil spills, yet the recent SANCHI oil tanker collision event in January 2018 in the East China Sea indicates that traditional techniques using synthetic aperture radar or daytime optical imagery could not provide timely and adequate coverage. In this study, we show the unprecedented value of Visible Infrared Imaging Radiometer Suite (VIIRS) Nightfire product and Day/Night Band data in tracking the oil tanker's drifting pathway and locations when all other means are not as effective for the same purpose. Such pathway and locations can also be reproduced with a numerical model, with root-mean-square error of <15 km. While high-resolution optical imagery after 4 days of the tanker's sinking reveals much larger oil spill area (>350 km2) than previous reports, the impact of the spilled condensate oil on the marine environment requires further research.

  16. Aerodynamic influence coefficient method using singularity splines

    NASA Technical Reports Server (NTRS)

    Mercer, J. E.; Weber, J. A.; Lesferd, E. P.

    1974-01-01

    A numerical lifting surface formulation, including computed results for planar wing cases, is presented. This formulation, referred to as the vortex spline scheme, combines the adaptability to complex shapes offered by paneling schemes with the smoothness and accuracy of loading function methods. The formulation employs a continuous distribution of singularity strength over a set of panels on a paneled wing. The basic distributions are independent, and each satisfies all the continuity conditions required of the final solution. These distributions are overlapped both spanwise and chordwise. Boundary conditions are satisfied in a least-squares error sense over the surface, using a finite summing technique to approximate the integral. The current formulation uses the elementary horseshoe vortex as the basic singularity and is therefore restricted to linearized potential flow. As part of the study, a nonplanar development was considered, but the numerical evaluation of the lifting surface concept was restricted to planar configurations. Also, a second-order sideslip analysis based on an asymptotic expansion was investigated using the singularity spline formulation.

  17. a Comprehensive Review of Pansharpening Algorithms for GÖKTÜRK-2 Satellite Images

    NASA Astrophysics Data System (ADS)

    Kahraman, S.; Ertürk, A.

    2017-11-01

    In this paper, a comprehensive review and performance evaluation of pansharpening algorithms for GÖKTÜRK-2 images is presented. GÖKTÜRK-2 is the first high resolution remote sensing satellite of Turkey designed and built in Turkey, collectively by the Ministry of Defence, TUBITAK-UZAY, and Turkish Aerospace Industries (TUSAŞ). GÖKTÜRK-2 was launched on 18 December 2012 from Jiuquan, China, and provides 2.5 meter panchromatic (PAN) and 5 meter multispectral (MS) spatial resolution satellite images. In this study, a large number of pansharpening algorithms are implemented and evaluated for performance on multiple GÖKTÜRK-2 satellite images. Quality assessments are conducted both qualitatively through visual results and quantitatively using Root Mean Square Error (RMSE), Correlation Coefficient (CC), Spectral Angle Mapper (SAM), Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS), Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM) and Universal Image Quality Index (UIQI).
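
    Two of the quantitative metrics listed, SAM and ERGAS, can be sketched from their standard definitions (minimal reference implementations, not the authors' code; `ratio` is the MS-to-PAN resolution ratio, which would be 2 for GÖKTÜRK-2's 5 m MS and 2.5 m PAN):

```python
import numpy as np

def sam(ref, test):
    """Spectral Angle Mapper: mean angle (radians) between the reference
    and test spectra, taken pixel by pixel over the last (band) axis."""
    ref2 = ref.reshape(-1, ref.shape[-1]).astype(float)
    test2 = test.reshape(-1, test.shape[-1]).astype(float)
    dots = (ref2 * test2).sum(axis=1)
    norms = np.linalg.norm(ref2, axis=1) * np.linalg.norm(test2, axis=1)
    return float(np.mean(np.arccos(np.clip(dots / norms, -1.0, 1.0))))

def ergas(ref, test, ratio):
    """ERGAS: 100/ratio * sqrt(mean over bands of (RMSE_b / mean_b)^2)."""
    ref2 = ref.reshape(-1, ref.shape[-1]).astype(float)
    test2 = test.reshape(-1, test.shape[-1]).astype(float)
    rmse_b = np.sqrt(np.mean((ref2 - test2) ** 2, axis=0))
    means = ref2.mean(axis=0)
    return float(100.0 / ratio * np.sqrt(np.mean((rmse_b / means) ** 2)))
```

    Both metrics are zero for a perfect pansharpening result and grow with spectral (SAM) or band-wise radiometric (ERGAS) distortion.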

  18. Updating Landsat-derived land-cover maps using change detection and masking techniques

    NASA Technical Reports Server (NTRS)

    Likens, W.; Maw, K.

    1982-01-01

    The California Integrated Remote Sensing System's San Bernardino County Project was devised to study the utilization of a data base at a number of jurisdictional levels. The present paper discusses the implementation of change-detection and masking techniques in the updating of Landsat-derived land-cover maps. A baseline land-cover classification was first created from a 1976 image, then the adjusted 1976 image was compared with a 1979 scene by the techniques of (1) multidate image classification, (2) difference image-distribution tails thresholding, (3) difference image classification, and (4) multi-dimensional chi-square analysis of a difference image. The union of the results of methods 1, 3 and 4 was used to create a mask of possible change areas between 1976 and 1979, which served to limit analysis of the update image and reduce comparison errors in unchanged areas. The techniques of spatial smoothing of change-detection products, and of combining results of difference change-detection algorithms, are also shown to improve Landsat change-detection accuracies.

  19. MIMO channel estimation and evaluation for airborne traffic surveillance in cellular networks

    NASA Astrophysics Data System (ADS)

    Vahidi, Vahid; Saberinia, Ebrahim

    2018-01-01

    A channel estimation (CE) procedure based on compressed sensing is proposed to estimate the sparse multiple-input multiple-output channel for traffic data transmission from drones to ground stations. The proposed procedure consists of an offline phase and a real-time phase. In the offline phase, a pilot arrangement method that considers the interblock and block mutual coherence simultaneously is proposed. The real-time phase contains three steps. In the first step, an a priori estimate of the channel is obtained by block orthogonal matching pursuit; that estimated channel is then used to compute the linear minimum mean square error (LMMSE) estimate of the received pilots. Finally, block compressive sampling matching pursuit uses the enhanced received pilots to estimate the channel more accurately. The performance of the CE procedure is evaluated by simulating the transmission of traffic data through the communication channel and evaluating its fidelity for car detection after demodulation. Simulation results indicate that the proposed CE technique considerably enhances the performance of car detection in a traffic image.

  20. Noninvasive in vivo glucose sensing using an iris based technique

    NASA Astrophysics Data System (ADS)

    Webb, Anthony J.; Cameron, Brent D.

    2011-03-01

    Physiological glucose monitoring is an important aspect of the treatment of individuals afflicted with diabetes mellitus. Although invasive techniques for glucose monitoring are widely available, it would be very beneficial to make such measurements in a noninvasive manner. In this study, a New Zealand White (NZW) rabbit animal model was utilized to evaluate a developed iris-based imaging technique for the in vivo measurement of physiological glucose concentration. The animals were anesthetized with isoflurane, and an insulin/dextrose protocol was used to control blood glucose concentration. To further restrict eye movement, a developed ocular fixation device was used. During the experimental time frame, near-infrared illuminated iris images were acquired along with corresponding discrete blood glucose measurements taken with a handheld glucometer. Calibration was performed using an image-based Partial Least Squares (PLS) technique. Independent validation was also performed to assess model performance, along with Clarke Error Grid Analysis (CEGA). Initial validation results were promising and showed that a high percentage of the predicted glucose concentrations were within 20% of the reference values.

  1. Cuff-less blood pressure measurement using pulse arrival time and a Kalman filter

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Chen, Xianxiang; Fang, Zhen; Xue, Yongjiao; Zhan, Qingyuan; Yang, Ting; Xia, Shanhong

    2017-02-01

    The present study designs an algorithm to increase the accuracy of continuous blood pressure (BP) estimation. Pulse arrival time (PAT) has been widely used for continuous BP estimation. However, because of motion artifacts and physiological activities, PAT-based methods often suffer from low BP estimation accuracy. This paper uses a signal-quality-modified Kalman filter to track blood pressure changes. A Kalman filter guarantees that the BP estimate is optimal in the sense of minimizing the mean square error. We propose a joint signal quality index to adjust the measurement noise covariance, pushing the Kalman filter to weigh more heavily measurements from cleaner data. Twenty 2 h physiological data segments selected from the MIMIC II database were used to evaluate the performance. Compared with straightforward use of the PAT-based linear regression model, the proposed model achieved higher measurement accuracy. Due to its low computational complexity, the proposed algorithm can be easily transplanted into wearable sensor devices.
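
    The quality-weighted Kalman update described above can be sketched in one dimension (a random-walk state model; the tuning constants and the way the signal-quality index scales the measurement noise are illustrative assumptions, not the paper's exact design):

```python
import numpy as np

def kalman_track_bp(measurements, quality, q=0.1, r_base=4.0,
                    x0=120.0, p0=10.0):
    """1-D random-walk Kalman filter for cuff-less BP tracking.

    measurements: PAT-derived BP estimates (mmHg).
    quality: signal-quality index in (0, 1]; low quality inflates the
    measurement-noise covariance, so the filter trusts the model more
    and the noisy measurement less.
    q, r_base, x0, p0 are illustrative tuning values.
    """
    x, p = x0, p0
    out = []
    for z, sqi in zip(measurements, quality):
        # Predict step (random-walk state model).
        p = p + q
        # Update step with quality-modified measurement noise.
        r = r_base / max(sqi, 1e-6)
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)
```

    With a clean signal (quality near 1) the filter follows the measurements closely; as quality drops, a jumpy PAT-derived reading moves the estimate only slightly.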

  2. [Gaussian process regression and its application in near-infrared spectroscopy analysis].

    PubMed

    Feng, Ai-Ming; Fang, Li-Min; Lin, Min

    2011-06-01

    Gaussian process (GP) regression is applied in the present paper as a chemometric method to explore the complicated relationship between near infrared (NIR) spectra and ingredients. After outliers were detected by the Monte Carlo cross validation (MCCV) method and removed from the dataset, different preprocessing methods, such as multiplicative scatter correction (MSC), smoothing, and derivatives, were tried for the best performance of the models. Furthermore, uninformative variable elimination (UVE) was introduced as a variable selection technique, and the characteristic wavelengths obtained were further employed as input for modeling. A public dataset with 80 NIR spectra of corn was used as an example for evaluating the new algorithm. The optimal models for oil, starch and protein were obtained by the GP regression method. The performance of the final models was evaluated according to the root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV), root mean square error of prediction (RMSEP) and correlation coefficient (r). The models give good calibration ability with r values above 0.99, and the prediction ability is also satisfactory with r values higher than 0.96. The overall results demonstrate that the GP algorithm is an effective chemometric method and is promising for NIR analysis.
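
    The core of GP regression as used here, the posterior-mean prediction under an RBF kernel, can be sketched as follows (hyperparameters fixed for illustration; in practice they are tuned, e.g. by maximizing the marginal likelihood, and the spectral preprocessing steps are omitted):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential (RBF) kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_fit_predict(X_train, y_train, X_test, length_scale=1.0, noise=1e-2):
    """GP regression posterior mean (zero prior mean, RBF kernel)."""
    K = rbf_kernel(X_train, X_train, length_scale)
    K[np.diag_indices_from(K)] += noise        # observation-noise jitter
    alpha = np.linalg.solve(K, np.asarray(y_train, float))
    return rbf_kernel(X_test, X_train, length_scale) @ alpha

def rmse(y_true, y_pred):
    """Root mean square error, as used for RMSEC/RMSECV/RMSEP."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```

    Evaluating `rmse` on the calibration set, cross-validation folds, and an independent prediction set yields RMSEC, RMSECV, and RMSEP respectively.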

  3. Optimal least-squares finite element method for elliptic problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Povinelli, Louis A.

    1991-01-01

    An optimal least-squares finite element method is proposed for two-dimensional and three-dimensional elliptic problems, and its advantages over the mixed Galerkin method and the usual least-squares finite element method are discussed. In the usual least-squares finite element method, the second-order equation -∇·(∇u) + u = f is recast as the first-order system (-∇·p + u = f, ∇u - p = 0). The error analysis and numerical experiments show that, in this usual least-squares finite element method, the rate of convergence for the flux p is one order lower than optimal. In order to obtain an optimal least-squares method, the irrotationality condition ∇ × p = 0 should be included in the first-order system.

  4. Performance of the S-χ² Statistic for Full-Information Bifactor Models

    ERIC Educational Resources Information Center

    Li, Ying; Rupp, Andre A.

    2011-01-01

    This study investigated the Type I error rate and power of the multivariate extension of the S-χ² statistic using unidimensional and multidimensional item response theory (UIRT and MIRT, respectively) models as well as full-information bifactor (FI-bifactor) models through simulation. Manipulated factors included test length, sample…

  5. F-Test Alternatives to Fisher's Exact Test and to the Chi-Square Test of Homogeneity in 2x2 Tables.

    ERIC Educational Resources Information Center

    Overall, John E.; Starbuck, Robert R.

    1983-01-01

    An alternative to Fisher's exact test and the chi-square test for homogeneity in two-by-two tables is developed. The method provides for Type I error rates which are closer to the stated alpha level than either of the alternatives. (JKS)

  6. Multilevel Modeling and Ordinary Least Squares Regression: How Comparable Are They?

    ERIC Educational Resources Information Center

    Huang, Francis L.

    2018-01-01

    Studies analyzing clustered data sets using both multilevel models (MLMs) and ordinary least squares (OLS) regression have generally concluded that resulting point estimates, but not the standard errors, are comparable with each other. However, the accuracy of the estimates of OLS models is important to consider, as several alternative techniques…

  7. Discrete Tchebycheff orthonormal polynomials and applications

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
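
    The idea above, fitting with polynomials that are orthonormal over the discrete sample points so the expansion coefficients are simple inner products, can be sketched as follows (QR factorization of the Vandermonde matrix stands in for the explicit Gram/Tchebycheff recurrences; on uniformly spaced points it spans the same basis):

```python
import numpy as np

def discrete_orthonormal_fit(x, y, degree):
    """Least-squares polynomial fit via discrete orthonormal polynomials.

    The orthonormal basis over the sample points x is built by QR
    factorization of the Vandermonde matrix (numerically equivalent to
    Gram-Schmidt, which on uniformly spaced x yields the discrete
    Tchebycheff/Gram polynomials). Coefficients are then inner products,
    which is far better conditioned for high-order fits than solving the
    normal equations directly.
    """
    x = np.asarray(x, float)
    V = np.vander(x, degree + 1, increasing=True)   # 1, x, x^2, ...
    Q, R = np.linalg.qr(V)        # columns of Q: orthonormal polynomials
    c = Q.T @ np.asarray(y, float)  # expansion coefficients
    fitted = Q @ c
    return fitted, c
```

    Raising the degree simply appends coefficients without disturbing the lower-order ones, which is why this family is convenient for deciding fit order on noisy data.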

  8. Nondestructive evaluation of soluble solid content in strawberry by near infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Guo, Zhiming; Huang, Wenqian; Chen, Liping; Wang, Xiu; Peng, Yankun

    This paper indicates the feasibility of using near infrared (NIR) spectroscopy combined with synergy interval partial least squares (siPLS) algorithms as a rapid nondestructive method to estimate the soluble solid content (SSC) in strawberry. Spectral preprocessing methods were optimally selected by cross-validation in the model calibration. The partial least squares (PLS) algorithm was used for calibration of the regression model. The performance of the final model was evaluated according to the root mean square error of calibration (RMSEC) and correlation coefficient (R²c) in the calibration set, and tested by the root mean square error of prediction (RMSEP) and correlation coefficient (R²p) in the prediction set. The optimal siPLS model was obtained after first-derivative spectra preprocessing. The measurement results of the best model were as follows: RMSEC = 0.2259 and R²c = 0.9590 in the calibration set; RMSEP = 0.2892 and R²p = 0.9390 in the prediction set. This work demonstrated that NIR spectroscopy and siPLS with efficient spectral preprocessing is a useful tool for nondestructive evaluation of SSC in strawberry.

  9. Active microwave remote sensing of oceans, chapter 3

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A rationale is developed for the use of active microwave sensing in future aerospace applications programs for the remote sensing of the world's oceans, lakes, and polar regions. Summaries pertaining to applications, local phenomena, and large-scale phenomena are given along with a discussion of orbital errors.

  10. Hypoglycemia early alarm systems based on recursive autoregressive partial least squares models.

    PubMed

    Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick

    2013-01-01

    Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems of potential hypoglycemia. A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Glucose prediction models based on real-time filtered data have a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm systems based on RARPLS show good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positives/day are obtained for the early alarm system based on six-step-ahead predicted glucose values, with an average early detection time of 25.25 min. The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia unless preventive action is taken far in advance. © 2012 Diabetes Technology Society.
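
    A much-simplified version of the prediction-based alarm logic can be sketched as follows (an ordinary least-squares AR model stands in for the recursive AR-PLS models of the paper; the 70 mg/dl alarm threshold, AR order, and 5 min sampling are assumptions for illustration):

```python
import numpy as np

def ar_predict_ahead(glucose, order=3, steps=6):
    """Fit an AR(order) model to recent glucose samples by least squares
    and iterate it forward `steps` samples (6 x 5 min = 30 min ahead)."""
    g = np.asarray(glucose, float)
    # Lagged regression: g[t] ~ [1, g[t-1], ..., g[t-order]].
    rows = [np.r_[1.0, g[t - 1::-1][:order]] for t in range(order, len(g))]
    X = np.array(rows)
    coef, *_ = np.linalg.lstsq(X, g[order:], rcond=None)
    history = list(g)
    for _ in range(steps):
        feats = np.r_[1.0, np.array(history[::-1][:order])]
        history.append(float(feats @ coef))
    return history[-1]

def hypo_alarm(glucose, threshold=70.0, **kw):
    """Raise an early alarm if the 30-min-ahead prediction is below
    the hypoglycemia threshold (mg/dl)."""
    return ar_predict_ahead(glucose, **kw) < threshold
```

    On a steady glucose trace no alarm fires, while a steadily falling trace triggers the alarm well before the measured value crosses the threshold, which is the point of predicting ahead.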

  11. Hypoglycemia Early Alarm Systems Based on Recursive Autoregressive Partial Least Squares Models

    PubMed Central

    Bayrak, Elif Seyma; Turksoy, Kamuran; Cinar, Ali; Quinn, Lauretta; Littlejohn, Elizabeth; Rollins, Derrick

    2013-01-01

    Background Hypoglycemia caused by intensive insulin therapy is a major challenge for artificial pancreas systems. Early detection and prevention of potential hypoglycemia are essential for the acceptance of fully automated artificial pancreas systems. Many of the proposed alarm systems are based on interpretation of recent values or trends in glucose values. In the present study, subject-specific linear models are introduced to capture glucose variations and predict future blood glucose concentrations. These models can be used in early alarm systems of potential hypoglycemia. Methods A recursive autoregressive partial least squares (RARPLS) algorithm is used to model the continuous glucose monitoring sensor data and predict future glucose concentrations for use in hypoglycemia alarm systems. The partial least squares models constructed are updated recursively at each sampling step with a moving window. An early hypoglycemia alarm algorithm using these models is proposed and evaluated. Results Glucose prediction models based on real-time filtered data have a root mean squared error of 7.79 and a sum of squares of glucose prediction error of 7.35% for six-step-ahead (30 min) glucose predictions. The early alarm systems based on RARPLS show good performance. A sensitivity of 86% and a false alarm rate of 0.42 false positives/day are obtained for the early alarm system based on six-step-ahead predicted glucose values, with an average early detection time of 25.25 min. Conclusions The RARPLS models developed provide satisfactory glucose prediction with relatively smaller error than other proposed algorithms and are good candidates to forecast and warn about potential hypoglycemia unless preventive action is taken far in advance. PMID:23439179

  12. Monitoring spatial variations in soil organic carbon using remote sensing and geographic information systems

    NASA Astrophysics Data System (ADS)

    Jaber, Salahuddin M.

    Soil organic carbon (SOC) sequestration is a component of larger strategies to control the accumulation of greenhouse gases that may be causing global warming. To implement this approach, it is necessary to improve the methods of measuring SOC content. Among these methods are indirect remote sensing and geographic information systems (GIS) techniques that are required to provide non-intrusive, low cost, and spatially continuous information that covers large areas on a repetitive basis. The main goal of this study is to evaluate the effects of using Hyperion hyperspectral data on improving the existing remote sensing and GIS-based methodologies for rapidly, efficiently, and accurately measuring SOC content on farmland. The study area is Big Creek Watershed (BCW) in Southern Illinois. The methodology consists of compiling a GIS database (consisting of remote sensing and soil variables) for 303 composite soil samples collected from representative pixels along the Hyperion coverage area of the watershed. Stepwise procedures were used to calibrate and validate linear multiple regression models where SOC was regarded as the response and the other remote sensing and soil variables as the predictors. Two models were selected: the first was the best all-variables model and the second was the best only-raster-variables model. Map algebra was implemented to extrapolate the best only-raster-variables model and produce a SOC map for the BCW. This study concluded that Hyperion data marginally improved the predictability of the existing SOC statistical models based on multispectral satellite remote sensing sensors, with a correlation coefficient of 0.37 and a root mean square error of 3.19 metric tons/hectare to a 15-cm depth. The total SOC pool of the study area is about 225,232 metric tons to 15-cm depth. The nonforested wetlands contained the highest SOC density (34.3 metric tons/hectare/15cm) with total SOC content of about 2,003.5 metric tons to 15-cm depth, whereas croplands had the lowest SOC density (21.6 metric tons/hectare/15cm) with total SOC content of about 44,571.2 metric tons to 15-cm depth.

  13. Predicting tropical plant physiology from leaf and canopy spectroscopy

    NASA Astrophysics Data System (ADS)

    Doughty, C.; Asner, G. P.; Martin, R.

    2009-12-01

    A broad understanding of tropical forest leaf photosynthesis has long been a goal for tropical forest ecologists, but has remained elusive due to difficult canopy access and great species diversity. In this paper, we develop an empirical model to predict light-saturated sunlit tropical leaf photosynthesis from leaf and canopy spectra, with the goal of developing a high resolution remote sensing technique to measure canopy photosynthesis. To develop this model, we used the partial least squares (PLS) regression technique on three tropical forest datasets (~168 species), two in Hawaii and one in the tropical rainforest module of Biosphere 2 (B2L). For each species, we measured light saturated photosynthesis (A), light and CO2 saturated photosynthesis (Amax), day respiration (R), leaf spectra (400-2500 nm with 1 nm sampling), leaf nitrogen (N), chlorophyll A and B, carotenoids, and specific leaf area (SLA). On a subset of species we measured Jmax and Vcmax based on light-response and A-Ci curves. The model best predicted A (r2 = 0.74, root mean square error (RMSE) = 2.85 µmol m-2 s-1) and R (r2 = 0.48, RMSE = 0.52 µmol m-2 s-1), followed by Amax (r2 = 0.47, RMSE = 5.1 µmol m-2 s-1), Jmax (r2 = 0.52, RMSE = 39) and Vcmax (r2 = 0.39, RMSE = 36). The PLS weightings, which indicate which wavelengths contribute most to the model, showed that the physiology weightings were most similar to the nitrogen weightings, followed by chlorophyll and SLA. We combined leaf-level reflectance and transmittance with a canopy radiative transfer model to simulate top-of-canopy reflectance, and found that canopy spectra predict light-saturated photosynthesis more accurately (RMSE = 2.4 µmol m-2 s-1) than leaf spectra (RMSE = 2.85 µmol m-2 s-1). The results suggest that this technique could be used with high fidelity imaging spectrometers to remotely sense tropical forest canopy photosynthesis.
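    The skill metrics quoted above (r2 and RMSE) are simple functions of paired measured and predicted values; a minimal sketch, with made-up photosynthesis numbers rather than the study's data:

```python
# Minimal sketch of the two skill metrics reported above (r2, RMSE),
# computed from paired measured/predicted values. The photosynthesis
# numbers (umol m-2 s-1) are made up for illustration.

def rmse(measured, predicted):
    """Root mean square error of predictions."""
    n = len(measured)
    return (sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n) ** 0.5

def r_squared(measured, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_m = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    return 1.0 - ss_res / ss_tot

A_meas = [4.0, 7.5, 10.0, 12.5, 15.0, 18.0]   # "measured" A
A_pred = [5.0, 7.0, 11.0, 12.0, 16.0, 17.0]   # "predicted" A
print(round(rmse(A_meas, A_pred), 3))      # 0.866
print(round(r_squared(A_meas, A_pred), 3)) # 0.965
```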

  14. Modeling soil parameters using hyperspectral image reflectance in subtropical coastal wetlands

    NASA Astrophysics Data System (ADS)

    Anne, Naveen J. P.; Abd-Elrahman, Amr H.; Lewis, David B.; Hewitt, Nicole A.

    2014-12-01

    Developing spectral models of soil properties is an important frontier in remote sensing and soil science. Several studies have focused on modeling soil properties such as total pools of soil organic matter and carbon in bare soils. We extended this effort to model soil parameters in areas densely covered with coastal vegetation. Moreover, we investigated soil properties indicative of soil functions such as nutrient and organic matter turnover and storage. These properties include the partitioning of mineral and organic soil between particulate (>53 μm) and fine size classes, and the partitioning of soil carbon and nitrogen pools between stable and labile fractions. Soil samples were obtained from Avicennia germinans mangrove forest and Juncus roemerianus salt marsh plots on the west coast of central Florida. Spectra corresponding to field plot locations from a Hyperion hyperspectral image were extracted and analyzed. The spectral information was regressed against the soil variables to determine the best single bands and optimal band combinations for the simple ratio (SR) and normalized difference index (NDI) indices. The regression analysis yielded correlations for soil variables with R2 values ranging from 0.21 to 0.47 for best individual bands, 0.28 to 0.81 for two-band indices, and 0.53 to 0.96 for partial least-squares (PLS) regressions for the Hyperion image data. Spectral models using Hyperion data adequately (RPD > 1.4) predicted particulate organic matter (POM), silt + clay, labile carbon (C), and labile nitrogen (N) (where RPD = ratio of standard deviation to root mean square error of cross-validation [RMSECV]). The SR (0.53 μm, 2.11 μm) model of labile N with R2 = 0.81, RMSECV = 0.28, and RPD = 1.94 produced the best results in this study. 
Our results provide optimism that remote-sensing spectral models can successfully predict soil properties indicative of ecosystem nutrient and organic matter turnover and storage, and do so in areas with dense canopy cover.
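    The two band-index forms screened above are plain arithmetic on reflectances; a minimal sketch in which only the band positions (0.53 µm and 2.11 µm, from the best labile-N model) come from the abstract, while the reflectance values are hypothetical:

```python
# The two index forms screened in the study, as plain arithmetic.
# Band positions (0.53 um, 2.11 um) follow the abstract's best
# labile-N model; the reflectance values themselves are hypothetical.

def simple_ratio(r_a, r_b):
    """SR = R_a / R_b."""
    return r_a / r_b

def normalized_difference(r_a, r_b):
    """NDI = (R_a - R_b) / (R_a + R_b), bounded in [-1, 1]."""
    return (r_a - r_b) / (r_a + r_b)

r_530, r_2110 = 0.08, 0.02   # hypothetical reflectances at the two bands
print(round(simple_ratio(r_530, r_2110), 3))          # 4.0
print(round(normalized_difference(r_530, r_2110), 3)) # 0.6
```

    In a band-selection study such as this one, every pair of bands would be scanned and the pair maximizing R2 against the soil variable retained.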

  15. Human sense utilization method on real-time computer graphics

    NASA Astrophysics Data System (ADS)

    Maehara, Hideaki; Ohgashi, Hitoshi; Hirata, Takao

    1997-06-01

    We are developing an adjustment method for real-time computer graphics that exploits human sensibility, so that the graphics convey to the audience the senses the producer intends. In general, producing real-time computer graphics requires adjusting many parameters, such as 3D object models, their motions, attributes, view angle, and parallax, so that the graphics give the audience convincing effects such as material realism and a sense of immersion. It is also well known that adjusting these parameters by trial and error is costly. A graphics producer typically evaluates the graphics in order to improve them; for example, they may lack a 'sense of speed' or need more of a 'sense of calmness.' On the other hand, by statistically analyzing several samples of computer graphics that evoke different senses, we can learn how the parameters affect those senses. Building on these two observations, we designed a method in which the desired sense levels are input to a computer, which then adjusts the parameters accordingly. Using this method, real-time computer graphics can be adjusted more effectively than by the conventional trial-and-error approach.

  16. Grid workflow validation using ontology-based tacit knowledge: A case study for quantitative remote sensing applications

    NASA Astrophysics Data System (ADS)

    Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi

    2017-01-01

    Workflow for remote sensing quantitative retrieval is the "bridge" between Grid services and Grid-enabled applications of remote sensing quantitative retrieval. Workflow abstracts away low-level implementation details of the Grid and hence enables users to focus on higher levels of the application. The workflow for remote sensing quantitative retrieval plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction and implementation of large-scale complicated applications of remote sensing science. Validation of workflows is important in order to support large-scale sophisticated scientific computation processes with enhanced performance and to minimize potential waste of time and resources. To verify the semantic correctness of user-defined workflows, in this paper we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing the knowledge with an ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data source consistency error validation and parameter matching error validation.

  17. Selective logging in the Brazilian Amazon.

    Treesearch

    G. P. Asner; D. E. Knapp; E. N. Broadbent; P. J. C. Oliveira; M Keller; J. N. Silva

    2005-01-01

    Amazon deforestation has been measured by remote sensing for three decades. In comparison, selective logging has been mostly invisible to satellites. We developed a large-scale, high-resolution, automated remote-sensing analysis of selective logging in the top five timber-producing states of the Brazilian Amazon. Logged areas ranged from 12,075 to 19,823 square...

  18. Analysis of Stress Distributions Under Lightweight Wheeled Vehicles

    DTIC Science & Technology

    2013-10-09

    For a balanced analysis it is important to examine the full-scale error εf. Sinkage error, although large in a relative sense, is typically on the...director of the Edgerton Center at MIT, to Thuan Doan, and to Meccanotecnica Riesi SRL for collaborating on manufacturing the custom sensing array...a pulling/braking force at the vehicle axle. Fx = T − Rc (33) The importance of drawbar force is obvious, since a positive drawbar force implies that

  19. Precipitation Data Merging over Mountainous Areas Using Satellite Estimates and Sparse Gauge Observations (PDMMA-USESGO) for Hydrological Modeling — A Case Study over the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Yang, Z.; Hsu, K. L.; Sorooshian, S.; Xu, X.

    2017-12-01

    Precipitation in mountain regions generally occurs with high frequency and intensity, yet it is poorly captured by sparsely distributed rain gauges, which poses a great challenge for water management. Satellite-based Precipitation Estimation (SPE) provides global high-resolution alternative data for hydro-climatic studies, but is subject to considerable biases. In this study, a model named PDMMA-USESGO for Precipitation Data Merging over Mountainous Areas Using Satellite Estimates and Sparse Gauge Observations is developed to support precipitation mapping and hydrological modeling in mountainous catchments. The PDMMA-USESGO framework includes two steps, adjusting SPE biases and merging satellite-gauge estimates, implemented using the quantile mapping approach, a two-dimensional Gaussian weighting scheme (accounting for elevation effects), and an inverse root mean square error weighting method. The model is applied and evaluated over the Tibetan Plateau (TP) with the PERSIANN-CCS precipitation retrievals (daily, 0.04°×0.04°) and sparse observations from 89 gauges, for the 11-yr period of 2003-2013. To assess the effect of data merging on streamflow modeling, a hydrological evaluation is conducted over a watershed in southeast TP based on the Soil and Water Assessment Tool (SWAT). Evaluation results indicate that the model is effective in generating high-resolution, high-accuracy precipitation estimates over mountainous terrain, with the merged estimates (Mer-SG) showing consistently improved correlation coefficients, root mean square errors and absolute mean biases relative to the original satellite estimates (Ori-CCS). Streamflow simulations forced with Mer-SG improve markedly on those using Ori-CCS, with the coefficient of determination (R2) and Nash-Sutcliffe efficiency reaching 0.8 and 0.65, respectively. 
The presented model and case study serve as valuable references for the hydro-climatic applications using remote sensing-gauge information in other mountain areas of the world.
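    The inverse root mean square error weighting step described above can be sketched in a few lines. The estimates and RMSE values below are hypothetical, and the satellite value is assumed to have already been bias-adjusted by quantile mapping:

```python
# Sketch of the inverse-RMSE weighting used in the merging step.
# The satellite value is assumed already bias-adjusted by quantile
# mapping; all numbers below are hypothetical.

def inverse_rmse_weights(rmses):
    """Weights proportional to 1/RMSE, normalized to sum to one."""
    inv = [1.0 / r for r in rmses]
    total = sum(inv)
    return [v / total for v in inv]

def merge_estimates(estimates, rmses):
    """Weighted combination of precipitation estimates at one cell."""
    return sum(w * e for w, e in zip(inverse_rmse_weights(rmses), estimates))

sat_mm, gauge_mm = 12.0, 8.0   # daily precipitation estimates (mm)
merged = merge_estimates([sat_mm, gauge_mm], [4.0, 2.0])
print(round(merged, 2))  # 9.33 (gauge, with half the RMSE, gets double weight)
```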

  20. Determining the Uncertainty of X-Ray Absorption Measurements

    PubMed Central

    Wojcik, Gary S.

    2004-01-01

    X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well-approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2 % by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10 % in any of the tests conducted. PMID:27366627
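    The Poisson assumption discussed above implies a relative uncertainty of 1/sqrt(N) for N counts; a quick numerical check that counts per point above 4000 keep this estimate below the 2 % level reported in the abstract:

```python
# The Poisson assumption above: a count N has standard deviation
# sqrt(N), so the normalized (relative) uncertainty is sqrt(N)/N
# = 1/sqrt(N). Count values are illustrative.

def poisson_relative_uncertainty(counts):
    """Relative uncertainty sqrt(N)/N = 1/sqrt(N) of a Poisson count."""
    return counts ** -0.5

for n in (1000, 4000, 16000):
    print(n, round(100 * poisson_relative_uncertainty(n), 2), "%")
# 4000 counts give ~1.58 %, below the 2 % level cited above
```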

  1. Direct and simultaneous quantification of tannin mean degree of polymerization and percentage of galloylation in grape seeds using diffuse reflectance fourier transform-infrared spectroscopy.

    PubMed

    Pappas, Christos; Kyraleou, Maria; Voskidi, Eleni; Kotseridis, Yorgos; Taranilis, Petros A; Kallithraka, Stamatina

    2015-02-01

    The mean degree of polymerization (mDP) and the degree of galloylation (%G) of tannins in grape seeds were determined directly and simultaneously using diffuse reflectance infrared Fourier transform spectroscopy and partial least squares (PLS) regression. The results were compared with those obtained using the conventional analysis, which employs phloroglucinolysis as a pretreatment followed by high performance liquid chromatography with UV and mass spectrometry detection. Infrared spectra were recorded on solid-state samples after freeze drying. The 2nd derivative of the 1832 to 1416 and 918 to 739 cm(-1) spectral regions was used for the quantification of mDP, and the 2nd derivative of the 1813 to 607 cm(-1) spectral region for the determination of %G, together with PLS regression. The determination coefficients (R(2)) of mDP and %G were 0.99 and 0.98, respectively. The corresponding values of the root-mean-square error of calibration were found to be 0.506 and 0.692, of the root-mean-square error of cross-validation 0.811 and 0.921, and of the root-mean-square error of prediction 0.612 and 0.801. The proposed method, in comparison with the conventional method, is simpler, less time consuming, and more economical, and requires smaller quantities of chemical reagents and fewer sample pretreatment steps. It could be a starting point for the design of more specific models according to the requirements of the wineries. © 2015 Institute of Food Technologists®

  2. Rapid discrimination between buffalo and cow milk and detection of adulteration of buffalo milk with cow milk using synchronous fluorescence spectroscopy in combination with multivariate methods.

    PubMed

    Durakli Velioglu, Serap; Ercioglu, Elif; Boyaci, Ismail Hakki

    2017-05-01

    This research paper describes the potential of synchronous fluorescence (SF) spectroscopy for authentication of buffalo milk, a favourable raw material in the production of some premium dairy products. Buffalo milk is subjected to fraudulent activities like many other high-priced foodstuffs. The current methods widely used for the detection of adulteration of buffalo milk have various disadvantages that make them unattractive for routine analysis. Thus, the aim of the present study was to assess the potential of SF spectroscopy in combination with multivariate methods for rapid discrimination between buffalo and cow milk and detection of the adulteration of buffalo milk with cow milk. SF spectra of cow and buffalo milk samples were recorded over the 400-550 nm excitation range with Δλ of 10-100 nm, in steps of 10 nm. The data obtained for Δλ = 10 nm were utilised to classify the samples using principal component analysis (PCA) and to detect the adulteration level of buffalo milk with cow milk using partial least squares (PLS) methods. Successful discrimination of samples and detection of adulteration of buffalo milk, with a limit of detection (LOD) of 6%, were achieved with models having root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP) values of 2%, 7% and 4%, respectively. The results reveal the potential of SF spectroscopy for rapid authentication of buffalo milk.

  3. Local Linear Regression for Data with AR Errors.

    PubMed

    Li, Runze; Li, Yan

    2009-07-01

    In many statistical applications, data are collected over time, and they are likely correlated. In this paper, we investigate how to incorporate the correlation information into local linear regression. Under the assumption that the error process is an auto-regressive process, a new estimation procedure is proposed for the nonparametric regression by using the local linear regression method and profile least squares techniques. We further propose the SCAD-penalized profile least squares method to determine the order of the auto-regressive process. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed procedures and to compare them with the existing one. From our empirical studies, the newly proposed procedures can dramatically improve the accuracy of naive local linear regression with a working-independence error structure. We illustrate the proposed methodology by an analysis of a real data set.

  4. A study of GPS measurement errors due to noise and multipath interference for CGADS

    NASA Technical Reports Server (NTRS)

    Axelrad, Penina; MacDoran, Peter F.; Comp, Christopher J.

    1996-01-01

    This report describes a study performed by the Colorado Center for Astrodynamics Research (CCAR) on GPS measurement errors in the Codeless GPS Attitude Determination System (CGADS) due to noise and multipath interference. Preliminary simulation models of the CGADS receiver and orbital multipath are described. The standard FFT algorithm for processing the codeless data is described, and two alternative algorithms, an auto-regressive/least-squares (AR-LS) method and a combined adaptive notch filter/least-squares (ANF-ALS) method, are also presented. Effects of system noise, quantization, baseband frequency selection, and Doppler rates on the accuracy of phase estimates with each of the processing methods are shown. Typical electrical phase errors for the AR-LS method are 0.2 degrees, compared to 0.3 and 0.5 degrees for the FFT and ANF-ALS algorithms, respectively. Doppler rate was found to have the largest effect on the performance.

  5. Linear regression in astronomy. II

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.; Babu, Gutti J.

    1992-01-01

    A wide variety of least-squares linear regression procedures used in observational astronomy, particularly investigations of the cosmic distance scale, are presented and discussed. The classes of linear models considered are (1) unweighted regression lines, with bootstrap and jackknife resampling; (2) regression solutions when measurement error, in one or both variables, dominates the scatter; (3) methods to apply a calibration line to new data; (4) truncated regression models, which apply to flux-limited data sets; and (5) censored regression models, which apply when nondetections are present. For the calibration problem we develop two new procedures: a formula for the intercept offset between two parallel data sets, which propagates slope errors from one regression to the other; and a generalization of the Working-Hotelling confidence bands to nonstandard least-squares lines. They can provide improved error analysis for Faber-Jackson, Tully-Fisher, and similar cosmic distance scale relations.

  6. Application of Adaptive Neuro-Fuzzy Inference System for Prediction of Neutron Yield of IR-IECF Facility in High Voltages

    NASA Astrophysics Data System (ADS)

    Adineh-Vand, A.; Torabi, M.; Roshani, G. H.; Taghipour, M.; Feghhi, S. A. H.; Rezaei, M.; Sadati, S. M.

    2013-09-01

    This paper presents a soft-computing-based artificial intelligence technique, the adaptive neuro-fuzzy inference system (ANFIS), to predict the neutron production rate (NPR) of the IR-IECF device over wide discharge current and voltage ranges. A hybrid learning algorithm consisting of back-propagation and least-squares estimation is used for training the ANFIS model. The performance of the proposed ANFIS model is tested against experimental data using four performance measures: correlation coefficient, mean absolute error, mean relative error percentage (MRE%) and root mean square error. The obtained results show that the proposed ANFIS model achieves good agreement with the experimental results, with MRE% below 1.53% and 2.85% for the training and testing data, respectively. Therefore, this model can be used as an efficient tool to predict the NPR of the IR-IECF device.

  7. Spectrophotometric determination of ternary mixtures of thiamin, riboflavin and pyridoxal in pharmaceutical and human plasma by least-squares support vector machines.

    PubMed

    Niazi, Ali; Zolgharnein, Javad; Afiuni-Zadeh, Somaie

    2007-11-01

    Ternary mixtures of thiamin, riboflavin and pyridoxal have been simultaneously determined in synthetic and real samples by application of spectrophotometry and least-squares support vector machines (LS-SVM). The calibration graphs were linear in the ranges of 1.0-20.0, 1.0-10.0 and 1.0-20.0 microg ml(-1), with detection limits of 0.6, 0.5 and 0.7 microg ml(-1) for thiamin, riboflavin and pyridoxal, respectively. The experimental calibration matrix was designed with 21 mixtures of these chemicals, with concentrations varied within the ranges of the calibration graphs. The simultaneous determination of these vitamin mixtures by spectrophotometric methods is a difficult problem due to spectral interferences. Partial least squares (PLS) modeling and least-squares support vector machines were used for the multivariate calibration of the spectrophotometric data. An excellent model was built using LS-SVM, with low prediction errors and superior performance relative to PLS. The root mean square errors of prediction (RMSEP) for thiamin, riboflavin and pyridoxal were 0.6926, 0.3755 and 0.4322 with PLS, and 0.0421, 0.0318 and 0.0457 with LS-SVM, respectively. The proposed method was satisfactorily applied to the rapid simultaneous determination of thiamin, riboflavin and pyridoxal in commercial pharmaceutical preparations and human plasma samples.

  8. Modeling error analysis of stationary linear discrete-time filters

    NASA Technical Reports Server (NTRS)

    Patel, R.; Toda, M.

    1977-01-01

    The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter, when only the range of errors in the elements of the model matrices is available.

  9. Accuracy Assessment of Digital Surface Models Based on WorldView-2 and ADS80 Stereo Remote Sensing Data

    PubMed Central

    Hobi, Martina L.; Ginzler, Christian

    2012-01-01

    Digital surface models (DSMs) are widely used in forest science to model the forest canopy. Stereo pairs of very high resolution satellite and digital aerial images are relatively new and their absolute accuracy for DSM generation is largely unknown. For an assessment of these input data, two DSMs based on a WorldView-2 stereo pair and one DSM based on ADS80 aerial images were generated with photogrammetric instruments. Rational polynomial coefficients (RPCs) define the orientation of the WorldView-2 satellite images, which can be enhanced with ground control points (GCPs). Thus two WorldView-2 DSMs were distinguished: a WorldView-2 RPCs-only DSM and a WorldView-2 GCP-enhanced RPCs DSM. The accuracy of the three DSMs was estimated with GPS measurements, manual stereo-measurements, and airborne laser scanning data (ALS). With GCP-enhanced RPCs the WorldView-2 image orientation could be optimised to a root mean square error (RMSE) of 0.56 m in planimetry and 0.32 m in height. This improvement in orientation allowed for a vertical median error of −0.24 m for the WorldView-2 GCP-enhanced RPCs DSM in flat terrain. Overall, the DSM based on ADS80 images showed the highest accuracy of the three models with a median error of 0.08 m over bare ground. As the accuracy of a DSM varies with land cover three classes were distinguished: herb and grass, forests, and artificial areas. The study suggested the ADS80 DSM to best model actual surface height in all three land cover classes, with median errors <1.1 m. The WorldView-2 GCP-enhanced RPCs model achieved good accuracy, too, with median errors of −0.43 m for the herb and grass vegetation and −0.26 m for artificial areas. Forested areas emerged as the most difficult land cover type for height modelling; still, with median errors of −1.85 m for the WorldView-2 GCP-enhanced RPCs model and −1.12 m for the ADS80 model, the input data sets evaluated here are quite promising for forest canopy modelling. PMID:22778645

  11. Real-time prediction and gating of respiratory motion using an extended Kalman filter and Gaussian process regression

    NASA Astrophysics Data System (ADS)

    Bukhari, W.; Hong, S.-M.

    2015-01-01

    Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR+, implements a gating function without pre-specifying a particular region of the patient’s breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR+ algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR+ implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR+ in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR+. The experimental results show that the EKF-GPR+ algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. 
As an example, EKF-GPR+ reduces the patient-wise RMS error to 37%, 39% and 42% in percent ratios relative to no prediction for a duty cycle of 80% at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The experiments also confirm that EKF-GPR+ controls the duty cycle with reasonable accuracy.
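The gating rule described above, pausing the beam wherever the predictive variance of the regression component is large, can be sketched with a toy Gaussian process regressor. This is a generic sketch, not the authors' EKF-GPR+ implementation: the RBF kernel, the gating threshold and the synthetic sinusoidal breathing trace are all assumptions.

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length=1.0, sigma_f=1.0, sigma_n=0.1):
    """GP regression with an RBF kernel: predictive mean and variance."""
    def k(a, b):
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(x_train, x_train) + sigma_n**2 * np.eye(len(x_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    Ks = k(x_test, x_train)
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = sigma_f**2 - np.sum(v**2, axis=0)   # diagonal of the predictive covariance
    return mean, var

t = np.linspace(0, 4 * np.pi, 40)          # past breathing samples
trace = np.sin(t)                          # surrogate respiratory trace
t_ahead = t + 0.2                          # lookahead prediction times
mean, var = gp_predict(t, trace, t_ahead)

# Gate the beam off wherever the predictive standard deviation is large,
# i.e. where a large prediction error is more probable.
beam_on = np.sqrt(np.maximum(var, 0.0)) < 0.5
duty_cycle = beam_on.mean()
```

As in the abstract, the gating decision comes directly from the predictive variance, with no separate detection mechanism: lowering the threshold trades duty cycle for prediction accuracy.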

  12. Evaluation of Satellite and Model Precipitation Products Over Turkey

    NASA Astrophysics Data System (ADS)

    Yilmaz, M. T.; Amjad, M.

    2017-12-01

    Satellite-based remote sensing, gauge stations, and models are the three major platforms for acquiring precipitation datasets. Among them, satellites and models have the advantage of retrieving spatially and temporally continuous and consistent datasets, while uncertainty estimates of these retrievals are often required in hydrological studies to understand the source and magnitude of the uncertainty in hydrological response parameters. In this study, satellite and model precipitation products are validated over various temporal scales (daily, 3-daily, 7-daily, 10-daily and monthly) using in-situ precipitation observations from a network of 733 gauges across Turkey. Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 version 7 and European Centre for Medium-Range Weather Forecasts (ECMWF) model estimates (daily, 3-daily, 7-daily and 10-daily accumulated forecasts) are used. Retrievals are evaluated for their mean and standard deviation, and their accuracies are assessed via bias, root mean square error (RMSE), error standard deviation and correlation coefficient statistics. Intensity-frequency analysis and contingency-table statistics such as percent correct, probability of detection, false alarm ratio and critical success index are determined using daily time series. Both ECMWF forecasts and TRMM observations, on average, overestimate precipitation compared to the gauge estimates; the wet biases are 10.26 mm/month and 8.65 mm/month for ECMWF and TRMM, respectively. RMSE values of ECMWF forecasts and TRMM estimates are 39.69 mm/month and 41.55 mm/month, respectively. Monthly correlations between Gauges-ECMWF, Gauges-TRMM and ECMWF-TRMM are 0.76, 0.73 and 0.81, respectively. The model and satellite error statistics are further compared against the gauge error statistics based on an inverse distance weighting (IDW) analysis. Both the model and the satellite data have smaller IDW errors (14.72 mm/month and 10.75 mm/month, respectively) than the gauges (21.58 mm/month). These results show that, on average, the ECMWF forecast data have higher skill than the TRMM observations. Overall, both show good potential for catchment-scale hydrological analysis.
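The evaluation statistics used in studies like this one (bias, RMSE, error standard deviation, correlation) are simple to compute, and RMSE decomposes exactly into the bias and the error standard deviation. A minimal sketch with synthetic monthly totals; the numbers are illustrative, not the study's data:

```python
import numpy as np

def validation_stats(estimate, gauge):
    """Bias, RMSE, error standard deviation and correlation against gauges."""
    err = estimate - gauge
    bias = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    esd = err.std()                    # spread of the error around the bias
    corr = np.corrcoef(estimate, gauge)[0, 1]
    return bias, rmse, esd, corr

rng = np.random.default_rng(0)
gauge = rng.gamma(2.0, 20.0, size=120)              # synthetic monthly totals (mm)
model = gauge + 10.0 + rng.normal(0.0, 15.0, 120)   # wet-biased, noisy estimate
bias, rmse, esd, corr = validation_stats(model, gauge)
# RMSE^2 = bias^2 + esd^2: separates systematic from random error
```

The decomposition is what lets a validation study report a wet bias and an error standard deviation as two independent pieces of the total RMSE.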

  13. An error analysis of least-squares finite element method of velocity-pressure-vorticity formulation for Stokes problem

    NASA Technical Reports Server (NTRS)

    Chang, Ching L.; Jiang, Bo-Nan

    1990-01-01

    A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formula. The 2D Stokes problem is analyzed to define the product space and its inner product, and the a priori estimates are derived to give the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.

  14. Blind Compressed Sensing Enables 3-Dimensional Dynamic Free Breathing Magnetic Resonance Imaging of Lung Volumes and Diaphragm Motion.

    PubMed

    Bhave, Sampada; Lingala, Sajan Goud; Newell, John D; Nagle, Scott K; Jacob, Mathews

    2016-06-01

    The objective of this study was to increase the spatial and temporal resolution of dynamic 3-dimensional (3D) magnetic resonance imaging (MRI) of lung volumes and diaphragm motion. To achieve this goal, we evaluate the utility of the proposed blind compressed sensing (BCS) algorithm to recover data from highly undersampled measurements. We evaluated the performance of the BCS scheme on retrospectively and prospectively undersampled measurements and compared it against view-sharing, the nuclear norm minimization scheme, and the l1 Fourier sparsity regularization scheme. Quantitative experiments were performed on a healthy subject using a fully sampled 2D data set with uniform radial sampling, retrospectively undersampled to 16 radial spokes per frame, corresponding to an undersampling factor of 8. The images obtained from the 4 reconstruction schemes were compared with the fully sampled data using mean square error and normalized high-frequency error metrics. The schemes were also compared using prospective 3D data acquired on a Siemens 3 T TIM TRIO MRI scanner from 8 healthy subjects during free breathing. Two expert cardiothoracic radiologists (R1 and R2) qualitatively evaluated the reconstructed 3D data sets on a 5-point scale (0-4) on the basis of spatial resolution, temporal resolution, and presence of aliasing artifacts. The BCS scheme gives better reconstructions (mean square error = 0.0232 and normalized high frequency = 0.133) than the other schemes in the 2D retrospective undersampling experiments, producing minimally distorted reconstructions up to an acceleration factor of 8 (16 radial spokes per frame). The prospective 3D experiments show that the BCS scheme provides visually improved reconstructions compared with the other schemes. The BCS scheme receives better qualitative scores than the nuclear norm and l1 Fourier sparsity regularization schemes in the temporal and spatial blurring categories, while the qualitative scores for aliasing artifacts of the nuclear norm and BCS schemes are comparable. The comparisons of tidal volume changes also show that the BCS scheme has less temporal blurring than the nuclear norm minimization and l1 Fourier sparsity regularization schemes. The minute ventilation estimated by BCS for tidal breathing in the supine position (4 L/min) and the measured supine inspiratory capacity (1.5 L) are in good agreement with values in the literature. The improved performance of BCS can be explained by its ability to efficiently adapt to the data, thus providing a richer representation of the signal. The feasibility of the BCS scheme was demonstrated for dynamic 3D free breathing MRI of lung volumes and diaphragm motion: a temporal resolution of ∼500 milliseconds and a spatial resolution of 2.7 × 2.7 × 10 mm with whole-lung coverage (16 slices) were achieved.

  15. Coupling finite element and spectral methods: First results

    NASA Technical Reports Server (NTRS)

    Bernardi, Christine; Debit, Naima; Maday, Yvon

    1987-01-01

    A Poisson equation on a rectangular domain is solved by coupling two methods: the domain is divided in two squares, a finite element approximation is used on the first square and a spectral discretization is used on the second one. Two kinds of matching conditions on the interface are presented and compared. In both cases, error estimates are proved.

  16. Calibrationless parallel magnetic resonance imaging: a joint sparsity model.

    PubMed

    Majumdar, Angshul; Chaudhury, Kunal Narayan; Ward, Rabab

    2013-12-05

    State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity maps for SENSE and SMASH, or the interpolation weights for GRAPPA and SPIRiT. All of these techniques are therefore sensitive to the calibration (parameter estimation) stage. In this work, we propose a parallel MRI technique that does not require any calibration but yields reconstruction results that are on par with (or even better than) state-of-the-art methods. The proposed method requires solving non-convex analysis- and synthesis-prior joint-sparsity problems, and this work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight-channel brain dataset and an eight-channel Shepp-Logan phantom. Two sampling schemes were used: variable-density random sampling and non-Cartesian radial sampling. An acceleration factor of 4 was used for the brain data and 6 for the phantom. The reconstruction results were quantitatively evaluated using the normalised mean squared error between the reconstructed images and the originals; qualitative evaluation was based on the reconstructed images themselves. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods (CS SENSE and l1SPIRiT) and two calibration-free techniques (Distributed CS and SAKE). Our method yields better reconstruction results than all of them.

  17. Application of a dual-resolution voxelization scheme to compressed-sensing (CS)-based iterative reconstruction in digital tomosynthesis (DTS)

    NASA Astrophysics Data System (ADS)

    Park, S. Y.; Kim, G. A.; Cho, H. S.; Park, C. K.; Lee, D. Y.; Lim, H. W.; Lee, H. W.; Kim, K. S.; Kang, S. Y.; Park, J. E.; Kim, W. S.; Jeon, D. H.; Je, U. K.; Woo, T. H.; Oh, J. E.

    2018-02-01

    In recent digital tomosynthesis (DTS), iterative reconstruction methods are often used owing to their potential to provide multiplanar images of superior quality to conventional filtered-backprojection (FBP)-based methods. However, the enormous computational cost of the iterative process remains an obstacle to their practical use. In this work, we propose a new DTS reconstruction method incorporating a dual-resolution voxelization scheme to overcome this difficulty: voxels outside a small region-of-interest (ROI) containing the diagnostic target are binned by 2 × 2 × 2, while the voxels inside the ROI remain unbinned. We considered a compressed-sensing (CS)-based iterative algorithm with a dual-constraint strategy for more accurate DTS reconstruction. We implemented the proposed algorithm and performed a systematic simulation and experiment to demonstrate its viability. Our results indicate that the proposed method considerably reduces the computational cost of iterative DTS reconstruction while keeping the image quality inside the ROI largely undegraded. A binning size of 2 × 2 × 2 required only about 31.9% of the computational memory and about 2.6% of the reconstruction time compared to the unbinned case. The reconstruction quality was evaluated in terms of the root-mean-square error (RMSE), the contrast-to-noise ratio (CNR), and the universal quality index (UQI).
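The memory saving of such a dual-resolution voxelization is easy to estimate: 2 × 2 × 2 binning keeps 1/8 of the voxels outside the ROI, plus the full-resolution ROI. A small sketch; the array size and ROI placement are arbitrary, not the paper's geometry:

```python
import numpy as np

def bin2x2x2(vol):
    """Average-bin a volume by 2 along each axis (even shape assumed)."""
    z, y, x = vol.shape
    return vol.reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))

vol = np.random.default_rng(1).random((64, 64, 64))
roi = vol[24:40, 24:40, 24:40]      # 16^3 region kept at full resolution
coarse = bin2x2x2(vol)              # everything else represented coarsely

voxel_ratio = (coarse.size + roi.size) / vol.size   # fraction of voxels stored
```

For this geometry the dual-resolution representation stores about 14% of the original voxels, which is consistent in spirit with the large memory reduction the abstract reports.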

  18. Accelerated T1ρ acquisition for knee cartilage quantification using compressed sensing and data-driven parallel imaging: A feasibility study.

    PubMed

    Pandit, Prachi; Rivoire, Julien; King, Kevin; Li, Xiaojuan

    2016-03-01

    Quantitative T1ρ imaging is beneficial for early detection of osteoarthritis but has seen limited clinical use due to long scan times. In this study, we evaluated the feasibility of accelerated T1ρ mapping for knee cartilage quantification using a combination of compressed sensing (CS) and data-driven parallel imaging (ARC: Autocalibrating Reconstruction for Cartesian sampling). A sequential combination of ARC and CS, during both data acquisition and reconstruction, was used to accelerate the acquisition of T1ρ maps. Phantom, ex vivo (porcine knee), and in vivo (human knee) imaging was performed on a GE 3T MR750 scanner. T1ρ quantification after CS-accelerated acquisition was compared with non-CS-accelerated acquisition for various cartilage compartments. Accelerating image acquisition using CS did not introduce major deviations in quantification. The coefficient of variation for the root mean squared error increased with increasing acceleration, but for in vivo measurements it stayed under 5% for a net acceleration factor up to 2, where the acquisition was 25% faster than the reference (ARC only). To the best of our knowledge, this is the first implementation of CS for in vivo T1ρ quantification. These early results show that this technique holds great promise in making quantitative imaging techniques more accessible for clinical applications. © 2015 Wiley Periodicals, Inc.

  19. Output MSE and PSNR prediction in DCT-based lossy compression of remote sensing images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2017-10-01

    The amount and size of remote sensing (RS) images acquired by modern systems are so large that the data have to be compressed for transfer, storage and dissemination, and lossy compression is increasingly popular for these purposes. However, lossy compression has to be applied carefully, keeping the introduced distortions at an acceptable level so that valuable information in the data is not lost. The introduced losses therefore have to be controlled and predicted, which is problematic for many coders. In this paper, we analyze the possibility of predicting the mean square error or, equivalently, the PSNR for coders based on the discrete cosine transform (DCT), applied either to single-channel RS images or to multichannel data in a component-wise manner. The proposed approach is based on the direct dependence between the distortions introduced by DCT coefficient quantization and the losses in the compressed data. A further innovation is the possibility of employing only a limited number (percentage) of blocks for which the DCT coefficients have to be calculated. This accelerates prediction and makes it considerably faster than the compression itself. There are two other advantages of the proposed approach. First, it is applicable to both uniform and non-uniform quantization of DCT coefficients. Second, the approach is quite general, since it works for several analyzed DCT-based coders. The simulation results are obtained for standard test images and then verified for real-life RS data.
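Because an orthonormal DCT preserves energy (Parseval), the mean square error introduced by quantizing a block's DCT coefficients equals the spatial-domain MSE of that block, so averaging the transform-domain quantization error over a subset of blocks predicts the output MSE/PSNR without running the full coder. A sketch of this idea under uniform quantization; the block size, quantization step and sampling fraction are illustrative, not the paper's settings:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C * np.sqrt(2.0 / n)

def predict_mse_psnr(img, q=16.0, fraction=0.25, seed=0):
    """Predict MSE/PSNR from the DCT-coefficient quantization error in a
    random subset of 8x8 blocks."""
    C = dct_matrix()
    h, w = img.shape[0] - img.shape[0] % 8, img.shape[1] - img.shape[1] % 8
    blocks = [img[i:i+8, j:j+8] for i in range(0, h, 8) for j in range(0, w, 8)]
    rng = np.random.default_rng(seed)
    picks = rng.choice(len(blocks), max(1, int(fraction * len(blocks))), replace=False)
    errs = []
    for idx in picks:
        coef = C @ blocks[idx] @ C.T
        qerr = coef - q * np.round(coef / q)   # uniform quantization error
        errs.append(np.mean(qerr ** 2))        # equals the spatial MSE (Parseval)
    mse = float(np.mean(errs))
    psnr = 10.0 * np.log10(255.0 ** 2 / mse)
    return mse, psnr

img = np.random.default_rng(1).uniform(0, 255, (64, 64))
mse, psnr = predict_mse_psnr(img)
```

Using only a fraction of the blocks is exactly the acceleration the abstract describes: the predictor never has to entropy-code or decode anything.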

  20. An Improved STARFM with Help of an Unmixing-Based Method to Generate High Spatial and Temporal Resolution Remote Sensing Data in Complex Heterogeneous Regions.

    PubMed

    Xie, Dengfeng; Zhang, Jinshui; Zhu, Xiufang; Pan, Yaozhong; Liu, Hongli; Yuan, Zhoumiqi; Yun, Ya

    2016-02-05

    Remote sensing technology plays an important role in monitoring rapid changes of the Earth's surface. However, no sensor yet designed can simultaneously provide satellite images with both high temporal and high spatial resolution. This paper proposes an improved spatial and temporal adaptive reflectance fusion model (STARFM) aided by an unmixing-based method (USTARFM) to generate the high spatial and temporal resolution data needed for the study of heterogeneous areas. The results showed that USTARFM had higher accuracy than STARFM in two aspects of analysis: individual bands and heterogeneity. Taking the predicted NIR band as an example, the correlation coefficients (r) for the USTARFM, STARFM and unmixing methods were 0.96, 0.95 and 0.90, respectively (p-value < 0.001); Root Mean Square Error (RMSE) values were 0.0245, 0.0300 and 0.0401, respectively; and ERGAS values were 0.5416, 0.6507 and 0.8737, respectively. USTARFM showed consistently higher performance than STARFM as the degree of heterogeneity ranged from 2 to 10, highlighting its capacity to solve the data fusion problems faced when using STARFM. Additionally, USTARFM achieved better performance than STARFM at smaller window sizes thanks to its quantitative representation of the heterogeneous land surface.

  1. Motion-compensated compressed sensing for dynamic contrast-enhanced MRI using regional spatiotemporal sparsity and region tracking: Block LOw-rank Sparsity with Motion-guidance (BLOSM)

    PubMed Central

    Chen, Xiao; Salerno, Michael; Yang, Yang; Epstein, Frederick H.

    2014-01-01

    Purpose: Dynamic contrast-enhanced MRI of the heart is well-suited for acceleration with compressed sensing (CS) due to its spatiotemporal sparsity; however, respiratory motion can degrade sparsity and lead to image artifacts. We sought to develop a motion-compensated CS method for this application. Methods: A new method, Block LOw-rank Sparsity with Motion-guidance (BLOSM), was developed to accelerate first-pass cardiac MRI, even in the presence of respiratory motion. This method divides the images into regions, tracks the regions through time, and applies matrix low-rank sparsity to the tracked regions. BLOSM was evaluated using computer simulations and first-pass cardiac datasets from human subjects. Using rate-4 acceleration, BLOSM was compared to other CS methods such as k-t SLR, which employs matrix low-rank sparsity applied to the whole image dataset, with and without motion tracking, and to k-t FOCUSS with motion estimation and compensation, which employs spatial and temporal-frequency sparsity. Results: BLOSM was qualitatively shown to reduce respiratory artifacts compared to other methods. Quantitatively, using root mean squared error and the structural similarity index, BLOSM was superior to the other methods. Conclusion: BLOSM, which exploits regional low-rank structure and uses region tracking for motion compensation, provides improved image quality for CS-accelerated first-pass cardiac MRI. PMID:24243528

  2. Building on crossvalidation for increasing the quality of geostatistical modeling

    USGS Publications Warehouse

    Olea, R.A.

    2012-01-01

    The random function is a mathematical model commonly used in the assessment of uncertainty associated with a spatially correlated attribute that has been partially sampled. There are multiple algorithms for modeling such random functions, all sharing the requirement of specifying various parameters that critically influence the results. The importance of finding ways to compare the methods and set parameters so that the results better model uncertainty has increased as these algorithms have grown in number and complexity. Crossvalidation has been used in spatial statistics, mostly in kriging, for the analysis of mean square errors. An appeal of this approach is its ability to work with the same empirical sample available for running the algorithms. This paper goes beyond checking estimates by formulating a function sensitive to conditional bias. Under ideal conditions, such a function turns into a straight line, which can be used as a reference for preparing measures of performance. Applied to kriging, deviations from the ideal line provide a sensitivity to the semivariogram that is lacking in crossvalidation of kriging errors, and they are more sensitive to conditional bias than analyses of errors. For stochastic simulation, in addition to finding better parameters, the deviations allow comparison of the realizations resulting from the application of different methods. Examples show improvements of about 30% in the deviations and approximately 10% in the square root of mean square errors between a reasonable starting model and the solutions obtained according to the new criteria. © 2011 US Government.

  3. A modified JPEG-LS lossless compression method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua

    2015-12-01

    Like many variable-length source coders, JPEG-LS is highly vulnerable to the channel errors that occur in the transmission of remote sensing images, and error diffusion is one of the important factors affecting its robustness. The common way of improving the error resilience of JPEG-LS is to divide the image into many strips or blocks and then code each of them independently, but this reduces coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to the image, and the compression efficiency is close to that of conventional JPEG-LS.

  4. Investigation of Ionospheric Spatial Gradients for Gagan Error Correction

    NASA Astrophysics Data System (ADS)

    Chandra, K. Ravi

    In India, the Indian Space Research Organization (ISRO) was established with the objective of developing space technology and applying it to various national tasks. These tasks include the establishment of major space systems such as the Indian National Satellites (INSAT) for communication, television broadcasting and meteorological services, and the Indian Remote Sensing Satellites (IRS). In addition, to cater to the needs of civil aviation, the GPS Aided Geo Augmented Navigation (GAGAN) system is being implemented over the Indian region jointly with the Airports Authority of India (AAI). The parameter most affecting the navigation accuracy of GAGAN is the ionospheric delay, which is a function of the Total Electron Content (TEC): the total number of electrons in a cylinder of one-square-meter cross-section along the line of sight between the satellite and the user on the earth. In equatorial and low-latitude regions such as India, TEC is often quite high with large spatial gradients. Carrier-phase data from the GAGAN network of Indian TEC stations are used to estimate and identify ionospheric spatial gradients in multiple viewing directions. In this paper, vertical ionospheric gradients (σVIG) are calculated for satellite signals arriving from multiple directions, and spatial ionospheric gradients are identified in turn. In addition, estimated temporal gradients, i.e. the rate of TEC index, are also compared. These error contributions can be treated for improved GAGAN system performance.

  5. Incremental Support Vector Machine Framework for Visual Sensor Networks

    NASA Astrophysics Data System (ADS)

    Awad, Mariette; Jiang, Xianhua; Motai, Yuichi

    2006-12-01

    Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of the least-squares SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase, during which the cluster head performs an ensemble of model aggregations based on the sensor node inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single-camera sensing, especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system, which makes it even more attractive for distributed sensor networks.

  6. Analysis of Students' Error in Learning of Quadratic Equations

    ERIC Educational Resources Information Center

    Zakaria, Effandi; Ibrahim; Maat, Siti Mistima

    2010-01-01

    The purpose of the study was to determine students' errors in learning quadratic equations. The samples were 30 Form Three students from a secondary school in Jambi, Indonesia. A diagnostic test was used as the instrument of this study, covering three components: factorization, completing the square and the quadratic formula. Diagnostic interview…

  7. RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.

    PubMed

    Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F

    2016-11-01

    Compressed sensing enables the acquisition of sparse signals at a rate much lower than the Nyquist rate. Compressed sensing initially adopted [Formula: see text] minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have recently been proposed for signal reconstruction at a lower computational complexity compared to the optimal [Formula: see text] minimization, while maintaining good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches, which select either too many or too few values per iteration, RMP aims at selecting the most sufficient number of correlation values per iteration, which improves both the reconstruction time and the error. Furthermore, RMP prunes the estimated signal and hence excludes incorrectly selected values. The RMP algorithm achieves a higher reconstruction accuracy at significantly lower computational complexity than existing greedy recovery algorithms. It is even superior to [Formula: see text] minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
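For contrast with the selection strategies discussed above, a classical greedy recovery baseline (orthogonal matching pursuit, not the proposed RMP) fits in a few lines: pick the column most correlated with the residual, re-fit on the selected support by least squares, and repeat. The dimensions and sparsity level below are arbitrary:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: k greedy atom selections with a
    least-squares re-fit on the growing support after each selection."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(64, 256))
A /= np.linalg.norm(A, axis=0)          # unit-norm dictionary columns
x_true = np.zeros(256)
x_true[[10, 50, 200]] = [1.5, -2.0, 1.0]
x_hat = omp(A, A @ x_true, k=3)         # noiseless 3-sparse recovery
```

Selecting one atom per iteration is the "too few values" extreme the abstract criticizes; RMP's contribution is precisely in choosing how many correlation values to admit per iteration.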

  8. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
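The kernel estimator and the scaling-factor question the abstract refers to can be illustrated in a few lines; the Gaussian kernel and the rule-of-thumb bandwidth used here are standard textbook choices, not the paper's interactive algorithm:

```python
import numpy as np

def kde(sample, x, h):
    """Gaussian kernel density estimate at points x with bandwidth h."""
    u = (x[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
sample = rng.normal(size=500)
x = np.linspace(-4.0, 4.0, 201)
h = 1.06 * sample.std() * len(sample) ** (-0.2)   # rule-of-thumb scaling factor
f_hat = kde(sample, x, h)

true_pdf = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
dx = x[1] - x[0]
ise = ((f_hat - true_pdf) ** 2).sum() * dx        # integrated squared error
```

The integrated squared error computed at the end is the sample analogue of the integrated mean square error criterion the abstract uses to compare the discrete and kernel estimators.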

  9. Discrete-time state estimation for stochastic polynomial systems over polynomial observations

    NASA Astrophysics Data System (ADS)

    Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.

    2018-07-01

    This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations corrupted by additive white Gaussian noise. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show the effectiveness of the proposed filter compared to the extended Kalman filter.
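The two-step structure (a time update, then a measurement update of both the estimate and the error covariance) is easiest to see in the scalar linear special case, where it reduces to the standard Kalman recursion; the system constants below are arbitrary, and this is only the linear instance of the polynomial filter the paper derives:

```python
import numpy as np

def kalman_step(m, P, y, a, c, q, r):
    """One time-update / measurement-update cycle for the scalar linear
    model x' = a*x + w, y = c*x + v with noise variances q and r."""
    m_pred = a * m                            # (a) time update of the estimate
    P_pred = a * a * P + q                    # (a) time update of the covariance
    k = P_pred * c / (c * c * P_pred + r)     # gain
    m_new = m_pred + k * (y - c * m_pred)     # (b) measurement update
    P_new = (1.0 - k * c) * P_pred
    return m_new, P_new

rng = np.random.default_rng(4)
a, c, q, r = 0.95, 1.0, 0.01, 0.25
x, m, P = 0.0, 0.0, 1.0
errs = []
for _ in range(500):
    x = a * x + rng.normal(0.0, np.sqrt(q))   # simulate the state
    y = c * x + rng.normal(0.0, np.sqrt(r))   # noisy observation
    m, P = kalman_step(m, P, y, a, c, q, r)
    errs.append((x - m) ** 2)
```

In the polynomial case the same two steps survive, but the conditional expectations of the polynomial terms replace the linear products above.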

  10. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    PubMed

    Wang, Rong

    2015-01-01

    In real-world applications, face images vary with illumination, facial expression, and pose, and more training samples can reveal more of the possible appearances of a face. Though minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples: the mirror faces generated from the original training samples are combined with them into a new training set. The face recognition experiments show that our method achieves high classification accuracy.
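The idea of augmenting the training set with mirror faces before applying a minimum squared error classifier can be sketched with toy data. The ridge-regularized least-squares fit to one-hot targets below is a common form of MSE classification, and the synthetic "faces" are invented for illustration, not the paper's data:

```python
import numpy as np

def train_msec(X, labels, n_classes, lam=1e-3):
    """Minimum squared error classifier: regularized least squares from
    feature vectors to one-hot class targets."""
    Y = np.eye(n_classes)[labels]
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

rng = np.random.default_rng(5)
faces = rng.random((20, 8, 8))                 # toy "face" images, 2 classes
faces[10:, :, :4] += 1.0                       # class 1: brighter left half
labels = np.array([0] * 10 + [1] * 10)

mirrored = faces[:, :, ::-1]                   # horizontally flipped virtual samples
X = np.concatenate([faces, mirrored]).reshape(40, -1)
y = np.concatenate([labels, labels])

W = train_msec(X, y, 2)
pred = np.argmax(X @ W, axis=1)
train_acc = (pred == y).mean()
```

Mirroring doubles the sample count at zero acquisition cost, which is exactly the relief the paper proposes for the small-sample problem of MSEC.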

  11. Cortical dipole imaging using truncated total least squares considering transfer matrix error.

    PubMed

    Hori, Junichi; Takeuchi, Kosuke

    2013-01-01

    Cortical dipole imaging has been proposed as a method to visualize the electroencephalogram at high spatial resolution. We investigated an inverse technique for cortical dipole imaging using truncated total least squares (TTLS). TTLS is a regularization technique that reduces the influence of both measurement noise and transfer-matrix error caused by head-model distortion. The estimation of the regularization parameter, based on the L-curve, was also investigated. Computer simulation suggested that the estimation accuracy was improved by TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials, confirming that TTLS provides high spatial resolution in cortical dipole imaging.
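Truncated TLS itself is a short computation: take the SVD of the augmented matrix [A b], keep the k dominant directions, and form the solution from the discarded right singular vectors (with k equal to the number of unknowns this reduces to ordinary TLS). A generic sketch on synthetic data, unrelated to the EEG transfer matrices of the paper:

```python
import numpy as np

def ttls(A, b, k):
    """Truncated total least squares solution of A x ~ b."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    V = Vt.T
    V12 = V[:n, k:]                   # top block of the discarded singular vectors
    v22 = V[n:, k:]                   # bottom row of the discarded singular vectors
    return (-V12 @ v22.T @ np.linalg.inv(v22 @ v22.T)).ravel()

rng = np.random.default_rng(6)
x_true = np.array([1.0, -2.0, 0.5])
A = rng.normal(size=(50, 3))
b = A @ x_true
# Perturb both the operator (transfer-matrix error) and the data
# (measurement noise): the situation TLS-type methods are built for.
A_noisy = A + rng.normal(0.0, 0.01, A.shape)
b_noisy = b + rng.normal(0.0, 0.01, 50)
x_hat = ttls(A_noisy, b_noisy, k=3)
```

Unlike Tikhonov regularization, which attributes all error to b, the TLS model charges error to the operator as well, which is why TTLS suits the distorted-head-model setting the abstract describes.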

  12. Spectral combination of spherical gravitational curvature boundary-value problems

    NASA Astrophysics Data System (ADS)

    Pitoňák, Martin; Eshagh, Mehdi; Šprlák, Michal; Tenzer, Robert; Novák, Pavel

    2018-04-01

    Four solutions of the spherical gravitational curvature boundary-value problems can be exploited for the determination of the Earth's gravitational potential. In this article we discuss the combination of simulated satellite gravitational curvatures, i.e., components of the third-order gravitational tensor, by merging these solutions using the spectral combination method. For this purpose, integral estimators of biased and unbiased types are derived. In numerical studies, we investigate the performance of the developed mathematical models for gravitational field modelling in the area of Central Europe based on simulated satellite measurements. Firstly, we verify the correctness of the integral estimators for the spectral downward continuation by a closed-loop test. Estimated errors of the combined solution are about eight orders of magnitude smaller than those from the individual solutions. Secondly, we perform a numerical experiment considering Gaussian noise with a standard deviation of 6.5 × 10^-17 m^-1 s^-2 in the input data at a satellite altitude of 250 km above the mean Earth sphere, equivalent to a signal-to-noise ratio of 10. Superior results with respect to the global geopotential model TIM-r5 are obtained by the spectral downward continuation of the vertical-vertical-vertical component, with a standard deviation of 2.104 m^2 s^-2, but the root mean square error is the largest and reaches 9.734 m^2 s^-2. Using the spectral combination of all gravitational curvatures, the root mean square error is more than 400 times smaller, but the standard deviation reaches 17.234 m^2 s^-2. Combining more components decreases the root mean square error of the corresponding solutions, while the standard deviations of the combined solutions do not improve compared to the solution from the vertical-vertical-vertical component alone. The presented method represents a weighted mean in the spectral domain that minimizes the root mean square error of the combined solutions and improves the standard deviation of the solution based only on the least accurate components.

  13. Turn on ESPT: novel salicylaldehyde based sensor for biological important fluoride sensing.

    PubMed

    Liu, Kai; Zhao, Xiaojun; Liu, Qingxiang; Huo, Jianzhong; Fu, Huifang; Wang, Ying

    2014-09-05

    A novel and simple salicylaldehyde-based fluorescent anion sensor 1 has been designed that selectively senses fluoride by 'turn-on' excited-state intermolecular proton transfer (ESPT). The binding constant and the stoichiometry were obtained by non-linear least-squares analysis of the titration curves. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Air Temperature Error Correction Based on Solar Radiation in an Economical Meteorological Wireless Sensor Network

    PubMed Central

    Sun, Xingming; Yan, Shuangshuang; Wang, Baowei; Xia, Li; Liu, Qi; Zhang, Hui

    2015-01-01

    Air temperature (AT) is a vital factor in meteorology, agriculture, the military, etc., being used for the prediction of weather disasters such as drought, flood, and frost. Many efforts have been made to monitor the temperature of the atmosphere, for example with automatic weather stations (AWS). Nevertheless, owing to the high cost of specialized AT sensors, they cannot be deployed at high spatial density. A meteorological wireless sensor network relying on inexpensive sensing nodes has been proposed to reduce the cost of AT monitoring. However, the temperature sensor on a sensing node is easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT by taking SR into consideration. In this work, we analyzed all of the AT and SR data collected in May 2014 and found a numerical correspondence between the AT error (ATE) and SR. This corresponding relation was used to calculate the real-time ATE from real-time SR and to correct the error of AT in other months. PMID:26213941

  15. Air Temperature Error Correction Based on Solar Radiation in an Economical Meteorological Wireless Sensor Network.

    PubMed

    Sun, Xingming; Yan, Shuangshuang; Wang, Baowei; Xia, Li; Liu, Qi; Zhang, Hui

    2015-07-24

    Air temperature (AT) is a vital factor in meteorology, agriculture, the military, etc., being used for the prediction of weather disasters such as drought, flood, and frost. Many efforts have been made to monitor the temperature of the atmosphere, for example with automatic weather stations (AWS). Nevertheless, owing to the high cost of specialized AT sensors, they cannot be deployed at high spatial density. A meteorological wireless sensor network relying on inexpensive sensing nodes has been proposed to reduce the cost of AT monitoring. However, the temperature sensor on a sensing node is easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT by taking SR into consideration. In this work, we analyzed all of the AT and SR data collected in May 2014 and found a numerical correspondence between the AT error (ATE) and SR. This corresponding relation was used to calculate the real-time ATE from real-time SR and to correct the error of AT in other months.
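
    A minimal sketch of this kind of correction, assuming (as a stand-in for the authors' fitted correspondence) a simple least-squares line between ATE and SR; the calibration data below are synthetic:

```python
import numpy as np

# Hedged sketch, not the authors' exact procedure: learn a numerical
# correspondence between air-temperature error (ATE) and solar radiation (SR)
# from one month of paired data, then subtract the predicted ATE later on.
rng = np.random.default_rng(0)

sr_may = rng.uniform(0, 1000, 200)                        # SR, W/m^2 (synthetic)
ate_may = 0.004 * sr_may + 0.5 + rng.normal(0, 0.1, 200)  # sensed minus true AT

# Fit ATE ≈ a*SR + b on the calibration month
a, b = np.polyfit(sr_may, ate_may, 1)

def correct_at(at_sensed, sr_now):
    """Remove the SR-predicted error from a raw temperature reading."""
    return at_sensed - (a * sr_now + b)

# A later reading: true AT 25.0 °C, sensed with the same SR-induced bias
sr_now = 600.0
at_true = 25.0
at_sensed = at_true + 0.004 * sr_now + 0.5
at_corr = correct_at(at_sensed, sr_now)
```

    The corrected reading recovers the true temperature to within the residual scatter of the fitted relation.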

  16. Distribution of kriging errors, the implications and how to communicate them

    NASA Astrophysics Data System (ADS)

    Li, Hong Yi; Milne, Alice; Webster, Richard

    2016-04-01

    Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which itself is estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there and compares the kriged value with the value observed to obtain the deviation, or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σ_K^2), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σ_K^2 ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, and so the kriging variances are unbiased predictors of the squared error across the region. The method is based on the assumption that the errors have a normal distribution. The squared deviation ratio (SDR) should therefore be distributed as χ^2 with one degree of freedom, with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites.
The uncertainty is typically under-estimated for the extreme observations and compensated for by overestimating it for other observations. Statisticians must tell users of this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (ECa) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed ECas were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic, with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993, but with a MedSDR of 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the 0 of a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.

  17. Distribution of kriging errors, the implications and how to communicate them

    NASA Astrophysics Data System (ADS)

    Li, HongYi; Milne, Alice; Webster, Richard

    2015-04-01

    Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which itself is estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there and compares the kriged value with the value observed to obtain the deviation, or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σ_K^2), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σ_K^2 ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, and so the kriging variances are unbiased predictors of the squared error across the region. The method is based on the assumption that the errors have a normal distribution. The squared deviation ratio (SDR) should therefore be distributed as χ^2 with one degree of freedom, with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites.
The uncertainty is typically under-estimated for the extreme observations and compensated for by overestimating it for other observations. Statisticians must tell users of this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (ECa) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed ECas were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic, with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993, but with a MedSDR of 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the 0 of a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.
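
    The cross-validation diagnostics described above can be sketched as follows, assuming the leave-one-out errors and kriging variances have already been produced by a geostatistics package (here they are simulated so that the errors are exactly normal):

```python
import numpy as np

# Sketch of the ME / MSDR / MedSDR diagnostics, given leave-one-out
# cross-validation errors e_i and the matching kriging variances s2_i.
def cv_diagnostics(errors, kriging_variances):
    e = np.asarray(errors, dtype=float)
    s2 = np.asarray(kriging_variances, dtype=float)
    sdr = e**2 / s2                      # squared deviation ratios
    return {
        "ME": e.mean(),                  # should be near 0 (unbiasedness)
        "MSDR": sdr.mean(),              # should be near 1
        "MedSDR": np.median(sdr),        # near 0.455 if errors are normal
    }

rng = np.random.default_rng(42)
s2 = rng.uniform(0.5, 2.0, 5000)         # simulated kriging variances
e = rng.normal(0.0, np.sqrt(s2))         # normal errors consistent with s2
d = cv_diagnostics(e, s2)
```

    For truly normal errors the MSDR comes out near 1 and the MedSDR near the χ^2(1) median of 0.455; a MedSDR well below 0.455 while MSDR ≈ 1 is the leptokurtic signature the abstract describes.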

  18. An improved triple collocation algorithm for decomposing autocorrelated and white soil moisture retrieval errors

    USDA-ARS?s Scientific Manuscript database

    If not properly accounted for, auto-correlated errors in observations can lead to inaccurate results in soil moisture data analysis and reanalysis. Here, we propose a more generalized form of the triple collocation algorithm (GTC) capable of decomposing the total error variance of remotely-sensed surf...
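
    For context, a minimal sketch of the classical (white-error) triple collocation estimate that GTC generalizes; the covariance notation and the synthetic soil-moisture series are illustrative only, and the manuscript's autocorrelated-error decomposition is not reproduced here:

```python
import numpy as np

# Classical triple collocation: three collocated series measuring the same
# signal with mutually independent, zero-mean errors (and, in this simple
# sketch, no relative scaling). Error variances follow from the covariances.
def triple_collocation(x, y, z):
    C = np.cov(np.vstack([x, y, z]))
    ex = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]   # error variance of x
    ey = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]   # error variance of y
    ez = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]   # error variance of z
    return ex, ey, ez

rng = np.random.default_rng(1)
truth = rng.normal(0, 1, 20000)           # unknown soil-moisture signal
x = truth + rng.normal(0, 0.3, 20000)     # e.g. satellite retrieval
y = truth + rng.normal(0, 0.4, 20000)     # e.g. model estimate
z = truth + rng.normal(0, 0.2, 20000)     # e.g. in situ sensor
ex, ey, ez = triple_collocation(x, y, z)
```

    The recovered error variances approximate the true values (0.09, 0.16, 0.04); when errors are autocorrelated rather than white, these covariance identities break down, which motivates the generalized form.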

  19. Evaluation of LiDAR-Acquired Bathymetric and Topographic Data Accuracy in Various Hydrogeomorphic Settings in the Lower Boise River, Southwestern Idaho, 2007

    USGS Publications Warehouse

    Skinner, Kenneth D.

    2009-01-01

    Elevation data in riverine environments can be used in various applications for which different levels of accuracy are required. The Experimental Advanced Airborne Research LiDAR (Light Detection and Ranging) - or EAARL - system was used to obtain topographic and bathymetric data along the lower Boise River, southwestern Idaho, for use in hydraulic and habitat modeling. The EAARL data were post-processed into bare earth and bathymetric raster and point datasets. Concurrently with the EAARL data collection, real-time kinematic global positioning system and total station ground-survey data were collected in three areas within the lower Boise River basin to assess the accuracy of the EAARL elevation data in different hydrogeomorphic settings. The accuracies of the EAARL-derived elevation data, determined in open, flat terrain to provide an optimal vertical comparison surface, had root mean square errors ranging from 0.082 to 0.138 m. Accuracies for bank, floodplain, and in-stream bathymetric data had root mean square errors ranging from 0.090 to 0.583 m. The greater root mean square errors for the latter data are the result of high levels of turbidity in the downstream ground-survey area, dense tree canopy, and horizontal location discrepancies between the EAARL and ground-survey data in steeply sloping areas such as riverbanks. The EAARL point to ground-survey comparisons produced results similar to those for the EAARL raster to ground-survey comparisons, indicating that the interpolation of the EAARL points to rasters did not introduce significant additional error. The mean percent error for the wetted cross-sectional areas of the two upstream ground-survey areas was 1 percent. The mean percent error increases to -18 percent if the downstream ground-survey area is included, reflecting the influence of turbidity in that area.
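
    The two accuracy measures used in this assessment can be sketched directly; the elevation and area values below are invented stand-ins for the EAARL and ground-survey data:

```python
import numpy as np

# Root mean square error of paired elevations, and percent error of a wetted
# cross-sectional area, as used in LiDAR accuracy assessments.
def rmse(predicted, observed):
    d = np.asarray(predicted, dtype=float) - np.asarray(observed, dtype=float)
    return np.sqrt(np.mean(d**2))

def percent_error(estimated_area, surveyed_area):
    return 100.0 * (estimated_area - surveyed_area) / surveyed_area

lidar_z = np.array([701.10, 702.35, 703.02, 701.88])    # LiDAR elevations, m
survey_z = np.array([701.02, 702.41, 702.95, 701.95])   # ground survey, m
r = rmse(lidar_z, survey_z)
pe = percent_error(95.0, 100.0)   # LiDAR underestimates the wetted area
```

    A negative percent error, as in the downstream reach above, indicates the remotely sensed area underestimates the surveyed one.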

  20. Recursive least-squares learning algorithms for neural networks

    NASA Astrophysics Data System (ADS)

    Lewis, Paul S.; Hwang, Jenq N.

    1990-11-01

    This paper presents the development of a pair of recursive least-squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second-order information about the training error surface in order to achieve faster learning rates than are possible using first-order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation, either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is O(N^2), where N is the number of network parameters. This is due to the estimation of the N × N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can be easily derived by using only block-diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two-dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6…). 1 BACKGROUND Artificial neural networks (ANNs) offer an interesting and potentially useful paradigm for signal processing and pattern recognition. The majority of ANN applications employ the feed-forward multilayer perceptron (MLP) network architecture, in which network parameters are "trained" by a supervised learning algorithm employing the generalized delta rule (GDR) [1, 2]. The GDR algorithm approximates a fixed-step steepest descent algorithm using derivatives computed by error backpropagation.
The GDR algorithm is sometimes referred to as the backpropagation algorithm. However, in this paper we will use the term backpropagation to refer only to the process of computing error derivatives. While multilayer perceptrons provide a very powerful nonlinear modeling capability, GDR training can be very slow and inefficient. In linear adaptive filtering the analog of the GDR algorithm is the least-mean-squares (LMS) algorithm. Steepest descent-based algorithms such as GDR or LMS are first-order because they use only first derivative, or gradient, information about the training error to be minimized. To speed up the training process, second-order algorithms may be employed that take advantage of second derivative, or Hessian matrix, information. Second-order information can be incorporated into MLP training in different ways. In many applications, especially in the area of pattern recognition, the training set is finite. In these cases block learning can be applied using standard nonlinear optimization techniques [3, 4, 5].
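
    The core recursion can be sketched for the linear case to which each linearized training pattern reduces; this is a generic RLS update with an assumed forgetting factor, not the authors' exact multilayer formulation. The matrix P plays the role of the N × N inverse-Hessian estimate, which is where the O(N^2) cost per update comes from:

```python
import numpy as np

def rls_step(w, P, x, d, lam=0.99):
    """One recursive least-squares update.

    w: parameter vector; P: inverse correlation (Hessian-like) matrix;
    x: input (or linearized gradient) vector for this pattern;
    d: desired output; lam: forgetting factor.
    """
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    e = d - w @ x                      # a priori error
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam    # O(N^2) rank-one update of P
    return w, P

# Identify a small linear model online from noisy pattern-by-pattern data
rng = np.random.default_rng(0)
w_true = np.array([1.5, -2.0, 0.5])
w = np.zeros(3)
P = np.eye(3) * 100.0                  # large initial P = weak prior
for _ in range(500):
    x = rng.normal(size=3)
    d = w_true @ x + rng.normal(0, 0.01)
    w, P = rls_step(w, P, x, d)
```

    Restricting P to block-diagonal form, as the abstract notes, partitions the parameters into independent sets and cuts the per-update cost below O(N^2).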

Top