Sample records for squared error (MSE)

  1. Some Results on Mean Square Error for Factor Score Prediction

    ERIC Educational Resources Information Center

    Krijnen, Wim P.

    2006-01-01

    For the confirmatory factor model a series of inequalities is given with respect to the mean square error (MSE) of three main factor score predictors. The eigenvalues of these MSE matrices are a monotonic function of the eigenvalues of the matrix $\Gamma_\rho = \Theta^{1/2} \Lambda_\rho' \Psi_\rho^{\dots}$…

  2. Incorporation of prior information on parameters into nonlinear regression groundwater flow models: 2. Applications

    USGS Publications Warehouse

    Cooley, Richard L.

    1983-01-01

    This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.
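
    As a rough Python illustration of the ridge idea described above (a sketch on synthetic data, not Cooley's procedure; the paper's parameter scaling and prior-bias criteria are omitted), parameter estimates can be pulled toward prior values of unknown reliability by a ridge parameter, and the parameter MSE traced as that parameter varies:

      # Sketch (not Cooley's exact method): ridge regression toward a prior,
      # tracing estimated-parameter MSE against the ridge parameter k.
      import numpy as np

      rng = np.random.default_rng(0)
      n, p = 60, 4
      X = rng.normal(size=(n, p))
      beta_true = np.array([1.0, -2.0, 0.5, 3.0])
      y = X @ beta_true + rng.normal(scale=1.0, size=n)
      beta_prior = beta_true + rng.normal(scale=0.3, size=p)  # prior of unknown reliability

      for k in [0.0, 0.01, 0.1, 1.0, 10.0]:           # ridge parameter
          A = X.T @ X + k * np.eye(p)
          beta_hat = np.linalg.solve(A, X.T @ y + k * beta_prior)
          mse = np.mean((beta_hat - beta_true) ** 2)  # computable here only because beta_true is synthetic
          print(f"k={k:6.2f}  MSE={mse:.4f}")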

  3. Joint Transmitter and Receiver Power Allocation under Minimax MSE Criterion with Perfect and Imperfect CSI for MC-CDMA Transmissions

    NASA Astrophysics Data System (ADS)

    Kotchasarn, Chirawat; Saengudomlert, Poompat

    We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. For comparison, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.

  4. Generalized Skew Coefficients of Annual Peak Flows for Rural, Unregulated Streams in West Virginia

    USGS Publications Warehouse

    Atkins, John T.; Wiley, Jeffrey B.; Paybins, Katherine S.

    2009-01-01

    Generalized skew was determined from analysis of records from 147 streamflow-gaging stations in or near West Virginia. The analysis followed guidelines established by the Interagency Advisory Committee on Water Data described in Bulletin 17B, except that stations having 50 or more years of record were used instead of stations meeting the less restrictive recommendation of 25 or more years of record. The generalized-skew analysis included contouring, averaging, and regression of station skews. The best method was considered the one with the smallest mean square error (MSE). MSE is defined here as the squared differences between each individual logarithm (base 10) of peak flow and the mean of all individual logarithms of peak flow, summed and divided by the number of peaks. Contouring of station skews was the best method for determining generalized skew for West Virginia, with an MSE of about 0.2174. This MSE is an improvement over the MSE of about 0.3025 for the national map presented in Bulletin 17B.
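
    A direct Python transcription of the MSE definition quoted above (the peak-flow values are made up for illustration):

      # MSE per the stated definition: squared deviations of individual log10
      # peak flows from their mean, summed and divided by the number of peaks.
      import numpy as np

      peaks = np.array([1200.0, 950.0, 2100.0, 1780.0, 640.0])  # hypothetical annual peaks
      logs = np.log10(peaks)
      mse = np.mean((logs - logs.mean()) ** 2)
      print(f"MSE = {mse:.4f}")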

  5. Weighted-MSE based on saliency map for assessing video quality of H.264 video streams

    NASA Astrophysics Data System (ADS)

    Boujut, H.; Benois-Pineau, J.; Hadar, O.; Ahmed, T.; Bonnet, P.

    2011-01-01

    The human visual system is very complex and has been studied for many years, specifically for the purpose of efficient encoding of visual content, e.g., video from digital TV. There is physiological and psychological evidence that viewers do not pay equal attention to all exposed visual information, but focus only on certain areas, known as focus of attention (FOA) or saliency regions. In this work, we propose a novel saliency-based objective quality assessment metric for assessing the perceptual quality of decoded video sequences affected by transmission errors and packet losses. The proposed method weights the Mean Square Error (MSE) at each pixel according to the computed saliency map, yielding a Weighted MSE (WMSE). Our method was validated through subjective quality experiments.
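
    A minimal sketch of the weighting idea, assuming only that the saliency map supplies a per-pixel weight; the saliency model itself is not reproduced here:

      # Saliency-weighted MSE: squared errors are weighted by a saliency map
      # before averaging. All arrays here are synthetic stand-ins.
      import numpy as np

      rng = np.random.default_rng(1)
      ref = rng.random((64, 64))                           # reference frame
      dec = ref + rng.normal(scale=0.05, size=ref.shape)   # decoded frame with errors
      saliency = rng.random(ref.shape)                     # per-pixel saliency weights in [0, 1]

      wmse = np.sum(saliency * (ref - dec) ** 2) / np.sum(saliency)
      mse = np.mean((ref - dec) ** 2)
      print(f"MSE={mse:.5f}  WMSE={wmse:.5f}")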

  6. Decomposition of the Mean Squared Error and NSE Performance Criteria: Implications for Improving Hydrological Modelling

    NASA Technical Reports Server (NTRS)

    Gupta, Hoshin V.; Kling, Harald; Yilmaz, Koray K.; Martinez-Baquero, Guillermo F.

    2009-01-01

    The mean squared error (MSE) and the related normalization, the Nash-Sutcliffe efficiency (NSE), are the two criteria most widely used for calibration and evaluation of hydrological models with observed data. Here, we present a diagnostically interesting decomposition of NSE (and hence MSE), which facilitates analysis of the relative importance of its different components in the context of hydrological modelling, and show how model calibration problems can arise due to interactions among these components. The analysis is illustrated by calibrating a simple conceptual precipitation-runoff model to daily data for a number of Austrian basins having a broad range of hydro-meteorological characteristics. Evaluation of the results clearly demonstrates the problems that can be associated with any calibration based on the NSE (or MSE) criterion. While we propose and test an alternative criterion that can help to reduce model calibration problems, the primary purpose of this study is not to present an improved measure of model performance. Instead, we seek to show that there are systematic problems inherent with any optimization based on formulations related to the MSE. The analysis and results have implications for the manner in which we calibrate and evaluate environmental models; we discuss these and suggest possible ways forward that may move us towards an improved and diagnostically meaningful approach to model performance evaluation and identification.
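
    For reference, the decomposition discussed here can be written as NSE = 2αr − α² − βn², where α is the ratio of simulated to observed standard deviations, βn is the bias normalized by the observed standard deviation, and r is the linear correlation. A short Python check on synthetic data:

      # Verify NSE = 2*alpha*r - alpha**2 - beta_n**2 on synthetic runoff.
      import numpy as np

      rng = np.random.default_rng(2)
      obs = rng.gamma(2.0, 1.5, size=365)                   # synthetic observed runoff
      sim = 0.8 * obs + rng.normal(scale=0.5, size=365)     # synthetic simulated runoff

      r = np.corrcoef(obs, sim)[0, 1]
      alpha = sim.std() / obs.std()
      beta_n = (sim.mean() - obs.mean()) / obs.std()

      nse_direct = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
      nse_decomp = 2 * alpha * r - alpha ** 2 - beta_n ** 2
      print(f"NSE(direct)={nse_direct:.4f}  NSE(decomposed)={nse_decomp:.4f}")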

  7. An experimental study of interstitial lung tissue classification in HRCT images using ANN and role of cost functions

    NASA Astrophysics Data System (ADS)

    Dash, Jatindra K.; Kale, Mandar; Mukhopadhyay, Sudipta; Khandelwal, Niranjan; Prabhakar, Nidhi; Garg, Mandeep; Kalra, Naveen

    2017-03-01

    In this paper, we investigate the effect of the error criterion used during the training phase of an artificial neural network (ANN) on the accuracy of the classifier for classifying lung tissues affected by interstitial lung diseases (ILD). The mean square error (MSE) and cross-entropy (CE) criteria were chosen, being the most popular choices in state-of-the-art implementations. The classification experiment was performed on six ILD patterns from the MedGIFT database, viz. consolidation, emphysema, ground glass opacity, micronodules, fibrosis, and healthy tissue. Texture features from an arbitrary region of interest (AROI) are extracted using Gabor filters. Two neural networks are trained with the scaled conjugate gradient back-propagation algorithm, using the MSE and CE error criteria, respectively, for weight updating. Performance is evaluated in terms of the average accuracy of these classifiers using 4-fold cross-validation. Each network is trained five times for each fold with randomly initialized weight vectors, and accuracies are computed. A significant improvement in classification accuracy is observed when the ANN is trained using CE (67.27%) as the error function compared to MSE (63.60%). Moreover, the standard deviation of the classification accuracy for the network trained with the CE criterion (6.69) is lower than for the network trained with the MSE criterion (10.32).
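
    The two cost functions compared in the paper, evaluated on a single hypothetical softmax output for the six ILD classes (network and features not reproduced):

      # MSE vs. cross-entropy for one one-hot target and one softmax output.
      import numpy as np

      y_true = np.array([0, 0, 1, 0, 0, 0], dtype=float)    # one-hot target (6 ILD classes)
      y_pred = np.array([0.05, 0.05, 0.6, 0.1, 0.1, 0.1])   # softmax output (hypothetical)

      mse = np.mean((y_true - y_pred) ** 2)                 # mean square error criterion
      ce = -np.sum(y_true * np.log(y_pred))                 # cross-entropy criterion
      print(f"MSE={mse:.4f}  CE={ce:.4f}")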

  8. VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal

    NASA Astrophysics Data System (ADS)

    Satheeskumaran, S.; Sabrigiriraj, M.

    2016-06-01

    Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts in the electrocardiogram (ECG) because they require few computations. However, they exhibit a high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed LMS adaptive filter is used to remove artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing, but they are not flexible. With field-programmable gate arrays, pipelined architectures can be used to enhance system performance. A pipelined architecture can improve the operating efficiency of the adaptive filter and reduce power consumption. This technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.
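
    A minimal fixed-step LMS noise canceller in Python conveys the basic mechanism; the paper's variable step-size, delayed, pipelined variant is not reproduced, and all signals are synthetic:

      # LMS adaptive noise cancellation: the filter learns the path from a
      # reference noise input to the noise corrupting the primary signal.
      import numpy as np

      rng = np.random.default_rng(3)
      n, taps, mu = 2000, 8, 0.01
      clean = np.sin(2 * np.pi * 1.2 * np.arange(n) / 360.0)      # synthetic ECG-like signal
      noise_ref = rng.normal(size=n)                              # reference noise input
      noise_in = np.convolve(noise_ref, [0.6, 0.3, 0.1])[:n]      # noise reaching the ECG lead
      primary = clean + noise_in

      w = np.zeros(taps)
      out = np.zeros(n)
      for k in range(taps, n):
          x = noise_ref[k - taps + 1:k + 1][::-1]  # most recent reference samples
          e = primary[k] - w @ x                   # error = primary minus noise estimate
          w += mu * e * x                          # LMS weight update
          out[k] = e                               # error approximates the clean ECG

      print(f"residual MSE = {np.mean((out[n // 2:] - clean[n // 2:]) ** 2):.5f}")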

  9. Vector quantizer designs for joint compression and terrain categorization of multispectral imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Lyons, Daniel F.

    1994-01-01

    Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.

  10. Comparison of Sleep Models for Score Fatigue Model Integration

    DTIC Science & Technology

    2015-04-01

    In order to obtain sleepiness, the Karolinska Sleepiness Scale (KSS) was applied using Equation (8), where a = 10.3… [equation not recoverable from the source record]. Acronyms: KSS, Karolinska Sleepiness Scale; MSE, Mean Square Error; St, homeostatic sleep pressure; TPM, Three-Process Model; U, ultradian component

  11. The modelling of lead removal from water by deep eutectic solvents functionalized CNTs: artificial neural network (ANN) approach.

    PubMed

    Fiyadh, Seef Saadi; AlSaadi, Mohammed Abdulhakim; AlOmar, Mohamed Khalid; Fayaed, Sabah Saadi; Hama, Ako R; Bee, Sharifah; El-Shafie, Ahmed

    2017-11-01

    The main challenge in simulating lead removal is the non-linear relationships among the process parameters. Conventional modelling techniques usually treat this problem with linear methods. An alternative modelling technique is an artificial neural network (ANN) system, selected here to capture the non-linearity in the interactions among the variables. Herein, synthesized deep eutectic solvents were used as a functionalizing agent with carbon nanotubes as adsorbents of Pb²⁺. Different parameters were used in the adsorption study, including pH (2.7 to 7), adsorbent dosage (5 to 20 mg), contact time (3 to 900 min) and Pb²⁺ initial concentration (3 to 60 mg/l). The system was fed and trained with 158 experimental runs carried out at laboratory scale. Two ANN types were designed in this work, the feed-forward back-propagation and layer-recurrent networks; both methods are compared on their predictive proficiency in terms of the mean square error (MSE), root mean square error, relative root mean square error, mean absolute percentage error and determination coefficient (R²) based on the testing dataset. The ANN model of lead removal was subjected to accuracy determination and the results showed an R² of 0.9956 with an MSE of 1.66 × 10⁻⁴. The maximum relative error is 14.93% for the feed-forward back-propagation neural network model.
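
    For reference, the comparison metrics named above can be computed as follows (the values are hypothetical, not the paper's dataset; RRMSE is given in one common normalization):

      # Regression evaluation metrics: MSE, RMSE, RRMSE, MAPE, R^2.
      import numpy as np

      y = np.array([12.0, 30.5, 7.2, 44.1, 25.3])     # measured Pb removal (hypothetical)
      yhat = np.array([11.6, 31.2, 7.9, 43.0, 26.1])  # ANN prediction (hypothetical)

      err = y - yhat
      mse = np.mean(err ** 2)
      rmse = np.sqrt(mse)
      rrmse = rmse / np.mean(y)                       # one common RRMSE definition
      mape = np.mean(np.abs(err / y)) * 100
      r2 = 1 - np.sum(err ** 2) / np.sum((y - y.mean()) ** 2)
      print(f"MSE={mse:.4f} RMSE={rmse:.4f} RRMSE={rrmse:.4f} MAPE={mape:.2f}% R2={r2:.4f}")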

  12. Maritime Adaptive Optics Beam Control

    DTIC Science & Technology

    2010-09-01

    …Liquid Crystal; LMS, Least Mean Square; MIMO, Multiple-Input Multiple-Output; MMDM, Micromachined Membrane Deformable Mirror; MSE, Mean Square Error… determine how the beam is distorted, a control computer to calculate the correction to be applied, and a corrective element, usually a deformable mirror… during this research, an overview of the system modification is provided here. Using additional mirrors and reflecting the beam to and from an…

  13. Monthly ENSO Forecast Skill and Lagged Ensemble Size

    PubMed Central

    DelSole, T.; Tippett, M.K.; Pegion, K.

    2018-01-01

    The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real‐time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real‐time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8–10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skill of the weighted, lagged, and burst ensembles is found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities. PMID:29937973

  14. Monthly ENSO Forecast Skill and Lagged Ensemble Size

    NASA Astrophysics Data System (ADS)

    Trenary, L.; DelSole, T.; Tippett, M. K.; Pegion, K.

    2018-04-01

    The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8-10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skill of the weighted, lagged, and burst ensembles is found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities.
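
    The extrapolation rests on a generic identity: if C is the cross-lead error covariance matrix of the individual lagged members, the MSE of the equally weighted mean of the first n members is (1/n²)·Σᵢⱼ Cᵢⱼ. A toy Python sketch, where the parametric covariance model is invented for illustration:

      # MSE of an equally weighted lagged ensemble from a toy cross-lead
      # error covariance matrix (stand-in for the paper's parametric fit).
      import numpy as np

      lags = np.arange(10)
      C = 0.5 * np.exp(-np.abs(lags[:, None] - lags[None, :]) / 3.0) \
          + 0.5 * np.outer(1 + 0.05 * lags, 1 + 0.05 * lags)  # toy covariance model

      for n in range(1, 11):
          mse = C[:n, :n].sum() / n ** 2
          print(f"lagged ensemble of {n:2d} members: MSE = {mse:.4f}")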

  15. Quantum State Tomography via Linear Regression Estimation

    PubMed Central

    Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan

    2013-01-01

    A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d⁴) where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519

  16. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the character of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. On this basis, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF. Simulation examples and an application in a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method yields a more desirable modeling error PDF, approximating a tall, narrow Gaussian distribution.

  17. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the character of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. On this basis, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF. Simulation examples and an application in a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method yields a more desirable modeling error PDF, approximating a tall, narrow Gaussian distribution.

  18. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE PAGES

    Zhou, Ping; Wang, Chenyu; Li, Mingjie; ...

    2018-01-31

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the character of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. On this basis, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling error PDF track the target PDF. Simulation examples and an application in a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method yields a more desirable modeling error PDF, approximating a tall, narrow Gaussian distribution.

  19. Measuring Dispersion Effects of Factors in Factorial Experiments.

    DTIC Science & Technology

    1988-01-01

    …error is $\mathrm{MSE} = \mathrm{SSE}/(N-p)$, the sum of squares of pure error is $\mathrm{SSPE} = \sum_{i=1}^{n}\sum_{j=1}^{r}(y_{ij}-\bar{y}_i)^2$, and the mean square of pure error is $\mathrm{MSPE} = \mathrm{SSPE}/$… the level of the factor in the ith run is 0. 3.1. First Measure. We have $\mathrm{SSPE} = \sum_{i=1}^{n}\sum_{j=1}^{r}\delta_i(y_{ij}-\bar{y}_i)^2 + \sum_{i=1}^{n}\sum_{j=1}^{r}(1-\delta_i)(y_{ij}-\bar{y}_i)^2$… The first component in SSPE corresponds to level 1 of the factor and has $(\sum_{i=1}^{n}\delta_i)(r-1)$ degrees of freedom. The second component corresponds to…

  20. Sampling for mercury at subnanogram per litre concentrations for load estimation in rivers

    USGS Publications Warehouse

    Colman, J.A.; Breault, R.F.

    2000-01-01

    Estimation of constituent loads in streams requires collection of stream samples that are representative of constituent concentrations, that is, composites of isokinetic multiple verticals collected along a stream transect. An all-Teflon isokinetic sampler (DH-81) cleaned in 75 °C, 4 N HCl was tested using blank, split, and replicate samples to assess systematic and random sample contamination by mercury species. Mean mercury concentrations in field-equipment blanks were low: 0.135 ng·L⁻¹ for total mercury (ΣHg) and 0.0086 ng·L⁻¹ for monomethyl mercury (MeHg). Mean square errors (MSE) for ΣHg and MeHg duplicate samples collected at eight sampling stations were not statistically different from the MSE of samples split in the laboratory, which represents the analytical and splitting error. Low field-blank concentrations and statistically equal duplicate- and split-sample MSE values indicate that no measurable contamination was occurring during sampling. Standard deviations associated with example mercury load estimations were four to five times larger, on a relative basis, than standard deviations calculated from duplicate samples, indicating that the error of the load determination was primarily a function of the loading model used, not of the sampling or analytical methods.

  1. Minimum mean squared error (MSE) adjustment and the optimal Tykhonov-Phillips regularization parameter via reproducing best invariant quadratic uniformly unbiased estimates (repro-BIQUUE)

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard

    2008-02-01

    In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.

  2. 16QAM Blind Equalization via Maximum Entropy Density Approximation Technique and Nonlinear Lagrange Multipliers

    PubMed Central

    Mauda, R.; Pinchas, M.

    2014-01-01

    Recently a new blind equalization method was proposed for the 16QAM constellation input, inspired by the maximum entropy density approximation technique, with improved equalization performance compared to the maximum entropy approach, Godard's algorithm, and others. In addition, an approximated expression for the minimum mean square error (MSE) was obtained. The idea was to find those Lagrange multipliers that bring the approximated MSE to a minimum. Since differentiating the obtained MSE with respect to the Lagrange multipliers leads to a nonlinear equation for them, the part of the MSE expression that caused the nonlinearity was previously ignored. Thus, the Lagrange multipliers obtained were not those that bring the approximated MSE to a minimum. In this paper, we derive a new set of Lagrange multipliers based on the nonlinear expression obtained from minimizing the approximated MSE with respect to the Lagrange multipliers. Simulation results indicate that for the high signal-to-noise ratio (SNR) case, a faster convergence rate is obtained for a channel causing high initial intersymbol interference (ISI), while the same equalization performance is obtained for an easy channel (low initial ISI). PMID:24723813

  3. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.

  4. Demand forecasting of electricity in Indonesia with limited historical data

    NASA Astrophysics Data System (ADS)

    Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif

    2018-03-01

    Demand forecasting of electricity is an important activity for electricity providers, who need a picture of future electricity demand. Electricity demand can be predicted using time series models. In this paper, the double moving average model, Holt's exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The results show that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
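
    A minimal GM(1,1) sketch in Python, following the standard textbook construction rather than the authors' code, with a made-up demand series:

      # GM(1,1): accumulate the series, fit the whitening equation
      # x0(k) = -a*z1(k) + b by least squares, then forecast and difference.
      import numpy as np

      x0 = np.array([102.0, 110.0, 121.0, 130.0, 141.0])  # hypothetical annual demand
      x1 = np.cumsum(x0)                                  # accumulated series
      z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values

      B = np.column_stack([-z1, np.ones_like(z1)])
      a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

      def gm11(k):                                        # k = 0 is the first sample
          if k == 0:
              return x0[0]
          c = x0[0] - b / a                               # time-response coefficient
          return (c * np.exp(-a * k) + b / a) - (c * np.exp(-a * (k - 1)) + b / a)

      fitted = np.array([gm11(k) for k in range(len(x0))])
      print("fitted:", np.round(fitted, 1), " MSE:", round(np.mean((fitted - x0) ** 2), 3))
      print("next year:", round(gm11(len(x0)), 1))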

  5. Optimal Bandwidth for Multitaper Spectrum Estimation

    DOE PAGES

    Haley, Charlotte L.; Anitescu, Mihai

    2017-07-04

    A systematic method for bandwidth parameter selection is desired for Thomson multitaper spectrum estimation. We give a method for determining the optimal bandwidth based on a mean squared error (MSE) criterion. When the true spectrum has a second-order Taylor series expansion, one can express the quadratic local bias as a function of the curvature of the spectrum, which can be estimated by using a simple spline approximation. This is combined with a variance estimate, obtained by jackknifing over individual spectrum estimates, to produce an estimated MSE of the log spectrum estimate for each choice of time-bandwidth product. The bandwidth that minimizes the estimated MSE then gives the desired spectrum estimate. Additionally, the bandwidth obtained using our method is also optimal for cepstrum estimates. We give an example of a damped oscillatory (Lorentzian) process in which the approximate optimal bandwidth can be written as a function of the damping parameter. Furthermore, the true optimal bandwidth agrees well with that given by minimizing the estimated MSE in these examples.

  6. Intelligent fuzzy approach for fast fractal image compression

    NASA Astrophysics Data System (ADS)

    Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila

    2014-12-01

    Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a large number of mean square error (MSE) computations. In this paper, a two-phase algorithm is proposed to reduce the MSE computation of FIC. In the first phase, ranges and domains are arranged based on an edge property. In the second, an imperialist competitive algorithm (ICA) is used according to the classified blocks. To maintain the quality of the retrieved image and accelerate the algorithm, we divided the solutions into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results achieved exhibit performance better than genetic algorithm (GA)-based and full-search algorithms in terms of decreasing the number of MSE computations. The proposed algorithm reduced the number of MSE computations, running 463 times faster than the full-search algorithm, while the quality of the retrieved image did not change considerably.

  7. Low-Complexity Polynomial Channel Estimation in Large-Scale MIMO With Arbitrary Statistics

    NASA Astrophysics Data System (ADS)

    Shariati, Nafiseh; Bjornson, Emil; Bengtsson, Mats; Debbah, Merouane

    2014-10-01

    This paper considers pilot-based channel estimation in large-scale multiple-input multiple-output (MIMO) communication systems, also known as massive MIMO, where there are hundreds of antennas at one side of the link. Motivated by the fact that computational complexity is one of the main challenges in such systems, a set of low-complexity Bayesian channel estimators, coined Polynomial ExpAnsion CHannel (PEACH) estimators, is introduced for arbitrary channel and interference statistics. While the conventional minimum mean square error (MMSE) estimator has cubic complexity in the dimension of the covariance matrices, due to an inversion operation, our proposed estimators significantly reduce this to square complexity by approximating the inverse by an L-degree matrix polynomial. The coefficients of the polynomial are optimized to minimize the mean square error (MSE) of the estimate. We show numerically that near-optimal MSEs are achieved with low polynomial degrees. We also derive the exact computational complexity of the proposed estimators, in terms of floating-point operations (FLOPs), by which we prove that the proposed estimators outperform the conventional estimators in large-scale MIMO systems of practical dimensions while providing reasonable MSEs. Moreover, we show that L need not scale with the system dimensions to maintain a certain normalized MSE. By analyzing different interference scenarios, we observe that the relative MSE loss of using the low-complexity PEACH estimators is smaller in realistic scenarios with pilot contamination. On the other hand, PEACH estimators are not well suited for noise-limited scenarios with high pilot power; therefore, we also introduce the low-complexity diagonalized estimator that performs well in this regime. Finally, we ...
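
    The core trick can be illustrated generically with a scaled Neumann series, which replaces the matrix inversion in an MMSE-style estimate by a low-degree matrix polynomial; this sketch shows the idea on a synthetic SPD matrix, not the exact PEACH construction:

      # Approximate R^{-1} y with an 8-term truncated Neumann series
      # (a degree-7 matrix polynomial in R applied to y), avoiding inversion.
      import numpy as np

      rng = np.random.default_rng(6)
      d = 50
      A = rng.normal(size=(d, d))
      R = A @ A.T / d + np.eye(d)       # covariance-like SPD matrix (signal + noise)
      y = rng.normal(size=d)

      ev = np.linalg.eigvalsh(R)
      alpha = 2.0 / (ev[0] + ev[-1])    # scaling that guarantees convergence

      x_poly = np.zeros(d)
      term = alpha * y
      for _ in range(8):                # 8-term truncated Neumann series
          x_poly += term
          term = term - alpha * (R @ term)

      x_exact = np.linalg.solve(R, y)
      print("relative error:", np.linalg.norm(x_poly - x_exact) / np.linalg.norm(x_exact))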

  8. Gompertzian stochastic model with delay effect to cervical cancer growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazlan, Mazma Syahidatul Ayuni binti; Rosli, Norhayati binti; Bahar, Arifah

    2015-02-03

    In this paper, a Gompertzian stochastic model with time delay is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via the Levenberg-Marquardt optimization method of non-linear least squares. We apply the Milstein scheme for solving the stochastic model numerically. The efficiency of the mathematical model is measured by comparing the simulated result with the clinical data of cervical cancer growth. Low values of the mean-square error (MSE) of the Gompertzian stochastic model with delay effect indicate good fits.

  9. Multi-modulus algorithm based on global artificial fish swarm intelligent optimization of DNA encoding sequences.

    PubMed

    Guo, Y C; Wang, H; Wu, H P; Zhang, M Q

    2015-12-21

    To address the defects of the constant modulus algorithm (CMA) in equalizing multi-modulus signals, namely its large mean square error (MSE) and slow convergence speed, a multi-modulus algorithm (MMA) based on global artificial fish swarm (GAFS) intelligent optimization of DNA encoding sequences (GAFS-DNA-MMA) was proposed. To improve the convergence rate and reduce the MSE, the proposed algorithm adopts an encoding method based on DNA nucleotide chains to provide a possible solution to the problem. Furthermore, the GAFS algorithm, with its fast convergence and global search ability, is used to find the best sequence. The real and imaginary parts of the initial optimal weight vector of the MMA are obtained through DNA coding of the best sequence. The simulation results show that the proposed algorithm has a faster convergence speed and smaller MSE in comparison with the CMA, the MMA, and the AFS-DNA-MMA.

  10. A digital clock recovery algorithm based on chromatic dispersion and polarization mode dispersion feedback dual phase detection for coherent optical transmission systems

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Xin, Xiangjun; Zhang, Lijia; Wang, Fu; Zhang, Qi

    2018-02-01

    A new feedback symbol timing recovery technique using timing estimation joint equalization is proposed for digital receivers with two samples/symbol or higher sampling rates. Unlike traditional methods, the clock recovery algorithm in this paper distinguishes the phases of adjacent symbols so as to accurately estimate the timing offset from adjacent signals with the same phase. The addition of a module for eliminating phase-modulation interference before timing estimation further reduces the variance, resulting in a smoothed timing estimate. The mean square error (MSE) and bit error rate (BER) of the resulting timing estimate are simulated, showing satisfactory estimation performance. The obtained clock tone performance is satisfactory for MQAM modulation formats and roll-off factors (ROF) close to 0. In the back-to-back system, when ROF = 0, the maximum MSE obtained with the proposed approach reaches 0.0125. After 100-km fiber transmission, the BER decreases to 10⁻³ with ROF = 0 and OSNR = 11 dB. As the ROF increases, the MSE and BER performances improve.

  11. Comparison of Two Hybrid Models for Forecasting the Incidence of Hemorrhagic Fever with Renal Syndrome in Jiangsu Province, China.

    PubMed

    Wu, Wei; Guo, Junqiao; An, Shuyi; Guan, Peng; Ren, Yangwu; Xia, Linzi; Zhou, Baosen

    2015-01-01

    Cases of hemorrhagic fever with renal syndrome (HFRS) are widely distributed in eastern Asia, especially in China, Russia, and Korea. It has proved to be a difficult task to eliminate HFRS completely because of the diverse animal reservoirs and the effects of global warming. Reliable forecasting is useful for the prevention and control of HFRS. Two hybrid models, one composed of a nonlinear autoregressive neural network (NARNN) and the autoregressive integrated moving average (ARIMA) model, the other composed of a generalized regression neural network (GRNN) and ARIMA, were constructed to predict the incidence of HFRS in the coming year. The performances of the two hybrid models were compared with the ARIMA model. The ARIMA, ARIMA-NARNN, and ARIMA-GRNN models fitted and predicted the seasonal fluctuation well. Among the three models, the mean square error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) of the ARIMA-NARNN hybrid model were the lowest in both the modeling stage and the forecasting stage. As for the ARIMA-GRNN hybrid model, the MSE, MAE, and MAPE of the modeling performance and the MSE and MAE of the forecasting performance were less than those of the ARIMA model, but the MAPE of the forecasting performance did not improve. Developing and applying the ARIMA-NARNN hybrid model is an effective method to better understand the epidemic characteristics of HFRS and could be helpful in the prevention and control of HFRS.

  12. Comparative study of four time series methods in forecasting typhoid fever incidence in China.

    PubMed

    Zhang, Xingyu; Liu, Yuanyuan; Yang, Min; Zhang, Tao; Young, Alistair A; Li, Xiaosong

    2013-01-01

    Accurate incidence forecasting of infectious disease is critical for early prevention and for better government strategic planning. In this paper, we present a comprehensive study of different forecasting methods based on the monthly incidence of typhoid fever. The seasonal autoregressive integrated moving average (SARIMA) model and three different models inspired by neural networks, namely, back propagation neural networks (BPNN), radial basis function neural networks (RBFNN), and Elman recurrent neural networks (ERNN), were compared. The differences, as well as the advantages and disadvantages, among the SARIMA model and the neural networks were summarized and discussed. The data obtained for 2005 to 2009 and for 2010 from the Chinese Center for Disease Control and Prevention were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The results showed that RBFNN obtained the smallest MAE, MAPE and MSE in both the modeling and forecasting processes. The performances of the four models, ranked in descending order, were: RBFNN, ERNN, BPNN and the SARIMA model.

  13. Comparative Study of Four Time Series Methods in Forecasting Typhoid Fever Incidence in China

    PubMed Central

    Zhang, Xingyu; Liu, Yuanyuan; Yang, Min; Zhang, Tao; Young, Alistair A.; Li, Xiaosong

    2013-01-01

    Accurate incidence forecasting of infectious disease is critical for early prevention and for better government strategic planning. In this paper, we present a comprehensive study of different forecasting methods based on the monthly incidence of typhoid fever. The seasonal autoregressive integrated moving average (SARIMA) model and three different models inspired by neural networks, namely, back propagation neural networks (BPNN), radial basis function neural networks (RBFNN), and Elman recurrent neural networks (ERNN), were compared. The differences, as well as the advantages and disadvantages, among the SARIMA model and the neural networks were summarized and discussed. The data obtained for 2005 to 2009 and for 2010 from the Chinese Center for Disease Control and Prevention were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The results showed that RBFNN obtained the smallest MAE, MAPE and MSE in both the modeling and forecasting processes. The performances of the four models, ranked in descending order, were: RBFNN, ERNN, BPNN and the SARIMA model. PMID:23650546

  14. Robust logistic regression to narrow down the winner's curse for rare and recessive susceptibility variants.

    PubMed

    Kesselmeier, Miriam; Lorenzo Bermejo, Justo

    2017-11-01

    Logistic regression is the most common technique used for genetic case-control association studies. A disadvantage of standard maximum likelihood estimators of the genotype relative risk (GRR) is their strong dependence on outlier subjects, for example, patients diagnosed at an unusually young age. Robust methods are available to constrain outlier influence, but they are scarcely used in genetic studies. This article provides a non-intimidating introduction to robust logistic regression, and investigates its benefits and limitations in genetic association studies. We applied the bounded Huber function and extended the R package 'robustbase' with the re-descending Hampel functions to down-weight outlier influence. Computer simulations were carried out to assess the type I error rate, mean squared error (MSE) and statistical power according to major characteristics of the genetic study and the investigated markers. Simulations were complemented with the analysis of real data. Both standard and robust estimation controlled type I error rates. Standard logistic regression showed the highest power but standard GRR estimates also showed the largest bias and MSE, in particular for associated rare and recessive variants. For illustration, a recessive variant with a true GRR = 6.32 and a minor allele frequency = 0.05 investigated in a 1000-case/1000-control study by standard logistic regression resulted in power = 0.60 and MSE = 16.5. The corresponding figures for Huber-based estimation were power = 0.51 and MSE = 0.53. Overall, Hampel- and Huber-based GRR estimates did not differ much. Robust logistic regression may represent a valuable alternative to standard maximum likelihood estimation when the focus lies on risk prediction rather than identification of susceptibility variants.

  15. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.

  16. Optimal design of multichannel equalizers for the structural similarity index.

    PubMed

    Chai, Li; Sheng, Yuxia

    2014-12-01

    The optimization of multichannel equalizers is studied for the structural similarity (SSIM) criterion. A closed-form formula is provided for the optimal equalizer when the mean of the source is zero. The formula shows that the equalizer with maximal SSIM index is equal to the one with minimal mean square error (MSE) multiplied by a positive real number, which is shown to be equal to the inverse of the achieved SSIM index. The relation of the maximal SSIM index to the minimal MSE is also established for given blurring filters and fixed-length equalizers. An algorithm is also presented to compute the suboptimal equalizer for general sources. Various numerical examples are given to demonstrate the effectiveness of the results.
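
    A scalar, zero-mean toy check of this relation in Python (the full SSIM stability constants are omitted, the "equalizer" is a single gain, and the SSIM-optimal gain is found by brute-force search rather than the paper's closed form):

      # Check: SSIM-optimal gain == MMSE gain divided by the achieved SSIM.
      import numpy as np

      rng = np.random.default_rng(4)
      x = rng.normal(size=20_000)                        # zero-mean source
      y = 0.7 * x + rng.normal(scale=0.5, size=x.size)   # blurred/noisy observation

      g_mmse = np.mean(x * y) / np.mean(y * y)           # minimal-MSE scalar equalizer

      def ssim(u, v):                                    # zero-mean SSIM (structural term)
          return 2 * np.mean(u * v) / (np.mean(u * u) + np.mean(v * v))

      gains = np.linspace(0.1, 3.0, 581)                 # brute-force gain search
      g_ssim = gains[np.argmax([ssim(x, g * y) for g in gains])]

      print(f"g_ssim={g_ssim:.4f}  g_mmse/SSIM={g_mmse / ssim(x, g_ssim * y):.4f}")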

  17. Performance Analysis of Blind Subspace-Based Signature Estimation Algorithms for DS-CDMA Systems with Unknown Correlated Noise

    NASA Astrophysics Data System (ADS)

    Zarifi, Keyvan; Gershman, Alex B.

    2006-12-01

    We analyze the performance of two popular blind subspace-based signature waveform estimation techniques proposed by Wang and Poor and Buzzi and Poor for direct-sequence code division multiple-access (DS-CDMA) systems with unknown correlated noise. Using the first-order perturbation theory, analytical expressions for the mean-square error (MSE) of these algorithms are derived. We also obtain simple high SNR approximations of the MSE expressions which explicitly clarify how the performance of these techniques depends on the environmental parameters and how it is related to that of the conventional techniques that are based on the standard white noise assumption. Numerical examples further verify the consistency of the obtained analytical results with simulation results.

  18. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    NASA Astrophysics Data System (ADS)

    Sekhar, S. Chandra; Sreenivas, TV

    2004-12-01

    We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR).

  19. Robust signal recovery using the prolate spherical wave functions and maximum correntropy criterion

    NASA Astrophysics Data System (ADS)

    Zou, Cuiming; Kou, Kit Ian

    2018-05-01

    Signal recovery is one of the most important problems in signal processing. This paper proposes a novel signal recovery method based on prolate spherical wave functions (PSWFs). PSWFs are a family of special functions that have been shown to perform well in signal recovery. However, existing PSWF-based recovery methods use the mean square error (MSE) criterion, which depends on a Gaussianity assumption on the noise distribution. For non-Gaussian noises, such as impulsive noise or outliers, the MSE criterion is sensitive, which may lead to large reconstruction errors. Unlike existing PSWF-based recovery methods, our proposed method employs the maximum correntropy criterion (MCC), which is independent of the noise distribution. The proposed method can reduce the impact of large and non-Gaussian noises. Experimental results on synthetic signals with various types of noise show that the proposed MCC-based signal recovery method is more robust against various noises than other existing methods.
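
    The robustness argument can be seen in a one-parameter toy problem: under impulsive noise, the MSE-optimal estimate (the sample mean) is biased by outliers, while a fixed-point MCC estimate with a Gaussian kernel down-weights them. A sketch, not the paper's PSWF machinery:

      # Estimate a constant under impulsive noise: sample mean (MSE) vs.
      # fixed-point maximum correntropy (MCC) with a Gaussian kernel.
      import numpy as np

      rng = np.random.default_rng(5)
      true_val = 2.0
      noise = rng.normal(scale=0.1, size=200)
      noise[::20] += 20.0                          # sparse positive outliers
      obs = true_val + noise

      mse_est = obs.mean()                         # MSE-optimal estimate (pulled by outliers)

      sigma = 0.5                                  # correntropy kernel width
      theta = mse_est
      for _ in range(50):                          # fixed-point iteration for MCC
          w = np.exp(-(obs - theta) ** 2 / (2 * sigma ** 2))
          theta = np.sum(w * obs) / np.sum(w)      # kernel-weighted mean

      print(f"MSE estimate={mse_est:.3f}  MCC estimate={theta:.3f}  (truth={true_val})")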

  20. Nonlinear model for offline correction of pulmonary waveform generators.

    PubMed

    Reynolds, Jeffrey S; Stemple, Kimberly J; Petsko, Raymond A; Ebeling, Thomas R; Frazer, David G

    2002-12-01

    Pulmonary waveform generators consisting of motor-driven piston pumps are frequently used to test respiratory-function equipment such as spirometers and peak expiratory flow (PEF) meters. Gas compression within these generators can produce significant distortion of the output flow-time profile. A nonlinear model of the generator was developed along with a method to compensate for gas compression when testing pulmonary function equipment. The model and correction procedure were tested on an Assess Full Range PEF meter and a Micro DiaryCard PEF meter. The tests were performed using the 26 American Thoracic Society standard flow-time waveforms as the target flow profiles. Without correction, the pump loaded with the higher-resistance Assess meter resulted in ten waveforms having a mean square error (MSE) higher than 0.001 L²/s². Correction of the pump for these ten waveforms resulted in a mean decrease in MSE of 87.0%. When loaded with the Micro DiaryCard meter, the uncorrected pump outputs included six waveforms with MSE higher than 0.001 L²/s². Pump corrections for these six waveforms resulted in a mean decrease in MSE of 58.4%.

  1. Comparison of Two Hybrid Models for Forecasting the Incidence of Hemorrhagic Fever with Renal Syndrome in Jiangsu Province, China

    PubMed Central

    Wu, Wei; Guo, Junqiao; An, Shuyi; Guan, Peng; Ren, Yangwu; Xia, Linzi; Zhou, Baosen

    2015-01-01

    Background: Cases of hemorrhagic fever with renal syndrome (HFRS) are widely distributed in eastern Asia, especially in China, Russia, and Korea. It has proved to be a difficult task to eliminate HFRS completely because of the diverse animal reservoirs and the effects of global warming. Reliable forecasting is useful for the prevention and control of HFRS. Methods: Two hybrid models, one composed of a nonlinear autoregressive neural network (NARNN) and the autoregressive integrated moving average (ARIMA) model, the other composed of a generalized regression neural network (GRNN) and ARIMA, were constructed to predict the incidence of HFRS in the coming year. The performances of the two hybrid models were compared with the ARIMA model. Results: The ARIMA, ARIMA-NARNN, and ARIMA-GRNN models fitted and predicted the seasonal fluctuation well. Among the three models, the mean square error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) of the ARIMA-NARNN hybrid model were the lowest in both the modeling stage and the forecasting stage. As for the ARIMA-GRNN hybrid model, the MSE, MAE, and MAPE of the modeling performance and the MSE and MAE of the forecasting performance were less than those of the ARIMA model, but the MAPE of the forecasting performance did not improve. Conclusion: Developing and applying the ARIMA-NARNN hybrid model is an effective method to better understand the epidemic characteristics of HFRS and could be helpful in the prevention and control of HFRS. PMID:26270814

  2. FPGA Implementation of Optimal 3D-Integer DCT Structure for Video Compression

    PubMed Central

    2015-01-01

    A novel optimal structure for implementing the 3D-integer discrete cosine transform (DCT) is presented by analyzing various integer approximation methods. The integer set with reduced mean squared error (MSE) and high coding efficiency is considered for implementation in FPGA. The proposed method shows that the fewest resources are utilized for the integer set with shorter bit values. The optimal 3D-integer DCT structure is determined by analyzing the MSE, power dissipation, coding efficiency, and hardware complexity of different integer sets. The experimental results reveal that the direct method of computing the 3D-integer DCT using the integer set [10, 9, 6, 2, 3, 1, 1] performs better than other integer sets in terms of resource utilization and power dissipation. PMID:26601120

  3. A new method for determining the optimal lagged ensemble

    PubMed Central

    DelSole, T.; Tippett, M. K.; Pegion, K.

    2017-01-01

    We propose a general methodology for determining the lagged ensemble that minimizes the mean square forecast error. The MSE of a lagged ensemble is shown to depend only on a quantity called the cross-lead error covariance matrix, which can be estimated from a short hindcast data set and parameterized in terms of analytic functions of time. The resulting parameterization allows the skill of forecasts to be evaluated for an arbitrary ensemble size and initialization frequency. Remarkably, the parameterization also can estimate the MSE of a burst ensemble simply by taking the limit of an infinitely small interval between initialization times. This methodology is applied to forecasts of the Madden-Julian Oscillation (MJO) from version 2 of the Climate Forecast System (CFSv2). For leads greater than a week, little improvement is found in the MJO forecast skill when ensembles larger than 5 days are used or initializations greater than 4 times per day. We find that if the initialization frequency is too infrequent, important structures of the lagged error covariance matrix are lost. Lastly, we demonstrate that the forecast error at leads ≥10 days can be reduced by optimally weighting the lagged ensemble members. The weights are shown to depend only on the cross-lead error covariance matrix. While the methodology developed here is applied to CFSv2, the technique can be easily adapted to other forecast systems. PMID:28580050
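
    The weighting result quoted here corresponds to the standard minimum-variance solution: minimizing wᵀCw subject to Σw = 1 gives w ∝ C⁻¹1, so the weights indeed depend only on the cross-lead error covariance matrix C. A toy Python sketch with an invented covariance:

      # Optimal lagged-ensemble weights from a toy cross-lead covariance C.
      import numpy as np

      lags = np.arange(6)
      C = np.exp(-np.abs(lags[:, None] - lags[None, :]) / 2.0) \
          * np.outer(1 + 0.1 * lags, 1 + 0.1 * lags)   # toy covariance (older lags noisier)

      ones = np.ones(len(lags))
      w = np.linalg.solve(C, ones)
      w /= w.sum()                                     # optimal weights, w ∝ C^{-1} 1
      mse_opt = w @ C @ w
      mse_eq = ones @ C @ ones / len(lags) ** 2        # equally weighted lagged ensemble
      print("weights:", np.round(w, 3), f" MSE opt={mse_opt:.4f} vs equal={mse_eq:.4f}")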

  4. Adaptively combined FIR and functional link artificial neural network equalizer for nonlinear communication channel.

    PubMed

    Zhao, Haiquan; Zhang, Jiashu

    2009-04-01

    This paper proposes a novel computationally efficient adaptive nonlinear equalizer based on a combination of a finite impulse response (FIR) filter and a functional link artificial neural network (CFFLANN) to compensate for linear and nonlinear distortions in nonlinear communication channels. This convex nonlinear combination improves the convergence speed while retaining a low steady-state error. In addition, since the CFFLANN does not need the hidden layers found in conventional neural-network-based equalizers, it has a simpler structure than traditional neural networks (NNs) and requires less computation during the training mode. Moreover, an appropriate adaptation algorithm for the proposed equalizer is derived using the modified least mean square (MLMS) method. Simulation results clearly show that the proposed equalizer with the MLMS algorithm can effectively mitigate linear and nonlinear distortions of various intensities and provides better anti-jamming performance. Furthermore, comparisons of the mean squared error (MSE), the bit error rate (BER), and the effect of the eigenvalue ratio (EVR) of the input correlation matrix are presented.
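    The convex-combination mechanism at the heart of this design can be sketched with two plain LMS FIR filters standing in for the FIR + FLANN pair. This is a hedged sketch; the filter length and step sizes are illustrative assumptions.

    ```python
    # Convex combination of two adaptive filters: y = lam*y1 + (1-lam)*y2,
    # with the mixing weight lam adapted through a sigmoid (standard scheme).
    import numpy as np

    def combined_lms(x, d, n_taps=8, mu1=0.05, mu2=0.005, mu_a=1.0):
        w1, w2, a = np.zeros(n_taps), np.zeros(n_taps), 0.0
        y_out = np.zeros(len(d))
        for n in range(n_taps - 1, len(d)):
            u = x[n - n_taps + 1:n + 1][::-1]      # most recent n_taps inputs
            y1, y2 = w1 @ u, w2 @ u
            lam = 1.0 / (1.0 + np.exp(-a))         # mixing weight in (0, 1)
            y = lam * y1 + (1.0 - lam) * y2
            e, e1, e2 = d[n] - y, d[n] - y1, d[n] - y2
            w1 += mu1 * e1 * u                     # fast component filter
            w2 += mu2 * e2 * u                     # slow component filter
            a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)   # adapt mixing weight
            y_out[n] = y
        return y_out
    ```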

  5. Robust Transceiver Design for Multiuser MIMO Downlink with Channel Uncertainties

    NASA Astrophysics Data System (ADS)

    Miao, Wei; Li, Yunzhou; Chen, Xiang; Zhou, Shidong; Wang, Jing

    This letter addresses the problem of robust transceiver design for the multiuser multiple-input-multiple-output (MIMO) downlink where the channel state information at the base station (BS) is imperfect. A stochastic approach which minimizes the expectation of the total mean square error (MSE) of the downlink conditioned on the channel estimates under a total transmit power constraint is adopted. The iterative algorithm reported in [2] is improved to handle the proposed robust optimization problem. Simulation results show that our proposed robust scheme effectively reduces the performance loss due to channel uncertainties and outperforms existing methods, especially when the channel errors of the users are different.

  6. Evaluation of two methods for using MR information in PET reconstruction

    NASA Astrophysics Data System (ADS)

    Caldeira, L.; Scheins, J.; Almeida, P.; Herzog, H.

    2013-02-01

    Using magnetic resonance (MR) information in maximum a posteriori (MAP) algorithms for positron emission tomography (PET) image reconstruction has been investigated in recent years. Recently, three methods to introduce this information were evaluated, and the Bowsher prior was considered the best; its main advantage is that it does not require image segmentation. Another method that has been widely used for incorporating MR information is using boundaries obtained by segmentation, which has also shown improvements in image quality. In this paper, these two methods for incorporating MR information in PET reconstruction are compared. After a Bayes parameter optimization, the reconstructed images were compared using the mean squared error (MSE) and the coefficient of variation (CV). MSE values are 3% lower with the Bowsher prior than with boundaries, and CV values are 10% lower. Both methods performed better in terms of MSE and CV than using no prior, that is, maximum likelihood expectation maximization (MLEM) or MAP without anatomic information. In conclusion, incorporating MR information using the Bowsher prior gives better results in terms of MSE and CV than using boundaries. MAP algorithms again proved effective in noise reduction and convergence, especially when MR information is incorporated. The robustness of the priors with respect to noise and inhomogeneities in the MR image, however, still has to be evaluated.
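    For reference, the two figures of merit used in this comparison are simple to compute when a ground-truth phantom and a nominally uniform region of interest are available; a sketch under those assumptions:

    ```python
    # MSE against a known ground truth, and coefficient of variation
    # (std/mean) inside a nominally uniform region of interest (ROI).
    import numpy as np

    def mse(recon, truth):
        return np.mean((recon - truth) ** 2)

    def coeff_of_variation(recon, roi_mask):
        vals = recon[roi_mask]
        return vals.std() / vals.mean()
    ```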

  7. Optimization of Artificial Neural Network using Evolutionary Programming for Prediction of Cascading Collapse Occurrence due to the Hidden Failure Effect

    NASA Astrophysics Data System (ADS)

    Idris, N. H.; Salim, N. A.; Othman, M. M.; Yasin, Z. M.

    2018-03-01

    This paper presents an Evolutionary Programming (EP) approach proposed to optimize the training parameters of an Artificial Neural Network (ANN) for predicting cascading collapse occurrence due to the effect of protection system hidden failure. The data were collected from simulations of a hidden-failure probability model based on historical data. The training parameters of a multilayer feedforward network with backpropagation were optimized with the objective of minimizing the Mean Square Error (MSE). The optimal training parameters, consisting of the momentum rate, the learning rate, and the numbers of neurons in the first and second hidden layers, are selected by the EP-ANN. The IEEE 14-bus system was tested as a case study to validate the proposed technique. The results show reliable prediction performance, validated through the MSE and the correlation coefficient (R).

  8. Methods of evaluating the effects of coding on SAR data

    NASA Technical Reports Server (NTRS)

    Dutkiewicz, Melanie; Cumming, Ian

    1993-01-01

    It is recognized that mean square error (MSE) is not a sufficient criterion for determining the acceptability of an image reconstructed from data that has been compressed and decompressed using an encoding algorithm. In the case of Synthetic Aperture Radar (SAR) data, it is also deemed insufficient to display the reconstructed image (and perhaps the error image) alongside the original and make a (subjective) judgment as to the quality of the reconstructed data. In this paper we suggest a number of additional evaluation criteria which we feel should be included as evaluation metrics in SAR data encoding experiments. These criteria have been specifically chosen to provide a means of ensuring that the important information in the SAR data is preserved. The paper also presents the results of an investigation into the effects of coding on SAR data fidelity when the coding is applied in (1) the signal data domain, and (2) the image domain. An analysis of the results highlights the shortcomings of the MSE criterion, and shows which of the suggested additional criteria have been found to be most important.

  9. Transmuted of Rayleigh Distribution with Estimation and Application on Noise Signal

    NASA Astrophysics Data System (ADS)

    Ahmed, Suhad; Qasim, Zainab

    2018-05-01

    This paper deals with transforming the one-parameter Rayleigh distribution into a transmuted probability distribution by introducing a new parameter (λ), since the resulting distribution is useful for representing signal data and failure data models. The transmuted parameter, with |λ| ≤ 1, is estimated along with the original parameter (θ) by the methods of moments and maximum likelihood using different sample sizes (n = 25, 50, 75, 100), and the estimation results are compared by a statistical measure (mean square error, MSE).
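    For reference, the construction follows the standard quadratic rank transmutation map; with a Rayleigh baseline (one common parameterization, assumed here), the baseline and transmuted CDFs read:

    ```latex
    % Quadratic rank transmutation map with a Rayleigh baseline (assumed convention)
    F(x)   = 1 - e^{-x^{2}/(2\theta^{2})}, \qquad x \ge 0,\ \theta > 0
    F_T(x) = (1 + \lambda)\, F(x) - \lambda\, F(x)^{2}, \qquad |\lambda| \le 1
    ```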

  10. Robust THP Transceiver Designs for Multiuser MIMO Downlink with Imperfect CSIT

    NASA Astrophysics Data System (ADS)

    Ubaidulla, P.; Chockalingam, A.

    2009-12-01

    We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.

  11. Assessing the blood volume and heart rate responses during haemodialysis in fluid overloaded patients using support vector regression.

    PubMed

    Javed, Faizan; Savkin, Andrey V; Chan, Gregory S H; Middleton, Paul M; Malouf, Philip; Steel, Elizabeth; Mackie, James; Lovell, Nigel H

    2009-11-01

    This study aims to assess the blood volume and heart rate (HR) responses during haemodialysis in fluid overloaded patients by a nonparametric nonlinear regression approach based on a support vector machine (SVM). Relative blood volume (RBV) and the electrocardiogram (ECG) were recorded from 23 haemodynamically stable renal failure patients during regular haemodialysis. Modelling was performed on 18 fluid overloaded patients (fluid removal of >2 L). SVM-based regression was used to obtain models of the RBV change with time as well as the percentage change in HR with respect to RBV. Mean squared error (MSE) and goodness of fit (R(2)) were used for comparison among different kernel functions. The design parameters were estimated using a grid search approach and the selected models were validated by a k-fold cross-validation technique. For the model of HR versus RBV change, a radial basis function (RBF) kernel (MSE = 17.37 and R(2) = 0.932) gave the least MSE compared to linear (MSE = 25.97 and R(2) = 0.898) and polynomial (MSE = 18.18 and R(2) = 0.929) kernels. The MSE was significantly lower for the training data set when using the RBF kernel compared to other kernels (p < 0.01). The RBF kernel also provided a slightly better fit of RBV change with time (MSE = 1.12 and R(2) = 0.91) compared to a linear kernel (MSE = 1.46 and R(2) = 0.88). The modelled HR response was characterized by an initial drop and a subsequent rise during progressive reduction in RBV, which may be interpreted as the reflex response to a transition from central hypervolaemia to hypovolaemia. These modelled curves can be used as references for a controller designed to regulate the haemodynamic variables to ensure the stability of patients undergoing haemodialysis.
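    The model-selection loop described here maps directly onto standard tooling; a hedged sketch using scikit-learn's SVR with an RBF kernel, a grid search over the design parameters, and k-fold cross-validation scored by MSE (grid values and variable names are illustrative assumptions):

    ```python
    # SVR with RBF kernel, grid-searched design parameters, k-fold CV, MSE scoring.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import GridSearchCV, KFold

    def fit_svr(rbv, hr_change):
        X = np.asarray(rbv).reshape(-1, 1)
        y = np.asarray(hr_change)
        grid = {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0], "epsilon": [0.1, 1.0]}
        cv = KFold(n_splits=5, shuffle=True, random_state=0)
        search = GridSearchCV(SVR(kernel="rbf"), grid,
                              scoring="neg_mean_squared_error", cv=cv)
        search.fit(X, y)
        return search.best_estimator_, -search.best_score_   # model, CV MSE
    ```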

  12. Parametric and Nonparametric Statistical Methods for Genomic Selection of Traits with Additive and Epistatic Genetic Architectures

    PubMed Central

    Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.

    2014-01-01

    Parametric and nonparametric methods have been developed for purposes of predicting phenotypes. These methods are based on retrospective analyses of empirical data consisting of genotypic and phenotypic scores. Recent reports have indicated that parametric methods are unable to predict phenotypes of traits with known epistatic genetic architectures. Herein, we review parametric methods including least squares regression, ridge regression, Bayesian ridge regression, least absolute shrinkage and selection operator (LASSO), Bayesian LASSO, best linear unbiased prediction (BLUP), Bayes A, Bayes B, Bayes C, and Bayes Cπ. We also review nonparametric methods including Nadaraya-Watson estimator, reproducing kernel Hilbert space, support vector machine regression, and neural networks. We assess the relative merits of these 14 methods in terms of accuracy and mean squared error (MSE) using simulated genetic architectures consisting of completely additive or two-way epistatic interactions in an F2 population derived from crosses of inbred lines. Each simulated genetic architecture explained either 30% or 70% of the phenotypic variability. The greatest impact on estimates of accuracy and MSE was due to genetic architecture. Parametric methods were unable to predict phenotypic values when the underlying genetic architecture was based entirely on epistasis. Parametric methods were slightly better than nonparametric methods for additive genetic architectures. Distinctions among parametric methods for additive genetic architectures were incremental. Heritability, i.e., proportion of phenotypic variability, had the second greatest impact on estimates of accuracy and MSE. PMID:24727289

  13. Prediction of Classroom Reverberation Time using Neural Network

    NASA Astrophysics Data System (ADS)

    Liyana Zainudin, Fathin; Kadir Mahamad, Abd; Saon, Sharifah; Nizam Yahya, Musli

    2018-04-01

    In this paper, an alternative method for predicting the reverberation time (RT) of classrooms using a neural network (NN) was designed and explored. Classroom models were created using Google SketchUp software. The NN was trained on data from the classroom models with RT values computed by ODEON 12.10 software. The NN was trained separately for 500 Hz, 1000 Hz, and 2000 Hz, because the absorption coefficient, one of the prominent input variables, is frequency dependent. Mean squared error (MSE) and regression (R) values were obtained to examine the NN efficiency. Overall, the NN shows good results, with MSE < 0.005 and R > 0.9. The NN also achieved prediction accuracies of 92.53% for 500 Hz, 93.66% for 1000 Hz, and 93.18% for 2000 Hz, and thus displays good and efficient performance. The optimum RT value ranges between 0.75 and 0.9 seconds.

  14. [Application of elastic registration based on Demons algorithm in cone beam CT].

    PubMed

    Pang, Haowen; Sun, Xiaoyang

    2014-02-01

    We applied the Demons and accelerated Demons elastic registration algorithms to radiotherapy cone beam CT (CBCT) images, providing software support for real-time understanding of organ changes during radiotherapy. We wrote a 3D CBCT elastic registration program in Matlab and tested it on the 3D CBCT images of two patients with cervical cancer. With the classic Demons algorithm, the mean square error (MSE) decreased by 59.7% and the correlation coefficient (CC) increased by 11.0%; with the accelerated Demons algorithm, the MSE decreased by 40.1% and the CC increased by 7.2%. Both versions of the Demons algorithm achieved the desired results, but small differences indicated a lack of precision, and the total registration time was somewhat long. These problems need further work to improve accuracy and reduce registration time.

  15. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.

  16. An affine projection algorithm using grouping selection of input vectors

    NASA Astrophysics Data System (ADS)

    Shin, JaeWook; Kong, NamWoong; Park, PooGyeon

    2011-10-01

    This paper presents an affine projection algorithm (APA) using grouping selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, the few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm has smaller steady-state estimation errors compared with existing algorithms.
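    For context, the core affine projection update that the grouping and selection procedures wrap around is the standard regularized APA step (shown below without any grouping logic):

    ```python
    # Standard regularized affine projection update over the K most recent input vectors.
    import numpy as np

    def apa_update(w, X, d, mu=0.5, delta=1e-4):
        """w: (n_taps,) coefficients, X: (n_taps, K) recent input vectors,
           d: (K,) desired samples; returns updated w and the error vector."""
        e = d - X.T @ w
        G = X.T @ X + delta * np.eye(X.shape[1])   # regularized Gram matrix
        return w + mu * (X @ np.linalg.solve(G, e)), e
    ```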

  17. Water Level Prediction of Lake Cascade Mahakam Using Adaptive Neural Network Backpropagation (ANNBP)

    NASA Astrophysics Data System (ADS)

    Mislan; Gaffar, A. F. O.; Haviluddin; Puspitasari, N.

    2018-04-01

    Natural hazard information and records of flood events are indispensable for prevention and mitigation. One cause of flooding is the rise of water in the areas around a lake, so forecasting the lake water level is required to anticipate floods. The purpose of this paper is to implement a computational intelligence method, namely an Adaptive Neural Network Backpropagation (ANNBP), to forecast the water level of the Lake Cascade Mahakam. Experiments showed that the ANNBP predictions were accurate as measured by the mean square error (MSE) and mean absolute percentage error (MAPE); in other words, the computational intelligence method can produce good accuracy. Hybridization and optimization of computational intelligence are the focus of future work.

  18. A survey of quality measures for gray-scale image compression

    NASA Technical Reports Server (NTRS)

    Eskicioglu, Ahmet M.; Fisher, Paul S.

    1993-01-01

    Although a variety of techniques are available today for gray-scale image compression, a complete evaluation of these techniques cannot be made as there is no single reliable objective criterion for measuring the error in compressed images. The traditional subjective criteria are burdensome, and usually inaccurate or inconsistent. On the other hand, being the most common objective criterion, the mean square error (MSE) does not have a good correlation with the viewer's response. It is now understood that in order to have a reliable quality measure, a representative model of the complex human visual system is required. In this paper, we survey and give a classification of the criteria for the evaluation of monochrome image quality.

  19. Comparison Study on the Estimation of the Spatial Distribution of Regional Soil Metal(loid)s Pollution Based on Kriging Interpolation and BP Neural Network.

    PubMed

    Jia, Zhenyi; Zhou, Shenglu; Su, Quanlong; Yi, Haomin; Wang, Junxiao

    2017-12-26

    Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance in preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as a pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results by the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from BP neural network models have a higher accuracy, the MSE of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show significant skewed distribution, and spatial autocorrelation is strong. Using Kriging interpolation, the MSE of As and Cd are 0.0804 and 0.2983, respectively. The estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution.

  20. Low complexity adaptive equalizers for underwater acoustic communications

    NASA Astrophysics Data System (ADS)

    Soflaei, Masoumeh; Azmi, Paeiz

    2014-08-01

    Interference due to scattering from the surface and reflection from the bottom is one of the most important obstacles to reliable communication in shallow water channels. One of the best ways to address this problem is to use adaptive equalizers, whose performance depends strongly on the convergence rate and misadjustment error of the adaptive algorithm. In this paper, the affine projection algorithm (APA), the selective regressor APA (SR-APA), the family of selective partial update (SPU) algorithms, the family of set-membership (SM) algorithms, and the selective partial update selective regressor APA (SPU-SR-APA) are compared with conventional algorithms such as least mean square (LMS) in underwater acoustic communications. We apply experimental data from the Strait of Hormuz to demonstrate the efficiency of the proposed methods over a shallow water channel. We observe that the steady-state mean square error (MSE) values of the SR-APA, SPU-APA, SPU-normalized least mean square (SPU-NLMS), SPU-SR-APA, SM-APA and SM-NLMS algorithms decrease in comparison with the LMS algorithm. These algorithms also have better convergence rates than the LMS-type algorithm.

  1. Using the Ridge Regression Procedures to Estimate the Multiple Linear Regression Coefficients

    NASA Astrophysics Data System (ADS)

    Gorgees, Hazim Mansoor; Mahdi, Fatimah Assim

    2018-05-01

    This article compares the performance of different types of ordinary ridge regression estimators that have been proposed to estimate the regression parameters when near-exact linear relationships among the explanatory variables are present. For this situation we employ data obtained from the tagi gas filling company during the period 2008-2010. The main result is that the method based on the condition number performs better than the other stated methods, since it has a smaller mean square error (MSE).
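    For reference, the ordinary ridge estimator being compared is beta(k) = (X'X + kI)^{-1} X'y; the sketch below implements it together with one illustrative condition-number-based rule for choosing k (the abstract does not spell out its exact rule, so this rule is an assumption):

    ```python
    # Ordinary ridge estimator and an illustrative condition-number rule for k.
    import numpy as np

    def ridge(X, y, k):
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

    def k_from_condition_number(X, target_cond=100.0):
        # Inflate the spectrum until cond(X'X + kI) drops to a target (assumption):
        # (ev_max + k) / (ev_min + k) = c  =>  k = (ev_max - c*ev_min) / (c - 1).
        ev = np.linalg.eigvalsh(X.T @ X)
        return max(0.0, (ev.max() - target_cond * ev.min()) / (target_cond - 1.0))
    ```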

  2. Robust nonlinear canonical correlation analysis: application to seasonal climate forecasting

    NASA Astrophysics Data System (ADS)

    Cannon, A. J.; Hsieh, W. W.

    2008-02-01

    Robust variants of nonlinear canonical correlation analysis (NLCCA) are introduced to improve performance on datasets with low signal-to-noise ratios, for example those encountered when making seasonal climate forecasts. The neural network model architecture of standard NLCCA is kept intact, but the cost functions used to set the model parameters are replaced with more robust variants. The Pearson product-moment correlation in the double-barreled network is replaced by the biweight midcorrelation, and the mean squared error (mse) in the inverse mapping networks can be replaced by the mean absolute error (mae). Robust variants of NLCCA are demonstrated on a synthetic dataset and are used to forecast sea surface temperatures in the tropical Pacific Ocean based on the sea level pressure field. Results suggest that adoption of the biweight midcorrelation can lead to improved performance, especially when a strong, common event exists in both predictor/predictand datasets. Replacing the mse by the mae leads to improved performance on the synthetic dataset, but not on the climate dataset except at the longest lead time, which suggests that the appropriate cost function for the inverse mapping networks is more problem dependent.

  3. Network traffic anomaly prediction using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Ciptaningtyas, Hening Titi; Fatichah, Chastine; Sabila, Altea

    2017-03-01

    With the excessive increase of internet usage, malicious software (malware) has also increased significantly. Malware is software developed by hackers for illegal purposes, such as stealing data and identities, causing computer damage, or denying service to other users [1]. Malware that attacks computers or servers often triggers network traffic anomalies. Based on Sophos's report [2], Indonesia is the country most at risk of malware attack, and it also has high network traffic anomaly. This research uses an Artificial Neural Network (ANN) to predict network traffic anomalies based on malware attacks in Indonesia recorded by Id-SIRTII/CC (Indonesia Security Incident Response Team on Internet Infrastructure/Coordination Center). The case study is the most frequent malware attack (SQL injection), which occurred in three consecutive years: 2012, 2013, and 2014 [4]. The data series is preprocessed first; then the network traffic anomaly is predicted using an Artificial Neural Network with two weight-update algorithms: gradient descent and momentum. The prediction error is calculated using the mean squared error (MSE) [7]. The experimental results show that the MSE for SQL injection is 0.03856, so this approach can be used to predict network traffic anomalies.
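    The two weight-update algorithms compared here differ by a single velocity term; a minimal sketch (learning rate and momentum coefficient are illustrative):

    ```python
    # Plain gradient descent versus gradient descent with momentum.
    import numpy as np

    def gradient_descent_step(w, grad, lr=0.01):
        return w - lr * grad

    def momentum_step(w, grad, velocity, lr=0.01, beta=0.9):
        velocity = beta * velocity - lr * grad   # accumulate a decaying velocity
        return w + velocity, velocity
    ```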

  4. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate— SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (that require knowledge of noise variance σ2), and GCV (that does not need σ2) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently lead to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764

  5. Accuracy of partial volume effect correction in clinical molecular imaging of dopamine transporter using SPECT

    NASA Astrophysics Data System (ADS)

    Soret, Marine; Alaoui, Jawad; Koulibaly, Pierre M.; Darcourt, Jacques; Buvat, Irène

    2007-02-01

    Objectives: Partial volume effect (PVE) is a major source of bias in brain SPECT imaging of the dopamine transporter. Various PVE corrections (PVC) making use of anatomical data have been developed and yield encouraging results. However, their accuracy on clinical data is difficult to demonstrate because the gold standard (GS) is usually unknown. The objective of this study was to assess the accuracy of PVC. Method: Twenty-three patients underwent MRI and 123I-FP-CIT SPECT. The binding potential (BP) values were measured in the striata segmented on the MR images after coregistration to the SPECT images. These values were calculated without and with an original PVC. In addition, for each patient, a Monte Carlo simulation of the SPECT scan was performed. For these simulations, where the true simulated BP values were known, percent biases in the BP estimates were calculated. For the real data, an evaluation method that simultaneously estimates the GS and a quadratic relationship between the observed and GS values was used; it yields a surrogate mean square error (sMSE) between the estimated values and the estimated GS values. Results: The average percent difference between BP measured for real and for simulated patients was 0.7±9.7% without PVC and -8.5±14.5% with PVC, suggesting that the simulated data reproduced the real data well enough. For the simulated patients, BP was underestimated by 66.6±9.3% on average without PVC and overestimated by 11.3±9.5% with PVC, demonstrating the greater accuracy of BP estimates with PVC. For the simulated data, sMSE was 27.3 without PVC and 0.90 with PVC, confirming that our sMSE index properly captured the greater accuracy of BP estimates with PVC. For the real patient data, sMSE was 50.8 without PVC and 3.5 with PVC. These results were consistent with those obtained on the simulated data, suggesting that for clinical data, and despite probable segmentation and registration errors, BP was more accurately estimated with PVC than without. Conclusion: PVC was very efficient in reducing the error in BP estimates in clinical imaging of the dopamine transporter.

  6. Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches

    NASA Astrophysics Data System (ADS)

    Mohammed, E.; Wang, S.; Yu, J.

    2017-05-01

    Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid VSTWPP method based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS) and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: (1) transform the time series of actual wind power into a power ratio and predict the power ratio; (2) use the predicted power ratio to predict the wind power. The proposed method includes two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The WPP is tested comparatively against an auto-regressive moving average (ARMA) model on the predicted values and errors. The validity of the proposed hybrid method is confirmed in terms of error analysis using the probability density function (PDF), mean absolute percent error (MAPE), and mean square error (MSE). Meanwhile, a comparison of the correlation coefficients between the actual and predicted values for different prediction times and windows confirms that the MSP approach with the hybrid model is the most accurate compared with the SSP approach and ARMA. The MLR&LS method is accurate and promising for solving problems in WPP.

  7. [Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].

    PubMed

    Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang

    2016-07-12

    To explore the effect of the autoregressive integrated moving average model-nonlinear autoregressive neural network (ARIMA-NARNN) model in predicting schistosomiasis infection rates in a population, the ARIMA model, the NARNN model, and the ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China, and the fitting and prediction performances of the three models were compared. Compared to the ARIMA model and the NARNN model, the mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were the lowest, with values of 0.011 1, 0.090 0 and 0.282 4, respectively. The ARIMA-NARNN model could effectively fit and predict schistosomiasis infection rates of a population, which might have great application value for the prevention and control of schistosomiasis.

  8. Life beyond MSE and R2 — improving validation of predictive models with observations

    NASA Astrophysics Data System (ADS)

    Papritz, Andreas; Nussbaum, Madlene

    2017-04-01

    Machine learning and statistical predictive methods are evaluated by the closeness of predictions to observations of a test dataset. Common criteria for rating predictive methods are bias and mean square error (MSE), characterizing systematic and random prediction errors. Many studies also report R2-values, but their meaning is not always clear (correlation between observations and predictions or MSE skill score; Wilks, 2011). The same criteria are also used for choosing tuning parameters of predictive procedures by cross-validation and bagging (e.g. Hastie et al., 2009). For evident reasons, the atmospheric sciences have developed a rich toolbox for forecast verification. Specific criteria have been proposed for evaluating deterministic and probabilistic predictions of binary, multinomial, ordinal and continuous responses (see reviews by Wilks, 2011, Jolliffe and Stephenson, 2012 and Gneiting et al., 2007). It appears that these techniques are not very well known in the part of the geosciences community interested in machine learning. In our presentation we review techniques that offer more insight into the proximity of data and predictions than bias, MSE and R2 alone. We mention only two examples: (i) graphing observations vs. predictions is usually more appropriate than the reverse (Piñeiro et al., 2008); (ii) the decomposition of the Brier score (= MSE for probabilistic predictions of binary yes/no data) into reliability and resolution reveals (conditional) bias and the capability of discriminating yes/no observations by the predictions. We illustrate the approaches with applications from digital soil mapping studies. Gneiting, T., Balabdaoui, F., and Raftery, A. E. (2007). Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society Series B, 69, 243-268. Hastie, T., Tibshirani, R., and Friedman, J. (2009). The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, New York, second edition. Jolliffe, I. T. and Stephenson, D. B., editors (2012). Forecast Verification: A Practitioner's Guide in Atmospheric Science. Wiley-Blackwell, second edition. Piñeiro, G., Perelman, S., Guerschman, J., and Paruelo, J. (2008). How to evaluate models: Observed vs. predicted or predicted vs. observed? Ecological Modelling, 216, 316-322. Wilks, D. S. (2011). Statistical Methods in the Atmospheric Sciences. Academic Press, third edition.
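    As a concrete instance of example (ii), the sketch below computes the reliability/resolution/uncertainty split of the Brier score by binning the probability forecasts (fixed-width bins are an illustrative choice, and the decomposition is only approximate under binning):

    ```python
    # Murphy decomposition of the Brier score: reliability - resolution + uncertainty.
    import numpy as np

    def brier_decomposition(p, o, n_bins=10):
        """p: probability forecasts in [0, 1]; o: binary outcomes (0/1)."""
        p, o = np.asarray(p, float), np.asarray(o, float)
        base = o.mean()                                   # climatological frequency
        idx = np.minimum((p * n_bins).astype(int), n_bins - 1)
        rel = res = 0.0
        for b in range(n_bins):
            m = idx == b
            if m.any():
                w = m.mean()                              # fraction of cases in bin
                rel += w * (p[m].mean() - o[m].mean()) ** 2
                res += w * (o[m].mean() - base) ** 2
        unc = base * (1.0 - base)
        return rel - res + unc, rel, res, unc             # ~ Brier score, components
    ```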

  10. The Covariance Adjustment Approaches for Combining Incomparable Cox Regressions Caused by Unbalanced Covariates Adjustment: A Multivariate Meta-Analysis Study.

    PubMed

    Dehesh, Tania; Zare, Najaf; Ayatollahi, Seyyed Mohammad Taghi

    2015-01-01

    Univariate meta-analysis (UM), as a technique that provides a single overall result, has become increasingly popular. Neglecting the existence of other concomitant covariates in the models leads to loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least squares (MGLS) method as a multivariate meta-analysis approach. We evaluated the efficiency of four new approaches, including zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC), on the estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI) in the synthesis of Cox proportional hazard model coefficients in a simulation study. Comparing the results of the simulation study on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure compared to the EC, CC, and ZC procedures. The precision ranking of the four approaches according to all of the above settings was MMC ≥ EC ≥ CC ≥ ZC. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggest the use of the MMC procedure to overcome the lack of information for having a complete covariance matrix of the coefficients.

  11. Image quality evaluation of full reference algorithm

    NASA Astrophysics Data System (ADS)

    He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan

    2018-03-01

    Image quality evaluation is a classic research topic; the goal is to design algorithms whose evaluation values are consistent with subjective human judgments. This paper mainly introduces several typical full-reference objective evaluation methods: Mean Squared Error (MSE), Peak Signal to Noise Ratio (PSNR), Structural Similarity Image Metric (SSIM), and Feature Similarity (FSIM). The different evaluation methods are tested in Matlab, and their advantages and disadvantages are obtained by analysis and comparison. MSE and PSNR are simple, but they do not take the characteristics of the human visual system (HVS) into account, so their evaluation results are not ideal. SSIM correlates well with subjective judgments and is simple to compute, because it incorporates human visual effects into image quality evaluation; however, the SSIM method is based on a hypothesis, so its evaluation results are limited. The FSIM method can be used to test both gray and color images, with better results. Experimental results show that the new image quality evaluation algorithm based on FSIM is more accurate.
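    The two simplest metrics discussed here take a few lines each; a sketch for 8-bit images (peak value 255 assumed):

    ```python
    # MSE and the peak signal-to-noise ratio derived from it, for 8-bit images.
    import numpy as np

    def mse(a, b):
        return np.mean((a.astype(float) - b.astype(float)) ** 2)

    def psnr(a, b, peak=255.0):
        m = mse(a, b)
        return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
    ```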

  12. An unbiased risk estimator for image denoising in the presence of mixed poisson-gaussian noise.

    PubMed

    Le Montagner, Yoann; Angelini, Elsa D; Olivo-Marin, Jean-Christophe

    2014-03-01

    The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and are generally designed to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator in the case when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and different characteristics of the Poisson-Gaussian noise mixture.

  13. Prediction of municipal solid waste generation using nonlinear autoregressive network.

    PubMed

    Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; Maulud, K N A

    2015-12-01

    Most developing countries have solid waste management problems. Solid waste strategic planning requires accurate prediction of the quality and quantity of the generated waste. In developing countries such as Malaysia, the solid waste generation rate is increasing rapidly, due to population growth and the new consumption trends that characterize society. This paper proposes an artificial neural network (ANN) approach using a feedforward nonlinear autoregressive network with exogenous inputs (NARX) to predict annual solid waste generation in relation to demographic and economic variables such as population number, gross domestic product, electricity demand per capita, and employment and unemployment numbers. In addition, variable selection procedures are developed to select significant explanatory variables. The model evaluation was performed using the coefficient of determination (R(2)) and the mean square error (MSE). The optimum model, which produced the lowest testing MSE (2.46) and the highest R(2) (0.97), had three inputs (gross domestic product, population and employment), eight neurons and one lag in the hidden layer, and used Fletcher-Powell's conjugate gradient as the training algorithm.

  14. [Predicting Incidence of Hepatitis E in China using Fuzzy Time Series Based on Fuzzy C-Means Clustering Analysis].

    PubMed

    Luo, Yi; Zhang, Tao; Li, Xiao-song

    2016-05-01

    To explore the application of a fuzzy time series model based on fuzzy c-means clustering in forecasting the monthly incidence of Hepatitis E in mainland China, a predictive model (a fuzzy time series method based on fuzzy c-means clustering) was developed using Hepatitis E incidence data in mainland China between January 2004 and July 2014. The incidence data from August 2014 to November 2014 were used to test the fitness of the predictive model. The forecasting results were compared with those from traditional fuzzy time series models. The fuzzy time series model based on fuzzy c-means clustering had a fitting mean squared error (MSE) of 0.001 1 and a forecasting MSE of 6.977 5 × 10⁻⁴, compared with 0.001 7 and 0.001 4 for the traditional forecasting model. The results indicate that the fuzzy time series model based on fuzzy c-means clustering has better performance in forecasting the incidence of Hepatitis E.

  15. Fluorescence background removal method for biological Raman spectroscopy based on empirical mode decomposition.

    PubMed

    Leon-Bejarano, Maritza; Dorantes-Mendez, Guadalupe; Ramirez-Elias, Miguel; Mendez, Martin O; Alba, Alfonso; Rodriguez-Leyva, Ildefonso; Jimenez, M

    2016-08-01

    Raman spectroscopy of biological tissue presents a fluorescence background, an undesirable effect that generates false Raman intensities. This paper proposes the application of the Empirical Mode Decomposition (EMD) method for baseline correction. EMD is a suitable approach since it is an adaptive signal processing method for nonlinear and non-stationary signal analysis that, unlike polynomial methods, does not require parameter selection. EMD performance was assessed on synthetic Raman spectra with different signal-to-noise ratios (SNR). The correlation coefficient between the synthetic Raman spectra and those recovered after EMD denoising was higher than 0.92. Additionally, twenty Raman spectra from skin were used to evaluate EMD performance, and the results were compared with the Vancouver Raman algorithm (VRA); the comparison resulted in a mean square error (MSE) of 0.001554. The high correlation coefficient on synthetic spectra and the low MSE in the comparison between EMD and VRA suggest that EMD could be an effective method for removing the fluorescence background from biological Raman spectra.

  16. Estimated spectrum adaptive postfilter and the iterative prepost filtering algorithms

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2004-01-01

    The invention presents the Estimated Spectrum Adaptive Postfilter (ESAP) and the Iterative Prepost Filter (IPF) algorithms. These algorithms model a number of image-adaptive post-filtering and pre-post filtering methods. They are designed to minimize the Discrete Cosine Transform (DCT) blocking distortion caused when images are highly compressed with the Joint Photographic Experts Group (JPEG) standard. The ESAP and IPF techniques of the present invention minimize the mean square error (MSE) to improve the objective and subjective quality of low-bit-rate JPEG gray-scale images while simultaneously enhancing perceptual visual quality with respect to baseline JPEG images.

  17. Analysis Resilient Algorithm on Artificial Neural Network Backpropagation

    NASA Astrophysics Data System (ADS)

    Saputra, Widodo; Tulus; Zarlis, Muhammad; Widia Sembiring, Rahmat; Hartama, Dedy

    2017-12-01

    Prediction is required by decision makers to anticipate future planning, and Artificial Neural Network (ANN) backpropagation is one method for it. This method, however, still has a weakness: long training time. This motivates improving the method to accelerate training. One variant of ANN backpropagation is the resilient method, which changes the network weights and biases through a direct adaptation of the weight step based on local gradient information from each learning iteration. Prediction results on Istanbul Stock Exchange training data improve: the Mean Square Error (MSE) value becomes smaller and accuracy increases.
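    A minimal sketch of the resilient update the abstract refers to: each weight keeps its own step size, which grows while the local gradient keeps its sign and shrinks when the sign flips, so gradient magnitudes never enter the update (the Rprop- variant with the usual default constants, assumed here):

    ```python
    # One resilient (Rprop-) update step with per-weight adaptive step sizes.
    import numpy as np

    def rprop_step(w, grad, prev_grad, step,
                   eta_plus=1.2, eta_minus=0.5, step_min=1e-6, step_max=50.0):
        sign_change = grad * prev_grad
        step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
        grad = np.where(sign_change < 0, 0.0, grad)   # suppress update after a sign flip
        w = w - np.sign(grad) * step                  # magnitude-free, sign-based update
        return w, grad, step                          # grad becomes prev_grad next call
    ```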

  18. MSE-impact of PPP-RTK ZTD estimation strategies

    NASA Astrophysics Data System (ADS)

    Wang, K.; Khodabandeh, A.; Teunissen, P. J. G.

    2018-06-01

    In PPP-RTK network processing, the wet component of the zenith tropospheric delay (ZTD) cannot be precisely modelled and thus remains unknown in the observation equations. For small networks, the tropospheric mapping functions of different stations to a given satellite are almost equal to each other, causing a near rank-deficiency between the ZTDs and the satellite clocks. This near rank-deficiency can be solved by estimating the wet ZTD components relative to that of the reference receiver, with the wet ZTD component of the reference receiver constrained to zero. However, as the network scale and the humidity around the reference receiver increase, enlarged mismodelling effects could bias the network and user solutions. To account for the influence of both the noise and the biases, the mean-squared errors (MSEs) of different network and user parameters are studied analytically under both ZTD estimation strategies. We conclude that for a certain set of parameters, the difference in their MSE structures under the two strategies is driven only by the square of the reference wet ZTD component and the formal variance of its solution. Depending on the network scale and the humidity conditions around the reference receiver, the ZTD estimation strategy that delivers more accurate solutions might differ. Simulations are performed to illustrate the conclusions drawn from the analytical studies. We find that estimating the ZTDs relatively in large networks and humid regions (for the reference receiver) could significantly degrade the network ambiguity success rates. Using ambiguity-fixed network-derived PPP-RTK corrections, for networks with an inter-station distance within 100 km, the choice of ZTD estimation strategy is not crucial for single-epoch ambiguity-fixed user positioning. Using ambiguity-float network corrections, for networks with inter-station distances of 100, 300 and 500 km in humid regions (for the reference receiver), the root-mean-squared errors (RMSEs) of the estimated user coordinates using relative ZTD estimation could be higher than those under the absolute case, with differences up to millimetres, centimetres and decimetres, respectively.

  19. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, based on the concept of selective update, in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking with the mean square error (MSE) whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter using a state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors; as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity, and low update complexity for colored input signals.

  20. Evaluating the performance of the Lee-Carter method and its variants in modelling and forecasting Malaysian mortality

    NASA Astrophysics Data System (ADS)

    Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.

    2014-12-01

    This study investigated the performance of the Lee-Carter (LC) method and its variants in modeling and forecasting Malaysian mortality. These include the original LC, the Lee-Miller (LM) variant, and the Booth-Maindonald-Smith (BMS) variant. The methods were evaluated using Malaysian mortality data, measured as age-specific death rates (ASDR) for 1971-2009 for the overall population, while data for 1980-2009 were used in separate models for the male and female populations. The performance of the variants was examined in terms of the goodness of fit of the models and forecasting accuracy. Comparison was made based on several criteria, namely mean square error (MSE), root mean square error (RMSE), mean absolute deviation (MAD), and mean absolute percentage error (MAPE). The results indicate that the BMS method performed best in in-sample fitting, both for the overall population and when the models were fitted separately for the male and female populations. However, for out-of-sample forecast accuracy, the BMS method was best only when the data were fitted to the overall population; when the data were fitted separately, LCnone performed better for the male population and the LM method for the female population.

  1. The moving-window Bayesian maximum entropy framework: estimation of PM(2.5) yearly average concentration across the contiguous United States.

    PubMed

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7%, with the proportion of incomplete data increasing from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.

  2. Empirical best linear unbiased prediction method for small areas with restricted maximum likelihood and bootstrap procedure to estimate the average of household expenditure per capita in Banjar Regency

    NASA Astrophysics Data System (ADS)

    Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho

    2017-03-01

    So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are limited to the district level. Sample sizes at smaller area levels are insufficient, so measuring poverty indicators by direct estimation produces high standard errors, and analyses based on them are unreliable. To solve this problem, an estimation method that provides better accuracy by combining survey data with other auxiliary data is required. One method often used for this is Small Area Estimation (SAE). Among the many SAE methods is the Empirical Best Linear Unbiased Prediction (EBLUP). The EBLUP method with the maximum likelihood (ML) procedure does not consider the loss of degrees of freedom due to estimating β with β̂; this drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling the average household expenditure per capita, and implements a bootstrap procedure to calculate the mean square error (MSE) in order to compare the accuracy of the EBLUP method with the direct estimation method. Results show that the EBLUP method reduced the MSE in small area estimation.

  3. Mapping CHU9D Utility Scores from the PedsQL™ 4.0 SF-15.

    PubMed

    Mpundu-Kaambwa, Christine; Chen, Gang; Russo, Remo; Stevens, Katherine; Petersen, Karin Dam; Ratcliffe, Julie

    2017-04-01

    The Pediatric Quality of Life Inventory™ 4.0 Short Form 15 Generic Core Scales (hereafter the PedsQL) and the Child Health Utility-9 Dimensions (CHU9D) are two generic instruments designed to measure health-related quality of life in children and adolescents in the general population and paediatric patient groups living with specific health conditions. Although the PedsQL is widely used among paediatric patient populations, presently it is not possible to directly use the scores from the instrument to calculate quality-adjusted life-years (QALYs) for application in economic evaluation because it produces summary scores which are not preference-based. This paper examines different econometric mapping techniques for estimating CHU9D utility scores from the PedsQL for the purpose of calculating QALYs for cost-utility analysis. The PedsQL and the CHU9D were completed by a community sample of 755 Australian adolescents aged 15-17 years. Seven regression models were estimated: ordinary least squares estimator, generalised linear model, robust MM estimator, multivariate factorial polynomial estimator, beta-binomial estimator, finite mixture model and multinomial logistic model. The mean absolute error (MAE) and the mean squared error (MSE) were used to assess predictive ability of the models. The MM estimator with stepwise-selected PedsQL dimension scores as explanatory variables had the best predictive accuracy using MAE and the equivalent beta-binomial model had the best predictive accuracy using MSE. Our mapping algorithm facilitates the estimation of health-state utilities for use within economic evaluations where only PedsQL data is available and is suitable for use in community-based adolescents aged 15-17 years. Applicability of the algorithm in younger populations should be assessed in further research.

  4. SU-C-207B-06: Comparison of Registration Methods for Modeling Pathologic Response of Esophageal Cancer to Chemoradiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riyahi, S; Choi, W; Bhooshan, N

    2016-06-15

    Purpose: To compare linear and deformable registration methods for evaluating tumor response to chemoradiation therapy (CRT) in patients with esophageal cancer. Methods: Linear and multi-resolution BSpline deformable registration were performed on pre- and post-CRT CT/PET images of 20 patients with esophageal cancer. For both registration methods, CT was registered using the Mean Square Error (MSE) metric; because PET is multi-modal with respect to CT, PET was registered by applying the transformation obtained from the corresponding CT using Mutual Information (MI). Similarity of the warped CT/PET was quantitatively evaluated using Normalized Mutual Information (NMI), and plausibility of the deformation field (DF) was assessed using inverse consistency error. To evaluate tumor response, four groups of tumor features were examined: (1) conventional PET/CT features, e.g., SUV and diameter; (2) clinical parameters, e.g., TNM stage and histology; (3) spatial-temporal PET features that describe intensity, texture and geometry of the tumor; and (4) all features combined. Dominant features were identified using 10-fold cross-validation, and a Support Vector Machine (SVM) was deployed for tumor response prediction, with accuracy evaluated by the ROC Area Under the Curve (AUC). Results: The mean and standard deviation of NMI for deformable registration using MSE were 0.2±0.054, versus 0.1±0.026 for linear registration, showing higher NMI for deformable registration. Likewise, for the MI metric, deformable registration had 0.13±0.035 compared to 0.12±0.037 for the linear counterpart. The inverse consistency error for the MSE metric was 4.65±2.49 for deformable registration and 1.32±2.3 for linear registration, showing a smaller value for linear registration; the same conclusion was obtained for MI. AUC for both linear and deformable registration was 1, showing no difference in terms of response evaluation. Conclusion: Deformable registration showed better NMI than linear registration, but the inverse consistency error of the transformation was lower for linear registration. We do not expect to see a significant difference when warping PET images using deformable or linear registration. This work was supported in part by National Cancer Institute Grant R01CA172638.
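
    For readers who want to reproduce the same-modality CT registration step, a hedged sketch using SimpleITK's mean-squares metric is shown below. File names, transform choice, and optimizer settings are illustrative assumptions; the study's multi-resolution BSpline deformable setup is more involved.

    ```python
    import SimpleITK as sitk

    fixed = sitk.ReadImage("pre_crt_ct.nii.gz", sitk.sitkFloat32)    # hypothetical paths
    moving = sitk.ReadImage("post_crt_ct.nii.gz", sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()          # MSE metric: appropriate for CT-to-CT
    # For multi-modality pairs one would instead use, e.g.:
    # reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=2.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)

    tx = reg.Execute(fixed, moving)
    warped = sitk.Resample(moving, fixed, tx, sitk.sitkLinear, 0.0, moving.GetPixelID())
    ```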

  5. Comparing the cohort design and the nested case–control design in the presence of both time-invariant and time-dependent treatment and competing risks: bias and precision

    PubMed Central

    Austin, Peter C; Anderson, Geoffrey M; Cigsar, Candemir; Gruneir, Andrea

    2012-01-01

    Purpose Observational studies using electronic administrative healthcare databases are often used to estimate the effects of treatments and exposures. Traditionally, a cohort design has been used to estimate these effects, but increasingly, studies are using a nested case–control (NCC) design. The relative statistical efficiency of these two designs has not been examined in detail. Methods We used Monte Carlo simulations to compare these two designs in terms of the bias and precision of effect estimates. We examined three different settings: (A) treatment occurred at baseline, and there was a single outcome of interest; (B) treatment was time varying, and there was a single outcome; and (C) treatment occurred at baseline, and there was a secondary event that competed with the primary event of interest. Comparisons were made of percentage bias, length of 95% confidence interval, and mean squared error (MSE) as a combined measure of bias and precision. Results In Setting A, bias was similar between designs, but the cohort design was more precise and had a lower MSE in all scenarios. In Settings B and C, the cohort design was more precise and had a lower MSE in all scenarios, and the NCC design tended to result in estimates with greater bias compared with the cohort design. Conclusions We conclude that in a range of settings and scenarios, the cohort design is superior in terms of precision and MSE. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22653805

  6. Improving Marine Corps Assignment of SDAP Levels

    DTIC Science & Technology

    2013-03-01

    [Abstract not available: the source record contains only fragments of citation text and regression output, e.g., Root MSE = 6.2241, Root MSE = .26638, Adj R-squared = 0.0097.]

  7. Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE

    PubMed Central

    Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2013-01-01

    Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
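
    The Monte-Carlo divergence trick at the heart of this scheme is compact enough to sketch for the real-valued case (the paper's contribution extends it to complex-valued images). The denoiser and noise level below are illustrative assumptions.

    ```python
    import numpy as np

    def mc_sure(f, y, sigma, eps=1e-3, rng=None):
        """Monte-Carlo SURE: estimates the MSE risk of f(y) for y = x + N(0, sigma^2 I).
        The divergence of f is approximated with a single random probe vector."""
        rng = rng or np.random.default_rng()
        b = rng.choice([-1.0, 1.0], size=y.shape)             # Rademacher probe
        div = np.sum(b * (f(y + eps * b) - f(y))) / eps       # ~ trace of the Jacobian of f
        return -y.size * sigma**2 + np.sum((f(y) - y) ** 2) + 2.0 * sigma**2 * div

    # Example: pick the soft-threshold level that minimizes the SURE estimate
    rng = np.random.default_rng(0)
    x = np.where(rng.random(1024) < 0.1, 5.0, 0.0)            # sparse ground truth
    y = x + rng.normal(0.0, 1.0, 1024)
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    best_t = min((0.5, 1.0, 1.5, 2.0),
                 key=lambda t: mc_sure(lambda v: soft(v, t), y, 1.0))
    ```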

  8. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
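
    The kind of Monte Carlo comparison used in the study can be illustrated with a toy within-day effort curve (synthetic, not the study's data); systematic sampling tends to win when effort varies smoothly over the day.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    periods = 48                                          # hypothetical count periods per day
    effort = 20 * np.sin(np.linspace(0, np.pi, periods)) + rng.poisson(3, periods)
    true_total = effort.sum()

    n, reps, step = 8, 10_000, 48 // 8
    mse = {"SRS": 0.0, "SYS": 0.0}
    for _ in range(reps):
        srs_idx = rng.choice(periods, size=n, replace=False)       # simple random sample
        sys_idx = np.arange(rng.integers(0, step), periods, step)  # systematic sample
        for name, idx in (("SRS", srs_idx), ("SYS", sys_idx)):
            est = effort[idx].mean() * periods                     # expansion estimator
            mse[name] += (est - true_total) ** 2 / reps
    print(mse)   # SYS typically shows the smaller MSE here
    ```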

  9. RGB-to-RGBG conversion algorithm with adaptive weighting factors based on edge detection and minimal square error.

    PubMed

    Huang, Chengqiang; Yang, Youchang; Wu, Bo; Yu, Weize

    2018-06-01

    The sub-pixel arrangement of an RGBG panel differs from that of an image in RGB format, so an algorithm that converts RGB to RGBG is needed to display an RGB image on an RGBG panel. However, in published studies of this conversion, the information loss is still large even though color fringing artifacts are weakened. In this paper, an RGB-to-RGBG conversion algorithm with adaptive weighting factors based on edge detection and minimal square error (EDMSE) is proposed. The main points of innovation include the following: (1) edge detection is first used to distinguish image details with serious color fringing artifacts from image details that are prone to be lost in the RGB-RGBG conversion; (2) for image details with serious color fringing artifacts, a weighting factor of 0.5 is applied to weaken the artifacts; and (3) for image details that are prone to be lost in the conversion, a special mechanism to minimize the square error is proposed. Experiments show that color fringing artifacts are slightly improved by EDMSE, and the MSE values of the processed image are 19.6% and 7% smaller than those of images processed by the direct assignment and weighting factor algorithms, respectively. The proposed algorithm is implemented on a field programmable gate array to enable image display on the RGBG panel.

  10. Comparison of image segmentation of lungs using methods: connected threshold, neighborhood connected, and threshold level set segmentation

    NASA Astrophysics Data System (ADS)

    Amanda, A. R.; Widita, R.

    2016-03-01

    The aim of this research is to compare several image segmentation methods for the lungs based on performance evaluation parameters (Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR)). In this study, the methods compared were connected threshold, neighborhood connected, and threshold level set segmentation on images of the lungs. These three methods require one important parameter, i.e., the threshold; the threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). This research used 5 lung images, and the results were compared using the performance evaluation parameters computed in MATLAB. A segmentation method is considered to have good quality if it yields the smallest MSE value and the highest PSNR. The results show that the connected threshold method performed best on four of the sample images, while threshold level set segmentation performed best on one. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
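
    The two evaluation parameters are easy to compute directly; a minimal sketch (assuming 8-bit images) is given below.

    ```python
    import numpy as np

    def mse(a, b):
        """Mean square error between two equally sized images."""
        return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

    def psnr(a, b, peak=255.0):
        """Peak signal-to-noise ratio in dB; higher means closer agreement."""
        m = mse(a, b)
        return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

    # Usage: compare a segmentation result against a reference mask
    ref = np.zeros((64, 64), dtype=np.uint8)
    ref[16:48, 16:48] = 255
    seg = ref.copy()
    seg[16:20, 16:48] = 0               # simulated segmentation error
    print(mse(ref, seg), psnr(ref, seg))
    ```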

  11. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    PubMed Central

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

    Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and is thus recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  12. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to a family of adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics; accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms, the filter coefficients are partially updated, which reduces the computational complexity. In the VSS-SR-APA, an optimal selection of input regressors is performed during adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
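
    For orientation, the fixed-step NLMS baseline that these VSS variants extend can be sketched as below; the VSS algorithms replace the constant mu with a step-size vector chosen to minimize the MSD, which is model-specific and omitted here.

    ```python
    import numpy as np

    def nlms(x, d, M=16, mu=0.5, eps=1e-6):
        """Fixed-step NLMS for system identification; returns weights and error signal."""
        w, e = np.zeros(M), np.zeros(len(x))
        for n in range(M - 1, len(x)):
            u = x[n - M + 1:n + 1][::-1]           # regressor, most recent sample first
            e[n] = d[n] - w @ u
            w += mu * e[n] * u / (eps + u @ u)     # normalization keeps the update stable
        return w, e

    # Identify a random FIR channel from noisy observations
    rng = np.random.default_rng(0)
    h = rng.normal(0, 1, 16)
    x = rng.normal(0, 1, 5000)
    d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(0, 1, len(x))
    w, e = nlms(x, d)                              # w should approach h; e**2 tracks the MSE
    ```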

  13. Estimation of color filter array data from JPEG images for improved demosaicking

    NASA Astrophysics Data System (ADS)

    Feng, Wei; Reeves, Stanley J.

    2006-02-01

    On-camera demosaicking algorithms are necessarily simple and therefore do not yield the best possible images. However, off-camera demosaicking algorithms face the additional challenge that the data has been compressed and therefore corrupted by quantization noise. We propose a method to estimate the original color filter array (CFA) data from JPEG-compressed images so that more sophisticated (and better) demosaicking schemes can be applied to get higher-quality images. The JPEG image formation process, including simple demosaicking, color space transformation, chrominance channel decimation and DCT, is modeled as a series of matrix operations followed by quantization on the CFA data, which is estimated by least squares. An iterative method is used to conserve memory and speed computation. Our experiments show that the mean square error (MSE) with respect to the original CFA data is reduced significantly using our algorithm, compared to that of unprocessed JPEG and deblocked JPEG data.

  14. Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data

    PubMed Central

    Young, Alistair A.; Li, Xiaosong

    2014-01-01

    Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA) and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic disease proved their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study indeed highlighted that the SVM outperforms the ARIMA model and the decomposition methods in most cases. PMID:24505382

  15. Parameter estimation method and updating of regional prediction equations for ungaged sites in the desert region of California

    USGS Publications Warehouse

    Barth, Nancy A.; Veilleux, Andrea G.

    2012-01-01

    The U.S. Geological Survey (USGS) is currently updating at-site flood frequency estimates for USGS streamflow-gaging stations in the desert region of California. The at-site flood-frequency analysis is complicated by short record lengths (less than 20 years is common) and numerous zero flows/low outliers at many sites. Estimates of the three parameters (mean, standard deviation, and skew) required for fitting the log Pearson Type 3 (LP3) distribution are likely to be highly unreliable based on the limited and heavily censored at-site data. In a generalization of the recommendations in Bulletin 17B, a regional analysis was used to develop regional estimates of all three parameters of the LP3 distribution. A regional skew value of zero from a previously published report was used with a new estimated mean squared error (MSE) of 0.20. A weighted least squares (WLS) regression method was used to develop both a regional standard deviation model and a regional mean model based on annual peak-discharge data for 33 USGS stations throughout California's desert region. At-site standard deviation and mean values were determined by using an expected moments algorithm (EMA) method for fitting the LP3 distribution to the logarithms of annual peak-discharge data. Additionally, a multiple Grubbs-Beck (MGB) test, a generalization of the test recommended in Bulletin 17B, was used for detecting multiple potentially influential low outliers in a flood series. The WLS regression found that no basin characteristics could explain the variability of the standard deviation; consequently, a constant regional standard deviation model was selected, resulting in a log-space value of 0.91 with an MSE of 0.03 log units. Drainage area, however, was found to be statistically significant in explaining the site-to-site variability of the mean. The linear WLS regional mean model based on drainage area had a pseudo-R2 of 51 percent and an MSE of 0.32 log units. The regional parameter estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent.

  16. Kramers-Kronig based quality factor for shear wave propagation in soft tissue

    PubMed Central

    Urban, M W; Greenleaf, J F

    2009-01-01

    Shear wave propagation techniques have been introduced for measuring the viscoelastic material properties of tissue, but assessing the accuracy of these measurements is difficult for in vivo measurements in tissue. We propose using the Kramers-Kronig relationships to assess the consistency and quality of the measurements of shear wave attenuation and phase velocity. In ex vivo skeletal muscle we measured the wave attenuation at different frequencies, and then applied finite bandwidth Kramers-Kronig equations to predict the phase velocities. We compared these predictions with the measured phase velocities and assessed the mean square error (MSE) as a quality factor. An algorithm was derived for computing a quality factor using the Kramers-Kronig relationships. PMID:19759409

  17. Mapping health outcome measures from a stroke registry to EQ-5D weights.

    PubMed

    Ghatnekar, Ola; Eriksson, Marie; Glader, Eva-Lotta

    2013-03-07

    To map health outcome related variables from a national register, not part of any validated instrument, with EQ-5D weights among stroke patients. We used two cross-sectional data sets including patient characteristics, outcome variables and EQ-5D weights from the national Swedish stroke register. Three regression techniques were used on the estimation set (n=272): ordinary least squares (OLS), Tobit, and censored least absolute deviation (CLAD). The regression coefficients for "dressing", "toileting", "mobility", "mood", "general health" and "proxy-responders" were applied to the validation set (n=272), and the performance was analysed with mean absolute error (MAE) and mean square error (MSE). The number of statistically significant coefficients varied by model, but all models generated consistent coefficients in terms of sign. Mean utility was underestimated in all models (least in OLS) and with lower variation (least in OLS) compared to the observed. The maximum attainable EQ-5D weight ranged from 0.90 (OLS) to 1.00 (Tobit and CLAD). Health states with utility weights <0.5 had greater errors than those with weights ≥ 0.5 (P<0.01). This study indicates that it is possible to map non-validated health outcome measures from a stroke register into preference-based utilities to study the development of stroke care over time, and to compare with other conditions in terms of utility.

  18. Predicting Length of Stay in Intensive Care Units after Cardiac Surgery: Comparison of Artificial Neural Networks and Adaptive Neuro-fuzzy System.

    PubMed

    Maharlou, Hamidreza; Niakan Kalhori, Sharareh R; Shahbazi, Shahrbanoo; Ravangard, Ramin

    2018-04-01

    Accurate prediction of patients' length of stay is highly important. This study compared the performance of artificial neural network and adaptive neuro-fuzzy system algorithms in predicting patients' length of stay in intensive care units (ICU) after cardiac surgery. A cross-sectional, analytical, and applied study was conducted. The required data were collected from 311 cardiac patients admitted to intensive care units after surgery at three hospitals of Shiraz, Iran, through a non-random convenience sampling method during the second quarter of 2016. Following the initial processing of influential factors, models were created and evaluated. The results showed that the adaptive neuro-fuzzy algorithm (mean squared error [MSE] = 7 and R = 0.88) produced a more precise model than the artificial neural network (MSE = 21 and R = 0.60). The adaptive neuro-fuzzy algorithm produces a more accurate model because, as a hybrid algorithm, it applies both the capabilities of a neural network architecture and experts' knowledge. It identifies nonlinear components, yielding remarkable results for predicting length of stay, a useful output to support ICU management, enabling higher-quality administration and cost reduction.

  19. Cancer Detection in Microarray Data Using a Modified Cat Swarm Optimization Clustering Approach

    PubMed

    M, Pandi; R, Balamurugan; N, Sadhasivam

    2017-12-29

    Objective: A better understanding of functional genomics can be obtained by extracting patterns hidden in gene expression data. This could have paramount implications for cancer diagnosis, gene treatments and other domains. Clustering may reveal natural structures and identify interesting patterns in underlying data. The main objective of this research was to derive a heuristic approach to detecting highly co-expressed cancer-related genes from gene expression data with minimum Mean Squared Error (MSE). Methods: A modified CSO algorithm using Harmony Search (MCSO-HS) was applied to cluster cancer gene expression data. Experimental results were analyzed using two benchmark cancer gene expression datasets, for leukaemia and breast cancer. Results: In terms of MSE, MCSO-HS was better than HS and CSO by 13% and 9%, respectively, on the leukaemia dataset, and by 22% and 17%, respectively, on the breast cancer dataset. Conclusion: MCSO-HS outperformed HS and CSO on both benchmark datasets. To validate the clustering results, this work was tested with internal and external cluster validation indices. The work also includes biological validation of the clusters with gene ontology in terms of function, process and component.

  20. Automatic Segmentation of Fluorescence Lifetime Microscopy Images of Cells Using Multi-Resolution Community Detection -A First Study

    PubMed Central

    Hu, Dandan; Sarder, Pinaki; Ronhovde, Peter; Orthaus, Sandra; Achilefu, Samuel; Nussinov, Zohar

    2014-01-01

    Inspired by a multi-resolution community detection (MCD) based network segmentation method, we suggest an automatic method for segmenting fluorescence lifetime (FLT) imaging microscopy (FLIM) images of cells in a first pilot investigation on two selected images. The image processing problem is framed as identifying segments with respective average FLTs against the background in FLIM images. The proposed method segments a FLIM image for a given resolution of the network defined using image pixels as the nodes and similarity between the FLTs of the pixels as the edges. In the resulting segmentation, low network resolution leads to larger segments, and high network resolution leads to smaller segments. Further, using the proposed method, the mean-square error (MSE) in estimating the FLT segments in a FLIM image was found to consistently decrease with increasing resolution of the corresponding network. The MCD method appeared to perform better than a popular spectral clustering based method in performing FLIM image segmentation. At high resolution, the spectral segmentation method introduced noisy segments in its output, and it was unable to achieve a consistent decrease in MSE with increasing resolution. PMID:24251410

  1. Performance of informative priors skeptical of large treatment effects in clinical trials: A simulation study.

    PubMed

    Pedroza, Claudia; Han, Weilu; Thanh Truong, Van Thi; Green, Charles; Tyson, Jon E

    2018-01-01

    One of the main advantages of Bayesian analyses of clinical trials is their ability to formally incorporate skepticism about large treatment effects through the use of informative priors. We conducted a simulation study to assess the performance of informative normal, Student-t, and beta distributions in estimating relative risk (RR) or odds ratio (OR) for binary outcomes. Simulation scenarios varied the prior standard deviation (SD; level of skepticism of large treatment effects), outcome rate in the control group, true treatment effect, and sample size. We compared the priors with regard to bias, mean squared error (MSE), and coverage of 95% credible intervals. Simulation results show that the prior SD influenced the posterior to a greater degree than the particular distributional form of the prior. For RR, priors with a 95% interval of 0.50-2.0 performed well in terms of bias, MSE, and coverage under most scenarios. For OR, priors with a wider 95% interval of 0.23-4.35 had good performance. We recommend the use of informative priors that exclude implausibly large treatment effects in analyses of clinical trials, particularly for major outcomes such as mortality.
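
    To make the prior construction concrete, here is a small worked sketch: a normal prior on the log scale whose 95% interval matches the ones above, followed by approximate normal-normal shrinkage of a hypothetical trial estimate. This is an illustration of the idea, not the authors' full simulation.

    ```python
    import numpy as np

    # Skeptical normal prior on log(RR), centered at 0, 95% interval (0.5, 2.0):
    sd_rr = np.log(2.0) / 1.96            # ~0.354
    # Wider prior on log(OR), 95% interval (0.23, 4.35):
    sd_or = np.log(4.35) / 1.96           # ~0.750

    # Approximate posterior for a hypothetical observed log(RR) and its standard error
    log_rr_hat, se_hat = np.log(0.4), 0.35
    post_var = 1.0 / (1.0 / sd_rr**2 + 1.0 / se_hat**2)
    post_mean = post_var * (log_rr_hat / se_hat**2)     # prior mean of 0 contributes nothing
    print("skeptical RR estimate:", np.exp(post_mean))  # pulled toward 1.0
    ```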

  2. RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing

    NASA Astrophysics Data System (ADS)

    Gui, Guan; Xu, Li; Adachi, Fumiyuki

    2014-12-01

    Nonlinear sparse sensing (NSS) techniques have been adopted for realizing compressive sensing in many applications such as radar imaging. Unlike NSS, in this paper we propose an adaptive sparse sensing (ASS) approach using the reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm, which depends on several given parameters: the reweighted factor, the regularization parameter, and the initial step size. First, based on the independence assumption, the Cramer-Rao lower bound (CRLB) is derived for performance comparison. In addition, a reweighted factor selection method is proposed to achieve robust estimation performance. Finally, to verify the algorithm, Monte Carlo computer simulations show that ASS achieves much better mean square error (MSE) performance than NSS.
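
    One common form of the RZA-NLMF update is sketched below; the exact normalization and parameter values vary across the literature, so this should be read as an illustrative sketch rather than the paper's algorithm.

    ```python
    import numpy as np

    def rza_nlmf(x, d, M=16, mu=0.01, rho=5e-4, eps_w=10.0, eps=1e-6):
        """Reweighted zero-attracting normalized least mean fourth (sketch).
        The cubed-error term gives the fourth-power-cost NLMF step; the
        reweighted zero attractor shrinks small taps, promoting sparsity."""
        w = np.zeros(M)
        for n in range(M - 1, len(x)):
            u = x[n - M + 1:n + 1][::-1]
            err = d[n] - w @ u
            w += mu * (err ** 3) * u / (eps + (u @ u) ** 2)      # normalized LMF step
            w -= rho * np.sign(w) / (1.0 + eps_w * np.abs(w))    # reweighted zero attractor
        return w
    ```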

  3. Time series forecasting of future claims amount of SOCSO's employment injury scheme (EIS)

    NASA Astrophysics Data System (ADS)

    Zulkifli, Faiz; Ismail, Isma Liana; Chek, Mohd Zaki Awang; Jamal, Nur Faezah; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md; Noor, Syamsul Ikram Mohd; Ahmad, Abu Bakar

    2012-09-01

    The Employment Injury Scheme (EIS) protects employees who are injured in accidents while working, while commuting between home and the workplace, during an authorized recess, or while travelling in connection with work. The main purpose of this study is to forecast the claims amount of the EIS for the years 2011 to 2015 using appropriate models. These models were tested on the actual EIS data from 1972 to 2010. Three forecasting models are compared: the Naïve with Trend Model, the Average Percent Change Model and the Double Exponential Smoothing Model. The best model is selected based on the smallest values of the error measures, namely the Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE). From the results, the model that best fits the EIS data is the Average Percent Change Model. Furthermore, the results also show that the forecast claims amount of the EIS for 2011 to 2015 continues to trend upwards from 2010.

  4. Implementation of bayesian model averaging on the weather data forecasting applications utilizing open weather map

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.

    2018-02-01

    Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity and other atmospheric phenomena. Extreme weather due to global warming can lead to drought, flood, hurricanes and other weather events that directly affect social and economic activities. Hence, a forecasting technique is needed to predict the weather, particularly a GIS-based mapping process with information about the current weather status at the coordinates of each region and the capability to forecast seven days ahead. The data used in this research are retrieved in real time from the openweathermap server and BMKG. To obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated using the mean square error (MSE): the error for minimum temperature is 0.28 and for maximum temperature 0.15, while the error for minimum humidity is 0.38 and for maximum humidity 0.04. The forecasting error for wind speed is 0.076. The lower the forecasting error rate, the better the accuracy.

  5. Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans.

    PubMed

    Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa

    2016-03-23

    We previously proposed a hybrid model combining both the autoregressive integrated moving average (ARIMA) and the nonlinear autoregressive neural network (NARNN) models in forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County, using our ARIMA-NARNN model, thereby further certifying the reliability of our hybrid model. We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The modeling time range included was the annual prevalence from 1956 to 2008 while the testing time range included was from 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure the model performance. We reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or NARNN models. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. The ARIMA-NARNN model can be well applied to analyze surveillance data for early warning systems for the control and elimination of schistosomiasis.
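
    The hybrid structure (a linear ARIMA component plus a nonlinear autoregressive network fitted to the ARIMA residuals, with the two forecasts summed) can be sketched as follows. An MLP stands in for the paper's NARNN, and the order and lag settings are placeholders.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from statsmodels.tsa.arima.model import ARIMA

    def hybrid_forecast(y, order=(1, 1, 1), lags=3, steps=4):
        """ARIMA-plus-NN hybrid: ARIMA captures the linear part, a small MLP
        models the residuals autoregressively, and the forecasts are summed."""
        fit = ARIMA(y, order=order).fit()
        resid = np.asarray(fit.resid)
        X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
        nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                          random_state=0).fit(X, resid[lags:])
        linear = np.asarray(fit.forecast(steps))
        window, nonlinear = list(resid[-lags:]), []
        for _ in range(steps):                     # iterate one-step residual forecasts
            r = nn.predict(np.array(window[-lags:])[None, :])[0]
            nonlinear.append(r)
            window.append(r)
        return linear + np.array(nonlinear)
    ```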

  6. Robustness of speckle imaging techniques applied to horizontal imaging scenarios

    NASA Astrophysics Data System (ADS)

    Bos, Jeremy P.

    Atmospheric turbulence near the ground severely limits the quality of imagery acquired over long horizontal paths. In defense, surveillance, and border security applications, there is interest in deploying man-portable, embedded systems incorporating image reconstruction to improve the quality of imagery available to operators. To be effective, these systems must operate over significant variations in turbulence conditions while also being subject to other variations due to operation by novice users. Systems that meet these requirements and are otherwise designed to be immune to the factors that cause variation in performance are considered robust. In addition to robustness in design, the portable nature of these systems implies a preference for minimal computational complexity. Speckle imaging methods are among a variety of methods recently proposed for use in man-portable horizontal imagers. In this work, the robustness of speckle imaging methods is established by identifying a subset of design parameters that provide immunity to the expected variations in operating conditions while minimizing the computation time necessary for image recovery. This performance evaluation is made possible by a novel technique for simulating anisoplanatic image formation. I find that incorporating as few as 15 image frames and 4 estimates of the object phase per reconstructed frame provides an average 45% reduction in Mean Squared Error (MSE) and a 68% reduction in the deviation of the MSE. In addition, the Knox-Thompson phase recovery method is demonstrated to produce images in half the time required by the bispectrum. Finally, it is shown that certain blind image quality metrics can be used in place of the MSE to evaluate reconstruction quality in field scenarios. Using blind metrics rather than depending on user estimates allows for reconstruction quality that differs from the minimum MSE by as little as 1%, significantly reducing the deviation in performance due to user action.

  7. Variable is better than invariable: sparse VSS-NLMS algorithms with application to adaptive MIMO channel estimation.

    PubMed

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    Channel estimation is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that step-size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained, and their lower bounds are also derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results are shown to prove that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics.

  8. Variable Is Better Than Invariable: Sparse VSS-NLMS Algorithms with Application to Adaptive MIMO Channel Estimation

    PubMed Central

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    Channel estimation is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that step-size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated in MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained, and their lower bounds are also derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results are shown to prove that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics. PMID:25089286

  9. Fuzzy Counter Propagation Neural Network Control for a Class of Nonlinear Dynamical Systems

    PubMed Central

    Sakhre, Vandana; Jain, Sanjeev; Sapkal, Vilas S.; Agarwal, Dev P.

    2015-01-01

    A Fuzzy Counter Propagation Neural Network (FCPN) controller design is developed for a class of nonlinear dynamical systems. In this process, the weights connecting the instar and outstar, that is, the input-hidden and hidden-output layers, respectively, are adjusted by using Fuzzy Competitive Learning (FCL). The FCL paradigm adopts a competitive learning principle to calculate the proposed Best Matched Node (BMN). This strategy offers robust control of nonlinear dynamical systems. FCPN is compared with existing networks such as the Dynamic Network (DN) and Back Propagation Network (BPN) on the basis of Mean Absolute Error (MAE), Mean Square Error (MSE), Best Fit Rate (BFR), and so forth. The results show that the proposed FCPN gives better results than DN and BPN. The effectiveness of the proposed FCPN algorithms is demonstrated through simulations of four nonlinear dynamical systems and multiple-input single-output (MISO) and single-input single-output (SISO) gas furnace Box-Jenkins time series data. PMID:26366169

  10. Fuzzy Counter Propagation Neural Network Control for a Class of Nonlinear Dynamical Systems.

    PubMed

    Sakhre, Vandana; Jain, Sanjeev; Sapkal, Vilas S; Agarwal, Dev P

    2015-01-01

    A Fuzzy Counter Propagation Neural Network (FCPN) controller design is developed for a class of nonlinear dynamical systems. In this process, the weights connecting the instar and outstar, that is, the input-hidden and hidden-output layers, respectively, are adjusted by using Fuzzy Competitive Learning (FCL). The FCL paradigm adopts a competitive learning principle to calculate the proposed Best Matched Node (BMN). This strategy offers robust control of nonlinear dynamical systems. FCPN is compared with existing networks such as the Dynamic Network (DN) and Back Propagation Network (BPN) on the basis of Mean Absolute Error (MAE), Mean Square Error (MSE), Best Fit Rate (BFR), and so forth. The results show that the proposed FCPN gives better results than DN and BPN. The effectiveness of the proposed FCPN algorithms is demonstrated through simulations of four nonlinear dynamical systems and multiple-input single-output (MISO) and single-input single-output (SISO) gas furnace Box-Jenkins time series data.

  11. Mapping health outcome measures from a stroke registry to EQ-5D weights

    PubMed Central

    2013-01-01

    Purpose To map health outcome related variables from a national register, not part of any validated instrument, with EQ-5D weights among stroke patients. Methods We used two cross-sectional data sets including patient characteristics, outcome variables and EQ-5D weights from the national Swedish stroke register. Three regression techniques were used on the estimation set (n = 272): ordinary least squares (OLS), Tobit, and censored least absolute deviation (CLAD). The regression coefficients for “dressing“, “toileting“, “mobility”, “mood”, “general health” and “proxy-responders” were applied to the validation set (n = 272), and the performance was analysed with mean absolute error (MAE) and mean square error (MSE). Results The number of statistically significant coefficients varied by model, but all models generated consistent coefficients in terms of sign. Mean utility was underestimated in all models (least in OLS) and with lower variation (least in OLS) compared to the observed. The maximum attainable EQ-5D weight ranged from 0.90 (OLS) to 1.00 (Tobit and CLAD). Health states with utility weights <0.5 had greater errors than those with weights ≥0.5 (P < 0.01). Conclusion This study indicates that it is possible to map non-validated health outcome measures from a stroke register into preference-based utilities to study the development of stroke care over time, and to compare with other conditions in terms of utility. PMID:23496957

  12. Analysis backpropagation methods with neural network for prediction of children's ability in psychomotoric

    NASA Astrophysics Data System (ADS)

    Izhari, F.; Dhany, H. W.; Zarlis, M.; Sutarman

    2018-03-01

    A good age for optimizing aspects of development is 4-6 years, particularly for psychomotor development. Psychomotor development is broad and difficult to monitor, but it is meaningful for a child's life because it directly affects behavior. This raises the problem of predicting a child's ability level from psychomotor data. This analysis uses the backpropagation method with an artificial neural network to predict children's psychomotor ability; at the end of training, the network achieved a mean squared error (MSE) of 0.001. Of the children aged 4-6 years, 30% were classified as having a good level of psychomotor ability, with the remainder rated excellent, fairly good, or less good.

  13. Cortical Decoding of Individual Finger and Wrist Kinematics for an Upper-Limb Neuroprosthesis

    PubMed Central

    Aggarwal, Vikram; Tenore, Francesco; Acharya, Soumyadipta; Schieber, Marc H.; Thakor, Nitish V.

    2010-01-01

    Previous research has shown that neuronal activity can be used to continuously decode the kinematics of gross movements involving arm and hand trajectory. However, decoding the kinematics of fine motor movements, such as the manipulation of individual fingers, has not been demonstrated. In this study, single unit activities were recorded from task-related neurons in M1 of two trained rhesus monkeys as they performed individuated movements of the fingers and wrist. Each primate's hand was placed in a manipulandum, and strain gauges at the tips of each finger were used to track the digit's position. Both linear and non-linear filters were designed to simultaneously predict the kinematics of each digit and the wrist, and their performance was compared using mean squared error and correlation coefficients. All models had high decoding accuracy, but the feedforward ANN (R=0.76–0.86, MSE=0.04–0.05) and Kalman filter (R=0.68–0.86, MSE=0.04–0.07) performed better than a simple linear regression filter (R=0.58–0.81, MSE=0.05–0.07). These results suggest that individual finger and wrist kinematics can be decoded with high accuracy and used to control a multi-fingered prosthetic hand in real time. PMID:19964645

  14. Predicting temperature drop rate of mass concrete during an initial cooling period using genetic programming

    NASA Astrophysics Data System (ADS)

    Bhattarai, Santosh; Zhou, Yihong; Zhao, Chunju; Zhou, Huawei

    2018-02-01

    Thermal cracking of concrete dams depends on the rate at which the concrete is cooled (the temperature drop rate per day) during the initial cooling period of the construction phase. To control thermal cracking of such structures, the temperature rise due to the heat of hydration of cement should be brought down at a suitable rate. In this study, an attempt has been made to formulate the relation between the cooling rate of mass concrete, the age of the concrete, and the water cooling parameters: flow rate and inlet temperature of the cooling water. Data measured in the summer season (April-August, 2009 to 2012) at a recently constructed high concrete dam were used to derive a prediction model with the Genetic Programming (GP) software "Eureqa". The coefficient of determination (R) and Mean Square Error (MSE) were used to evaluate the model; their values are 0.8855 and 0.002961, respectively. Sensitivity analysis was performed to evaluate the relative impact of the input parameters on the target parameter. Further, when the proposed model was tested on an independent dataset not included in the analysis, the results obtained from the GP model were close to the real field data.

  15. Mathematical modelling of temperature effect on growth kinetics of Pseudomonas spp. on sliced mushroom (Agaricus bisporus).

    PubMed

    Tarlak, Fatih; Ozdemir, Murat; Melikoglu, Mehmet

    2018-02-02

    The growth data of Pseudomonas spp. on sliced mushrooms (Agaricus bisporus) stored between 4 and 28°C were obtained and fitted to three different primary models, known as the modified Gompertz, logistic and Baranyi models. The goodness of fit of these models was compared by considering the mean squared error (MSE) and the coefficient of determination for nonlinear regression (pseudo-R2). The Baranyi model yielded the lowest MSE and highest pseudo-R2 values and was therefore selected as the best primary model. The maximum specific growth rate (r_max) and lag phase duration (λ) obtained from the Baranyi model were fitted to secondary models, namely the Ratkowsky and Arrhenius models. High pseudo-R2 and low MSE values indicated that the Arrhenius model has a high goodness of fit for determining the effect of temperature on r_max. The observed numbers of Pseudomonas spp. on sliced mushrooms from independent experiments were compared with the numbers predicted by the models by considering the B_f and A_f values, which were found to be 0.974 and 1.036, respectively. The correlation between the observed and predicted numbers of Pseudomonas spp. was high. Mushroom spoilage was simulated as a function of temperature with the models used. The models used for Pseudomonas spp. growth can provide a fast and cost-effective alternative to traditional microbiological techniques to determine the effect of storage temperature on product shelf-life. The models can be used to evaluate the growth behaviour of Pseudomonas spp. on sliced mushroom, set limits for the quantitative detection of microbial spoilage and assess product shelf-life. Copyright © 2017 Elsevier B.V. All rights reserved.
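
    The Ratkowsky secondary model referred to here is the square-root relation sqrt(r_max) = b(T - T_min); a minimal fitting sketch with synthetic growth rates (not the paper's data) is shown below.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def ratkowsky(T, b, T_min):
        """Square-root secondary model: sqrt(r_max) = b * (T - T_min)."""
        return b * (T - T_min)

    T = np.array([4.0, 8.0, 12.0, 16.0, 20.0, 24.0, 28.0])          # storage temps, degC
    r_max = np.array([0.02, 0.05, 0.09, 0.15, 0.22, 0.31, 0.42])    # synthetic rates, 1/h
    (b, T_min), _ = curve_fit(ratkowsky, T, np.sqrt(r_max), p0=(0.02, -5.0))

    pred = ratkowsky(T, b, T_min) ** 2
    mse = np.mean((r_max - pred) ** 2)       # goodness of fit on the original scale
    ```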

  16. Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans

    PubMed Central

    Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa

    2016-01-01

    Background: We previously proposed a hybrid model combining both the autoregressive integrated moving average (ARIMA) and the nonlinear autoregressive neural network (NARNN) models in forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County, using our ARIMA-NARNN model, thereby further certifying the reliability of our hybrid model. Methods: We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The modeling time range included was the annual prevalence from 1956 to 2008 while the testing time range included was from 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure the model performance. We reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. Results: The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or NARNN models. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. Conclusions: The ARIMA-NARNN model can be well applied to analyze surveillance data for early warning systems for the control and elimination of schistosomiasis. PMID:27023573

  17. Application of a Combined Model with Autoregressive Integrated Moving Average (ARIMA) and Generalized Regression Neural Network (GRNN) in Forecasting Hepatitis Incidence in Heng County, China

    PubMed Central

    Liang, Hao; Gao, Lian; Liang, Bingyu; Huang, Jiegang; Zang, Ning; Liao, Yanyan; Yu, Jun; Lai, Jingzhen; Qin, Fengxiang; Su, Jinming; Ye, Li; Chen, Hui

    2016-01-01

    Background Hepatitis is a serious public health problem with increasing cases and property damage in Heng County. It is necessary to develop a model to predict the hepatitis epidemic that could be useful for preventing this disease. Methods The autoregressive integrated moving average (ARIMA) model and the generalized regression neural network (GRNN) model were used to fit the incidence data from the Heng County CDC (Center for Disease Control and Prevention) from January 2005 to December 2012. Then, the ARIMA-GRNN hybrid model was developed. The incidence data from January 2013 to December 2013 were used to validate the models. Several parameters, including mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE) and mean square error (MSE), were used to compare the performance among the three models. Results The morbidity of hepatitis from Jan 2005 to Dec 2012 has seasonal variation and slightly rising trend. The ARIMA(0,1,2)(1,1,1)12 model was the most appropriate one with the residual test showing a white noise sequence. The smoothing factor of the basic GRNN model and the combined model was 1.8 and 0.07, respectively. The four parameters of the hybrid model were lower than those of the two single models in the validation. The parameters values of the GRNN model were the lowest in the fitting of the three models. Conclusions The hybrid ARIMA-GRNN model showed better hepatitis incidence forecasting in Heng County than the single ARIMA model and the basic GRNN model. It is a potential decision-supportive tool for controlling hepatitis in Heng County. PMID:27258555

  18. Seasonality and Trend Forecasting of Tuberculosis Prevalence Data in Eastern Cape, South Africa, Using a Hybrid Model.

    PubMed

    Azeez, Adeboye; Obaromi, Davies; Odeyemi, Akinwumi; Ndege, James; Muntabayi, Ruffin

    2016-07-26

    Tuberculosis (TB) is a deadly infectious disease caused by Mycobacterium tuberculosis. As a chronic and highly infectious disease, TB is prevalent in almost every part of the globe, and more than 95% of TB mortality occurs in low/middle-income countries. In 2014, approximately 10 million people were diagnosed with active TB and two million died from the disease. In this study, our aim is to compare the predictive powers of the seasonal autoregressive integrated moving average (SARIMA) model and a hybrid SARIMA-neural network auto-regression (SARIMA-NNAR) model for TB incidence, and to analyse its seasonality in South Africa. TB incidence case data from January 2010 to December 2015 were extracted from the Eastern Cape Health facility reports of the electronic Tuberculosis Register (ERT.Net). A SARIMA model and a combined SARIMA-NNAR model were used in analysing and predicting the TB data from 2010 to 2015. Simulation performance parameters of mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean percent error (MPE), mean absolute scaled error (MASE) and mean absolute percentage error (MAPE) were applied to assess which model predicted better. Though both models could predict TB incidence, the combined model displayed better performance. For the combined model, the Akaike information criterion (AIC), second-order AIC (AICc) and Bayesian information criterion (BIC) are 288.56, 308.31 and 299.09 respectively, lower than the SARIMA model's corresponding values of 329.02, 327.20 and 341.99. The SARIMA-NNAR model forecast a slightly higher seasonal TB incidence trend than the single model. The combined model indicated better TB incidence forecasting with a lower AICc. The model also indicates the need for resolute intervention to reduce infectious disease transmission, given co-infection with HIV and other concomitant diseases, and at festival peak periods.

  19. Performance evaluation of the multiple-image optical compression and encryption method by increasing the number of target images

    NASA Astrophysics Data System (ADS)

    Aldossari, M.; Alfalou, A.; Brosseau, C.

    2017-08-01

    In an earlier study [Opt. Express 22, 22349-22368 (2014)], a compression and encryption method that simultaneously compresses and encrypts closely resembling images was proposed and validated. This multiple-image optical compression and encryption (MIOCE) method is based on a special fusion of the spectra of the different target images in the spectral domain. To assess the capacity of the MIOCE method, we evaluate the influence of the number of target images, which allows us to determine the performance limitation of the method. To achieve this goal, we use a criterion based on the root-mean-square (RMS) [Opt. Lett. 35, 1914-1916 (2010)] and the compression ratio to determine the spectral plane area. The different spectral areas are then merged in a single spectral plane. By choosing specific areas, we can compress 38 images together instead of the 26 possible with the classical MIOCE method. The quality of the reconstructed image is evaluated using the mean-square-error (MSE) criterion.

  20. Population pharmacokinetics modeling of oxcarbazepine to characterize drug interactions in Chinese children with epilepsy

    PubMed Central

    Wang, Yang; Zhang, Hua-nian; Niu, Chang-he; Gao, Ping; Chen, Yu-jun; Peng, Jing; Liu, Mao-chang; Xu, Hua

    2014-01-01

    Aim: To develop a population pharmacokinetics model of oxcarbazepine in Chinese pediatric patients with epilepsy, and to study the interactions between oxcarbazepine and other antiepileptic drugs (AEDs). Methods: A total of 688 patients with epilepsy aged 2 months to 18 years were divided into modeling (n=573) and validation (n=115) groups. Serum concentrations of the main active metabolite of oxcarbazepine, 10-hydroxycarbazepine (MHD), were determined 0.5–48 h after the last dose. A population pharmacokinetics (PPK) model was constructed using NLME software. This model was internally evaluated using bootstrapping and inspection of goodness-of-fit plots. The data of the validation group were used to calculate the mean prediction error (MPE), mean absolute prediction error (MAE), mean squared prediction error (MSE) and the 95% confidence intervals (95% CI) to externally evaluate the model. Results: The population values of pharmacokinetic parameters estimated in the final model were as follows: Ka=0.83 h-1, Vd=0.67 L/kg, and CL=0.035 L·kg−1·h−1. The enzyme-inducing AEDs (carbamazepine, phenytoin, phenobarbital) and newer generation AEDs (levetiracetam, lamotrigine, topiramate) increased the weight-normalized CL value of MHD by 17.4% and 10.5%, respectively, whereas the enzyme-inhibiting AED valproic acid decreased it by 3%. No significant association was found between the CL value of MHD and the other covariates. For the final model, the evaluation results (95% CI) were MPE=0.01 (−0.07–0.10) mg/L, MAE=0.46 (0.40–0.51) mg/L, MSE=0.39 (0.27–0.51) (mg/L)2. Conclusion: A PPK model of OXC in Chinese pediatric patients with epilepsy was established. The enzyme-inducing AEDs and some newer generation AEDs (lamotrigine, topiramate) could slightly increase the metabolism of MHD. PMID:25220641

  1. Designing an artificial neural network using radial basis function to model exergetic efficiency of nanofluids in mini double pipe heat exchanger

    NASA Astrophysics Data System (ADS)

    Ghasemi, Nahid; Aghayari, Reza; Maddah, Heydar

    2018-06-01

    The present study aims at predicting and optimizing the exergetic efficiency of a TiO2-Al2O3/water nanofluid at different Reynolds numbers, volume fractions and twist ratios using Artificial Neural Networks (ANN) and experimental data. Central Composite Design (CCD) and a cascade Radial Basis Function (RBF) network were used to determine the significance of the analyzed factors for the exergetic efficiency. The size of the TiO2-Al2O3/water nanocomposite was 20-70 nm. The parameters of the ANN model were adapted by an RBF training algorithm over a wide range of experimental data. The total mean square error and the correlation coefficient were used to evaluate the results; the best result was obtained from a double-layer perceptron neural network with 30 neurons, for which the total Mean Square Error (MSE) and correlation coefficient (R2) were 0.002 and 0.999, respectively, indicating successful prediction by the network. Moreover, the proposed equation for predicting exergetic efficiency was extremely successful. According to the optimal curves, the optimum design parameters of the double-pipe heat exchanger with inner twisted tape and nanofluid, under the constraint of an exergetic efficiency of 0.937, are a Reynolds number of 2500, a twist ratio of 2.5 and a volume fraction (v/v%) of 0.05.

  2. Applying Regression Analysis to Problems in Institutional Research.

    ERIC Educational Resources Information Center

    Bohannon, Tom R.

    1988-01-01

    Regression analysis is one of the most frequently used statistical techniques in institutional research. Principles of least squares, model building, residual analysis, influence statistics, and multi-collinearity are described and illustrated. (Author/MSE)

  3. Study of on-board compression of earth resources data

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1975-01-01

    The current literature on image bandwidth compression was surveyed and those methods relevant to compression of multispectral imagery were selected. Typical satellite multispectral data was then analyzed statistically and the results used to select a smaller set of candidate bandwidth compression techniques particularly relevant to earth resources data. These were compared using both theoretical analysis and simulation, under various criteria of optimality such as mean square error (MSE), signal-to-noise ratio, classification accuracy, and computational complexity. By concatenating some of the most promising techniques, three multispectral data compression systems were synthesized which appear well suited to current and future NASA earth resources applications. The performance of these three recommended systems was then examined in detail by all of the above criteria. Finally, merits and deficiencies were summarized and a number of recommendations for future NASA activities in data compression proposed.

  4. Optimization of wavefront coding imaging system using heuristic algorithms

    NASA Astrophysics Data System (ADS)

    González-Amador, E.; Padilla-Vivanco, A.; Toxqui-Quitl, C.; Zermeño-Loreto, O.

    2017-08-01

    Wavefront Coding (WFC) systems make use of an aspheric Phase-Mask (PM) and digital image processing to extend the Depth of Field (EDoF) of computational imaging systems. For years, several kinds of PM have been designed to produce a point spread function (PSF) that is nearly defocus-invariant. In this paper, the phase deviation parameter is optimized by means of genetic algorithms (GAs); the merit function minimizes the mean square error (MSE) between the diffraction-limited Modulation Transfer Function (MTF) and the MTF of the wavefront-coded system at different amounts of misfocus. WFC systems were simulated using the cubic, trefoil, and 4 Zernike polynomials phase-masks. Numerical results show near defocus-invariance in all cases. Nevertheless, the best results are obtained with the trefoil phase-mask, because its decoded image is almost free of artifacts.

  5. A fast-initializing digital equalizer with on-line tracking for data communications

    NASA Technical Reports Server (NTRS)

    Houts, R. C.; Barksdale, W. J.

    1974-01-01

    A theory is developed for a digital equalizer to reduce intersymbol interference (ISI) on high-speed data communication channels. The equalizer is initialized with a single isolated transmitter pulse, provided the signal-to-noise ratio (SNR) is not unusually low, and then switches to a decision-directed, on-line mode of operation that allows tracking of channel variations. Conditions for optimal tap-gain settings are first obtained for a transversal equalizer structure by using a mean squared error (MSE) criterion, a first-order gradient algorithm to determine the adjustable equalizer tap-gains, and a sequence of isolated initializing pulses. Since the rate of tap-gain convergence depends on the eigenvalues of the channel output correlation matrix, convergence can be improved by making a linear transformation on the channel output to obtain a new correlation matrix.
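
    The first-order gradient tap-gain adjustment described above is essentially an LMS update. A generic sketch under assumed names (it does not reproduce the paper's single-pulse initialization scheme):

        import numpy as np

        def lms_equalizer(x, d, n_taps=11, mu=0.01):
            # x: channel-output samples; d: desired (training/decision) symbols
            w = np.zeros(n_taps)                   # adjustable tap gains
            mse_history = []
            for n in range(n_taps, len(x)):
                u = x[n - n_taps:n][::-1]          # tapped delay line, most recent first
                e = d[n] - w @ u                   # error vs. equalizer output
                w += mu * e * u                    # gradient step toward minimum MSE
                mse_history.append(e ** 2)
            return w, mse_history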

  6. Application of back-propagation artificial neural network (ANN) to predict crystallite size and band gap energy of ZnO quantum dots

    NASA Astrophysics Data System (ADS)

    Pelicano, Christian Mark; Rapadas, Nick; Cagatan, Gerard; Magdaluyo, Eduardo

    2017-12-01

    Herein, the crystallite size and band gap energy of zinc oxide (ZnO) quantum dots were predicted using an artificial neural network (ANN). Three input factors, namely reagent ratio, growth time, and growth temperature, were examined with respect to crystallite size and band gap energy as response factors. The results generated by the neural network model were then compared with the experimental results. Experimental crystallite size and band gap energy of the ZnO quantum dots were measured from TEM images and absorbance spectra, respectively. The Levenberg-Marquardt (LM) algorithm was used as the learning algorithm for the ANN model. The performance of the ANN model was then assessed through the mean square error (MSE) and regression values. Based on these measures, the ANN modelling results are in good agreement with the experimental data.

  7. Erreurs grammaticales: Comment s'entrainer a les depister (Grammatical Errors: Learning How to Track Them Down).

    ERIC Educational Resources Information Center

    Straalen-Sanderse, Wilma van; And Others

    1986-01-01

    Following an experiment which revealed that production of grammatically correct sentences and correction of grammatically problematic sentences in French are essentially different skills, a progressive training method for finding and correcting grammatical errors was developed. (MSE)

  8. Refractive Status at Birth: Its Relation to Newborn Physical Parameters at Birth and Gestational Age

    PubMed Central

    Varghese, Raji Mathew; Sreenivas, Vishnubhatla; Puliyel, Jacob Mammen; Varughese, Sara

    2009-01-01

    Background Refractive status at birth is related to gestational age. Preterm babies have myopia which decreases as gestational age increases, and term babies are known to be hypermetropic. This study looked at the correlation of refractive status with birth weight in term and preterm babies, and with physical indicators of intra-uterine growth such as the head circumference and length of the baby at birth. Methods All babies delivered at St. Stephens Hospital and admitted to the nursery were eligible for the study. Refraction was performed within the first week of life. 0.8% tropicamide with 0.5% phenylephrine was used to achieve cycloplegia and paralysis of accommodation. 599 newborn babies participated in the study. Data pertaining to the right eye are utilized for all the analyses except those for anisometropia, where the two eyes were compared. Growth parameters were measured soon after birth. Simple linear regression analysis was performed to assess the association of refractive status (mean spherical equivalent (MSE), astigmatism and anisometropia) with each of the study variables, namely gestation, length, weight and head circumference. Subsequently, multiple linear regression was carried out to identify the independent predictors of each of the outcome parameters. Results Simple linear regression showed a significant relation between all four study variables and refractive error, but in multiple regression only gestational age and weight were related to refractive error. The partial correlation of weight with MSE adjusted for gestation was 0.28, and that of gestation with MSE adjusted for weight was 0.10. Birth weight had a higher correlation with MSE than gestational age. Conclusion This is the first study to examine refractive error against all these growth parameters in preterm and term babies at birth. It would appear from this study that birth weight rather than gestation should be used as the criterion for screening for refractive error, especially in developing countries where the incidence of intrauterine malnutrition is higher. PMID:19214228

  9. A Comparison of the β-Substitution Method and a Bayesian Method for Analyzing Left-Censored Data

    PubMed Central

    Huynh, Tran; Quick, Harrison; Ramachandran, Gurumurthy; Banerjee, Sudipto; Stenzel, Mark; Sandler, Dale P.; Engel, Lawrence S.; Kwok, Richard K.; Blair, Aaron; Stewart, Patricia A.

    2016-01-01

    Classical statistical methods for analyzing exposure data with values below the detection limits are well described in the occupational hygiene literature, but an evaluation of a Bayesian approach for handling such data is currently lacking. Here, we first describe a Bayesian framework for analyzing censored data. We then present the results of a simulation study conducted to compare the β-substitution method with a Bayesian method for exposure datasets drawn from lognormal distributions and mixed lognormal distributions with varying sample sizes, geometric standard deviations (GSDs), and censoring for single and multiple limits of detection. For each set of factors, estimates for the arithmetic mean (AM), geometric mean, GSD, and the 95th percentile (X0.95) of the exposure distribution were obtained. We evaluated the performance of each method using relative bias, the root mean squared error (rMSE), and coverage (the proportion of the computed 95% uncertainty intervals containing the true value). The Bayesian method using non-informative priors and the β-substitution method were generally comparable in bias and rMSE when estimating the AM and GM. For the GSD and the 95th percentile, the Bayesian method with non-informative priors was more biased and had a higher rMSE than the β-substitution method, but use of more informative priors generally improved the Bayesian method’s performance, making both the bias and the rMSE more comparable to the β-substitution method. An advantage of the Bayesian method is that it provided estimates of uncertainty for these parameters of interest and good coverage, whereas the β-substitution method only provided estimates of uncertainty for the AM, and coverage was not as consistent. Selection of one or the other method depends on the needs of the practitioner, the availability of prior information, and the distribution characteristics of the measurement data. We suggest the use of Bayesian methods if the practitioner has the computational resources and prior information, as the method would generally provide accurate estimates and also provides the distributions of all of the parameters, which could be useful for making decisions in some applications. PMID:26209598

  10. Optimization of Operating Parameters for Minimum Mechanical Specific Energy in Drilling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamrick, Todd

    2011-01-01

    Efficiency in drilling is measured by Mechanical Specific Energy (MSE). MSE is the measure of the amount of energy input required to remove a unit volume of rock, expressed in units of energy input divided by volume removed. It can be expressed mathematically in terms of controllable parameters: Weight on Bit, Torque, Rate of Penetration, and RPM. It is well documented that minimizing MSE by optimizing controllable factors results in maximum Rate of Penetration. Current methods for computing MSE make it possible to minimize MSE in the field only through a trial-and-error process. This work makes it possible to compute the optimum drilling parameters that result in minimum MSE. The parameters that have traditionally been used to compute MSE are interdependent. Mathematical relationships between the parameters were established, and the conventional MSE equation was rewritten in terms of a single parameter, Weight on Bit, establishing a form that can be minimized mathematically. Once the optimum Weight on Bit was determined, the interdependent relationship that Weight on Bit has with Torque and Penetration per Revolution was used to determine optimum values for those parameters for a given drilling situation. The improved method was validated through laboratory experimentation and analysis of published data. Two rock types were subjected to four treatments each and drilled in a controlled laboratory environment. The method was applied in each case, and the optimum parameters for minimum MSE were computed. The method demonstrated an accurate means to determine the optimum drilling parameters of Weight on Bit, Torque, and Penetration per Revolution. A unique application of micro-cracking is also presented, which demonstrates that rock failure ahead of the bit is related to axial force more than to rotation speed.
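
    For orientation, the conventional (interdependent-parameter) form of the equation is Teale's MSE = WOB/A + 2π·RPM·T/(A·ROP). A sketch in customary field units (illustrative only; not the thesis' rewritten single-parameter form):

        import math

        def mechanical_specific_energy(wob, torque, rpm, rop, area):
            # wob: weight on bit (lbf); torque (ft-lbf); rpm (rev/min);
            # rop: rate of penetration (ft/hr); area: bit area (in^2). Returns psi.
            return wob / area + (120.0 * math.pi * rpm * torque) / (area * rop)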

  11. Advances in the meta-analysis of heterogeneous clinical trials II: The quality effects model.

    PubMed

    Doi, Suhail A R; Barendregt, Jan J; Khan, Shahjahan; Thalib, Lukman; Williams, Gail M

    2015-11-01

    This article examines the performance of the updated quality effects (QE) estimator for meta-analysis of heterogeneous studies. It is shown that this approach leads to a decreased mean squared error (MSE) of the estimator while maintaining the nominal level of coverage probability of the confidence interval. Extensive simulation studies confirm that this approach leads to the maintenance of the correct coverage probability of the confidence interval, regardless of the level of heterogeneity, as well as a lower observed variance compared to the random effects (RE) model. The QE model is robust to subjectivity in quality assessment down to completely random entry, in which case its MSE equals that of the RE estimator. When the proposed QE method is applied to a meta-analysis of magnesium for myocardial infarction data, the pooled mortality odds ratio (OR) becomes 0.81 (95% CI 0.61-1.08) which favors the larger studies but also reflects the increased uncertainty around the pooled estimate. In comparison, under the RE model, the pooled mortality OR is 0.71 (95% CI 0.57-0.89) which is less conservative than that of the QE results. The new estimation method has been implemented into the free meta-analysis software MetaXL which allows comparison of alternative estimators and can be downloaded from www.epigear.com. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Computation-aware algorithm selection approach for interlaced-to-progressive conversion

    NASA Astrophysics Data System (ADS)

    Park, Sang-Jun; Jeon, Gwanggil; Jeong, Jechang

    2010-05-01

    We discuss deinterlacing results in a computationally constrained and varied environment. The proposed computation-aware algorithm selection approach (CASA) for fast interlaced-to-progressive conversion consists of three methods: the line-averaging (LA) method for plain regions, the modified edge-based line-averaging (MELA) method for medium regions, and the proposed covariance-based adaptive deinterlacing (CAD) method for complex regions. CASA uses two criteria, mean-squared error (MSE) and CPU time, for assigning the method. The principal idea of CAD is the correspondence between the high- and low-resolution covariances. We estimated the local covariance coefficients from an interlaced image using Wiener filtering theory and then used these optimal minimum-MSE interpolation coefficients to obtain a deinterlaced image. The CAD method, though more robust than most known methods, was not found to be very fast compared with the others. To alleviate this issue, we proposed an adaptive selection approach that switches to a fast deinterlacing algorithm rather than using only the CAD algorithm. The proposed hybrid approach of switching between the conventional schemes (LA and MELA) and our CAD reduces the overall computational load. A reliable condition to be used for switching the schemes was derived after a wide set of initial training processes. The results of computer simulations showed that the proposed methods outperformed a number of methods presented in the literature.

  13. Modelling of extreme rainfall events in Peninsular Malaysia based on annual maximum and partial duration series

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz

    2015-02-01

    In this study, two series of extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia for 1982-2012. To determine the optimal threshold for each station, several requirements must be satisfied, and an adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected as the one with the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameters are estimated by the Maximum Likelihood and L-moment methods. Two goodness-of-fit tests are then used to select the best-fitted distribution. The results show that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
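
    The threshold-selection principle (choose the threshold with the smallest bootstrapped estimator MSE) can be sketched as follows; this simplified version uses a plain Hill estimator and an ordinary bootstrap rather than the adapted estimator and semi-parametric bootstrap of the study:

        import numpy as np

        def hill(x, u):
            # Hill estimate of the tail index (1/alpha) above threshold u
            exc = x[x > u]
            return np.mean(np.log(exc / u))

        def pick_threshold(x, candidates, n_boot=500, seed=0):
            rng = np.random.default_rng(seed)
            best_u, best_mse = None, np.inf
            for u in candidates:
                ref = hill(x, u)                   # full-sample estimate at u
                boot = [hill(rng.choice(x, size=len(x), replace=True), u)
                        for _ in range(n_boot)]
                mse = np.mean((np.asarray(boot) - ref) ** 2)
                if mse < best_mse:
                    best_u, best_mse = u, mse
            return best_u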

  14. Numeric model to predict the location of market demand and economic order quantity for retailers of supply chain

    NASA Astrophysics Data System (ADS)

    Fradinata, Edy; Marli Kesuma, Zurnila

    2018-05-01

    Polynomial and spline regression are the numeric models used here to compare method performance, model the distance relationships between cement retailers in Banda Aceh, predict the market area for retailers, and compute the economic order quantity (EOQ). These numeric models differ in accuracy as measured by the mean square error (MSE). The distance relationships between retailers identify the density of retailers in the town. The dataset was collected from the sales of cement retailers and geolocated with a global positioning system (GPS). The characteristics of the sales dataset were plotted to assess the goodness of fit of the quadratic, cubic and fourth-order polynomial methods, using the relationship between the x-abscissa and y-ordinate of the real sales data to obtain the models. This research yields several useful outputs: models for predicting the market area of a retailer in a competitive setting, a comparison of the performance of the methods, the distance relationships between retailers and, finally, an inventory policy based on the economic order quantity. The results show that areas with high retailer density coincide with a growing population and construction projects. The spline performs better than the quadratic, cubic and fourth-order polynomials at the prediction points, as indicated by its smaller MSE. The inventory policy adopted is of the periodic-review type.
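
    The inventory-policy step rests on the classical economic order quantity, EOQ = sqrt(2DS/H), where D is the annual demand (here it would come from the fitted demand model), S the ordering cost per order and H the holding cost per unit per year. A minimal sketch with hypothetical figures:

        import math

        def eoq(annual_demand, order_cost, holding_cost):
            # Classical Wilson EOQ formula
            return math.sqrt(2 * annual_demand * order_cost / holding_cost)

        print(eoq(annual_demand=12000, order_cost=50.0, holding_cost=2.5))  # ~692.8 units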

  15. Creation of a Digital Surface Model and Extraction of Coarse Woody Debris from Terrestrial Laser Scans in an Open Eucalypt Woodland

    NASA Astrophysics Data System (ADS)

    Muir, J.; Phinn, S. R.; Armston, J.; Scarth, P.; Eyre, T.

    2014-12-01

    Coarse woody debris (CWD) provides important habitat for many species and plays a vital role in nutrient cycling within an ecosystem. In addition, CWD makes an important contribution to forest biomass and fuel loads. Airborne or space based remote sensing instruments typically do not detect CWD beneath the forest canopy. Terrestrial laser scanning (TLS) provides a ground based method for three-dimensional (3-D) reconstruction of surface features and CWD. This research produced a 3-D reconstruction of the ground surface and automatically classified coarse woody debris from registered TLS scans. The outputs will be used to inform the development of a site-based index for the assessment of forest condition, and quantitative assessments of biomass and fuel loads. A survey grade terrestrial laser scanner (Riegl VZ400) was used to scan 13 positions, in an open eucalypt woodland site at Karawatha Forest Park, near Brisbane, Australia. Scans were registered, and a digital surface model (DSM) produced using an intensity threshold and an iterative morphological filter. The DSMs produced from single scans were compared to the registered multi-scan point cloud using standard error metrics including: Root Mean Squared Error (RMSE), Mean Squared Error (MSE), range, absolute error and signed error. In addition the DSM was compared to a Digital Elevation Model (DEM) produced from Airborne Laser Scanning (ALS). Coarse woody debris was subsequently classified from the DSM using laser pulse properties, including: width and amplitude, as well as point spatial relationships (e.g. nearest neighbour slope vectors). Validation of the coarse woody debris classification was completed using true-colour photographs co-registered to the TLS point cloud. The volume and length of the coarse woody debris was calculated from the classified point cloud. A representative network of TLS sites will allow for up-scaling to large area assessment using airborne or space based sensors to monitor forest condition, biomass and fuel loads.

  16. Optical pattern recognition architecture implementing the mean-square error correlation algorithm

    DOEpatents

    Molley, Perry A.

    1991-01-01

    An optical architecture implementing the mean-square error correlation algorithm, MSE = Σ[I − R]², for discriminating the presence of a reference image R in an input image scene I by computing the mean-square error between a time-varying reference image signal s₁(t) and a time-varying input image signal s₂(t) includes a laser diode light source which is temporally modulated by a double-sideband suppressed-carrier source modulation signal I₁(t) having the form I₁(t) = A₁[1 + √2·m₁·s₁(t)·cos(2πf₀t)], and the modulated light output from the laser diode source is diffracted by an acousto-optic deflector. The resultant intensity of the +1 diffracted order from the acousto-optic device is given by I₂(t) = A₂[1 + 2m₂²s₂²(t) − 2√2·m₂·s₂(t)·cos(2πf₀t)]. The time integration of the two signals I₁(t) and I₂(t) on the CCD detector plane produces the mean-square-error result R(τ) having the form R(τ) = A₁A₂{[T] + [2m₂²·∫s₂²(t−τ)dt] − [2m₁m₂·cos(2πf₀τ)·∫s₁(t)s₂(t−τ)dt]}, where: s₁(t) is the signal input to the diode modulation source; s₂(t) is the signal input to the AOD modulation source; A₁ is the light intensity; A₂ is the diffraction efficiency; m₁ and m₂ are constants that determine the signal-to-bias ratio; f₀ is the frequency offset between the oscillator at f_c and the modulation at f_c + f₀; and a₀ and a₁ are constants chosen to bias the diode source and the acousto-optic deflector into their respective linear operating regions, so that the diode source exhibits a linear intensity characteristic and the AOD exhibits a linear amplitude characteristic.

  17. Segmentation of ECG from Surface EMG Using DWT and EMD: A Comparison Study

    NASA Astrophysics Data System (ADS)

    Shahbakhti, Mohammad; Heydari, Elnaz; Luu, Gia Thien

    2014-10-01

    The electrocardiographic (ECG) signal is a major artifact in recordings of the surface electromyogram (SEMG). Removal of this artifact is an important task before SEMG analysis for biomedical purposes. In this paper, the application of the discrete wavelet transform (DWT) and empirical mode decomposition (EMD) to the elimination of the ECG artifact from SEMG is investigated. The focus of this research is to reach the optimal number of decomposition levels using the mean power frequency (MPF) for both techniques. To implement the proposed methods, ten simulated and three real ECG-contaminated SEMG signals were tested. The signal-to-noise ratio (SNR) and mean square error (MSE) between the filtered and the pure signals are used as the performance indexes of this research. The obtained results suggest that both techniques can remove the ECG artifact from SEMG signals reasonably well; however, DWT performs better and faster on real data.
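
    One common DWT-based suppression scheme zeroes the lowest-frequency band, where most of the ECG energy sits. The sketch below (requires PyWavelets) illustrates that idea only; the paper instead optimizes the number of decomposition levels via the mean power frequency, which is not reproduced here:

        import numpy as np
        import pywt

        def remove_ecg_dwt(semg, wavelet="db4", level=5):
            coeffs = pywt.wavedec(semg, wavelet, level=level)
            coeffs[0] = np.zeros_like(coeffs[0])   # drop the approximation (low-frequency) band
            return pywt.waverec(coeffs, wavelet)

        def snr_db(pure, filtered):
            # performance index: SNR between the pure and filtered signals
            noise = pure - filtered[:len(pure)]
            return 10 * np.log10(np.sum(pure ** 2) / np.sum(noise ** 2))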

  18. Outcome modelling strategies in epidemiology: traditional methods and basic alternatives

    PubMed Central

    Greenland, Sander; Daniel, Rhian; Pearce, Neil

    2016-01-01

    Abstract Controlling for too many potential confounders can lead to or aggravate problems of data sparsity or multicollinearity, particularly when the number of covariates is large in relation to the study size. As a result, methods to reduce the number of modelled covariates are often deployed. We review several traditional modelling strategies, including stepwise regression and the ‘change-in-estimate’ (CIE) approach to deciding which potential confounders to include in an outcome-regression model for estimating effects of a targeted exposure. We discuss their shortcomings, and then provide some basic alternatives and refinements that do not require special macros or programming. Throughout, we assume the main goal is to derive the most accurate effect estimates obtainable from the data and commercial software. Allowing that most users must stay within standard software packages, this goal can be roughly approximated using basic methods to assess, and thereby minimize, mean squared error (MSE). PMID:27097747

  19. Estimating monthly streamflow values by cokriging

    USGS Publications Warehouse

    Solow, A.R.; Gorelick, S.M.

    1986-01-01

    Cokriging is applied to estimation of missing monthly streamflow values in three records from gaging stations in west central Virginia. Missing values are estimated from optimal consideration of the pattern of auto- and cross-correlation among standardized residual log-flow records. Investigation of the sensitivity of estimation to data configuration showed that when observations are available within two months of a missing value, estimation is improved by accounting for correlation. Concurrent and lag-one observations tend to screen the influence of other available observations. Three models of covariance structure in residual log-flow records are compared using cross-validation. Models differ in how much monthly variation they allow in covariance. Precision of estimation, reflected in mean squared error (MSE), proved to be insensitive to this choice. Cross-validation is suggested as a tool for choosing an inverse transformation when an initial nonlinear transformation is applied to flow values. © 1986 Plenum Publishing Corporation.

  20. Stego on FPGA: An IWT Approach

    PubMed Central

    Ramalingam, Balakrishnan

    2014-01-01

    A reconfigurable hardware architecture for the implementation of an integer wavelet transform (IWT) based adaptive random image steganography algorithm is proposed. The Haar IWT was used to separate 8 × 8 pixel blocks into the LL, LH, HL, and HH subbands, and the encrypted secret data are hidden in the LH, HL, and HH blocks using Moore and Hilbert space filling curve (SFC) scan patterns. For each block, either the Moore or the Hilbert SFC is chosen for hiding the encrypted data in the LH, HL, and HH coefficients, whichever produces the lower mean square error (MSE) and the higher peak signal-to-noise ratio (PSNR). The record of the scan-pattern decision for every block is registered and constitutes the secret key. Our system took 1.6 µs to embed the data in the coefficient blocks and consumed 34% of the logic elements, 22% of the dedicated logic registers, and 2% of the embedded multipliers on a Cyclone II field programmable gate array (FPGA). PMID:24723794
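
    The per-block pattern choice reduces to computing the MSE and PSNR between the cover and stego blocks. A generic 8-bit formulation (hypothetical variable names):

        import numpy as np

        def mse_psnr(cover, stego, peak=255.0):
            mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
            psnr = float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)
            return mse, psnr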

  1. Using local multiplicity to improve effect estimation from a hypothesis-generating pharmacogenetics study.

    PubMed

    Zou, W; Ouyang, H

    2016-02-01

    We propose a multiple estimation adjustment (MEA) method to correct effect overestimation due to selection bias in a hypothesis-generating study (HGS) in pharmacogenetics. MEA uses a hierarchical Bayesian approach to jointly model the individual effect estimates from maximum likelihood estimation (MLE) in a region and shrink them toward the regional effect. Unlike many methods that model a fixed selection scheme, MEA capitalizes on local multiplicity independent of selection. We compared mean square errors (MSEs) in simulated HGSs from naive MLE, MEA and a conditional likelihood adjustment (CLA) method that models threshold selection bias. We observed that MEA effectively reduced the MSE from MLE on null effects with or without selection, and had a clear advantage over CLA on extreme MLE estimates from null effects under lenient threshold selection in small samples, which are common among 'top' associations from a pharmacogenetics HGS.
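
    The shrinkage idea can be illustrated with a simple normal-normal empirical-Bayes step that pulls per-variant MLE estimates toward a precision-weighted regional mean; the paper's MEA model is fully Bayesian and more elaborate than this sketch:

        import numpy as np

        def shrink_to_region(beta, se):
            # beta: per-variant MLE effect estimates in one region; se: their standard errors
            beta, se = np.asarray(beta), np.asarray(se)
            mu = np.average(beta, weights=1 / se ** 2)                 # regional effect
            tau2 = max(np.var(beta, ddof=1) - np.mean(se ** 2), 0.0)   # between-variant variance
            w = tau2 / (tau2 + se ** 2)                                # shrinkage weights
            return w * beta + (1 - w) * mu                             # shrunken estimates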

  2. Development of a Voice Activity Controlled Noise Canceller

    PubMed Central

    Abid Noor, Ali O.; Samad, Salina Abdul; Hussain, Aini

    2012-01-01

    In this paper, a variable threshold voice activity detector (VAD) is developed to control the operation of a two-sensor adaptive noise canceller (ANC). The VAD prevents the reference input of the ANC from containing any appreciable actual speech signal during adaptation periods. The novelty of this approach resides in using the residual output from the noise canceller to control the decisions made by the VAD. Thresholds of full-band energy and zero-crossing features are adjusted according to the residual output of the adaptive filter. Performance of the proposed approach is quoted in terms of signal-to-noise ratio improvements as well as mean square error (MSE) convergence of the ANC. The new approach showed improved noise cancellation performance when tested under several types of environmental noise. Furthermore, the computational load of the adaptive process is reduced since the output of the adaptive filter is calculated only during non-speech periods. PMID:22778667

  3. Arima model and exponential smoothing method: A comparison

    NASA Astrophysics Data System (ADS)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to produce forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, three time series are used: the price of crude palm oil (RM/tonne), the exchange rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the price of SMR 20 rubber (cents/kg). The forecasting accuracy of each model is then measured by examining the prediction errors, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model produces better long-term forecasts with limited data sources, but cannot produce better predictions for a time series with a narrow range from one point to the next, as in the exchange-rate series. Conversely, the Exponential Smoothing Method produces better forecasts for the exchange rate, whose time series has a narrow range from one point to the next, but cannot produce better predictions over a longer forecasting period.
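
    A minimal sketch of such a comparison using statsmodels (placeholder model order and hold-out split, not those of the study):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA
        from statsmodels.tsa.holtwinters import SimpleExpSmoothing

        def compare_mse(train, test):
            # Fit both models on the training series and compare hold-out MSE
            arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=len(test))
            ses_fc = SimpleExpSmoothing(train).fit().forecast(len(test))
            return {"ARIMA MSE": float(np.mean((test - arima_fc) ** 2)),
                    "SES MSE": float(np.mean((test - ses_fc) ** 2))}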

  4. Application of Artificial Neural Network and Response Surface Methodology in Modeling of Surface Roughness in WS2 Solid Lubricant Assisted MQL Turning of Inconel 718

    NASA Astrophysics Data System (ADS)

    Maheshwera Reddy Paturi, Uma; Devarasetti, Harish; Abimbola Fadare, David; Reddy Narala, Suresh Kumar

    2018-04-01

    In the present paper, artificial neural network (ANN) and response surface methodology (RSM) models of surface roughness in WS2 (tungsten disulphide) solid-lubricant-assisted minimal quantity lubrication (MQL) machining are developed. The experimental data for real-time MQL turning of Inconel 718 considered in this paper were taken from the literature [1]. In the ANN modeling, performance parameters such as the mean square error (MSE), mean absolute percentage error (MAPE) and average error in prediction (AEP) were determined based on the Levenberg-Marquardt (LM) feed-forward back-propagation training algorithm with tansig as the transfer function. The MATLAB toolbox was utilized for training and testing the neural network model. A neural network with three input neurons, one hidden layer with five neurons and one output neuron (3-5-1 architecture) was found to be the most reliable and optimal. The coefficients of determination (R2) for the ANN and RSM models were 0.998 and 0.982, respectively. The surface roughness predictions from the ANN and RSM models were compared with experimentally measured values and found to be in good agreement. However, the prediction efficacy of the ANN model is relatively high compared with the RSM model predictions.

  5. New model for prediction binary mixture of antihistamine decongestant using artificial neural networks and least squares support vector machine by spectrophotometry method

    NASA Astrophysics Data System (ADS)

    Mofavvaz, Shirin; Sohrabi, Mahmoud Reza; Nezamzadeh-Ejhieh, Alireza

    2017-07-01

    In the present study, artificial neural networks (ANNs) and least squares support vector machines (LS-SVM), as intelligent methods based on absorption spectra in the range of 230-300 nm, were used for the determination of antihistamine decongestant contents. In the first step, a feed-forward back-propagation artificial neural network with two different training algorithms, Levenberg-Marquardt (LM) and gradient descent with momentum and adaptive learning rate back-propagation (GDX), was employed and its performance evaluated; the LM algorithm performed better than the GDX algorithm. In the second step, a radial basis network was utilized and the results compared with those of the previous network. In the last step, another intelligent method, the least squares support vector machine, was proposed to construct the antihistamine decongestant prediction model, and the results were compared with those of the two aforementioned networks. The values of the statistical parameters mean square error (MSE), regression coefficient (R2), correlation coefficient (r), mean recovery (%) and relative standard deviation (RSD) were used for selecting the best model among these methods. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the comparison of the suggested and reference methods, showed that there were no significant differences between them.

  6. Interruption Practice Reduces Errors

    DTIC Science & Technology

    2014-01-01

    dangers of errors at the PCS. Electronic health record systems are used to reduce certain errors related to poor handwriting and dosage... 10.16, MSE = .31, p < .05, η² = .18. A significant interaction between the number of interruptions and interrupted trials suggests that trials... the variance when calculating whether a memory has a higher signal than interference. If something in addition to activation contributes to goal

  8. Estimation of stochastic volatility with long memory for index prices of FTSE Bursa Malaysia KLCI

    NASA Astrophysics Data System (ADS)

    Chen, Kho Chia; Bahar, Arifah; Kane, Ibrahim Lawal; Ting, Chee-Ming; Rahman, Haliza Abd

    2015-02-01

    In recent years, modeling of long memory properties or fractionally integrated processes in stochastic volatility has been applied to financial time series. A time series with structural breaks can generate a strong persistence in the autocorrelation function, which is an observed behaviour of a long memory process. This paper considers structural breaks in the data in order to identify true long memory time series data. Unlike the usual short memory models for log volatility, the fractional Ornstein-Uhlenbeck process is neither a Markovian process nor can it be easily transformed into one. This makes likelihood evaluation and parameter estimation for the long memory stochastic volatility (LMSV) model challenging tasks. The drift and volatility parameters of the fractional Ornstein-Uhlenbeck model are estimated separately using the least squares estimator (lse) and the quadratic generalized variations (qgv) method, respectively. Finally, the empirical distribution of the unobserved volatility is estimated using particle filtering with the sequential importance sampling-resampling (SIR) method. The mean square error (MSE) between the estimated and empirical volatility indicates that the model performs fairly well for the index prices of FTSE Bursa Malaysia KLCI.

  9. Effects of prior short multiple-sprint exercises with different intersprint recoveries on the slow component of oxygen uptake during high-intensity exercise.

    PubMed

    Lanzi, Stefano; Borrani, Fabio; Wolf, Martin; Gojanovic, Boris; Malatesta, Davide

    2012-12-01

    This study compares the effects of two short multiple-sprint exercise (MSE) (6 × 6 s) sessions with two different recovery durations (30 s or 180 s) on the slow component of oxygen uptake (V̇O2) during subsequent high-intensity exercise. Ten male subjects performed a 6-min cycling test at 50% of the difference between the gas exchange threshold and V̇O2peak (Δ50). Then, the subjects performed two MSEs of 6 × 6 s separated by two intersprint recoveries of 30 s (MSE(30)) and 180 s (MSE(180)), followed 10 min later by the Δ50 (Δ50(30) and Δ50(180), respectively). Electromyography (EMG) activities of the vastus medialis and lateralis were measured throughout each exercise bout. During MSE(30), muscle activity (root mean square) increased significantly (p ≤ 0.04), with a significantly leftward-shifted median frequency of the power density spectrum (MDF; p ≤ 0.01), whereas MDF was significantly rightward-shifted during MSE(180) (p = 0.02). The mean V̇O2 value was significantly higher in MSE(30) than in MSE(180) (p < 0.001). During Δ50(30), the V̇O2 and deoxygenated hemoglobin ([HHb]) slow components were significantly reduced (-27%, p = 0.02, and -34%, p = 0.003, respectively) compared with Δ50. There were no significant modifications of the V̇O2 slow component in Δ50(180) compared with Δ50 (p = 0.32). The neuromuscular and metabolic adaptations during MSE(30) (preferential activation of type I muscle fibers, evidenced by a decreased MDF, and a greater aerobic contribution to the required energy demands), but not during MSE(180), may lead to reduced V̇O2 and [HHb] slow components, suggesting an alteration in the motor unit recruitment profile (i.e., a change in the type of muscle fibers recruited) and (or) improved muscle O2 delivery during subsequent exercise.

  10. SU-D-218-05: Material Quantification in Spectral X-Ray Imaging: Optimization and Validation.

    PubMed

    Nik, S J; Thing, R S; Watts, R; Meyer, J

    2012-06-01

    To develop and validate a multivariate statistical method to optimize scanning parameters for material quantification in spectral x-ray imaging. An optimization metric was constructed by extensively sampling the thickness space for the expected number of counts for m (two or three) materials. This resulted in an m-dimensional confidence region of material quantities, e.g. thicknesses. Minimization of the ellipsoidal confidence region leads to the optimization of energy bins. For a given spectrum, the minimum counts required for effective material separation can be determined by predicting the signal-to-noise ratio (SNR) of the quantification. A Monte Carlo (MC) simulation framework using BEAM was developed to validate the metric. Projection data of the m materials were generated and material decomposition was performed for combinations of iodine, calcium and water by minimizing the z-score between the expected spectrum and binned measurements. The mean square error (MSE) and variance were calculated to measure the accuracy and precision of this approach, respectively. The minimum MSE corresponds to the optimal energy bins in the BEAM simulations. In the optimization metric, this is equivalent to the smallest confidence region. The SNR of the simulated images was also compared to the predictions from the metric. The MSE was dominated by the variance for the given material combinations, which demonstrates accurate material quantification. The BEAM simulations revealed that the optimization of energy bins was accurate to within 1 keV. The SNRs predicted by the optimization metric yielded satisfactory agreement but were expectedly higher than those of the BEAM simulations due to the inclusion of scattered radiation. The validation showed that the multivariate statistical method provides accurate material quantification, correct location of optimal energy bins and adequate prediction of image SNR. The BEAM code system is suitable for generating spectral x-ray imaging simulations. © 2012 American Association of Physicists in Medicine.

  11. Robust estimation of event-related potentials via particle filter.

    PubMed

    Fukami, Tadanori; Watanabe, Jun; Ishikawa, Fumito

    2016-03-01

    In clinical examinations and brain-computer interface (BCI) research, a short electroencephalogram (EEG) measurement time is ideal. The use of event-related potentials (ERPs) relies on both estimation accuracy and processing time. We tested a particle filter that uses a large number of particles to construct a probability distribution. We constructed a simple model for recording EEG comprising three components: ERPs approximated via a trend model, background waves constructed via an autoregressive model, and noise. We evaluated the performance of the particle filter based on mean squared error (MSE), P300 peak amplitude, and latency. We then compared our filter with the Kalman filter and a conventional simple averaging method. To confirm the efficacy of the filter, we used it to estimate ERP elicited by a P300 BCI speller. A 400-particle filter produced the best MSE. We found that the merit of the filter increased when the original waveform already had a low signal-to-noise ratio (SNR) (i.e., the power ratio between ERP and background EEG). We calculated the amount of averaging necessary after applying a particle filter that produced a result equivalent to that associated with conventional averaging, and determined that the particle filter yielded a maximum 42.8% reduction in measurement time. The particle filter performed better than both the Kalman filter and conventional averaging for a low SNR in terms of both MSE and P300 peak amplitude and latency. For EEG data produced by the P300 speller, we were able to use our filter to obtain ERP waveforms that were stable compared with averages produced by a conventional averaging method, irrespective of the amount of averaging. We confirmed that particle filters are efficacious in reducing the measurement time required during simulations with a low SNR. Additionally, particle filters can perform robust ERP estimation for EEG data produced via a P300 speller. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  12. Modeling the control of the central nervous system over the cardiovascular system using support vector machines.

    PubMed

    Díaz, José; Acosta, Jesús; González, Rafael; Cota, Juan; Sifuentes, Ernesto; Nebot, Àngela

    2018-02-01

    The control of the central nervous system (CNS) over the cardiovascular system (CS) has been modeled using different techniques, such as fuzzy inductive reasoning, genetic fuzzy systems, neural networks, and nonlinear autoregressive techniques; the results obtained so far have been significant, but not solid enough to describe the control response of the CNS over the CS. In this research, support vector machines (SVMs) are used to predict the response of a branch of the CNS, specifically the one that controls an important part of the cardiovascular system. To do this, five models are developed to emulate the output response of five controllers for the same input signal, the carotid sinus blood pressure (CSBP). These controllers regulate parameters such as heart rate, myocardial contractility, peripheral and coronary resistance, and venous tone. The models are trained using a known set of input-output responses for each controller; a further set of six input-output signals is used for testing each proposed model. The input signals are processed using an all-pass filter, and the accuracy of the control models is evaluated using the percentage value of the normalized mean square error (MSE). Experimental results reveal that the SVM models achieve a better estimation of the dynamical behavior of the CNS control than other modeling systems. The best case is the peripheral resistance controller, with an MSE of 1.20e-4%, while the worst case is the heart rate controller, with an MSE of 1.80e-3%. These novel models show great reliability in fitting the output response of the CNS and can be used as input to hemodynamic system models in order to predict the behavior of the heart and blood vessels in response to blood pressure variations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Use of raw or incubated organic wastes as amendments in reducing pesticide leaching through soil columns.

    PubMed

    Marín-Benito, J M; Brown, C D; Herrero-Hernández, E; Arienzo, M; Sánchez-Martín, M J; Rodríguez-Cruz, M S

    2013-10-01

    Soil amendment with organic wastes is becoming a widespread management practice since it can effectively solve the problems of uncontrolled waste accumulation and improve soil quality. However, when simultaneously applied with pesticides, organic wastes can significantly modify the environmental behaviour of these compounds. This study evaluated the effect of sewage sludges (SS), grape marc (GM) and spent mushroom substrates (SMS) on the leaching of linuron, diazinon and myclobutanil in packed columns of a sandy soil with low organic matter (OM) content (<1%). Soil plus amendments had been incubated for one month (1 m) or 12 months (12 m). Data from the experimental breakthrough curves (BTCs) were fitted to the one-dimensional transport model CXTFIT 2.1. All three amendments reduced leaching of linuron and myclobutanil relative to unamended soil. SMS was the most effective in reducing leaching of these two compounds independent of whether soil was incubated for 1 m or 12 m. Soil amendments increased retardation coefficients (Rexp) by factors of 3 to 5 for linuron, 2 to 4 for diazinon and 3 to 5 for myclobutanil relative to unamended soil. Leaching of diazinon was relatively little affected by soil amendment compared to the other two compounds, and both SS and SMS amendment with 1 m incubation resulted in enhanced leaching of diazinon. The leaching data for linuron and myclobutanil were well described by CXTFIT (mean square error, MSE < 4.9·10⁻⁷ and MSE < 7.0·10⁻⁷, respectively), whereas those of diazinon were less well fitted (MSE < 2.1·10⁻⁶). The BTCs for pesticides were similar in soils incubated for one month or one year, indicating that the effect of amendment on leaching persists over relatively long periods of time. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Modelling Schumann resonances from ELF measurements using non-linear optimization methods

    NASA Astrophysics Data System (ADS)

    Castro, Francisco; Toledo-Redondo, Sergio; Fornieles, Jesús; Salinas, Alfonso; Portí, Jorge; Navarro, Enrique; Sierra, Pablo

    2017-04-01

    Schumann resonances (SR) can be found in planetary atmospheres, inside the cavity formed by the conducting surface of the planet and the lower ionosphere. They are a powerful tool to investigate both the electric processes that occur in the atmosphere and the characteristics of the surface and the lower ionosphere. In this study, the measurements are obtained at the Juan Antonio Morente ELF (Extremely Low Frequency) station located in the Sierra Nevada national park. The first three modes, contained in the frequency band from 6 to 25 Hz, are considered. For each time series recorded by the station, the amplitude spectrum was estimated using Bartlett averaging. Then, the central frequencies and amplitudes of the SRs were obtained by fitting the spectrum with non-linear functions. In the poster, a study of non-linear unconstrained optimization methods applied to the estimation of the Schumann resonances is presented. Non-linear fitting, also known as the optimization process, is the procedure followed to obtain Schumann resonances from the natural electromagnetic noise. The optimization methods analysed are: Levenberg-Marquardt, conjugate gradient, gradient, Newton and quasi-Newton. The function the different methods fit to the data is three Lorentzian curves plus a straight line; Gaussian curves have also been considered. The conclusions of this study are as follows: i) natural electromagnetic noise is better fitted using Lorentzian functions; ii) the measurement bandwidth can accelerate the convergence of the optimization method; iii) the gradient method converges least reliably and has the highest mean squared error (MSE) between the measurement and the fitted function, whereas the Levenberg-Marquardt, conjugate gradient and quasi-Newton methods give similar results (the Newton method presents a higher MSE); iv) there are differences in the MSE between the parameters that define the fit function, and an interval from 1% to 5% has been found.

  15. Temporal subtraction contrast-enhanced dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Gazi, Peymon M.; Aminololama-Shakeri, Shadi; Yang, Kai; Boone, John M.

    2016-09-01

    The development of a framework of deformable image registration and segmentation for the purpose of temporal subtraction contrast-enhanced breast CT is described. An iterative histogram-based two-means clustering method was used for the segmentation. Dedicated breast CT images were segmented into background (air), adipose, fibroglandular and skin components. Fibroglandular tissue was classified as either normal or contrast-enhanced then divided into tiers for the purpose of categorizing degrees of contrast enhancement. A variant of the Demons deformable registration algorithm, intensity difference adaptive Demons (IDAD), was developed to correct for the large deformation forces that stemmed from contrast enhancement. In this application, the accuracy of the proposed method was evaluated in both mathematically-simulated and physically-acquired phantom images. Clinical usage and accuracy of the temporal subtraction framework was demonstrated using contrast-enhanced breast CT datasets from five patients. Registration performance was quantified using normalized cross correlation (NCC), symmetric uncertainty coefficient, normalized mutual information (NMI), mean square error (MSE) and target registration error (TRE). The proposed method outperformed conventional affine and other Demons variations in contrast enhanced breast CT image registration. In simulation studies, IDAD exhibited improvement in MSE (0-16%), NCC (0-6%), NMI (0-13%) and TRE (0-34%) compared to the conventional Demons approaches, depending on the size and intensity of the enhancing lesion. As lesion size and contrast enhancement levels increased, so did the improvement. The drop in the correlation between the pre- and post-contrast images for the largest enhancement levels in phantom studies is less than 1.2% (150 Hounsfield units). Registration error, measured by TRE, shows only submillimeter mismatches between the concordant anatomical target points in all patient studies. The algorithm was implemented using a parallel processing architecture resulting in rapid execution time for the iterative segmentation and intensity-adaptive registration techniques. Characterization of contrast-enhanced lesions is improved using temporal subtraction contrast-enhanced dedicated breast CT. Adaptation of Demons registration forces as a function of contrast-enhancement levels provided a means to accurately align breast tissue in pre- and post-contrast image acquisitions, improving subtraction results. Spatial subtraction of the aligned images yields useful diagnostic information with respect to enhanced lesion morphology and uptake.

  16. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms

    PubMed Central

    Vázquez, Roberto A.

    2015-01-01

    Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems. PMID:26221132
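
    A bare-bones PSO loop of the kind used to minimize an MSE-based fitness over a weight vector might look as follows; it is illustrative only and omits the paper's SGPSO/NMPSO variants and the joint evolution of architecture and transfer functions:

        import numpy as np

        def pso(fitness, dim, n_particles=30, iters=200,
                w=0.7, c1=1.5, c2=1.5, seed=0):
            # fitness: maps a candidate weight vector to a scalar (e.g. training MSE)
            rng = np.random.default_rng(seed)
            x = rng.uniform(-1, 1, (n_particles, dim))     # positions (weight vectors)
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_f = np.array([fitness(p) for p in x])
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = x + v
                f = np.array([fitness(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest, pbest_f.min()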

  17. Spatio-temporal alignment of pedobarographic image sequences.

    PubMed

    Oliveira, Francisco P M; Sousa, Andreia; Santos, Rubim; Tavares, João Manuel R S

    2011-07-01

    This article presents a methodology to align plantar pressure image sequences simultaneously in time and space. The spatial position and orientation of a foot in a sequence are changed to match the foot represented in a second sequence. Simultaneously with the spatial alignment, the temporal scale of the first sequence is transformed with the aim of synchronizing the two input footsteps. Consequently, the spatial correspondence of the foot regions along the sequences as well as the temporal synchronizing is automatically attained, making the study easier and more straightforward. In terms of spatial alignment, the methodology can use one of four possible geometric transformation models: rigid, similarity, affine, or projective. In the temporal alignment, a polynomial transformation up to the 4th degree can be adopted in order to model linear and curved time behaviors. Suitable geometric and temporal transformations are found by minimizing the mean squared error (MSE) between the input sequences. The methodology was tested on a set of real image sequences acquired from a common pedobarographic device. When used in experimental cases generated by applying geometric and temporal control transformations, the methodology revealed high accuracy. In addition, the intra-subject alignment tests from real plantar pressure image sequences showed that the curved temporal models produced better MSE results (P < 0.001) than the linear temporal model. This article represents an important step forward in the alignment of pedobarographic image data, since previous methods can only be applied on static images.

  18. Nowcasting of rainfall and of combined sewage flow in urban drainage systems.

    PubMed

    Achleitner, Stefan; Fach, Stefan; Einfalt, Thomas; Rauch, Wolfgang

    2009-01-01

    Nowcasting of rainfall may be used in addition to online rain measurements to optimize the operation of urban drainage systems. Uncertainties quoted for rain volume are in the range of 5% to 10% mean square error (MSE), whereas for rain intensities MSEs of 45% to 75% are noted. For longer forecast periods of up to 3 hours, the uncertainties increase up to several hundred percent. Combined with the growing number of real-time control concepts in sewer systems, rainfall forecasts are used more and more in urban drainage systems. It is therefore of interest how these uncertainties influence the final evaluation of a defined objective function. Uncertainty levels associated with the forecast itself are not necessarily transferable to the resulting uncertainties in the catchment's flow dynamics. The aim of this paper is to analyse forecasts of rainfall and of specific sewer output variables. For this study the combined sewer system of the city of Linz, located on the Danube in the northern part of Austria, was selected. The city covers a total area of 96 km2, with 39 municipalities connected. It was found that the available weather radar data lead to large deviations in the precipitation forecast at forecast horizons larger than 90 minutes. The same is true for sewer variables such as CSO overflow for small sub-catchments. Although the results improve at larger spatial scales, acceptable levels are not reached at forecast horizons larger than 90 minutes.

  19. Feasibility of predicting tumor motion using online data acquired during treatment and a generalized neural network optimized with offline patient tumor trajectories.

    PubMed

    Teo, Troy P; Ahmed, Syed Bilal; Kawalec, Philip; Alayoubi, Nadia; Bruce, Neil; Lyn, Ethan; Pistorius, Stephen

    2018-02-01

    The accurate prediction of intrafraction lung tumor motion is required to compensate for system latency in image-guided adaptive radiotherapy systems. The goal of this study was to identify an optimal prediction model that has a short learning period, so that prediction and adaptation can commence soon after treatment begins, and that requires minimal reoptimization for individual patients. Specifically, the feasibility of predicting tumor position using a generalized (i.e., averaged) neural network, optimized using historical patient data (i.e., tumor trajectories) obtained offline, coupled with real-time online tumor positions (obtained during treatment delivery), was examined. A 3-layer perceptron neural network was implemented to predict tumor motion for a prediction horizon of 650 ms. A backpropagation algorithm and batch gradient descent approach were used to train the model. Twenty-seven 1-min lung tumor motion samples (selected from a CyberKnife patient dataset) were sampled at a rate of 7.5 Hz (0.133 s) to emulate the frame rate of an electronic portal imaging device (EPID). A sliding temporal window was used to sample the data for learning. The sliding window length was set equal to the first breathing cycle detected in each trajectory. Performing a parametric sweep, an averaged error surface of mean square errors (MSE) was obtained from the prediction responses of the seven trajectories used to train the model (Group 1). An optimal input data size and number of hidden neurons were selected to represent the generalized model. To evaluate the prediction performance of the generalized model on unseen data, twenty tumor traces (Group 2) that were not involved in training the model were used for leave-one-out cross-validation. An input data size of 35 samples (4.6 s) and 20 hidden neurons were selected for the generalized neural network. An average sliding window length of 28 data samples was used. The average initial learning period prior to the availability of the first predicted tumor position was 8.53 ± 1.03 s. Mean absolute errors (MAE) of 0.59 ± 0.13 mm and 0.56 ± 0.18 mm were obtained for Groups 1 and 2, respectively, giving an overall MAE of 0.57 ± 0.17 mm. The average root-mean-square error (RMSE) of 0.67 ± 0.36 mm over all traces (0.76 ± 0.34 mm for Group 1 and 0.63 ± 0.36 mm for Group 2) is comparable to previously published results. Prediction errors are mainly due to irregular periodicities between breathing cycles. Since the errors from Groups 1 and 2 are within the same range, the model can generalize and predict on unseen data. This is a first attempt to use an averaged MSE error surface (obtained from the prediction of different patients' tumor trajectories) to determine the parameters of a generalized neural network. This network could be deployed as a plug-and-play predictor of tumor trajectory during treatment delivery, eliminating the need to optimize individual networks with pretreatment patient data. © 2017 American Association of Physicists in Medicine.
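
    The lag-embedding and look-ahead arithmetic described above (35-sample input windows, a 650 ms horizon that is about 5 samples at 7.5 Hz) can be sketched as follows, with scikit-learn's MLPRegressor standing in for the authors' custom backpropagation network; the synthetic trace and the training split are assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    FS = 7.5                            # EPID-like frame rate (Hz)
    HORIZON = int(round(0.65 * FS))     # 650 ms ahead -> ~5 samples

    def lag_embed(trace, n_lags=35, horizon=HORIZON):
        """Build (input window, future target) pairs from a 1-D tumor trace."""
        X, y = [], []
        for i in range(len(trace) - n_lags - horizon + 1):
            X.append(trace[i:i + n_lags])
            y.append(trace[i + n_lags + horizon - 1])
        return np.array(X), np.array(y)

    # Toy trace: breathing-like motion plus noise (stand-in for CyberKnife data).
    t = np.arange(0, 60, 1 / FS)
    trace = 5 * np.sin(2 * np.pi * 0.25 * t) \
            + 0.3 * np.random.default_rng(1).normal(size=t.size)

    X, y = lag_embed(trace, n_lags=35)      # 35 samples = 4.6 s, per the abstract
    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    model.fit(X[:300], y[:300])             # "learning period" on early data
    mae = np.mean(np.abs(model.predict(X[300:]) - y[300:]))
    ```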

  20. Distribution of kriging errors, the implications and how to communicate them

    NASA Astrophysics Data System (ADS)

    Li, Hong Yi; Milne, Alice; Webster, Richard

    2016-04-01

    Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which itself is estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there and compares the kriged value with the value observed to obtain the deviation or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σ_K^2), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σ_K^2 ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, and so the kriging variances are unbiased predictors of the squared error across the region. The method is based on the assumption that the errors have a normal distribution. The squared deviation ratio (SDR) should therefore be distributed as χ^2 with one degree of freedom, with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites. The uncertainty is typically under-estimated for the extreme observations and compensated for by over-estimating it for other observations. Statisticians must tell users of this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (ECa) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed ECas were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic, with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993 but with a MedSDR of 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the 0 of a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.
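
    The cross-validation diagnostics described here are simple to compute once the leave-one-out errors and kriging variances are in hand; a minimal sketch, with illustrative array names:

    ```python
    import numpy as np

    def kriging_cv_diagnostics(errors, kriging_variances):
        """Cross-validation diagnostics for a candidate variogram model.

        errors            : observed minus kriged value at each left-out point
        kriging_variances : the kriging variance sigma_K^2 at each point
        """
        errors = np.asarray(errors, float)
        kv = np.asarray(kriging_variances, float)
        sdr = errors**2 / kv                 # squared deviation ratio per point
        return {
            "ME": errors.mean(),             # should be near 0 (unbiasedness)
            "MSE": np.mean(errors**2),
            "MSDR": sdr.mean(),              # should be near 1
            "MedSDR": np.median(sdr),        # ~0.455 if the errors are normal
        }

    # A MedSDR well below 0.455 with MSDR ~ 1 signals leptokurtic errors,
    # i.e. kriging variances that misstate the site-by-site uncertainty.
    ```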

  1. Distribution of kriging errors, the implications and how to communicate them

    NASA Astrophysics Data System (ADS)

    Li, HongYi; Milne, Alice; Webster, Richard

    2015-04-01

    Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which itself is estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there and compares the kriged value with the value observed to obtain the deviation or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σ_K^2), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σ_K^2 ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, and so the kriging variances are unbiased predictors of the squared error across the region. The method is based on the assumption that the errors have a normal distribution. The squared deviation ratio (SDR) should therefore be distributed as χ^2 with one degree of freedom, with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites. The uncertainty is typically under-estimated for the extreme observations and compensated for by over-estimating it for other observations. Statisticians must tell users of this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (ECa) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed ECas were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic, with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993 but with a MedSDR of 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the 0 of a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.

  2. A Comparison of the β-Substitution Method and a Bayesian Method for Analyzing Left-Censored Data.

    PubMed

    Huynh, Tran; Quick, Harrison; Ramachandran, Gurumurthy; Banerjee, Sudipto; Stenzel, Mark; Sandler, Dale P; Engel, Lawrence S; Kwok, Richard K; Blair, Aaron; Stewart, Patricia A

    2016-01-01

    Classical statistical methods for analyzing exposure data with values below the detection limits are well described in the occupational hygiene literature, but an evaluation of a Bayesian approach for handling such data is currently lacking. Here, we first describe a Bayesian framework for analyzing censored data. We then present the results of a simulation study conducted to compare the β-substitution method with a Bayesian method for exposure datasets drawn from lognormal distributions and mixed lognormal distributions with varying sample sizes, geometric standard deviations (GSDs), and censoring for single and multiple limits of detection. For each set of factors, estimates for the arithmetic mean (AM), geometric mean, GSD, and the 95th percentile (X0.95) of the exposure distribution were obtained. We evaluated the performance of each method using relative bias, the root mean squared error (rMSE), and coverage (the proportion of the computed 95% uncertainty intervals containing the true value). The Bayesian method using non-informative priors and the β-substitution method were generally comparable in bias and rMSE when estimating the AM and GM. For the GSD and the 95th percentile, the Bayesian method with non-informative priors was more biased and had a higher rMSE than the β-substitution method, but use of more informative priors generally improved the Bayesian method's performance, making both the bias and the rMSE more comparable to the β-substitution method. An advantage of the Bayesian method is that it provided estimates of uncertainty for these parameters of interest and good coverage, whereas the β-substitution method only provided estimates of uncertainty for the AM, and coverage was not as consistent. Selection of one or the other method depends on the needs of the practitioner, the availability of prior information, and the distribution characteristics of the measurement data. We suggest the use of Bayesian methods if the practitioner has the computational resources and prior information, as the method would generally provide accurate estimates and also provides the distributions of all of the parameters, which could be useful for making decisions in some applications. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  3. OCR enhancement through neighbor embedding and fast approximate nearest neighbors

    NASA Astrophysics Data System (ADS)

    Smith, D. C.

    2012-10-01

    Generic optical character recognition (OCR) engines often perform very poorly in transcribing scanned low-resolution (LR) text documents. To improve OCR performance, we apply the Neighbor Embedding (NE) single-image super-resolution (SISR) technique to LR scanned text documents to obtain high-resolution (HR) versions, which we subsequently process with OCR. For comparison, we repeat this procedure using bicubic interpolation (BI). We demonstrate that mean-square errors (MSE) in NE HR estimates do not increase substantially when NE is trained in one Latin font style and tested in another, provided both styles belong to the same font category (serif or sans serif). This is very important in practice, since for each font size, the number of training sets required for each category may be reduced from dozens to just one. We also incorporate randomized k-d trees into our NE implementation to perform approximate nearest neighbor search, and obtain a 1000x speed-up of our original NE implementation, with negligible MSE degradation. This acceleration also made it practical to combine all of our size-specific NE Latin models into a single Universal Latin Model (ULM). The ULM eliminates the need to determine the unknown font category and size of an input LR text document and match it to an appropriate model, a very challenging task, since the dpi (pixels per inch) of the input LR image is generally unknown. Our experiments show that OCR character error rates (CER) were over 90% when we applied the Tesseract OCR engine to LR text documents (scanned at 75 dpi and 100 dpi) in the 6-10 pt range. By contrast, using k-d trees and the ULM, CER after NE preprocessing averaged less than 7% at 3x (100 dpi LR scanning) and 4x (75 dpi LR scanning) magnification, over an order of magnitude improvement. Moreover, CER after NE preprocessing was more than 6 times lower on average than after BI preprocessing.
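
    A minimal sketch of the accelerated neighbour-search step, with SciPy's cKDTree (approximate search via its eps parameter) standing in for the randomized k-d trees the authors used, followed by the standard neighbour-embedding least-squares weight solve; patch dimensions and all names are assumptions.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def build_patch_index(lr_patches):
        """Index flattened low-resolution training patches for fast NN search."""
        return cKDTree(lr_patches)

    def neighbor_embedding_weights(tree, lr_patches, query, k=5, eps=0.5):
        """Find k (approximately, via eps > 0) nearest LR patches and solve for
        the least-squares reconstruction weights used by neighbour embedding."""
        _, idx = tree.query(query, k=k, eps=eps)   # eps > 0 -> approximate search
        N = lr_patches[idx]                        # (k, d) neighbour patches
        G = (N - query) @ (N - query).T            # local Gram matrix
        G += 1e-8 * np.eye(k)                      # regularize for stability
        w = np.linalg.solve(G, np.ones(k))
        return idx, w / w.sum()                    # weights sum to 1

    # HR estimate for the query patch: y_hat = w @ hr_patches[idx],
    # with hr_patches row-aligned to lr_patches.
    ```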

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Youkhana, Adel H.; Ogoshi, Richard M.; Kiniry, James R.

    Biomass is a promising renewable energy option that provides a more environmentally sustainable alternative to fossil resources by reducing the net flux of greenhouse gases to the atmosphere. Yet allometric models that allow non-destructive prediction of aboveground biomass (AGB) and biomass carbon (C) stock have not yet been developed for the tropical perennial C4 grasses currently under consideration as potential bioenergy feedstocks in Hawaii and other subtropical and tropical locations. The objectives of this study were to develop optimal allometric relationships and site-specific models to predict the AGB and biomass C stock of napiergrass, energycane, and sugarcane under cultivation practices for renewable energy, and to validate these site-specific models against independent datasets generated from sites with widely different environments. Several allometric models were developed for each species from data at a low-elevation field on the island of Maui, Hawaii. A simple power model in stalk diameter (D) was best related to AGB and biomass C stock for napiergrass, energycane, and sugarcane (R2 = 0.98, 0.96, and 0.97, respectively). The models were then tested against data collected from independent fields across an environmental gradient. For all crops, the models over-predicted AGB in plants with lower stalk D, while AGB was under-predicted in plants with higher stalk D. The stalk D models were better for biomass prediction than the dewlap H models (height from the base cut to the most recently exposed leaf dewlap), which showed weak validation performance. Although the stalk D model performed better, the systematic component of the mean square error (MSE) ranged from 23 to 43% of MSE for all crops. A strong relationship existed between the model coefficient and rainfall, even though these were irrigated systems, suggesting a simple site-specific coefficient modulator for rainfall to reduce systematic errors in water-limited areas. These allometric equations provide a tool for farmers in the tropics to estimate perennial C4 grass biomass and C stock during decision-making for land management and as an environmental sustainability indicator within a renewable energy system.
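
    A minimal sketch of fitting the simple power model AGB = a·D^b and splitting the MSE into systematic and unsystematic parts. The Willmott-style decomposition (regressing predictions on observations) is an assumption about what "MSE-systematic" denotes here, and the data arrays are toy values for illustration only.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def power_model(D, a, b):
        return a * D**b              # AGB as a power function of stalk diameter

    # D (cm) and AGB (kg) would come from destructive harvest plots; toy values here.
    D = np.array([1.2, 1.6, 2.0, 2.4, 2.8, 3.2])
    AGB = np.array([0.35, 0.80, 1.55, 2.60, 4.10, 6.00])

    (a, b), _ = curve_fit(power_model, D, AGB, p0=(0.1, 2.0))
    pred = power_model(D, a, b)

    # Willmott-style split: regress predictions on observations, then measure
    # how much of the total error the fitted line itself carries (systematic).
    slope, intercept = np.polyfit(AGB, pred, 1)
    pred_hat = slope * AGB + intercept
    mse = np.mean((pred - AGB) ** 2)
    mse_systematic = np.mean((pred_hat - AGB) ** 2)
    print(f"a={a:.3f}, b={b:.3f}, systematic share={mse_systematic / mse:.0%}")
    ```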

  5. WE-AB-207A-02: John’s Equation Based Consistency Condition and Incomplete Projection Restoration Upon Circular Orbit CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, J; Qi, H; Wu, S

    Purpose: In transmitted X-ray tomography imaging, projections are sometimes incomplete due to a variety of reasons, such as geometry inaccuracy, defective detector cells, etc. To address this issue, we have derived a direct consistency condition based on John's equation, and propose a method to effectively restore incomplete projections based on this consistency condition. Methods: Through parameter substitutions, we have derived a direct consistency condition equation from John's equation, in which the left side is only the projection derivative with respect to view and the right side contains the projection derivatives with respect to the other geometrical parameters. Based on this consistency condition, a projection restoration method is proposed, which includes five steps: 1) forward-project the reconstructed image and use linear interpolation to estimate the incomplete projections as the initial result; 2) perform a Fourier transform on the projections; 3) restore the incomplete frequency data using the consistency condition equation; 4) perform an inverse Fourier transform; 5) repeat steps 2)-4) until our criterion is met to terminate the iteration. Results: A beam-blocking-based scatter correction case and a bad-pixel correction case were used to demonstrate the efficacy and robustness of our restoration method. The mean absolute error (MAE), signal-to-noise ratio (SNR) and mean square error (MSE) were employed as evaluation metrics for the reconstructed images. For the scatter correction case, the MAE is reduced from 63.3% to 71.7% with 4 iterations. Compared with the existing Patch's method, the MAE of our method is further reduced by 8.72%. For the bad-pixel case, the SNR of the image reconstructed by our method is increased from 13.49% to 21.48%, with the MSE decreased by 45.95%, compared with the linear interpolation method. Conclusion: Our studies have demonstrated that our restoration method based on the new consistency condition can effectively restore incomplete projections, especially their high-frequency components.

  6. The University and the Strategic Defense Initiative.

    ERIC Educational Resources Information Center

    Winn, Ira J.

    1987-01-01

    Under full scrutiny, the Strategic Defense Initiative program is revealed as a form of escapism from global realities, with dangerous and destabilizing themes for both the university and society. Universities must face this issue squarely, and limit their focus to activities best suited to an intellectually constructive and humane purpose. (MSE)

  7. Central obesity, leptin and cognitive decline: the Sacramento Area Latino Study on Aging.

    PubMed

    Zeki Al Hazzouri, Adina; Haan, Mary N; Whitmer, Rachel A; Yaffe, Kristine; Neuhaus, John

    2012-01-01

    Central obesity is a risk factor for cognitive decline. Leptin is secreted by adipose tissue and has been associated with better cognitive function. Aging Mexican Americans have higher levels of obesity than non-Hispanic Whites, but no investigations have examined the relationship between leptin and cognitive decline among them, or the role of central obesity in this association. We analyzed 1,480 dementia-free older Mexican Americans who were followed over 10 years. Cognitive function was assessed every 12-15 months with the Modified Mini Mental State Exam (3MSE) and the Spanish and English Verbal Learning Test (SEVLT). For females with a small waist circumference (≤35 inches), an interquartile-range difference in leptin was associated with 35% fewer 3MSE errors and 22% less decline in the SEVLT score over 10 years. For males with a small waist circumference (≤40 inches), an interquartile-range difference in leptin was associated with 44% fewer 3MSE errors and 30% less decline in the SEVLT score over 10 years. There was no association between leptin and cognitive decline among females or males with a large waist circumference. Leptin interacts with central obesity in shaping cognitive decline. Our findings provide valuable information about the effects of metabolic risk factors on cognitive function. Copyright © 2012 S. Karger AG, Basel.

  8. Post-stratification sampling in small area estimation (SAE) model for unemployment rate estimation by Bayes approach

    NASA Astrophysics Data System (ADS)

    Hanike, Yusrianti; Sadik, Kusman; Kurnia, Anang

    2016-02-01

    This research modeled the unemployment rate in Indonesia, assumed to follow a Poisson distribution, estimating it by modifying post-stratification sampling and a small area estimation (SAE) model. Post-stratification is a sampling technique in which strata are formed after the survey data have been collected; it is used when the survey was not designed to estimate the area of interest directly. The area of interest here was the education level of the unemployed, separated into seven categories. The data were obtained from the National Labour Force Survey (Sakernas) collected by Statistics Indonesia (BPS). This national survey yields too small a sample at the district level, and an SAE model is one alternative for addressing this. Accordingly, we combined post-stratification sampling with an SAE model. This research considered two main post-stratification models: Model I defines the education category as a dummy variable, and Model II defines it as an area random effect. Both models failed to comply with the Poisson assumption. Using a Poisson-Gamma model, Model I's overdispersion of 1.23 was corrected to 0.91 chi-square/df, and Model II's underdispersion of 0.35 was corrected to 0.94 chi-square/df. Empirical Bayes was applied to estimate the proportion of each education category among the unemployed. Judged alongside the Bayesian Information Criterion (BIC), Model I had a smaller mean square error (MSE) than Model II.

  9. Sum of top-hat transform based algorithm for vessel enhancement in MRA images

    NASA Astrophysics Data System (ADS)

    Ouazaa, Hibet-Allah; Jlassi, Hajer; Hamrouni, Kamel

    2018-04-01

    Magnetic resonance angiography (MRA) images are rich in information, but they suffer from poor contrast, uneven illumination and noise. It is therefore necessary to enhance the images, yet significant information can be lost if improper techniques are applied. In this paper, we propose a new method of enhancement. We first applied the CLAHE method to increase the contrast of the image. Then, we applied a sum of top-hat transforms to increase the brightness of the vessels, performed with a structuring element oriented at different angles. The methodology was tested and evaluated on the publicly available BRAINIX database, using MSE (mean square error), PSNR (peak signal-to-noise ratio) and SNR (signal-to-noise ratio) as evaluation measures. The results demonstrate that the proposed method can efficiently enhance image details and is comparable with state-of-the-art algorithms. Hence, the proposed method could be broadly used in various applications.
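
    A minimal sketch of the sum-of-top-hats step, using scipy.ndimage.white_tophat with line-shaped structuring elements rotated through several angles; the element length and angle set are assumptions, and the preceding CLAHE step is left to a library implementation such as skimage.exposure.equalize_adapthist.

    ```python
    import numpy as np
    from scipy import ndimage

    def line_footprint(length=15, angle_deg=0.0):
        """Binary line-shaped structuring element at the given orientation."""
        fp = np.zeros((length, length), bool)
        c = length // 2
        theta = np.deg2rad(angle_deg)
        for r in range(-c, c + 1):
            i = int(round(c + r * np.sin(theta)))
            j = int(round(c + r * np.cos(theta)))
            fp[i, j] = True
        return fp

    def sum_of_tophats(image, length=15, angles=range(0, 180, 15)):
        """Sum white top-hats over oriented line elements to brighten vessels."""
        out = np.zeros(image.shape, dtype=float)
        for a in angles:
            out += ndimage.white_tophat(image, footprint=line_footprint(length, a))
        return out

    # enhanced = image_clahe + sum_of_tophats(image_clahe)   # vessels boosted
    ```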

  10. Bio-oil from cassava peel: a potential renewable energy source.

    PubMed

    Ki, Ong Lu; Kurniawan, Alfin; Lin, Chun Xiang; Ju, Yi-Hsu; Ismadji, Suryadi

    2013-10-01

    In this work, liquid biofuel (bio-oil) was produced by pyrolyzing cassava peel. The experiments were conducted isothermally in a fixed-bed tubular reactor at temperatures ranging from 400 to 600°C with a heating rate of 20°C/min. The chemical composition of the bio-oil was analyzed by a gas chromatography-mass spectrometry (GC-MS) technique. For the optimization of the liquid product, temperature proved to be the most decisive factor. The maximum bio-oil yield of ca. 51.2% was obtained at 525°C, and the biofuel has a gross calorific value of 27.43 MJ/kg. The kinetic-based mechanistic model fitted the experimental yields of the pyrolysis products well, with mean squared errors (MSE) of 13.37 (R(2)=0.96) for solid (char), 16.24 (R(2)=0.95) for liquid (bio-oil), and 0.49 (R(2)=0.99) for gas. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. An enhanced approach for biomedical image restoration using image fusion techniques

    NASA Astrophysics Data System (ADS)

    Karam, Ghada Sabah; Abbas, Fatma Ismail; Abood, Ziad M.; Kadhim, Kadhim K.; Karam, Nada S.

    2018-05-01

    Biomedical images are generally noisy and slightly blurred due to the physical mechanisms of the acquisition process, so common degradations in biomedical images are noise and poor contrast. The idea of biomedical image enhancement is to improve the quality of the image for early diagnosis. In this paper we use wavelet transformation to remove Gaussian noise from biomedical images, a positron emission tomography (PET) image and a radiography (Radio) image, in different color spaces (RGB, HSV, YCbCr), and we fuse the denoised images resulting from the above denoising techniques using an image-addition method. Quantitative performance metrics such as signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and mean square error (MSE) are then computed, since these statistical measures help in the assessment of fidelity and image quality. The results showed that our approach can be applied across color spaces for these types of biomedical images.
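
    The three reported metrics have standard definitions; minimal implementations under the usual conventions follow (the 8-bit peak for PSNR is an assumption where the paper does not state it).

    ```python
    import numpy as np

    def mse(ref, test):
        ref, test = np.asarray(ref, float), np.asarray(test, float)
        return np.mean((ref - test) ** 2)

    def psnr(ref, test, peak=255.0):
        """Peak signal-to-noise ratio in dB (peak=255 assumes 8-bit images)."""
        m = mse(ref, test)
        return np.inf if m == 0 else 10 * np.log10(peak**2 / m)

    def snr(ref, test):
        """Signal-to-noise ratio in dB: signal power over error power."""
        ref, test = np.asarray(ref, float), np.asarray(test, float)
        return 10 * np.log10(np.sum(ref**2) / np.sum((ref - test) ** 2))
    ```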

  12. A New Strategy for ECG Baseline Wander Elimination Using Empirical Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Shahbakhti, Mohammad; Bagheri, Hamed; Shekarchi, Babak; Mohammadi, Somayeh; Naji, Mohsen

    2016-06-01

    Electrocardiogram (ECG) signals may be affected by various artifacts and noises that have biological and external sources. Baseline wander (BW) is a low-frequency artifact that may be caused by breathing, body movements and loose sensor contact. In this paper, a novel method based on empirical mode decomposition (EMD) for removal of baseline noise from ECG is presented. Compared to other EMD-based methods, the novelty of this research is to reach an optimized number of decomposed levels for ECG BW de-noising using the mean power frequency (MPF), while also reducing processing time. To evaluate the performance of the proposed method, a fifth-order Butterworth high-pass filter (BHPF) with cut-off frequency at 0.5 Hz and a wavelet approach were applied for comparison. Three performance indices between pure and filtered signals, signal-to-noise ratio (SNR), mean square error (MSE) and correlation coefficient (CC), were utilized to assess the presented techniques. Results suggest that the EMD-based method outperforms the other filtering methods.

  13. River flow modeling using artificial neural networks in Kapuas river, West Kalimantan, Indonesia

    NASA Astrophysics Data System (ADS)

    Herawati, Henny; Suripin, Suharyanto

    2017-11-01

    The Kapuas River is located in the province of West Kalimantan. It is 1,086 km long, with a river basin area of about 100,000 km2. River flow data for such a long river with very wide catchments are difficult to obtain, yet they are essential for planning waterworks. Predicting the water flow in the catchment area requires many hydrological coefficients, so it is very difficult to obtain results that are close to the real conditions. This paper demonstrates that an artificial neural network (ANN) can be used to predict the water flow. The ANN technique can predict the water discharge in the Kapuas River based on rainfall and evaporation data. Training the artificial neural network model on the available data yielded a mean square error (MSE) of 0.00007. River flow predictions could be carried out after the training. The results showed a difference between measured and predicted water discharge of about 4%.

  14. Hybrid context aware recommender systems

    NASA Astrophysics Data System (ADS)

    Jain, Rajshree; Tyagi, Jaya; Singh, Sandeep Kumar; Alam, Taj

    2017-10-01

    Recommender systems and context awareness are currently vital fields of research. Most hybrid recommendation systems combine content-based and collaborative filtering techniques, whereas this work combines context with collaborative filtering. The paper presents a hybrid context-aware recommender system for books and movies that gives recommendations based on the user context as well as user or item similarity. It also addresses the issue of dimensionality reduction using weighted pre-filtering based on a dynamically entered user context and context preference. This step helps to reduce the size of the dataset for collaborative filtering. Bias-subtracted collaborative filtering is used so as to consider the relative ratings of a particular user and not the absolute values. Cosine similarity is used as the metric to determine the similarity between users or items. The unknown ratings are calculated and evaluated using MSE (mean squared error) on test and train datasets. The overall recommendation process personalizes recommendations and gives more accurate results with reduced complexity in collaborative filtering.
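
    A minimal sketch of the bias-subtracted, cosine-similarity collaborative-filtering step in its user-user form, with the context pre-filtering reduced to an optional boolean mask; the neighbourhood size and all names are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def predict_ratings(R, mask=None, k=10):
        """User-user CF on a ratings matrix R (0 = unknown), bias-subtracted.

        mask: optional boolean matrix from context pre-filtering; entries outside
        the user's declared context are dropped before similarity is computed.
        """
        R = np.where(mask, R, 0.0) if mask is not None else R.astype(float)
        known = R > 0
        counts = np.maximum(known.sum(1), 1)
        user_bias = R.sum(1) / counts                        # per-user mean rating
        C = np.where(known, R - user_bias[:, None], 0.0)     # centered ratings
        norms = np.linalg.norm(C, axis=1)
        sim = C @ C.T / np.maximum(np.outer(norms, norms), 1e-12)  # cosine sim
        np.fill_diagonal(sim, 0.0)
        if k < sim.shape[0]:
            # Keep only each user's k most similar neighbours.
            thresh = -np.sort(-np.abs(sim), axis=1)[:, k - 1:k]
            sim = np.where(np.abs(sim) >= thresh, sim, 0.0)
        denom = np.maximum(np.abs(sim) @ known.astype(float), 1e-12)
        return user_bias[:, None] + (sim @ C) / denom        # add bias back

    def mse_on_known(R_true, pred):
        known = R_true > 0
        return np.mean((R_true[known] - pred[known]) ** 2)
    ```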

  15. Waveform Optimization for Target Estimation by Cognitive Radar with Multiple Antennas.

    PubMed

    Yao, Yu; Zhao, Junhui; Wu, Lenan

    2018-05-29

    A new scheme based on Kalman filtering to optimize the waveforms of an adaptive multi-antenna radar system for target impulse response (TIR) estimation is presented. This work aims to improve the performance of TIR estimation by making use of the temporal correlation between successive received signals, and to minimize the mean square error (MSE) of TIR estimation. The waveform design approach is based upon constant learning of the target features at the receiver. In the multiple-antenna scenario, a dynamic feedback control loop is established to monitor, in real time, the change in the target features extracted from received signals, and the transmitter adapts its transmitted waveform to suit the time-varying environment. Finally, the simulation results show that, compared with the waveform design method based on the MAP criterion, the proposed waveform design algorithm improves the performance of TIR estimation for extended targets over multiple iterations, and has a relatively lower level of complexity.

  16. Greedy algorithms for diffuse optical tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Dileep, B. P. V.; Das, Tapan; Dutta, Pranab K.

    2018-03-01

    Diffuse optical tomography (DOT) is a noninvasive imaging modality that reconstructs the optical parameters of a highly scattering medium. However, the inverse problem of DOT is ill-posed and highly nonlinear due to the zig-zag propagation of photons diffusing through the cross section of tissue. Conventional DOT imaging methods iteratively compute the solution of a forward diffusion equation solver, which makes the problem computationally expensive, and these methods fail when the geometry is complex. Recently, the theory of compressive sensing (CS) has received considerable attention because of its efficient use in biomedical imaging applications. The objective of this paper is to solve a given DOT inverse problem within a compressive sensing framework; various greedy algorithms, such as orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), stagewise orthogonal matching pursuit (StOMP), regularized orthogonal matching pursuit (ROMP) and simultaneous orthogonal matching pursuit (S-OMP), have been studied to reconstruct the change in the absorption parameter, i.e. Δα, from the boundary data. The greedy algorithms have also been validated experimentally on a paraffin wax rectangular phantom through a well-designed experimental setup. We also studied conventional DOT methods, namely the least squares method and truncated singular value decomposition (TSVD), for comparison. One of the main features of this work is the use of fewer source-detector pairs, which can facilitate the use of DOT in routine screening applications. Performance metrics such as mean square error (MSE), normalized mean square error (NMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) have been used to evaluate the performance of the algorithms mentioned in this paper. Extensive simulation results confirm that CS-based DOT reconstruction outperforms conventional DOT imaging methods in terms of computational efficiency. The main advantage of this approach is that the forward diffusion equation solver need not be repeatedly solved.
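
    Of the greedy family listed, orthogonal matching pursuit is the simplest to sketch: pick the column most correlated with the residual, re-fit the selected columns by least squares, repeat. The toy sensing matrix below is random rather than a DOT Jacobian, and all names are illustrative.

    ```python
    import numpy as np

    def omp(A, y, n_nonzero, tol=1e-9):
        """Orthogonal matching pursuit: greedily pick the column most correlated
        with the residual, then re-fit all chosen columns by least squares."""
        residual = y.copy()
        support = []
        x = np.zeros(A.shape[1])
        for _ in range(n_nonzero):
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
            if np.linalg.norm(residual) < tol:
                break
        x[support] = coef
        return x

    # Toy check: recover a 5-sparse vector from 40 random measurements.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(40, 100)) / np.sqrt(40)
    x_true = np.zeros(100)
    x_true[rng.choice(100, 5, replace=False)] = rng.normal(size=5)
    x_hat = omp(A, A @ x_true, n_nonzero=5)
    ```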

  17. Vulnerability of shallow groundwater and drinking-water wells to nitrate in the United States

    USGS Publications Warehouse

    Nolan, Bernard T.; Hitt, Kerie J.

    2006-01-01

    Two nonlinear models were developed at the national scale to (1) predict contamination of shallow ground water (typically < 5 m deep) by nitrate from nonpoint sources and (2) to predict ambient nitrate concentration in deeper supplies used for drinking. The new models have several advantages over previous national-scale approaches. First, they predict nitrate concentration (rather than probability of occurrence), which can be directly compared with water-quality criteria. Second, the models share a mechanistic structure that segregates nitrogen (N) sources and physical factors that enhance or restrict nitrate transport and accumulation in ground water. Finally, data were spatially averaged to minimize small-scale variability so that the large-scale influences of N loading, climate, and aquifer characteristics could more readily be identified. Results indicate that areas with high N application, high water input, well-drained soils, fractured rocks or those with high effective porosity, and lack of attenuation processes have the highest predicted nitrate concentration. The shallow groundwater model (mean square error or MSE = 2.96) yielded a coefficient of determination (R2) of 0.801, indicating that much of the variation in nitrate concentration is explained by the model. Moderate to severe nitrate contamination is predicted to occur in the High Plains, northern Midwest, and selected other areas. The drinking-water model performed comparably (MSE = 2.00, R2 = 0.767) and predicts that the number of users on private wells and residing in moderately contaminated areas (>5 to ≤10 mg/L nitrate) decreases by 12% when simulation depth increases from 10 to 50 m.

  18. Vulnerability of shallow groundwater and drinking-water wells to nitrate in the United States.

    PubMed

    Nolan, Bernard T; Hitt, Kerie J

    2006-12-15

    Two nonlinear models were developed at the national scale to (1) predict contamination of shallow ground water (typically < 5 m deep) by nitrate from nonpoint sources and (2) to predict ambient nitrate concentration in deeper supplies used for drinking. The new models have several advantages over previous national-scale approaches. First, they predict nitrate concentration (rather than probability of occurrence), which can be directly compared with water-quality criteria. Second, the models share a mechanistic structure that segregates nitrogen (N) sources and physical factors that enhance or restrict nitrate transport and accumulation in ground water. Finally, data were spatially averaged to minimize small-scale variability so that the large-scale influences of N loading, climate, and aquifer characteristics could more readily be identified. Results indicate that areas with high N application, high water input, well-drained soils, fractured rocks or those with high effective porosity, and lack of attenuation processes have the highest predicted nitrate concentration. The shallow groundwater model (mean square error or MSE = 2.96) yielded a coefficient of determination (R2) of 0.801, indicating that much of the variation in nitrate concentration is explained by the model. Moderate to severe nitrate contamination is predicted to occur in the High Plains, northern Midwest, and selected other areas. The drinking-water model performed comparably (MSE = 2.00, R2 = 0.767) and predicts that the number of users on private wells and residing in moderately contaminated areas (>5 to ≤10 mg/L nitrate) decreases by 12% when simulation depth increases from 10 to 50 m.

  19. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of its algorithm determines the quality and resolution of the reconstructed image. Although other algorithms exist, the filtered back-projection (FBP) algorithm is still the classical and most commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step in overcoming artifacts in the reconstructed image. Simple use of classical filters such as the Shepp-Logan (SL) and Ram-Lak (RL) filters has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noises. Therefore, an improved wavelet denoising combined with a parallel-beam FBP algorithm is used to enhance the quality of the reconstructed image in this paper. In the experiments, the reconstruction results of the improved wavelet denoising were compared with those of other methods (direct FBP, mean filter combined with FBP, and median filter combined with FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with the three filters were each tested. Experimental results show the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms by two evaluation standards, mean square error (MSE) and peak signal-to-noise ratio (PSNR), it was found that the reconstruction of the improved FBP based on db2 and a Hanning filter at decomposition scale 2 was best: its MSE value was lower and its PSNR value higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.

  20. Matching weights to simultaneously compare three treatment groups: Comparison to three-way matching

    PubMed Central

    Yoshida, Kazuki; Hernández-Díaz, Sonia; Solomon, Daniel H.; Jackson, John W.; Gagne, Joshua J.; Glynn, Robert J.; Franklin, Jessica M.

    2017-01-01

    BACKGROUND Propensity score matching is a commonly used tool. However, its use in settings with more than two treatment groups has been less frequent. We examined the performance of a recently developed propensity score weighting method in the three treatment group setting. METHODS The matching weight method is an extension of inverse probability of treatment weighting (IPTW) that reweights both exposed and unexposed groups to emulate a propensity score matched population. Matching weights can generalize to multiple treatment groups. The performance of matching weights in the three-group setting was compared via simulation to three-way 1:1:1 propensity score matching and IPTW. We also applied these methods to an empirical example that compared the safety of three analgesics. RESULTS Matching weights had similar bias, but better mean squared error (MSE) compared to three-way matching in all scenarios. The benefits were more pronounced in scenarios with a rare outcome, unequally sized treatment groups, or poor covariate overlap. IPTW’s performance was highly dependent on covariate overlap. In the empirical example, matching weights achieved the best balance for 24 out of 35 covariates. Hazard ratios were numerically similar to matching. However, the confidence intervals were narrower for matching weights. CONCLUSIONS Matching weights demonstrated improved performance over three-way matching in terms of MSE, particularly in simulation scenarios where finding matched subjects was difficult. Given its natural extension to settings with even more than three groups, we recommend matching weights for comparing outcomes across multiple treatment groups, particularly in settings with rare outcomes or unequal exposure distributions. PMID:28151746
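
    A minimal sketch of matching weights in the three-group setting: fit a multinomial propensity model, then weight each subject by min_k e_k(x) divided by the propensity of the treatment actually received. scikit-learn's LogisticRegression is a stand-in for whatever propensity model the authors used, and the names are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def matching_weights(X, z):
        """Matching weights for a 3-level treatment z coded as integers {0,1,2}.

        Each subject gets min_k e_k(x) / e_{z_i}(x), which emulates a 1:1:1
        propensity-matched population without discarding unmatched subjects.
        """
        ps_model = LogisticRegression(max_iter=1000).fit(X, z)
        e = ps_model.predict_proba(X)                 # (n, 3) propensity scores
        return e.min(axis=1) / e[np.arange(len(z)), z]

    # The weights then feed a weighted outcome model (e.g. weighted means or a
    # weighted Cox model) to compare the three groups; covariate balance can be
    # checked with weighted standardized differences.
    ```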

  1. Artificial neural network modelling for organic and total nitrogen removal of aerobic granulation under steady-state condition.

    PubMed

    Gong, H; Pishgar, R; Tay, J H

    2018-04-27

    Aerobic granulation is a recent technology with a high level of complexity and sensitivity to environmental and operational conditions. Artificial neural networks (ANNs), computational tools capable of describing complex non-linear systems, are well suited to simulating aerobic granular bioreactors. In this study, two feedforward backpropagation ANN models were developed to predict the chemical oxygen demand removal efficiency (Model I) and total nitrogen removal efficiency (Model II) of aerobic granulation technology under steady-state conditions. The fundamentals of ANN models and the steps to create them are briefly reviewed. The models were fed with 205 and 136 data points, respectively, collected from laboratory-, pilot-, and full-scale studies on aerobic granulation technology reported in the literature. Initially, 60%, 20%, and 20% (Model I) and 80%, 10%, and 10% (Model II) of the points in the corresponding datasets were randomly chosen and used for training, testing, and validation. The overall coefficient of determination (R2) value and mean squared error (MSE) of the two models were initially 0.49 and 15.5, and 0.37 and 408, respectively. To improve model performance, two data division methods were used. While one method is generic and potentially applicable to other fields, the other can only be applied to modelling the performance of aerobic granular reactors. The R2 value and MSE improved to 0.90 and 2.54, and 0.81 and 121.56, respectively, after applying the new data division methods. The results demonstrated that ANN-based models are a capable simulation approach for predicting a complicated process like aerobic granulation.

  2. A Hybrid Model for Predicting the Prevalence of Schistosomiasis in Humans of Qianjiang City, China

    PubMed Central

    Wang, Ying; Lu, Zhouqin; Tian, Lihong; Tan, Li; Shi, Yun; Nie, Shaofa; Liu, Li

    2014-01-01

    Background/Objective: Schistosomiasis is still a major public health problem in China, despite the fact that the government has implemented a series of strategies to prevent and control the spread of the parasitic disease. Advance warning and reliable forecasting can help policymakers adjust and implement strategies more effectively, leading to the control and elimination of schistosomiasis. Our aim is to explore the application of a hybrid forecasting model to track trends in the prevalence of schistosomiasis in humans, which provides a methodological basis for predicting and detecting schistosomiasis infection in endemic areas. Methods: A hybrid approach combining the autoregressive integrated moving average (ARIMA) model and the nonlinear autoregressive neural network (NARNN) model was used to forecast the prevalence of schistosomiasis over the next four years. Forecasting performance was compared among the hybrid ARIMA-NARNN model, the single ARIMA model and the single NARNN model. Results: The modelling mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were 0.1869×10−4, 0.0029 and 0.0419, with corresponding testing errors of 0.9375×10−4, 0.0081 and 0.9064, respectively. These error values generated with the hybrid model were all lower than those obtained from the single ARIMA or NARNN model. The forecast values were 0.75%, 0.80%, 0.76% and 0.77% for the next four years, demonstrating no downward trend. Conclusion: The hybrid model has high-quality prediction accuracy for the prevalence of schistosomiasis, which provides a methodological basis for future schistosomiasis monitoring and control strategies in the study area. It is worth attempting to utilize the hybrid detection scheme in other schistosomiasis-endemic areas, and for other infectious diseases. PMID:25119882
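
    The hybrid idea can be sketched as ARIMA for the linear structure plus a small autoregressive network trained on the ARIMA residuals, with the two forecasts summed. statsmodels' ARIMA and scikit-learn's MLPRegressor stand in for the authors' implementations, and the order, lag count, and horizon are assumptions.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from sklearn.neural_network import MLPRegressor

    def hybrid_forecast(series, order=(1, 1, 1), n_lags=4, horizon=4):
        series = np.asarray(series, float)
        arima = ARIMA(series, order=order).fit()
        linear_fc = arima.forecast(horizon)          # linear part of the forecast

        # NARNN-style part: model what ARIMA missed, autoregressively on residuals.
        resid = arima.resid
        X = np.array([resid[i:i + n_lags] for i in range(len(resid) - n_lags)])
        y = resid[n_lags:]
        net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                           random_state=0).fit(X, y)

        window = resid[-n_lags:].copy()
        resid_fc = []
        for _ in range(horizon):                     # roll the window forward
            nxt = net.predict(window[None, :])[0]
            resid_fc.append(nxt)
            window = np.roll(window, -1)
            window[-1] = nxt
        return linear_fc + np.array(resid_fc)        # hybrid = linear + nonlinear
    ```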

  3. Linking Field and Satellite Observations to Reveal Differences in Single vs. Double-Cropped Soybean Yields in Central Brazil

    NASA Astrophysics Data System (ADS)

    Jeffries, G. R.; Cohn, A.

    2016-12-01

    Soy-corn double cropping (DC) has been widely adopted in Central Brazil alongside single-cropped (SC) soybean production. DC involves different cropping calendars and soy varieties, and may be associated with different crop yield patterns and volatility than SC. Study of the performance of the region's agriculture in a changing climate depends on tracking differences in the productivity of SC vs. DC, but has been limited by crop yield data that conflate the two systems. We predicted SC and DC yields across Central Brazil, drawing on field observations and remotely sensed data. We first modeled field yield estimates as a function of remotely sensed DC status, vegetation index (VI) metrics, and other management and biophysical factors. We then used the estimated statistical model to predict SC and DC soybean yields at each 500 m grid cell of Central Brazil for harvest years 2001-2015. The yield estimation model was constructed using 1) a repeated cross-sectional survey of soybean yields and management factors for years 2007-2015, 2) a custom agricultural land cover classification dataset which assimilates earlier datasets for the region, and 3) 500 m 8-day MODIS image composites used to calculate the wide dynamic range vegetation index (WDRVI) and derived metrics such as the area under the curve of WDRVI values in critical crop development periods. A statistical yield estimation model comprising primarily WDRVI metrics, DC status, and spatial fixed effects was developed on a subset of the yield dataset. Model validation was conducted by predicting previously withheld yield records, and then assessing error and goodness-of-fit for the predicted values with metrics including root mean squared error (RMSE), mean squared error (MSE), and R2. We found a statistical yield estimation model incorporating WDRVI and DC status to be an effective way to estimate crop yields over the region. Statistical properties of the resulting gridded yield dataset may be valuable for understanding linkages between crop yields, farm management factors, and climate.

  4. Pour une approche des grammaires d'apprentissage (Toward an Approach to Learners' Grammars).

    ERIC Educational Resources Information Center

    Feve, Guy

    1984-01-01

    An approach to teaching grammar that combines an understanding of the error patterns of nonnative speakers and a theoretical model that describes that language is better suited than most to the actual audience of that instruction. (MSE)

  5. The impact of breathing guidance and prospective gating during thoracic 4DCT imaging: an XCAT study utilizing lung cancer patient motion

    NASA Astrophysics Data System (ADS)

    Pollock, Sean; Kipritidis, John; Lee, Danny; Bernatowicz, Kinga; Keall, Paul

    2016-09-01

    Two interventions to overcome the deleterious impact irregular breathing has on thoracic-abdominal 4D computed tomography (4DCT) are (1) facilitating regular breathing using audiovisual biofeedback (AVB), and (2) prospective respiratory gating of the 4DCT scan based on the real-time respiratory motion. The purpose of this study was to compare the impact of AVB and gating on 4DCT imaging using the 4D eXtended cardiac torso (XCAT) phantom driven by patient breathing patterns. We obtained simultaneous measurements of chest and abdominal walls, thoracic diaphragm, and tumor motion from 6 lung cancer patients under two breathing conditions: (1) AVB, and (2) free breathing. The XCAT phantom was used to simulate 4DCT acquisitions in cine and respiratory gated modes. 4DCT image quality was quantified by artefact detection (NCCdiff), mean square error (MSE), and Dice similarity coefficient of lung and tumor volumes (DSClung, DSCtumor). 4DCT acquisition times and imaging dose were recorded. In cine mode, AVB improved NCCdiff, MSE, DSClung, and DSCtumor by 20% (p  =  0.008), 23% (p  <  0.001), 0.5% (p  <  0.001), and 4.0% (p  <  0.003), respectively. In respiratory gated mode, AVB improved NCCdiff, MSE, and DSClung by 29% (p  <  0.001), 34% (p  <  0.001), 0.4% (p  <  0.001), respectively. AVB increased the cine acquisitions by 15 s and reduced respiratory gated acquisitions by 31 s. AVB increased imaging dose in cine mode by 10%. This was the first study to quantify the impact of breathing guidance and respiratory gating on 4DCT imaging. With the exception of DSCtumor in respiratory gated mode, AVB significantly improved 4DCT image analysis metrics in both cine and respiratory gated modes over free breathing. The results demonstrate that AVB and respiratory-gating can be beneficial interventions to improve 4DCT for cancer radiation therapy, with the biggest gains achieved when these interventions are used simultaneously.

  6. Sparse Regression as a Sparse Eigenvalue Problem

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai

    2008-01-01

    We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly efficient technique for direct eigenvalue computation using partitioned matrix inverses, which leads to dramatic 10^3-fold speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n^4) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage, which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) generalizes Natarajan's algorithm [9], also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of choice of regularization.

  7. Fitting membrane resistance along with action potential shape in cardiac myocytes improves convergence: application of a multi-objective parallel genetic algorithm.

    PubMed

    Kaur, Jaspreet; Nygren, Anders; Vigmond, Edward J

    2014-01-01

    Fitting parameter sets of non-linear equations in cardiac single-cell ionic models to reproduce experimental behavior is a time-consuming process. The standard procedure is to adjust maximum channel conductances in ionic models to reproduce action potentials (APs) recorded in isolated cells. However, vastly different sets of parameters can produce similar APs. Furthermore, even with an excellent AP match in the single-cell case, tissue behaviour may be very different. We hypothesize that this uncertainty can be reduced by additionally fitting membrane resistance (Rm). To investigate the importance of Rm, we developed a genetic algorithm approach which incorporated Rm data calculated at a few points in the cycle, in addition to AP morphology. Performance was compared to a genetic algorithm using only AP morphology data. The optimal parameter sets and goodness of fit as computed by the different methods were compared. First, we fit an ionic model to itself, starting from a random parameter set. Next, we fit the AP of one ionic model to that of another. Finally, we fit an ionic model to experimentally recorded rabbit action potentials. Adding the extra objective (Rm at a few voltages) to the AP fit led to much better convergence. Typically, a smaller MSE (mean square error, defined as the average of the squared error between the target AP and the fitted AP) was achieved in one fifth of the number of generations compared to using only AP data. Importantly, the variability in fitted parameters was also greatly reduced, with many parameters showing an order-of-magnitude decrease in variability. Adding Rm to the objective function improves the robustness of fitting, better preserving tissue-level behavior, and should be incorporated.

  8. Direct estimation of tracer-kinetic parameter maps from highly undersampled brain dynamic contrast enhanced MRI.

    PubMed

    Guo, Yi; Lingala, Sajan Goud; Zhu, Yinghua; Lebel, R Marc; Nayak, Krishna S

    2017-10-01

    The purpose of this work was to develop and evaluate a T1-weighted dynamic contrast enhanced (DCE) MRI methodology where tracer-kinetic (TK) parameter maps are directly estimated from undersampled (k,t)-space data. The proposed reconstruction involves solving a nonlinear least squares optimization problem that includes explicit use of a full forward model to convert parameter maps to (k,t)-space, utilizing the Patlak TK model. The proposed scheme is compared against an indirect method that creates intermediate images by parallel imaging and compressed sensing before TK modeling. Thirteen fully sampled brain tumor DCE-MRI scans with 5-second temporal resolution are retrospectively undersampled at rates R = 20, 40, 60, 80, and 100 for each dynamic frame. TK maps are quantitatively compared based on root mean-squared-error (rMSE) and Bland-Altman analysis. The approach is also applied to four prospectively R = 30 undersampled whole-brain DCE-MRI data sets. In the retrospective study, the proposed method performed statistically better than the indirect method at R ≥ 80 for all 13 cases. This approach restored TK parameter values with fewer errors in tumor regions of interest, an improvement over a state-of-the-art indirect method. Applied prospectively, the proposed method provided whole-brain, high-resolution TK maps with good image quality. Model-based direct estimation of TK maps from (k,t)-space DCE-MRI data is feasible and is compatible with up to 100-fold undersampling. Magn Reson Med 78:1566-1578, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
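
    For reference, the Patlak TK model used as the forward model is linear in its parameters, so a per-voxel fit can be written in a few lines. This image-domain version is only a sketch (the paper instead solves the nonlinear problem directly in (k,t)-space), and the argument names are illustrative:

      import numpy as np

      def patlak_fit(ct, cp, t):
          # Fit C_t(t) = Ktrans * integral_0^t Cp dтau + vp * Cp(t)
          # by linear least squares, per voxel.
          integ_cp = np.concatenate(
              ([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
          A = np.column_stack([integ_cp, cp])      # design matrix
          (ktrans, vp), *_ = np.linalg.lstsq(A, ct, rcond=None)
          return ktrans, vp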

  9. Temporal subtraction contrast-enhanced dedicated breast CT

    PubMed Central

    Gazi, Peymon M.; Aminololama-Shakeri, Shadi; Yang, Kai; Boone, John M.

    2016-01-01

    Purpose: To develop a framework of deformable image registration and segmentation for temporal subtraction contrast-enhanced breast CT. Methods: An iterative histogram-based two-means clustering method was used for the segmentation. Dedicated breast CT images were segmented into background (air), adipose, fibroglandular and skin components. Fibroglandular tissue was classified as either normal or contrast-enhanced, then divided into tiers for the purpose of categorizing degrees of contrast enhancement. A variant of the Demons deformable registration algorithm, Intensity Difference Adaptive Demons (IDAD), was developed to correct for the large deformation forces that stemmed from contrast enhancement. The accuracy of the proposed method was evaluated in both mathematically simulated and physically acquired phantom images. Clinical usage and accuracy of the temporal subtraction framework were demonstrated using contrast-enhanced breast CT datasets from five patients. Registration performance was quantified using Normalized Cross Correlation (NCC), Symmetric Uncertainty Coefficient (SUC), Normalized Mutual Information (NMI), Mean Square Error (MSE) and Target Registration Error (TRE). Results: The proposed method outperformed conventional affine and other Demons variations in contrast-enhanced breast CT image registration. In simulation studies, IDAD exhibited improvement in MSE (0-16%), NCC (0-6%), NMI (0-13%) and TRE (0-34%) compared to the conventional Demons approaches, depending on the size and intensity of the enhancing lesion. As lesion size and contrast enhancement levels increased, so did the improvement. The drop in the correlation between the pre- and post-contrast images for the largest enhancement levels (150 Hounsfield units) in phantom studies is less than 1.2%. Registration error, measured by TRE, shows only submillimeter mismatches between the concordant anatomical target points in all patient studies. The algorithm was implemented using a parallel processing architecture, resulting in rapid execution time for the iterative segmentation and intensity-adaptive registration techniques. Conclusion: Characterization of contrast-enhanced lesions is improved using temporal subtraction contrast-enhanced dedicated breast CT. Adaptation of Demons registration forces as a function of contrast-enhancement levels provided a means to accurately align breast tissue in pre- and post-contrast image acquisitions, improving subtraction results. Spatial subtraction of the aligned images yields useful diagnostic information with respect to enhanced lesion morphology and uptake. PMID:27494376

  10. The design and validation of a hybrid digital-signal-processing plug-in for traditional cochlear implant speech processors.

    PubMed

    Hajiaghababa, Fatemeh; Marateb, Hamid R; Kermani, Saeed

    2018-06-01

    Cochlear implants (CIs) are electronic devices restoring partial hearing to individuals with profound hearing loss. In this paper, a new plug-in for traditional IIR filter-banks (FBs) is presented for cochlear implants based on wavelet neural networks (WNNs). Having provided such a plug-in for commercially available CIs, it is possible not only to use hardware already available on the market but also to optimize its performance compared with the state of the art. An online database of Dutch diphone perception was used in our study. The weights of the WNNs were tuned using particle swarm optimization (PSO) on a training set (speech-shaped noise (SSN) at 2 dB SNR), while performance was assessed on a test set in terms of objective and composite measures in a hold-out validation framework. The cost function was defined as a combination of the mean square error (MSE) and the short-time objective intelligibility (STOI) criterion on the training set. A variety of performance indices was used, including segmental signal-to-noise ratio (SNRseg), MSE, STOI, log-likelihood ratio (LLR), weighted spectral slope (WSS), and the composite measures Csig, Cbak and Covl. Meanwhile, the following CI speech processing techniques were used for comparison: traditional FBs, dual resonance nonlinear (DRNL) and simple dual path nonlinear (SPDN) models. The average SNRseg, MSE, and LLR values for the WNN over the entire data set were 2.496 ± 2.794, 0.086 ± 0.025 and 2.323 ± 0.281, respectively. The proposed method significantly improved MSE, SNR, SNRseg, LLR, Csig, Cbak and Covl compared with the other three methods (repeated-measures analysis of variance (ANOVA); P < 0.05). The average running time of the proposed algorithm (written in Matlab R2013a) on the training and test sets for each consonant or vowel on an Intel dual-core 2.10 GHz CPU with 2 GB of RAM was 9.91 ± 0.87 s and 0.19 ± 0.01 s, respectively. The proposed algorithm is accurate and precise and is thus a promising new plug-in for traditional CIs. Although the tuned algorithm is relatively fast, efficient vectorized implementations are necessary for real-time CI speech signal processing. Copyright © 2018 Elsevier B.V. All rights reserved.
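
    A hedged sketch of the kind of composite training cost described (MSE combined with an intelligibility term); stoi_fn and the weighting alpha are placeholders of ours, since the record does not give the exact combination:

      import numpy as np

      def composite_cost(clean, enhanced, stoi_fn, alpha=0.5):
          # stoi_fn is a placeholder for an STOI implementation
          # (e.g. the pystoi package); alpha trades off the two terms.
          mse = np.mean((clean - enhanced) ** 2)
          return alpha * mse + (1.0 - alpha) * (1.0 - stoi_fn(clean, enhanced))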

  11. L2-Boosting algorithm applied to high-dimensional problems in genomic selection.

    PubMed

    González-Recio, Oscar; Weigel, Kent A; Gianola, Daniel; Naya, Hugo; Rosa, Guilherme J M

    2010-06-01

    The L2-Boosting algorithm is one of the most promising machine-learning techniques to have appeared in recent decades. It may be applied to high-dimensional problems such as whole-genome studies, and it is relatively simple from a computational point of view. In this study, we used this algorithm in a genomic selection context to make predictions of yet-to-be-observed outcomes. Two data sets were used: (1) productive lifetime predicted transmitting abilities from 4702 Holstein sires genotyped for 32,611 single nucleotide polymorphisms (SNPs) derived from the Illumina BovineSNP50 BeadChip, and (2) progeny averages of food conversion rate, pre-corrected by environmental and mate effects, in 394 broilers genotyped for 3481 SNPs. Each of these data sets was split into training and testing sets, the latter comprising dairy or broiler sires whose ancestors were in the training set. Two weak learners, ordinary least squares (OLS) and non-parametric (NP) regression, were used for the L2-Boosting algorithm, to provide a stringent evaluation of the procedure. This algorithm was compared with BL [Bayesian LASSO (least absolute shrinkage and selection operator)] and BayesA regression. Learning tasks were carried out in the training set, whereas validation of the models was performed in the testing set. Pearson correlations between predicted and observed responses in the dairy cattle (broiler) data set were 0.65 (0.33), 0.53 (0.37), 0.66 (0.26) and 0.63 (0.27) for OLS-Boosting, NP-Boosting, BL and BayesA, respectively. The smallest bias and mean-squared errors (MSEs) were obtained with OLS-Boosting in both the dairy cattle (0.08 and 1.08) and broiler (-0.011 and 0.006) data sets. In the dairy cattle data set, the BL was more accurate (bias = 0.10 and MSE = 1.10) than BayesA (bias = 1.26 and MSE = 2.81), whereas no differences between these two methods were found in the broiler data set. L2-Boosting with a suitable learner was found to be a competitive alternative for genomic selection applications, providing high accuracy and low bias in genomic-assisted evaluations with a relatively short computational time.
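
    A minimal sketch of componentwise L2-Boosting with an OLS weak learner, assuming centered predictor columns; the parameter names and settings (nu, n_iter) are illustrative rather than the authors':

      import numpy as np

      def l2_boost(X, y, n_iter=200, nu=0.1):
          # Each iteration refits the current residual on the single best
          # column and takes a shrunken step of size nu.
          n, d = X.shape
          beta = np.zeros(d)
          intercept = y.mean()
          resid = y - intercept
          col_ss = np.sum(X ** 2, axis=0)           # per-column sum of squares
          for _ in range(n_iter):
              scores = (X.T @ resid) ** 2 / col_ss  # SSE reduction per column
              j = int(np.argmax(scores))
              c = (X[:, j] @ resid) / col_ss[j]     # univariate OLS coefficient
              beta[j] += nu * c
              resid -= nu * c * X[:, j]
          return intercept, beta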

  12. Two-point motional Stark effect diagnostic for Madison Symmetric Torus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, J.; Den Hartog, D. J.; Caspary, K. J.

    2010-10-15

    A high-precision spectral motional Stark effect (MSE) diagnostic provides internal magnetic field measurements for Madison Symmetric Torus (MST) plasmas. Currently, MST uses two spatial views, one on the magnetic axis and one at the mid-minor (off-axis) radius, the latter added recently. A new analysis scheme has been developed to infer both the pitch angle and the magnitude of the magnetic field from MSE spectra. Systematic errors are reduced by using atomic data from the Atomic Data and Analysis Structure (ADAS) in the fit. Reconstructed current density and safety factor profiles are more strongly and globally constrained with the addition of the off-axis measurement than with the on-axis one alone.

  13. Grades: Review of Academic Evaluations in Law Schools.

    ERIC Educational Resources Information Center

    Doniger, Thomas

    1980-01-01

    Lack of independent review process in professional schools and refusal of courts to review errors not resulting from arbitrariness, caprice, or bad faith leave student and societal interests in accurate grading inadequately protected. (Journal availability: University of the Pacific, 3201 Donner Way, Sacramento, CA 95817.) (MSE)

  14. Allometric Models for Predicting Aboveground Biomass and Carbon Stock of Tropical Perennial C 4 Grasses in Hawaii

    DOE PAGES

    Youkhana, Adel H.; Ogoshi, Richard M.; Kiniry, James R.; ...

    2017-05-02

    Biomass is a promising renewable energy option that provides a more environmentally sustainable alternative to fossil resources by reducing the net flux of greenhouse gases to the atmosphere. Yet allometric models that allow the non-destructive prediction of aboveground biomass (AGB) and biomass carbon (C) stock have not yet been developed for tropical perennial C4 grasses currently under consideration as potential bioenergy feedstocks in Hawaii and other subtropical and tropical locations. The objectives of this study were to develop optimal allometric relationships and site-specific models to predict the AGB and biomass C stock of napiergrass, energycane, and sugarcane under cultivation practices for renewable energy, and to validate these site-specific models against independent data sets generated from sites with widely different environments. Several allometric models were developed for each species from data at a low-elevation field on the island of Maui, Hawaii. A simple power model with stalk diameter (D) was best related to AGB and biomass C stock for napiergrass, energycane, and sugarcane (R² = 0.98, 0.96, and 0.97, respectively). The models were then tested against data collected from independent fields across an environmental gradient. For all crops, the models over-predicted AGB in plants with lower stalk D, whereas AGB was under-predicted in plants with higher stalk D. The models using stalk D were better for biomass prediction than the dewlap height (H; height from the base cut to the most recently exposed leaf dewlap) models, which showed weak validation performance. Although the stalk D model performed better, its systematic MSE (mean square error) ranged from 23 to 43% of the total MSE for all crops. A strong relationship existed between model coefficients and rainfall, even though these were irrigated systems, suggesting a simple site-specific coefficient modulator for rainfall to reduce systematic errors in water-limited areas. These allometric equations provide a tool for farmers in the tropics to estimate perennial C4 grass biomass and C stock during decision-making for land management and as an environmental sustainability indicator within a renewable energy system.

  15. TARGETED PRINCIPAL COMPONENT ANALYSIS: A NEW MOTION ARTIFACT CORRECTION APPROACH FOR NEAR-INFRARED SPECTROSCOPY

    PubMed Central

    YÜCEL, MERYEM A.; SELB, JULIETTE; COOPER, ROBERT J.; BOAS, DAVID A.

    2014-01-01

    As near-infrared spectroscopy (NIRS) broadens its application area to different age and disease groups, motion artifacts in the NIRS signal due to subject movement are becoming an important challenge. Motion artifacts generally produce signal fluctuations that are larger than physiological NIRS signals, so it is crucial to correct for them before estimating stimulus-evoked hemodynamic responses. There are various methods for correction, such as principal component analysis (PCA), wavelet-based filtering and spline interpolation. Here, we introduce a new approach to motion artifact correction, targeted principal component analysis (tPCA), which applies a PCA filter only to the segments of data identified as motion artifacts. It is expected that this will overcome the filtering-out of desired signals that plagues standard PCA filtering of entire data sets. We compared the new approach with the most effective motion artifact correction algorithms on a set of data acquired simultaneously with a collodion-fixed probe (low motion artifact content) and a standard Velcro probe (high motion artifact content). Our results show that tPCA gives statistically better results in recovering the hemodynamic response function (HRF) than wavelet-based filtering and spline interpolation for the Velcro probe. It yields a significant reduction in mean squared error (MSE) and a significant enhancement in Pearson's correlation coefficient with respect to the true HRF. The collodion-fixed fiber probe with no motion correction performed better than the Velcro probe corrected for motion artifacts in terms of MSE and Pearson's correlation coefficient. Thus, if the experimental study permits, the use of a collodion-fixed fiber probe may be desirable. If the use of a collodion-fixed probe is not feasible, then we suggest the use of tPCA in the processing of motion-artifact-contaminated data. PMID:25360181

  16. Prediction of dissolved oxygen concentration in hypoxic river systems using support vector machine: a case study of Wen-Rui Tang River, China.

    PubMed

    Ji, Xiaoliang; Shang, Xu; Dahlgren, Randy A; Zhang, Minghua

    2017-07-01

    Accurate quantification of dissolved oxygen (DO) is critically important for managing water resources and controlling pollution. Artificial intelligence (AI) models have been successfully applied for modeling DO content in aquatic ecosystems with limited data. However, the efficacy of these AI models in predicting DO levels in hypoxic river systems with multiple pollution sources and complicated pollutant behaviors is unclear. Given this gap, we developed a promising AI model, the support vector machine (SVM), to predict the DO concentration in a hypoxic river in southeastern China. Four different calibration models, specifically multiple linear regression, back propagation neural network, general regression neural network, and SVM, were established, and their prediction accuracy was systematically investigated and compared. A total of 11 hydro-chemical variables were used as model inputs. These variables were measured bimonthly at eight sampling sites along the rural-suburban-urban portion of the Wen-Rui Tang River from 2004 to 2008. The performances of the established models were assessed through the mean square error (MSE), determination coefficient (R²), and Nash-Sutcliffe (NS) model efficiency. The results indicated that the SVM model was superior to the other models in predicting DO concentration in the Wen-Rui Tang River. For SVM, the MSE, R², and NS values for the testing subset were 0.9416 mg/L, 0.8646, and 0.8763, respectively. Sensitivity analysis showed that ammonium-nitrogen was the most significant input variable of the proposed SVM model. Overall, these results demonstrated that the proposed SVM model can efficiently predict water quality, especially for highly impaired and hypoxic river systems.
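
    The three reported scores are straightforward to compute; a small helper, with names of our own choosing, might look like this:

      import numpy as np

      def do_model_scores(obs, pred):
          # MSE, coefficient of determination (R², via Pearson r) and
          # Nash-Sutcliffe efficiency (NS = 1 - SSE / total variance).
          mse = np.mean((obs - pred) ** 2)
          r2 = np.corrcoef(obs, pred)[0, 1] ** 2
          ns = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
          return mse, r2, ns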

  17. Modeling additive and non-additive effects in a hybrid population using genome-wide genotyping: prediction accuracy implications

    PubMed Central

    Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph

    2016-01-01

    Hybrids are broadly used in plant breeding, and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and to test their prediction accuracy for genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. The Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than those that were pedigree-based, that is, an absence of singularities, lower AIC, higher goodness-of-fit and accuracy and smaller MSE. However, AD and DD variances were estimated with high standard errors. Using the same criteria, progeny gametic-phase-based models performed better in fitting the observations and predicting genetic values. However, DD variance could not be separated from the dominance variance, and null estimates were obtained for AA and AD effects. This study highlighted the advantages of progeny models using genome-wide information. PMID:26328760

  18. Efficacy of extracting indices from large-scale acoustic recordings to monitor biodiversity.

    PubMed

    Buxton, Rachel; McKenna, Megan F; Clapp, Mary; Meyer, Erik; Stabenau, Erik; Angeloni, Lisa M; Crooks, Kevin; Wittemyer, George

    2018-04-20

    Passive acoustic monitoring has the potential to be a powerful approach for assessing biodiversity across large spatial and temporal scales. However, extracting meaningful information from recordings can be prohibitively time consuming. Acoustic indices offer a relatively rapid method for processing acoustic data and are increasingly used to characterize biological communities. We examine the ability of acoustic indices to predict the diversity and abundance of biological sounds within recordings. First, we reviewed the acoustic index literature and found that over 60 indices have been applied to a range of objectives with varying success. We then implemented a subset of the most successful indices on acoustic data collected at 43 sites in temperate terrestrial and tropical marine habitats across the continental U.S., developing a predictive model of the diversity of animal sounds observed in recordings. For terrestrial recordings, random forest models using a suite of acoustic indices as covariates predicted Shannon diversity, richness, and total number of biological sounds with high accuracy (R² ≥ 0.94, mean squared error (MSE) ≤ 170.2). Among the indices assessed, roughness, acoustic activity, and acoustic richness contributed most to the predictive ability of the models. Performance of index models was negatively impacted by insect, weather, and anthropogenic sounds. For marine recordings, random forest models predicted Shannon diversity, richness, and total number of biological sounds with low accuracy (R² ≤ 0.40, MSE ≥ 195), indicating that alternative methods are necessary in marine habitats. Our results suggest that using a combination of relevant indices in a flexible model can accurately predict the diversity of biological sounds in temperate terrestrial acoustic recordings. Thus, acoustic approaches could be an important contribution to biodiversity monitoring in some habitats in the face of accelerating human-caused ecological change. This article is protected by copyright. All rights reserved.
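
    A dependency-light sketch of the modeling step: random-forest regression of sound diversity on acoustic indices with cross-validated predictions, using scikit-learn. The placeholder arrays stand in for the real index matrix and observed counts:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(0)
      indices = rng.random((43, 8))                 # per-site acoustic indices
      richness = rng.poisson(20, 43).astype(float)  # observed sound counts

      rf = RandomForestRegressor(n_estimators=500, random_state=0)
      pred = cross_val_predict(rf, indices, richness, cv=5)
      mse = np.mean((richness - pred) ** 2)
      rf.fit(indices, richness)
      # feature_importances_ indicates which indices drive the prediction
      print(mse, rf.feature_importances_)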

  19. Identification and functional characterization of HIV-associated neurocognitive disorders with large-scale Granger causality analysis on resting-state functional MRI

    NASA Astrophysics Data System (ADS)

    Chockanathan, Udaysankar; DSouza, Adora M.; Abidin, Anas Z.; Schifitto, Giovanni; Wismüller, Axel

    2018-02-01

    Resting-state functional MRI (rs-fMRI), coupled with advanced multivariate time-series analysis methods such as Granger causality, is a promising tool for the development of novel functional connectivity biomarkers of neurologic and psychiatric disease. Recently, large-scale Granger causality (lsGC) has been proposed as an alternative to conventional Granger causality (cGC) that extends the scope of robust Granger causal analyses to high-dimensional systems such as the human brain. In this study, lsGC and cGC were comparatively evaluated on their ability to capture neurologic damage associated with HIV-associated neurocognitive disorders (HAND). Functional brain network models were constructed from rs-fMRI data collected from a cohort of HIV+ and HIV- subjects. Graph-theoretic properties of the resulting networks were then used to train a support vector machine (SVM) model to predict clinically relevant parameters, such as HIV status and neuropsychometric (NP) scores. For the HIV+/- classification task, lsGC, which yielded a peak area under the receiver operating characteristic curve (AUC) of 0.83, significantly outperformed cGC, which yielded a peak AUC of 0.61, at all parameter settings tested. For the NP score regression task, lsGC, with a minimum mean squared error (MSE) of 0.75, significantly outperformed cGC, with a minimum MSE of 0.84 (p < 0.001, one-tailed paired t-test). These results show that, at optimal parameter settings, lsGC is better able to capture functional brain connectivity correlates of HAND than cGC. However, given the substantial variation in the performance of the two methods at different parameter settings, particularly for the regression task, improved parameter selection criteria are necessary and constitute an area for future research.
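
    For orientation, conventional bivariate Granger causality reduces to comparing the residual MSEs of restricted and unrestricted autoregressions. A minimal sketch (this is not the lsGC method, and the log-ratio statistic is one common illustrative choice):

      import numpy as np

      def _ar_mse(target, lags):
          A = np.column_stack([np.ones(len(target))] + lags)
          beta, *_ = np.linalg.lstsq(A, target, rcond=None)
          return np.mean((target - A @ beta) ** 2)

      def granger_y_to_x(x, y, p=2):
          # y "Granger-causes" x if adding lagged y to an AR(p) model
          # of x lowers the residual MSE; return the log variance ratio.
          tgt = x[p:]
          own = [x[p - k:len(x) - k] for k in range(1, p + 1)]
          cross = [y[p - k:len(y) - k] for k in range(1, p + 1)]
          return np.log(_ar_mse(tgt, own) / _ar_mse(tgt, own + cross))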

  20. Science-based requirements and operations development for the Maunakea Spectroscopic Explorer

    NASA Astrophysics Data System (ADS)

    McConnachie, Alan W.; Flagey, Nicolas; Murowinski, Rick; Szeto, Kei; Salmon, Derrick; Withington, Kanoa; Mignot, Shan

    2016-07-01

    MSE is a wide-field telescope (1.5 square degree field of view) with an aperture of 11.25 m. It is dedicated to multi-object spectroscopy at several different spectral resolutions in the range R ≈ 2500-40,000 over a broad wavelength range (0.36-1.8 μm). MSE enables transformational science in areas as diverse as exoplanetary host characterization; stellar monitoring campaigns; tomographic mapping of the interstellar and intergalactic media; the in-situ chemical tagging of the distant Galaxy; connecting galaxies to the large-scale structure of the Universe; measuring the mass functions of cold dark matter sub-halos in galaxy- and cluster-scale hosts; and reverberation mapping of supermassive black holes in quasars. Here, we summarize the Observatory and describe the development of the top-level science requirements and operational concepts. Specifically, we describe the definition of the Science Requirements as the set of capabilities that allow certain high-impact science programs to be conducted. We cross-reference these science cases to the science requirements to illustrate the traceability of this approach. We further discuss the operations model for MSE and describe the development of the Operations Concept Document, one of the foundational documents for the project. We also discuss the next stage in the science-based development of MSE, specifically the development of the initial Legacy Survey that will occupy a majority of time on the telescope over the first few years of operation.

  1. Prediction of physical and chemical body compositions of purebred and crossbred Nellore cattle using the composition of a rib section.

    PubMed

    Marcondes, M I; Tedeschi, L O; Valadares Filho, S C; Chizzotti, M L

    2012-04-01

    The goal of this research was to develop empirical equations to predict chemical and physical compositions of the carcass and the body using the composition of the 9th- to 11th-rib section (rib(9-11)) and other measurements. A database (n = 246) from 6 studies was developed and comprised 37 bulls (BU), 115 steers (STR), and 94 heifers (HF), of which 132 were Nellore (NEL), 76 were NEL × Angus crossbreds (NA), and 38 were NEL × Simmental crossbreds (NS). The right half carcass and the rib(9-11) from the left half carcass were analyzed for ether extract (EE), CP, and water. The remaining components were chemically analyzed to determine the composition of the body. A stepwise procedure was used to determine the variable inclusion in the regression models. The variables included were EE in the rib(9-11) (EER; %), CP in the rib(9-11) (CPR; %), water in the rib(9-11) (WR; %), visceral fat (VF; %; KPH and mesenteric fats), organs plus viscera (OV; %), carcass dressing percentage (CD; %), cold carcass weight (kg), and empty BW (EBW; kg). No sex or breed effects were found on EE and CP compositions of the carcass (C(EE) and C(CP), respectively; %); the equations were as follows: C(EE) = 4.31 + 0.31 × EER + 1.37 × VF [n = 241; R(2) = 0.83; mean square error (MSE) = 4.53] and C(CP) = 17.92 + 0.60 × CPR - 0.17 × CD (n = 238; R(2) = 0.50; MSE = 1.58). Breed affected water content in the carcass (C(W), %); the equations were as follows: C(W) = 48.74 + 0.28 × WR - 0.017 × EBW for NEL; C(W) = 46.69 + 0.32 × WR - 0.017 × EBW for NA; and C(W) = 38.06 + 0.48 × WR - 0.017 × EBW for NS (n = 243; R(2) = 0.67; MSE = 5.17). A sex effect was found on body chemical EE composition (BW(EE)); the equations were as follows: BW(EE) = 2.75 + 0.33 × EER + 1.80 × VF for BU; BW(EE) = 1.84 + 0.33 × EER + 1.91 × VF for STR; and BW(EE) = 4.77 + 0.33 × EER + 1.28 × VF for HF (n = 243; R(2) = 0.89; MSE = 3.88). No sex or breed effects were found on CP composition in the body (BW(CP)); the equation was as follows: BW(CP) = 14.38 + 0.24 × CPR (n = 240; R(2) = 0.59; MSE = 1.06). A sex effect was found for body water content (BW(W)); the equations were as follows: BW(W) = 38.31 + 0.33 × WR - 1.09 × VF + 0.50 × OV for BU; BW(W) = 45.67 + 0.25 × WR - 1.89 × VF + 0.50 × OV for STR; and BW(W) = 31.61 + 0.47 × WR - 1.06 × VF + 0.50 × OV for HF (n = 241; R(2) = 0.81; MSE = 3.84). The physical carcass composition indicated a breed effect on all components and a sex effect for fat in the carcass. We conclude that body and carcass compositions can be estimated with rib(9-11) for purebred and crossbred NEL animals, but specific equations have to be developed for different groups of animals.
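
    The pooled equations are directly usable as reported; for instance, the two equations without sex or breed effects translate to the following (a sketch with illustrative argument names):

      def carcass_ee(eer, vf):
          # C(EE) = 4.31 + 0.31 x EER + 1.37 x VF  (R² = 0.83, MSE = 4.53)
          return 4.31 + 0.31 * eer + 1.37 * vf

      def carcass_cp(cpr, cd):
          # C(CP) = 17.92 + 0.60 x CPR - 0.17 x CD  (R² = 0.50, MSE = 1.58)
          return 17.92 + 0.60 * cpr - 0.17 * cd

      # e.g. a carcass with 8% rib EE and 2.5% visceral fat:
      print(carcass_ee(eer=8.0, vf=2.5))   # -> about 10.2% carcass EE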

  2. Optimal waveforms design for ultra-wideband impulse radio sensors.

    PubMed

    Li, Bin; Zhou, Zheng; Zou, Weixia; Li, Dejian; Zhao, Chong

    2010-01-01

    Ultra-wideband impulse radio (UWB-IR) sensors should comply entirely with the regulatory spectral limits for elegant coexistence. Under this premise, it is desirable for UWB pulses to improve frequency utilization to guarantee transmission reliability. Meanwhile, orthogonal waveform division multiple-access (WDMA) is significant for mitigating mutual interference in UWB sensor networks. Motivated by these considerations, we suggest in this paper a low-complexity pulse-forming technique and investigate its efficient implementation on a DSP. The UWB pulse is derived preliminarily with the objective of minimizing the mean square error (MSE) between the designed power spectral density (PSD) and the emission mask. Subsequently, this pulse is iteratively modified until its PSD completely conforms to the spectral constraints. The orthogonality restriction is then analyzed and different algorithms are presented. Simulation demonstrates that our technique can produce UWB waveforms with frequency utilization far surpassing that of other existing signals under arbitrary spectral mask conditions. Compared to other orthogonality design schemes, the designed pulses can maintain mutual orthogonality without any penalty on frequency utilization and hence are much superior in a WDMA network, especially with synchronization deviations.
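
    A sketch of the scoring used in the first design step: the MSE between a candidate pulse's normalized PSD (in dB) and an emission mask. Here mask_db is a placeholder callable (an FCC-style mask would be supplied), and the iterative modification loop is omitted:

      import numpy as np

      def psd_mask_mse(pulse, mask_db, fs):
          # Compare the pulse PSD against the mask level at each frequency.
          f = np.fft.rfftfreq(len(pulse), 1.0 / fs)
          psd = 20 * np.log10(np.abs(np.fft.rfft(pulse)) + 1e-12)
          psd -= psd.max()                     # normalize peak to 0 dB
          return np.mean((psd - mask_db(f)) ** 2)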

  3. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on the mean square error (MSE) between estimated and target visual parameters. This function is minimized to estimate the de-mixing vector/filters that separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and the output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to an existing GMM-based model, and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.

  4. Speckle noise reduction technique for Lidar echo signal based on self-adaptive pulse-matching independent component analysis

    NASA Astrophysics Data System (ADS)

    Xu, Fan; Wang, Jiaxing; Zhu, Daiyin; Tu, Qi

    2018-04-01

    Speckle noise has always been a particularly tricky problem in improving the ranging capability and accuracy of Lidar systems, especially in harsh environments. Currently, effective speckle de-noising techniques are extremely scarce and should be further developed. In this study, a speckle noise reduction technique is proposed based on independent component analysis (ICA). Since the shape of the laser pulse itself normally changes little, the authors employed the laser source as a reference pulse and executed the ICA decomposition to find the optimal matching position. In order to achieve self-adaptability of the algorithm, the local mean square error (MSE) was defined as the criterion for assessing the iteration results. The experimental results demonstrated that the self-adaptive pulse-matching ICA (PM-ICA) method could effectively decrease the speckle noise and recover the useful Lidar echo signal component with high quality. In particular, the proposed method achieves a 4 dB greater improvement in signal-to-noise ratio (SNR) than a traditional homomorphic wavelet method.

  5. Small area estimation for estimating the number of infant mortality in West Java, Indonesia

    NASA Astrophysics Data System (ADS)

    Anggreyani, Arie; Indahwati, Kurnia, Anang

    2016-02-01

    The Demographic and Health Survey Indonesia (DHSI) is a nationally designed survey providing information on birth rates, mortality rates, family planning and health. DHSI was conducted by BPS in cooperation with the National Population and Family Planning Board (BKKBN), the Indonesian Ministry of Health (KEMENKES) and USAID. Based on the publication of DHSI 2012, the infant mortality rate for the five-year period before the survey was 32 per 1000 live births. In this paper, small area estimation (SAE) is used to estimate the number of infant deaths in the districts of West Java. SAE is a special case of the generalized linear mixed model (GLMM). Here, the incidence of infant mortality is modeled with a Poisson distribution, which carries an equidispersion assumption. The methods used to handle overdispersion are the negative binomial and quasi-likelihood models. Based on the results of the analysis, the quasi-likelihood model is the best model for overcoming the overdispersion problem. The small area estimation uses the basic area-level model. A mean square error (MSE) estimate based on a resampling method is used to measure the accuracy of the small area estimates.

  6. Comparison of volatility function technique for risk-neutral densities estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-08-01

    The volatility function technique, using an interpolation approach, plays an important role in extracting the risk-neutral density (RND) from options. The aim of this study is to compare the performance of two interpolation approaches, namely a smoothing spline and a fourth-order polynomial, in extracting the RND. The implied volatilities of options with respect to strike prices/deltas are interpolated to obtain a well-behaved density. The statistical analysis and forecast accuracy are tested using the moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that a fourth-order polynomial is more appropriate than a smoothing spline for estimating the RND, as it gives the lowest mean square error (MSE). The results can help market participants capture market expectations of the future developments of the underlying asset.
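
    A hedged sketch of the fourth-order-polynomial route: fit the smile, price calls from the fitted volatilities, then take the Breeden-Litzenberger second derivative to obtain the RND. Black-Scholes pricing, a flat rate, and a uniform strike grid are simplifying assumptions of ours:

      import numpy as np
      from scipy.stats import norm

      def bs_call(S, K, T, r, sigma):
          d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
          d2 = d1 - sigma * np.sqrt(T)
          return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

      def rnd_from_smile(strikes, ivs, S, T, r, grid):
          coeffs = np.polyfit(strikes, ivs, 4)        # 4th-order smile fit
          calls = bs_call(S, grid, T, r, np.polyval(coeffs, grid))
          dK = grid[1] - grid[0]                      # assumes uniform grid
          # RND = exp(rT) * d²C/dK² (Breeden-Litzenberger)
          return np.exp(r * T) * np.gradient(np.gradient(calls, dK), dK)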

  7. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.

  8. An Efficient Method for Image and Audio Steganography using Least Significant Bit (LSB) Substitution

    NASA Astrophysics Data System (ADS)

    Chadha, Ankit; Satam, Neha; Sood, Rakshak; Bade, Dattatray

    2013-09-01

    In order to improve data hiding in multimedia formats such as image and audio and to make the hidden message imperceptible, a novel method for steganography is introduced in this paper. It is based on least significant bit (LSB) manipulation and the inclusion of redundant noise as a secret key in the message. This method is applied to data hiding in images. For data hiding in audio, the discrete cosine transform (DCT) and the discrete wavelet transform (DWT) are both used. The method proves to be time-efficient and effective. The algorithm is also tested for various numbers of substituted bits; for each value, the mean square error (MSE) and peak signal-to-noise ratio (PSNR) are calculated and plotted. Experimental results show that the stego-image is visually indistinguishable from the original cover image when n ≤ 4, because of the better PSNR achieved by this technique. The final result obtained after the steganography process does not reveal the presence of any hidden message, thus satisfying the criterion of an imperceptible message.
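
    A minimal sketch of n-bit LSB substitution and the PSNR computation (the record's redundant-noise secret key and the audio DCT/DWT paths are omitted):

      import numpy as np

      def embed_lsb(cover, message, n=2):
          # Replace the n least-significant bits of each cover pixel with
          # message bits; n <= 4 keeps the stego image visually identical.
          keep = 0xFF ^ ((1 << n) - 1)          # mask for the retained bits
          return ((cover & keep) | (message & ((1 << n) - 1))).astype(np.uint8)

      def psnr(cover, stego):
          mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
          return np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

      cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
      message = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
      print(psnr(cover, embed_lsb(cover, message, n=2)))   # roughly 44 dB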

  9. Optimal Waveforms Design for Ultra-Wideband Impulse Radio Sensors

    PubMed Central

    Li, Bin; Zhou, Zheng; Zou, Weixia; Li, Dejian; Zhao, Chong

    2010-01-01

    Abstract identical to record 2 above. PMID:22163511

  10. Experimental and artificial neural network based prediction of performance and emission characteristics of DI diesel engine using Calophyllum inophyllum methyl ester at different nozzle opening pressure

    NASA Astrophysics Data System (ADS)

    Vairamuthu, G.; Thangagiri, B.; Sundarapandian, S.

    2018-01-01

    The present work investigates the effect of varying the nozzle opening pressure (NOP) from 220 bar to 250 bar on the performance, emission and combustion characteristics of Calophyllum inophyllum methyl ester (CIME) in a constant-speed, direct injection (DI) diesel engine using an artificial neural network (ANN) approach. An ANN model has been developed to predict specific fuel consumption (SFC), brake thermal efficiency (BTE), exhaust gas temperature (EGT), unburnt hydrocarbons (UBHC), CO, CO2, NOx and smoke density using load, blend (B0 and B100) and NOP as input data. A standard back-propagation algorithm (BPA) is used in this model. A multilayer perceptron (MLP) network is used for nonlinear mapping between the input and output parameters. The ANN model can predict the performance of the diesel engine and the exhaust emissions with a correlation coefficient (R²) in the range of 0.98-1.0. Mean relative error (MRE) values are in the range of 0.46-5.8%, while the mean square errors (MSE) are found to be very low. It is evident that ANN models are reliable tools for the prediction of DI diesel engine performance and emissions. The test results show that the optimum NOP is 250 bar with B100.

  11. Refractive error at birth and its relation to gestational age.

    PubMed

    Varughese, Sara; Varghese, Raji Mathew; Gupta, Nidhi; Ojha, Rishikant; Sreenivas, V; Puliyel, Jacob M

    2005-06-01

    The refractive status of premature infants is not well studied. This study was done to find the norms of refractive error in newborns at different gestational ages. One thousand two hundred three (1203) eyes were examined for refractive error by streak retinoscopy within the first week of life between June 2001 and September 2002. Tropicamide eye drops (0.8%) with phenylephrine 0.5% were used to achieve cycloplegia and mydriasis. The refractive error was measured in the vertical and horizontal meridia in both eyes and was recorded to the nearest dioptre (D). The neonates were grouped into five gestational age groups ranging from 24 weeks to 43 weeks. Extremely preterm babies were found to be myopic, with a mean MSE (mean spherical equivalent) of -4.86 D. The MSE was found to progressively decrease (become less myopic) with increasing gestation and was +2.4 D at term. Astigmatism of more than 1 D spherical equivalent was seen in 67.8% of the eyes examined. Among newborns with > 1 D of astigmatism, the astigmatism was with-the-rule (vertical meridian having greater refractive power than horizontal) in 85% and against-the-rule in 15%. Anisometropia of more than 1 D spherical equivalent was seen in 31% of babies. Term babies are known to be hypermetropic, and preterm babies with retinopathy of prematurity (ROP) are known to have myopia. This study provides data on the mean spherical equivalent, the degree of astigmatism, and the incidence of anisometropia at different gestational ages. This is the largest study in the world literature looking at refractive errors at birth against gestational age. It should help establish the norms of refractive error in preterm babies.

  12. Utilisation d'images aeroportees a tres haute resolution spatiale pour l'estimation de la vigueur des peuplements forestiers du nord-ouest du Nouveau-Brunswick

    NASA Astrophysics Data System (ADS)

    Louis, Ognel Pierre

    The aim of this study is to develop a tool for estimating the level of risk of vigor loss in forest stands of the Gounamitz region in northwestern New Brunswick, using forest inventory data and remote sensing data. To this end, a 100 m x 100 m marteloscope and 20 sample plots were delimited. Within these, the level of risk of vigor loss was determined for all trees with a DBH of 9 cm or more. To characterize the risk of vigor loss, the spatial positions of the trees were recorded with a GPS, taking stem defects into account. To carry out this work, vegetation and texture indices and the spectral bands of the airborne image were extracted and treated as independent variables. The level of risk of vigor loss obtained per tree species from the forest inventories was treated as the dependent variable. To obtain the area of the forest stands of the study region, a supervised classification of the images using the maximum-likelihood algorithm was performed. The level of risk of vigor loss per tree type was then estimated with neural networks, using a multilayer perceptron: a network comprising 11 neurons in the input layer (corresponding to the independent variables), 35 neurons in the hidden layer and 4 neurons in the output layer. Prediction with the neural networks produces a confusion matrix that yields quantitative estimates, notably an overall classification accuracy of 91.7% for predicting the risk of vigor loss in the softwood stand and 89.7% for the hardwood stand. Evaluation of the performance of the neural networks gives an overall MSE (mean square error) of 0.04 and an overall RMSE (root mean square error) of 0.20 for the hardwood stand; for the softwood stand, an overall MSE of 0.05 and an overall RMSE of 0.22 were obtained. To validate the results, the predicted level of risk of vigor loss was compared with the reference risk of vigor loss. The results give a coefficient of determination of 0.98 for the hardwood stand and 0.93 for the softwood stand.

  13. Adaptive Filtering in the Wavelet Transform Domain Via Genetic Algorithms

    DTIC Science & Technology

    2004-08-01

    …inverse transform process. 2. BACKGROUND: The image processing research conducted at the AFRL/IFTA Reconfigurable Computing Laboratory has been… coefficients from the wavelet domain back into the original signal domain. In other words, the inverse transform produces the original signal x(t) from the… coefficients for an inverse wavelet transform, such that the MSE of images reconstructed by this inverse transform is significantly less than the mean squared…

  14. Measurement of type-I edge localized mode pulse propagation in scrape-off layer using optical system of motional Stark effect diagnostics in JT-60U

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, T.; Oyama, N.; Asakura, N.

    2010-04-15

    Propagation of plasma ejected by type-I edge localized modes (ELMs) has been measured in the scrape-off layer (SOL) of the JT-60U tokamak, using the optical system of the motional Stark effect (MSE) diagnostic as a beam emission spectroscopy (BES) diagnostic through a newly developed technique. This MSE/BES system measures Dα emission from the heating neutral beam excited by collisions with the ejected plasma, as well as background light (e.g., bremsstrahlung). While the spatio-temporal change in the beam emission gives information on the propagation of the ejected plasma, the background light that is observed simultaneously in all spatial channels veils this information. In order to separate the beam emission and the background light, a two-wavelength detector was newly introduced into the MSE/BES system. The detector observes the same spatial point simultaneously in two distinct wavelengths using two photomultiplier tubes behind two interference filters. One of the filters is adjusted to the central wavelength of the beam emission for the MSE diagnostic, and the other is outside the beam emission spectrum. Eliminating the background light, the temporal change in the net beam emission in the SOL has been evaluated. Comparing the conditionally averaged beam emission with respect to 594 ELMs in a discharge at five spatial channels (0.02-0.3 m outside the main plasma near the equatorial plane), the radial velocity of the ELM pulse propagation in the SOL is evaluated to be 0.8-1.8 km/s (approximately 1.4 km/s for least-mean-squared fitting).

  15. Resolution Enhancement in PET Reconstruction Using Collimation

    NASA Astrophysics Data System (ADS)

    Metzler, Scott D.; Matej, Samuel; Karp, Joel S.

    2013-02-01

    Collimation can improve both the spatial resolution and sampling properties compared to the same scanner without collimation. Spatial resolution improves because each original crystal can be conceptually split into two (i.e., doubling the number of in-plane crystals) by masking half the crystal with a high-density attenuator (e.g., tungsten); this reduces coincidence efficiency by 4× since both crystals comprising the line of response (LOR) are masked, but yields 4× as many resolution-enhanced (RE) LORs. All the new RE LORs can be measured by scanning with the collimator in different configurations. In this simulation study, the collimator was assumed to be ideal, neither allowing gamma penetration nor truncating the field of view. Comparisons were made in 2D between an uncollimated small-animal system with 2-mm crystals that were assumed to be perfectly absorbing and the same system with collimation that narrowed the effective crystal size to 1 mm. Digital phantoms included a hot-rod and a single-hot-spot phantom, both in a uniform background with an activity ratio of 4:1. In addition to the collimated and uncollimated configurations, angular and spatial wobbling acquisitions of the 2-mm case were also simulated. Similarly, configurations with different combinations of the RE LORs were considered, including (i) all LORs, (ii) only those parallel to the 2-mm LORs, and (iii) only cross pairs that are not parallel to the 2-mm LORs. Lastly, quantitative studies were conducted for collimated and uncollimated data using the contrast recovery coefficient and mean squared error (MSE) as metrics. The reconstructions show that for most noise levels there is a substantial improvement in image quality (i.e., visual quality, resolution, and a reduction in artifacts) by using collimation, even with 4× fewer counts or, in some cases, when comparing with the noiseless uncollimated reconstruction. By comparing various sampling configurations, the results show that it is the matched combination of both the improved spatial resolution of each LOR and the increase in the number of LORs that yields improved reconstructions. Further, the quantitative studies show that for low-count scans, the collimated data give better MSE for small lesions and the uncollimated data give better MSE for larger lesions; for high-count studies, the collimated data yield better quantitative values for the entire range of lesion sizes evaluated.

  16. TU-CD-BRA-01: A Novel 3D Registration Method for Multiparametric Radiological Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhbardeh, A; Parekth, VS; Jacobs, MA

    2015-06-15

    Purpose: Multiparametric and multimodality radiological imaging methods, such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), provide multiple types of tissue contrast and anatomical information for clinical diagnosis. However, these radiological modalities are acquired using very different technical parameters, e.g., field of view (FOV), matrix size, and scan planes, which can lead to challenges in registering the different data sets. Therefore, we developed a hybrid registration method based on 3D wavelet transformation and 3D interpolation that performs 3D resampling and rotation of the target radiological images without loss of information. Methods: T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), dynamic contrast-enhanced (DCE) MRI and PET/CT were used in the registration algorithm, drawing on breast and prostate data from 3T MRI and multimodality (PET/CT) cases. The hybrid registration scheme consists of several steps to reslice and match each modality using a combination of 3D wavelets, interpolations, and affine registration steps. First, orthogonal reslicing is performed to equalize FOV, matrix sizes and the number of slices using wavelet transformation. Second, angular resampling of the target data is performed to match the reference data. Finally, using the optimized angles from resampling, 3D registration with a similarity transformation (scaling and translation) between the reference and resliced target volumes is performed. After registration, the mean square error (MSE) and Dice similarity (DS) between the reference and registered target volumes were calculated. Results: The 3D registration method registered synthetic and clinical data with significant improvement (p < 0.05) of the overlap between anatomical structures. After transforming and deforming the synthetic data, the MSE and Dice similarity were 0.12 and 0.99. The average improvement of the MSE was 62% in breast (0.27 to 0.10) and 63% in prostate (0.13 to 0.04; p < 0.05). The Dice similarity improved by 8% in breast (0.91 to 0.99) and by 89% in prostate (0.01 to 0.90; p < 0.05). Conclusion: Our 3D wavelet hybrid registration approach registered diverse breast and prostate data from different radiological images (MR/PET/CT) with high accuracy.

  17. A Novel Hybrid Data-Driven Model for Daily Land Surface Temperature Forecasting Using Long Short-Term Memory Neural Network Based on Ensemble Empirical Mode Decomposition

    PubMed Central

    Zhang, Xike; Zhang, Qiuwen; Zhang, Gui; Nie, Zhiping; Gui, Zifan; Que, Huafei

    2018-01-01

    Daily land surface temperature (LST) forecasting is of great significance for application in climate-related, agricultural, eco-environmental, or industrial studies. Hybrid data-driven prediction models using Ensemble Empirical Mode Decomposition (EEMD) coupled with Machine Learning (ML) algorithms are useful for achieving these purposes because they can reduce the difficulty of modeling, require less historical data, are easy to develop, and are less complex than physical models. In this article, a computationally simple, less data-intensive, fast and efficient novel hybrid data-driven model called the EEMD Long Short-Term Memory (LSTM) neural network, namely EEMD-LSTM, is proposed to reduce the difficulty of modeling and to improve prediction accuracy. The daily LST data series from the Mapoling and Zhijiang stations in the Dongting Lake basin, central south China, from 1 January 2014 to 31 December 2016 is used as a case study. The EEMD is firstly employed to decompose the original daily LST data series into many Intrinsic Mode Functions (IMFs) and a single residue item. Then, the Partial Autocorrelation Function (PACF) is used to obtain the number of input data sample points for the LSTM models. Next, the LSTM models are constructed to predict the decompositions. All the predicted results of the decompositions are aggregated as the final daily LST. Finally, the prediction performance of the hybrid EEMD-LSTM model is assessed in terms of the Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Pearson Correlation Coefficient (CC) and Nash-Sutcliffe Coefficient of Efficiency (NSCE). To validate the hybrid data-driven model, the hybrid EEMD-LSTM model is compared with the Recurrent Neural Network (RNN), LSTM and Empirical Mode Decomposition (EMD) coupled with RNN, EMD-LSTM and EEMD-RNN models, and the comparison results demonstrate that the hybrid EEMD-LSTM model performs better than the other five models. The scatterplots of the predicted results of the six models versus the original daily LST data series show that the hybrid EEMD-LSTM model is superior to the other five models. It is concluded that the proposed hybrid EEMD-LSTM model in this study is a suitable tool for temperature forecasting. PMID:29883381
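
    A hedged sketch of the decompose-predict-aggregate pipeline, using the PyEMD package (pip install EMD-signal) for EEMD and substituting a small scikit-learn MLP for the LSTM so the example stays dependency-light; the fixed lag count and split fraction are illustrative (the paper instead selects lags per component via the PACF):

      import numpy as np
      from PyEMD import EEMD
      from sklearn.neural_network import MLPRegressor

      def eemd_forecast(series, n_lags=7, train_frac=0.8):
          # Decompose, fit one model per component on lagged samples,
          # then sum the component predictions over the held-out span.
          comps = EEMD().eemd(series)        # IMFs (+ trend-like last one)
          agg = None
          for c in comps:
              X = np.column_stack([c[k:len(c) - n_lags + k]
                                   for k in range(n_lags)])
              y = c[n_lags:]
              cut = int(train_frac * len(y))
              model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                                   random_state=0).fit(X[:cut], y[:cut])
              pred = model.predict(X[cut:])
              agg = pred if agg is None else agg + pred
          return agg   # aggregated one-step-ahead predictions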

  18. A Novel Hybrid Data-Driven Model for Daily Land Surface Temperature Forecasting Using Long Short-Term Memory Neural Network Based on Ensemble Empirical Mode Decomposition.

    PubMed

    Zhang, Xike; Zhang, Qiuwen; Zhang, Gui; Nie, Zhiping; Gui, Zifan; Que, Huafei

    2018-05-21

    Abstract identical to record 17 above.

  19. Do Peripheral Refraction and Aberration Profiles Vary with the Type of Myopia? - An Illustration Using a Ray-Tracing Approach

    PubMed Central

    Bakaraju, Ravi C.; Ehrmann, Klaus; Papas, Eric B.; Ho, Arthur

    2010-01-01

    Purpose Myopia is considered to be the most common refractive error occurring in children and young adults around the world. Motivated to elucidate how the process of emmetropization is disrupted, potentially causing myopia and its progression, researchers have shown great interest in peripheral refraction. This study assessed the effect of the myopia type, either refractive or axial, on peripheral refraction and aberration profiles. Methods Using customized schematic eye models for myopia in a ray tracing algorithm, peripheral aberrations, including the refractive error, were calculated as a function of myopia type. Results In all the selected models, hyperopic shifts in the mean spherical equivalent (MSE) component were found, whose magnitude seemed to depend largely on the field angle. The MSE profiles showed larger hyperopic shifts for the axial type of myopic models than the refractive ones and were evident in -4 and -6 D prescriptions. Additionally, greater levels of astigmatic component (J180) were also seen in axial-length-dependent models, while refractive models showed higher levels of spherical aberration and coma. Conclusion This study has indicated that myopic eyes with primarily an axial component may have a greater risk of progression than their refractive counterparts albeit with the same degree of refractive error. This prediction emerges from the presented theoretical ray tracing model and, therefore, requires clinical confirmation.
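
    The MSE and J180 quantities above are components of the standard Fourier power-vector decomposition of a sphero-cylindrical refraction; the sketch below implements those textbook formulas (not code from the paper), with illustrative example values.

      import numpy as np

      def power_vector(sphere, cyl, axis_deg):
          # Standard power-vector components of a sphere/cylinder/axis refraction.
          a = np.deg2rad(axis_deg)
          M = sphere + cyl / 2.0                 # mean spherical equivalent (MSE)
          J180 = -(cyl / 2.0) * np.cos(2 * a)    # 0/180-degree astigmatic component
          J45 = -(cyl / 2.0) * np.sin(2 * a)     # oblique astigmatic component
          return M, J180, J45

      # Example: a -4.00 DS / -1.00 DC x 180 refraction
      print(power_vector(-4.0, -1.0, 180.0))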

  20. Can Emotional and Behavioral Dysregulation in Youth Be Decoded from Functional Neuroimaging?

    PubMed

    Portugal, Liana C L; Rosa, Maria João; Rao, Anil; Bebko, Genna; Bertocci, Michele A; Hinze, Amanda K; Bonar, Lisa; Almeida, Jorge R C; Perlman, Susan B; Versace, Amelia; Schirda, Claudiu; Travis, Michael; Gill, Mary Kay; Demeter, Christine; Diwadkar, Vaibhav A; Ciuffetelli, Gary; Rodriguez, Eric; Forbes, Erika E; Sunshine, Jeffrey L; Holland, Scott K; Kowatch, Robert A; Birmaher, Boris; Axelson, David; Horwitz, Sarah M; Arnold, Eugene L; Fristad, Mary A; Youngstrom, Eric A; Findling, Robert L; Pereira, Mirtes; Oliveira, Leticia; Phillips, Mary L; Mourao-Miranda, Janaina

    2016-01-01

    High comorbidity among pediatric disorders characterized by behavioral and emotional dysregulation poses problems for diagnosis and treatment, and suggests that these disorders may be better conceptualized as dimensions of abnormal behaviors. Furthermore, identifying neuroimaging biomarkers related to dimensional measures of behavior may provide targets to guide individualized treatment. We aimed to use functional neuroimaging and pattern regression techniques to determine whether patterns of brain activity could accurately decode individual-level severity on a dimensional scale measuring behavioral and emotional dysregulation at two different time points. A sample of fifty-seven youth (mean age: 14.5 years; 32 males) was selected from a multi-site study of youth with parent-reported behavioral and emotional dysregulation. Participants performed a block-design reward paradigm during functional Magnetic Resonance Imaging (fMRI). Pattern regression analyses consisted of Relevance Vector Regression (RVR) and two cross-validation strategies implemented in the Pattern Recognition for Neuroimaging toolbox (PRoNTo). Medication was treated as a binary confounding variable. Decoded and actual clinical scores were compared using Pearson's correlation coefficient (r) and mean squared error (MSE) to evaluate the models. A permutation test was applied to estimate significance levels. Relevance Vector Regression identified patterns of neural activity associated with symptoms of behavioral and emotional dysregulation at the initial study screen and close to the fMRI scanning session. The correlation and the mean squared error between actual and decoded symptoms were significant at the initial study screen and close to the fMRI scanning session. However, after controlling for potential medication effects, results remained significant only for decoding symptoms at the initial study screen. Neural regions with the highest contribution to the pattern regression model included cerebellum, sensory-motor and fronto-limbic areas. The combination of pattern regression models and neuroimaging can help to determine the severity of behavioral and emotional dysregulation in youth at different time points.
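
    A minimal sketch of the decoding pipeline described above, evaluated with Pearson's r and MSE under cross-validation. Relevance Vector Regression is not available in scikit-learn, so kernel ridge regression stands in for it here, and the data are synthetic stand-ins for fMRI features and symptom scores.

      import numpy as np
      from scipy.stats import pearsonr
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.metrics import mean_squared_error
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(0)
      X = rng.normal(size=(57, 500))                               # 57 subjects x voxel features (synthetic)
      y = X[:, :10].sum(axis=1) + rng.normal(scale=0.5, size=57)   # synthetic severity scores

      pred = np.zeros_like(y)
      for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
          model = KernelRidge(kernel="linear", alpha=1.0).fit(X[train], y[train])
          pred[test] = model.predict(X[test])

      r, _ = pearsonr(y, pred)
      print(f"r = {r:.2f}, MSE = {mean_squared_error(y, pred):.2f}")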

  1. Multiple regression and Artificial Neural Network for long-term rainfall forecasting using large scale climate modes

    NASA Astrophysics Data System (ADS)

    Mekanik, F.; Imteaz, M. A.; Gato-Trinidad, S.; Elmahdi, A.

    2013-10-01

    In this study, the application of Artificial Neural Networks (ANN) and Multiple regression analysis (MR) to forecast long-term seasonal spring rainfall in Victoria, Australia was investigated using lagged El Nino Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD) as potential predictors. The use of dual (combined lagged ENSO-IOD) input sets for calibrating and validating ANN and MR Models is proposed to investigate the simultaneous effect of past values of these two major climate modes on long-term spring rainfall prediction. The MR models that did not violate the limits of statistical significance and multicollinearity were selected for future spring rainfall forecast. The ANN was developed in the form of a multilayer perceptron using the Levenberg-Marquardt algorithm. Both MR and ANN modelling were assessed statistically using mean square error (MSE), mean absolute error (MAE), Pearson correlation (r) and Willmott index of agreement (d). The developed MR and ANN models were tested on out-of-sample test sets; the MR models showed very poor generalisation ability for east Victoria with correlation coefficients of -0.99 to -0.90 compared to ANN with correlation coefficients of 0.42-0.93; ANN models also showed better generalisation ability for central and west Victoria with correlation coefficients of 0.68-0.85 and 0.58-0.97 respectively. The ability of multiple regression models to forecast out-of-sample sets is comparable with that of ANN for Daylesford in central Victoria and Kaniva in west Victoria (r = 0.92 and 0.67 respectively). The errors of the testing sets for ANN models are generally lower compared to multiple regression models. The statistical analysis suggests the potential of ANN over MR models for rainfall forecasting using large scale climate modes.
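
    Of the four scores used above, the Willmott index of agreement is the least commonly implemented; the sketch below is a direct transcription of its standard definition (the function name is ours).

      import numpy as np

      def willmott_d(obs, pred):
          # Willmott index of agreement: 1 = perfect agreement, 0 = none.
          obs, pred = np.asarray(obs, float), np.asarray(pred, float)
          num = np.sum((pred - obs) ** 2)
          den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
          return 1.0 - num / den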

  2. New consensus multivariate models based on PLS and ANN studies of sigma-1 receptor antagonists.

    PubMed

    Oliveira, Aline A; Lipinski, Célio F; Pereira, Estevão B; Honorio, Kathia M; Oliveira, Patrícia R; Weber, Karen C; Romero, Roseli A F; de Sousa, Alexsandro G; da Silva, Albérico B F

    2017-10-02

    The treatment of neuropathic pain is very complex and there are few drugs approved for this purpose. Among the studied compounds in the literature, sigma-1 receptor antagonists have been shown to be promising. In order to develop QSAR studies of 1-arylpyrazole derivatives, multivariate analyses have been performed in this work using partial least squares (PLS) and artificial neural network (ANN) methods. A PLS model has been obtained and validated with 45 compounds in the training set and 13 compounds in the test set (r²(training) = 0.761, q² = 0.656, r²(test) = 0.746, MSE(test) = 0.132 and MAE(test) = 0.258). Additionally, multi-layer perceptron ANNs (MLP-ANNs) were employed to propose non-linear models trained by gradient descent with momentum backpropagation. Based on MSE(test) values, the best MLP-ANN models were combined into an MLP-ANN consensus model (MLP-ANN-CM; r²(test) = 0.824, MSE(test) = 0.088 and MAE(test) = 0.197). Finally, a general consensus model (GCM) has been obtained using the PLS and MLP-ANN-CM models (r²(test) = 0.811, MSE(test) = 0.100 and MAE(test) = 0.218). In addition, the selected descriptors (GGI6, Mor23m, SRW06, H7m, MLOGP, and μ) revealed important features that should be considered when planning new compounds of the 1-arylpyrazole class. The multivariate models proposed in this work are a powerful tool for the rational drug design of new compounds for neuropathic pain treatment. Graphical abstract: Main scaffold of the 1-arylpyrazole derivatives and the selected descriptors.
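
    A minimal scikit-learn sketch of the PLS/ANN consensus idea: fit the two base models and average their test-set predictions. Sample sizes mirror the abstract (45 training, 13 test compounds), but the descriptors are random stand-ins and the MLP solver is scikit-learn's default rather than gradient descent with momentum.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)
      X_train, y_train = rng.normal(size=(45, 6)), rng.normal(size=45)  # 45 compounds x 6 descriptors
      X_test = rng.normal(size=(13, 6))

      pls = PLSRegression(n_components=3).fit(X_train, y_train)
      ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_train, y_train)

      # Consensus model: simple average of the individual predictions
      consensus = (pls.predict(X_test).ravel() + ann.predict(X_test)) / 2.0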

  3. A new DWT/MC/DPCM video compression framework based on EBCOT

    NASA Astrophysics Data System (ADS)

    Mei, L. M.; Wu, H. R.; Tan, D. M.

    2005-07-01

    A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although the Discrete Cosine Transform (DCT)/MC/DPCM is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed for more than one decade in the literature and investigation is still ongoing. The contribution of this work is twofold. Firstly, the Embedded Block Coding with Optimal Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, which provides good compression ratio and an embedded rate-distortion (R-D) optimization mechanism. This is an extension of the EBCOT application from still images to videos. Secondly, this framework offers a good interface for the Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), where the Mean Squared Error (MSE) can be easily replaced with the PDM in the R-D optimization. Some of the preliminary results are reported here. They are also compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that under the specified conditions the proposed coder outperforms the benchmarks in terms of rate vs. distortion.

  4. A novel edge based embedding in medical images based on unique key generated using sudoku puzzle design.

    PubMed

    Santhi, B; Dheeptha, B

    2016-01-01

    The field of telemedicine has gained immense momentum, owing to the need for transmitting patients' information securely. This paper puts forth a unique method for embedding data in medical images, based on edge-based embedding and XOR coding. The algorithm proposes a novel key generation technique that utilizes the design of a sudoku puzzle to enhance the security of the transmitted message. Only the edge blocks of the cover image are utilized to embed the payloads. The least significant bits of the pixel values are changed by XOR coding depending on the data to be embedded and the key generated. Hence the distortion in the stego image is minimized and the information is retrieved accurately. Data are embedded in the RGB planes of the cover image, thus increasing its embedding capacity. Peak signal-to-noise ratio (PSNR), mean square error (MSE), universal image quality index (UIQI) and correlation coefficient (R) are the image quality measures used to analyze the quality of the stego image. It is evident from the results that the proposed technique outperforms the former methodologies.
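
    The core embedding step, LSB replacement with XOR coding, can be sketched in a few lines of NumPy. The key bits here are random stand-ins for the sudoku-derived key, and the PSNR/MSE check mirrors the quality measures listed above.

      import numpy as np

      def embed_lsb_xor(pixels, msg_bits, key_bits):
          # Set each pixel's LSB to (message bit XOR key bit), as in XOR coding.
          stego = pixels.copy()
          coded = np.bitwise_xor(msg_bits, key_bits)
          stego[:len(coded)] = (stego[:len(coded)] & 0xFE) | coded
          return stego

      def psnr(cover, stego):
          mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
          return np.inf if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

      cover = np.random.randint(0, 256, 1024, dtype=np.uint8)   # flattened edge-block pixels
      msg = np.random.randint(0, 2, 128, dtype=np.uint8)
      key = np.random.randint(0, 2, 128, dtype=np.uint8)        # stand-in for a sudoku-derived key
      print(psnr(cover, embed_lsb_xor(cover, msg, key)))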

  5. Evaluation of normalization methods in mammalian microRNA-Seq data

    PubMed Central

    Garmire, Lana Xia; Subramaniam, Shankar

    2012-01-01

    Simple total tag count normalization is inadequate for microRNA sequencing data generated from the next generation sequencing technology. However, so far systematic evaluation of normalization methods on microRNA sequencing data is lacking. We comprehensively evaluate seven commonly used normalization methods including global normalization, Lowess normalization, Trimmed Mean Method (TMM), quantile normalization, scaling normalization, variance stabilization, and invariant method. We assess these methods on two individual experimental data sets with the empirical statistical metrics of mean square error (MSE) and Kolmogorov-Smirnov (K-S) statistic. Additionally, we evaluate the methods with results from quantitative PCR validation. Our results consistently show that Lowess normalization and quantile normalization perform the best, whereas TMM, a method applied to the RNA-Sequencing normalization, performs the worst. The poor performance of TMM normalization is further evidenced by abnormal results from the test of differential expression (DE) of microRNA-Seq data. Compared with the choice of model used for DE, the choice of normalization method is the primary factor affecting the results of DE. In summary, Lowess normalization and quantile normalization are recommended for normalizing microRNA-Seq data, whereas the TMM method should be used with caution. PMID:22532701
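
    For reference, quantile normalization (one of the two recommended methods) reduces to three NumPy operations on a features-by-samples count matrix; this condensed sketch breaks ties by order rather than by averaging, which a full implementation would handle.

      import numpy as np

      def quantile_normalize(counts):
          # counts: features x samples; each column is mapped onto the mean sorted profile.
          ranks = np.argsort(np.argsort(counts, axis=0), axis=0)   # per-sample ranks
          reference = np.sort(counts, axis=0).mean(axis=1)         # mean of sorted columns
          return reference[ranks]

      m = np.array([[5., 4., 3.], [2., 1., 4.], [3., 4., 6.], [4., 2., 8.]])
      print(quantile_normalize(m))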

  6. Pressure sensor based on the fiber-optic extrinsic Fabry-Perot interferometer

    NASA Astrophysics Data System (ADS)

    Yu, Qingxu; Zhou, Xinlei

    2011-03-01

    Pressure sensors based on the fiber-optic extrinsic Fabry-Perot interferometer (EFPI) have been extensively applied in various industrial and biomedical fields. In this paper, some key improvements of EFPI-based pressure sensors, such as the controlled thermal bonding technique, diaphragm-based EFPI sensors, and white light interference technology, are reviewed. Recent progress on signal demodulation methods and applications of EFPI-based pressure sensors is introduced. Signal demodulation algorithms based on cross correlation and mean square error (MSE) estimation have been proposed for retrieving the cavity length of the EFPI. Absolute measurement with a resolution of 0.08 nm over a large dynamic range has been carried out. For downhole monitoring, an EFPI and fiber Bragg grating (FBG) cascade-multiplexed fiber-optic sensor system has been developed, which can operate at temperatures up to 300 °C with good long-term stability and extremely low temperature cross-sensitivity. Diaphragm-based EFPI pressure sensors have been successfully used for low pressure and acoustic wave detection. Experimental results show that a sensitivity of 31 mV/Pa in the frequency range of 100 Hz to 12.7 kHz has been obtained for aeroacoustic wave detection.

  7. QSAR study on the antimalarial activity of Plasmodium falciparum dihydroorotate dehydrogenase (PfDHODH) inhibitors.

    PubMed

    Hou, X; Chen, X; Zhang, M; Yan, A

    2016-01-01

    Plasmodium falciparum, the most fatal parasite that causes malaria, is responsible for over one million deaths per year. P. falciparum dihydroorotate dehydrogenase (PfDHODH) has been validated as a promising drug development target for antimalarial therapy since it catalyzes the rate-limiting step for DNA and RNA biosynthesis. In this study, we investigated the quantitative structure-activity relationships (QSAR) of the antimalarial activity of PfDHODH inhibitors by generating four computational models using a multilinear regression (MLR) and a support vector machine (SVM) based on a dataset of 255 PfDHODH inhibitors. All the models display good prediction quality with a leave-one-out q² > 0.66, a correlation coefficient (r) > 0.85 on both training and test sets, and a mean square error (MSE) < 0.32 on training sets and < 0.37 on test sets, respectively. The study indicated that the hydrogen bonding ability, atom polarizabilities and ring complexity are predominant factors for inhibitors' antimalarial activity. The models are capable of predicting inhibitors' antimalarial activity and the molecular descriptors for building the models could be helpful in the development of new antimalarial drugs.

  8. Enhancement of security using structured phase masked in optical image encryption on Fresnel transform domain

    NASA Astrophysics Data System (ADS)

    Yadav, Poonam Lata; Singh, Hukum

    2018-05-01

    To enhance the security in optical image encryption systems and to protect them from attackers, this paper proposes a new digital spiral phase mask based on the Fresnel Transform. In this cryptosystem the Spiral Phase Mask (SPM) used is a hybrid of a Fresnel Zone Plate (FZP) and a Radial Hilbert Mask (RHM), which makes the key strong and enhances the security. The different keys used for encryption and decryption purposes make the system much more secure. The proposed scheme uses various structured phase masks, which increase the key space and the number of parameters, making it difficult for attackers to exactly find the key needed to recover the original image. The strength of the proposed cryptosystem has been analyzed by simulation in MATLAB 7.9.0 (R2008a). Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR) are calculated for the proposed algorithm. The experimental results are provided to highlight the effectiveness and sustainability of the proposed cryptosystem and to prove that the cryptosystem is secure for use.
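
    A sketch of how such a hybrid mask can be synthesized numerically: the FZP contributes a quadratic (Fresnel-lens) phase, the RHM a helical phase, and the SPM is their product. The grid size, wavelength, focal length, and topological charge below are illustrative assumptions, not the paper's parameters.

      import numpy as np

      N, width = 512, 5e-3                                  # samples and physical width (m), assumed
      x = np.linspace(-width / 2, width / 2, N)
      X, Y = np.meshgrid(x, x)
      r, theta = np.hypot(X, Y), np.arctan2(Y, X)

      wavelength, focal_length, charge = 632.8e-9, 0.2, 1   # assumed values

      fzp = np.exp(1j * np.pi * r ** 2 / (wavelength * focal_length))  # Fresnel zone plate phase
      rhm = np.exp(1j * charge * theta)                                # radial Hilbert (helical) phase
      spm = fzp * rhm                                                  # hybrid spiral phase mask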

  9. A novel fuzzy logic-based image steganography method to ensure medical data security.

    PubMed

    Karakış, R; Güler, I; Çapraz, I; Bilir, E

    2015-12-01

    This study aims to secure medical data by combining them into one file format using steganographic methods. The electroencephalogram (EEG) is selected as the hidden data, and magnetic resonance (MR) images are used as the cover image. In addition to the EEG, the message is composed of the doctor's comments and patient information in the file header of the images. Two new image steganography methods that are based on fuzzy logic and similarity are proposed to select the non-sequential least significant bits (LSB) of image pixels. The similarity values of the gray levels in the pixels are used to hide the message. The message is secured to prevent attacks by using lossless compression and symmetric encryption algorithms. The performance of stego image quality is measured by mean square error (MSE), peak signal-to-noise ratio (PSNR), structural similarity measure (SSIM), universal quality index (UQI), and correlation coefficient (R). According to the obtained results, the proposed method ensures the confidentiality of the patient information, and increases the data storage and transmission capacity of both the MR images and EEG signals.

  10. Detection of Wildfires with Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Umphlett, B.; Leeman, J.; Morrissey, M. L.

    2011-12-01

    Currently fire detection for the National Oceanic and Atmospheric Administration (NOAA) using satellite data is accomplished with algorithms and error checking by human analysts. Artificial neural networks (ANNs) have been shown to be more accurate than algorithms or statistical methods for applications dealing with multiple datasets of complex observed data in the natural sciences. ANNs also deal well with multiple data sources that are not all equally reliable or equally informative to the problem. An ANN was tested to evaluate its accuracy in detecting wildfires utilizing polar orbiter numerical data from the Advanced Very High Resolution Radiometer (AVHRR). Datasets containing locations of known fires were gathered from the NOAA's polar orbiting satellites via the Comprehensive Large Array-data Stewardship System (CLASS). The data were then calibrated and navigation-corrected using the Environment for Visualizing Images (ENVI). Fires were located with the aid of shapefiles generated via ArcGIS. Afterwards, several smaller ten pixel by ten pixel datasets were created for each fire (using the ENVI corrected data). Several datasets were created for each fire in order to vary fire position and avoid training the ANN to look only at fires in the center of an image. Datasets containing no fires were also created. A basic pattern recognition neural network was established with the MATLAB neural network toolbox. The datasets were then randomly separated into categories used to train, validate, and test the ANN. To prevent overfitting of the data, the mean squared error (MSE) of the network was monitored and training was stopped when the MSE began to rise. Networks were tested using each channel of the AVHRR data independently, channels 3a and 3b combined, and all six channels. The number of hidden neurons for each input set was also varied from 5 to 350 in steps of 5 neurons. Each configuration was run 10 times, totaling about 4,200 individual network evaluations. Thirty network parameters were recorded to characterize performance. These parameters were plotted with various data display techniques to determine which network configuration was not only most accurate in fire classification, but also the most computationally efficient. The most accurate fire classification network used all six channels of AVHRR data to achieve an accuracy ranging from 73-90%.
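
    The early-stopping rule described above (halt when validation MSE begins to rise) can be sketched with a warm-started scikit-learn network standing in for the MATLAB toolbox; the data, layer size, and patience value are illustrative assumptions.

      import numpy as np
      from sklearn.metrics import mean_squared_error
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      X, y = rng.normal(size=(400, 6)), rng.integers(0, 2, 400)   # stand-in for 6 AVHRR channels
      X_tr, y_tr, X_val, y_val = X[:300], y[:300], X[300:], y[300:]

      net = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1, warm_start=True, random_state=0)
      best_mse, patience = np.inf, 0
      for epoch in range(500):
          net.fit(X_tr, y_tr)            # one extra epoch per call because warm_start=True
          val_mse = mean_squared_error(y_val, net.predict_proba(X_val)[:, 1])
          if val_mse < best_mse:
              best_mse, patience = val_mse, 0
          else:
              patience += 1
              if patience >= 10:         # validation MSE has begun to rise: stop training
                  break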

  11. Levels of naturally occurring gamma radiation measured in British homes and their prediction in particular residences.

    PubMed

    Kendall, G M; Wakeford, R; Athanson, M; Vincent, T J; Carter, E J; McColl, N P; Little, M P

    2016-03-01

    Gamma radiation from natural sources (including directly ionising cosmic rays) is an important component of background radiation. In the present paper, indoor measurements of naturally occurring gamma rays that were undertaken as part of the UK Childhood Cancer Study are summarised, and it is shown that these are broadly compatible with an earlier UK National Survey. The distribution of indoor gamma-ray dose rates in Great Britain is approximately normal with mean 96 nGy/h and standard deviation 23 nGy/h. Directly ionising cosmic rays contribute about one-third of the total. The expanded dataset allows a more detailed description than previously of indoor gamma-ray exposures and in particular their geographical variation. Various strategies for predicting indoor natural background gamma-ray dose rates were explored. In the first of these, a geostatistical model was fitted, which assumes an underlying geologically determined spatial variation, superimposed on which is a Gaussian stochastic process with Matérn correlation structure that models the observed tendency of dose rates in neighbouring houses to correlate. In the second approach, a number of dose-rate interpolation measures were first derived, based on averages over geologically or administratively defined areas or using distance-weighted averages of measurements at nearest-neighbour points. Linear regression was then used to derive an optimal linear combination of these interpolation measures. The predictive performances of the two models were compared via cross-validation, using a randomly selected 70% of the data to fit the models and the remaining 30% to test them. The mean square error (MSE) of the linear-regression model was lower than that of the Gaussian-Matérn model (MSE 378 and 411, respectively). The predictive performance of the two candidate models was also evaluated via simulation; the linear-regression (ordinary least squares, OLS) model performs significantly better than the Gaussian-Matérn model.
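
    The model comparison above can be reproduced in miniature with scikit-learn: a Gaussian process with a Matérn kernel against an ordinary linear regression, scored by test-set MSE on a 70/30 split. The synthetic dose-rate field below is an assumption for illustration, and the linear model here regresses directly on coordinates rather than on the paper's interpolation covariates.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import Matern
      from sklearn.linear_model import LinearRegression
      from sklearn.metrics import mean_squared_error
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.uniform(size=(300, 2))                                      # house coordinates (synthetic)
      y = 96 + 23 * np.sin(3 * X[:, 0]) + rng.normal(scale=10, size=300)  # dose rates, nGy/h

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      gp = GaussianProcessRegressor(kernel=Matern(nu=1.5), normalize_y=True).fit(X_tr, y_tr)
      lr = LinearRegression().fit(X_tr, y_tr)
      print("Matern GP MSE:", mean_squared_error(y_te, gp.predict(X_te)))
      print("Linear MSE:  ", mean_squared_error(y_te, lr.predict(X_te)))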

  12. Gaussian Process Regression for Predictive But Interpretable Machine Learning Models: An Example of Predicting Mental Workload across Tasks

    PubMed Central

    Caywood, Matthew S.; Roberts, Daniel M.; Colombe, Jeffrey B.; Greenwald, Hal S.; Weiland, Monica Z.

    2017-01-01

    There is increasing interest in real-time brain-computer interfaces (BCIs) for the passive monitoring of human cognitive state, including cognitive workload. Too often, however, effective BCIs based on machine learning techniques may function as “black boxes” that are difficult to analyze or interpret. In an effort toward more interpretable BCIs, we studied a family of N-back working memory tasks using a machine learning model, Gaussian Process Regression (GPR), which was both powerful and amenable to analysis. Participants performed the N-back task with three stimulus variants, auditory-verbal, visual-spatial, and visual-numeric, each at three working memory loads. GPR models were trained and tested on EEG data from all three task variants combined, in an effort to identify a model that could be predictive of mental workload demand regardless of stimulus modality. To provide a comparison for GPR performance, a model was additionally trained using multiple linear regression (MLR). The GPR model was effective when trained on individual participant EEG data, resulting in an average standardized mean squared error (sMSE) between true and predicted N-back levels of 0.44. In comparison, the MLR model using the same data resulted in an average sMSE of 0.55. We additionally demonstrate how GPR can be used to identify which EEG features are relevant for prediction of cognitive workload in an individual participant. A fraction of EEG features accounted for the majority of the model’s predictive power; using only the top 25% of features performed nearly as well as using 100% of features. Subsets of features identified by linear models (ANOVA) were not as efficient as subsets identified by GPR. This raises the possibility of BCIs that require fewer model features while capturing all of the information needed to achieve high predictive accuracy. PMID:28123359

  13. Estimation of Spatiotemporal Sensitivity Using Band-limited Signals with No Additional Acquisitions for k-t Parallel Imaging.

    PubMed

    Takeshima, Hidenori; Saitoh, Kanako; Nitta, Shuhei; Shiodera, Taichiro; Takeguchi, Tomoyuki; Bannae, Shuhei; Kuhara, Shigehide

    2018-03-13

    Dynamic MR techniques, such as cardiac cine imaging, benefit from shorter acquisition times. The goal of the present study was to develop a method that achieves short acquisition times, while maintaining a cost-effective reconstruction, for dynamic MRI. k-t sensitivity encoding (SENSE) was identified as the base method to be enhanced to meet these two requirements. The proposed method achieves a reduction in acquisition time by estimating the spatiotemporal (x-f) sensitivity without requiring the acquisition of the alias-free signals typical of the k-t SENSE technique. The cost-effective reconstruction, in turn, is achieved by a computationally efficient estimation of the x-f sensitivity from the band-limited signals of the aliased inputs. Such band-limited signals are suitable for sensitivity estimation because the strongly aliased signals have been removed. For the same nominal reduction factor of 4, the proposed method achieved a net reduction factor of 4, significantly higher than the 2.29 achieved by k-t SENSE. The processing time is reduced from 4.1 s for k-t SENSE to 1.7 s for the proposed method. The image quality obtained using the proposed method proved to be superior (mean squared error [MSE] ± standard deviation [SD] = 6.85 ± 2.73) compared to the k-t SENSE case (MSE ± SD = 12.73 ± 3.60) for the vertical long-axis (VLA) view, as well as other views. In the present study, k-t SENSE was identified as a suitable base method to be improved to achieve both short acquisition times and a cost-effective reconstruction. To enhance these characteristics of the base method, a novel implementation is proposed, estimating the x-f sensitivity without the need for an explicit scan of the reference signals. Experimental results showed that the acquisition and computational times and the image quality for the proposed method were improved compared to the standard k-t SENSE method.

  14. JPEG 2000 Encoding with Perceptual Distortion Control

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Liu, Zhen; Karam, Lina J.

    2008-01-01

    An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total number of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.

  15. SU-G-TeP1-08: LINAC Head Geometry Modeling for Cyber Knife System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, B; Li, Y; Liu, B

    Purpose: Knowledge of the LINAC head information is critical for model based dose calculation algorithms. However, the geometries are difficult to measure precisely. The purpose of this study is to develop linac head models for the Cyber Knife system (CKS). Methods: For CKS, the commissioning data were measured in water at 800mm SAD. The measured full width at half maximum (FWHM) for each cone was found greater than the nominal value; this was further confirmed by additional film measurement in air. Diameter correction, cone shift and source shift models (DCM, CSM and SSM) are proposed to account for the differences. In DCM, a cone-specific correction is applied. For CSM and SSM, a single shift is applied to the cone or source physical position. All three models were validated with an in-house developed pencil beam dose calculation algorithm, and further evaluated by the collimator scatter factor (Sc) correction. Results: The mean square error (MSE) between nominal diameter and the FWHM derived from commissioning data and in-air measurement are 0.54mm and 0.44mm, with the discrepancy increasing with cone size. Optimal shift for CSM and SSM is found to be 9mm upward and 18mm downward, respectively. The MSE in FWHM is reduced to 0.04mm and 0.14mm for DCM and CSM (SSM). Both DCM and CSM result in the same set of Sc values. Combining all cones at SAD 600–1000mm, the average deviation from 1 in Sc of DCM (CSM) and SSM is 2.6% and 2.2%, and reduced to 0.9% and 0.7% for the cones with diameter greater than 15mm. Conclusion: We developed three geometrical models for CKS. All models can handle the discrepancy between vendor specifications and commissioning data. SSM has the best performance for Sc correction. The study also validated that a point source can be used in CKS dose calculation algorithms.

  16. Type 2 diabetes in Vietnam: a cross-sectional, prevalence-based cost-of-illness study.

    PubMed

    Le, Nguyen Tu Dang; Dinh Pham, Luyen; Quang Vo, Trung

    2017-01-01

    According to the International Diabetes Federation, total global health care expenditures for diabetes tripled between 2003 and 2013 because of increases in the number of people with diabetes as well as in the average expenditures per patient. This study aims to provide accurate and timely information about the economic impacts of type 2 diabetes mellitus (T2DM) in Vietnam. The cost-of-illness estimates followed a prospective, prevalence-based approach from the societal perspective of T2DM with 392 selected diabetic patients who received treatment from a public hospital in Ho Chi Minh City, Vietnam, during the 2016 fiscal year. In this study, the annual cost per patient estimate was US $246.10 (95% CI 228.3, 267.2) for 392 patients, which accounted for about 12% (95% CI 11, 13) of the gross domestic product per capita in 2017. That includes US $127.30, US $34.40 and US $84.40 for direct medical costs, direct nonmedical expenditures, and indirect costs, respectively. The cost of pharmaceuticals accounted for the bulk of total expenditures in our study (27.5% of total costs and 53.2% of direct medical costs). A bootstrap analysis showed that female patients had a higher cost of treatment than men at US $48.90 (95% CI 3.1, 95.0); those who received insulin and oral antidiabetics (OAD) also had a statistically significant higher cost of treatment compared to those receiving OAD, US $445.90 (95% CI 181.2, 690.6). The Gradient Boosting Regression (Ensemble method) and Lasso Regression (Generalized Linear Models) were determined to be the best models to predict the cost of T2DM (R² = 65.3, mean square error [MSE] = 0.94; and R² = 64.75, MSE = 0.96, respectively). The findings of this study serve as a reference for policy decision making in diabetes management as well as adjustment of costs for patients in order to reduce the economic impact of the disease.
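
    The two best-performing predictive models named above are available directly in scikit-learn; this sketch cross-validates both on synthetic stand-in data and reports R² and MSE, mirroring the reported criteria.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.linear_model import Lasso
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(392, 8))                        # 392 patients x predictors (synthetic)
      y = X @ rng.normal(size=8) + rng.normal(size=392)    # stand-in for treatment cost

      for name, model in [("Gradient Boosting", GradientBoostingRegressor(random_state=0)),
                          ("Lasso", Lasso(alpha=0.1))]:
          r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
          mse = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error").mean()
          print(f"{name}: R2 = {r2:.3f}, MSE = {mse:.3f}")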

  17. Using Least Squares for Error Propagation

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2015-01-01

    The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
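
    The built-in procedure the abstract refers to is exposed by most least-squares routines; with SciPy's curve_fit, for example, the parameter SEs are the square roots of the covariance-matrix diagonal. The exponential model and data below are illustrative. Defining the target quantity as an explicit fit parameter then yields its propagated error the same way.

      import numpy as np
      from scipy.optimize import curve_fit

      def model(x, a, b):
          return a * np.exp(-b * x)

      x = np.linspace(0, 4, 30)
      y = model(x, 2.5, 1.3) + np.random.default_rng(0).normal(scale=0.05, size=30)

      popt, pcov = curve_fit(model, x, y, p0=(1.0, 1.0))
      se = np.sqrt(np.diag(pcov))   # SEs of the adjustable parameters
      print(popt, se)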

  18. Multifrequency synthesis and extraction using square wave projection patterns for quantitative tissue imaging.

    PubMed

    Nadeau, Kyle P; Rice, Tyler B; Durkin, Anthony J; Tromberg, Bruce J

    2015-11-01

    We present a method for spatial frequency domain data acquisition utilizing a multifrequency synthesis and extraction (MSE) method and binary square wave projection patterns. By illuminating a sample with square wave patterns, multiple spatial frequency components are simultaneously attenuated and can be extracted to determine optical property and depth information. Additionally, binary patterns are projected faster than sinusoids typically used in spatial frequency domain imaging (SFDI), allowing for short (millisecond or less) camera exposure times, and data acquisition speeds an order of magnitude or more greater than conventional SFDI. In cases where sensitivity to superficial layers or scattering is important, the fundamental component from higher frequency square wave patterns can be used. When probing deeper layers, the fundamental and harmonic components from lower frequency square wave patterns can be used. We compared optical property and depth penetration results extracted using square waves to those obtained using sinusoidal patterns on an in vivo human forearm and absorbing tube phantom, respectively. Absorption and reduced scattering coefficient values agree with conventional SFDI to within 1% using both high frequency (fundamental) and low frequency (fundamental and harmonic) spatial frequencies. Depth penetration reflectance values also agree to within 1% of conventional SFDI.

  19. Multifrequency synthesis and extraction using square wave projection patterns for quantitative tissue imaging

    PubMed Central

    Nadeau, Kyle P.; Rice, Tyler B.; Durkin, Anthony J.; Tromberg, Bruce J.

    2015-01-01

    We present a method for spatial frequency domain data acquisition utilizing a multifrequency synthesis and extraction (MSE) method and binary square wave projection patterns. By illuminating a sample with square wave patterns, multiple spatial frequency components are simultaneously attenuated and can be extracted to determine optical property and depth information. Additionally, binary patterns are projected faster than sinusoids typically used in spatial frequency domain imaging (SFDI), allowing for short (millisecond or less) camera exposure times, and data acquisition speeds an order of magnitude or more greater than conventional SFDI. In cases where sensitivity to superficial layers or scattering is important, the fundamental component from higher frequency square wave patterns can be used. When probing deeper layers, the fundamental and harmonic components from lower frequency square wave patterns can be used. We compared optical property and depth penetration results extracted using square waves to those obtained using sinusoidal patterns on an in vivo human forearm and absorbing tube phantom, respectively. Absorption and reduced scattering coefficient values agree with conventional SFDI to within 1% using both high frequency (fundamental) and low frequency (fundamental and harmonic) spatial frequencies. Depth penetration reflectance values also agree to within 1% of conventional SFDI. PMID:26524682

  20. Matlab based automatization of an inverse surface temperature modelling procedure for Greenland ice cores using an existing firn densification and heat diffusion model

    NASA Astrophysics Data System (ADS)

    Döring, Michael; Kobashi, Takuro; Kindler, Philippe; Guillevic, Myriam; Leuenberger, Markus

    2016-04-01

    In order to study Northern Hemisphere (NH) climate interactions and variability, access to high-resolution surface temperature records of the Greenland ice sheet is an integral condition. For example, understanding the causes of changes in the strength of the Atlantic meridional overturning circulation (AMOC) and related effects for the NH [Broecker et al. (1985); Rahmstorf (2002)], or the origin of and processes leading to the so-called Dansgaard-Oeschger events in glacial conditions [Johnsen et al. (1992); Dansgaard et al. (1982)], demands accurate and reproducible temperature data. To reveal the surface temperature history, it is suitable to use the isotopic composition of nitrogen (δ15N) from ancient air extracted from ice cores drilled at the Greenland ice sheet. The measured δ15N record of an ice core can be used as a paleothermometer because the isotopic composition of nitrogen in the atmosphere is nearly constant at orbital timescales and changes only through firn processes [Severinghaus et al. (1998); Mariotti (1983)]. To reconstruct the surface temperature for a specific drilling site, the use of firn models describing gas and temperature diffusion throughout the ice sheet is necessary. For this, an existing firn densification and heat diffusion model [Schwander et al. (1997)] is used. Thereby, a theoretical δ15N record is generated for different temperature and accumulation rate scenarios and compared with measurement data in terms of mean square error (MSE), which leads finally to an optimization problem, namely the finding of a minimal MSE. The goal of the presented study is a Matlab-based automatization of this inverse modelling procedure. The crucial point hereby is to find the temperature and accumulation rate input time series which minimize the MSE. For that, we follow two approaches. The first one is a Monte Carlo type input generator which varies each point in the input time series and calculates the MSE. Then the solutions that fulfil a given limit, or the best solutions for a given number of iterations, are saved and used as a new input for the next model run. This procedure is repeated until the MSE undercuts a given threshold (e.g., the analytical error of the measurement data). For the second approach, different Matlab-based derivative-free optimization algorithms (DFOAs) (i.a. the Nelder-Mead simplex method [Lagarias et al. (1998)]) are studied with an adaptation of the manual method of Kindler et al. (2013). For that, the DFOAs are used to find the values of the temperature sensitivity and offset, used to calculate the surface temperature from the oxygen isotope records of the ice core water samples, that minimize the MSE. Finally, a comparison to surface temperature records obtained with other methods for Glacial as well as Holocene data is planned. References: Broecker, W. S., Peteet, D., and Rind, D. (1985). Does the ocean-atmosphere system have more than one stable mode of operation? Nature, 315(6014):21-26. Dansgaard, W., Clausen, H., Gundestrup, N., Hammer, C., Johnsen, S., Gristinsdottir, P., and Reeh, N. (1982). A new Greenland deep ice core. Science, 218(4579):1273-1277. Johnsen, S. J., Clausen, H. B., Dansgaard, W., Fuhrer, K., Gundestrup, N., Hammer, C. U., Iversen, P., Jouzel, J., Stauffer, B., and Steffensen, J. P. (1992). Irregular glacial interstadials recorded in a new Greenland ice core. Nature, 359:311-313. Kindler, P., Guillevic, M., Baumgartner, M., Schwander, J., Landais, A., and Leuenberger, M. (2013). NGRIP Temperature Reconstruction from 10 to 120 kyr b2k. Clim. Past, 9:4099-4143. Lagarias, J. C., Reeds, J. A., Wright, M. H., and Wright, P. E. (1998). Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions. SIAM Journal of Optimization, 9(1):112-147. Mariotti, A. (1983). Atmospheric nitrogen is a reliable standard for natural 15N abundance measurements. Nature, 303:685-687. Rahmstorf, S. (2002). Ocean circulation and climate during the past 120,000 years. Nature, 419(6903):207-214. Severinghaus, J. P., Sowers, T., Brook, E. J., Alley, R. B., and Bender, M. L. (1998). Timing of abrupt climate change at the end of the Younger Dryas interval from thermally fractionated gases in polar ice. Nature, 391:141-146. Schwander, J., Sowers, T., Barnola, J., Blunier, T., Fuchs, A., and Malaizé, B. (1997). Age scale of the air in the summit ice: implication for glacial-interglacial temperature change. J. Geophys. Res-Atmos., 102(D16):19483-19493.
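
    The second approach reduces to a derivative-free minimization of an MSE objective; a minimal SciPy sketch with a toy stand-in for the firn model (a real run would call the Schwander et al. model, and all numbers below are synthetic) looks like this.

      import numpy as np
      from scipy.optimize import minimize

      d15N_meas = np.array([0.30, 0.31, 0.29, 0.33, 0.35])    # measured record (synthetic)
      d18O = np.array([-40.0, -39.5, -40.2, -38.9, -38.0])    # water-isotope input (synthetic)

      def forward_model(params):
          # Toy stand-in for the firn densification/heat diffusion model:
          # maps (temperature sensitivity alpha, offset beta) to a d15N series.
          alpha, beta = params
          surface_T = alpha * d18O + beta
          return 0.01 * surface_T + 0.75

      def mse(params):
          return np.mean((forward_model(params) - d15N_meas) ** 2)

      res = minimize(mse, x0=[0.5, -15.0], method="Nelder-Mead")
      print(res.x, res.fun)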

  1. Capacity and optimal collusion attack channels for Gaussian fingerprinting games

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Moulin, Pierre

    2007-02-01

    In content fingerprinting, the same media covertext - image, video, audio, or text - is distributed to many users. A fingerprint, a mark unique to each user, is embedded into each copy of the distributed covertext. In a collusion attack, two or more users may combine their copies in an attempt to "remove" their fingerprints and forge a pirated copy. To trace the forgery back to members of the coalition, we need fingerprinting codes that can reliably identify the fingerprints of those members. Researchers have been focusing on designing or testing fingerprints for Gaussian host signals and the mean square error (MSE) distortion under some classes of collusion attacks, in terms of the detector's error probability in detecting collusion members. For example, under the assumptions of Gaussian fingerprints and Gaussian attacks (the fingerprinted signals are averaged and then the result is passed through a Gaussian test channel), Moulin and Briassouli [1] derived optimal strategies in a game-theoretic framework that uses the detector's error probability as the performance measure for a binary decision problem (whether a user participates in the collusion attack or not); Stone [2] and Zhao et al. [3] studied average and other non-linear collusion attacks for Gaussian-like fingerprints; Wang et al. [4] stated that the average collusion attack is the most efficient one for orthogonal fingerprints; Kiyavash and Moulin [5] derived a mathematical proof of the optimality of the average collusion attack under some assumptions. In this paper, we also consider Gaussian cover signals, the MSE distortion, and memoryless collusion attacks. We do not make any assumption about the fingerprinting codes used other than an embedding distortion constraint. Also, our only assumptions about the attack channel are an expected distortion constraint, a memoryless constraint, and a fairness constraint. That is, the colluders are allowed to use any arbitrary nonlinear strategy subject to the above constraints. Under those constraints on the fingerprint embedder and the colluders, fingerprinting capacity is obtained as the solution of a mutual-information game involving probability density functions (pdf's) designed by the embedder and the colluders. We show that the optimal fingerprinting strategy is a Gaussian test channel where the fingerprinted signal is the sum of an attenuated version of the cover signal plus a Gaussian information-bearing noise, and the optimal collusion strategy is to average fingerprinted signals possessed by all the colluders and pass the averaged copy through a Gaussian test channel. The capacity result and the optimal strategies are the same for both the private and public games. In the former scenario, the original covertext is available to the decoder, while in the latter setup, the original covertext is available to the encoder but not to the decoder.

  2. Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures

    DTIC Science & Technology

    2016-06-01

    Report keywords: inventory management improvement plan; mean of absolute scaled error; lead-time adjusted squared error; forecast accuracy; benchmarking; naïve method ... Acronym-list excerpt: JASA, Journal of the American Statistical Association; LASE, Lead-time Adjusted Squared Error; LCI, Life Cycle Indicator; MA, Moving Average; MAE ... Mean Squared Error; NAVSUP, Naval Supply Systems Command; NDAA, National Defense Authorization Act; NIIN, National Individual Identification Number
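
    Of the metrics named in this record, the mean absolute scaled error is the easiest to misimplement; the sketch below follows the standard definition, scaling forecast MAE by the in-sample MAE of the lag-m naive method. Strictly, the scaling term should come from the training series; this condensed version uses the same series for both.

      import numpy as np

      def mase(actual, forecast, m=1):
          # MAE of the forecast, scaled by the MAE of the lag-m naive forecast.
          actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
          naive_mae = np.mean(np.abs(actual[m:] - actual[:-m]))
          return np.mean(np.abs(actual - forecast)) / naive_mae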

  3. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
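
    The paper's modified least squares is its own development, but the same variance-ratio idea appears in classical Deming regression, an errors-in-variables straight-line fit; the sketch below implements that classical estimator purely for orientation, not as the paper's algorithm.

      import numpy as np

      def deming_fit(x, y, var_ratio):
          # Deming regression: straight-line fit when both x and y carry error.
          # var_ratio = (response error variance) / (measurement error variance).
          x, y = np.asarray(x, float), np.asarray(y, float)
          sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
          sxy = np.cov(x, y, ddof=1)[0, 1]
          d = syy - var_ratio * sxx
          slope = (d + np.sqrt(d ** 2 + 4 * var_ratio * sxy ** 2)) / (2 * sxy)
          return slope, y.mean() - slope * x.mean()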

  4. MSE55, a Cdc42 effector protein, induces long cellular extensions in fibroblasts

    PubMed Central

    Burbelo, Peter D.; Snow, Dianne M.; Bahou, Wadie; Spiegel, Sarah

    1999-01-01

    Cdc42 is a member of the Rho GTPase family that regulates multiple cellular activities, including actin polymerization, kinase-signaling activation, and cell polarization. MSE55 is a nonkinase CRIB (Cdc42/Rac interactive-binding) domain-containing molecule of unknown function. Using glutathione S-transferase-capture experiments, we show that MSE55 binds to Cdc42 in a GTP-dependent manner. MSE55 binding to Cdc42 required an intact CRIB domain, because a MSE55 CRIB domain mutant no longer interacted with Cdc42. To study the function of MSE55 we transfected either wild-type MSE55 or a MSE55 CRIB mutant into mammalian cells. In Cos-7 cells, wild-type MSE55 localized at membrane ruffles and increased membrane actin polymerization, whereas expression of the MSE55 CRIB mutant showed fewer membrane ruffles. In contrast to these results, MSE55 induced the formation of long, actin-based protrusions in NIH 3T3 cells as detected by immunofluorescence and live-cell video microscopy. MSE55-induced protrusion formation was blocked by expression of dominant-negative N17Cdc42, but not by expression of dominant-negative N17Rac. These findings indicate that MSE55 is a Cdc42 effector protein that mediates actin cytoskeleton reorganization at the plasma membrane. PMID:10430899

  5. Controls of channel morphology and sediment concentration on flow resistance in a large sand-bed river: A case study of the lower Yellow River

    NASA Astrophysics Data System (ADS)

    Ma, Yuanxu; Huang, He Qing

    2016-07-01

    Accurate estimation of flow resistance is crucial for flood routing, flow discharge and velocity estimation, and engineering design. Various empirical and semiempirical flow resistance models have been developed during the past century; however, a universal flow resistance model for varying types of rivers has remained difficult to achieve to date. In this study, hydrometric data sets from six stations in the lower Yellow River during 1958-1959 are used to calibrate three empirical flow resistance models (Eqs. (5)-(7)) and evaluate their predictability. A group of statistical measures has been used to evaluate the goodness of fit of these models, including root mean square error (RMSE), coefficient of determination (CD), the Nash coefficient (NA), mean relative error (MRE), mean symmetry error (MSE), percentage of data with a relative error ≤ 50% and 25% (P50, P25), and percentage of data with overestimated error (POE). Three model selection criteria are also employed to assess the model predictability: Akaike information criterion (AIC), Bayesian information criterion (BIC), and a modified model selection criterion (MSC). The results show that mean flow depth (d) and water surface slope (S) can only explain a small proportion of variance in flow resistance. When channel width (w) and suspended sediment concentration (SSC) are involved, the new model (7) achieves a better performance than the previous ones. The MRE of model (7) is generally < 20%, which is apparently better than that reported by previous studies. This model is validated using the data sets from the corresponding stations during 1965-1966, and the results show larger uncertainties than the calibrating model. This probably resulted from the temporal shift of dominant controls caused by channel change under a varying flow regime. With advances in earth observation techniques, information about channel width, mean flow depth, and suspended sediment concentration can be effectively extracted from multisource satellite images. We expect that the empirical methods developed in this study can be used as an effective surrogate in estimation of flow resistance in large sand-bed rivers like the lower Yellow River.

  6. Evaluation of non-rigid registration parameters for atlas-based segmentation of CT images of human cochlea

    NASA Astrophysics Data System (ADS)

    Elfarnawany, Mai; Alam, S. Riyahi; Agrawal, Sumit K.; Ladak, Hanif M.

    2017-02-01

    Cochlear implant surgery is a hearing restoration procedure for patients with profound hearing loss. In this surgery, an electrode is inserted into the cochlea to stimulate the auditory nerve and restore the patient's hearing. Clinical computed tomography (CT) images are used for planning and evaluation of electrode placement, but their low resolution limits the visualization of internal cochlear structures. Therefore, high resolution micro-CT images are used to develop atlas-based segmentation methods to extract these nonvisible anatomical features in clinical CT images. Accurate registration of the high and low resolution CT images is a prerequisite for reliable atlas-based segmentation. In this study, we evaluate and compare different non-rigid B-spline registration parameters using micro-CT and clinical CT images of five cadaveric human cochleae. The varying registration parameters are cost function (normalized correlation (NC), mutual information (MI) and mean square error (MSE)), interpolation method (linear, windowed-sinc and B-spline) and sampling percentage (1%, 10% and 100%). We compare the registration results visually and quantitatively using the Dice similarity coefficient (DSC), Hausdorff distance (HD) and absolute percentage error in cochlear volume. Using MI or MSE cost functions and linear or windowed-sinc interpolation resulted in visually undesirable deformation of internal cochlear structures. Quantitatively, the transforms using 100% sampling percentage yielded the highest DSC and smallest HD (0.828 ± 0.021 and 0.25 ± 0.09 mm, respectively). Therefore, B-spline registration with cost function: NC, interpolation: B-spline and sampling percentage: 100% can be the foundation for developing an optimized atlas-based segmentation algorithm for intracochlear structures in clinical CT images.
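
    Both quantitative measures used above have compact implementations; the sketch below computes the Dice similarity coefficient from binary masks and the symmetric Hausdorff distance from surface point sets via SciPy.

      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      def dice(a, b):
          # Dice similarity coefficient between two binary segmentation masks.
          a, b = a.astype(bool), b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def hausdorff(pts_a, pts_b):
          # Symmetric Hausdorff distance between two (N x 3) point sets.
          return max(directed_hausdorff(pts_a, pts_b)[0],
                     directed_hausdorff(pts_b, pts_a)[0])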

  7. Two Enhancements of the Logarithmic Least-Squares Method for Analyzing Subjective Comparisons

    DTIC Science & Technology

    1989-03-25

    error term. For this model, the total sum of squares (SSTO), defined as $\mathrm{SSTO} = \sum_{i=1}^{n} (y_i - \bar{y})^2$, can be partitioned into error and regression sums ... of the regression line around the mean value. Mathematically, for the model given by equation A.4, $\mathrm{SSTO} = \mathrm{SSE} + \mathrm{SSR}$ (A.6), where SSTO is the total sum of squares (i.e., the variance of the $y_i$), SSE is the error sum of squares, and SSR is the regression sum of squares. SSTO, SSE, and SSR are given
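
    The partition can be verified numerically in a few lines; this sketch fits a simple linear regression to synthetic data and checks that SSTO = SSE + SSR holds to machine precision.

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.uniform(0, 10, 50)
      y = 2.0 + 0.8 * x + rng.normal(size=50)

      b1, b0 = np.polyfit(x, y, 1)            # slope and intercept of the fitted line
      y_hat = b0 + b1 * x

      ssto = np.sum((y - y.mean()) ** 2)      # total sum of squares
      sse = np.sum((y - y_hat) ** 2)          # error sum of squares
      ssr = np.sum((y_hat - y.mean()) ** 2)   # regression sum of squares
      assert np.isclose(ssto, sse + ssr)      # SSTO = SSE + SSR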

  8. How you ask matters: an experimental investigation of the influence of mood on memory self-perceptions and their relationship with objective memory.

    PubMed

    Lineweaver, Tara T; Brolsma, Jessica W

    2014-01-01

    Stronger relationships often emerge between mood and memory self-efficacy (MSE) than between MSE and memory abilities. We examined how social desirability, mood congruency and framing influence the mood-MSE relationship. Social desirability correlated with all self-report measures, and covarying social desirability diminished the mood-MSE relationship while enhancing the relationship between MSE and objective memory. Participants rated their memory more harshly on positively than neutrally or negatively worded MSE items. Current mood state did not affect MSE overall or when items were worded positively or neutrally. However, on negatively worded items, participants in a negative mood exhibited lower MSE than participants in a positive mood. Thus, both MSE and the mood-MSE relationship depended upon question wording. These results indicate that controlling social desirability and item framing on MSE questionnaires may reduce their confounding influence on memory self-perceptions and the influence of mood on self-reported abilities, allowing subjective memory to more accurately reflect objective memory in healthy and clinical populations.

  9. Tokamak Equilibrium Reconstruction with MSE-LS Data in DIII-D

    NASA Astrophysics Data System (ADS)

    Lao, L.; Grierson, B.; Burrell, K. H.

    2016-10-01

    Equilibrium analysis of plasmas in DIII-D using EFIT was upgraded to include the internal magnetic field determined from spectroscopic measurements of motional-Stark-effect line-splitting (MSE-LS). MSE-LS provides measurements of the magnitude of the internal magnetic field, rather than the pitch angle as provided by MSE line-polarization (MSE-LP) used in most tokamaks to date. EFIT MSE-LS reconstruction algorithms and verifications are described. The capability of MSE-LS to provide significant constraints on the equilibrium analysis is evaluated. Reconstruction results with both synthetic and experimental MSE-LS data from 10 DIII-D discharges run over a range of conditions show that MSE-LS measurements can contribute to the equilibrium reconstruction of pressure and safety factor profiles. Adequate MSE-LS measurement accuracy and number of spatial locations are necessary. The 7 available experimental measurements provide useful additional constraints when used with other internal measurements. Using MSE-LS as the only internal measurement yields less current profile information. Work supported by the PPPL Subcontract S013769-F and US DOE under DE-FC02-04ER54698.

  10. Dynamic whole body PET parametric imaging: II. Task-oriented statistical estimation

    PubMed Central

    Karakatsanis, Nicolas A.; Lodge, Martin A.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman

    2013-01-01

    In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15–20cm) of a single bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical FDG patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection. PMID:24080994
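
    The Patlak OLS step referred to above reduces, per voxel, to a straight-line fit in "stretched time". The sketch below is a generic illustration under common Patlak conventions, not the authors' implementation; the function and variable names are assumptions, and the plasma input cp is assumed nonzero:

        import numpy as np

        def patlak_ols(ct, cp, t):
            # Ordinary least-squares Patlak fit for one voxel time-activity curve.
            # ct: tissue activity, cp: plasma input function, t: frame mid-times.
            # Model: ct/cp = Ki * (integral of cp)/cp + V.
            cum_cp = np.concatenate(
                ([0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))))
            x = cum_cp / cp              # Patlak "stretched time"
            y = ct / cp                  # normalized tissue activity
            Ki, V = np.polyfit(x, y, 1)  # slope = uptake rate, intercept = volume
            return Ki, V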

  11. Dynamic whole-body PET parametric imaging: II. Task-oriented statistical estimation.

    PubMed

    Karakatsanis, Nicolas A; Lodge, Martin A; Zhou, Y; Wahl, Richard L; Rahmim, Arman

    2013-10-21

    In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15-20 cm) of a single-bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole-body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical (18)F-deoxyglucose patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30 min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole-body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection.

  12. Cluster-Continuum Calculations of Hydration Free Energies of Anions and Group 12 Divalent Cations.

    PubMed

    Riccardi, Demian; Guo, Hao-Bo; Parks, Jerry M; Gu, Baohua; Liang, Liyuan; Smith, Jeremy C

    2013-01-08

    Understanding aqueous phase processes involving group 12 metal cations is relevant to both environmental and biological sciences. Here, quantum chemical methods and polarizable continuum models are used to compute the hydration free energies of a series of divalent group 12 metal cations (Zn(2+), Cd(2+), and Hg(2+)) together with Cu(2+) and the anions OH(-), SH(-), Cl(-), and F(-). A cluster-continuum method is employed, in which gas-phase clusters of the ion and explicit solvent molecules are immersed in a dielectric continuum. Two approaches to define the size of the solute-water cluster are compared, in which the number of explicit waters used is either held constant or determined variationally as that of the most favorable hydration free energy. Results obtained with various polarizable continuum models are also presented. Each leg of the relevant thermodynamic cycle is analyzed in detail to determine how different terms contribute to the observed mean signed error (MSE) and the standard deviation of the error (STDEV) between theory and experiment. The use of a constant number of water molecules for each set of ions is found to lead to predicted relative trends that benefit from error cancellation. Overall, the best results are obtained with MP2 and the Solvent Model D polarizable continuum model (SMD), with eight explicit water molecules for anions and 10 for the metal cations, yielding a STDEV of 2.3 kcal mol(-1) and MSE of 0.9 kcal mol(-1) between theoretical and experimental hydration free energies, which range from -72.4 kcal mol(-1) for SH(-) to -505.9 kcal mol(-1) for Cu(2+). Using B3PW91 with DFT-D3 dispersion corrections (B3PW91-D) and SMD yields a STDEV of 3.3 kcal mol(-1) and MSE of 1.6 kcal mol(-1), to which adding MP2 corrections from smaller divalent metal cation water molecule clusters yields very good agreement with the full MP2 results. Using B3PW91-D and SMD, with two explicit water molecules for anions and six for divalent metal cations, also yields reasonable agreement with experimental values, due in part to fortuitous error cancellation associated with the metal cations. Overall, the results indicate that the careful application of quantum chemical cluster-continuum methods provides valuable insight into aqueous ionic processes that depend on both local and long-range electrostatic interactions with the solvent.

  13. AKLSQF - LEAST SQUARES CURVE FITTING

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least squares fit uniformly spaced data easily and efficiently. The program allows the user either to specify the tolerable least squares error of the fit or to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fitting up to a 100-degree polynomial. All computations in the program are carried out under Double Precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
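
    The degree-escalation strategy described above is easy to reproduce in outline; the following numpy sketch mirrors the idea (it is not the original Basic implementation):

        import numpy as np

        def fit_to_tolerance(x, y, tol, max_degree=100):
            # Raise the polynomial degree until the least squares error meets
            # tol, mirroring the AKLSQF strategy (sketch only).
            for degree in range(1, max_degree + 1):
                coeffs = np.polyfit(x, y, degree)
                err = np.sum((np.polyval(coeffs, x) - y) ** 2)
                if err <= tol:
                    return coeffs, degree, err
            return coeffs, max_degree, err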

  14. Adaptive motion artifact reducing algorithm for wrist photoplethysmography application

    NASA Astrophysics Data System (ADS)

    Zhao, Jingwei; Wang, Guijin; Shi, Chenbo

    2016-04-01

    Photoplethysmography (PPG) technology is widely used in wearable heart pulse rate monitoring. It may reveal potential risks of heart condition and cardiopulmonary function by detecting cardiac rhythms during physical exercise. However, the quality of the wrist photoelectric signal is very sensitive to motion artifact because of the thicker tissues and the smaller number of capillaries at the wrist. Therefore, motion artifact is the major factor that impedes heart rate measurement during high-intensity exercise. One accelerometer and three channels of light with different wavelengths are used in this research to analyze the coupled form of the motion artifact. A novel approach is proposed to separate the pulse signal from motion artifact by exploiting their mixing ratio in different optical paths. There are four major steps in our method: preprocessing, motion artifact estimation, adaptive filtering and heart rate calculation. Five healthy young men participated in the experiment. The treadmill speed was configured as 12 km/h, and all subjects ran for 3-10 minutes while swinging their arms naturally. The final result is compared with a chest strap. The average mean square error (MSE) is less than 3 beats per minute (BPM). The proposed method performed well during intense physical exercise and shows great robustness to individuals with different running styles and postures.
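
    The paper's adaptive filtering step is not spelled out here; a common choice for this task is an LMS adaptive noise canceller that uses the accelerometer as the motion reference. The following is a generic sketch of that idea, not the authors' algorithm:

        import numpy as np

        def lms_cancel(ppg, accel, n_taps=16, mu=0.01):
            # Generic LMS adaptive noise canceller: the accelerometer channel
            # serves as the motion-artifact reference; the filter output
            # estimates the artifact and the error e is the cleaned PPG.
            w = np.zeros(n_taps)
            cleaned = np.zeros_like(ppg)
            for n in range(n_taps, len(ppg)):
                x = accel[n - n_taps:n][::-1]   # reference tap vector
                e = ppg[n] - w @ x              # cleaned sample
                w += 2 * mu * e * x             # LMS weight update
                cleaned[n] = e
            return cleaned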

  15. A novel approach for the elimination of artefacts from EEG signals employing an improved Artificial Immune System algorithm

    NASA Astrophysics Data System (ADS)

    Suja Priyadharsini, S.; Edward Rajan, S.; Femilin Sheniha, S.

    2016-03-01

    Electroencephalogram (EEG) is the recording of electrical activities of the brain. It is contaminated by other biological signals, such as the cardiac signal (electrocardiogram), signals generated by eye movement/eye blinks (electrooculogram) and the muscular artefact signal (electromyogram), called artefacts. Optimisation is an important tool for solving many real-world problems. In the proposed work, artefact removal based on the adaptive neuro-fuzzy inference system (ANFIS) is employed, by optimising the parameters of ANFIS. An Artificial Immune System (AIS) algorithm is used to optimise the parameters of ANFIS (ANFIS-AIS). Implementation results show that ANFIS-AIS is more effective in removing artefacts from the EEG signal than ANFIS. Furthermore, in the proposed work, an improved AIS (IAIS) is developed by including suitable selection processes in the AIS algorithm. The performance of the proposed IAIS method is compared with AIS and with a genetic algorithm (GA). Measures such as signal-to-noise ratio, mean square error (MSE) value, correlation coefficient, power spectrum density plot and convergence time are used for analysing the performance of the proposed method. From the results, it is found that the IAIS algorithm converges faster than the AIS and performs better than the AIS and GA. Hence, IAIS-tuned ANFIS (ANFIS-IAIS) is effective in removing artefacts from EEG signals.

  16. Assessing and simulation of membrane technology for modifying starchy wastewater treatment

    NASA Astrophysics Data System (ADS)

    Hedayati Moghaddam, Amin; Hazrati, Hossein; Sargolzaei, Javad; Shayegan, Jalal

    2017-10-01

    In this study, a hydrophilic polyethersulfone membrane was used to improve the expensive and inefficient conventional treatment method of wheat starch production, resulting in a cleaner starch production process. To achieve cleaner production, the efficiency of starch production was enhanced and the organic loading rate of wastewater discharged into the treatment system was decreased simultaneously. To investigate the membrane performance, the dependency of the rejection factor and permeate flux on operative parameters such as temperature, flow rate, concentration, and pH of the feed was studied. Response surface methodology (RSM) was applied to arrange the experimental layout, which reduced the number of experiments while accounting for interactions between the parameters. The maximum achieved rejection factor and permeate flux were 97.5% and 2.42 L min-1 m-2, respectively. Furthermore, a fuzzy inference system was selected to model the non-linear relations between input and output variables, which cannot easily be explained by physical models. The best agreement between the experimental and predicted data for permeate flux was denoted by a correlation coefficient (R2) of 0.9752 and a mean square error (MSE) of 0.0072, where the defuzzification operator was the centroid. Similarly, the maximum R2 for the rejection factor was 0.9711, where the defuzzification operator was the mean of maxima (mom).

  17. Radiation dose reduction in computed tomography perfusion using spatial-temporal Bayesian methods

    NASA Astrophysics Data System (ADS)

    Fang, Ruogu; Raj, Ashish; Chen, Tsuhan; Sanelli, Pina C.

    2012-03-01

    In current computed tomography (CT) examinations, the associated X-ray radiation dose is of significant concern to patients and operators, especially in CT perfusion (CTP) imaging, which has a higher radiation dose due to its cine scanning technique. A simple and cost-effective means to perform the examinations is to lower the milliampere-seconds (mAs) parameter as low as reasonably achievable in data acquisition. However, lowering the mAs parameter will unavoidably increase data noise and degrade CT perfusion maps greatly if no adequate noise control is applied during image reconstruction. To capture the essential dynamics of CT perfusion, a simple spatial-temporal Bayesian method that uses a piecewise parametric model of the residual function is used, and the model parameters are then estimated from a Bayesian formulation of prior smoothness constraints on perfusion parameters. From the fitted residual function, reliable CTP parameter maps are obtained from low dose CT data. The merit of this scheme lies in the combination of an analytical piecewise residual function with a Bayesian framework using a simple prior spatial constraint for the CT perfusion application. On a dataset of 22 patients, this dynamic spatial-temporal Bayesian model yielded an increase in signal-to-noise ratio (SNR) of 78% and a decrease in mean-square-error (MSE) of 40% at a low radiation dose of 43 mA.

  18. Modeling and predicting the biofilm formation of Salmonella Virchow with respect to temperature and pH.

    PubMed

    Ariafar, M Nima; Buzrul, Sencer; Akçelik, Nefise

    2016-03-01

    Biofilm formation of Salmonella Virchow was monitored with respect to time at three different temperature (20, 25 and 27.5 °C) and pH (5.2, 5.9 and 6.6) values. As the temperature increased at a constant pH level, biofilm formation decreased while as the pH level increased at a constant temperature, biofilm formation increased. Modified Gompertz equation with high adjusted determination coefficient (Radj(2)) and low mean square error (MSE) values produced reasonable fits for the biofilm formation under all conditions. Parameters of the modified Gompertz equation could be described in terms of temperature and pH by use of a second order polynomial function. In general, as temperature increased maximum biofilm quantity, maximum biofilm formation rate and time of acceleration of biofilm formation decreased; whereas, as pH increased; maximum biofilm quantity, maximum biofilm formation rate and time of acceleration of biofilm formation increased. Two temperature (23 and 26 °C) and pH (5.3 and 6.3) values were used up to 24 h to predict the biofilm formation of S. Virchow. Although the predictions did not perfectly match with the data, reasonable estimates were obtained. In principle, modeling and predicting the biofilm formation of different microorganisms on different surfaces under various conditions could be possible.
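
    A fit of Zwietering's modified Gompertz equation, of the kind described above, can be sketched with scipy as follows (the data values and starting guesses are illustrative only, not the study's measurements):

        import numpy as np
        from scipy.optimize import curve_fit

        def modified_gompertz(t, A, mu_m, lam):
            # A = maximum biofilm quantity, mu_m = maximum formation rate,
            # lam = lag (time of acceleration of biofilm formation).
            return A * np.exp(-np.exp(mu_m * np.e / A * (lam - t) + 1.0))

        t = np.array([0, 6, 12, 24, 36, 48, 72], dtype=float)   # hours
        od = np.array([0.02, 0.05, 0.18, 0.55, 0.80, 0.88, 0.90])  # invented
        (A, mu_m, lam), _ = curve_fit(modified_gompertz, t, od,
                                      p0=[1.0, 0.05, 5.0])
        mse = np.mean((modified_gompertz(t, A, mu_m, lam) - od) ** 2)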

  19. Stochastic modeling for river pollution of Sungai Perlis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yunus, Nurul Izzaty Mohd.; Rahman, Haliza Abd.; Bahar, Arifah

    2015-02-03

    River pollution has been recognized as a contributor to a wide range of health problems and disorders in humans. It can pose health dangers to humans who come into contact with it, either directly or indirectly. Therefore, it is most important to measure the concentration of Biochemical Oxygen Demand (BOD) as a water quality parameter, since this parameter has long been the basic means for determining the degree of water pollution in rivers. In this study, BOD is used as a parameter to estimate the water quality at Sungai Perlis. It has been observed that Sungai Perlis is polluted due to lack of management and improper use of resources. Therefore, it is important to model the Sungai Perlis water quality in order to describe and predict the water quality system. A secondary data set of BOD concentrations was used, extracted from the Drainage and Irrigation Department Perlis State website. The first order differential equation from the Streeter-Phelps model was utilized as a deterministic model. Then, the model was developed into a stochastic model. Results from this study show that the stochastic model is more adequate for describing and predicting the BOD concentration and the water quality system in Sungai Perlis, as it has a smaller mean squared error (MSE).
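
    As a rough illustration of moving from the deterministic first-order BOD decay of Streeter-Phelps to a stochastic counterpart, the following sketch simulates both with Euler-Maruyama (the rate, noise level and initial BOD are invented for illustration; this is not the study's calibrated model):

        import numpy as np

        # Deterministic BOD decay dL/dt = -k*L (the BOD term of Streeter-Phelps)
        # extended to the SDE dL = -k*L dt + sigma*L dW.
        k, sigma = 0.23, 0.05      # illustrative decay rate (1/day) and noise
        dt, days = 0.1, 30.0
        n = int(days / dt)
        rng = np.random.default_rng(0)

        L_det = np.empty(n); L_sto = np.empty(n)
        L_det[0] = L_sto[0] = 10.0     # illustrative initial BOD (mg/L)
        for i in range(1, n):
            L_det[i] = L_det[i-1] - k * L_det[i-1] * dt
            dW = rng.normal(0.0, np.sqrt(dt))
            L_sto[i] = L_sto[i-1] - k * L_sto[i-1] * dt + sigma * L_sto[i-1] * dW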

  20. Extended Kalman smoother with differential evolution technique for denoising of ECG signal.

    PubMed

    Panigrahy, D; Sahu, P K

    2016-09-01

    The electrocardiogram (ECG) signal gives a lot of information on the physiology of the heart. In reality, noise from various sources interferes with the ECG signal. To get correct information on the physiology of the heart, noise cancellation of the ECG signal is required. In this paper, the effectiveness of an extended Kalman smoother (EKS) with the differential evolution (DE) technique for noise cancellation of the ECG signal is investigated. DE is used as an automatic parameter selection method for the selection of ten optimized components of the ECG signal, which are used to create the ECG signal according to the real ECG signal. These parameters are used by the EKS for the development of the state equation and also for initialization of the parameters of the EKS. The EKS framework is used for denoising the single-channel ECG signal. The effectiveness of the proposed noise cancellation technique has been evaluated by adding white and colored Gaussian noise and real muscle artifact noise at different SNRs to some visually clean ECG signals from the MIT-BIH arrhythmia database. The proposed noise cancellation technique shows better signal-to-noise ratio (SNR) improvement, lower mean square error (MSE) and lower percent of distortion (PRD) compared to other well-known methods.

  1. Efficient DV-HOP Localization for Wireless Cyber-Physical Social Sensing System: A Correntropy-Based Neural Network Learning Scheme

    PubMed Central

    Xu, Yang; Luo, Xiong; Wang, Weiping; Zhao, Wenbing

    2017-01-01

    Integrating wireless sensor network (WSN) into the emerging computing paradigm, e.g., cyber-physical social sensing (CPSS), has witnessed a growing interest, and WSN can serve as a social network while receiving more attention from the social computing research field. Then, the localization of sensor nodes has become an essential requirement for many applications over WSN. Meanwhile, the localization information of unknown nodes has strongly affected the performance of WSN. The received signal strength indication (RSSI) as a typical range-based algorithm for positioning sensor nodes in WSN could achieve accurate location with hardware saving, but is sensitive to environmental noises. Moreover, the original distance vector hop (DV-HOP) as an important range-free localization algorithm is simple, inexpensive and not related to the environment factors, but performs poorly when lacking anchor nodes. Motivated by these, various improved DV-HOP schemes with RSSI have been introduced, and we present a new neural network (NN)-based node localization scheme, named RHOP-ELM-RCC, through the use of DV-HOP, RSSI and a regularized correntropy criterion (RCC)-based extreme learning machine (ELM) algorithm (ELM-RCC). Firstly, the proposed scheme employs both RSSI and DV-HOP to evaluate the distances between nodes to enhance the accuracy of distance estimation at a reasonable cost. Then, with the help of ELM featured with a fast learning speed with a good generalization performance and minimal human intervention, a single hidden layer feedforward network (SLFN) on the basis of ELM-RCC is used to implement the optimization task for obtaining the location of unknown nodes. Since the RSSI may be influenced by the environmental noises and may bring estimation error, the RCC instead of the mean square error (MSE) estimation, which is sensitive to noises, is exploited in ELM. Hence, it may make the estimation more robust against outliers. Additionally, the least square estimation (LSE) in ELM is replaced by the half-quadratic optimization technique. Simulation results show that our proposed scheme outperforms other traditional localization schemes. PMID:28085084
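
    The robustness argument for correntropy over MSE can be seen in a few lines: under a Gaussian kernel, the correntropy-induced loss saturates for large errors instead of growing quadratically, which damps the influence of outliers. A minimal sketch (the kernel width sigma is an illustrative choice):

        import numpy as np

        def mse_loss(err):
            return np.mean(err ** 2)

        def correntropy_loss(err, sigma=1.0):
            # Correntropy-induced loss with a Gaussian kernel: bounded in each
            # error term, so gross outliers cannot dominate the objective.
            return np.mean(1.0 - np.exp(-err ** 2 / (2.0 * sigma ** 2)))

        err = np.array([0.1, -0.2, 0.05, 8.0])       # one gross outlier
        print(mse_loss(err), correntropy_loss(err))  # outlier dominates MSE only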

  2. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on Voyager 2 measurement distribution.

  3. Mutual coupling, channel model, and BER for curvilinear antenna arrays

    NASA Astrophysics Data System (ADS)

    Huang, Zhiyong

    This dissertation introduces a wireless communications system with an adaptive beam-former and investigates its performance with different antenna arrays. Mutual coupling, real antenna elements and channel models are included to examine the system performance. In a beamforming system, mutual coupling (MC) among the elements can significantly degrade the system performance. However, MC effects can be compensated if an accurate model of mutual coupling is available. A mutual coupling matrix model is utilized to compensate mutual coupling in the beamforming of a uniform circular array (UCA). Its performance is compared with other models in uplink and downlink beamforming scenarios. In addition, the predictions are compared with measurements and verified with results from full-wave simulations. In order to accurately investigate the minimum mean-square-error (MSE) of an adaptive array in MC, two different noise models, the environmental and the receiver noise, are modeled. The minimum MSEs with and without data domain MC compensation are analytically compared. The influence of mutual coupling on the convergence is also examined. In addition, the weight compensation method is proposed to attain the desired array pattern. Adaptive arrays with different geometries are implemented with the minimum MSE algorithm in the wireless communications system to combat interference at the same frequency. The bit-error-rate (BER) of systems with UCA, uniform rectangular array (URA) and UCA with center element are investigated in additive white Gaussian noise plus well-separated signals or random direction signals scenarios. The output SINR of an adaptive array with multiple interferers is analytically examined. The influence of the adaptive algorithm convergence on the BER is investigated. The UCA is then investigated in a narrowband Rician fading channel. The channel model is built and the space correlations are examined. The influence of the number of signal paths, number of the interferers, Doppler spread and convergence are investigated. The tracking mode is introduced to the adaptive array system, and it further improves the BER. The benefit of using faster data rate (wider bandwidth) is discussed. In order to have better performance in a 3D space, the geometries of uniform spherical array (USAs) are presented and different configurations of USAs are discussed. The LMS algorithm based on temporal a priori information is applied to UCAs and USAs to beamform the patterns. Their performances are compared based on simulation results. Based on the analytical and simulation results, it can be concluded that mutual coupling slightly influences the performance of the adaptive array in communication systems. In addition, arrays with curvilinear geometries perform well in AWGN and fading channels.

  4. Modeling Data Containing Outliers using ARIMA Additive Outlier (ARIMA-AO)

    NASA Astrophysics Data System (ADS)

    Saleh Ahmar, Ansari; Guritno, Suryo; Abdurakhman; Rahman, Abdul; Awi; Alimuddin; Minggi, Ilham; Arif Tiro, M.; Kasim Aidid, M.; Annas, Suwardi; Utami Sutiksno, Dian; Ahmar, Dewi S.; Ahmar, Kurniawan H.; Abqary Ahmar, A.; Zaki, Ahmad; Abdullah, Dahlan; Rahim, Robbi; Nurdiyanto, Heri; Hidayat, Rahmat; Napitupulu, Darmawan; Simarmata, Janner; Kurniasih, Nuning; Andretti Abdillah, Leon; Pranolo, Andri; Haviluddin; Albra, Wahyudin; Arifin, A. Nurani M.

    2018-01-01

    The aim of this study is to discuss the detection and correction of data containing additive outliers (AO) in the ARIMA(p, d, q) model. Detection and correction of the data use an iterative procedure popularized by Box, Jenkins, and Reinsel (1994). Using this method we obtained ARIMA models fit to the data containing AO; the coefficients obtained from the iterative regression step are added to the original ARIMA model. For the simulated data containing AO, the initial model was ARIMA(2,0,0) with MSE = 36,780; after detection and correction of the data, the iterated ARIMA(2,0,0) model with regression coefficients Zt = 0.106 + 0.204Zt-1 + 0.401Zt-2 - 329X1(t) + 115X2(t) + 35.9X3(t) achieved MSE = 19,365. This shows an improvement in the forecasting error rate.
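
    A highly simplified, single-pass sketch of the detect-and-correct idea follows (not the authors' full iterative procedure; the series z, the threshold and the AR(2)-by-OLS fit are all illustrative assumptions):

        import numpy as np

        def fit_ar2(z, X=None):
            # OLS fit of z_t = c + a1*z_{t-1} + a2*z_{t-2} (+ dummy regressors).
            y = z[2:]
            cols = [np.ones_like(y), z[1:-1], z[:-2]]
            if X is not None:
                cols += [x[2:] for x in X]
            A = np.column_stack(cols)
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            return beta, y - A @ beta

        rng = np.random.default_rng(1)
        z = rng.normal(size=200); z[100] += 8.0   # synthetic series with one AO

        beta, resid = fit_ar2(z)                  # initial fit
        idx = int(np.argmax(np.abs(resid)))       # worst residual position
        if abs(resid[idx]) > 3 * resid.std():     # crude AO detection threshold
            dummy = np.zeros_like(z); dummy[idx + 2] = 1.0   # pulse regressor X(t)
            beta, resid = fit_ar2(z, X=[dummy])   # refit with the AO dummy
        print("MSE after correction:", np.mean(resid ** 2))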

  5. Improved estimation of subject-level functional connectivity using full and partial correlation with empirical Bayes shrinkage.

    PubMed

    Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A

    2018-05-15

    Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure, denoted the mean squared error intra-class correlation coefficient (ICC_MSE), to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully when using partial correlations.
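
    The shrinkage idea described above has a simple generic form: weight each subject's estimate against the group mean according to the ratio of within-subject (noise) variance to between-subject variance. The sketch below illustrates that general recipe only; it is not the authors' measurement error model, and the variance inputs are assumed to be supplied:

        import numpy as np

        def eb_shrink(subject_fc, var_within, var_between):
            # subject_fc: (n_subjects, n_connections) array of FC estimates.
            # Shrinkage weight lambda = var_within / (var_within + var_between):
            # noisier estimates borrow more strength from the group average.
            group_mean = subject_fc.mean(axis=0)
            lam = var_within / (var_within + var_between)
            return lam * group_mean + (1.0 - lam) * subject_fc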

  6. Response Surface Analysis of Experiments with Random Blocks

    DTIC Science & Technology

    1988-09-01

    partitioned into a lack of fit sum of squares, SSLOF, and a pure error sum of squares, SSPE. The latter is obtained by pooling the pure error sums of squares ... from the blocks. Tests concerning the polynomial effects can then proceed using SSPE as the error term in the denominators of the F test statistics. ... the center point in each of the three blocks is equal to SSPE = 2.0127 with 5 degrees of freedom. Hence, the lack of fit sum of squares is SSLOF

  7. Least-Squares Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1990-01-01

    Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes polynomial providing least-squares best fit to uniformly spaced data. Enables user to specify tolerable least-squares error in fit or degree of polynomial. AKLSQF returns polynomial and actual least-squares-fit error incurred in operation. Data supplied to routine either by direct keyboard entry or via file. Written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler.

  8. A comparison of model-based imputation methods for handling missing predictor values in a linear regression model: A simulation study

    NASA Astrophysics Data System (ADS)

    Hasan, Haliza; Ahmad, Sanizah; Osman, Balkish Mohd; Sapri, Shamsiah; Othman, Nadirah

    2017-08-01

    In regression analysis, missing covariate data is a common problem. Many researchers use ad hoc methods to overcome it because they are easy to implement. However, these methods require assumptions about the data that rarely hold in practice. Model-based methods such as Maximum Likelihood (ML) using the expectation maximization (EM) algorithm and Multiple Imputation (MI) are more promising when dealing with difficulties caused by missing data. Then again, inappropriate methods of missing value imputation can lead to serious bias that severely affects the parameter estimates. The main objective of this study is to provide a better understanding of the missing data concept to assist researchers in selecting appropriate missing data imputation methods. A simulation study was performed to assess the effects of different missing data techniques on the performance of a regression model. The covariate data were generated from an underlying multivariate normal distribution and the dependent variable was generated as a combination of explanatory variables. Missing values in the covariates were simulated using a mechanism called missing at random (MAR). Four levels of missingness (10%, 20%, 30% and 40%) were imposed. The ML and MI techniques available within SAS software were investigated. A linear regression model was fitted and the model performance measures, MSE and R-squared, were obtained. Results of the analysis showed that MI is superior in handling missing data, with the highest R-squared and lowest MSE, when the percentage of missingness is less than 30%. Neither method handles levels of missingness above 30% well.

  9. Redox-responsive mesoporous selenium delivery of doxorubicin targets MCF-7 cells and synergistically enhances its anti-tumor activity.

    PubMed

    Zhao, Shuang; Yu, Qianqian; Pan, Jiali; Zhou, Yanhui; Cao, Chengwen; Ouyang, Jian-Ming; Liu, Jie

    2017-05-01

    To reduce the side effects and enhance the anti-tumor activities of anticancer drugs in the clinic, the use of nano mesoporous materials, with mesoporous silica (MSN) being the best-studied, has become an effective method of drug delivery. In this study, we successfully synthesized mesoporous selenium (MSe) nanoparticles and first introduced them to the field of drug delivery. Loading MSe with doxorubicin (DOX) is mainly driven by the physical adsorption mechanism of the mesopores, and our results demonstrated that MSe could synergistically enhance the antitumor activity of DOX. Coating the surface of MSe@DOX with human serum albumin (HSA) generated a unique redox-responsive nanoparticle (HSA-MSe@DOX) that demonstrated glutathione-dependent drug release, increased tumor-targeting effects and enhanced cellular uptake through nanoparticle interaction with SPARC in MCF-7 cells. In vitro, HSA-MSe@DOX prominently induced cancer cell toxicity by synergistically enhancing the effects of MSe and DOX. Moreover, HSA-MSe@DOX possessed tumor-targeting abilities in tumor-bearing nude mice and not only decreased the side effects associated with DOX, but also enhanced its antitumor activity. Therefore, HSA-MSe@DOX is a promising new drug that warrants further evaluation in the treatment of tumors.

  10. Relationship of multiscale entropy to task difficulty and sway velocity in healthy young adults.

    PubMed

    Lubetzky, Anat V; Price, Robert; Ciol, Marcia A; Kelly, Valerie E; McCoy, Sarah W

    2015-01-01

    Multiscale entropy (MSE) is a nonlinear measure of postural control that quantifies how complex the postural sway is by assigning a complexity index to the center of pressure (COP) oscillations. While complexity has been shown to be task dependent, the relationship between sway complexity and level of task challenge is currently unclear. This study tested whether MSE can detect short-term changes in postural control in response to increased standing balance task difficulty in healthy young adults and compared this response to that of a traditional measure of postural steadiness, root mean square of velocity (VRMS). COP data from 20 s of quiet stance were analyzed when 30 healthy young adults stood on the following surfaces: on floor and foam with eyes open and closed and on the compliant side of a Both Sides Up (BOSU) ball with eyes open. Complexity index (CompI) was derived from MSE curves. Repeated measures analysis of variance across standing conditions showed a statistically significant effect of condition (p < 0.001) in both the anterior-posterior and medio-lateral directions for both CompI and VRMS. In the medio-lateral direction there was a gradual increase in CompI and VRMS with increased standing challenge. In the anterior-posterior direction, VRMS showed a gradual increase whereas CompI showed significant differences between the BOSU and all other conditions. CompI was moderately and significantly correlated with VRMS. Both nonlinear and traditional measures of postural control were sensitive to the task and increased with increasing difficulty of standing balance tasks in healthy young adults.
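
    For orientation, multiscale entropy follows the usual recipe of coarse-graining the COP series at increasing scales and computing sample entropy at each scale. The compact numpy sketch below illustrates that recipe only (O(N^2) memory, no guards for short or constant series; the parameter choices are illustrative, not the study's):

        import numpy as np

        def sample_entropy(x, m=2, r_frac=0.15):
            # SampEn(m, r): negative log of the conditional probability that
            # sequences matching for m points (within r) also match for m + 1.
            x = np.asarray(x, dtype=float)
            r = r_frac * x.std()

            def match_count(mm):
                t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
                d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
                return (np.sum(d <= r) - len(t)) / 2.0   # drop self-matches

            return -np.log(match_count(m + 1) / match_count(m))

        def complexity_index(cop, max_scale=6):
            # Coarse-grain the COP series at each scale and sum the sample
            # entropies, analogous to a complexity index such as CompI.
            total = 0.0
            for s in range(1, max_scale + 1):
                n = len(cop) // s
                coarse = np.asarray(cop[:n * s], float).reshape(n, s).mean(axis=1)
                total += sample_entropy(coarse)
            return total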

  11. Developing appropriate methods for cost-effectiveness analysis of cluster randomized trials.

    PubMed

    Gomes, Manuel; Ng, Edmond S-W; Grieve, Richard; Nixon, Richard; Carpenter, James; Thompson, Simon G

    2012-01-01

    Cost-effectiveness analyses (CEAs) may use data from cluster randomized trials (CRTs), where the unit of randomization is the cluster, not the individual. However, most studies use analytical methods that ignore clustering. This article compares alternative statistical methods for accommodating clustering in CEAs of CRTs. Our simulation study compared the performance of statistical methods for CEAs of CRTs with 2 treatment arms. The study considered a method that ignored clustering--seemingly unrelated regression (SUR) without a robust standard error (SE)--and 4 methods that recognized clustering--SUR and generalized estimating equations (GEEs), both with robust SE, a "2-stage" nonparametric bootstrap (TSB) with shrinkage correction, and a multilevel model (MLM). The base case assumed CRTs with moderate numbers of balanced clusters (20 per arm) and normally distributed costs. Other scenarios included CRTs with few clusters, imbalanced cluster sizes, and skewed costs. Performance was reported as bias, root mean squared error (rMSE), and confidence interval (CI) coverage for estimating incremental net benefits (INBs). We also compared the methods in a case study. Each method reported low levels of bias. Without the robust SE, SUR gave poor CI coverage (base case: 0.89 v. nominal level: 0.95). The MLM and TSB performed well in each scenario (CI coverage, 0.92-0.95). With few clusters, the GEE and SUR (with robust SE) had coverage below 0.90. In the case study, the mean INBs were similar across all methods, but ignoring clustering underestimated statistical uncertainty and the value of further research. MLMs and the TSB are appropriate analytical methods for CEAs of CRTs with the characteristics described. SUR and GEE are not recommended for studies with few clusters.

  12. A hybrid artificial neural network and particle swarm optimization for prediction of removal of hazardous dye brilliant green from aqueous solution using zinc sulfide nanoparticle loaded on activated carbon.

    PubMed

    Ghaedi, M; Ansari, A; Bahari, F; Ghaedi, A M; Vafaei, A

    2015-02-25

    In the present study, zinc sulfide nanoparticles loaded on activated carbon (ZnS-NP-AC) were synthesized simply in the presence of ultrasound and characterized using different techniques such as SEM and BET analysis. This material was then used for brilliant green (BG) removal. The dependency of the BG removal percentage on various parameters, including pH, adsorbent dosage, initial dye concentration and contact time, was examined and optimized. The mechanism and rate of adsorption were ascertained by fitting experimental data at various times to conventional kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich and intra-particle diffusion models. Comparison according to general criteria such as the relative error in adsorption capacity and the correlation coefficient confirms the suitability of the pseudo-second-order kinetic model for explaining the data. The Langmuir model efficiently explains the behavior of the adsorption system and gives full information about the interaction of BG with ZnS-NP-AC. A multiple linear regression (MLR) model and a hybrid artificial neural network and particle swarm optimization (ANN-PSO) model were used for prediction of brilliant green adsorption onto ZnS-NP-AC. Comparison of the results obtained using the offered models confirms the higher ability of the ANN-PSO model compared to the MLR model for prediction of BG adsorption onto ZnS-NP-AC. Using the optimal ANN-PSO model, the coefficients of determination (R(2)) were 0.9610 and 0.9506, and the mean squared error (MSE) values were 0.0020 and 0.0022 for the training and testing data sets, respectively.

  13. A hybrid artificial neural network and particle swarm optimization for prediction of removal of hazardous dye brilliant green from aqueous solution using zinc sulfide nanoparticle loaded on activated carbon

    NASA Astrophysics Data System (ADS)

    Ghaedi, M.; Ansari, A.; Bahari, F.; Ghaedi, A. M.; Vafaei, A.

    2015-02-01

    In the present study, zinc sulfide nanoparticles loaded on activated carbon (ZnS-NP-AC) were synthesized simply in the presence of ultrasound and characterized using different techniques such as SEM and BET analysis. This material was then used for brilliant green (BG) removal. The dependency of the BG removal percentage on various parameters, including pH, adsorbent dosage, initial dye concentration and contact time, was examined and optimized. The mechanism and rate of adsorption were ascertained by fitting experimental data at various times to conventional kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich and intra-particle diffusion models. Comparison according to general criteria such as the relative error in adsorption capacity and the correlation coefficient confirms the suitability of the pseudo-second-order kinetic model for explaining the data. The Langmuir model efficiently explains the behavior of the adsorption system and gives full information about the interaction of BG with ZnS-NP-AC. A multiple linear regression (MLR) model and a hybrid artificial neural network and particle swarm optimization (ANN-PSO) model were used for prediction of brilliant green adsorption onto ZnS-NP-AC. Comparison of the results obtained using the offered models confirms the higher ability of the ANN-PSO model compared to the MLR model for prediction of BG adsorption onto ZnS-NP-AC. Using the optimal ANN-PSO model, the coefficients of determination (R2) were 0.9610 and 0.9506, and the mean squared error (MSE) values were 0.0020 and 0.0022 for the training and testing data sets, respectively.

  14. Predictive modelling of Lactobacillus casei KN291 survival in fermented soy beverage.

    PubMed

    Zielińska, Dorota; Kołożyn-Krajewska, Danuta; Goryl, Antoni; Motyl, Ilona

    2014-02-01

    The aim of the study was to construct and verify predictive growth and survival models of potentially probiotic bacteria in fermented soy beverage. The research material included a natural soy beverage (Polgrunt, Poland) and a strain of lactic acid bacteria (LAB), Lactobacillus casei KN291. To construct predictive models for the growth and survival of L. casei KN291 bacteria in the fermented soy beverage, we designed an experiment that allowed the collection of CFU data. Fermented soy beverage samples were stored at various temperatures (5, 10, 15, and 20°C) for 28 days. On the basis of the obtained data concerning the survival of L. casei KN291 bacteria in soy beverage at different temperature and time conditions, two non-linear models (r2 = 0.68-0.93) and two surface models (r2 = 0.76-0.79) were constructed; these models described the behaviour of the bacteria in the product to a satisfactory extent. Verification of the surface models was carried out using validation data at 7°C over 28 days. It was found that the applied models were well fitted and carried only small systematic errors, as evidenced by the accuracy factor (Af), bias factor (Bf) and mean squared error (MSE). The constructed microbiological growth and survival models of L. casei KN291 in fermented soy beverage enable the estimation of the product's shelf-life period, which in this case is defined by the requirement that the level of the bacteria remain above 10(6) CFU/cm(3). The constructed models may be useful as a tool for manufacturers of probiotic foods to estimate shelf-life.
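
    The accuracy and bias factors mentioned above have standard closed forms in predictive microbiology; a minimal sketch, assuming pred and obs are matched arrays of positive predicted and observed counts:

        import numpy as np

        def validation_indices(pred, obs):
            # Bias factor Bf, accuracy factor Af and MSE on log10 counts,
            # as commonly used to judge predictive microbiology models.
            q = np.log10(np.asarray(pred) / np.asarray(obs))
            Bf = 10 ** q.mean()           # systematic over/under-prediction
            Af = 10 ** np.abs(q).mean()   # average spread around the data
            mse = np.mean(q ** 2)
            return Bf, Af, mse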

  15. Adaptive reconfigurable V-BLAST type equalizer for cognitive MIMO-OFDM radios

    NASA Astrophysics Data System (ADS)

    Ozden, Mehmet Tahir

    2015-12-01

    An adaptive channel shortening equalizer design for multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) radio receivers is considered in this presentation. The proposed receiver has desirable features for cognitive and software defined radio implementations. It consists of two sections: a MIMO decision feedback equalizer (MIMO-DFE) and adaptive multiple Viterbi detection. In the MIMO-DFE section, a complete modified Gram-Schmidt orthogonalization of multichannel input data is accomplished using sequential processing multichannel Givens lattice stages, so that a Vertical Bell Laboratories Layered Space Time (V-BLAST) type MIMO-DFE is realized at the front-end section of the channel shortening equalizer. Matrix operations, a major bottleneck for receiver operations, are accordingly avoided, and only scalar operations are used. A highly modular and regular radio receiver architecture is achieved, with a structure suitable for digital signal processing (DSP) chip and field programmable gate array (FPGA) implementations, which are important for software defined radio realizations. The MIMO-DFE section of the proposed receiver can also be reconfigured for spectrum sensing and positioning functions, which are important tasks for cognitive radio applications. In the adaptive multiple Viterbi detection section, a systolic array implementation for each channel is performed so that a receiver architecture with high computational concurrency is attained. The total computational complexity is given in terms of equalizer and desired response filter lengths, alphabet size, and number of antennas. The performance of the proposed receiver is presented for the two-channel case by means of mean squared error (MSE) and probability of error evaluations, which are conducted for time-invariant and time-variant channel conditions, orthogonal and nonorthogonal transmissions, and two different modulation schemes.

  16. Developing Appropriate Methods for Cost-Effectiveness Analysis of Cluster Randomized Trials

    PubMed Central

    Gomes, Manuel; Ng, Edmond S.-W.; Nixon, Richard; Carpenter, James; Thompson, Simon G.

    2012-01-01

    Aim. Cost-effectiveness analyses (CEAs) may use data from cluster randomized trials (CRTs), where the unit of randomization is the cluster, not the individual. However, most studies use analytical methods that ignore clustering. This article compares alternative statistical methods for accommodating clustering in CEAs of CRTs. Methods. Our simulation study compared the performance of statistical methods for CEAs of CRTs with 2 treatment arms. The study considered a method that ignored clustering—seemingly unrelated regression (SUR) without a robust standard error (SE)—and 4 methods that recognized clustering—SUR and generalized estimating equations (GEEs), both with robust SE, a “2-stage” nonparametric bootstrap (TSB) with shrinkage correction, and a multilevel model (MLM). The base case assumed CRTs with moderate numbers of balanced clusters (20 per arm) and normally distributed costs. Other scenarios included CRTs with few clusters, imbalanced cluster sizes, and skewed costs. Performance was reported as bias, root mean squared error (rMSE), and confidence interval (CI) coverage for estimating incremental net benefits (INBs). We also compared the methods in a case study. Results. Each method reported low levels of bias. Without the robust SE, SUR gave poor CI coverage (base case: 0.89 v. nominal level: 0.95). The MLM and TSB performed well in each scenario (CI coverage, 0.92–0.95). With few clusters, the GEE and SUR (with robust SE) had coverage below 0.90. In the case study, the mean INBs were similar across all methods, but ignoring clustering underestimated statistical uncertainty and the value of further research. Conclusions. MLMs and the TSB are appropriate analytical methods for CEAs of CRTs with the characteristics described. SUR and GEE are not recommended for studies with few clusters. PMID:22016450

  17. Error propagation of partial least squares for parameters optimization in NIR modeling.

    PubMed

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-05

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. The error propagation of modeling parameters for water quantity in corn and geniposide quantity in Gardenia was characterized by both type I and type II error. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrate how, and to what extent, the different modeling parameters affect error propagation of PLS for parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials established a sound process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters for other multivariate calibration models.

  18. Error propagation of partial least squares for parameters optimization in NIR modeling

    NASA Astrophysics Data System (ADS)

    Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng

    2018-03-01

    A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. The error propagation of modeling parameters for water quantity in corn and geniposide quantity in Gardenia was characterized by both type I and type II error. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrate how, and to what extent, the different modeling parameters affect error propagation of PLS for parameter optimization in NIR modeling: the larger the error weight, the worse the model. Finally, our trials established a sound process for developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters for other multivariate calibration models.
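
    One of the modeling parameters studied above, the number of latent variables, is commonly chosen by cross-validated MSE. A generic sklearn sketch of that selection step (an illustration, not the authors' procedure):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        def pick_latent_variables(X, y, max_lv=15, cv=5):
            # X: spectra (n_samples, n_wavelengths), y: reference values.
            # Return the latent-variable count with the lowest CV MSE.
            mse = []
            for k in range(1, max_lv + 1):
                scores = cross_val_score(PLSRegression(n_components=k), X, y,
                                         scoring="neg_mean_squared_error", cv=cv)
                mse.append(-scores.mean())
            return int(np.argmin(mse)) + 1, mse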

  19. The phase transition of matrix recovery from Gaussian measurements matches the minimax MSE of matrix denoising.

    PubMed

    Donoho, David L; Gavish, Matan; Montanari, Andrea

    2013-05-21

    Let X_0 be an unknown M by N matrix. In matrix recovery, one takes n < MN linear measurements y_1, ..., y_n of X_0, where y_i = Tr(A_i^T X_0) and each A_i is an M by N matrix. A popular approach for matrix recovery is nuclear norm minimization (NNM): solving the convex optimization problem min ||X||_* subject to y_i = Tr(A_i^T X) for all 1 ≤ i ≤ n, where ||·||_* denotes the nuclear norm, namely, the sum of singular values. Empirical work reveals a phase transition curve, stated in terms of the undersampling fraction δ(n,M,N) = n/(MN), rank fraction ρ = rank(X_0)/min{M,N}, and aspect ratio β = M/N. Specifically, when the measurement matrices A_i have independent standard Gaussian random entries, a curve δ*(ρ) = δ*(ρ;β) exists such that, if δ > δ*(ρ), NNM typically succeeds for large M,N, whereas if δ < δ*(ρ), it typically fails. An apparently quite different problem is matrix denoising in Gaussian noise, in which an unknown M by N matrix X_0 is to be estimated based on direct noisy measurements Y = X_0 + Z, where the matrix Z has independent and identically distributed Gaussian entries. A popular matrix denoising scheme solves the unconstrained optimization problem min ||Y - X||_F^2 / 2 + λ||X||_*. When optimally tuned, this scheme achieves the asymptotic minimax mean-squared error M(ρ;β) = lim_{M,N→∞} inf_λ sup_{rank(X) ≤ ρ·M} MSE(X, X̂_λ), where M/N → β. We report extensive experiments showing that the phase transition δ*(ρ) in the first problem, matrix recovery from Gaussian measurements, coincides with the minimax risk curve M(ρ) = M(ρ;β) in the second problem, matrix denoising in Gaussian noise: δ*(ρ) = M(ρ), for any rank fraction 0 < ρ < 1 (at each common aspect ratio β). Our experiments considered matrices belonging to two constraint classes: real M by N matrices, of various ranks and aspect ratios, and real symmetric positive-semidefinite N by N matrices, of various ranks.
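
    The unconstrained denoising problem quoted above has a well-known closed-form solution: soft-threshold the singular values of Y by λ. A minimal numpy sketch:

        import numpy as np

        def svt_denoise(Y, lam):
            # Closed-form minimizer of ||Y - X||_F^2 / 2 + lam * ||X||_* :
            # soft-threshold the singular values of Y by lam.
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt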

  20. Automatic classification of artifactual ICA-components for artifact removal in EEG signals.

    PubMed

    Winkler, Irene; Haufe, Stefan; Tangermann, Michael

    2011-08-02

    Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g., for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand-labeled by experts as artifactual or brain sources, and tested on 1080 new components of RT data from the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used data with different channel setups and from new subjects. Based on six features only, the optimized linear classifier performed on a par with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components. We propose a universal and efficient classifier of ICA components for the subject-independent removal of artifacts from EEG data. Based on linear methods, it is applicable to different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye and muscle artifacts. Its performance and generalization ability are demonstrated on data from different EEG studies.

  1. Predicting carcinogenicity of diverse chemicals using probabilistic neural network modeling approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Kunwar P., E-mail: kpsingh_52@yahoo.com; Environmental Chemistry Division, CSIR-Indian Institute of Toxicology Research, Post Box 80, Mahatma Gandhi Marg, Lucknow 226 001; Gupta, Shikha

    Robust global models capable of discriminating positive and non-positive carcinogens, and of predicting the carcinogenic potency of chemicals in rodents, were developed. The dataset of 834 structurally diverse chemicals extracted from the Carcinogenic Potency Database (CPDB) was used, containing 466 positive and 368 non-positive carcinogens. Twelve non-quantum mechanical molecular descriptors were derived. The structural diversity of the chemicals and the nonlinearity in the data were evaluated using the Tanimoto similarity index and Brock–Dechert–Scheinkman (BDS) statistics. Probabilistic neural network (PNN) and generalized regression neural network (GRNN) models were constructed for the classification and function optimization problems using the carcinogenicity end point in rat. Validation of the models was performed using internal and external procedures employing a wide series of statistical checks. The PNN constructed using five descriptors rendered a classification accuracy of 92.09% in the complete rat data. The PNN model rendered classification accuracies of 91.77%, 80.70% and 92.08% in the mouse, hamster and pesticide data, respectively. The GRNN constructed with nine descriptors yielded a correlation coefficient of 0.896 between the measured and predicted carcinogenic potency, with a mean squared error (MSE) of 0.44 in the complete rat data. The rat carcinogenicity model (GRNN), applied to the mouse and hamster data, yielded correlation coefficients and MSEs of 0.758, 0.71 and 0.760, 0.46, respectively. The results suggest wide applicability of the inter-species models in predicting the carcinogenic potency of chemicals. Both the PNN and GRNN (inter-species) models constructed here can be useful tools for predicting the carcinogenicity of new chemicals for regulatory purposes. - Graphical abstract: Figure (a) shows classification accuracies (positive and non-positive carcinogens) in rat, mouse, hamster, and pesticide data yielded by the optimal PNN model. Figure (b) shows the generalization and predictive abilities of the interspecies GRNN model in predicting the carcinogenic potency of diverse chemicals. - Highlights: • Global robust models constructed for carcinogenicity prediction of diverse chemicals. • Tanimoto/BDS tests revealed structural diversity of chemicals and nonlinearity in data. • PNN/GRNN successfully predicted carcinogenicity/carcinogenic potency of chemicals. • Developed interspecies PNN/GRNN models for carcinogenicity prediction. • Proposed models can be used as tools to predict carcinogenicity of new chemicals.

  2. USING TIME VARIANT VOLTAGE TO CALCULATE ENERGY CONSUMPTION AND POWER USE OF BUILDING SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makhmalbaf, Atefe; Augenbroe, Godfried

    2015-12-09

    Buildings are the main consumers of electricity across the world. However, in research and studies related to building performance assessment, the focus has been on evaluating the energy efficiency of buildings, whereas instantaneous power efficiency has been overlooked as an important aspect of total energy consumption. As a result, we never developed adequate models that capture both thermal and electrical characteristics (e.g., voltage) of building systems to assess the impact of variations in the power system and emerging technologies of the smart grid on buildings' energy and power performance, and vice versa. This paper argues that the power performance of buildings as a function of electrical parameters should be evaluated in addition to systems' mechanical and thermal behavior. The main advantage of capturing the electrical behavior of building load is to better understand instantaneous power consumption and, more importantly, to control it. Voltage is one of the electrical parameters that can be used to describe load. Hence, voltage-dependent power models are constructed in this work and coupled with existing thermal energy models. The lack of models that describe the electrical behavior of systems also adds to the uncertainty of energy consumption calculations carried out in building energy simulation tools such as EnergyPlus, a common building energy modeling and simulation tool. To integrate the voltage-dependent power models with thermal models, the thermal cycle (operation mode) of each system was fed into the voltage-based electrical model. Energy consumption of the systems used in this study was simulated using EnergyPlus. Simulated results were then compared with estimated and measured power data. The mean square error (MSE) between simulated, estimated, and measured values was calculated. Results indicate that the estimated power has a lower MSE against measured data than the simulated results. The results discussed in this paper illustrate the significance of enhancing building energy models with electrical characteristics. This would support different studies, such as those related to modernization of the power system that require micro-scale building-grid interaction, evaluating building energy efficiency with power efficiency considerations, and design and control decisions that rely on the accuracy of building energy simulation results.

  3. Evaluation of Different Dose-Response Models for High Hydrostatic Pressure Inactivation of Microorganisms

    PubMed Central

    2017-01-01

    Modeling of microbial inactivation by high hydrostatic pressure (HHP) requires a plot of the log microbial count or survival ratio versus time under constant pressure and temperature. However, at low pressure and temperature values, very long holding times are needed to obtain measurable inactivation. Since holding time has a significant effect on the cost of HHP processing, it may be reasonable to fix the time at an appropriate value and quantify the inactivation with respect to pressure. Such a plot is called a dose-response curve, and it may be more useful than traditional inactivation modeling, since short holding times with different pressure values can be selected and used for the modeling of HHP inactivation. For this purpose, 49 dose-response curves (with at least 4 log10 reduction and ≥5 data points including the atmospheric pressure value (P = 0.1 MPa), and with holding time ≤10 min) for HHP inactivation of microorganisms obtained from published studies were fitted with four different models, namely the Discrete model, Shoulder model, Fermi equation, and Weibull model, and the pressure value needed for 5 log10 inactivation (P5) was calculated for each model. The Shoulder model and Fermi equation produced exactly the same parameter and P5 values, while the Discrete model produced similar or sometimes exactly the same parameter values as the Fermi equation. The Weibull model produced the worst fit (the lowest adjusted determination coefficient (R2adj) and highest mean square error (MSE) values), while the Fermi equation had the best fit (the highest R2adj and lowest MSE values). The parameters of the models and the P5 values of each model can be useful for further experimental design of HHP processing and for comparing the pressure resistance of different microorganisms. Further experiments can be done to verify the P5 values at given conditions. The procedure given in this study can also be extended to enzyme inactivation by HHP. PMID:28880255
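
    For concreteness, here is a hedged sketch of the fitting step: a Fermi (logistic) dose-response model is fitted to invented pressure/survival data with scipy.optimize.curve_fit, and P5 is recovered by root-finding; the data values are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def fermi_log10_survival(P, Pc, k):
    """Fermi (logistic) dose-response model: log10 survival ratio vs. pressure."""
    return np.log10(1.0 / (1.0 + np.exp((P - Pc) / k)))

# Hypothetical dose-response data: pressure (MPa) vs. log10(N/N0).
P = np.array([0.1, 100, 200, 300, 400, 500, 600])
logS = np.array([0.0, -0.1, -0.6, -1.8, -3.2, -4.6, -6.1])

popt, _ = curve_fit(fermi_log10_survival, P, logS, p0=[200.0, 30.0])
mse = np.mean((fermi_log10_survival(P, *popt) - logS) ** 2)

# P5: the pressure giving a 5-log10 reduction, found by root-finding.
P5 = brentq(lambda p: fermi_log10_survival(p, *popt) + 5.0, 0.1, 1000.0)
print(f"Pc={popt[0]:.1f} MPa, k={popt[1]:.1f}, MSE={mse:.3f}, P5={P5:.0f} MPa")
```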

  4. On the Brink of Shifting Paradigms, Molecular Systems Engineering Ethics Needs to Take a Proactive Approach.

    PubMed

    Heidari, Raheleh; Elger, Bernice S; Stutzki, Ralf

    2016-01-01

    Molecular Systems Engineering (MSE) is a paradigm shift in both engineering and the life sciences. While the field is still in its infancy, the prospects of MSE for revolutionising technology are promising. MSE will offer a wide range of applications in clinical, biotechnological and engineering fields while simultaneously posing serious questions about the ethical and societal aspects of such technology. The moral and societal aspects of MSE need systematic investigation from scientific and social perspectives. In a democratic setting, society itself needs to be consulted on, and to influence, the societal outcomes of MSE's cutting-edge technology. For this purpose MSE needs inclusive public engagement strategies that bring together the public, ethicists, scientists and policy makers for an optimum flow of information that maximizes the impact of public engagement. In this report we present an MSE consortium and its ethics framework for establishing a proactive approach to the study of the ethics of MSE technology.

  5. Chemical Profiling of Re-Du-Ning Injection by Ultra-Performance Liquid Chromatography Coupled with Electrospray Ionization Tandem Quadrupole Time-of-Flight Mass Spectrometry through the Screening of Diagnostic Ions in MSE Mode

    PubMed Central

    Wang, Zhenzhong; Geng, Jianliang; Dai, Yi; Xiao, Wei; Yao, Xinsheng

    2015-01-01

    The broad applications and mechanism explorations of traditional Chinese medicine prescriptions (TCMPs) require a clear understanding of TCMP chemical constituents. In the present study, we describe an efficient and universally applicable analytical approach based on ultra-performance liquid chromatography coupled to electrospray ionization tandem quadrupole time-of-flight mass spectrometry (UPLC-ESI-Q/TOF-MS) with the MSE (E denotes collision energy) data acquisition mode, which allowed the rapid separation and reliable determination of TCMP chemical constituents. By monitoring diagnostic ions in the high energy function of MSE, target peaks of analogous compounds in TCMPs could be rapidly screened and identified. “Re-Du-Ning” injection (RDN), a eutherapeutic traditional Chinese medicine injection (TCMI) that has been widely used to reduce fever caused by viral infections in clinical practice, was studied as an example. In total, 90 compounds, including five new iridoids and one new sesquiterpene, were identified or tentatively characterized by accurate mass measurements within 5 ppm error. This analysis was accompanied by MS fragmentation and reference standard comparison analyses. Furthermore, the herbal sources of these compounds were unambiguously confirmed by comparing the extracted ion chromatograms (EICs) of RDN and ingredient herbal extracts. Our work provides a certain foundation for further studies of RDN. Moreover, the analytical approach developed herein has proven to be generally applicable for profiling the chemical constituents in TCMPs and other complicated mixtures. PMID:25875968

  6. Using Telephone and Informant Assessments to Estimate Missing Modified Mini-Mental State Exam Scores and Rates of Cognitive Decline

    PubMed Central

    Arnold, Alice M.; Newman, Anne B.; Dermond, Norma; Haan, Mary; Fitzpatrick, Annette

    2009-01-01

    Aim To estimate an equivalent to the Modified Mini-Mental State Exam (3MSE), and to compare changes in the 3MSE with and without the estimated scores. Methods Comparability study on a subset of 405 participants, aged ≥70 years, from the Cardiovascular Health Study (CHS), a longitudinal study in 4 United States communities. The 3MSE, the Telephone Interview for Cognitive Status (TICS) and the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) were administered within 30 days of one another. Regression models were developed to predict the 3MSE score from the TICS and/or IQCODE, and the predicted values were used to estimate missing 3MSE scores in longitudinal follow-up of 4,274 CHS participants. Results The TICS explained 67% of the variability in 3MSE scores, with a correlation of 0.82 between predicted and observed scores. The IQCODE alone was not a good estimate of 3MSE score, but improved the model fit when added to the TICS model. Using estimated 3MSE scores classified more participants with low cognition, and rates of decline were greater than when only the observed 3MSE scores were considered. Conclusions 3MSE scores can be reliably estimated from the TICS, with or without the IQCODE. Incorporating these estimates captured more cognitive decline in older adults. PMID:19407461
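
    A minimal sketch of the imputation idea, with synthetic scores standing in for the CHS data (the published regression coefficients are not reproduced here): fit a linear model of 3MSE on TICS, then use it to estimate missing 3MSE values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical paired scores for the comparability subsample; illustrative only.
rng = np.random.default_rng(2)
tics = rng.uniform(15, 41, 405)                      # TICS total score
mmse3 = np.clip(30 + 1.6 * tics + rng.normal(0, 5, 405), 0, 100)  # 3MSE (0-100)

model = LinearRegression().fit(tics.reshape(-1, 1), mmse3)
r2 = model.score(tics.reshape(-1, 1), mmse3)
print(f"R^2 = {r2:.2f}")   # on the real data the abstract reports R^2 = 0.67

# Impute a missing 3MSE from a follow-up TICS score of 30:
print("estimated 3MSE:", model.predict([[30]])[0])
```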

  7. Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines

    PubMed Central

    del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J.; Raboso, Mariano

    2015-01-01

    Drawing on the results of an acoustic biometric system based on a MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them, based on Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering, segmentation—based on a Gaussian Mixture Model (GMM) to separate the person from the background, masking—to reduce the dimensions of images—and binarization—to reduce the size of each image. An analysis of classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements. PMID:26091392

  8. Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines.

    PubMed

    del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J; Raboso, Mariano

    2015-06-17

    Drawing on the results of an acoustic biometric system based on a MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them, based on Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering; segmentation, based on a Gaussian Mixture Model (GMM), to separate the person from the background; masking, to reduce the dimensions of images; and binarization, to reduce the size of each image. An analysis of classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements.
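
    As a sketch of the GMM segmentation step described above, the snippet below fits a two-component Gaussian mixture to synthetic pixel intensities and keeps the brighter component as the person; the actual system works on acoustic images, which are not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for acoustic-image pixel intensities: a dim background
# population and a brighter "person" population.
rng = np.random.default_rng(3)
image = np.concatenate([rng.normal(0.2, 0.05, 700),    # background pixels
                        rng.normal(0.7, 0.10, 300)])   # person pixels

# Two-component mixture; each pixel is assigned to the brighter component.
gmm = GaussianMixture(n_components=2, random_state=0).fit(image.reshape(-1, 1))
labels = gmm.predict(image.reshape(-1, 1))
person = labels == np.argmax(gmm.means_.ravel())
print("foreground fraction:", person.mean())           # roughly 0.3 expected
```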

  9. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate, completely model-free error estimation is impossible. This paper provides analytic, asymptotically exact, finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions for the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.

  10. Discordance between net analyte signal theory and practical multivariate calibration.

    PubMed

    Brown, Christopher D

    2004-08-01

    Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.

  11. An efficient identification approach for stable and unstable nonlinear systems using Colliding Bodies Optimization algorithm.

    PubMed

    Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P

    2015-11-01

    This paper presents an efficient approach to identify different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, the Colliding Bodies Optimization (CBO) algorithm. The performance measures of the CBO-based optimization approach, such as precision and accuracy, are justified by the minimum output mean square error (MSE), which signifies that the amount of bias and variance in the output domain is also the least. It is also observed that optimization of the output MSE in the presence of outliers results in a consistently very close estimation of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. Optimum values of the MSEs, computational times and statistical information on the MSEs are all found to be superior compared with those of other existing similar stochastic-algorithm-based approaches reported in the recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Large aperture wide field multi-object spectroscopy for the 2020s: the science and status of the Maunakea Spectroscopic Explorer.

    NASA Astrophysics Data System (ADS)

    Devost, Daniel; McConnachie, Alan; Chambers, Kenneth; Gallagher, Sarah; Maunakea Spectroscopic Explorer Project office, MSE Science Advisory group, MSE Science Team

    2018-01-01

    Numerous international reports have recently highlighted the need for fully dedicated, large aperture, highly multiplexed spectroscopy at a range of spectral resolutions in the OIR wavelength range. Such a facility is the most obvious missing link in the emerging network of international multi-wavelength astronomy facilities; it enables science from reverberation mapping of black holes to the nucleosynthetic history of the Galaxy, and will follow up discoveries from the optical through to the radio with facilities such as LSST. The only fully dedicated large aperture MOS facility that is in the design phase is the Maunakea Spectroscopic Explorer (MSE), an 11.4m segmented mirror prime focus telescope with a 1.5 square degree field of view that has 3200 fibers at low (R~2500) and moderate (R~6000) resolution, and 1000 fibers at high (R~20,000/40,000) resolution. I will provide an overview of MSE, describing the science drivers and the current design status, as well as the international partnership, and the results of multiple, newly completed, external reviews of the system and subsystems. The anticipated cost and timeline to first light will also be presented.

  13. On the estimation of brain signal entropy from sparse neuroimaging data

    PubMed Central

    Grandy, Thomas H.; Garrett, Douglas D.; Schmiedek, Florian; Werkle-Bergner, Markus

    2016-01-01

    Multi-scale entropy (MSE) has recently been established as a promising tool for the analysis of the moment-to-moment variability of neural signals. Appealingly, MSE provides a measure of the predictability of neural operations across the multiple time scales on which the brain operates. An important limitation in the application of MSE to some classes of neural signals is MSE's apparent reliance on long time series. However, this sparse-data limitation in MSE computation could potentially be overcome via MSE estimation across shorter time series that are not necessarily acquired continuously (e.g., in fMRI block-designs). In the present study, using simulated, EEG, and fMRI data, we examined the dependence of the accuracy and precision of MSE estimates on the number of data points per segment and the total number of data segments. As hypothesized, MSE estimation across discontinuous segments was comparably accurate and precise, regardless of segment length. A key advance of our approach is that it allows the calculation of MSE scales not previously accessible from the native segment lengths. Consequently, our results may permit a far broader range of applications of MSE when gauging moment-to-moment dynamics in sparse and/or discontinuous neurophysiological data typical of many modern cognitive neuroscience study designs. PMID:27020961
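
    For reference, here is a compact, unoptimized sketch of the MSE computation itself (coarse-graining followed by sample entropy), under the usual defaults m = 2 and r = 0.15·SD; the canonical definition aligns template counts slightly differently, and a production implementation would be vectorized more carefully.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.15):
    """SampEn(m, r) with r = r_factor * std(x); brute-force O(n^2) version."""
    x = np.asarray(x, float)
    r = r_factor * x.std()
    def pairs(mm):
        # Count template pairs within Chebyshev distance r (self-matches excluded).
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=-1)
        return (np.sum(d <= r) - len(t)) / 2
    B, A = pairs(m), pairs(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def mse_curve(x, max_scale=10):
    """Multiscale entropy: SampEn of the coarse-grained series at each scale."""
    out = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[: n * tau].reshape(n, tau).mean(axis=1)
        out.append(sample_entropy(coarse))
    return np.array(out)

rng = np.random.default_rng(4)
print(mse_curve(rng.standard_normal(1000), max_scale=5))  # declines for white noise
```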

  14. Study of the seismic performance of hybrid A-frame micropile/MSE (mechanically stabilized earth) wall

    NASA Astrophysics Data System (ADS)

    Chen, Yumin; Zhang, Zhichao; Liu, Hanlong

    2017-04-01

    The Hybrid A-Frame Micropile/MSE (mechanically stabilized earth) Wall suitable for mountain roadways is put forward in this study: a pair of vertical and inclined micropiles goes through the backfill region of a highway MSE Wall from the road surface and are then anchored into the foundation. The pile cap and grade beam are placed on the pile tops, and then a road barrier is connected to the grade beam by connecting pieces. The MSE wall's global stability, local stability and impact resistance of the road barrier can be enhanced simultaneously by this design. In order to validate the serviceability of the hybrid A-frame micropile/MSE wall and the reliability of the numerical method, scale model tests and a corresponding numerical simulation were conducted. Then, the seismic performance of the MSE walls before and after reinforcement with micropiles was studied comparatively through numerical methods. The results indicate that the hybrid A-frame micropile/MSE wall can effectively control earthquake-induced deformation, differential settlement at the road surface, bearing pressure on the bottom and acceleration by means of a rigid-soft combination of micropiles and MSE. The accumulated displacement under earthquakes with amplitude of 0.1‒0.5 g is reduced by 36.3%‒46.5%, and the acceleration amplification factor on the top of the wall is reduced by 13.4%, 15.7% and 19.3% based on 0.1, 0.3 and 0.5 g input earthquake loading, respectively. In addition, the earthquake-induced failure mode of the MSE wall in steep terrain is the sliding of the MSE region along the backslope, while the micropiles effectively control the sliding trend. The maximum earthquake-induced pile bending moment is in the interface between MSE and slope foundation, so it is necessary to strengthen the reinforcement of the pile body in the interface. Hence, it is proven that the hybrid A-frame micropile/MSE wall system has good seismic performance.

  15. Study of the antimicrobial activity of cyclic cation-based ionic liquids via experimental and group contribution QSAR model.

    PubMed

    Ghanem, Ouahid Ben; Shah, Syed Nasir; Lévêque, Jean-Marc; Mutalib, M I Abdul; El-Harbawi, Mohanad; Khan, Amir Sada; Alnarabiji, Mohamad Sahban; Al-Absi, Hamada R H; Ullah, Zahoor

    2018-03-01

    Over the past decades, ionic liquids (ILs) have gained considerable attention from the scientific community because of their versatility and performance in many fields. However, they remain mainly confined to laboratory-scale use. The main barrier hampering their use on a larger scale is their questionable ecological toxicity. This study investigated the effect of hydrophobic and hydrophilic cyclic cation-based ILs against four pathogenic bacteria that infect humans. Cations, either of aromatic character (imidazolium or pyridinium) or of non-aromatic nature (pyrrolidinium or piperidinium), were selected with different alkyl chain lengths and combined with both hydrophilic and hydrophobic anionic moieties. The results clearly demonstrated that introducing the hydrophobic anion bis((trifluoromethyl)sulfonyl)amide, [NTf2], and elongating the cation substituents dramatically affect IL toxicity behaviour. The established toxicity data [50% effective concentration (EC50)] along with similar endpoints collected from previous work against Aeromonas hydrophila were combined to develop a quantitative structure-activity relationship (QSAR) model for toxicity prediction. The model was developed and validated in light of the Organization for Economic Co-operation and Development (OECD) guidelines, producing a good correlation coefficient (R2 = 0.904) and a small mean square error (MSE = 0.095). The reliability of the QSAR model was further assessed using k-fold cross-validation. Copyright © 2017 Elsevier Ltd. All rights reserved.
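
    A hedged sketch of the k-fold validation step, with invented descriptor counts and toxicity values standing in for the paper's group-contribution data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# Hypothetical descriptor matrix X (group-contribution counts per IL) and
# observed log EC50 toxicity values y; values are illustrative only.
rng = np.random.default_rng(5)
X = rng.poisson(2.0, size=(60, 8)).astype(float)
y = X @ rng.normal(0.3, 0.1, 8) + rng.normal(0, 0.3, 60)

model = LinearRegression()
cv = KFold(n_splits=5, shuffle=True, random_state=0)
neg_mse = cross_val_score(model, X, y, cv=cv, scoring="neg_mean_squared_error")
print(f"k-fold CV MSE: {-neg_mse.mean():.3f} +/- {neg_mse.std():.3f}")
```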

  16. An advanced shape-fitting algorithm applied to quadrupedal mammals: improving volumetric mass estimates

    PubMed Central

    Brassey, Charlotte A.; Gardiner, James D.

    2015-01-01

    Body mass is a fundamental physical property of an individual and has enormous bearing upon ecology and physiology. Generating reliable estimates for body mass is therefore a necessary step in many palaeontological studies. Whilst early reconstructions of mass in extinct species relied upon isolated skeletal elements, volumetric techniques are increasingly applied to fossils when skeletal completeness allows. We apply a new ‘alpha shapes’ (α-shapes) algorithm to volumetric mass estimation in quadrupedal mammals. α-shapes are defined by: (i) the underlying skeletal structure to which they are fitted; and (ii) the value α, determining the refinement of fit. For a given skeleton, a range of α-shapes may be fitted around the individual, spanning from very coarse to very fine. We fit α-shapes to three-dimensional models of extant mammals and calculate volumes, which are regressed against mass to generate predictive equations. Our optimal model is characterized by a high correlation coefficient and low mean square error (r2=0.975, m.s.e.=0.025). When applied to the woolly mammoth (Mammuthus primigenius) and giant ground sloth (Megatherium americanum), we reconstruct masses of 3635 and 3706 kg, respectively. We consider α-shapes an improvement upon previous techniques as resulting volumes are less sensitive to uncertainties in skeletal reconstructions, and do not require manual separation of body segments from skeletons. PMID:26361559

  17. Extreme learning machine based optimal embedding location finder for image steganography

    PubMed Central

    Aljeroudi, Yazan

    2017-01-01

    In image steganography, determining the optimum location for embedding the secret message precisely, with minimum distortion of the host medium, remains a challenging issue. An effective approach for selecting the best embedding location with least deformation is still far from being achieved. To attain this goal, we propose a novel high-performance approach for image steganography, in which the extreme learning machine (ELM) algorithm is modified to create a supervised mathematical model. The ELM is first trained on part of an image or any host medium before being tested in regression mode. This allows the optimal location for embedding the message to be chosen, with the best values of the predicted evaluation metrics. Contrast, homogeneity, and other texture features are used for training on a new metric. Furthermore, the developed ELM is exploited to counter over-fitting during training. The performance of the proposed steganography approach is evaluated by computing the correlation, structural similarity (SSIM) index, fusion matrices, and mean square error (MSE). The modified ELM is found to outperform existing approaches in terms of imperceptibility. The experimental results demonstrate that the proposed steganographic approach is highly proficient at preserving the visual information of an image. An improvement in imperceptibility of as much as 28% is achieved compared to existing state-of-the-art methods. PMID:28196080
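
    The core of an ELM in regression mode is small enough to sketch: a fixed random hidden layer followed by output weights solved in closed form by least squares. The snippet below is a generic ELM on toy data, not the authors' modified variant.

```python
import numpy as np

def elm_fit(X, y, n_hidden=100, seed=0):
    """Extreme learning machine for regression: random hidden layer,
    output weights solved by least squares (no backpropagation)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                        # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: learn a smooth 1-D function from noisy samples.
rng = np.random.default_rng(6)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 200)
W, b, beta = elm_fit(X, y)
print("training MSE:", np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```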

  18. Output MSE and PSNR prediction in DCT-based lossy compression of remote sensing images

    NASA Astrophysics Data System (ADS)

    Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2017-10-01

    The amount and size of remote sensing (RS) images acquired by modern systems are so large that data have to be compressed in order to transfer, save and disseminate them. Lossy compression is becoming more popular in such situations. But lossy compression has to be applied carefully, keeping introduced distortions at an acceptable level so as not to lose valuable information contained in the data. The introduced losses therefore have to be controlled and predicted, which is problematic for many coders. In this paper, we analyze possibilities of predicting the mean square error or, equivalently, the PSNR for coders based on the discrete cosine transform (DCT), applied either to single-channel RS images or to multichannel data in a component-wise manner. The proposed approach is based on the direct dependence between the distortions introduced by DCT coefficient quantization and the losses in compressed data. A further innovation is the possibility of employing only a limited number (percentage) of blocks for which DCT coefficients have to be calculated. This accelerates prediction and makes it considerably faster than compression itself. There are two other advantages of the proposed approach. First, it is applicable to both uniform and non-uniform quantization of DCT coefficients. Second, the approach is quite general, since it works for several analyzed DCT-based coders. The simulation results are obtained for standard test images and then verified for real-life RS data.
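
    A minimal sketch of the prediction idea under stated assumptions (orthonormal DCT, uniform quantization): estimate the coefficient-quantization MSE on a random fraction of 8×8 blocks and convert it to PSNR. Real coders use per-coefficient quantization tables, which this toy uniform step ignores.

```python
import numpy as np
from scipy.fft import dctn

def predict_psnr(image, q_step, frac=0.1, seed=0):
    """Predict compression PSNR from the quantization error of DCT coefficients
    in a random subset of 8x8 blocks. Because the DCT is orthonormal, the
    coefficient-domain MSE equals the pixel-domain MSE."""
    rng = np.random.default_rng(seed)
    h, w = (s - s % 8 for s in image.shape)
    blocks = (image[:h, :w].reshape(h // 8, 8, w // 8, 8)
              .swapaxes(1, 2).reshape(-1, 8, 8))
    idx = rng.choice(len(blocks), max(1, int(frac * len(blocks))), replace=False)
    err = 0.0
    for blk in blocks[idx]:
        C = dctn(blk.astype(float), norm="ortho")
        Cq = q_step * np.round(C / q_step)        # uniform quantization
        err += np.mean((C - Cq) ** 2)
    mse = err / len(idx)
    return 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(7)
img = rng.integers(0, 256, (256, 256)).astype(np.uint8)
print(f"predicted PSNR: {predict_psnr(img, q_step=16):.1f} dB")
```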

  19. Improved Kalman Filter Method for Measurement Noise Reduction in Multi Sensor RFID Systems

    PubMed Central

    Eom, Ki Hwan; Lee, Seung Joon; Kyung, Yeo Sun; Lee, Chang Won; Kim, Min Chul; Jung, Kyung Kwon

    2011-01-01

    Recently, the range of available Radio Frequency Identification (RFID) tags has been widened to include smart RFID tags which can monitor their varying surroundings. One of the most important factors for better performance of a smart RFID system is accurate measurement from various sensors. In the multi-sensing environment, noisy signals are obtained because of the changing surroundings. We propose in this paper an improved Kalman filter method to reduce noise and obtain correct data. The performance of a Kalman filter is determined by the measurement and system noise covariances, usually called the R and Q variables in the Kalman filter algorithm. Choosing correct R and Q variables is one of the most important design factors for better performance of the Kalman filter. For this reason, we propose an improved Kalman filter with enhanced noise-reduction ability. Only the measurement noise covariance was considered, because the system architecture is simple, and it is adjusted by a neural network. With this method, more accurate data can be obtained with smart RFID tags. In a simulation the proposed improved Kalman filter has 40.1%, 60.4% and 87.5% less Mean Squared Error (MSE) than the conventional Kalman filter method for a temperature sensor, humidity sensor and oxygen sensor, respectively. The performance of the proposed method was also verified with some experiments. PMID:22346641

  20. Improved Kalman filter method for measurement noise reduction in multi sensor RFID systems.

    PubMed

    Eom, Ki Hwan; Lee, Seung Joon; Kyung, Yeo Sun; Lee, Chang Won; Kim, Min Chul; Jung, Kyung Kwon

    2011-01-01

    Recently, the range of available radio frequency identification (RFID) tags has been widened to include smart RFID tags which can monitor their varying surroundings. One of the most important factors for better performance of a smart RFID system is accurate measurement from various sensors. In the multi-sensing environment, noisy signals are obtained because of the changing surroundings. We propose in this paper an improved Kalman filter method to reduce noise and obtain correct data. The performance of a Kalman filter is determined by the measurement and system noise covariances, usually called the R and Q variables in the Kalman filter algorithm. Choosing correct R and Q variables is one of the most important design factors for better performance of the Kalman filter. For this reason, we propose an improved Kalman filter with enhanced noise-reduction ability. Only the measurement noise covariance was considered, because the system architecture is simple, and it is adjusted by a neural network. With this method, more accurate data can be obtained with smart RFID tags. In a simulation the proposed improved Kalman filter has 40.1%, 60.4% and 87.5% less mean squared error (MSE) than the conventional Kalman filter method for a temperature sensor, humidity sensor and oxygen sensor, respectively. The performance of the proposed method was also verified with some experiments.
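
    For orientation, here is a scalar Kalman filter showing where the R and Q variables discussed above enter; the paper's improvement, adjusting R with a neural network, is not reproduced in this sketch.

```python
import numpy as np

def kalman_1d(z, Q=1e-4, R=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a near-constant signal (random-walk model).
    R is the measurement noise covariance the paper tunes; Q is process noise."""
    x, p, out = x0, p0, []
    for zk in z:
        p = p + Q                      # predict
        k = p / (p + R)                # Kalman gain
        x = x + k * (zk - x)           # update with measurement zk
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# Toy temperature readings: constant 25 degrees plus sensor noise.
rng = np.random.default_rng(8)
truth, z = 25.0, 25.0 + rng.normal(0, 0.5, 200)
xhat = kalman_1d(z, R=0.25)
print("raw MSE:     ", np.mean((z - truth) ** 2))
print("filtered MSE:", np.mean((xhat - truth) ** 2))
```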

  1. An Application of Robust Method in Multiple Linear Regression Model toward Credit Card Debt

    NASA Astrophysics Data System (ADS)

    Amira Azmi, Nur; Saifullah Rusiman, Mohd; Khalid, Kamil; Roslan, Rozaini; Sufahani, Suliadi; Mohamad, Mahathir; Salleh, Rohayu Mohd; Hamzah, Nur Shamsidah Amir

    2018-04-01

    The credit card is a convenient alternative to cash or cheques and an essential component of electronic and internet commerce. In this study, the researchers attempt to determine the relationship between credit card debt and demographic variables such as age, household income, education level, years with current employer, years at current address, debt-to-income ratio and other debt, and to identify the significant variables. The data cover information on 850 customers. Three methods were applied to the credit card debt data: multiple linear regression (MLR) models, MLR models with the least quartile difference (LQD) method, and MLR models with the mean absolute deviation method. After comparing the three methods, it was found that the MLR model with the LQD method was the best model, with the lowest value of mean square error (MSE). According to the final model, the years with current employer, years at current address, household income in thousands and debt-to-income ratio are positively associated with the amount of credit card debt, while age, level of education and other debt are negatively associated with it. This study may serve as a reference for bank companies using robust methods, so that they can better understand the options and choices best aligned with their goals for inference regarding credit card debt.
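
    A hedged sketch of the comparison on synthetic, outlier-contaminated data: scikit-learn has no LQD estimator, so HuberRegressor stands in for the robust method, and the MSE is evaluated against the clean underlying signal.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

# Hypothetical stand-in for the credit-card data: debt as a linear function
# of two predictors, with a few grossly outlying customers added.
rng = np.random.default_rng(9)
X = rng.uniform(0, 10, (850, 2))
clean = 1.2 * X[:, 0] + 0.8 * X[:, 1]
y = clean + rng.normal(0, 0.5, 850)
y[:20] += rng.normal(25, 5, 20)                 # outlying customers

# Compare fits by MSE against the clean signal: the robust fit is less
# distorted by the outliers than ordinary least squares.
for name, model in [("OLS", LinearRegression()), ("Huber", HuberRegressor())]:
    model.fit(X, y)
    mse = np.mean((model.predict(X) - clean) ** 2)
    print(f"{name:>6}: MSE vs clean signal = {mse:.3f}")
```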

  2. Devil's vortex Fresnel lens phase masks on an asymmetric cryptosystem based on phase-truncation in gyrator wavelet transform domain

    NASA Astrophysics Data System (ADS)

    Singh, Hukum

    2016-06-01

    An asymmetric scheme has been proposed for optical double-image encryption in the gyrator wavelet transform (GWT) domain. Grayscale and binary images are encrypted separately using double random phase encoding (DRPE) in the GWT domain. Phase masks based on devil's vortex Fresnel lenses (DVFLs) and random phase masks (RPMs) are jointly used in the spatial as well as the Fourier plane. The images to be encrypted are first gyrator transformed and then single-level discrete wavelet transformed (DWT) to decompose them into LL, HL, LH and HH matrices of approximation, horizontal, vertical and diagonal coefficients. The resulting DWT coefficients are multiplied by other RPMs and the results are passed through an inverse discrete wavelet transform (IDWT) to obtain the encrypted images. The images are recovered from their corresponding encrypted images by using the correct parameters of the GWT and DVFL; the digital implementation has been performed using MATLAB 7.6.0 (R2008a). The mother wavelet family, DVFL and gyrator transform orders associated with the GWT are extra keys that cause difficulty for an attacker. Thus, the scheme is more secure than conventional techniques. The efficacy of the proposed scheme is verified by computing the mean-squared error (MSE) between the recovered and the original images. The sensitivity of the proposed scheme to encryption parameters and noise attacks is also verified.

  3. The quadriceps muscle of knee joint modelling Using Hybrid Particle Swarm Optimization-Neural Network (PSO-NN)

    NASA Astrophysics Data System (ADS)

    Kamaruddin, Saadi Bin Ahmad; Marponga Tolos, Siti; Hee, Pah Chin; Ghani, Nor Azura Md; Ramli, Norazan Mohamed; Nasir, Noorhamizah Binti Mohamed; Ksm Kader, Babul Salam Bin; Saiful Huq, Mohammad

    2017-03-01

    Neural networks have long been known for their ability to model a complex nonlinear system without an analytical model and to learn sophisticated nonlinear associations. Theoretically, the best-known algorithm for training the network is the backpropagation (BP) algorithm, which relies on minimization of the mean square error (MSE). However, this algorithm is not fully efficient in the presence of outliers, which usually exist in dynamic data. This paper presents the modelling of the quadriceps muscle using artificial intelligence techniques, namely combined backpropagation neural network nonlinear autoregressive (BPNN-NAR) and backpropagation neural network nonlinear autoregressive moving average (BPNN-NARMA) models, based on functional electrical stimulation (FES). We adopted a particle swarm optimization (PSO) approach to enhance the performance of the backpropagation algorithm. In this research, a series of experiments using FES was conducted, and the data obtained were used to develop the quadriceps muscle model. 934 training, 200 testing and 200 validation data sets were used in the development of the muscle model. It was found that both BPNN-NAR and BPNN-NARMA performed well in modelling this type of data. In conclusion, the neural network time series models performed reasonably efficiently for nonlinear modelling such as the active properties of the quadriceps muscle, with one output, namely muscle force.

  4. Incorporating ligament laxity in a finite element model for the upper cervical spine.

    PubMed

    Lasswell, Timothy L; Cronin, Duane S; Medley, John B; Rasoulinejad, Parham

    2017-11-01

    Predicting physiological range of motion (ROM) using a finite element (FE) model of the upper cervical spine requires the incorporation of ligament laxity. The effect of ligament laxity can be observed only on a macro level of joint motion and is lost once ligaments have been dissected and preconditioned for experimental testing. As a result, although ligament laxity values are recognized to exist, specific values are not directly available in the literature for use in FE models. The purpose of the current study is to propose an optimization process that can be used to determine a set of ligament laxity values for upper cervical spine FE models. Furthermore, an FE model that includes ligament laxity is applied, and the resulting ROM values are compared with experimental data for physiological ROM, as well as experimental data for the increase in ROM when a Type II odontoid fracture is introduced. The upper cervical spine FE model was adapted from a 50th percentile male full-body model developed with the Global Human Body Models Consortium (GHBMC). FE modeling was performed in LS-DYNA, and LS-OPT (Livermore Software Technology Group) was used for ligament laxity optimization. Ordinate-based curve matching was used to minimize the mean squared error (MSE) between computed load-rotation curves and experimental load-rotation curves under flexion, extension, and axial rotation with pure moment loads from 0 to 3.5 Nm. Lateral bending was excluded from the optimization because the upper cervical spine was considered to be primarily responsible for flexion, extension, and axial rotation. Based on recommendations from the literature, four varying inputs representing laxity in select ligaments were optimized to minimize the MSE. Funding was provided by the Natural Sciences and Engineering Research Council of Canada as well as GHBMC. The present study was funded by the Natural Sciences and Engineering Research Council of Canada to support the work of one graduate student. There are no conflicts of interest to be reported. The MSE was reduced to 0.28 in the FE model with optimized ligament laxity, compared with an MSE of 4.16 in the FE model without laxity. In all load cases, incorporating ligament laxity improved the agreement between the ROM of the FE model and the ROM of the experimental data. The ROM for axial rotation and extension was within one standard deviation of the experimental data. The ROM for flexion and lateral bending was outside one standard deviation of the experimental data, but a compromise was required to use one set of ligament laxity values to achieve a best fit to all load cases. Atlanto-occipital motion was compared as a ratio to overall ROM, and only in extension did the inclusion of ligament laxity not improve the agreement. After a Type II odontoid fracture was incorporated into the model, the increase in ROM was consistent with experimental data from the literature. The optimization approach used in this study provided values for ligament laxities that, when incorporated into the FE model, generally improved the ROM response when compared with experimental data. Successfully modeling a Type II odontoid fracture showcased the robustness of the FE model, which can now be used in future biomechanics studies. Copyright © 2017 Elsevier Inc. All rights reserved.
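
    The optimization loop can be sketched generically: minimize the MSE between a parametric load-rotation curve and an experimental one over the laxity parameters. In the study the computed curve comes from an LS-DYNA run driven by LS-OPT; here an analytic toy model stands in so the loop is runnable.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in "model": rotation(moment) with a laxity offset (free rotation
# before the ligaments engage) plus a linear stiffness term. Hypothetical.
def rotation(moment, laxity, k):
    return laxity * np.tanh(moment / 0.5) + moment / k

moments = np.linspace(0.0, 3.5, 15)                  # pure moment loads, Nm
exp_curve = rotation(moments, laxity=8.0, k=0.35)    # "experimental" target

def mse(params):
    lax, k = params
    return np.mean((rotation(moments, lax, k) - exp_curve) ** 2)

res = minimize(mse, x0=[2.0, 1.0], bounds=[(0, 20), (0.05, 5)])
print("optimized laxity, stiffness:", res.x, " MSE:", res.fun)
```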

  5. Chemical library subset selection algorithms: a unified derivation using spatial statistics.

    PubMed

    Hamprecht, Fred A; Thiel, Walter; van Gunsteren, Wilfred F

    2002-01-01

    If similar compounds have similar activity, rational subset selection becomes superior to random selection in screening for pharmacological lead discovery programs. Traditional approaches to this experimental design problem fall into two classes: (i) a linear or quadratic response function is assumed; (ii) some space-filling criterion is optimized. The assumptions underlying the first approach are clear but not always defensible; the second approach yields more intuitive designs but lacks a clear theoretical foundation. We model activity in a bioassay as the realization of a stochastic process and use the best linear unbiased estimator to construct spatial sampling designs that optimize the integrated mean square prediction error, the maximum mean square prediction error, or the entropy. We argue that our approach constitutes a unifying framework encompassing most proposed techniques as limiting cases and sheds light on their underlying assumptions. In particular, vector quantization is obtained, in dimensions up to eight, in the limiting case of very smooth response surfaces for the integrated mean square error criterion. Closest packing is obtained for very rough surfaces under the integrated mean square error and entropy criteria. We suggest using either the integrated mean square prediction error or the entropy as optimization criteria rather than approximations thereof, and propose a scheme for direct iterative minimization of the integrated mean square prediction error. Finally, we discuss how the quality of chemical descriptors manifests itself and clarify the assumptions underlying the selection of diverse or representative subsets.

  6. Design of biometrics identification system on palm vein using infrared light

    NASA Astrophysics Data System (ADS)

    Syafiq, Muhammad; Nasution, Aulia M. T.

    2016-11-01

    Images obtained with LEDs at wavelengths of 740 nm and 810 nm showed that the contrast of the vein pattern is low and the palm pattern still exists, meaning that 740 nm and 810 nm are less suitable for detecting blood vessels in the palm of the hand. At a wavelength of 940 nm, the vein pattern is clearly visible and the palm pattern is mostly gone. Pre-processing is then performed using a smoothing process, which includes a Gaussian filter, a median filter and contrast stretching. Image segmentation is done by extracting the ROI area whose information is to be obtained. The identification of image features is performed using the MSE (Mean Square Error) method and LBP (Local Binary Pattern). Furthermore, a database consisting of 5 different palm vein patterns is used for testing the tool in the identification process. The obtained MSE parameter is 0.025 and the LBP feature scores are less than 10^-3 for images to be matched. All the processes above are performed on a Raspberry Pi device.

  7. Multiscale entropy analysis of biological signals: a fundamental bi-scaling law

    PubMed Central

    Gao, Jianbo; Hu, Jing; Liu, Feiyan; Cao, Yinhe

    2015-01-01

    Since its introduction in the early 2000s, multiscale entropy (MSE) has found many applications in biosignal analysis and has been extended to multivariate MSE. So far, however, no analytic results for MSE or multivariate MSE have been reported. This has severely limited our basic understanding of MSE. For example, it has not been studied whether MSE estimated using default parameter values and short data sets is meaningful or not. Nor is it known whether MSE has any relation to other complexity measures, such as the Hurst parameter, which characterizes the correlation structure of the data. To overcome this limitation, and more importantly, to guide more fruitful applications of MSE in various areas of the life sciences, we derive a fundamental bi-scaling law for fractal time series: one scaling for the scale in phase space, the other for the block size used for smoothing. We illustrate the usefulness of the approach by examining two types of physiological data. One is heart rate variability (HRV) data, for the purpose of distinguishing healthy subjects from patients with congestive heart failure, a life-threatening condition. The other is electroencephalogram (EEG) data, for the purpose of distinguishing epileptic seizure EEG from normal healthy EEG. PMID:26082711

  8. Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.

    PubMed

    Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia

    2017-06-01

    Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error on using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies showed that: (1) the concentration and the analyte type had minimal effect on the OTV; and (2) the major factor that influences the OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra and methane/toluene gas mixture spectra measured using FT-IR spectrometry with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and residual sum of squares of the errors (RSS) from the three quantitative analyses were compared. In the methane gas analysis, SWLS yielded the lowest SEP and RSS among the three methods. In the methane/toluene mixture analysis, a modification of SWLS is presented to tackle the bias error from other components. SWLS without modification presents the lowest SEP in all cases, but not the lowest bias and RSS. The modification of SWLS reduced the bias and showed a lower RSS than CLS, especially for small components.
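
    A small sketch of the CLS/WLS contrast on synthetic two-component spectra with heteroscedastic noise; SWLS would simply dispatch each wavenumber to one of the two fits according to an absorbance threshold.

```python
import numpy as np

def cls_fit(A, y):
    """Classical least squares: y ~ A c, unweighted."""
    return np.linalg.lstsq(A, y, rcond=None)[0]

def wls_fit(A, y, w):
    """Weighted least squares: weight each wavenumber by 1/noise std."""
    return np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0]

# Toy two-component spectra, noisier at high absorbance (heteroscedastic),
# standing in for FT-IR data.
rng = np.random.default_rng(10)
wn = np.linspace(0, 1, 300)
A = np.column_stack([np.exp(-((wn - 0.3) / 0.05) ** 2),
                     np.exp(-((wn - 0.7) / 0.05) ** 2)])   # pure spectra
c_true = np.array([0.8, 0.4])
noise_sd = 0.002 + 0.02 * (A @ c_true)
y = A @ c_true + rng.normal(0, noise_sd)

print("true:", c_true)
print("CLS: ", cls_fit(A, y))
print("WLS: ", wls_fit(A, y, 1.0 / noise_sd))
```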

  9. Microwave Photonic Architecture for Direction Finding of LPI Emitters: Post-Processing for Angle of Arrival Estimation

    DTIC Science & Technology

    2016-09-01

    For an FMCW signal, it was demonstrated that the system is capable of estimating the AOA with a root-mean-square (RMS) error of 0.29° at 1° resolution. For a P4 coded signal, the RMS error in estimating the AOA is 0.32° at 1° resolution.

  10. Correlations between the Signal Complexity of Cerebral and Cardiac Electrical Activity: A Multiscale Entropy Analysis

    PubMed Central

    Lin, Pei-Feng; Lo, Men-Tzung; Tsao, Jenho; Chang, Yi-Chung; Lin, Chen; Ho, Yi-Lwun

    2014-01-01

    The heart begins to beat before the brain is formed. Whether conventional hierarchical central commands sent by the brain to the heart alone explain all the interplay between these two organs should be reconsidered. Here, we demonstrate correlations between the signal complexity of brain and cardiac activity. Eighty-seven geriatric outpatients with healthy hearts and varied cognitive abilities each provided a 24-hour electrocardiography (ECG) and a 19-channel eye-closed routine electroencephalography (EEG). Multiscale entropy (MSE) analysis was applied to three epochs (resting-awake state, photic stimulation of fast frequencies (fast-PS), and photic stimulation of slow frequencies (slow-PS)) of EEG in the 1–58 Hz frequency range, and three RR interval (RRI) time series (awake-state, sleep and that concomitant with the EEG) for each subject. The low-to-high frequency power (LF/HF) ratio of RRI was calculated to represent sympatho-vagal balance. With statistics after Bonferroni corrections, we found that: (a) the summed MSE value on coarse scales of the awake RRI (scales 11–20, RRI-MSE-coarse) were inversely correlated with the summed MSE value on coarse scales of the resting-awake EEG (scales 6–20, EEG-MSE-coarse) at Fp2, C4, T6 and T4; (b) the awake RRI-MSE-coarse was inversely correlated with the fast-PS EEG-MSE-coarse at O1, O2 and C4; (c) the sleep RRI-MSE-coarse was inversely correlated with the slow-PS EEG-MSE-coarse at Fp2; (d) the RRI-MSE-coarse and LF/HF ratio of the awake RRI were correlated positively to each other; (e) the EEG-MSE-coarse at F8 was proportional to the cognitive test score; (f) the results conform to the cholinergic hypothesis which states that cognitive impairment causes reduction in vagal cardiac modulation; (g) fast-PS significantly lowered the EEG-MSE-coarse globally. Whether these heart-brain correlations could be fully explained by the central autonomic network is unknown and needs further exploration. PMID:24498375

  11. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    NASA Astrophysics Data System (ADS)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

    The goodness-of-fit indicator, i.e., efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and a generalized error distribution (GED) with zero mean to fit the distribution of the transformed model errors. The BC-GED model can unify all recent distance-based goodness-of-fit indicators, and it reveals that the widely used mean square error (MSE) and mean absolute error (MAE) imply the statistical assumptions that model errors follow the Gaussian distribution and the Laplace distribution with zero mean, respectively. Empirical knowledge about goodness-of-fit indicators can also be easily interpreted via the BC-GED model; e.g., the sensitivity to high flows of indicators with a large power of model errors results from the low probability of large model errors in the assumed distribution of these indicators. In order to assess the effect of the parameters of the BC-GED model (the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and simulation results among the six indicators demonstrates that these indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated by the class β > 1 of distance-based goodness-of-fit indicators captures high flows very well but mimics baseflow badly, whereas calibrated by the class β ≤ 1 it mimics baseflow very well, because, first, the larger the value of β, the greater the emphasis put on high flows, and second, the derivative of the GED probability density function at zero is zero for β > 1 but discontinuous for β ≤ 1, and even infinite for β < 1, in which case maximum likelihood estimation forces the model errors to approach zero as closely as possible. The BC-GED case that estimates the parameters λ and β together with the hydrologic model parameters is the best distance-based goodness-of-fit indicator, because not only is the model validation using groundwater levels very good, but the model errors also fulfill the statistical assumptions best. However, in some cases of model calibration with few observations, e.g., calibration of a single-event model, the MAE, i.e., the boundary indicator (β = 1) between the two classes, can replace the BC-GED to avoid estimating the parameters of the BC-GED model, because the model validation of the MAE is best.

  12. Determination of suitable drying curve model for bread moisture loss during baking

    NASA Astrophysics Data System (ADS)

    Soleimani Pour-Damanab, A. R.; Jafary, A.; Rafiee, S.

    2013-03-01

    This study presents mathematical modelling of bread moisture loss (drying) during baking in a conventional bread-baking process. To estimate and select the appropriate moisture loss curve equation, 11 different semi-theoretical and empirical models were fitted to the experimental data and compared according to their correlation coefficients, chi-squared test, and root mean square error, obtained by nonlinear regression analysis. Of all the drying models, the Page model was selected as the best one according to the correlation coefficient, chi-squared, and root mean square error values, as well as its simplicity. The mean absolute estimation error of the proposed model, assessed by linear regression analysis, was 2.43% and 4.74% for the natural and forced convection modes, respectively.
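
    As a worked illustration, the Page model MR(t) = exp(-k t^n) can be fitted by nonlinear regression and scored with the criteria named above (correlation, chi-squared, root mean square error). A minimal sketch with synthetic placeholder moisture-ratio data, not the study's measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([0, 5, 10, 15, 20, 25, 30], dtype=float)     # baking time, min
    mr = np.array([1.0, 0.82, 0.66, 0.52, 0.41, 0.33, 0.26])  # moisture ratio (synthetic)

    def page(t, k, n):
        return np.exp(-k * t**n)   # Page thin-layer drying model

    params, _ = curve_fit(page, t, mr, p0=[0.05, 1.0])
    pred = page(t, *params)
    rmse = np.sqrt(np.mean((mr - pred) ** 2))
    chi2 = np.sum((mr - pred) ** 2) / (len(t) - len(params))  # reduced chi-squared
    r = np.corrcoef(mr, pred)[0, 1]
    print(f"k={params[0]:.4f}, n={params[1]:.4f}, r={r:.4f}, RMSE={rmse:.4f}, chi2={chi2:.5f}")
    ```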

  13. Correcting Four Similar Correlational Measures for Attenuation Due to Errors of Measurement in the Dependent Variable: Eta, Epsilon, Omega, and Intraclass r.

    ERIC Educational Resources Information Center

    Stanley, Julian C.; Livingston, Samuel A.

    Besides the ubiquitous Pearson product-moment r, there are a number of other measures of relationship that are attenuated by errors of measurement and for which the relationship between true measures can be estimated. Among these are the correlation ratio (eta squared), Kelley's unbiased correlation ratio (epsilon squared), Hays' omega squared,…

  14. Study on the Rationality and Validity of Probit Models of Domino Effect to Chemical Process Equipment caused by Overpressure

    NASA Astrophysics Data System (ADS)

    Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong

    2013-04-01

    Overpressure is one important cause of domino effects in accidents involving chemical process equipment. Models for the propagation probability and threshold values of the domino effect caused by overpressure have been proposed in previous studies. To test the rationality and validity of the models reported in the references, the two boundary values separating the three damage degrees were treated as random variables on the interval [0, 100%]. Based on the reported overpressure data for equipment damage and the observed damage states, and using the calculation method given in the references, the mean square errors of the four categories of overpressure damage probability models were calculated with random boundary values, yielding a relationship between the mean square error and the two boundary values. At its minimum, the mean square error decreases by about 3% compared with the result of the present work. The error is therefore within the acceptable range for engineering applications, and the reported models can be considered reasonable and valid.
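
    A sketch of the boundary-value search as we read the abstract: treat the two damage-degree boundaries as free values on [0, 100%] and find the pair that minimizes the mean square error between modeled damage probabilities and observed damage states. The probability model and the data below are hypothetical placeholders, not the probit models or data from the references:

    ```python
    import numpy as np

    def damage_probability(overpressure, b1, b2):
        # Hypothetical stand-in for a probit-style damage model from the references.
        return np.clip((overpressure - b1) / max(b2 - b1, 1e-9), 0.0, 1.0)

    overpressure = np.array([0.10, 0.25, 0.40, 0.60, 0.80])  # made-up equipment data
    observed = np.array([0.0, 0.0, 0.5, 1.0, 1.0])           # made-up damage states

    grid = np.linspace(0.0, 1.0, 101)
    best = (np.inf, None, None)
    for b1 in grid:
        for b2 in grid:
            if b2 <= b1:
                continue  # the two boundaries must stay ordered
            mse = np.mean((damage_probability(overpressure, b1, b2) - observed) ** 2)
            if mse < best[0]:
                best = (mse, b1, b2)
    print("minimum MSE %.4f at boundaries (%.2f, %.2f)" % best)
    ```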

  15. Ametropia and ocular biometry in a U.K. university student population.

    PubMed

    Logan, Nicola S; Davies, Leon N; Mallen, Edward A H; Gilmartin, Bernard

    2005-04-01

    The prevalence of myopia is known to vary with age, ethnicity, level of education, and socioeconomic status, with a high prevalence reported in university students and in people from East Asian countries. This study determines the prevalence of ametropia in a mixed-ethnicity U.K. university student population and compares associated ocular biometric measures. Refractive error and related ocular component data were collected on 373 first-year U.K. undergraduate students (mean age = 19.55 ± 2.99 years, range = 17-30 years) at the start of the academic year at Aston University, Birmingham, and the University of Bradford, West Yorkshire. The ethnic variation of the students was as follows: white 38.9%, British Asian 58.2%, Chinese 2.1%, and black 0.8%. Noncycloplegic refractive error was measured with an infrared open-field autorefractor, the Shin-Nippon NVision-K 5001 (Shin Nippon, Ryusyo Industrial Co. Ltd, Osaka, Japan). Myopia was defined as a mean spherical equivalent (MSE) less than or equal to -0.50 D, and hyperopia as an MSE greater than or equal to +0.50 D. Axial length, corneal curvature, and anterior chamber depth were measured using the Zeiss IOLMaster (Carl Zeiss, Jena, GmbH). The analysis was carried out only for the white and British Asian groups. The overall distribution of refractive error exhibited leptokurtosis, and prevalence levels were similar for white and British Asian (the predominant ethnic group) students across each ametropic group: myopia (50% vs. 53.4%), hyperopia (18.8% vs. 17.3%), and emmetropia (31.2% vs. 29.3%). There were no significant differences in the distribution of ametropia and biometric components between the white and British Asian samples. The absence of a significant difference in refractive error and ocular components between white and British Asian students exposed to the same educational system is of interest. However, it is clear that a further study incorporating formal epidemiologic methods of analysis is required to address adequately the recent proposal that juvenile myopia develops principally from "myopiagenic" environments and is relatively independent of ethnicity.

  16. Quantifying the impact of respiratory-gated 4D CT acquisition on thoracic image quality: a digital phantom study.

    PubMed

    Bernatowicz, K; Keall, P; Mishra, P; Knopf, A; Lomax, A; Kipritidis, J

    2015-01-01

    Prospective respiratory-gated 4D CT has been shown to reduce tumor image artifacts by up to 50% compared to conventional 4D CT. However, to date no studies have quantified the impact of gated 4D CT on normal lung tissue imaging, which is important in performing dose calculations based on accurate estimates of lung volume and structure. To determine the impact of gated 4D CT on thoracic image quality, the authors developed a novel simulation framework incorporating a realistic deformable digital phantom driven by patient tumor motion patterns. Based on this framework, the authors test the hypothesis that respiratory-gated 4D CT can significantly reduce lung imaging artifacts. The simulation framework synchronizes the 4D extended cardiac torso (XCAT) phantom with tumor motion data in a quasi real-time fashion, allowing simulation of three 4D CT acquisition modes featuring different levels of respiratory feedback: (i) "conventional" 4D CT that uses a constant imaging and couch-shift frequency, (ii) "beam-paused" 4D CT that interrupts imaging to avoid oversampling at a given couch position and respiratory phase, and (iii) "respiratory-gated" 4D CT that triggers acquisition only when the respiratory motion fulfills phase-specific displacement gating windows based on prescan breathing data. The framework generates a set of ground truth comparators, representing the average XCAT anatomy during beam-on for each of ten respiratory phase bins. Based on this framework, the authors simulated conventional, beam-paused, and respiratory-gated 4D CT images using tumor motion patterns from seven lung cancer patients across 13 treatment fractions, with a simulated 5.5 cm³ spherical lesion. Normal lung tissue image quality was quantified by comparing simulated and ground truth images in terms of overall mean square error (MSE) intensity difference, threshold-based lung volume error, and fractional false positive/false negative rates. Averaged across all simulations and phase bins, respiratory gating reduced overall thoracic MSE by 46% compared to conventional 4D CT (p ≈ 10⁻¹⁹). Gating led to small but significant (p < 0.02) reductions in lung volume errors (from 1.8% to 1.4%), false positives (from 4.0% to 2.6%), and false negatives (from 2.7% to 1.3%). These percentage reductions correspond to gating reducing image artifacts affecting 24-90 cm³ of lung tissue. Similar to earlier studies, gating reduced patient imaging dose by up to 22%, but with scan time increased by up to 135%. Beam-paused 4D CT did not significantly impact normal lung tissue image quality, but did yield similar dose reductions as respiratory gating, without the added cost in scanning time. For a typical 6 L lung, respiratory-gated 4D CT can reduce image artifacts affecting up to 90 cm³ of normal lung tissue compared to conventional acquisition. This image improvement could have important implications for dose calculations based on 4D CT. Where image quality is less critical, beam-paused 4D CT is a simple strategy to reduce imaging dose without sacrificing acquisition time.
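
    The normal-tissue metrics used above (overall intensity MSE, threshold-based lung volume error, and fractional false positive/negative rates) can be computed along the following lines. The arrays, the -400 HU lung threshold, and the voxel volume are illustrative assumptions, not the study's settings:

    ```python
    import numpy as np

    def lung_image_metrics(sim, truth, threshold=-400.0, voxel_volume_cm3=0.001):
        mse = np.mean((sim - truth) ** 2)            # overall intensity MSE
        lung_sim = sim < threshold                   # threshold-based lung masks
        lung_truth = truth < threshold
        vol_sim = lung_sim.sum() * voxel_volume_cm3
        vol_truth = lung_truth.sum() * voxel_volume_cm3
        volume_error = abs(vol_sim - vol_truth) / vol_truth
        false_pos = np.mean(lung_sim & ~lung_truth)  # labeled lung, truly not lung
        false_neg = np.mean(~lung_sim & lung_truth)  # true lung that was missed
        return mse, volume_error, false_pos, false_neg

    rng = np.random.default_rng(1)
    truth = rng.normal(-500.0, 200.0, size=(64, 64, 64))   # toy HU volume
    sim = truth + rng.normal(0.0, 30.0, size=truth.shape)  # artifact-like noise
    print(lung_image_metrics(sim, truth))
    ```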

  17. The effect of orthostatic stress on multiscale entropy of heart rate and blood pressure.

    PubMed

    Turianikova, Zuzana; Javorka, Kamil; Baumert, Mathias; Calkovska, Andrea; Javorka, Michal

    2011-09-01

    Cardiovascular control acts over multiple time scales, which introduces a significant amount of complexity to heart rate and blood pressure time series. Multiscale entropy (MSE) analysis has been developed to quantify the complexity of a time series over multiple time scales. In previous studies, MSE analyses identified impaired cardiovascular control and increased cardiovascular risk in various pathological conditions. Despite the increasing acceptance of the MSE technique in clinical research, information underpinning the involvement of the autonomic nervous system in the MSE of heart rate and blood pressure is lacking. The objective of this study is to investigate the effect of orthostatic challenge on the MSE of heart rate and blood pressure variability (HRV, BPV) and the correlation between MSE (complexity measures) and traditional linear (time and frequency domain) measures. MSE analysis of HRV and BPV was performed in 28 healthy young subjects on 1000 consecutive heart beats in the supine and standing positions. Sample entropy values were assessed on scales of 1-10. We found that MSE of heart rate and blood pressure signals is sensitive to changes in autonomic balance caused by postural change from the supine to the standing position. The effect of orthostatic challenge on heart rate and blood pressure complexity depended on the time scale under investigation. Entropy values did not correlate with the mean values of heart rate and blood pressure and showed only weak correlations with linear HRV and BPV measures. In conclusion, the MSE analysis of heart rate and blood pressure provides a sensitive tool to detect changes in autonomic balance as induced by postural change.
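
    For reference, the MSE procedure applied here (coarse-graining the series at each scale, then computing sample entropy) can be sketched as follows. The parameters m = 2 and r = 0.15 x SD are common defaults assumed for illustration, and the brute-force O(N^2) implementation favors clarity over speed:

    ```python
    import numpy as np

    def sample_entropy(x, m, r):
        """SampEn(m, r) of a 1-D series; brute force, for illustration only."""
        x = np.asarray(x, dtype=float)
        def count_pairs(mm):
            templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
            total = 0
            for i in range(len(templ) - 1):
                # Chebyshev distance from template i to all later templates
                total += np.sum(np.max(np.abs(templ[i + 1:] - templ[i]), axis=1) <= r)
            return total
        b, a = count_pairs(m), count_pairs(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.nan

    def multiscale_entropy(x, max_scale=10, m=2, r_factor=0.15):
        x = np.asarray(x, dtype=float)
        r = r_factor * x.std()   # tolerance fixed from the original series
        values = []
        for s in range(1, max_scale + 1):
            n = len(x) // s      # coarse-grain: means of non-overlapping windows
            coarse = x[:n * s].reshape(n, s).mean(axis=1)
            values.append(sample_entropy(coarse, m, r))
        return np.array(values)

    rri = np.random.default_rng(2).normal(0.8, 0.05, 1000)  # stand-in RR series, s
    print(multiscale_entropy(rri))
    ```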

  18. Refined Composite Multiscale Dispersion Entropy and its Application to Biomedical Signals.

    PubMed

    Azami, Hamed; Rostaghi, Mostafa; Abasolo, Daniel; Escudero, Javier

    2017-12-01

    We propose a novel complexity measure to overcome the deficiencies of the widespread and powerful multiscale entropy (MSE): MSE values may be undefined for short signals, and MSE is too slow for real-time applications. We introduce multiscale dispersion entropy (MDE) as a very fast and powerful method to quantify the complexity of signals. MDE is based on our recently developed dispersion entropy (DisEn), which has a computation cost of O(N), compared with O(N²) for the sample entropy used in MSE. We also propose the refined composite MDE (RCMDE) to improve the stability of MDE. We evaluate MDE, RCMDE, and refined composite MSE (RCMSE) on synthetic signals and three biomedical datasets. The MDE, RCMDE, and RCMSE methods show similar results, although MDE and RCMDE are faster, lead to more stable results, and discriminate different types of physiological signals better than MSE and RCMSE. For noisy short and long time series, MDE and RCMDE are noticeably more stable than MSE and RCMSE, respectively. For short signals, MDE and RCMDE, unlike MSE and RCMSE, do not lead to undefined values. The proposed MDE and RCMDE are significantly faster than MSE and RCMSE, especially for long signals, and lead to larger differences between physiological conditions known to alter the complexity of the physiological recordings. MDE and RCMDE are expected to be useful for the analysis of physiological signals thanks to their ability to distinguish different types of dynamics. The MATLAB codes used in this paper are freely available at http://dx.doi.org/10.7488/ds/1982.
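
    A compact sketch of the dispersion entropy (DisEn) idea underlying MDE, following the authors' published description: map samples to c classes through the normal CDF, form embedding patterns, and take the normalized Shannon entropy of the pattern distribution. The parameters c = 6 and m = 3 are common defaults assumed here; this is not the authors' released MATLAB code. The single pass over the series is what gives the O(N) cost cited above.

    ```python
    import numpy as np
    from scipy.stats import norm

    def dispersion_entropy(x, c=6, m=3, delay=1):
        x = np.asarray(x, dtype=float)
        y = norm.cdf(x, loc=x.mean(), scale=x.std())           # map to (0, 1)
        z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)   # class labels 1..c
        n_patterns = len(z) - (m - 1) * delay
        counts = {}
        for i in range(n_patterns):                            # one O(N) pass
            key = tuple(z[i + j * delay] for j in range(m))
            counts[key] = counts.get(key, 0) + 1
        p = np.array(list(counts.values()), dtype=float) / n_patterns
        return -np.sum(p * np.log(p)) / np.log(c ** m)         # normalized DisEn

    sig = np.random.default_rng(3).standard_normal(2000)
    print(dispersion_entropy(sig))   # white noise gives a relatively high value
    ```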

  19. Evaluation of corrosion of metallic reinforcements and connections in MSE retaining walls.

    DOT National Transportation Integrated Search

    2008-05-01

    Mechanically Stabilized Earth (MSE) retaining walls have become the dominant retained wall system on ODOT projects. The permanent MSE walls constructed on ODOT projects, in recent years, use metallic reinforcements and facing connections buried direc...

  20. Extracting Diffusion Constants from Echo-Time-Dependent PFG NMR Data Using Relaxation-Time Information

    NASA Astrophysics Data System (ADS)

    van Dusschoten, D.; de Jager, P. A.; Van As, H.

    Heterogeneous (bio)systems are often characterized by several water-containing compartments that differ in relaxation time values and diffusion constants. Because of the relatively small differences among these diffusion constants, nonoptimal measuring conditions easily lead to the conclusion that a single diffusion constant suffices to describe the water mobility in a heterogeneous (bio)system. This paper demonstrates that the combination of a T2 measurement and diffusion measurements at various echo times (TE), based on the PFG MSE sequence, enables the accurate determination of diffusion constants which are less than a factor of 2 apart. This new method gives errors of the diffusion constant below 10% when two fractions are present, while the standard approach of a biexponential fit to the diffusion data in identical circumstances gives larger (>25%) errors. On application of this approach to water in apple parenchyma tissue, the diffusion constant of water in the vacuole of the cells (D = 1.7 × 10⁻⁹ m²/s) can be distinguished from that of the cytoplasm (D = 1.0 × 10⁻⁹ m²/s). Also, for mung bean seedlings, the cell size determined by PFG MSE measurements increased from 65 to 100 μm when the echo time increased from 150 to 900 ms, demonstrating that the interpretation of PFG SE data used to investigate cell sizes is strongly dependent on the T2 values of the fractions within the sample. Because relaxation times are used to discriminate the diffusion constants, we propose to name this approach diffusion analysis by relaxation-time-separated (DARTS) PFG NMR.

  1. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique are presented for a simple, two-observer, measurement-error-only problem.
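
    A hedged sketch, not the paper's derivation: a standard weighted least squares solve, its theoretical covariance (A^T W A)^-1, and a residual-scaled empirical variant that folds the effect of unmodeled error sources into the covariance. All matrices below are toy placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.normal(size=(50, 3))                  # observation matrix (toy)
    x_true = np.array([1.0, -2.0, 0.5])
    sigma = 0.1
    y = A @ x_true + rng.normal(0.0, sigma, 50)   # noisy measurements
    W = np.eye(50) / sigma**2                     # weights = inverse noise covariance

    P_theory = np.linalg.inv(A.T @ W @ A)         # theoretical WLS covariance
    x_hat = P_theory @ A.T @ W @ y                # weighted least squares estimate
    res = y - A @ x_hat                           # post-fit residuals
    scale = (res @ W @ res) / (len(y) - len(x_hat))  # weighted residual variance factor
    P_empirical = scale * P_theory                # residual-informed covariance
    print(np.diag(P_theory))
    print(np.diag(P_empirical))
    ```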

  2. The motional Stark effect diagnostic for ITER using a line-shift approach.

    PubMed

    Foley, E L; Levinton, F M; Yuh, H Y; Zakharov, L E

    2008-10-01

    The United States has been tasked with the development and implementation of a motional Stark effect (MSE) system on ITER. In the harsh ITER environment, MSE is particularly susceptible to degradation, as it depends on polarimetry, and the polarization reflection properties of surfaces are highly sensitive to thin film effects due to plasma deposition and erosion of a first mirror. Here we present the results of a comprehensive study considering a new MSE-based approach to internal plasma magnetic field measurements for ITER. The proposed method uses the line shifts in the MSE spectrum (MSE-LS) to provide a radial profile of the magnetic field magnitude. To determine the utility of MSE-LS for equilibrium reconstruction, studies were performed using the ESC-ERV code system. A near-term opportunity to test the use of MSE-LS for equilibrium reconstruction is being pursued in the implementation of MSE with laser-induced fluorescence on NSTX. Though the field values and beam energies are very different from ITER, the use of a laser allows precision spectroscopy with a similar ratio of linewidth to line spacing on NSTX as would be achievable with a passive system on ITER. Simulation results for ITER and NSTX are presented, and the relative merits of the traditional line polarization approach and the new line-shift approach are discussed.

  3. Regenerated Sciatic Nerve Axons Stimulated through a Chronically Implanted Macro-Sieve Electrode.

    PubMed

    MacEwan, Matthew R; Zellmer, Erik R; Wheeler, Jesse J; Burton, Harold; Moran, Daniel W

    2016-01-01

    Sieve electrodes provide a chronic interface for stimulating peripheral nerve axons. Yet, successful utilization requires robust axonal regeneration through the implanted electrode. The present study determined the effect of large transit zones in enhancing axonal regeneration and revealed an intimate neural interface with an implanted sieve electrode. Fabrication of the polyimide sieve electrodes employed sacrificial photolithography. The manufactured macro-sieve electrode (MSE) contained nine large transit zones with areas of ~0.285 mm² surrounded by eight Pt-Ir metallized electrode sites. Prior to implantation, saline or glial cell line-derived neurotrophic factor (GDNF) was injected into nerve-guidance silicone conduits with or without an MSE. The MSE assembly or a nerve guidance conduit was implanted between the transected ends of the sciatic nerve in adult male Lewis rats. At 3 months post-operation, fiber counts were similar through both implant types. Likewise, stimulation of nerves regenerated through an MSE or an open silicone conduit evoked comparable muscle forces. These results showed that nerve regeneration was comparable through MSE transit zones and an open conduit. GDNF had a minimal positive effect on the quality and morphology of fibers regenerating through the MSE; thus, the MSE may reduce reliance on GDNF to augment axonal regeneration. Selective stimulation of several individual muscles was achieved through monopolar stimulation of individual electrode sites, suggesting that the MSE might be an optimal platform for functional neuromuscular stimulation.

  4. Regenerated Sciatic Nerve Axons Stimulated through a Chronically Implanted Macro-Sieve Electrode

    PubMed Central

    MacEwan, Matthew R.; Zellmer, Erik R.; Wheeler, Jesse J.; Burton, Harold; Moran, Daniel W.

    2016-01-01

    Sieve electrodes provide a chronic interface for stimulating peripheral nerve axons. Yet, successful utilization requires robust axonal regeneration through the implanted electrode. The present study determined the effect of large transit zones in enhancing axonal regeneration and revealed an intimate neural interface with an implanted sieve electrode. Fabrication of the polyimide sieve electrodes employed sacrificial photolithography. The manufactured macro-sieve electrode (MSE) contained nine large transit zones with areas of ~0.285 mm² surrounded by eight Pt-Ir metallized electrode sites. Prior to implantation, saline or glial cell line-derived neurotrophic factor (GDNF) was injected into nerve-guidance silicone conduits with or without an MSE. The MSE assembly or a nerve guidance conduit was implanted between the transected ends of the sciatic nerve in adult male Lewis rats. At 3 months post-operation, fiber counts were similar through both implant types. Likewise, stimulation of nerves regenerated through an MSE or an open silicone conduit evoked comparable muscle forces. These results showed that nerve regeneration was comparable through MSE transit zones and an open conduit. GDNF had a minimal positive effect on the quality and morphology of fibers regenerating through the MSE; thus, the MSE may reduce reliance on GDNF to augment axonal regeneration. Selective stimulation of several individual muscles was achieved through monopolar stimulation of individual electrode sites, suggesting that the MSE might be an optimal platform for functional neuromuscular stimulation. PMID:28008303

  5. Analysis of tractable distortion metrics for EEG compression applications.

    PubMed

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando

    2012-07-01

    Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, a global and relative indicator of the quality of reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulty interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and assess the distortion introduced by compression methods. The experiments conducted in this paper show that using the root-mean-square error as the target parameter in EEG compression allows both clinicians and scientists to infer whether the coding error is clinically acceptable or not, at no cost to the compression ratio.
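
    The two criteria contrasted above take only a few lines each; note that the PRD is relative while the RMSE stays in the signal's physical units (microvolts for EEG). The trace below is a synthetic stand-in:

    ```python
    import numpy as np

    def prd(original, reconstructed):
        """Percentage root-mean-square difference (relative)."""
        return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                               / np.sum(original ** 2))

    def rmse(original, reconstructed):
        """Root-mean-square error, in the signal's own units."""
        return np.sqrt(np.mean((original - reconstructed) ** 2))

    rng = np.random.default_rng(5)
    eeg = 50.0 * np.sin(np.linspace(0, 20 * np.pi, 2000)) + rng.normal(0, 5, 2000)
    recon = eeg + rng.normal(0, 2.0, 2000)   # stand-in for a lossy reconstruction
    print(f"PRD = {prd(eeg, recon):.2f} %, RMSE = {rmse(eeg, recon):.2f} uV")
    ```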

  6. A Survey of Terrain Modeling Technologies and Techniques

    DTIC Science & Technology

    2007-09-01

    Test planning, rehearsal, and distributed test events for Future Combat Systems (FCS) require... [remainder of abstract truncated; the extracted text consists of figure captions comparing errors of the DSM (original data, blue circles) and the DTM (bare earth, processed by Intermap, red squares) along five lines of control points, including line No. 729]

  7. Space-Time Joint Interference Cancellation Using Fuzzy-Inference-Based Adaptive Filtering Techniques in Frequency-Selective Multipath Channels

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng

    2006-12-01

    An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and the squared-error variation into a forgetting factor. For real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm with a fuzzy-inference-controlled step size. This receiver provides both fast convergence/tracking capability and small steady-state misadjustment compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms by at least 3 dB and 1.5 dB, respectively, in bit-error-rate (BER) for multipath fading channels.

  8. Interaction between drilled shaft and mechanically stabilized earth (MSE) wall : project summary.

    DOT National Transportation Integrated Search

    2015-08-31

    Drilled shafts are being constructed within the reinforced zone of mechanically stabilized earth (MSE) walls (Figure 1). The drilled shafts may be subjected to horizontal loads and push against the front of the wall. Distress of MSE wall panels has b...

  9. MSE wall void repair effect on corrosion of reinforcement - phase 2 : specialty fill materials, [summary].

    DOT National Transportation Integrated Search

    2015-06-01

    Ramps leading, for example, to overpasses or bridges are usually constructed using : mechanically stabilized earth (MSE) walls, earthworks retained by concrete walls. Because : MSE walls are reinforced with steel embedded in the fill, their fill is c...

  10. Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera

    NASA Astrophysics Data System (ADS)

    Rahman, Samiur; Ullah, Sana; Ullah, Sehat

    2018-01-01

    Obstacle detection can improve both the mobility and the safety of visually impaired people. In this paper, we present a mobile-camera system for visually impaired people. The proposed algorithm works in indoor environments and uses a very simple technique based on a few pre-stored floor images: every unique floor type in the environment is represented by a single stored reference image. The algorithm acquires an input image frame, selects a region of interest, and scans it for obstacles using the pre-stored floor images. It compares the present frame with the next frame and computes the mean square error between the two. If the mean square error is less than a threshold value α, there is no obstacle in the next frame. If it is greater than α, there are two possibilities: either there is an obstacle or the floor type has changed. To check whether the floor has changed, the algorithm computes the mean square error between the next frame and all stored floor images; if the minimum of these errors is less than α, the floor has changed, otherwise an obstacle exists. The proposed algorithm works in real time, and 96% accuracy has been achieved.
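
    The decision rule just described reduces to a few MSE comparisons. A compact sketch with a placeholder threshold and synthetic frames:

    ```python
    import numpy as np

    def mse(a, b):
        return np.mean((a.astype(float) - b.astype(float)) ** 2)

    def classify(prev_frame, next_frame, floor_refs, alpha=100.0):
        if mse(prev_frame, next_frame) <= alpha:
            return "no obstacle"
        # Large change: either the floor type changed or an obstacle appeared.
        if min(mse(next_frame, ref) for ref in floor_refs) <= alpha:
            return "floor changed"
        return "obstacle"

    rng = np.random.default_rng(6)
    floor = rng.integers(90, 110, (120, 160)).astype(np.uint8)  # reference floor
    obstacle_frame = floor.copy()
    obstacle_frame[40:80, 60:100] = 30                # dark object enters the scene
    print(classify(floor, obstacle_frame, [floor]))   # -> "obstacle"
    ```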

  11. Initial Operative Experience and Short-term Hearing Preservation Results With a Mid-scala Cochlear Implant Electrode Array.

    PubMed

    Svrakic, Maja; Roland, J Thomas; McMenomey, Sean O; Svirsky, Mario A

    2016-12-01

    To describe our initial operative experience and hearing preservation results with the Advanced Bionics (AB) Mid Scala Electrode (MSE). Retrospective review. Tertiary referral center. Sixty-three MSE implants in pediatric and adult patients were compared with age- and sex-matched 1j electrode implants from the same manufacturer. All patients were severely to profoundly deaf. Cochlear implantation with either the AB 1j electrode or the AB MSE. The MSE and 1j electrodes were compared in their angular depth of insertion and pre- to postoperative change in hearing thresholds. Hearing preservation was analyzed as a function of angular depth of insertion. Secondary outcome measures included operative time, incidence of abnormal intraoperative impedance and telemetry values, and incidence of postsurgical complications. Depth of insertion was similar for both electrodes, but was more consistent for the MSE array and more variable for the 1j array. Patients with MSE electrodes had better hearing preservation. Threshold shifts at four audiometric frequencies ranging from 250 to 2000 Hz were 10, 7, 2, and 6 dB smaller for the MSE electrode than for the 1j (p < 0.05). Hearing preservation at low frequencies was worse with deeper insertion, regardless of array. Secondary outcome measures were similar for both electrodes. The MSE electrode resulted in more consistent insertion depth and somewhat better hearing preservation than the 1j electrode. Differences in other surgical outcome measures were small or unlikely to have a meaningful effect.

  12. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    NASA Astrophysics Data System (ADS)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities for an organization. The selection of an appropriate forecasting method is also important, but the percentage error of a method matters more if decision makers are to make the right choice. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least-squares method resulted in a percentage error of 9.77%, and the least-squares method was therefore adopted for time series and trend data.
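
    For reference, the two error measures named above, applied to a least-squares (linear trend) forecast of a made-up series:

    ```python
    import numpy as np

    actual = np.array([112.0, 118.0, 132.0, 129.0, 141.0, 148.0, 155.0])
    t = np.arange(len(actual))
    slope, intercept = np.polyfit(t, actual, 1)   # least-squares trend line
    forecast = slope * t + intercept

    mad = np.mean(np.abs(actual - forecast))                      # Mean Absolute Deviation
    mape = 100.0 * np.mean(np.abs((actual - forecast) / actual))  # MAPE, %
    print(f"MAD = {mad:.2f}, MAPE = {mape:.2f} %")
    ```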

  13. Automatic Classification of Artifactual ICA-Components for Artifact Removal in EEG Signals

    PubMed Central

    2011-01-01

    Background Artifacts contained in EEG recordings hamper both the visual interpretation by experts and the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts. Methods We propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency, spatial, and temporal domains. A subject-independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand-labeled by experts as artifactual or brain sources and tested on 1080 new components of RT data from the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used data with different channel setups and from new subjects. Results Based on six features only, the optimized linear classifier performed on a level with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components. Conclusions We propose a universal and efficient classifier of ICA components for the subject-independent removal of artifacts from EEG data. Based on linear methods, it is applicable to different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye and muscle artifacts. Its performance and generalization ability are demonstrated on data from different EEG studies. PMID:21810266

  14. MO-FG-CAMPUS-JeP1-05: Water Equivalent Path Length Calculations Using Scatter-Corrected Head and Neck CBCT Images to Evaluate Patients for Adaptive Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J; Park, Y; Sharp, G

    Purpose: To establish a method to evaluate the dosimetric impact of anatomic changes in head and neck patients during proton therapy by using scatter-corrected cone-beam CT (CBCT) images. Methods: The water equivalent path length (WEPL) was calculated to the distal edge of PTV contours by using tomographic images available for six head and neck patients who received photon therapy. The proton range variation was measured by calculating the difference between the distal WEPLs calculated with the planning CT and weekly treatment CBCT images. By performing an automatic rigid registration, a six degrees-of-freedom (DOF) correction was made to the CBCT images to account for the patient setup uncertainty. For accurate WEPL calculations, an existing CBCT scatter correction algorithm, whose performance was already proven for phantom images, was calibrated for head and neck patient images. Specifically, two different image similarity measures, mutual information (MI) and mean square error (MSE), were tested for the deformable image registration (DIR) in the CBCT scatter correction algorithm. Results: The impact of weight loss was reflected in the distal WEPL differences, with the automatic rigid registration reducing the influence of patient setup uncertainty on the WEPL calculation results. The WEPL difference averaged over the distal area was 2.9 ± 2.9 mm across all fractions of the six patients, and its maximum, mostly found at the last available fraction, was 6.2 ± 3.4 mm. The MSE-based DIR successfully registered each treatment CBCT image to the planning CT image. On the other hand, the MI-based DIR deformed the skin voxels in the planning CT image to the immobilization mask in the treatment CBCT image, most of which was cropped out of the planning CT image. Conclusion: The dosimetric impact of anatomic changes was evaluated by calculating the distal WEPL difference with the existing scatter-correction algorithm appropriately calibrated. Jihun Kim, Yang-Kyun Park, Gregory Sharp, and Brian Winey have received grant support from the NCI Federal Share of program income earned by Massachusetts General Hospital on C06 CA059267, Proton Therapy Research and Treatment Center.

  15. Artificial neural network-genetic algorithm based optimization for the adsorption of methylene blue and brilliant green from aqueous solution by graphite oxide nanoparticle.

    PubMed

    Ghaedi, M; Zeinali, N; Ghaedi, A M; Teimuori, M; Tashkhourian, J

    2014-05-05

    In this study, graphite oxide (GO) nanoparticles were synthesized according to Hummers' method and subsequently used for the removal of methylene blue (MB) and brilliant green (BG). Detailed information about the structure and physicochemical properties of GO was obtained by techniques such as XRD and FTIR analysis. The influence of solution pH, initial dye concentration, contact time and adsorbent dosage was examined in batch mode, and the optimum conditions were set as pH = 7.0, 2 mg of GO and 10 min contact time. Fitting equilibrium isotherm models to the adsorption capacities of GO showed that the Langmuir model best represents the experimental data, with maximum adsorption capacities of 476.19 and 416.67 for the MB and BG dyes in single solution. Analysis of the adsorption rate at various stirring times shows that adsorption of both dyes followed a pseudo-second-order kinetic model in combination with an interparticle diffusion model. Subsequently, the adsorption data were modeled with an artificial neural network to evaluate and obtain the conditions for fast and efficient removal of the dyes. A three-layer artificial neural network (ANN) model is applicable for accurate prediction of the dye removal percentage from aqueous solution by GO, based on 336 experimental data points. The network was trained using the experimental data obtained at optimum pH with different GO amounts (0.002-0.008 g) and 5-40 mg/L of both dyes over contact times of 0.5-30 min. The ANN model was able to predict the removal efficiency with the Levenberg-Marquardt algorithm (LMA), a linear transfer function (purelin) at the output layer and a tangent sigmoid transfer function (tansig) at the hidden layer, with 10 and 11 neurons for the MB and BG dyes, respectively. A minimum mean squared error (MSE) of 0.0012 and coefficient of determination (R²) of 0.982 were found for prediction and modeling of MB removal, while the respective values for BG were an MSE of 0.001 and an R² of 0.981. The ANN model results show good agreement with the experimental data. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Symmetric encryption algorithms using chaotic and non-chaotic generators: A review

    PubMed Central

    Radwan, Ahmed G.; AbdElHaleem, Sherif H.; Abd-El-Hafiz, Salwa K.

    2015-01-01

    This paper summarizes the symmetric image encryption results of 27 different algorithms, which include substitution-only, permutation-only or both phases. The cores of these algorithms are based on several discrete chaotic maps (Arnold’s cat map and a combination of three generalized maps), one continuous chaotic system (Lorenz) and two non-chaotic generators (fractals and chess-based algorithms). Each algorithm has been analyzed by the correlation coefficients between pixels (horizontal, vertical and diagonal), differential attack measures, Mean Square Error (MSE), entropy, sensitivity analyses and the 15 standard tests of the National Institute of Standards and Technology (NIST) SP-800-22 statistical suite. The analyzed algorithms include a set of new image encryption algorithms based on non-chaotic generators, either using substitution only (using fractals) and permutation only (chess-based) or both. Moreover, two different permutation scenarios are presented where the permutation-phase has or does not have a relationship with the input image through an ON/OFF switch. Different encryption-key lengths and complexities are provided from short to long key to persist brute-force attacks. In addition, sensitivities of those different techniques to a one bit change in the input parameters of the substitution key as well as the permutation key are assessed. Finally, a comparative discussion of this work versus many recent research with respect to the used generators, type of encryption, and analyses is presented to highlight the strengths and added contribution of this paper. PMID:26966561

  17. A High-Resolution Aerosol Retrieval Method for Urban Areas Using MISR Data

    NASA Astrophysics Data System (ADS)

    Moon, T.; Wang, Y.; Liu, Y.; Yu, B.

    2012-12-01

    Satellite-retrieved Aerosol Optical Depth (AOD) can provide a cost-effective way to monitor particulate air pollution without using expensive ground measurement sensors. One current state-of-the-art AOD retrieval method is NASA's Multi-angle Imaging SpectroRadiometer (MISR) operational algorithm, which has a spatial resolution of 17.6 km x 17.6 km. While the MISR baseline scheme already leads to exciting research opportunities to study particle compositions at the regional scale, its spatial resolution is too coarse for analyzing urban areas, where the AOD level has stronger spatial variations. We develop a novel high-resolution AOD retrieval algorithm that still uses MISR's radiance observations but has a resolution of 4.4 km x 4.4 km. We achieve the high-resolution AOD retrieval by implementing a hierarchical Bayesian model and a Markov chain Monte Carlo (MCMC) inference method. Our algorithm not only improves the spatial resolution, but also extends the coverage of AOD retrieval and provides additional composition information on the aerosol components that contribute to the AOD. We validate our method using data from NASA's recent DISCOVER-AQ mission, which contains ground-measured AOD values for the Washington DC and Baltimore area. The validation result shows that, compared to the operational MISR retrievals, our scheme has 41.1% more AOD retrieval coverage for the DISCOVER-AQ data points and a 24.2% improvement in mean-squared error (MSE) with respect to the AERONET ground measurements.

  18. Artificial neural network (ANN) approach for modeling of Pb(II) adsorption from aqueous solution by Antep pistachio (Pistacia Vera L.) shells.

    PubMed

    Yetilmezsoy, Kaan; Demirel, Sevgi

    2008-05-30

    A three-layer artificial neural network (ANN) model was developed to predict the efficiency of Pb(II) ion removal from aqueous solution by Antep pistachio (Pistacia Vera L.) shells, based on 66 experimental sets obtained in a laboratory batch study. The effects of operational parameters such as adsorbent dosage, initial concentration of Pb(II) ions, initial pH, operating temperature, and contact time were studied to optimise the conditions for maximum removal of Pb(II) ions. On the basis of the batch test results, the optimal operating conditions were determined to be an initial pH of 5.5, an adsorbent dosage of 1.0 g, an initial Pb(II) concentration of 30 ppm, and a temperature of 30 °C. Experimental results showed that a contact time of 45 min was generally sufficient to achieve equilibrium. After backpropagation (BP) training combined with principal component analysis (PCA), the ANN model was able to predict adsorption efficiency with a tangent sigmoid transfer function (tansig) at the hidden layer, with 11 neurons, and a linear transfer function (purelin) at the output layer. The Levenberg-Marquardt algorithm (LMA) was found to be the best of 11 BP algorithms, with a minimum mean squared error (MSE) of 0.000227875. The linear regression between the network outputs and the corresponding targets was proven to be satisfactory, with a correlation coefficient of about 0.936 for the five model variables used in this study.
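
    A hedged sketch of the reported architecture (one hidden layer of 11 tansig, i.e. tanh, neurons and a purelin, i.e. linear, output) using scikit-learn. The original model was trained with Levenberg-Marquardt, which scikit-learn does not provide, so L-BFGS stands in, and the data are random placeholders rather than the 66 laboratory sets:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)
    # Columns: adsorbent dosage, initial Pb(II) conc., pH, temperature, contact time
    X = rng.uniform([0.1, 5, 2, 20, 5], [1.5, 50, 7, 50, 120], size=(66, 5))
    y = rng.uniform(40, 100, size=66)   # removal efficiency, % (placeholder)

    X_std = StandardScaler().fit_transform(X)
    net = MLPRegressor(hidden_layer_sizes=(11,), activation="tanh",
                       solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(X_std, y)
    mse = np.mean((net.predict(X_std) - y) ** 2)
    print(f"training MSE = {mse:.4f}")
    ```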

  19. SPECT reconstruction using DCT-induced tight framelet regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej

    2015-03-01

    Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Images reconstructed using the proposed method exhibited better noise suppression and improved lesion conspicuity compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm DCT wavelet-frame regularizer shows promise for SPECT image reconstruction with the PAPA method.
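
    As a toy illustration of why an ℓ1 penalty on DCT coefficients enforces smoothness (this is not the PAPA algorithm): for an orthonormal transform, the proximal step of the ℓ1 term is soft-thresholding in the transform domain. The threshold and phantom below are placeholders:

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_l1_prox(image, threshold):
        """Soft-threshold the orthonormal DCT coefficients of an image."""
        coeffs = dctn(image, norm="ortho")
        shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)
        return idctn(shrunk, norm="ortho")

    rng = np.random.default_rng(9)
    phantom = np.zeros((64, 64))
    phantom[24:40, 24:40] = 1.0                        # hot "lesion" block
    noisy = phantom + rng.normal(0, 0.2, phantom.shape)
    denoised = dct_l1_prox(noisy, threshold=0.1)
    print("MSE noisy:   ", np.mean((noisy - phantom) ** 2))
    print("MSE denoised:", np.mean((denoised - phantom) ** 2))
    ```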

  20. Improved recovery of the hemodynamic response in Diffuse Optical Imaging using short optode separations and state-space modeling

    PubMed Central

    Gagnon, Louis; Perdue, Katherine; Greve, Douglas N.; Goldenholz, Daniel; Kaskhedikar, Gayatri; Boas, David A.

    2011-01-01

    Diffuse Optical Imaging (DOI) allows the recovery of the hemodynamic response associated with evoked brain activity. The signal is contaminated with systemic physiological interference which occurs in the superficial layers of the head as well as in the brain tissue. The back-reflection geometry of the measurement makes the DOI signal strongly contaminated by systemic interference occurring in the superficial layers. A recent development has been the use of signals from small source-detector separation (1 cm) optodes as regressors. Since those additional measurements are mainly sensitive to superficial layers in adult humans, they help in removing the systemic interference present in longer separation measurements (3 cm). Encouraged by those findings, we developed a dynamic estimation procedure to remove global interference using small optode separations and to estimate simultaneously the hemodynamic response. The algorithm was tested by recovering a simulated synthetic hemodynamic response added over baseline DOI data acquired from 6 human subjects at rest. The performance of the algorithm was quantified by the Pearson R² coefficient and the mean square error (MSE) between the recovered and the simulated hemodynamic responses. Our dynamic estimator was also compared with a static estimator and the traditional adaptive filtering method. We observed a significant improvement (two-tailed paired t-test, p < 0.05) in both HbO and HbR recovery using our Kalman filter dynamic estimator compared to the traditional adaptive filter, the static estimator and the standard GLM technique. PMID:21385616

  1. Multi-object tracking of human spermatozoa

    NASA Astrophysics Data System (ADS)

    Sørensen, Lauge; Østergaard, Jakob; Johansen, Peter; de Bruijne, Marleen

    2008-03-01

    We propose a system for tracking of human spermatozoa in phase-contrast microscopy image sequences. One of the main aims of a computer-aided sperm analysis (CASA) system is to automatically assess sperm quality based on spermatozoa motility variables. In our case, the problem of assessing sperm quality is cast as a multi-object tracking problem, where the objects being tracked are the spermatozoa. The system combines a particle filter and Kalman filters for robust motion estimation of the spermatozoa tracks. Further, the combinatorial aspect of assigning observations to labels in the particle filter is formulated as a linear assignment problem solved using the Hungarian algorithm on a rectangular cost matrix, making the algorithm capable of handling missing or spurious observations. The costs are calculated using hidden Markov models that express the plausibility of an observation being the next position in the track history of the particle labels. Observations are extracted using a scale-space blob detector utilizing the fact that the spermatozoa appear as bright blobs in a phase-contrast microscope. The output of the system is the complete motion track of each of the spermatozoa. Based on these tracks, different CASA motility variables can be computed, for example curvilinear velocity or straight-line velocity. The performance of the system is tested on three different phase-contrast image sequences of varying complexity, both by visual inspection of the estimated spermatozoa tracks and by measuring the mean squared error (MSE) between the estimated spermatozoa tracks and manually annotated tracks, showing good agreement.
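
    The label-to-observation assignment step described above can be reproduced with SciPy's Hungarian-algorithm solver on a rectangular cost matrix; plain Euclidean distances stand in here for the paper's HMM-based plausibility costs:

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    tracks = np.array([[10.0, 12.0], [40.0, 42.0], [70.0, 75.0]])  # last known positions
    observations = np.array([[11.0, 13.0], [69.0, 74.0]])          # new blob detections

    # Rectangular cost matrix: rows are tracks, columns are observations.
    cost = np.linalg.norm(tracks[:, None, :] - observations[None, :, :], axis=2)
    row_idx, col_idx = linear_sum_assignment(cost)
    for r, c in zip(row_idx, col_idx):
        print(f"track {r} <- observation {c} (cost {cost[r, c]:.2f})")
    # Tracks left unmatched (track 1 here) correspond to missing observations.
    ```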

  2. Lossless and lossy compression of quantitative phase images of red blood cells obtained by digital holographic imaging.

    PubMed

    Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu

    2016-12-20

    In this paper, we evaluate lossless and lossy compression techniques for quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and stored in 16-bit unsigned integer format. In the lossless case, predictive coding of JPEG lossless (JPEG-LS), JPEG2000, and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other. It turns out that JP2k outperforms the other methods by achieving the best CR. In the lossy case, JP2k and JP3D with different CRs are examined. Because lossy compression discards some data, the degradation level is measured by comparing different morphological and biochemical parameters of the RBCs before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index, and the biochemical cell parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JP2k outperforms JP3D not only in terms of mean square error (MSE) as CR increases, but also in compression time for lossy compression. In addition, our compression results with both algorithms demonstrate that with high CR values the three-dimensional profile of the RBC can be preserved, and morphological and biochemical parameters can still be within the range of reported values.

  3. Prediction of composite fatigue life under variable amplitude loading using artificial neural network trained by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Rohman, Muhamad Nur; Hidayat, Mas Irfan P.; Purniawan, Agung

    2018-04-01

    Neural networks (NN) have been widely used for fatigue life prediction. For polymeric-base composites, an NN model must be developed that copes with limited fatigue data and can predict the fatigue life under varying stress amplitudes at different stress ratios. In the present paper, a Multilayer-Perceptron (MLP) neural network model is developed, and a Genetic Algorithm is employed to optimize the respective weights of the NN for prediction of the fatigue life of polymeric-base composite materials under variable amplitude loading. Simulation results for two different composite systems, E-glass fabrics/epoxy (layup [(±45)/(0)2]S) and E-glass/polyester (layup [90/0/±45/0]S), show that NN models trained with fatigue data from only two stress ratios, representing limited fatigue data, can predict another four and seven stress ratios, respectively, with high accuracy of fatigue life prediction. The accuracy of the NN prediction was quantified by the small value of the mean square error (MSE). When 33% of the total fatigue data was used for training, the NN model produced high accuracy for all stress ratios. With less fatigue data during training (22% of the total), the NN model still produced a high coefficient of determination between the predicted results and those obtained by experiment.

  4. Impact of meteorological factors on the incidence of bacillary dysentery in Beijing, China: A time series analysis (1970-2012).

    PubMed

    Yan, Long; Wang, Hong; Zhang, Xuan; Li, Ming-Yue; He, Juan

    2017-01-01

    The influence of meteorological variables on the transmission of bacillary dysentery (BD) is an under-investigated topic, and effective forecasting models as a public health tool are lacking. This paper aimed to quantify the relationship between meteorological variables and BD cases in Beijing and to establish an effective forecasting model. A time series analysis was conducted in the Beijing area based upon monthly data on weather variables (i.e., temperature, rainfall, relative humidity, vapor pressure, and wind speed) and on the number of BD cases during the period 1970-2012. Autoregressive integrated moving average models with explanatory variables (ARIMAX) were built based on the data from 1970 to 2004. Prediction of monthly BD cases from 2005 to 2012 was made using the established models, and the prediction accuracy was evaluated by the mean square error (MSE). First, temperature with 2-month and 7-month lags and rainfall with a 12-month lag were found to be positively correlated with the number of BD cases in Beijing. Second, the ARIMAX model with covariates of temperature at a 7-month lag (β = 0.021, 95% confidence interval (CI): 0.004-0.038) and rainfall at a 12-month lag (β = 0.023, 95% CI: 0.009-0.037) displayed the highest prediction accuracy. The ARIMAX model developed in this study showed an accurate goodness of fit and precise short-term prediction accuracy, which would help government departments take early public health measures to prevent and control possible BD epidemics.
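
    An ARIMAX-style fit with lagged meteorological covariates can be sketched with statsmodels' SARIMAX. The ARIMA order, the synthetic data, and the resulting coefficients are placeholders that only mirror the abstract's lag structure (temperature at lag 7, rainfall at lag 12), not the authors' fitted model:

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(8)
    n = 420                                  # monthly data, 35 years
    idx = pd.date_range("1970-01-01", periods=n, freq="MS")
    temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 1, n)
    rain = np.clip(rng.normal(50, 20, n), 0, None)

    df = pd.DataFrame(index=idx)
    df["temp_lag7"] = pd.Series(temp, index=idx).shift(7)    # temperature, 7-month lag
    df["rain_lag12"] = pd.Series(rain, index=idx).shift(12)  # rainfall, 12-month lag
    df["cases"] = 200 + 2.0 * df["temp_lag7"] + 0.5 * df["rain_lag12"] \
                  + rng.normal(0, 10, n)
    df = df.dropna()

    res = SARIMAX(df["cases"], exog=df[["temp_lag7", "rain_lag12"]],
                  order=(1, 0, 1)).fit(disp=False)
    mse = np.mean((df["cases"] - res.fittedvalues) ** 2)
    print(res.params[["temp_lag7", "rain_lag12"]])
    print("in-sample MSE:", round(mse, 2))
    ```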

  5. Remaining lifetime modeling using State-of-Health estimation

    NASA Astrophysics Data System (ADS)

    Beganovic, Nejra; Söffker, Dirk

    2017-08-01

    Technical systems and their components undergo gradual degradation over time. Continuous degradation is reflected in decreased system reliability and unavoidably leads to system failure. Therefore, continuous evaluation of State-of-Health (SoH) is needed to guarantee at least the lifetime specified by the manufacturer, or, even better, to extend it. A precondition for lifetime extension is accurate estimation of SoH as well as estimation and prediction of Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. This contribution discusses modeling and the selection of suitable lifetime models from a database based on current SoH conditions. Its main contribution is the development of new modeling strategies capable of describing complex relations between measurable system variables, related system degradation, and RUL. Two approaches, with their accompanying advantages and disadvantages, are introduced and compared. Both approaches can model stochastic aging processes of a system by simultaneous adaptation of RUL models to the current SoH. The first approach requires a priori knowledge about aging processes in the system and accurate estimation of SoH; SoH estimation here is conditioned on tracking the actual damage accumulated in the system, so that particular model parameters are defined according to a priori assumptions about the system's aging. Prediction accuracy in this case depends strongly on accurate SoH estimation but allows a high number of degrees of freedom. The second approach does not require a priori knowledge about the system's aging, as particular model parameters are defined by a multi-objective optimization procedure; its prediction accuracy does not depend strongly on the estimated SoH, and the model has fewer degrees of freedom. Both approaches rely on previously developed lifetime models, each corresponding to a predefined SoH. In the first approach, model selection is aided by a state-machine-based algorithm; in the second, model selection is conditioned on the exceedance of predefined thresholds. The approach is applied to data generated from tribological systems. By calculating the Root Squared Error (RSE), Mean Squared Error (MSE), and Absolute Error (ABE), the accuracy of the proposed models/approaches is discussed along with related advantages and disadvantages. The approach is verified using cross-fold validation, exchanging training and test data. The newly introduced data-driven parametric models can be easily established, providing detailed information about remaining useful/consumed lifetime, valid for systems with constant load but stochastically occurring damage.

  6. Short message service prompted mouth self-examination in oral cancer patients as an alternative to frequent hospital-based surveillance.

    PubMed

    Vaishampayan, Sagar; Malik, Akshat; Pawar, Prashant; Arya, Kavi; Chaturvedi, Pankaj

    2017-01-01

    Oral squamous cell carcinoma (OSCC) is among the commonest cancers in the Indian subcontinent. After treatment, these patients require frequent follow-up to look for recurrences or second primaries. Mouth Self-Examination (MSE) has great potential at all levels of oral cancer prevention; however, compliance with self-examination has been reported to be poor. The mobile phone is a cheap and effective way to reach out to people, and the Short Message Service (SMS), being extremely popular, can be a very effective motivational and interactive tool in a health care setting. We aimed to identify, in adequately treated OSCC patients, the influence of health-provider-initiated SMS on compliance with MSE, and to establish the efficacy of MSE by comparing patients' MSE interpretations, sent as replies to the SMS, with experts' opinions on clinical examination status during follow-up. We conclude that MSE can be very useful in adequately treated OSCC patients for evaluating disease status. All treated OSCC patients must be adequately educated in MSE as an integral part of the treatment and follow-up protocol by the health provider facility. Health-provider-generated SMS reminders do improve motivation and compliance towards MSE but do not seem to reduce follow-up dropouts in a large and diverse population like that of India.

  7. Tokamak-independent software analysis suite for multi-spectral line-polarization MSE diagnostics

    DOE PAGES

    Scott, S. D.; Mumgaard, R. T.

    2016-07-20

    A tokamak-independent analysis suite has been developed to process data from Motional Stark Effect (mse) diagnostics. The software supports multi-spectral line-polarization mse diagnostics which simultaneously measure emission at the mse σ and π lines as well as at two "background" wavelengths that are displaced from the mse spectrum by a few nanometers. This analysis accurately estimates the amplitude of partially polarized background light at the σ and π wavelengths even in situations where the background light changes rapidly in time and space, a distinct improvement over traditional "time-interpolation" background estimation. The signal amplitude at many frequencies is computed using a numerical-beat algorithm which allows the retardance of the mse photo-elastic modulators (pem's) to be monitored during routine operation. It also allows the use of summed intensities at multiple frequencies in the calculation of polarization direction, which increases the effective signal strength and reduces sensitivity to pem retardance drift. The software allows the polarization angles to be corrected for calibration drift using a system that illuminates the mse diagnostic with polarized light at four known polarization angles within ten seconds of a plasma discharge. The software suite is modular, parallelized, and portable to other facilities.

  8. Tokamak-independent software analysis suite for multi-spectral line-polarization MSE diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, S. D.; Mumgaard, R. T.

    A tokamak-independent analysis suite has been developed to process data from Motional Stark Effect (mse) diagnostics. The software supports multi-spectral line-polarization mse diagnostics which simultaneously measure emission at the mse σ and π lines as well as at two "background" wavelengths that are displaced from the mse spectrum by a few nanometers. This analysis accurately estimates the amplitude of partially polarized background light at the σ and π wavelengths even in situations where the background light changes rapidly in time and space, a distinct improvement over traditional "time-interpolation" background estimation. The signal amplitude at many frequencies is computed using a numerical-beat algorithm which allows the retardance of the mse photo-elastic modulators (pem's) to be monitored during routine operation. It also allows the use of summed intensities at multiple frequencies in the calculation of polarization direction, which increases the effective signal strength and reduces sensitivity to pem retardance drift. The software allows the polarization angles to be corrected for calibration drift using a system that illuminates the mse diagnostic with polarized light at four known polarization angles within ten seconds of a plasma discharge. The software suite is modular, parallelized, and portable to other facilities.

  9. Mouth self-examination as a screening tool for oral cancer in a high-risk group of patients with Fanconi anemia.

    PubMed

    Furquim, Camila Pinheiro; Pivovar, Allana; Cavalcanti, Laura Grein; Araújo, Renata Fuentes; Sales Bonfim, Carmem Maria; Torres-Pereira, Cassius Carvalho

    2014-10-01

    Oral cancer usually occurs at accessible sites, enabling early detection by visual inspection. Fanconi anemia (FA) is a recessive disorder associated with a high risk of developing head and neck solid tumors. The aim of this study was to assess the ability to perform mouth self-examination (MSE) in these patients. A total of 44 patients with FA, aged ≥ 18 years, were given a self-reported questionnaire to collect sociodemographic data and information about health-related behaviors and oral cancer awareness. They were asked to perform MSE, which was evaluated using criteria for mucosal visualization and retracting ability. Subsequently, an oral medicine specialist clinically examined all participants, and these findings were considered to be the gold standard. The sensitivity and specificity values of MSE were 43% and 44%, respectively. The MSE accuracy was 43%. Most patients (73%) reported that MSE was easy or very easy, although 75% showed insufficient performance. The accuracy of MSE alone is not sufficient to indicate whether MSE should be recommended as a strategy to prevent oral cancer in patients with FA. Nevertheless, the present results indicate that this inexpensive technique could be used as a tool for early detection of cancer in these patients. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Isothiocyanate-enriched moringa seed extract alleviates ulcerative colitis symptoms in mice

    PubMed Central

    Wu, Alex G.; Jaja-Chimedza, Asha; Graf, Brittany L.; Waterman, Carrie; Verzi, Michael P.; Raskin, Ilya

    2017-01-01

    Moringa (Moringa oleifera Lam.) seed extract (MSE) has anti-inflammatory and antioxidant activities. We investigated the effects of MSE enriched in moringa isothiocyanate-1 (MIC-1), its putative bioactive compound, on ulcerative colitis (UC) and its anti-inflammatory/antioxidant mechanism, likely mediated through the Nrf2 signaling pathway. Dextran sulfate sodium (DSS)-induced acute (n = 8/group; 3% DSS for 5 d) and chronic (n = 6/group; cyclic rotations of 2.5% DSS/water for 30 d) UC was induced in mice that were assigned to 4 experimental groups: healthy control (water/vehicle), disease control (DSS/vehicle), MSE treatment (DSS/MSE), or 5-aminosalicylic acid (5-ASA) treatment (positive control; DSS/5-ASA). Following UC induction, water (vehicle), 150 mg/kg MSE, or 50 mg/kg 5-ASA were orally administered for 1 or 2 weeks. Disease activity index (DAI), spleen/colon sizes, and colonic histopathology were measured. From colon and/or fecal samples, pro-inflammatory biomarkers, tight-junction proteins, and Nrf2-mediated enzymes were analyzed at the protein and/or gene expression level. Compared to disease control, MSE decreased DAI scores, increased colon lengths, and decreased colon weight/length ratios in both UC models. MSE also reduced colonic inflammation/damage and (modestly) histopathological scores in acute UC. MSE decreased colonic secretions of pro-inflammatory keratinocyte-derived cytokine (KC), tumor necrosis factor (TNF)-α, nitric oxide (NO), and myeloperoxidase (MPO) in acute and chronic UC; reduced fecal lipocalin-2 in acute UC; downregulated gene expression of pro-inflammatory interleukin (IL)-1, IL-6, TNF-α, and inducible nitric oxide synthase (iNOS) in acute UC; upregulated expression of claudin-1 and ZO-1 in acute and chronic UC; and upregulated GSTP1, an Nrf2-mediated phase II detoxifying enzyme, in chronic UC. MSE was effective in mitigating UC symptoms and reducing UC-induced colonic pathologies, likely by suppressing pro-inflammatory biomarkers and increasing tight-junction proteins. This effect is consistent with the Nrf2-mediated anti-inflammatory/antioxidant signaling pathway documented for other isothiocyanates similar to MIC-1. Therefore, MSE, enriched with MIC-1, may be useful in the prevention and treatment of UC. PMID:28922365

  11. Isothiocyanate-enriched moringa seed extract alleviates ulcerative colitis symptoms in mice.

    PubMed

    Kim, Youjin; Wu, Alex G; Jaja-Chimedza, Asha; Graf, Brittany L; Waterman, Carrie; Verzi, Michael P; Raskin, Ilya

    2017-01-01

    Moringa (Moringa oleifera Lam.) seed extract (MSE) has anti-inflammatory and antioxidant activities. We investigated the effects of MSE enriched in moringa isothiocyanate-1 (MIC-1), its putative bioactive compound, on ulcerative colitis (UC) and its anti-inflammatory/antioxidant mechanism, likely mediated through the Nrf2 signaling pathway. Dextran sulfate sodium (DSS)-induced acute (n = 8/group; 3% DSS for 5 d) and chronic (n = 6/group; cyclic rotations of 2.5% DSS/water for 30 d) UC was induced in mice that were assigned to 4 experimental groups: healthy control (water/vehicle), disease control (DSS/vehicle), MSE treatment (DSS/MSE), or 5-aminosalicylic acid (5-ASA) treatment (positive control; DSS/5-ASA). Following UC induction, water (vehicle), 150 mg/kg MSE, or 50 mg/kg 5-ASA were orally administered for 1 or 2 weeks. Disease activity index (DAI), spleen/colon sizes, and colonic histopathology were measured. From colon and/or fecal samples, pro-inflammatory biomarkers, tight-junction proteins, and Nrf2-mediated enzymes were analyzed at the protein and/or gene expression level. Compared to disease control, MSE decreased DAI scores, increased colon lengths, and decreased colon weight/length ratios in both UC models. MSE also reduced colonic inflammation/damage and (modestly) histopathological scores in acute UC. MSE decreased colonic secretions of pro-inflammatory keratinocyte-derived cytokine (KC), tumor necrosis factor (TNF)-α, nitric oxide (NO), and myeloperoxidase (MPO) in acute and chronic UC; reduced fecal lipocalin-2 in acute UC; downregulated gene expression of pro-inflammatory interleukin (IL)-1, IL-6, TNF-α, and inducible nitric oxide synthase (iNOS) in acute UC; upregulated expression of claudin-1 and ZO-1 in acute and chronic UC; and upregulated GSTP1, an Nrf2-mediated phase II detoxifying enzyme, in chronic UC. MSE was effective in mitigating UC symptoms and reducing UC-induced colonic pathologies, likely by suppressing pro-inflammatory biomarkers and increasing tight-junction proteins. This effect is consistent with the Nrf2-mediated anti-inflammatory/antioxidant signaling pathway documented for other isothiocyanates similar to MIC-1. Therefore, MSE, enriched with MIC-1, may be useful in the prevention and treatment of UC.

  12. Suitability of Spatial Interpolation Techniques in Varying Aquifer Systems of a Basaltic Terrain for Monitoring Groundwater Availability

    NASA Astrophysics Data System (ADS)

    Katpatal, Y. B.; Paranjpe, S. V.; Kadu, M. S.

    2017-12-01

    Geological formations act as aquifer systems, and variability in the hydrological properties of aquifers controls groundwater occurrence and dynamics. To understand the groundwater availability of any terrain, spatial interpolation techniques are widely used. It has been observed that, with varying hydrogeological conditions, even in a geologically homogeneous setup, there are large variations in observed groundwater levels. Hence, the accuracy of groundwater estimation depends on the use of appropriate interpolation techniques. The study area of the present study is the Venna Basin of Maharashtra State, India, a basaltic terrain with four different types of basaltic layers laid down horizontally: weathered vesicular basalt, weathered and fractured basalt, highly weathered unclassified basalt, and hard massive basalt. The groundwater levels vary with topography, as the different types of basalt are present at varying depths. Local stratigraphic profiles were generated for the different basaltic terrains. The present study aims to interpolate the groundwater levels within the basin and to check the correlation between the estimated and the observed values. Groundwater levels for 125 observation wells situated in these different basaltic terrains over 20 years (1995–2015) were used in the study. The interpolation was carried out in a Geographical Information System (GIS) using ordinary kriging and the Inverse Distance Weighted (IDW) method. A comparative analysis of the interpolated groundwater levels was carried out to validate the recorded groundwater level dataset, and the results were related to the various types of basaltic terrain forming the aquifer systems in the basin. Mean Error (ME) and Mean Square Error (MSE) were computed and compared. It was observed that a good correlation does not exist between the values interpolated by the two methods. The study concludes that in crystalline basaltic terrain, interpolation methods must be verified against changes in the geological profiles.
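
    The kriging/IDW comparison above can be sketched as follows: a minimal inverse-distance-weighting interpolator evaluated by leave-one-out error against the observed values, which yields the ME and MSE figures of merit. The well coordinates, water levels, and power parameter are illustrative assumptions, and kriging is omitted for brevity.

        import numpy as np

        def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
            # Inverse Distance Weighting: weights fall off as 1/d**power.
            d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
            w = 1.0 / (d + eps) ** power
            return (w @ z_known) / w.sum(axis=1)

        # Leave-one-out check of interpolated vs. observed groundwater levels.
        rng = np.random.default_rng(1)
        xy = rng.uniform(0, 10, size=(125, 2))           # 125 observation wells
        z = 50 + 2 * xy[:, 0] + rng.normal(0, 1, 125)    # synthetic water levels
        errors = []
        for i in range(len(z)):
            mask = np.arange(len(z)) != i
            errors.append(z[i] - idw(xy[mask], z[mask], xy[i:i+1])[0])
        errors = np.array(errors)
        print("ME =", errors.mean(), "MSE =", (errors**2).mean())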

  13. Seasonal variation in onset and relapse of IBD and a model to predict the frequency of onset, relapse, and severity of IBD based on artificial neural network.

    PubMed

    Peng, Jiang Chen; Ran, Zhi Hua; Shen, Jun

    2015-09-01

    Previous research has yielded conflicting data as to whether the natural history of inflammatory bowel disease (IBD) follows a seasonal pattern. The purpose of this study was (1) to determine whether the frequency of onset and relapse of IBD follows a seasonal pattern and (2) to establish a model to predict the frequency of onset, relapse, and severity of IBD from meteorological data using an artificial neural network (ANN). Patients diagnosed with ulcerative colitis (UC) or Crohn's disease (CD) between 2003 and 2011 were investigated according to the occurrence of onset and flares of symptoms. The expected onset or relapse was calculated on a monthly basis over the study period. For the ANN, patients from 2003 to 2010 were assigned to the training cohort and patients from 2011 to the validation cohort. Mean square error (MSE) and mean absolute percentage error (MAPE) were used to evaluate predictive accuracy. We found no seasonal pattern of onset (P = 0.248) or relapse (P = 0.394) among UC patients. However, the onset (P = 0.015) and relapse (P = 0.004) of CD were associated with a seasonal pattern, with a peak in July and August. The ANN had moderate accuracy in predicting the frequency of onset (MSE = 0.076, MAPE = 37.58%) and severity of IBD (MSE = 0.065, MAPE = 42.15%) but high accuracy in predicting the frequency of relapse of IBD (MSE = 0.009, MAPE = 17.1%). The frequency of onset and relapse in IBD showed seasonality only in CD, with a peak in July and August, but not in UC. ANN may have value in predicting the frequency of relapse among patients with IBD.
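
    For reference, the two accuracy measures used in this study can be computed as below; the monthly counts are hypothetical, not the study's data.

        import numpy as np

        def mse(y, y_hat):
            y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
            return np.mean((y - y_hat) ** 2)

        def mape(y, y_hat):
            y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
            return 100.0 * np.mean(np.abs((y - y_hat) / y))   # y must be nonzero

        # Hypothetical monthly relapse counts for a validation year.
        observed  = [4, 3, 5, 6, 7, 9, 12, 11, 8, 6, 5, 4]
        predicted = [4, 4, 5, 5, 8, 9, 11, 12, 8, 5, 5, 4]
        print(f"MSE = {mse(observed, predicted):.3f}, MAPE = {mape(observed, predicted):.1f}%")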

  14. Initial Operative Experience and Short Term Hearing Preservation Results with a Mid-Scala Cochlear Implant Electrode Array

    PubMed Central

    Svrakic, Maja; Roland, J. Thomas; McMenomey, Sean O.; Svirsky, Mario A.

    2016-01-01

    OBJECTIVE To describe our initial operative experience and hearing preservation results with the Advanced Bionics (AB) Mid Scala Electrode (MSE). STUDY DESIGN Retrospective review. SETTING Tertiary referral center. PATIENTS Sixty-three MSE implants in pediatric and adult patients were compared to age- and gender-matched 1j electrode implants from the same manufacturer. All patients were severely to profoundly deaf. INTERVENTION Cochlear implantation with either the AB 1j electrode or the AB MSE. MAIN OUTCOME MEASURES The MSE and 1j electrode were compared in their angular depth of insertion (aDOI) and pre- to post-operative change in hearing thresholds. Hearing preservation was analyzed as a function of aDOI. Secondary outcome measures included operative time, incidence of abnormal intraoperative impedance and telemetry values, and incidence of postsurgical complications. RESULTS Depth of insertion was similar for both electrodes, but was more consistent for the MSE array and more variable for the 1j array. Patients with MSE electrodes had better hearing preservation. Threshold shifts at four audiometric frequencies ranging from 250 to 2,000 Hz were 10 dB, 7 dB, 2 dB, and 6 dB smaller for the MSE electrode than for the 1j (p<0.05). Hearing preservation at low frequencies was worse with deeper insertion, regardless of array. Secondary outcome measures were similar for both electrodes. CONCLUSIONS The MSE electrode resulted in more consistent insertion depth and somewhat better hearing preservation than the 1j electrode. Differences in other surgical outcome measures were small or unlikely to have a meaningful effect. PMID:27755356

  15. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  16. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  17. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  18. 47 CFR 87.145 - Acceptability of transmitters for licensing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... square error which assumes zero error for the received ground earth station signal and includes the AES transmit/receive frequency reference error and the AES automatic frequency control residual errors.) The...

  19. Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malinowski, Kathleen T.; Fischell Department of Bioengineering, University of Maryland, College Park, MD; McAvoy, Thomas J.

    2012-04-01

    Purpose: To investigate the effect of tumor site, measurement precision, tumor-surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor-surrogate correlation and of the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased during the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor-surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3-3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.

  20. Refraction during incipient presbyopia: The Aston Longitudinal Assessment of Presbyopia (ALAP) study.

    PubMed

    Laughton, Deborah S; Sheppard, Amy L; Davies, Leon N

    To investigate non-cycloplegic changes in refractive error prior to the onset of presbyopia. The Aston Longitudinal Assessment of Presbyopia (ALAP) study is a prospective 2.5-year longitudinal study measuring objective refractive error with a binocular open-field WAM-5500 autorefractor at 6-month intervals in participants aged between 33 and 45 years. Of the 58 participants recruited, 51 (88%) completed the final visit. At baseline, 21 participants were myopic (MSE -3.25±2.28 DS; baseline age 38.6±3.1 years) and 30 were emmetropic (MSE -0.17±0.32 DS; baseline age 39.0±2.9 years). After 2.5 years, 10% of the myopic group experienced a hypermetropic shift (≥0.50 D), 5% a myopic shift (≥0.50 D), and 85% had no significant change in refraction (<0.50 D). In the emmetropic group, 10% experienced a hypermetropic shift (≥0.50 D), 3% a myopic shift (≥0.50 D), and 87% had no significant change in refraction (<0.50 D). In terms of astigmatism vectors, all measures other than J45 (p<0.001) remained invariant over the study period. The incidence of a myopic shift in refraction during incipient presbyopia does not appear to be as large as previously indicated by retrospective research. The changes in axis indicate that ocular astigmatism tends towards the against-the-rule direction with age. The structural origin(s) of the reported myopic shift in refraction during incipient presbyopia warrants further investigation. Copyright © 2017 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  1. Three filters for visualization of phase objects with large variations of phase gradients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sagan, Arkadiusz; Antosiewicz, Tomasz J.; Szoplik, Tomasz

    2009-02-20

    We propose three amplitude filters for the visualization of phase objects. They act on the spectra of pure-phase objects in the frequency plane and are based on tangent and error functions, as well as on an antisymmetric combination of square roots. The error function is a normalized form of the Gaussian function. The antisymmetric square-root filter is composed of two square-root filters to widen its spatial-frequency spectral range. Their advantage over other known amplitude frequency-domain filters, such as linear or square-root graded ones, is that they allow high-contrast visualization of objects with large variations of phase gradients.
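
    A minimal sketch of how such ramp-type amplitude profiles might be constructed on a normalized frequency axis; the hyperbolic tangent stands in for the tangent-type filter, and all slope/width parameters are illustrative assumptions, not the paper's values.

        import numpy as np
        from scipy.special import erf

        # Normalized spatial-frequency axis of the filter plane.
        fx = np.linspace(-1.0, 1.0, 512)

        # Ramp-type amplitude profiles (illustrative parameters).
        tanh_ramp = 0.5 * (1 + np.tanh(3.0 * fx))                    # tangent-type profile
        erf_ramp = 0.5 * (1 + erf(2.0 * fx))                         # normalized-Gaussian (erf) profile
        sqrt_ramp = 0.5 * (1 + np.sign(fx) * np.sqrt(np.abs(fx)))    # antisymmetric square roots

        # Multiplying the object's Fourier spectrum by such a profile and
        # inverse-transforming converts phase gradients into intensity contrast.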

  2. Spiral tracing on a touchscreen is influenced by age, hand, implement, and friction.

    PubMed

    Heintz, Brittany D; Keenan, Kevin G

    2018-01-01

    Dexterity impairments are well documented in older adults, though it is unclear how these influence touchscreen manipulation. This study examined age-related differences while tracing on high- and low-friction touchscreens using the finger or stylus. 26 young and 24 older adults completed an Archimedes spiral tracing task on a touchscreen mounted on a force sensor. Root mean square error was calculated to quantify performance. Root mean square error increased by 29.9% for older vs. young adults using the fingertip, but was similar to young adults when using the stylus. Although other variables (e.g., touchscreen usage, sensation, and reaction time) differed between age groups, these variables were not related to increased error in older adults while using their fingertip. Root mean square error also increased on the low-friction surface for all subjects. These findings suggest that utilizing a stylus and increasing surface friction may improve touchscreen use in older adults.

  3. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    NASA Technical Reports Server (NTRS)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters, and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs, and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared-bias term, which can be estimated using hindcasts, and a model-variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
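
    The decomposition described above (a squared-bias term from hindcasts plus a model-variance term from a simulation ensemble) can be sketched as follows; all numbers and the ensemble construction are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical observations and matching hindcasts from one fixed model.
        obs = rng.normal(5.0, 1.0, size=50)
        hindcast = obs + rng.normal(0.3, 0.8, size=50)        # biased, noisy predictions
        msep_fixed = np.mean((obs - hindcast) ** 2)           # MSEP for the fixed model

        # MSEP_uncertain(X) adds a model-variance term, estimated here from an
        # ensemble of model variants (structures/parameters/inputs) at each point.
        ensemble = hindcast[None, :] + rng.normal(0, 0.5, size=(20, 50))
        squared_bias = np.mean((obs - ensemble.mean(axis=0)) ** 2)
        model_variance = np.mean(ensemble.var(axis=0, ddof=1))
        print("MSEP_fixed =", msep_fixed)
        print("MSEP_uncertain(X) ~ bias^2 + variance =", squared_bias + model_variance)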

  4. Studying the dynamics of interbeat interval time series of healthy and congestive heart failure subjects using scale based symbolic entropy analysis

    PubMed Central

    Awan, Imtiaz; Aziz, Wajid; Habib, Nazneen; Alowibdi, Jalal S.; Saeed, Sharjil; Nadeem, Malik Sajjad Ahmed; Shah, Syed Ahsin Ali

    2018-01-01

    Considerable interest has been devoted to developing a deeper understanding of the dynamics of healthy biological systems and of how these dynamics are affected by aging and disease. Entropy-based complexity measures have been widely used for quantifying the dynamics of physical and biological systems. These techniques have provided valuable information leading to a fuller understanding of the dynamics of these systems and of the underlying stimuli responsible for anomalous behavior. Traditional single-scale entropy measures have yielded contradictory results about the dynamics of real-world time series data of healthy and pathological subjects. More recently, the multiscale entropy (MSE) algorithm was introduced for precise description of the complexity of biological signals, and it has been used in numerous fields since its inception. The original MSE quantifies the complexity of coarse-grained time series using sample entropy. The original MSE may be unreliable for short signals, because the length of the coarse-grained time series decreases with increasing scale factor τ; it works well for long signals. To overcome this drawback, various variants of the method have been proposed for evaluating complexity efficiently. In this study, we propose multiscale normalized corrected Shannon entropy (MNCSE), in which the symbolic entropy measure NCSE is used as the entropy estimate instead of sample entropy. The results of the study are compared with traditional MSE. The effectiveness of the proposed approach is demonstrated using noise signals as well as interbeat interval signals from healthy and pathological subjects. The preliminary results of the study indicate that MNCSE values are more stable and reliable than original MSE values, and that MNCSE-based features lead to higher classification accuracies than MSE-based features. PMID:29771977
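
    A plain quadratic-time sketch of the original MSE procedure named above (coarse-graining followed by sample entropy). The RR-interval series is surrogate data, and this simple implementation counts templates slightly differently from the canonical definition.

        import numpy as np

        def coarse_grain(x, tau):
            # Non-overlapping window averages at scale factor tau.
            n = len(x) // tau
            return x[:n * tau].reshape(n, tau).mean(axis=1)

        def sample_entropy(x, m=2, r_frac=0.15):
            # O(N^2) sample entropy with tolerance r = r_frac * std(x).
            x = np.asarray(x, float)
            r = r_frac * x.std()
            def match_pairs(length):
                templ = np.lib.stride_tricks.sliding_window_view(x, length)
                d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=2)
                return ((d <= r).sum() - d.shape[0]) / 2      # exclude self-matches
            b, a = match_pairs(m), match_pairs(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else np.inf

        rng = np.random.default_rng(3)
        rr = rng.normal(0.8, 0.05, 1000)          # surrogate RR-interval series
        for tau in (1, 2, 5, 10):
            print(tau, sample_entropy(coarse_grain(rr, tau)))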

  5. Studying the dynamics of interbeat interval time series of healthy and congestive heart failure subjects using scale based symbolic entropy analysis.

    PubMed

    Awan, Imtiaz; Aziz, Wajid; Shah, Imran Hussain; Habib, Nazneen; Alowibdi, Jalal S; Saeed, Sharjil; Nadeem, Malik Sajjad Ahmed; Shah, Syed Ahsin Ali

    2018-01-01

    Considerable interest has been devoted to developing a deeper understanding of the dynamics of healthy biological systems and of how these dynamics are affected by aging and disease. Entropy-based complexity measures have been widely used for quantifying the dynamics of physical and biological systems. These techniques have provided valuable information leading to a fuller understanding of the dynamics of these systems and of the underlying stimuli responsible for anomalous behavior. Traditional single-scale entropy measures have yielded contradictory results about the dynamics of real-world time series data of healthy and pathological subjects. More recently, the multiscale entropy (MSE) algorithm was introduced for precise description of the complexity of biological signals, and it has been used in numerous fields since its inception. The original MSE quantifies the complexity of coarse-grained time series using sample entropy. The original MSE may be unreliable for short signals, because the length of the coarse-grained time series decreases with increasing scale factor τ; it works well for long signals. To overcome this drawback, various variants of the method have been proposed for evaluating complexity efficiently. In this study, we propose multiscale normalized corrected Shannon entropy (MNCSE), in which the symbolic entropy measure NCSE is used as the entropy estimate instead of sample entropy. The results of the study are compared with traditional MSE. The effectiveness of the proposed approach is demonstrated using noise signals as well as interbeat interval signals from healthy and pathological subjects. The preliminary results of the study indicate that MNCSE values are more stable and reliable than original MSE values, and that MNCSE-based features lead to higher classification accuracies than MSE-based features.

  6. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  7. Weighted linear regression using D2H and D2 as the independent variables

    Treesearch

    Hans T. Schreuder; Michael S. Williams

    1998-01-01

    Several error structures for weighted regression equations used for predicting volume were examined for 2 large data sets of felled and standing loblolly pine trees (Pinus taeda L.). The generally accepted model with variance of error proportional to the value of the covariate squared ( D2H = diameter squared times height or D...
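
    A minimal weighted least squares sketch in the spirit of the error structure named above, with error variance proportional to the square of the covariate D²H; the data and coefficients are synthetic assumptions, not the paper's.

        import numpy as np

        rng = np.random.default_rng(4)
        D = rng.uniform(10, 50, 300)      # diameter
        H = rng.uniform(8, 30, 300)       # height
        x = D**2 * H
        vol = 0.002 * x * (1 + 0.05 * rng.standard_normal(300))   # error sd grows with x

        # Weighted least squares with weights 1/x^2 (variance proportional to x^2).
        X = np.column_stack([np.ones_like(x), x])
        W = np.diag(1.0 / x**2)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ vol)
        print(beta)                       # intercept and slope for volume ~ a + b * D^2 * H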

  8. The Relationship between Root Mean Square Error of Approximation and Model Misspecification in Confirmatory Factor Analysis Models

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2012-01-01

    The fit index root mean square error of approximation (RMSEA) is extremely popular in structural equation modeling. However, its behavior under different scenarios remains poorly understood. The present study generates continuous curves where possible to capture the full relationship between RMSEA and various "incidental parameters," such as…

  9. A method of bias correction for maximal reliability with dichotomous measures.

    PubMed

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  10. Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values

    DTIC Science & Technology

    2016-12-01

    MMSE (minimum mean-square error) target sample estimation using non-orthogonal basis functions: even when the basis functions are not orthogonal, they can still be used in a minimum mean-square error (MMSE) estimator that models the object echo as a weighted sum of the multi-aspect basis responses. Minimum mean-square error (MMSE) estimation is applied to target imaging with synthetic aperture sonar.

  11. Greater Exposure to Sexual Content in Popular Movies Predicts Earlier Sexual Debut and Increased Sexual Risk Taking

    PubMed Central

    O’Hara, Ross E.; Gibbons, Frederick X.; Gerrard, Meg; Li, Zhigang; Sargent, James D.

    2013-01-01

    Early sexual debut is associated with risky sexual behavior and an increased risk of unplanned pregnancy and sexually transmitted infections later in life. The relations among early movie sexual exposure (MSE), sexual debut, and risky sexual behavior in adulthood (i.e., multiple sexual partners and inconsistent condom use) were examined in a longitudinal study of U.S. adolescents. MSE was measured using the Beach method, a comprehensive procedure for media content coding. Controlling for characteristics of adolescents and their families, analyses showed that MSE predicted age of sexual debut, both directly and indirectly through changes in sensation seeking. MSE also predicted engagement in risky sexual behaviors both directly and indirectly via early sexual debut. These results suggest that MSE may promote sexual risk taking both by modifying sexual behavior and by accelerating the normal rise in sensation seeking during adolescence. PMID:22810165

  12. On the appropriateness of applying chi-square distribution based confidence intervals to spectral estimates of helicopter flyover data

    NASA Technical Reports Server (NTRS)

    Rutledge, Charles K.

    1988-01-01

    The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series, and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test were analyzed. Statistical significance tests to determine the equality of the distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were readily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for non-tonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
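
    For reference, the chi-square based confidence interval for a spectral estimate Ŝ with v equivalent degrees of freedom takes the form [vŜ/χ²(1-α/2, v), vŜ/χ²(α/2, v)]; a small sketch with an assumed v:

        from scipy.stats import chi2

        def psd_confidence_interval(s_hat, dof, alpha=0.05):
            # Chi-square based CI for a spectral estimate with dof degrees of freedom.
            lo = dof * s_hat / chi2.ppf(1 - alpha / 2, dof)
            hi = dof * s_hat / chi2.ppf(alpha / 2, dof)
            return lo, hi

        # e.g., an estimate averaged over 16 segments (~2 dof per segment)
        print(psd_confidence_interval(s_hat=1.0, dof=32))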

  13. Renal metabolic profiling of early renal injury and renoprotective effects of Poria cocos epidermis using UPLC Q-TOF/HSMS/MSE.

    PubMed

    Zhao, Ying-Yong; Lei, Ping; Chen, Dan-Qian; Feng, Ya-Long; Bai, Xu

    2013-01-01

    Poria cocos epidermis is an ancient traditional Chinese medicine (TCM) that has been used for the treatment of chronic kidney disease (CKD) for thousands of years in China. A metabonomic approach based on ultra-performance liquid chromatography coupled with quadrupole time-of-flight high-sensitivity mass spectrometry (UPLC Q-TOF/HSMS) and a mass spectrometry elevated-energy (MSE) data collection technique was developed to obtain a systematic view of the development and progression of CKD and of the biochemical mechanism underlying the therapeutic effects of P. cocos epidermis (Fu-Ling-Pi, FLP). By partial least squares-discriminant analysis, 19 metabolites were identified as potential biomarkers of CKD. Among the 19 biomarkers, 10, including eicosapentaenoic acid, docosahexaenoic acid, lysoPC(20:4), lysoPC(18:2), lysoPC(15:0), lysoPE(20:0/0:0), indoxyl sulfate, hippuric acid, p-cresol sulfate, and allantoin, were reversed to the control level in FLP-treated groups. The study indicates that FLP treatment can ameliorate CKD by intervening in several dominant metabolic pathways, such as fatty acid metabolism, phospholipid metabolism, purine metabolism, and tryptophan metabolism. This work is the first to investigate the therapeutic effect of FLP based on metabonomics technology, which is a potentially powerful tool for studying TCMs. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. A quantitative longitudinal study to explore factors which influence maternal self-efficacy among Chinese primiparous women during the initial postpartum period.

    PubMed

    Zheng, Xujuan; Morrell, Jane; Watts, Kim

    2018-04-01

    Parenting during infancy is highly problematic for Chinese primiparous women. As an important determinant of good parenting, maternal self-efficacy (MSE) should receive more attention from researchers. The limitations of previous research on MSE during infancy are that the factors influencing MSE remain poorly explored, there have been few studies with Chinese women, and existing studies did not consider the effect of different cultures. The aim was to explore factors influencing MSE in primiparous women in China in the first three months postnatally. A quantitative longitudinal study using questionnaires was conducted. In total, 420 Chinese primiparous women were recruited in obstetric wards at three hospitals in Xiamen City, Fujian Province, China. Initial baseline questionnaires measuring socio-demographic and clinical characteristics were distributed to participants face-to-face by the researcher on the postnatal ward at three days postnatally. Follow-up questionnaires at six and 12 weeks postnatally were sent to participants via e-mail, including the Self-efficacy in Infant Care Scale (SICS), the Edinburgh Postnatal Depression Scale (EPDS), and the Postpartum Social Support Scale (PSSS) to measure MSE, postnatal depression symptoms, and social support, respectively; these were returned by participants via e-mail. Quantitative data were analysed using SPSS. The variables social support, women's satisfaction with 'Doing the month', postnatal depression, maternal education, baby health, and maternal occupation influenced MSE at six weeks postnatally (adjusted R² = 0.510, F = 46.084, P < 0.01); the variables postnatal depression, social support, baby health, women's satisfaction with 'Doing the month', and baby fussiness influenced MSE at 12 weeks postnatally (adjusted R² = 0.485, F = 41.082, P < 0.01). Obstetric nurses and women's family members need to be aware of the significant positive contribution of social support and women's satisfaction with 'Doing the month' to primiparous women's MSE, and of the significant negative effect of postnatal depression symptoms on first-time mothers' MSE; they should pay more attention to primiparous women with less education, unemployed mothers, women with unskilled occupations, women with an unhealthy baby, and women with a baby with a difficult temperament, to improve their comparatively lower MSE levels during the initial postnatal period. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  15. Eta Squared, Partial Eta Squared, and Misreporting of Effect Size in Communication Research.

    ERIC Educational Resources Information Center

    Levine, Timothy R.; Hullett, Craig R.

    2002-01-01

    Alerts communication researchers to potential errors stemming from the use of SPSS (Statistical Package for the Social Sciences) to obtain estimates of eta squared in analysis of variance (ANOVA). Strives to clarify issues concerning the development and appropriate use of eta squared and partial eta squared in ANOVA. Discusses the reporting of…

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendes, J.; Bessa, R.J.; Keko, H.

    Wind power forecasting (WPF) provides important inputs to power system operators and electricity market participants. It is therefore not surprising that WPF has attracted increasing interest within the electric power industry. In this report, we document our research on improving statistical WPF algorithms for point, uncertainty, and ramp forecasting. Below, we provide a brief introduction to the research presented in the following chapters. For a detailed overview of the state-of-the-art in wind power forecasting, we refer to [1]. Our related work on the application of WPF in operational decisions is documented in [2]. Point forecasts of wind power are highly dependent on the training criteria used in the statistical algorithms that are used to convert weather forecasts and observational data to a power forecast. In Chapter 2, we explore the application of information theoretic learning (ITL) as opposed to the classical minimum square error (MSE) criterion for point forecasting. In contrast to the MSE criterion, ITL criteria do not assume a Gaussian distribution of the forecasting errors. We investigate to what extent ITL criteria yield better results. In addition, we analyze time-adaptive training algorithms and how they enable WPF algorithms to cope with non-stationary data and, thus, to adapt to new situations without requiring additional offline training of the model. We test the new point forecasting algorithms on two wind farms located in the U.S. Midwest. Although there have been advancements in deterministic WPF, a single-valued forecast cannot provide information on the dispersion of observations around the predicted value. We argue that it is essential to generate, together with (or as an alternative to) point forecasts, a representation of the wind power uncertainty. Wind power uncertainty representation can take the form of probabilistic forecasts (e.g., probability density function, quantiles), risk indices (e.g., prediction risk index) or scenarios (with spatial and/or temporal dependence). Statistical approaches to uncertainty forecasting basically consist of estimating the uncertainty based on observed forecasting errors. Quantile regression (QR) is currently a commonly used approach in uncertainty forecasting. In Chapter 3, we propose new statistical approaches to the uncertainty estimation problem by employing kernel density forecast (KDF) methods. We use two estimators in both offline and time-adaptive modes, namely, the Nadaraya-Watson (NW) and Quantile-copula (QC) estimators. We conduct detailed tests of the new approaches using QR as a benchmark. One of the major issues in wind power generation are sudden and large changes of wind power output over a short period of time, namely ramping events. In Chapter 4, we perform a comparative study of existing definitions and methodologies for ramp forecasting. We also introduce a new probabilistic method for ramp event detection. The method starts with a stochastic algorithm that generates wind power scenarios, which are passed through a high-pass filter for ramp detection and estimation of the likelihood of ramp events to happen.
The report is organized as follows: Chapter 2 presents the results of the application of ITL training criteria to deterministic WPF; Chapter 3 reports the study on probabilistic WPF, including new contributions to wind power uncertainty forecasting; Chapter 4 presents a new method to predict and visualize ramp events, comparing it with state-of-the-art methodologies; Chapter 5 briefly summarizes the main findings and contributions of this report.
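
    As a small aside on the quantile regression (QR) benchmark mentioned above: a quantile forecast is the minimizer of the pinball loss, which the following sketch illustrates on synthetic, skewed forecast errors (all data assumed).

        import numpy as np

        def pinball_loss(y, q_hat, tau):
            # Quantile (pinball) loss minimized in quantile-regression forecasting.
            u = y - q_hat
            return np.mean(np.maximum(tau * u, (tau - 1) * u))

        # The tau-quantile of a sample minimizes the pinball loss.
        rng = np.random.default_rng(5)
        errors = rng.gamma(2.0, 1.0, 10000)       # skewed wind power forecast errors
        grid = np.linspace(0, 10, 1001)
        tau = 0.9
        best = grid[np.argmin([pinball_loss(errors, g, tau) for g in grid])]
        print(best, np.quantile(errors, tau))     # the two values should be close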

  17. Quantifying the impact of respiratory-gated 4D CT acquisition on thoracic image quality: A digital phantom study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernatowicz, K., E-mail: kingab@student.ethz.ch; Knopf, A.; Lomax, A.

    Purpose: Prospective respiratory-gated 4D CT has been shown to reduce tumor image artifacts by up to 50% compared to conventional 4D CT. However, to date no studies have quantified the impact of gated 4D CT on normal lung tissue imaging, which is important in performing dose calculations based on accurate estimates of lung volume and structure. To determine the impact of gated 4D CT on thoracic image quality, the authors developed a novel simulation framework incorporating a realistic deformable digital phantom driven by patient tumor motion patterns. Based on this framework, the authors test the hypothesis that respiratory-gated 4D CT can significantly reduce lung imaging artifacts. Methods: Our simulation framework synchronizes the 4D extended cardiac torso (XCAT) phantom with tumor motion data in a quasi real-time fashion, allowing simulation of three 4D CT acquisition modes featuring different levels of respiratory feedback: (i) “conventional” 4D CT that uses a constant imaging and couch-shift frequency, (ii) “beam paused” 4D CT that interrupts imaging to avoid oversampling at a given couch position and respiratory phase, and (iii) “respiratory-gated” 4D CT that triggers acquisition only when the respiratory motion fulfills phase-specific displacement gating windows based on prescan breathing data. Our framework generates a set of ground truth comparators, representing the average XCAT anatomy during beam-on for each of ten respiratory phase bins. Based on this framework, the authors simulated conventional, beam-paused, and respiratory-gated 4D CT images using tumor motion patterns from seven lung cancer patients across 13 treatment fractions, with a simulated 5.5 cm³ spherical lesion. Normal lung tissue image quality was quantified by comparing simulated and ground truth images in terms of overall mean square error (MSE) intensity difference, threshold-based lung volume error, and fractional false positive/false negative rates. Results: Averaged across all simulations and phase bins, respiratory-gating reduced overall thoracic MSE by 46% compared to conventional 4D CT (p ∼ 10⁻¹⁹). Gating leads to small but significant (p < 0.02) reductions in lung volume errors (1.8%–1.4%), false positives (4.0%–2.6%), and false negatives (2.7%–1.3%). These percentage reductions correspond to gating reducing image artifacts by 24–90 cm³ of lung tissue. Similar to earlier studies, gating reduced patient image dose by up to 22%, but with scan time increased by up to 135%. Beam paused 4D CT did not significantly impact normal lung tissue image quality, but did yield similar dose reductions as for respiratory-gating, without the added cost in scanning time. Conclusions: For a typical 6 L lung, respiratory-gated 4D CT can reduce image artifacts affecting up to 90 cm³ of normal lung tissue compared to conventional acquisition. This image improvement could have important implications for dose calculations based on 4D CT. Where image quality is less critical, beam paused 4D CT is a simple strategy to reduce imaging dose without sacrificing acquisition time.

  18. Long-lasting effects of a new memory self-efficacy training for stroke patients: a randomized controlled trial.

    PubMed

    Aben, Laurien; Heijenbrok-Kal, Majanka H; Ponds, Rudolf W H M; Busschbach, Jan J V; Ribbers, Gerard M

    2014-01-01

    This study aims to determine the long-term effects of a new Memory Self-efficacy (MSE) training program for stroke patients on MSE, depression, and quality of life. In a randomized controlled trial, patients were allocated to an MSE training or a peer support group. Outcome measures were MSE, depression, and quality of life, measured with the Metamemory-In-Adulthood questionnaire, the Center for Epidemiological Studies-Depression Scale (CES-D), and the WHOQOL-BREF questionnaire, respectively. We used linear mixed models to compare the outcomes of both groups immediately after training, after 6 months, and after 12 months, adjusted for baseline. In total, 153 former inpatients from 2 rehabilitation centers were randomized: 77 to the experimental and 76 to the control group. MSE increased significantly more in the experimental group and remained significantly higher than in the control group after 6 and 12 months (B = 0.42; P = .010). Psychological quality of life also increased more in the experimental group, but not significantly (B = 0.09; P = .077). However, in the younger subgroup of patients (<65 years old), psychological quality of life significantly improved in the experimental group compared to the control group and remained significantly higher over time (B = 0.14; P = .030). Other outcome measures were not significantly different between the groups. An MSE training program improved MSE and psychological quality of life in stroke patients aged <65 years. These effects persisted during 12 months of follow-up.

  19. Analysis of forecasting and inventory control of raw material supplies in PT INDAC INT’L

    NASA Astrophysics Data System (ADS)

    Lesmana, E.; Subartini, B.; Riaman; Jabar, D. A.

    2018-03-01

    This study discusses forecasting of carbon electrode sales at PT INDAC INT'L using the Winters and double moving average methods, and predicts the inventory level and cost required for ordering carbon electrode raw material in the next period using the Economic Order Quantity (EOQ) model. The error analysis based on MAE, MSE, and MAPE shows that the Winters method is the better forecasting method for carbon electrode sales. PT INDAC INT'L is therefore advised to stock products for sale in line with the sales amounts forecast by the Winters method.
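
    The EOQ model referred to above reduces to Q* = sqrt(2DS/H) for annual demand D, fixed cost per order S, and annual holding cost per unit H; a sketch with hypothetical numbers (not the paper's data):

        import math

        def eoq(annual_demand, order_cost, holding_cost):
            # Classic Economic Order Quantity: sqrt(2*D*S / H).
            return math.sqrt(2 * annual_demand * order_cost / holding_cost)

        # Hypothetical figures: 12,000 electrodes/year, 50 per order placed,
        # 2 per unit-year held in stock.
        q = eoq(12000, 50, 2)
        print(f"EOQ = {q:.0f} units, about {12000 / q:.1f} orders per year")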

  20. POOLMS: A computer program for fitting and model selection for two level factorial replication-free experiments

    NASA Technical Reports Server (NTRS)

    Amling, G. E.; Holms, A. G.

    1973-01-01

    A computer program is described that performs a statistical multiple-decision procedure called chain pooling. It uses a number of mean squares assigned to error variance that is conditioned on the relative magnitudes of the mean squares. The model selection is done according to user-specified levels of type 1 or type 2 error probabilities.

  1. Validating Clusters with the Lower Bound for Sum-of-Squares Error

    ERIC Educational Resources Information Center

    Steinley, Douglas

    2007-01-01

    Given that a minor condition holds (e.g., the number of variables is greater than the number of clusters), a nontrivial lower bound for the sum-of-squares error criterion in K-means clustering is derived. By calculating the lower bound for several different situations, a method is developed to determine the adequacy of cluster solution based on…

  2. A suggestion for computing objective function in model calibration

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang

    2014-01-01

    A parameter-optimization process (model calibration) is usually required for numerical model applications, which involves the use of an objective function to determine the model cost (model-data errors). The sum of square errors (SSR) has been widely adopted as the objective function in various optimization procedures. However, ‘square error’ calculation was found to be more sensitive to extreme or high values. Thus, we proposed that the sum of absolute errors (SAR) may be a better option than SSR for model calibration. To test this hypothesis, we used two case studies—a hydrological model calibration and a biogeochemical model calibration—to investigate the behavior of a group of potential objective functions: SSR, SAR, sum of squared relative deviation (SSRD), and sum of absolute relative deviation (SARD). Mathematical evaluation of model performance demonstrates that ‘absolute error’ (SAR and SARD) are superior to ‘square error’ (SSR and SSRD) in calculating objective function for model calibration, and SAR behaved the best (with the least error and highest efficiency). This study suggests that SSR might be overly used in real applications, and SAR may be a reasonable choice in common optimization implementations without emphasizing either high or low values (e.g., modeling for supporting resources management).
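
    The sensitivity difference between the 'square error' and 'absolute error' objectives discussed above is easy to demonstrate; in the sketch below, a single extreme observation dominates SSR but not SAR (data invented for illustration).

        import numpy as np

        def ssr(obs, sim):    # sum of squared errors
            return np.sum((obs - sim) ** 2)

        def sar(obs, sim):    # sum of absolute errors
            return np.sum(np.abs(obs - sim))

        obs = np.array([1.0, 2.0, 3.0, 4.0, 50.0])    # last point is extreme
        sim = np.array([1.1, 1.9, 3.2, 4.1, 40.0])
        print("SSR:", ssr(obs, sim))    # ~100.1, dominated by the extreme point
        print("SAR:", sar(obs, sim))    # ~10.5, more balanced across points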

  3. The role of sympathetic and vagal cardiac control on complexity of heart rate dynamics.

    PubMed

    Silva, Luiz Eduardo Virgilio; Silva, Carlos Alberto Aguiar; Salgado, Helio Cesar; Fazan, Rubens

    2017-03-01

    Analysis of heart rate variability (HRV) by nonlinear approaches has been gaining interest due to their ability to extract information from heart rate (HR) dynamics that is not detectable by traditional approaches. Nevertheless, the physiological interpretation of nonlinear approaches remains unclear. Therefore, we propose long-term (60 min) protocols involving selective blockade of cardiac autonomic receptors to investigate the contribution of sympathetic and parasympathetic function to the nonlinear dynamics of HRV. Conscious male Wistar rats had their electrocardiogram (ECG) recorded under three distinct conditions: basal, selective (atenolol or atropine), or combined (atenolol plus atropine) pharmacological blockade of autonomic muscarinic or β1-adrenergic receptors. Time series of RR intervals were assessed by multiscale entropy (MSE) and detrended fluctuation analysis (DFA). Entropy over short (scales 1 to 5, MSE1-5) and long (scales 6 to 30, MSE6-30) time scales was computed, as well as DFA scaling exponents at short (αshort, 5 ≤ n ≤ 15), mid (αmid, 30 ≤ n ≤ 200), and long (αlong, 200 ≤ n ≤ 1,700) window sizes. The results show that MSE1-5 is reduced under atropine blockade and MSE6-30 is reduced under atropine, atenolol, or combined blockade. In addition, while atropine expressed its maximal effect at scale six, the effect of atenolol on MSE increased with scale. For DFA, αshort decreased during atenolol blockade, while αmid increased under atropine blockade. Double blockade decreased αshort and increased αlong. Results with surrogate data show that the dynamics during combined blockade are not random. In summary, sympathetic and vagal control differently affect the entropy (MSE) and fractal properties (DFA) of HRV. These findings are important to guide future studies. NEW & NOTEWORTHY Although multiscale entropy (MSE) and detrended fluctuation analysis (DFA) are recognizably useful prognostic/diagnostic methods, their physiological interpretation remains unclear. The present study clarifies the effect of cardiac autonomic control on MSE and DFA, assessed during long (1 h) periods. These findings are important for interpreting future studies. Copyright © 2017 the American Physiological Society.
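
    A compact sketch of the DFA computation referred to above (integrate the series, detrend each window linearly, and take the log-log slope of the fluctuation function); the input is white noise, for which the scaling exponent should come out near 0.5, and the window sizes are illustrative.

        import numpy as np

        def dfa_alpha(x, scales):
            # Detrended fluctuation analysis: slope of log F(n) vs log n.
            y = np.cumsum(x - np.mean(x))              # integrated profile
            F = []
            for n in scales:
                n_win = len(y) // n
                segs = y[:n_win * n].reshape(n_win, n)
                t = np.arange(n)
                ms = [np.mean((s - np.polyval(np.polyfit(t, s, 1), t)) ** 2)
                      for s in segs]                   # per-window detrended variance
                F.append(np.sqrt(np.mean(ms)))
            return np.polyfit(np.log(scales), np.log(F), 1)[0]

        rng = np.random.default_rng(6)
        print(dfa_alpha(rng.standard_normal(5000), scales=np.arange(5, 16)))   # ~0.5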

  4. [Rapidly identify oligosaccharides in Morinda officinalis by UPLC-Q-TOF-MSE].

    PubMed

    Hao, Qing-Xiu; Kang, Li-Ping; Zhu, Shou-Dong; Yu, Yi; Hu, Ming-Hua; Ma, Fang-Li; Zhou, Jie; Guo, Lan-Ping

    2018-03-01

    In this paper, an approach was applied for the separation and identification of oligosaccharides in Morinda officinalis How by ultra-performance liquid chromatography/quadrupole time-of-flight mass spectrometry (UPLC-Q-TOF-MS) with collision energy. The separation was carried out on an ACQUITY UPLC BEH Amide C₁₈ column (2.1 mm × 100 mm, 1.7 μm) with gradient elution using acetonitrile (A) and water (B) containing 0.1% ammonia as the mobile phase at a flow rate of 0.2 mL·min⁻¹. The column temperature was maintained at 40 °C. Accurate mass and characteristic fragment-ion information was acquired by MSE in ESI negative mode at low and high collision energies. The chemical structures and formulas of the oligosaccharides were obtained and identified with the UNIFI and MassLynx 4.1 software based on accurate mass, fragment ions, neutral losses, mass error, reference substances, isotope information, fragment intensities, and retention time. A total of 19 inulin oligosaccharide structures were identified, including D-(+)-sucrose, 1-kestose, nystose, 1F-fructofuranosyl nystose, and other inulin oligosaccharides (DP 5-18). This research provides important information about the inulin oligosaccharides in M. officinalis, and the results provide a scientific basis for its innovative utilization. Copyright © by the Chinese Pharmaceutical Association.

  5. l1-regularized recursive total least squares based sparse system identification for the error-in-variables.

    PubMed

    Lim, Jun-Seok; Pang, Hee-Suk

    2016-01-01

    In this paper an l1-regularized recursive total least squares (RTLS) algorithm is considered for sparse system identification. Although recursive least squares (RLS) has been successfully applied to sparse system identification, the estimation performance of RLS-based algorithms degrades when both input and output are contaminated by noise (the error-in-variables problem). We propose an algorithm to handle the error-in-variables problem. The proposed l1-RTLS algorithm is an RLS-like iteration using l1 regularization. The proposed algorithm not only gives excellent performance but also reduces the required complexity through effective handling of the matrix inversion. Simulations demonstrate the superiority of the proposed l1-regularized RTLS in the sparse system identification setting.

  6. An algorithm for propagating the square-root covariance matrix in triangular form

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Choe, C. Y.

    1976-01-01

    A method for propagating the square root of the state error covariance matrix in lower triangular form is described. The algorithm can be combined with any triangular square-root measurement update algorithm to obtain a triangular square-root sequential estimation algorithm. The triangular square-root algorithm compares favorably with the conventional sequential estimation algorithm with regard to computation time.
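
    The paper's specific algorithm is not reproduced here, but a standard way to propagate a covariance square root while keeping it triangular is the QR-based update sketched below; this is a minimal numpy illustration of the general technique, not necessarily the authors' exact formulation:

```python
import numpy as np

def propagate_sqrt_cov(S, F, Q_sqrt):
    """Propagate a lower-triangular covariance square root S (P = S S^T)
    through x' = F x + w, cov(w) = Q_sqrt Q_sqrt^T, keeping triangular form.

    Stacking [S^T F^T; Q_sqrt^T] and taking the R factor of its QR
    decomposition gives an upper-triangular matrix whose transpose is the
    new lower-triangular square root, since M^T M = F P F^T + Q = R^T R.
    """
    M = np.vstack([S.T @ F.T, Q_sqrt.T])       # shape (2n, n)
    R = np.linalg.qr(M, mode='r')              # upper triangular (n, n)
    S_new = R.T                                # lower triangular
    # Fix column signs so the diagonal is positive (QR is unique up to signs)
    return S_new * np.sign(np.diag(S_new))

# Example: one time update for a 2-state constant-velocity model
F = np.array([[1.0, 1.0], [0.0, 1.0]])
S = np.linalg.cholesky(np.diag([1.0, 0.5]))
Q_sqrt = np.linalg.cholesky(0.01 * np.eye(2))
S = propagate_sqrt_cov(S, F, Q_sqrt)
```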

  7. Fitting a function to time-dependent ensemble averaged data.

    PubMed

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only generally available function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
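
    As a rough illustration of the idea, error estimation for weighted least squares that uses the full data covariance reduces, for linear-in-parameter models, to a "sandwich" covariance. The sketch below assumes a linear model y ≈ Aθ with known covariance C; the toy MSD fit and all data are placeholders, and the authors' published WLS-ICE software should be preferred for real use:

```python
import numpy as np

def wls_with_correlated_errors(A, y, C):
    """Weighted least squares for y ≈ A @ theta, with parameter errors
    computed from the full data covariance C (not just its diagonal).

    Weights use only the variances (diagonal of C), as in ordinary WLS;
    the covariance of the estimate is then the sandwich
        (A'WA)^-1 A'W C W A (A'WA)^-1,
    which accounts for temporal correlations in the data.
    """
    W = np.diag(1.0 / np.diag(C))
    AtWA_inv = np.linalg.inv(A.T @ W @ A)
    theta = AtWA_inv @ A.T @ W @ y
    cov_theta = AtWA_inv @ A.T @ W @ C @ W @ A @ AtWA_inv
    return theta, np.sqrt(np.diag(cov_theta))

# Toy example: fit msd(t) = 2*D*t to ensemble-averaged squared displacements
t = np.linspace(0.1, 10, 50)
A = 2 * t[:, None]                                       # single parameter D
C = 0.01 * np.exp(-np.abs(t[:, None] - t[None, :]))      # correlated covariance
y = 2 * 0.5 * t + np.random.multivariate_normal(np.zeros(50), C)
D_hat, D_err = wls_with_correlated_errors(A, y, C)       # D_hat ≈ 0.5
```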

  8. Quality assessment of gasoline using comprehensive two-dimensional gas chromatography combined with unfolded partial least squares: A reliable approach for the detection of gasoline adulteration.

    PubMed

    Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan

    2016-01-01

    Comprehensive two-dimensional gas chromatography with flame ionization detection combined with unfolded partial least squares is proposed as a simple, fast and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline-corrected using a two-dimensional asymmetric least squares algorithm. The number of significant partial least squares components used to build the model, determined by the minimum root-mean-square error of leave-one-out cross-validation, was 4. In this regard, blends of gasoline with kerosene, white spirit and paint thinner, as frequently used adulterants, were used to make the calibration samples. Appropriate statistical parameters (regression coefficients of 0.996-0.998, root-mean-square errors of prediction of 0.005-0.010 and relative errors of prediction of 1.54-3.82% for the calibration set) show the reliability of the developed method. In addition, the developed method is externally validated with three samples in a validation set (with relative errors of prediction below 10.0%). Finally, to test the applicability of the proposed strategy for the analysis of real samples, five gasoline samples collected from gas stations were analyzed; their gasoline proportions were in the range of 70-85%, and the relative standard deviations were below 8.5% for the different samples in the prediction set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
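
    Selecting the number of PLS components by the minimum leave-one-out RMSECV, as described above, can be sketched as follows (a hypothetical scikit-learn illustration; the matrix X standing in for unfolded chromatograms and the response y are placeholders):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def select_pls_components(X, y, max_components=10):
    """Pick the number of PLS components minimizing leave-one-out RMSECV."""
    rmsecv = []
    for n in range(1, max_components + 1):
        pls = PLSRegression(n_components=n)
        y_pred = cross_val_predict(pls, X, y, cv=LeaveOneOut())
        rmsecv.append(np.sqrt(np.mean((y - y_pred.ravel()) ** 2)))
    return int(np.argmin(rmsecv)) + 1, rmsecv

# X: unfolded GCxGC-FID chromatograms (one row per calibration blend),
# y: adulterant proportion of each blend -- placeholders here
X = np.random.rand(20, 500)
y = np.random.rand(20)
n_opt, rmsecv = select_pls_components(X, y)
```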

  9. Derivation of formulas for root-mean-square errors in location, orientation, and shape in triangulation solution of an elongated object in space

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1974-01-01

    Formulas are derived for the root-mean-square (rms) displacement, slope, and curvature errors in an azimuth-elevation image trace of an elongated object in space, as functions of the number and spacing of the input data points and the rms elevation error in the individual input data points from a single observation station. Also, formulas are derived for the total rms displacement, slope, and curvature error vectors in the triangulation solution of an elongated object in space due to the rms displacement, slope, and curvature errors, respectively, in the azimuth-elevation image traces from different observation stations. The total rms displacement, slope, and curvature error vectors provide useful measure numbers for determining the relative merits of two or more different triangulation procedures applicable to elongated objects in space.

  10. Least-squares model-based halftoning

    NASA Astrophysics Data System (ADS)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well-known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be implemented with the Viterbi algorithm. Unfortunately, no closed-form solution can be found in two dimensions. The two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in the transmission of high-quality documents using high-fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach permits the halftoner to be tuned to the individual printer, whose characteristics may vary considerably from those of other printers, for example, write-black vs. write-white laser printers.

  11. Genome-Wide Meta-Analysis of Myopia and Hyperopia Provides Evidence for Replication of 11 Loci

    PubMed Central

    Simpson, Claire L.; Wojciechowski, Robert; Oexle, Konrad; Murgia, Federico; Portas, Laura; Li, Xiaohui; Verhoeven, Virginie J. M.; Vitart, Veronique; Schache, Maria; Hosseini, S. Mohsen; Hysi, Pirro G.; Raffel, Leslie J.; Cotch, Mary Frances; Chew, Emily; Klein, Barbara E. K.; Klein, Ronald; Wong, Tien Yin; van Duijn, Cornelia M.; Mitchell, Paul; Saw, Seang Mei; Fossarello, Maurizio; Wang, Jie Jin; Polašek, Ozren; Campbell, Harry; Rudan, Igor; Oostra, Ben A.; Uitterlinden, André G.; Hofman, Albert; Rivadeneira, Fernando; Amin, Najaf; Karssen, Lennart C.; Vingerling, Johannes R.; Döring, Angela; Bettecken, Thomas; Bencic, Goran; Gieger, Christian; Wichmann, H.-Erich; Wilson, James F.; Venturini, Cristina; Fleck, Brian; Cumberland, Phillippa M.; Rahi, Jugnoo S.; Hammond, Chris J.; Hayward, Caroline; Wright, Alan F.; Paterson, Andrew D.; Baird, Paul N.; Klaver, Caroline C. W.; Rotter, Jerome I.; Pirastu, Mario; Meitinger, Thomas; Bailey-Wilson, Joan E.; Stambolian, Dwight

    2014-01-01

    Refractive error (RE) is a complex, multifactorial disorder characterized by a mismatch between the optical power of the eye and its axial length that causes object images to be focused off the retina. The two major subtypes of RE are myopia (nearsightedness) and hyperopia (farsightedness), which represent opposite ends of the distribution of the quantitative measure of spherical refraction. We performed a fixed effects meta-analysis of genome-wide association results of myopia and hyperopia from 9 studies of European-derived populations: AREDS, KORA, FES, OGP-Talana, MESA, RSI, RSII, RSIII and ERF. One genome-wide significant region was observed for myopia, corresponding to a previously identified myopia locus on 8q12 (p = 1.25×10−8), which has been reported by Kiefer et al. as significantly associated with myopia age at onset and by Verhoeven et al. as significantly associated with mean spherical-equivalent (MSE) refractive error. We observed two genome-wide significant associations with hyperopia. These regions overlapped with loci on 15q14 (minimum p value = 9.11×10−11) and 8q12 (minimum p value = 1.82×10−11) previously reported for MSE and myopia age at onset. We also used an intermarker linkage-disequilibrium-based method for calculating the effective number of tests in targeted regional replication analyses. We analyzed myopia (which represents the closest phenotype in our data to the one used by Kiefer et al.) and showed replication of 10 additional loci associated with myopia previously reported by Kiefer et al. This is the first replication of these loci using myopia as the trait under analysis. “Replication-level” association was also seen between hyperopia and 12 of Kiefer et al.'s published loci. For the loci that show evidence of association with both myopia and hyperopia, the estimated effects of the risk alleles were in opposite directions for the two traits. This suggests that these loci are important contributors to variation of refractive error across the distribution. PMID:25233373

  13. Does the sensorimotor system minimize prediction error or select the most likely prediction during object lifting?

    PubMed Central

    McGregor, Heather R.; Pun, Henry C. H.; Buckingham, Gavin; Gribble, Paul L.

    2016-01-01

    The human sensorimotor system is routinely capable of making accurate predictions about an object's weight, which allows for energetically efficient lifts and prevents objects from being dropped. Often, however, poor predictions arise when the weight of an object can vary and sensory cues about object weight are sparse (e.g., picking up an opaque water bottle). The question arises, what strategies does the sensorimotor system use to make weight predictions when one is dealing with an object whose weight may vary? For example, does the sensorimotor system use a strategy that minimizes prediction error (minimal squared error) or one that selects the weight that is most likely to be correct (maximum a posteriori)? In this study we dissociated the predictions of these two strategies by having participants lift an object whose weight varied according to a skewed probability distribution. We found, using a small range of weight uncertainty, that four indexes of sensorimotor prediction (grip force rate, grip force, load force rate, and load force) were consistent with a feedforward strategy that minimizes the square of prediction errors. These findings match research in the visuomotor system, suggesting parallels in underlying processes. We interpret our findings within a Bayesian framework and discuss the potential benefits of using a minimal squared error strategy. NEW & NOTEWORTHY Using a novel experimental model of object lifting, we tested whether the sensorimotor system models the weight of objects by minimizing lifting errors or by selecting the statistically most likely weight. We found that the sensorimotor system minimizes the square of prediction errors for object lifting. This parallels the results of studies that investigated visually guided reaching, suggesting an overlap in the underlying mechanisms between tasks that involve different sensory systems. PMID:27760821

  14. Singular value decomposition based feature extraction technique for physiological signal analysis.

    PubMed

    Chang, Cheng-Ding; Wang, Chien-Chih; Jiang, Bernard C

    2012-06-01

    Multiscale entropy (MSE) is one of the popular techniques used to calculate and describe the complexity of physiological signals. Many studies use this approach to detect changes in physiological conditions in the human body. However, MSE results are easily affected by noise and trends, leading to incorrect estimation of MSE values. In this paper, singular value decomposition (SVD) is adopted in place of MSE to extract the features of physiological signals, and a support vector machine (SVM) is adopted to classify the different physiological states. A test data set based on the PhysioNet website was used, and the classification results showed that using SVD to extract features of the physiological signal could attain a classification accuracy of 89.157%, which is higher than that obtained using MSE values (71.084%). The results show the proposed analysis procedure is effective and appropriate for distinguishing different physiological states. This promising result could be used as a reference for doctors in the diagnosis of congestive heart failure (CHF) disease.
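
    A minimal sketch of the proposed pipeline, SVD-based features followed by SVM classification, is given below. The way the 1-D signal is folded into a matrix is an assumption (a simple reshape), since the abstract does not specify the embedding; signals and labels are random placeholders:

```python
import numpy as np
from sklearn.svm import SVC

def svd_features(signal, rows=10, k=5):
    """Fold a 1-D signal into a matrix and use its leading singular
    values as features; unlike MSE, these are cheap to compute and
    comparatively robust to noise and trends."""
    n = (len(signal) // rows) * rows
    M = np.asarray(signal[:n], dtype=float).reshape(rows, -1)
    s = np.linalg.svd(M, compute_uv=False)
    return s[:k]

# signals: list of RR-interval series; labels: 0 = normal, 1 = CHF (placeholders)
signals = [np.random.randn(1000) for _ in range(40)]
labels = np.random.randint(0, 2, size=40)
X = np.array([svd_features(s) for s in signals])
clf = SVC(kernel='rbf').fit(X, labels)
```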

  15. Variable forgetting factor mechanisms for diffusion recursive least squares algorithm in sensor networks

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.

    2017-12-01

    In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses of the mean and mean square performance of the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve performance superior to that of the existing DRLS algorithm with fixed forgetting factor when applied to scenarios of distributed parameter and spectrum estimation. Besides, the simulation results also demonstrate a good match with our proposed analytical expressions.
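
    A generic (non-diffusion) RLS iteration with a variable forgetting factor driven by the a posteriori error can be sketched as follows; the specific VFF rule below is illustrative only and is not the mechanism derived in the paper:

```python
import numpy as np

def vff_rls(X, d, lam_min=0.95, lam_max=0.9999, gamma=0.1, delta=100.0):
    """Recursive least squares with a variable forgetting factor: large
    a posteriori errors shrink lambda (track changes faster), small errors
    grow it toward lam_max (average more data, lower EMSE)."""
    n = X.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)
    lam = lam_max
    for x, dk in zip(X, d):
        Px = P @ x
        k = Px / (lam + x @ Px)            # gain vector
        e = dk - w @ x                     # a priori error
        w = w + k * e
        P = (P - np.outer(k, Px)) / lam
        e_post = dk - w @ x                # a posteriori error
        # simple VFF rule (illustrative, not the paper's exact mechanism)
        lam = np.clip(lam_max - gamma * e_post ** 2, lam_min, lam_max)
    return w
```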

  16. Moringa oleifera Seed Extract Alleviates Scopolamine-Induced Learning and Memory Impairment in Mice.

    PubMed

    Zhou, Juan; Yang, Wu-Shuang; Suo, Da-Qin; Li, Ying; Peng, Lu; Xu, Lan-Xi; Zeng, Kai-Yue; Ren, Tong; Wang, Ying; Zhou, Yu; Zhao, Yun; Yang, Li-Chao; Jin, Xin

    2018-01-01

    The extract of Moringa oleifera seeds has been shown to possess various pharmacological properties. In the present study, we assessed the neuropharmacological effects of 70% ethanolic M. oleifera seed extract (MSE) on cognitive impairment caused by scopolamine injection in mice using the passive avoidance and Morris water maze (MWM) tests. MSE (250 or 500 mg/kg) was administered to mice by oral gavage for 7 or 14 days, and cognitive impairment was induced by intraperitoneal injection of scopolamine (4 mg/kg) for 1 or 6 days. Mice that received scopolamine alone showed impaired learning and memory retention and considerably decreased cholinergic system reactivity and neurogenesis in the hippocampus. MSE pretreatment significantly ameliorated scopolamine-induced cognitive impairment and enhanced cholinergic system reactivity and neurogenesis in the hippocampus. Additionally, the protein expressions of phosphorylated Akt, ERK1/2, and CREB in the hippocampus were significantly decreased by scopolamine, but these decreases were reversed by MSE treatment. These results suggest that MSE-induced ameliorative cognitive effects are mediated by enhancement of the cholinergic neurotransmission system and neurogenesis via activation of the Akt, ERK1/2, and CREB signaling pathways. These findings suggest that MSE could be a potent neuropharmacological drug against amnesia, and its mechanism might be modulation of cholinergic activity via the Akt, ERK1/2, and CREB signaling pathways.

  17. Application of Rapid Visco Analyser (RVA) viscograms and chemometrics for maize hardness characterisation.

    PubMed

    Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena

    2015-04-15

    It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. Therefore, the RVA can complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulting prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower than, or of the same order as, the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Influence of motion picture rating on adolescent response to movie smoking.

    PubMed

    Sargent, James D; Tanski, Susanne; Stoolmiller, Mike

    2012-08-01

    To examine the association between movie smoking exposure (MSE) and adolescent smoking according to rating category. A total of 6522 US adolescents were enrolled in a longitudinal survey conducted at 8-month intervals; 5503 subjects were followed up at 8 months, 5019 subjects at 16 months, and 4575 subjects at 24 months. MSE was estimated from 532 recent box-office hits, blocked into 3 Motion Picture Association of America rating categories: G/PG, PG-13, and R. A survival model evaluated time to smoking onset. Median MSE in PG-13-rated movies was ∼3 times higher than median MSE from R-rated movies, but their relation with smoking was essentially the same, with adjusted hazard ratios of 1.49 (95% confidence interval [CI]: 1.23-1.81) and 1.33 (95% CI: 1.23-1.81) for each additional 500 occurrences of MSE, respectively. MSE from G/PG-rated movies was small and had no significant relationship with adolescent smoking. Attributable risk estimates showed that adolescent smoking would be reduced by 18% (95% CI: 14-21) if smoking in PG-13-rated movies were reduced to the fifth percentile. In comparison, making all parents maximally authoritative in their parenting would reduce adolescent smoking by 16% (95% CI: 12-19). The equivalent effect of PG-13-rated and R-rated MSE suggests it is the movie smoking that prompts adolescents to smoke, not other characteristics of R-rated movies or of adolescents drawn to them. An R rating for movie smoking could substantially reduce adolescent smoking by eliminating smoking from PG-13 movies.

  19. The Relationship between Mean Square Differences and Standard Error of Measurement: Comment on Barchard (2012)

    ERIC Educational Resources Information Center

    Pan, Tianshu; Yin, Yue

    2012-01-01

    In the discussion of mean square difference (MSD) and standard error of measurement (SEM), Barchard (2012) concluded that the MSD between 2 sets of test scores is greater than 2(SEM)² and SEM underestimates the score difference between 2 tests when the 2 tests are not parallel. This conclusion has limitations for 2 reasons. First,…

  20. Quantified Choice of Root-Mean-Square Errors of Approximation for Evaluation and Power Analysis of Small Differences between Structural Equation Models

    ERIC Educational Resources Information Center

    Li, Libo; Bentler, Peter M.

    2011-01-01

    MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of…

  1. Theoretical basis, principles of design, and experimental study of the prototype of perfect AFCS transmitting signals without coding

    NASA Astrophysics Data System (ADS)

    Platonov, A.; Zaitsev, Ie.; Opalski, L. J.

    2017-08-01

    The paper presents an overview of the design methodology and results of experiments with a prototype of a highly efficient optimal adaptive feedback communication system (AFCS), transmitting low-frequency analog signals without coding. The paper emphasizes the role of forward-transmitter saturation as the factor that blocked implementation of the theoretical results of the pioneering (1960s-1970s) and later research on FCS. A deeper analysis of the role of the statistical fitting condition in the adequate formulation and solution of the AFCS optimization task is given. The solution of the task, optimal transmission/reception algorithms, is presented in a form useful for elaboration of the hardware/software prototype. A notable particularity of the prototype is the absence of encoding/decoding units, whose functions are realized by the adaptive pulse amplitude modulator (PAM) of the forward transmitter (FT) and the estimating/controlling algorithm in the receiver of the base station (BS). Experiments confirm that the prototype transmits signals from FT to BS "perfectly": with a bit rate equal to the capacity of the system, and with limiting energy [J/bit] and spectral [bps/Hz] efficiency. Another experimentally confirmed and no less important particularity of AFCS is its capability to adjust the parameters of FT and BS to the characteristics of the application scenario and maintain the ideal regime of transmission, including spectral-energy efficiency. AFCS adjustment can be made using BS estimates of the mean square error (MSE). The concluding part of the paper discusses the presented results, stressing the capability of AFCS to solve problems appearing in the development of dense wireless networks.

  2. Optimization, evaluation, and comparison of standard algorithms for image reconstruction with the VIP-PET.

    PubMed

    Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M

    2014-07-01

    A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge; optimization is therefore needed to find the best algorithm and correctly exploit the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true phantom image with a set of reconstructed images obtained by each algorithm. This is achieved by calculating image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. A region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulations including the expected CdTe and electronics specifics.
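
    The image quality merit parameters used for the comparison can be computed directly once a set of reconstructions of a known phantom is available; a minimal sketch, assuming voxel-wise statistics averaged over the image (which may differ from the authors' exact definitions):

```python
import numpy as np

def image_quality_merits(recons, truth):
    """Bias, variance, and MSE of a set of reconstructions against the
    known phantom, computed voxel-wise and then averaged over the image."""
    recons = np.asarray(recons, dtype=float)   # shape (n_recon, *image_shape)
    truth = np.asarray(truth, dtype=float)
    mean_img = recons.mean(axis=0)
    bias = np.mean(mean_img - truth)           # average voxel-wise bias
    variance = np.mean(recons.var(axis=0))     # average voxel-wise variance
    mse = np.mean((recons - truth) ** 2)       # mean over voxels of bias^2 + variance
    return bias, variance, mse
```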

  3. Fuzzy rule-based forecast of meteorological drought in western Niger

    NASA Astrophysics Data System (ADS)

    Abdourahamane, Zakari Seybou; Acar, Reşat

    2018-01-01

    Understanding the causes of rainfall anomalies in the West African Sahel in order to effectively predict drought events remains a challenge. The physical mechanisms that influence precipitation in this region are complex, uncertain, and imprecise in nature. Fuzzy logic techniques are renowned for being highly efficient at modeling such dynamics. This paper attempts to forecast meteorological drought in western Niger using fuzzy rule-based modeling techniques. The 3-month scale standardized precipitation index (SPI-3) of four rainfall stations was used as the predictand. Monthly data of the southern oscillation index (SOI), South Atlantic sea surface temperature (SST), relative humidity (RH), and Atlantic sea level pressure (SLP), sourced from the National Oceanic and Atmospheric Administration (NOAA), were used as predictors. Fuzzy rules and membership functions were generated using a fuzzy c-means clustering approach, expert decision, and literature review. For a minimum lead time of 1 month, the model has a coefficient of determination R² between 0.80 and 0.88, a mean square error (MSE) below 0.17, and a Nash-Sutcliffe efficiency (NSE) ranging between 0.79 and 0.87. The empirical frequency distributions of the predicted and the observed drought classes are equal at the 99% confidence level based on a two-sample t test. Results also revealed a discrepancy in the influence of SOI and SLP on drought occurrence at the four stations, while the effects of SST and RH are space-independent, both being significantly correlated (at the α < 0.05 level) with the SPI-3. Moreover, the implemented fuzzy model, compared to a decision-tree-based forecast model, shows better forecast skill.

  4. An integral design strategy combining optical system and image processing to obtain high resolution images

    NASA Astrophysics Data System (ADS)

    Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun

    2016-05-01

    In this paper, an integral design that combines the optical system with image processing is introduced to obtain high-resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in a failure of efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function during optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high-resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. A Wiener filter algorithm is then adopted for the image-processing simulation, and the mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit for such optical imaging systems are not the best, they can provide image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages in simplifying structure and reducing cost while simultaneously gaining high-resolution images, and it has a promising perspective for industrial application.
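
    The image-processing half of such a pipeline, Wiener restoration scored by MSE against the original scene, can be sketched as below. This is a frequency-domain Wiener deconvolution with an assumed constant noise-to-signal ratio; the scene and PSF are placeholders, not the paper's optical model:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution: F_hat = H* G / (|H|^2 + NSR),
    where NSR is an (assumed constant) noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F_hat))

def mse(a, b):
    return np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)

# Evaluate a candidate optical design: blur a test scene with the design's
# PSF, restore with the Wiener filter, and score the end-to-end MSE.
scene = np.random.rand(128, 128)                         # placeholder test image
psf = np.outer(np.hanning(9), np.hanning(9))
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf, s=scene.shape)))
restored = wiener_deconvolve(blurred, psf)
print(mse(restored, scene))
```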

  5. Modeling the spatial distribution of African buffalo (Syncerus caffer) in the Kruger National Park, South Africa

    PubMed Central

    Hughes, Kristen; Budke, Christine M.; Ward, Michael P.; Kerry, Ruth; Ingram, Ben

    2017-01-01

    The population density of wildlife reservoirs contributes to disease transmission risk for domestic animals. The objective of this study was to model the African buffalo distribution of the Kruger National Park. A secondary objective was to collect field data to evaluate models and determine environmental predictors of buffalo detection. Spatial distribution models were created using buffalo census information and archived data from previous research. Field data were collected during the dry (August 2012) and wet (January 2013) seasons using a random walk design. The fit of the prediction models was assessed descriptively and formally by calculating the root mean square error (rMSE) of deviations from field observations. Logistic regression was used to estimate the effects of environmental variables on the detection of buffalo herds, and linear regression was used to identify predictors of larger herd sizes. A zero-inflated Poisson model produced distributions that were most consistent with expected buffalo behavior. Field data confirmed that environmental factors including season (P = 0.008), vegetation type (P = 0.002), and vegetation density (P = 0.010) were significant predictors of buffalo detection. Bachelor herds were more likely to be detected in dense vegetation (P = 0.005) and during the wet season (P = 0.022) compared to the larger mixed-sex herds. Static distribution models for African buffalo can produce biologically reasonable results, but environmental factors have significant effects and could therefore be used to improve model performance. Accurate distribution models are critical for the evaluation of disease risk and to model disease transmission. PMID:28902858

  6. Adaptive infinite impulse response system identification using modified-interior search algorithm with Lèvy flight.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar; Aggarwal, Apoorva

    2017-03-01

    In this paper, a new meta-heuristic optimization technique, called interior search algorithm (ISA) with Lévy flight, is proposed and applied to determine the optimal parameters of an unknown infinite impulse response (IIR) system for the system identification problem. ISA is based on aesthetics, which is commonly used in interior design and decoration processes. In ISA, a composition phase and a mirror phase are applied to address nonlinear and multimodal system identification problems. System identification using the modified-ISA (M-ISA) based method offers faster convergence and single-parameter tuning, and does not require derivative information, because it uses a stochastic random search based on the concepts of Lévy flight. Proper tuning of the control parameter has been performed in order to achieve a balance between the intensification and diversification phases. In order to evaluate the performance of the proposed method, the mean square error (MSE), computation time, and percentage improvement are considered as performance measures. To validate the performance of the M-ISA based method, simulations have been carried out for three benchmark IIR systems, using both same-order and reduced-order models. Genetic algorithm (GA), particle swarm optimization (PSO), cat swarm optimization (CSO), cuckoo search algorithm (CSA), differential evolution using wavelet mutation (DEWM), firefly algorithm (FFA), craziness based particle swarm optimization (CRPSO), harmony search (HS) algorithm, opposition based harmony search (OHS) algorithm, hybrid particle swarm optimization-gravitational search algorithm (HPSO-GSA) and ISA are also used to model the same examples and the simulation results are compared. The obtained results confirm the efficiency of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
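
    The underlying identification problem, choosing IIR coefficients to minimize the MSE between plant and model outputs, can be sketched with any global optimizer. The sketch below uses SciPy's differential evolution as a stand-in metaheuristic (ISA/M-ISA is not available in SciPy), on a hypothetical second-order plant; coefficients and bounds are illustrative:

```python
import numpy as np
from scipy.signal import lfilter
from scipy.optimize import differential_evolution

# Unknown plant and candidate model share the structure
# H(z) = (b0 + b1 z^-1) / (1 + a1 z^-1 + a2 z^-2).
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)                          # excitation signal
d = lfilter([0.05, -0.4], [1.0, -1.131, 0.25], x)      # plant output

def mse_cost(p):
    """MSE between the plant output and the candidate model output."""
    b0, b1, a1, a2 = p
    y = lfilter([b0, b1], [1.0, a1, a2], x)
    return np.mean((d - y) ** 2)

bounds = [(-2, 2)] * 4
res = differential_evolution(mse_cost, bounds, seed=0)
print(res.x, res.fun)       # estimated coefficients and final MSE
```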

  7. Sobol' sensitivity analysis of NAPL-contaminated aquifer remediation process based on multiple surrogates

    NASA Astrophysics Data System (ADS)

    Luo, Jiannan; Lu, Wenxi

    2014-06-01

    Sobol' sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the remediation efficiency to the design variables: remediation duration, surfactant concentration, and injection rates at four wells. First, surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were then compared. Based on the developed surrogate models, the Sobol' method was used to calculate the sensitivity indices of the design variables that affect the remediation efficiency. The coefficient of determination (R²) and the mean square error (MSE) of these two surrogate models demonstrated that both models had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. The Sobol' sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by the rates of injection at wells 1 and 3, while the rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, the high-order sensitivity indices were all smaller than 0.01, which indicates that the interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol' sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and in interaction) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimization of the groundwater remediation process.
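
    Given a trained surrogate, first- and total-order Sobol' indices can be computed with a standard sensitivity-analysis library. A sketch using SALib follows, with placeholder variable bounds and a toy analytic surrogate standing in for the Kriging/RBFANN models:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Six design variables, as in the remediation problem: duration, surfactant
# concentration, and injection rates at four wells (bounds are placeholders).
problem = {
    'num_vars': 6,
    'names': ['duration', 'conc', 'q1', 'q2', 'q3', 'q4'],
    'bounds': [[10, 100], [0, 50]] + [[0, 10]] * 4,
}

def surrogate(X):
    # Toy stand-in for the trained Kriging/RBFANN surrogate model
    return 0.6 * X[:, 0] + 0.2 * X[:, 2] + 0.15 * X[:, 4] + 0.05 * X[:, 1] * X[:, 3]

X = saltelli.sample(problem, 1024)   # N * (2 * num_vars + 2) surrogate runs
Y = surrogate(X)
Si = sobol.analyze(problem, Y)
print(Si['S1'])                      # first-order indices
print(Si['ST'])                      # total-order indices
```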

  8. Infrared image background modeling based on improved Susan filtering

    NASA Astrophysics Data System (ADS)

    Yuehua, Xia

    2018-02-01

    When the SUSAN filter is used to model the background of an infrared image, the Gaussian kernel it uses lacks directional filtering ability. After filtering, the edge information of the image is not well preserved, so many edge singular points remain in the difference image, increasing the difficulty of target detection. To solve these problems, an anisotropy algorithm is introduced in this paper, and an anisotropic Gaussian filter is used instead of the Gaussian filter in the SUSAN filter operator. First, an anisotropic gradient operator is used to calculate the horizontal and vertical gradients at each image point, to determine the direction of the filter's long axis. Second, the smoothness of the point's local area and neighborhood is used to calculate the filter's long- and short-axis variances. Then, the first-order norm of the difference between the gray level of the point's local area and its mean is calculated, to determine the threshold of the SUSAN filter. Finally, the constructed SUSAN filter is convolved with the image to obtain the background image, and the difference between the background image and the original image is computed. The background modeling performance on infrared images is evaluated by Mean Squared Error (MSE), Structural Similarity (SSIM) and local Signal-to-noise Ratio Gain (GSNR). Compared with the traditional filtering algorithm, the improved SUSAN filter achieves a better background model, effectively preserving the edge information in the image; dim small targets are effectively enhanced in the difference image, which greatly reduces the false alarm rate.

  9. Optimization of thermal conductivity lightweight brick type AAC (Autoclaved Aerated Concrete) effect of Si & Ca composition by using Artificial Neural Network (ANN)

    NASA Astrophysics Data System (ADS)

    Zulkifli; Wiryawan, G. P.

    2018-03-01

    Lightweight brick is an important component of building construction; its thermal, mechanical, and acoustic properties must therefore meet the relevant standards. This paper addresses the thermal conductivity of lightweight brick. Lightweight brick has the advantages of low density (500-650 kg/m³) and lower cost, and it can reduce the structural load by 30-40% compared to conventional clay brick. In this research, an Artificial Neural Network (ANN) is used to predict the thermal conductivity of lightweight brick of the Autoclaved Aerated Concrete (AAC) type. Training and evaluation of 10 ANN models with 1 to 10 hidden nodes showed that the ANN with 3 hidden nodes had the best performance, as indicated by a mean validation MSE (Mean Square Error) of 0.003269 over three training runs. This ANN was then used to predict the thermal conductivity of four lightweight brick samples. The predictions for the AAC1, AAC2, AAC3, and AAC4 samples were 0.243, 0.29, 0.32, and 0.32 W/m·K, respectively. Furthermore, the ANN was used to determine the effect of the silicon (Si) and calcium (Ca) compositions on the thermal conductivity of the lightweight brick. The ANN simulation results show that the thermal conductivity increases with increasing Si composition. The Si content is allowed to be at most 26.57%, while the Ca content lies in the range 20.32%-30.35%.
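
    The model-selection step, sweeping the number of hidden nodes and choosing by mean validation MSE over repeated training runs, can be sketched as follows (scikit-learn is used for illustration; the compositions and conductivities are synthetic placeholders, not the paper's data):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# X: [Si%, Ca%] compositions; y: thermal conductivity (W/m.K) -- placeholders
X = np.random.rand(60, 2) * [26.57, 30.35]
y = 0.2 + 0.005 * X[:, 0] + 0.002 * X[:, 1] + 0.01 * np.random.randn(60)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

best = None
for hidden in range(1, 11):                    # 1 to 10 hidden nodes
    mses = []
    for seed in range(3):                      # three training runs per size
        net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=5000,
                           random_state=seed)
        net.fit(X_tr, y_tr)
        mses.append(mean_squared_error(y_va, net.predict(X_va)))
    if best is None or np.mean(mses) < best[1]:
        best = (hidden, np.mean(mses))
print(best)    # (number of hidden nodes, mean validation MSE)
```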

  10. Registration of 3D spectral OCT volumes combining ICP with a graph-based approach

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Lee, Kyungmoo; Garvin, Mona K.; Abràmoff, Michael D.; Sonka, Milan

    2012-02-01

    The introduction of spectral Optical Coherence Tomography (OCT) scanners has enabled acquisition of high-resolution, 3D cross-sectional volumetric images of the retina. 3D-OCT is used to detect and manage eye diseases such as glaucoma and age-related macular degeneration. To follow up patients over time, image registration is a vital tool that enables more precise, quantitative comparison of disease states. In this work we present a 3D registration method based on a two-step approach. In the first step we register both scans in the XY domain using an Iterative Closest Point (ICP) based algorithm. This algorithm is applied to vessel segmentations obtained from the projection image of each scan. The distance minimized in the ICP algorithm includes measurements of the vessel orientation and vessel width to allow for a more robust match. In the second step, a graph-based method is applied to find the optimal translation along the depth axis of the individual A-scans in the volume to match both scans. The cost image used to construct the graph is based on the mean squared error (MSE) between matching A-scans in both images at different translations. We have applied this method to the registration of Optic Nerve Head (ONH) centered 3D-OCT scans of the same patient. First, 10 3D-OCT scans of 5 eyes with glaucoma imaged in vivo were registered for a qualitative evaluation of the algorithm performance. Then, 17 OCT data set pairs of 17 eyes with known deformation were used for quantitative assessment of the method's robustness.
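
    The cost image for the second (graph-based) step is built from MSEs between matching A-scans at candidate axial translations; below is a minimal sketch of that cost computation for one A-scan pair, with placeholder data and integer shifts only:

```python
import numpy as np

def ascan_shift_costs(a, b, max_shift=50):
    """MSE between two depth profiles (A-scans) for each candidate axial
    translation; stacking these vectors over all A-scan pairs yields the
    cost image from which the graph search picks optimal translations."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    costs = []
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            d = a[s:] - b[:len(b) - s]     # b shifted down relative to a
        else:
            d = a[:s] - b[-s:]             # b shifted up relative to a
        costs.append(np.mean(d ** 2))
    return np.array(costs)

# Best integer shift aligning b to a (placeholder A-scans)
a = np.random.rand(512)
b = np.roll(a, 7) + 0.01 * np.random.randn(512)
best = np.argmin(ascan_shift_costs(a, b)) - 50
print(best)
```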

  11. Healing effects of Musa sapientum var. paradisiaca in diabetic rats with co-occurring gastric ulcer: cytokines and growth factor by PCR amplification.

    PubMed

    Kumar, Mohan; Gautam, Manish Kumar; Singh, Amit; Goel, Raj Kumar

    2013-11-05

    The present study evaluates the effects of an extract of Musa sapientum fruit (MSE) on the ulcer index, blood glucose level, and gastric mucosal cytokines TNF-α and IL-1β and growth factor TGF-α (all affected in diabetes and chronic ulcer) in acetic acid (AA)-induced gastric ulcer (GU) in diabetic (DR) rats. MSE (100 mg/kg, oral), omeprazole (OMZ, 2.0 mg/kg, oral), insulin (INS, 4 U/kg, sc), or pentoxyphylline (PTX, 10 mg/kg, oral) was given once daily for 10 days to rats 14 days after induction of diabetes with streptozotocin (60 mg/kg, intraperitoneal), while the normal/diabetic rats received CMC for the same period after induction of GU with AA. The ulcer index was calculated as the product of the length and width (mm²/rat) of the ulcers, while TNF-α, IL-1β and TGF-α were estimated in the gastric mucosal homogenate from the intact/ulcer region. Phytochemical screening and HPTLC analysis of MSE were done following standard procedures. An increase in ulcer index, TNF-α and IL-1β was observed in normal (NR)-AA rats compared to NR-normal saline rats; these were further increased in DR-AA rats, while treatment of DR-AA rats with MSE, OMZ, INS or PTX reversed them, more so with MSE and PTX. A significant increase in TGF-α was found in NR-AA rats, which did not increase further in DR-AA rats. MSE and PTX tended to increase TGF-α, while OMZ and INS showed little or no effect on TGF-α in AA-DR rats. Phytochemical screening of MSE showed the presence of saponins, flavonoids, glycosides, steroids and alkaloids, and HPTLC analysis indicated the presence of eight active compounds. MSE showed antidiabetic and better ulcer-healing effects compared with OMZ (antiulcer) or INS (antidiabetic) in diabetic rats and could be more effective in diabetes with concurrent gastric ulcer.

  12. What Caused the UK's Largest Common Dolphin (Delphinus delphis) Mass Stranding Event?

    PubMed Central

    Jepson, Paul D.; Deaville, Robert; Acevedo-Whitehouse, Karina; Barnett, James; Brownlow, Andrew; Brownell Jr., Robert L.; Clare, Frances C.; Davison, Nick; Law, Robin J.; Loveridge, Jan; Macgregor, Shaheed K.; Morris, Steven; Murphy, Sinéad; Penrose, Rod; Perkins, Matthew W.; Pinn, Eunice; Seibel, Henrike; Siebert, Ursula; Sierra, Eva; Simpson, Victor; Tasker, Mark L.; Tregenza, Nick; Cunningham, Andrew A.; Fernández, Antonio

    2013-01-01

    On 9 June 2008, the UK's largest mass stranding event (MSE) of short-beaked common dolphins (Delphinus delphis) occurred in Falmouth Bay, Cornwall. At least 26 dolphins died, and a similar number was refloated/herded back to sea. On necropsy, all dolphins were in good nutritive status with empty stomachs and no evidence of known infectious disease or acute physical injury. Auditory tissues were grossly normal (26/26) but had microscopic haemorrhages (5/5) and mild otitis media (1/5) in the freshest cases. Five lactating adult dolphins, one immature male, and one immature female tested were free of harmful algal toxins and had low chemical pollutant levels. Pathological evidence of mud/seawater inhalation (11/26), local tide cycle, and the relative lack of renal myoglobinuria (26/26) suggested MSE onset on a rising tide between 06:30 and 08:21 hrs (9 June). Potential causes excluded or considered highly unlikely included infectious disease, gas/fat embolism, boat strike, by-catch, predator attack, foraging unusually close to shore, chemical or algal toxin exposure, abnormal weather/climatic conditions, and high-intensity acoustic inputs from seismic airgun arrays or natural sources (e.g., earthquakes). International naval exercises did occur in close proximity to the MSE with the most intense part of the exercises (including mid-frequency sonars) occurring four days before the MSE and resuming with helicopter exercises on the morning of the MSE. The MSE may therefore have been a “two-stage process” where a group of normally pelagic dolphins entered Falmouth Bay and, after 3-4 days in/around the Bay, a second acoustic/disturbance event occurred causing them to strand en masse. This spatial and temporal association with the MSE, previous associations between naval activities and cetacean MSEs, and an absence of other identifiable factors known to cause cetacean MSEs, indicates naval activity to be the most probable cause of the Falmouth Bay MSE. PMID:23646103

  13. [Locally weighted least squares estimation of DPOAE evoked by continuously sweeping primaries].

    PubMed

    Han, Xiaoli; Fu, Xinxing; Cui, Jie; Xiao, Ling

    2013-12-01

    Distortion product otoacoustic emission (DPOAE) signals can be used for the diagnosis of hearing loss, so they have important clinical value. Using continuously sweeping primaries to measure DPOAE provides an efficient tool to record DPOAE data rapidly when DPOAE is measured over a large frequency range. In this paper, locally weighted least squares estimation (LWLSE) of 2f1-f2 DPOAE is presented based on the least-squares-fit (LSF) algorithm, in which DPOAE is evoked by continuously sweeping tones. In our study, we used a weighted error function as the loss function, with weighting matrices applied in the local sense, to obtain a smaller estimation variance. First, an ordinary least squares estimate of the DPOAE parameters was obtained. Then the error vectors were grouped and a different local weighting matrix was calculated for each group. Finally, the parameters of the DPOAE signal were estimated based on the least squares principle using the local weighting matrices. The simulation results showed that the estimation variance and fluctuation errors were reduced, so the method estimates DPOAE and stimuli more accurately and stably, which facilitates the extraction of a clearer DPOAE fine structure.

  14. Analysis of surface-water data network in Kansas for effectiveness in providing regional streamflow information; with a section on theory and application of generalized least squares

    USGS Publications Warehouse

    Medina, K.D.; Tasker, Gary D.

    1987-01-01

    This report documents the results of an analysis of the surface-water data network in Kansas for its effectiveness in providing regional streamflow information. The network was analyzed using generalized least squares regression. The correlation and time-sampling error of the streamflow characteristic are considered in the generalized least squares method. Unregulated medium-, low-, and high-flow characteristics were selected to be representative of the regional information that can be obtained from streamflow-gaging-station records for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and (or) adding new stations. The analysis used streamflow records for all currently operated stations that were not affected by regulation and for discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The State was divided into three network areas, western, northeastern, and southeastern Kansas, and analysis was made for the three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean-square error for each cost level could be obtained by adding new stations and discontinuing some current network stations. Large reductions in sampling mean-square error for low-flow information could be achieved in all three network areas, the reduction in western Kansas being the most dramatic. The addition of new stations would be most beneficial for mean-flow information in western Kansas. The reduction of sampling mean-square error for high-flow information would benefit most from the addition of new stations in western Kansas. Southeastern Kansas showed the smallest error reduction in high-flow information. A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas.

  15. Quantitative comparison of the spreading and invasion of radial growth phase and metastatic melanoma cells in a three-dimensional human skin equivalent model.

    PubMed

    Haridas, Parvathi; McGovern, Jacqui A; McElwain, Sean D L; Simpson, Matthew J

    2017-01-01

    Standard two-dimensional (2D) cell migration assays do not provide information about vertical invasion processes, which are critical for melanoma progression. We provide information about three-dimensional (3D) melanoma cell migration, proliferation and invasion in a 3D melanoma skin equivalent (MSE) model. In particular, we pay careful attention to comparing the structure of the tissues in the MSE with similarly-prepared 3D human skin equivalent (HSE) models. The HSE model is identically prepared to the MSE model except that melanoma cells are omitted. Using the MSE model, we examine melanoma migration, proliferation and invasion from two different human melanoma cell lines. One cell line, WM35, is associated with the early phase of the disease, where spreading is thought to be confined to the epidermis. The other cell line, SK-MEL-28, is associated with the later phase of the disease, where spreading into the dermis is expected. 3D MSE and HSE models are constructed using human de-epidermised dermis (DED) prepared from skin tissue. Primary fibroblasts and primary keratinocytes are used in the MSE and HSE models to ensure the formation of a stratified epidermis, with a well-defined basement membrane. Radial spreading of cells across the surface of the HSE and MSE models is observed. Vertical invasion of melanoma cells downward through the skin is observed and measured using immunohistochemistry. All measurements of invasion are made at days 0, 9, 15 and 20, providing detailed time course data. Both HSE and MSE models are similar to native skin in vivo, with a well-defined stratification of the epidermis that is separated from the dermis by a basement membrane. In the HSE and MSE we find fibroblast cells confined to the dermis, and differentiated keratinocytes in the epidermis. In the MSE, melanoma cells form colonies in the epidermis during the early part of the experiment. In the later stage of the experiment, the melanoma cells in the MSE invade deeper into the tissues. Interestingly, both the WM35 and SK-MEL-28 melanoma cells lead to a breakdown of the basement membrane and eventually enter the dermis. However, these two cell lines invade at different rates, with the SK-MEL-28 melanoma cells invading faster than the WM35 cells. The MSE and HSE models are a reliable platform for studying melanoma invasion in a 3D tissue that is similar to native human skin. Interestingly, we find that the WM35 cell line, which is thought to be associated with radial spreading only, is able to invade into the dermis. The vertical invasion of melanoma cells into the dermal region appears to be associated with a localised disruption of the basement membrane. Presenting our results in terms of time course data, along with images and quantitative measurements of the depth of invasion, extends previous 3D work that has often been reported without these details.

  17. An Application of Interactive Computer Graphics to the Study of Inferential Statistics and the General Linear Model

    DTIC Science & Technology

    1991-09-01

    matrix, the Regression Sum of Squares (SSR) and Error Sum of Squares (SSE) are also displayed as a percentage of the Total Sum of Squares (SSTO)...vector when the student compares the SSR to the SSE. In addition to the plot, the actual values of SSR, SSE, and SSTO are also provided. Figure 3 gives the... [remainder of excerpt garbled: a figure showing the projection of Y onto the estimation space and the error space, illustrating the decomposition SSTO = SSR + SSE]

  18. Forecasting of primary energy consumption data in the United States: A comparison between ARIMA and Holt-Winters models

    NASA Astrophysics Data System (ADS)

    Rahman, A.; Ahmar, A. S.

    2017-09-01

    This research compares the ARIMA model and the Holt-Winters model, based on MAE, RSS, MSE, and RMS criteria, in predicting total primary energy consumption data in the US. The data for this research range from January 1973 to December 2016 and are processed using R software. Based on the analysis, the additive Holt-Winters model (MSE: 258350.1) is the most appropriate model for predicting total primary energy consumption in the US. It is more appropriate than both the multiplicative Holt-Winters model (MSE: 262260.4) and the seasonal ARIMA model (MSE: 723502.2).
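
    A minimal sketch of this kind of comparison, using Python's statsmodels as a stand-in for the R workflow described above; the synthetic `energy` series, the additive trend, and all hyperparameters are illustrative assumptions, not values from the paper:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Toy stand-in for the monthly primary-energy series (Jan 1973 - Dec 2016)
idx = pd.date_range("1973-01", "2016-12", freq="MS")
rng = np.random.default_rng(0)
trend = np.linspace(4000, 8000, len(idx))
season = 600 * np.sin(2 * np.pi * idx.month / 12)
energy = pd.Series(trend + season + rng.normal(0, 150, len(idx)), index=idx)

def mse(y, yhat):
    return float(np.mean((np.asarray(y) - np.asarray(yhat)) ** 2))

for label, seasonal in [("additive", "add"), ("multiplicative", "mul")]:
    fit = ExponentialSmoothing(energy, trend="add", seasonal=seasonal,
                               seasonal_periods=12).fit()
    print(f"Holt-Winters {label}: in-sample MSE = {mse(energy, fit.fittedvalues):.1f}")
```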

  19. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    NASA Astrophysics Data System (ADS)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
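
    A toy illustration (not the GRACE processing chain) of how arithmetic precision alone propagates into a QR-based least-squares solution; the matrix, scaling, and dimensions are arbitrary assumptions chosen to exaggerate the conditioning:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 50))
A[:, 0] *= 1e6                    # poor column scaling inflates the condition number
x_true = rng.standard_normal(50)
b = A @ x_true

def qr_solve(A, b, dtype):
    # Solve min ||Ax - b|| via a QR factorization at the given precision
    Q, R = np.linalg.qr(A.astype(dtype))
    return np.linalg.solve(R, Q.T @ b.astype(dtype))

for dt in (np.float32, np.float64):
    err = np.linalg.norm(qr_solve(A, b, dt) - x_true)
    print(dt.__name__, "parameter error:", err)
```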

  20. Identification and compensation of the temperature influences in a miniature three-axial accelerometer based on the least squares method

    NASA Astrophysics Data System (ADS)

    Grigorie, Teodor Lucian; Corcau, Ileana Jenica; Tudosie, Alexandru Nicolae

    2017-06-01

    The paper presents a way to obtain an intelligent miniaturized three-axial accelerometric sensor, based on the on-line estimation and compensation of the sensor errors generated by environmental temperature variation. Given that this error is a strongly nonlinear, complex function of the environmental temperature and of the acceleration exciting the sensor, its correction cannot be done off-line, and it requires the presence of an additional temperature sensor. The proposed identification methodology for the error model is based on the least squares method, which processes off-line the numerical values obtained from experimental testing of the accelerometer for different values of acceleration applied to its axes of sensitivity and for different operating temperatures. A final analysis of the error level after compensation highlights the best variant of the matrix in the error model. The sections of the paper present the results of the experimental testing of the accelerometer on all three sensitivity axes, the identification of the error models on each axis by using the least squares method, and the validation of the obtained models with experimental values. For all three detection channels, a reduction of almost two orders of magnitude in the maximum absolute acceleration error due to environmental temperature variation was obtained.
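
    A minimal sketch of the off-line identification step, assuming a simple polynomial basis in temperature T and acceleration a; the basis, the synthetic data, and the coefficient values are illustrative and are not the paper's error-model matrix:

```python
import numpy as np

# Synthetic test data standing in for thermal-chamber measurements:
# temperatures T (deg C), applied accelerations a (m/s^2), observed errors e
rng = np.random.default_rng(1)
T = rng.uniform(-20, 60, 300)
a = rng.uniform(-9.81, 9.81, 300)
e = 0.02 * T - 1e-4 * T**2 + 5e-4 * a * T + rng.normal(0, 1e-3, 300)

def basis(T, a):
    # Example regressors: 1, T, T^2, a, a*T (the paper selects its own matrix)
    return np.column_stack([np.ones_like(T), T, T**2, a, a * T])

coeffs, *_ = np.linalg.lstsq(basis(T, a), e, rcond=None)
compensated = e - basis(T, a) @ coeffs   # residual error after compensation
print("max |error| before:", np.abs(e).max(), "after:", np.abs(compensated).max())
```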

  1. Accurate phase extraction algorithm based on Gram–Schmidt orthonormalization and least square ellipse fitting method

    NASA Astrophysics Data System (ADS)

    Lei, Hebing; Yao, Yong; Liu, Haopeng; Tian, Yiting; Yang, Yanfu; Gu, Yinglong

    2018-06-01

    An accurate algorithm combining Gram-Schmidt orthonormalization and the least square ellipse fitting technique is proposed, which can be used for phase extraction from two or three interferograms. The DC term of the background intensity is suppressed by a subtraction operation on three interferograms or by a high-pass filter on two interferograms. By performing Gram-Schmidt orthonormalization on the pre-processed interferograms, the phase shift error is corrected and a general ellipse form is derived. The background intensity error and the residual error can then be compensated by the least square ellipse fitting method. Finally, the phase can be extracted rapidly. The algorithm can cope with two or three interferograms with environmental disturbance, low fringe number or small phase shifts. The accuracy and effectiveness of the proposed algorithm are verified by both numerical simulations and experiments.
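
    A compact sketch of the Gram-Schmidt step for the two-interferogram case, assuming the DC term has already been removed (approximated here by mean subtraction); the ellipse-fitting refinement described in the abstract is omitted, and the recovered phase is defined up to a global sign and offset:

```python
import numpy as np

def gs_phase(i1, i2):
    # Remove residual DC, then orthogonalize frame 2 against frame 1
    u1 = i1 - i1.mean()
    u2 = i2 - i2.mean()
    u2 = u2 - (np.sum(u2 * u1) / np.sum(u1 * u1)) * u1
    # Normalize both fields to unit energy; they now act as a quadrature pair
    u1 = u1 / np.linalg.norm(u1)
    u2 = u2 / np.linalg.norm(u2)
    return np.arctan2(u2, u1)        # wrapped phase estimate

# Synthetic fringes with an unknown phase step between the two frames
x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
phi = 8 * (x**2 + y**2)
wrapped = gs_phase(np.cos(phi), np.cos(phi + 1.2))
```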

  2. Estimating and testing interactions when explanatory variables are subject to non-classical measurement error.

    PubMed

    Murad, Havi; Kipnis, Victor; Freedman, Laurence S

    2016-10-01

    Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.

  3. Intrinsic Raman spectroscopy for quantitative biological spectroscopy Part II

    PubMed Central

    Bechtel, Kate L.; Shih, Wei-Chuan; Feld, Michael S.

    2009-01-01

    We demonstrate the effectiveness of intrinsic Raman spectroscopy (IRS) at reducing errors caused by absorption and scattering. Physical tissue models, solutions of varying absorption and scattering coefficients with known concentrations of Raman scatterers, are studied. We show significant improvement in prediction error by implementing IRS to predict concentrations of Raman scatterers using both ordinary least squares regression (OLS) and partial least squares regression (PLS). In particular, we show that IRS provides a robust calibration model that does not increase in error when applied to samples with optical properties outside the range of calibration. PMID:18711512
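
    An illustrative comparison of the two regressions named above on synthetic spectra, with scikit-learn standing in for whatever software the authors used; the data dimensions and component count are assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 300))          # 60 spectra, 300 spectral channels
c = X[:, 40] * 0.8 + X[:, 120] * 0.5 + rng.normal(0, 0.1, 60)  # concentrations

for name, model in [("OLS", LinearRegression()),
                    ("PLS", PLSRegression(n_components=5))]:
    score = cross_val_score(model, X, c, cv=5,
                            scoring="neg_mean_squared_error").mean()
    print(name, "cross-validated MSE:", -score)
```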

  4. Decreased complexity of glucose dynamics in diabetes: evidence from multiscale entropy analysis of continuous glucose monitoring system data.

    PubMed

    Chen, Jin-Long; Chen, Pin-Fan; Wang, Hung-Ming

    2014-07-15

    Parameters of glucose dynamics recorded by the continuous glucose monitoring system (CGMS) could help in the control of glycemic fluctuations, which is important in diabetes management. Multiscale entropy (MSE) analysis has recently been developed to measure the complexity of physical and physiological time sequences. A reduced MSE complexity index indicates the increased repetition patterns of the time sequence, and, thus, a decreased complexity in this system. No study has investigated the MSE analysis of glucose dynamics in diabetes. This study was designed to compare the complexity of glucose dynamics between the diabetic patients (n = 17) and the control subjects (n = 13), who were matched for sex, age, and body mass index via MSE analysis using the CGMS data. Compared with the control subjects, the diabetic patients revealed a significant increase (P < 0.001) in the mean (diabetic patients 166.0 ± 10.4 vs. control subjects 93.3 ± 1.5 mg/dl), the standard deviation (51.7 ± 4.3 vs. 11.1 ± 0.5 mg/dl), and the mean amplitude of glycemic excursions (127.0 ± 9.2 vs. 27.7 ± 1.3 mg/dl) of the glucose levels; and a significant decrease (P < 0.001) in the MSE complexity index (5.09 ± 0.23 vs. 7.38 ± 0.28). In conclusion, the complexity of glucose dynamics is decreased in diabetes. This finding implies the reactivity of glucoregulation is impaired in the diabetic patients. Such impairment presenting as an increased regularity of glycemic fluctuating pattern could be detected by MSE analysis. Thus, the MSE complexity index could potentially be used as a biomarker in the monitoring of diabetes.
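
    For readers unfamiliar with the method, a simplified sketch of MSE analysis as commonly described (coarse-graining followed by sample entropy, with the tolerance r fixed from the original series); taking the complexity index as the sum of entropies over scales is an assumption about the paper's exact definition:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    # SampEn(m, r): -log of the conditional probability that sequences matching
    # for m points also match for m + 1 points (Chebyshev distance)
    x = np.asarray(x, float)
    r = 0.15 * x.std() if r is None else r
    N = len(x)
    def matches(mm):
        emb = np.array([x[i:i + mm] for i in range(N - m)])  # N - m templates
        c = 0
        for i in range(len(emb) - 1):
            c += np.sum(np.max(np.abs(emb[i + 1:] - emb[i]), axis=1) <= r)
        return c
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A and B else np.inf

def multiscale_entropy(x, max_scale=10):
    x = np.asarray(x, float)
    r = 0.15 * x.std()                    # tolerance fixed at scale 1
    curve = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)   # coarse-graining
        curve.append(sample_entropy(coarse, r=r))
    return curve

rng = np.random.default_rng(0)
glucose = 100 + np.cumsum(rng.normal(0, 1, 1440))   # toy CGMS-like series
print("complexity index:", sum(multiscale_entropy(glucose)))
```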

  5. Multistage electrotherapy delivered through chronically-implanted leads terminates atrial fibrillation with lower energy than a single biphasic shock.

    PubMed

    Janardhan, Ajit H; Gutbrod, Sarah R; Li, Wenwen; Lang, Di; Schuessler, Richard B; Efimov, Igor R

    The goal of this study was to develop a low-energy, implantable device-based multistage electrotherapy (MSE) to terminate atrial fibrillation (AF). Previous attempts to perform cardioversion of AF by using an implantable device were limited by the pain caused by use of a high-energy single biphasic shock (BPS). Transvenous leads were implanted into the right atrium (RA), coronary sinus, and left pulmonary artery of 14 dogs. Self-sustaining AF was induced by 6 ± 2 weeks of high-rate RA pacing. Atrial defibrillation thresholds of standard versus experimental electrotherapies were measured in vivo and studied by using optical imaging in vitro. The mean AF cycle length (CL) in vivo was 112 ± 21 ms (534 beats/min). The impedances of the RA-left pulmonary artery and RA-coronary sinus shock vectors were similar (121 ± 11 Ω vs. 126 ± 9 Ω; p = 0.27). BPS required 1.48 ± 0.91 J (165 ± 34 V) to terminate AF. In contrast, MSE terminated AF with significantly less energy (0.16 ± 0.16 J; p < 0.001) and significantly lower peak voltage (31.1 ± 19.3 V; p < 0.001). In vitro optical imaging studies found that AF was maintained by localized foci originating from pulmonary vein-left atrium interfaces. MSE Stage 1 shocks temporarily disrupted localized foci; MSE Stage 2 entrainment shocks continued to silence the localized foci driving AF; and MSE Stage 3 pacing stimuli enabled consistent RA-left atrium activation until sinus rhythm was restored. Low-energy MSE significantly reduced the atrial defibrillation thresholds compared with BPS in a canine model of AF. MSE may enable painless, device-based AF therapy. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  6. Multistage Electrotherapy Delivered Through Chronically-Implanted Leads Terminates Atrial Fibrillation With Lower Energy Than a Single Biphasic Shock

    PubMed Central

    Janardhan, Ajit H.; Gutbrod, Sarah R.; Li, Wenwen; Lang, Di; Schuessler, Richard B.; Efimov, Igor R.

    2014-01-01

    Objectives The goal of this study was to develop a low-energy, implantable device–based multistage electrotherapy (MSE) to terminate atrial fibrillation (AF). Background Previous attempts to perform cardioversion of AF by using an implantable device were limited by the pain caused by use of a high-energy single biphasic shock (BPS). Methods Transvenous leads were implanted into the right atrium (RA), coronary sinus, and left pulmonary artery of 14 dogs. Self-sustaining AF was induced by 6 ± 2 weeks of high-rate RA pacing. Atrial defibrillation thresholds of standard versus experimental electrotherapies were measured in vivo and studied by using optical imaging in vitro. Results The mean AF cycle length (CL) in vivo was 112 ± 21 ms (534 beats/min). The impedances of the RA–left pulmonary artery and RA–coronary sinus shock vectors were similar (121 ± 11 Ω vs. 126 ± 9 Ω; p = 0.27). BPS required 1.48 ± 0.91 J (165 ± 34 V) to terminate AF. In contrast, MSE terminated AF with significantly less energy (0.16 ± 0.16 J; p < 0.001) and significantly lower peak voltage (31.1 ± 19.3 V; p < 0.001). In vitro optical imaging studies found that AF was maintained by localized foci originating from pulmonary vein–left atrium interfaces. MSE Stage 1 shocks temporarily disrupted localized foci; MSE Stage 2 entrainment shocks continued to silence the localized foci driving AF; and MSE Stage 3 pacing stimuli enabled consistent RA–left atrium activation until sinus rhythm was restored. Conclusions Low-energy MSE significantly reduced the atrial defibrillation thresholds compared with BPS in a canine model of AF. MSE may enable painless, device-based AF therapy. PMID:24076284

  7. Methylseleninic acid super-activates p53-senescence cancer progression barrier in prostate lesions of Pten-knockout mouse

    PubMed Central

    Wang, Lei; Guo, Xiaolan; Wang, Ji; Jiang, Cheng; Bosland, Maarten C.; Lü, Junxuan; Deng, Yibin

    2015-01-01

    Monomethylated selenium (MM-Se) forms that are precursors of methylselenol such as methylseleninic acid (MSeA) differ in metabolism and anti-cancer activities in preclinical cell and animal models from seleno-methionine that had failed to exert preventive efficacy against prostate cancer (PCa) in North American men. Given that human PCa arises from precancerous lesions such as high-grade prostatic intraepithelial neoplasia (HG-PIN) which frequently have lost PTEN tumor suppressor permitting AKT oncogenic signaling, we tested the efficacy of MSeA to inhibit HG-PIN progression in Pten prostate specific knockout (KO) mice and assessed the mechanistic involvement of p53-mediated cellular senescence and of the androgen receptor (AR). We observed that short-term (4 weeks) oral MSeA treatment significantly increased expression of P53 and P21Cip1 proteins and senescence-associated-β-galactosidase staining, and reduced Ki-67 cell proliferation index in Pten KO prostate epithelium. Long-term (25 weeks) MSeA administration significantly suppressed HG-PIN phenotype, tumor weight, and prevented emergence of invasive carcinoma in Pten KO mice. Mechanistically, the long-term MSeA treatment not only sustained P53-mediated senescence, but also markedly reduced AKT phosphorylation and AR abundance in the Pten KO prostate. Importantly, these cellular and molecular changes were not observed in the prostate of wild type littermates which were similarly treated with MSeA. Since p53 signaling is likely to be intact in HG-PIN compared to advanced PCa, the selective super-activation of p53-mediated senescence by MSeA suggests a new paradigm of cancer chemoprevention by strengthening a cancer progression barrier through induction of irreversible senescence with additional suppression of AR and AKT oncogenic signaling. PMID:26511486

  8. Influence of Motion Picture Rating on Adolescent Response to Movie Smoking

    PubMed Central

    Tanski, Susanne; Stoolmiller, Mike

    2012-01-01

    OBJECTIVE: To examine the association between movie smoking exposure (MSE) and adolescent smoking according to rating category. METHODS: A total of 6522 US adolescents were enrolled in a longitudinal survey conducted at 8-month intervals; 5503 subjects were followed up at 8 months, 5019 subjects at 16 months, and 4575 subjects at 24 months. MSE was estimated from 532 recent box-office hits, blocked into 3 Motion Picture Association of America rating categories: G/PG, PG-13, and R. A survival model evaluated time to smoking onset. RESULTS: Median MSE in PG-13–rated movies was ∼3 times higher than median MSE from R-rated movies, but their relation with smoking was essentially the same, with adjusted hazard ratios of 1.49 (95% confidence interval [CI]: 1.23–1.81) and 1.33 (95% CI: 1.23–1.81) for each additional 500 occurrences of MSE respectively. MSE from G/PG-rated movies was small and had no significant relationship with adolescent smoking. Attributable risk estimates showed that adolescent smoking would be reduced by 18% (95% CI: 14–21) if smoking in PG-13–rated movies was reduced to the fifth percentile. In comparison, making all parents maximally authoritative in their parenting would reduce adolescent smoking by 16% (95% CI: 12–19). CONCLUSIONS: The equivalent effect of PG-13-rated and R-rated MSE suggests it is the movie smoking that prompts adolescents to smoke, not other characteristics of R-rated movies or adolescents drawn to them. An R rating for movie smoking could substantially reduce adolescent smoking by eliminating smoking from PG-13 movies. PMID:22778305

  9. Comparison of Moist Static Energy and Budget between the GCM-Simulated Madden–Julian Oscillation and Observations over the Indian Ocean and Western Pacific

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Xiaoqing; Deng, Liping

    The moist static energy (MSE) anomalies and MSE budget associated with the Madden–Julian oscillation (MJO) simulated in the Iowa State University General Circulation Model (ISUGCM) over the Indian and Pacific Oceans are compared with observations. Different phase relationships between MJO 850-hPa zonal wind, precipitation, and surface latent heat flux are simulated over the Indian Ocean and western Pacific, which are greatly influenced by the convection closure, trigger conditions, and convective momentum transport (CMT). The moist static energy builds up from the lower troposphere 15–20 days before the peak of MJO precipitation, and reaches the maximum in the middle troposphere (500–600 hPa) near the peak of MJO precipitation. The gradual lower-tropospheric heating and moistening and the upward transport of moist static energy are important aspects of MJO events, which are documented in observational studies but poorly simulated in most GCMs. The trigger conditions for deep convection, obtained from the year-long cloud resolving model (CRM) simulations, contribute to the striking difference between ISUGCM simulations with the original and modified convection schemes and play the major role in the improved MJO simulation in ISUGCM. Additionally, the budget analysis with the ISUGCM simulations shows the increase in MJO MSE is in phase with the horizontal advection of MSE over the western Pacific, while out of phase with the horizontal advection of MSE over the Indian Ocean. However, the NCEP analysis shows that the tendency of MJO MSE is in phase with the horizontal advection of MSE over both oceans.

  10. Moringa oleifera Seed Extract Alleviates Scopolamine-Induced Learning and Memory Impairment in Mice

    PubMed Central

    Zhou, Juan; Yang, Wu-shuang; Suo, Da-qin; Li, Ying; Peng, Lu; Xu, Lan-xi; Zeng, Kai-yue; Ren, Tong; Wang, Ying; Zhou, Yu; Zhao, Yun; Yang, Li-chao; Jin, Xin

    2018-01-01

    The extract of Moringa oleifera seeds has been shown to possess various pharmacological properties. In the present study, we assessed the neuropharmacological effects of 70% ethanolic M. oleifera seed extract (MSE) on cognitive impairment caused by scopolamine injection in mice using the passive avoidance and Morris water maze (MWM) tests. MSE (250 or 500 mg/kg) was administered to mice by oral gavage for 7 or 14 days, and cognitive impairment was induced by intraperitoneal injection of scopolamine (4 mg/kg) for 1 or 6 days. Mice that received scopolamine alone showed impaired learning and memory retention and considerably decreased cholinergic system reactivity and neurogenesis in the hippocampus. MSE pretreatment significantly ameliorated scopolamine-induced cognitive impairment and enhanced cholinergic system reactivity and neurogenesis in the hippocampus. Additionally, the protein expressions of phosphorylated Akt, ERK1/2, and CREB in the hippocampus were significantly decreased by scopolamine, but these decreases were reversed by MSE treatment. These results suggest that MSE-induced ameliorative cognitive effects are mediated by enhancement of the cholinergic neurotransmission system and neurogenesis via activation of the Akt, ERK1/2, and CREB signaling pathways. These findings suggest that MSE could be a potent neuropharmacological drug against amnesia, and its mechanism might be modulation of cholinergic activity via the Akt, ERK1/2, and CREB signaling pathways. PMID:29740317

  11. Error Analyses of the North Alabama Lightning Mapping Array (LMA)

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.

    2003-01-01

    Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA-MSFC and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.

  12. Automatization of an inverse surface temperature modelling procedure for Greenland ice cores, developed and evaluated using nitrogen and argon isotope data measured on the Gisp2 ice core

    NASA Astrophysics Data System (ADS)

    Döring, Michael; Kobashi, Takuro; Leuenberger, Markus

    2017-04-01

    In order to study Northern Hemisphere climate interactions and variability during the Holocene, access to high resolution surface temperature records of the Greenland ice sheet is an essential condition. Surface temperature reconstruction relies on firn densification combined with gas and heat diffusion [Severinghaus et al. (1998)]. In this study we use the model developed by Schwander et al. (1997). A theoretical δ15N record is generated for different temperature scenarios and compared with measurements by minimizing the mean squared error (MSE). The goal of the presented study is an automatization of this inverse modelling procedure. To solve the inverse problem, the Holocene temperature reconstruction is implemented in three steps. First, a rough first-guess temperature input (the prior) is constructed, which serves as the starting point for the optimization. Second, a smooth solution which transects the δ15N measurement data is generated following a Monte Carlo approach. It is assumed that the smooth solution contains all long-term temperature trends and (together with the accumulation rate input) drives changes in firn column height, which generate the gravitational background signal in δ15N. Finally, the smooth solution is superimposed with high frequency information directly extracted from the δ15N measurement data. Following this approach, a high resolution Holocene temperature history for the Gisp2 site was extracted (the posterior), which leads to modelled δ15N data that fit the measurements at the low permeg level (MSE) and show excellent agreement in timing and strength of the measurement variability. To evaluate the reconstruction procedure, different synthetic data experiments were conducted, underlining the quality of the method. Additionally, a second firn model [Goujon et al. (2003)] was used, which leads to very similar results and shows the robustness of the presented approach. References: Goujon, C., Barnola, J.-M., Ritz, C. (2003). Modeling the densification of polar firn including heat diffusion: Application to close-off characteristics and gas isotopic fractionation for Antarctica and Greenland sites. J. Geophys. Res., 108, NO. D24, 4792. Severinghaus, J. P., Sowers, T., Brook, E. J., Alley, R. B., and Bender, M. L. (1998). Timing of abrupt climate change at the end of the Younger Dryas interval from thermally fractionated gases in polar ice. Nature, 391:141-146. Schwander, J., Sowers, T., Barnola, J., Blunier, T., Fuchs, A., and Malaizé, B. (1997). Age scale of the air in the summit ice: implication for glacial-interglacial temperature change. J. Geophys. Res-Atmos., 102(D16):19483-19493.

  13. Traffic accident reconstruction and an approach for prediction of fault rates using artificial neural networks: A case study in Turkey.

    PubMed

    Can Yilmaz, Ali; Aci, Cigdem; Aydin, Kadir

    2016-08-17

    Currently, in Turkey, fault rates in traffic accidents are determined according to the initiative of accident experts (with no speed analyses of vehicles, just consideration of the accident type), and there are no specific quantitative instructions on fault rates related to the processing of accidents, which just represents the type of collision (side impact, head to head, rear end, etc.), in the No. 2918 Turkish Highway Traffic Act (THTA 1983). The aim of this study is to introduce a scientific and systematic approach for the determination of fault rates in the most frequent property damage-only (PDO) traffic accidents in Turkey. In this study, data (police reports, skid marks, deformation, crush depth, etc.) collected from the most frequent and controversial accident types (4 sample vehicle-vehicle scenarios) involving PDO were inserted into reconstruction software called vCrash. Sample real-world scenarios were simulated in the software to generate different vehicle deformations that also correspond to energy-equivalent speed data just before the crash. These values were used to train a multilayer feedforward artificial neural network (MFANN), a function fitting neural network (FITNET, a specialized version of MFANN), and a generalized regression neural network (GRNN) model within 10-fold cross-validation to predict fault rates without using the software. The performance of the artificial neural network (ANN) prediction models was evaluated using the mean square error (MSE) and the multiple correlation coefficient (R). It was shown that the MFANN model performed better for predicting fault rates (i.e., lower MSE and higher R) than the FITNET and GRNN models for accident scenarios 1, 2, and 3, whereas FITNET performed best for scenario 4. The FITNET model showed the second-best prediction results for the first 3 scenarios. Because there is no training phase in GRNN, the GRNN model produced results much faster than the MFANN and FITNET models. However, the GRNN model had the worst prediction results. The R values for prediction of fault rates were close to 1 for all folds and scenarios. This study focuses on exhibiting new aspects and scientific approaches for determining fault rates of involvement in the most frequent PDO accidents occurring in Turkey, by discussing some deficiencies in the THTA and without regard to the initiative and/or experience of experts. The study supports judicious decisions, especially in forensic investigations and events involving insurance companies. Building on this approach, injury/fatal and/or pedestrian-related accidents may be analyzed as future work by developing new scientific models.
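
    A hedged sketch of the evaluation protocol (not the authors' code): a feedforward network scored by MSE and correlation under 10-fold cross-validation, with random placeholders for the vCrash-derived features and fault rates:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
X = rng.random((120, 6))                     # e.g. crush depth, EES, angles, ...
y = 0.4 * X[:, 0] + 0.3 * X[:, 1] ** 2 + rng.normal(0, 0.02, 120)  # fault rates

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
pred = cross_val_predict(model, X, y, cv=KFold(10, shuffle=True, random_state=0))
mse = np.mean((y - pred) ** 2)
r = np.corrcoef(y, pred)[0, 1]               # correlation of predicted vs. actual
print(f"MSE = {mse:.4f}, R = {r:.3f}")
```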

  14. Synthesis and optimization of four bar mechanism with six design parameters

    NASA Astrophysics Data System (ADS)

    Jaiswal, Ankur; Jawale, H. P.

    2018-04-01

    Function generation is the synthesis of a mechanism for a specific task; it becomes complex, especially for synthesis above five precision points of the coupler, and thus pertains to large structural error. The methodology for arriving at a better-precision solution is to use an optimization technique. The work presented herein considers methods for optimization of the structural error in a closed kinematic chain with a single degree of freedom, for generating functions like log(x), e^x, tan(x) and sin(x) with five precision points. The equation in the Freudenstein-Chebyshev method is used to develop the five-point synthesis of the mechanism. An extended formulation is proposed and results are obtained to verify existing results in the literature. Optimization of the structural error is carried out using a least squares approach. A comparative structural error analysis is presented on the error optimized through the least square method and the extended Freudenstein-Chebyshev method.
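
    A minimal sketch of five-point function generation via the Freudenstein equation solved in the least-squares sense; the precision-point angles below are invented for illustration, and the link-ratio convention in the comments is one common choice, not necessarily the paper's:

```python
import numpy as np

# Freudenstein equation: K1*cos(phi) - K2*cos(psi) + K3 = cos(phi - psi),
# with phi the input crank angle and psi the output rocker angle.
phi = np.radians([30.0, 45.0, 60.0, 75.0, 90.0])   # input angles (illustrative)
psi = np.radians([50.0, 62.0, 71.0, 79.0, 85.0])   # desired output angles

A = np.column_stack([np.cos(phi), -np.cos(psi), np.ones_like(phi)])
b = np.cos(phi - psi)
K, *_ = np.linalg.lstsq(A, b, rcond=None)          # three unknowns, five points

residual = A @ K - b          # structural error at the precision points
print("K1, K2, K3 =", K, " max |structural error| =", np.abs(residual).max())
# In one common convention, K1 = d/a, K2 = d/c, K3 = (a^2 - b^2 + c^2 + d^2)/(2ac),
# from which the link lengths a, b, c follow for a unit frame link d = 1.
```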

  15. MSE commissioning and other major diagnostic updates on KSTAR

    NASA Astrophysics Data System (ADS)

    Ko, Jinseok; Kstar Team

    2015-11-01

    The motional Stark effect (MSE) diagnostic based on the photoelastic-modulator (PEM) approach has been commissioned for the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The 25-channel MSE system, with polarization-preserving front optics and precisely tilt-tuned narrow bandpass filters, provides a spatial resolution of less than 1 cm over most of the plasma cross section and a time resolution of about 10 milliseconds. The polarization response curves, with a daily Faraday rotation correction, provide reliable pitch angle profiles for KSTAR discharges with the MSE-optimized energy combination in the three-ion-source neutral beam injection. Some major diagnostic advances, such as the poloidal charge exchange spectroscopy, the improved Thomson-scattering system, and the divertor infrared TV, are reported as well. Work supported by the Ministry of Science, ICT and Future Planning, Korea.

  16. Theoretical and experimental studies of error in square-law detector circuits

    NASA Technical Reports Server (NTRS)

    Stanley, W. D.; Hearn, C. P.; Williams, J. B.

    1984-01-01

    Square-law detector circuits were investigated to determine errors relative to the ideal input/output characteristic function. The nonlinear circuit response is analyzed by a power series expansion containing terms through the fourth degree, from which the significant deviation from square law can be predicted. Both fixed bias current and flexible bias current configurations are considered. The latter case corresponds to the situation where the mean current can change with the application of a signal. Experimental investigations of the circuit arrangements are described. Agreement between the analytical models and the experimental results is established. Factors which contribute to differences under certain conditions are outlined.
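
    A small numerical sketch of the idea: fit a fourth-degree power series to a detector transfer curve and read off the deviation from pure square law. The toy response and all constants are invented for illustration, not taken from the paper:

```python
import numpy as np

v = np.linspace(0.0, 1.0, 200)                   # input signal amplitude
i_out = 0.5 * v**2 + 0.03 * v**3 - 0.01 * v**4   # toy non-ideal detector response

coeffs = np.polynomial.polynomial.polyfit(v, i_out, 4)   # c0..c4, lowest first
ideal = coeffs[2] * v**2                          # the square-law component
deviation = i_out - ideal                         # cubic/quartic departure
print("power-series coefficients:", coeffs.round(4))
print("max deviation from square law:", np.abs(deviation).max())
```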

  17. Electroencephalography epilepsy classifications using hybrid cuckoo search and neural network

    NASA Astrophysics Data System (ADS)

    Pratiwi, A. B.; Damayanti, A.; Miswanto

    2017-07-01

    Epilepsy is a condition that affects the brain and causes repeated seizures. These seizures are episodes that can vary from nearly undetectable to long periods of vigorous shaking or brain contractions. Epilepsy can often be confirmed with electroencephalography (EEG). Neural networks have been used in biomedical signal analysis and have successfully classified biomedical signals such as EEG signals. In this paper, a hybrid of cuckoo search and a neural network is used to recognize EEG signals for epilepsy classification. The weights of the multilayer perceptron are optimized by the cuckoo search algorithm based on the network error. The aim of this method is to make the network reach a local or global optimum faster, so that the classification process becomes more accurate. Compared with the traditional multilayer perceptron, the hybrid of cuckoo search and multilayer perceptron provides better performance in terms of error convergence and accuracy. The proposed method gives an MSE of 0.001 and an accuracy of 90.0%.
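
    A toy sketch of the hybrid idea, assuming a standard cuckoo search (Lévy-flight proposals plus abandonment of the worst nests) over the weight vector of a small one-hidden-layer perceptron scored by MSE; the XOR data and every hyperparameter here are illustrative stand-ins for the paper's EEG features:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])
H = 4                                     # hidden units
DIM = 2 * H + H + H + 1                   # W1, b1, W2, b2 flattened

def forward(w, X):
    W1 = w[:2 * H].reshape(2, H); b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H];          b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def mse(w):
    return np.mean((forward(w, X) - y) ** 2)

def levy_step(size, beta=1.5):            # Mantegna's algorithm for Levy flights
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

nests = rng.normal(0, 1, (15, DIM))
fit = np.array([mse(n) for n in nests])
for _ in range(3000):
    trial = nests[rng.integers(15)] + 0.05 * levy_step(DIM)  # Levy-flight proposal
    j = rng.integers(15)
    if mse(trial) < fit[j]:                # replace a random nest if better
        nests[j], fit[j] = trial, mse(trial)
    worst = np.argsort(fit)[-3:]           # abandon the worst nests
    nests[worst] = rng.normal(0, 1, (3, DIM))
    fit[worst] = [mse(n) for n in nests[worst]]

best = nests[np.argmin(fit)]
print("final MSE:", fit.min().round(4), "outputs:", forward(best, X).round(2))
```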

  18. Calibration and compensation method of three-axis geomagnetic sensor based on pre-processing total least square iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Zhang, X.; Xiao, W.

    2018-04-01

    As the geomagnetic sensor is susceptible to interference, a pre-processing total least square iteration method is proposed for calibration compensation. First, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and converted into nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. A sifting algorithm is used to filter the initial value of the iteration to ensure that the initial error is as small as possible. The experimental results show that this method needs no additional equipment or devices, can continuously update the calibration parameters, and compensates the geomagnetic sensor error better than the two-step estimation method.

  19. Peelle's pertinent puzzle using the Monte Carlo technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawano, Toshihiko; Talou, Patrick; Burr, Thomas

    2009-01-01

    We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form in order to assess the impact of the distribution, and we obtain the least-squares solution directly from numerical simulations. We found that the standard least squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct, if the common error is additive and if the error is proportional to the measured values. The least squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
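
    A worked check of the 0.88 figure, assuming the canonical PPP numbers (two measurements, 1.5 and 1.0, each with a 10% statistical error plus a fully correlated 20% common error taken proportional to the measured values) and the standard generalized least squares average:

```python
import numpy as np

m = np.array([1.5, 1.0])
stat = 0.10 * m                       # independent statistical errors
common = 0.20 * m                     # fully correlated common component
C = np.diag(stat**2) + np.outer(common, common)   # full covariance matrix

ones = np.ones(2)
Cinv = np.linalg.inv(C)
var = 1.0 / (ones @ Cinv @ ones)      # GLS variance of the combined value
xhat = var * (ones @ Cinv @ m)        # GLS (weighted least squares) average
print(f"GLS estimate: {xhat:.2f} +/- {np.sqrt(var):.2f}")   # ~0.88 +/- 0.22
```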

  20. Healing effects of Musa sapientum var. paradisiaca in diabetic rats with co-occurring gastric ulcer: cytokines and growth factor by PCR amplification

    PubMed Central

    2013-01-01

    Background The present study evaluates the effects of an extract of Musa sapientum fruit (MSE) on ulcer index, blood glucose level, and the gastric mucosal cytokines TNF-α and IL-1β and growth factor TGF-α (affected in diabetes and chronic ulcer) in acetic acid (AA)-induced gastric ulcer (GU) in diabetic (DR) rats. Methods MSE (100 mg/kg, oral), omeprazole (OMZ, 2.0 mg/kg, oral), insulin (INS, 4 U/kg, sc) or pentoxyphylline (PTX, 10 mg/kg, oral) were given once daily for 10 days in 14-day post-streptozotocin (60 mg/kg, intraperitoneal)-induced diabetic rats, while the normal/diabetic rats received CMC for the same period after induction of GU with AA. Ulcer index was calculated based upon the product of length and width (mm2/rat) of ulcers, while TNF-α, IL-1β and TGF-α were estimated in the gastric mucosal homogenate from the intact/ulcer region. Phytochemical screening and HPTLC analysis of MSE were done following standard procedures. Results An increase in ulcer index, TNF-α and IL-1β was observed in the normal (NR)-AA rat compared to the NR-normal saline rat; these were further increased in the DR-AA rat, while treatments of the DR-AA rat with MSE, OMZ, INS and PTX reversed them, more so with MSE and PTX. A significant increase in TGF-α was found in the NR-AA rat, which did not increase further in the DR-AA rat. MSE and PTX tended to increase TGF-α, while OMZ and INS showed little or no effect on it in the AA-DR rat. Phytochemical screening of MSE showed the presence of saponins, flavonoids, glycosides, steroids and alkaloids, and HPTLC analysis indicated the presence of eight active compounds. Conclusion MSE showed antidiabetic and better ulcer healing effects compared with OMZ (antiulcer) or INS (antidiabetic) in diabetic rats and could be more effective in diabetes with concurrent gastric ulcer. PMID:24192345

  1. The Pathways from a Behavior Change Communication Intervention to Infant and Young Child Feeding in Bangladesh Are Mediated and Potentiated by Maternal Self-Efficacy.

    PubMed

    Zongrone, Amanda A; Menon, Purnima; Pelto, Gretel H; Habicht, Jean-Pierre; Rasmussen, Kathleen M; Constas, Mark A; Vermeylen, Francoise; Khaled, Adiba; Saha, Kuntal K; Stoltzfus, Rebecca J

    2018-02-01

    Although self-efficacy is a potential determinant of feeding and care behaviors, there is limited empirical analysis of the role of maternal self-efficacy in low- and middle-income countries. In the context of behavior change interventions (BCIs) addressing complementary feeding (CF), it is possible that maternal self-efficacy can mediate or enhance intervention impacts. In the context of a BCI in Bangladesh, we studied the role of maternal self-efficacy for CF (MSE-CF) for 2 CF behaviors with the use of a theoretically grounded empirical model of determinants to illustrate the potential roles of MSE-CF. We developed and tested a locally relevant scale for MSE-CF and included it in a survey (n = 457 mothers of children aged 6-24 mo) conducted as part of a cluster-randomized evaluation. Qualitative research was used to inform the selection of 2 intervention-targeted behaviors: feeding green leafy vegetables in the last 24 h (GLV) and on-time introduction of egg (EGG) between 6 and 8 mo of age. We then examined direct, mediated, and potentiated paths of MSE-CF in relation to the impacts of the BCI on these behaviors with the use of regression and structural equation modeling. GLV and EGG were higher in the intensive group than in the nonintensive control group (16.0 percentage points for GLV; P < 0.001; 11.2 percentage points for EGG; P = 0.037). For GLV, MSE-CF mediated (β = 0.345, P = 0.010) and potentiated (β = 0.390, P = 0.038) the effect of the intensive group. In contrast, MSE-CF did not mediate or potentiate the effect of the intervention on EGG. MSE-CF was a significant mediator and potentiator for GLV but not for EGG. The divergent findings highlight the complex determinants of individual specific infant and young child feeding behaviors. The study shows the value of measuring behavioral determinants, such as MSE-CF, that affect a caregiver's capability to adopt intervention-targeted behaviors.

  2. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy

    PubMed Central

    Cohen, E. A. K.; Ober, R. J.

    2014-01-01

    We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise; a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs, this is an errors-in-variables problem and linear least squares is inappropriate; the correct method is generalized least squares. To allow for point-dependent errors, the equivalence of a generalized maximum likelihood and a heteroscedastic generalized least squares model is established, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE), believed to be useful especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions of the TRE and LRE are themselves Gaussian, and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show the asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573
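
    A small sketch of the special case discussed above: heteroscedastic generalized least squares for a 2-D affine registration where each control point's covariance is a scalar multiple of the identity. All data are synthetic and the noise model is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
src = rng.uniform(0, 100, (n, 2))                  # CPs localized in image 1
A_true = np.array([[1.02, 0.05], [-0.04, 0.98]])   # affine part of the transform
t_true = np.array([5.0, -3.0])                     # translation
s2 = rng.uniform(0.5, 2.0, n)                      # per-point noise variances
noise = rng.normal(0, 1, (n, 2)) * np.sqrt(s2)[:, None]
dst = src @ A_true.T + t_true + noise              # CPs localized in image 2

# Stack the x and y equations; weight each control point by 1/variance
X = np.column_stack([src, np.ones(n)])             # regressors [x, y, 1]
W = 1.0 / s2
WX = X * W[:, None]
theta = np.linalg.solve(X.T @ WX, WX.T @ dst)      # GLS normal equations, 3x2
A_hat, t_hat = theta[:2].T, theta[2]
print("estimated affine:\n", A_hat, "\nestimated translation:", t_hat)
```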

  3. Usage-Centered Design Approach in Design of Malaysia Sexuality Education (MSE) Courseware

    NASA Astrophysics Data System (ADS)

    Chan, S. L.; Jaafar, A.

    Problems amongst juveniles increase every year, especially rape cases involving minors. Therefore, the government of Malaysia introduced the National Sexuality Education Guideline in 2005. An early study of the perceptions of teachers and students toward the sexuality education curriculum currently taught in secondary schools was carried out in 2008. The study showed that there are large gaps between the perceptions of teachers and students on several issues in Malaysian sexuality education today. The Malaysia Sexuality Education (MSE) courseware was designed based on several learning-theory approaches. MSE was then developed through a comprehensive methodology in which the ADDIE model was integrated with Usage-Centered Design to achieve a highly usable courseware. In conclusion, the development of MSE will hopefully be a solution to the current problems in Malaysian sexuality education.

  4. MSE observatory: a revised and optimized astronomical facility

    NASA Astrophysics Data System (ADS)

    Bauman, Steven E.; Angers, Mathieu; Benedict, Tom; Crampton, David; Flagey, Nicolas; Gedig, Mike; Green, Greg; Liu, Andy; Lo, David; Loewen, Nathan; McConnachie, Alan; Murowinski, Rick; Racine, René; Salmon, Derrick; Stiemer, Siegfried; Szeto, Kei; Wu, Di

    2016-07-01

    The Canada-France-Hawaii-Telescope Corporation (CFHT) plans to repurpose its observatory on the summit of Maunakea and operate a (60 segment) 11.25m aperture wide field spectroscopic survey telescope, the Maunakea Spectroscopic Explorer (MSE). The prime focus telescope will be equipped with dedicated instrumentation to take advantage of one of the best sites in the northern hemisphere and offer its users the ability to perform large surveys. Central themes of the development plan are reusing and upgrading wherever possible. MSE will reuse the CFHT site and build upon the existing observatory infrastructure, using the same building and telescope pier as CFHT, while minimizing environmental impact on the summit. MSE will require structural support upgrades to the building to meet the latest building seismic code requirements and accommodate a new larger telescope and upgraded enclosure. It will be necessary to replace the current dome since a larger slit opening is needed for a larger telescope. MSE will use a thermal management system to remove heat generated by loads from the building, flush excess heat from lower levels, and maintain the observing environment temperature. This paper describes the design approach for redeveloping the CFHT facility for MSE. Once the project is completed the new facility will be almost indistinguishable on the outside from the current CFHT observatory. Past experience and lessons learned from CFHT staff and the astronomical community will be used to create a modern, optimized, and transformative scientific data collecting machine.

  5. Magnetic field amplitude and pitch angle measurements using Spectral MSE on EAST

    NASA Astrophysics Data System (ADS)

    Liao, Ken; Rowan, William; Fu, Jia; Li, Ying-Ying; Lyu, Bo; Marchuk, Oleksandr; Ralchenko, Yuri

    2017-10-01

    We have developed the Spectral Motional Stark Effect technique for measuring magnetic field amplitude and pitch angle on EAST. The experiments were conducted using the tangential co-injection heating beam at A port and Beam Emission Spectroscopy array at D port. A spatial calibration of the observation channels was conducted before the campaign. As a first check, the measured magnetic field amplitude was compared to prediction. Since the toroidal field is dominant, we recovered the expected 1/R shape over the spatial range 1.75

  6. Hierarchical kernel mixture models for the prediction of AIDS disease progression using HIV structural gp120 profiles

    PubMed Central

    2010-01-01

    Changes to the glycosylation profile on HIV gp120 can influence viral pathogenesis and alter AIDS disease progression. The characterization of glycosylation differences at the sequence level is inadequate as the placement of carbohydrates is structurally complex. However, no structural framework is available to date for the study of HIV disease progression. In this study, we propose a novel machine-learning based framework for the prediction of AIDS disease progression in three stages (RP, SP, and LTNP) using the HIV structural gp120 profile. This new intelligent framework proves to be accurate and provides an important benchmark for predicting AIDS disease progression computationally. The model is trained using a novel HIV gp120 glycosylation structural profile to detect possible stages of AIDS disease progression for the target sequences of HIV+ individuals. The performance of the proposed model was compared to seven existing different machine-learning models on newly proposed gp120-Benchmark_1 dataset in terms of error-rate (MSE), accuracy (CCI), stability (STD), and complexity (TBM). The novel framework showed better predictive performance with 67.82% CCI, 30.21 MSE, 0.8 STD, and 2.62 TBM on the three stages of AIDS disease progression of 50 HIV+ individuals. This framework is an invaluable bioinformatics tool that will be useful to the clinical assessment of viral pathogenesis. PMID:21143806

  7. A Canonical Ensemble Correlation Prediction Model for Seasonal Precipitation Anomaly

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Guilong

    2001-01-01

    This report describes an optimal ensemble forecasting model for seasonal precipitation and its error estimation. Each individual forecast is based on canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. This new CCA model includes the following features: (1) the use of an area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States precipitation field. The predictor is the sea surface temperature.
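
    A minimal sketch consistent with the stated dependence of the weights on each member's mean square error; assuming unbiased, independent members, inverse-MSE weighting (normalized to sum to one) is the usual closed form:

```python
import numpy as np

def ensemble_forecast(forecasts, mses):
    """forecasts: (n_members, ...) array; mses: per-member MSE estimates."""
    w = 1.0 / np.asarray(mses)          # weight each member by 1/MSE
    w /= w.sum()                        # normalize weights to sum to one
    return np.tensordot(w, forecasts, axes=1)

# Toy example: three members forecasting two grid points
members = np.array([[1.2, 0.4], [0.9, 0.6], [1.1, 0.3]])
print(ensemble_forecast(members, mses=[0.5, 1.0, 0.8]))
```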

  8. Optimum nonparametric estimation of population density based on ordered distances

    USGS Publications Warehouse

    Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.

    1982-01-01

    The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and its specific form is determined which gives minimum mean square error under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.

  9. Ambiguity resolution for satellite Doppler positioning systems

    NASA Technical Reports Server (NTRS)

    Argentiero, P. D.; Marini, J. W.

    1977-01-01

    A test for ambiguity resolution was derived which was the most powerful in the sense that it maximized the probability of a correct decision. When systematic error sources were properly included in the least squares reduction process to yield an optimal solution, the test reduced to choosing the solution which provided the smaller valuation of the least squares loss function. When systematic error sources were ignored in the least squares reduction, the most powerful test was a quadratic form comparison with the weighting matrix of the quadratic form obtained by computing the pseudo-inverse of a reduced rank square matrix. A formula is presented for computing the power of the most powerful test. A numerical example is included in which the power of the test is computed for a situation which may occur during an actual satellite aided search and rescue mission.

  10. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

    NASA Astrophysics Data System (ADS)

    Yu, Lean; Wang, Shouyang; Lai, K. K.

    Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling and that more weight should be given to classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to those of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.
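
    A compact sketch of the C-variable idea on top of a Suykens-style LSSVC dual system, where each sample's regularization weight depends on the importance of its class; the kernel choice, weights, and toy data are all assumptions for illustration:

```python
import numpy as np

def rbf(X1, X2, s=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

def train_lssvc(X, y, gamma_per_sample):
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf(X, X)
    # KKT system: [[0, y^T], [y, Omega + diag(1/gamma_i)]] [b; alpha] = [0; 1]
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = y; M[1:, 0] = y
    M[1:, 1:] = Omega + np.diag(1.0 / gamma_per_sample)
    sol = np.linalg.solve(M, np.concatenate([[0.0], np.ones(n)]))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: np.sign(rbf(Xq, X) @ (alpha * y) + b)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(2.5, 1, (40, 2))])
y = np.r_[np.ones(40), -np.ones(40)]
gamma = np.where(y == 1, 10.0, 1.0)     # heavier weight on the "important" class
predict = train_lssvc(X, y, gamma)
print("training accuracy:", (predict(X) == y).mean())
```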

  11. Error analysis on squareness of multi-sensor integrated CMM for the multistep registration method

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Wang, Yiwen; Ye, Xiuling; Wang, Zhong; Fu, Luhua

    2018-01-01

    The multistep registration (MSR) method in [1] is used to register two different classes of sensors deployed on the z-arm of a CMM (coordinate measuring machine): a video camera and a tactile probe sensor. In general, it is difficult to obtain a very precise registration result with a single common standard; instead, this method measures two different standards, fixed on a steel plate, with a constant distance between them. Although many factors have been considered, such as the measuring ability of the sensors, the uncertainty of the machine and the number of data pairs, there is no exact analysis of the squareness between the x-axis and the y-axis in the xy plane. For this reason, an error analysis of the squareness of the multi-sensor integrated CMM for the multistep registration method is carried out to examine the validity of the MSR method. Synthetic experiments on the xy-plane squareness for the simplified MSR with an inclination rotation are simulated, which lead to a regular result. Experiments have been carried out with the multi-standard device also designed in [1]; meanwhile, inspections on the xy plane with the help of a laser interferometer have been carried out. The final results conform to the simulations, and the squareness errors of the MSR method are also similar to the results of the interferometer. In other words, the MSR method can also be adopted to verify the squareness of a CMM.

  12. Evaluating refraction and visual acuity with the Nidek autorefractometer AR-360A in a randomized population-based screening study.

    PubMed

    Stoor, Katri; Karvonen, Elina; Liinamaa, Johanna; Saarela, Ville

    2017-11-30

    The evaluation of visual acuity (VA) and refraction in the Northern Finland Birth Cohort Eye study was performed using the Nidek AR-360A autorefractometer, and the accuracy of the method for this population-based screening study was assessed. Measurements of the refractive error were obtained from the right eyes of 1238 subjects (mean age 47), first objectively with the AR-360A and then subjectively by an optometrist. Agreement with the subjective refraction was calculated for sphere, cylinder, mean spherical equivalent (MSE), the cylindrical vectors J45 and J0, and presbyopic correction (add). Visual acuity (VA) was measured using an ETDRS chart and the autorefractometer. The refractive error measured with the AR-360A was higher than the subjective refraction performed by the optometrist for sphere (0.007 D ± 0.24 D, p = 0.30) and also for cylinder (-0.16 D ± 0.20 D, p < 0.0005). The bias between the measurements of MSE, J45 and J0 was low: -0.07 D ± 0.22 D (p = 0.002), 0.01 D ± 0.43 D (p = 0.25) and -0.01 D ± 0.42 D (p = 0.43), respectively. The amount of add measured by the autorefractometer was higher than the subjective value by 0.35 D ± 0.29 D (p < 0.0005). There was a statistically significant correlation between VA (p < 0.0005) and the difference between the subjective and objective refraction. In 99.2% of the measurements, visual values were within one decimal line of each other. The Nidek AR-360A autorefractometer is an accurate tool for determining the refraction and VA in a clinical screening trial. © 2017 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  13. Random errors in interferometry with the least-squares method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Qi

    2011-01-20

    This investigation analyzes random errors in interferometric surface profilers using the least-squares method when random noise is present. Two types of random noise are considered here: intensity noise and position noise. Two formulas have been derived for estimating the standard deviations of the surface height measurements: one for estimating the standard deviation when only intensity noise is present, and the other for estimating the standard deviation when only position noise is present. Measurements on simulated noisy interferometric data have been performed, and the standard deviations of the simulated measurements have been compared with those theoretically derived. The relationships have also been discussed between the random error and the wavelength of the light source, and between the random error and the amplitude of the interference fringe.

  14. A miniature low-cost LWIR camera with a 160×120 microbolometer FPA

    NASA Astrophysics Data System (ADS)

    Tepegoz, Murat; Kucukkomurler, Alper; Tankut, Firat; Eminoglu, Selim; Akin, Tayfun

    2014-06-01

    This paper presents the development of a miniature LWIR thermal camera, MSE070D, which targets value-performance infrared imaging applications and utilizes a 160x120 CMOS-based microbolometer FPA. MSE070D features a universal USB interface that can communicate with computers and certain mobile devices on the market. In addition, it offers high flexibility and mobility: it is USB powered, and its low-power option eliminates the need for any external power source. MSE070D provides thermal imaging within its 1.65 in³ volume using a vacuum-packaged CMOS-based microbolometer thermal sensor, MS1670A-VP, achieving moderate performance at a very low production cost. MSE070D allows 30 fps thermal video imaging with the 160x120 FPA while achieving an NETD lower than 350 mK with f/1 optics. Test electronics and software, miniature camera cores, complete Application Programming Interfaces (APIs) and relevant documentation are available with MSE070D, as MikroSens wants to help its customers evaluate its products and to ensure quick time-to-market for systems manufacturers.

  15. The boundary layer moist static energy budget: Convection picks up moisture and leaves footprints in the marine boundary layer

    NASA Astrophysics Data System (ADS)

    de Szoeke, S. P.

    2017-12-01

    Averaged over the tropical marine boundary layer (BL), 130 W m⁻² of turbulent surface moist static energy (MSE) flux, 120 W m⁻² of which is evaporation, is balanced by upward MSE flux at the BL top due to 1) incorporation of cold air by downdrafts from deep convective clouds, and 2) turbulent entrainment of dry air into the BL. Cold saturated downdraft air and warm clear air entrained into the BL have distinct thermodynamic properties. This work observationally quantifies their respective MSE fluxes in the central Indian Ocean in 2011, under different convective conditions of the intraseasonal (40-90 day) Madden-Julian oscillation (MJO). Under convectively suppressed conditions, entrainment and downdraft fluxes export equal shares (60 W m⁻² each) of MSE from the BL. Downdraft fluxes are more variable, increasing for stronger convection. In the convectively active phase of the MJO, downdrafts export 90 W m⁻² from the BL, compared to 40 W m⁻² by entrainment. These processes control the internal energy, latent (condensation) energy, and MSE of the tropical marine atmospheric BL and thereby determine parcel buoyancy and the strength of tropical deep convection.
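
    For readers unfamiliar with the quantity being budgeted: moist static energy per unit mass is h = c_p T + g z + L_v q. The toy calculation below (all parcel values and the mass flux are assumed, illustrative numbers, not the observations in this record) shows how a downdraft MSE deficit times a mass flux yields an export flux in W m⁻².

        cp, g, Lv = 1004.0, 9.81, 2.5e6     # J/(kg K), m/s^2, J/kg

        def mse(T, z, q):
            """Moist static energy per unit mass: h = cp*T + g*z + Lv*q."""
            return cp*T + g*z + Lv*q

        # Assumed illustrative parcels: BL-mean air vs. a cold, dry downdraft
        h_bl   = mse(T=300.0, z=500.0, q=0.017)
        h_down = mse(T=297.0, z=500.0, q=0.014)

        M_down = 0.008                      # assumed downdraft mass flux (kg m^-2 s^-1)
        print("downdraft MSE export (W m-2):", M_down * (h_bl - h_down))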

  16. Least Squares Metric, Unidimensional Scaling of Multivariate Linear Models.

    ERIC Educational Resources Information Center

    Poole, Keith T.

    1990-01-01

    A general approach to least-squares unidimensional scaling is presented. Ordering information contained in the parameters is used to transform the standard squared-error loss function into a discrete rather than continuous form. Monte Carlo tests with 38,094 ratings of 261 senators and 1,258 representatives demonstrate the procedure's…

  17. Attenuation of the Squared Canonical Correlation Coefficient under Varying Estimates of Score Reliability

    ERIC Educational Resources Information Center

    Wilson, Celia M.

    2010-01-01

    Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability.…

  18. Characterization of frictional interference in closely-spaced reinforcements in MSE walls.

    DOT National Transportation Integrated Search

    2014-09-01

    This research addresses one of several knowledge gaps in the understanding of tall MSE wall behavior: prediction of reinforcement loads impacted by frictional interference of closely-spaced reinforcements associated with tall walls.

  19. Analysis of surface-water data network in Kansas for effectiveness in providing regional streamflow information

    USGS Publications Warehouse

    Medina, K.D.; Tasker, Gary D.

    1985-01-01

    The surface-water data network in Kansas was analyzed using generalized least squares regression for its effectiveness in providing regional streamflow information. The correlation and time-sampling error of the streamflow characteristics are considered in the generalized least squares method. Unregulated medium-flow, low-flow, and high-flow characteristics were selected as representative of the regional information that can be obtained from streamflow gaging station records, for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and/or adding new stations. The analysis used streamflow records for all currently operated stations that were not affected by regulation, and for discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The state was divided into three network areas, western, northeastern, and southeastern Kansas, and an analysis was made for three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean square error for each cost level could be obtained by adding new stations and discontinuing some of the present network stations. Large reductions in sampling mean square error for low-flow information could be accomplished in all three network areas, with western Kansas having the most dramatic reduction. The addition of new stations would be most beneficial for medium-flow information in western Kansas, and to lesser degrees in the other two areas. The reduction of sampling mean square error for high-flow information would benefit most from the addition of new stations in western Kansas, with the effect diminishing in the other two areas. Southeastern Kansas showed the smallest error reduction in high-flow information. A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas. (Author's abstract)

  20. Effects of Solar Geoengineering on Meridional Energy Transport and the ITCZ

    NASA Astrophysics Data System (ADS)

    Russotto, R. D.; Ackerman, T. P.; Frierson, D. M.

    2016-12-01

    The polar amplification of warming and the ability of the intertropical convergence zone (ITCZ) to shift to the north or south are two very important problems in climate science. Examining these behaviors in global climate models (GCMs) running solar geoengineering experiments is helpful not only for predicting the effects of solar geoengineering, but also for understanding how these processes work under increased CO2. Both polar amplification and ITCZ shifts are closely related to the meridional transport of moist static energy (MSE) by the atmosphere. In this study we examine changes in MSE transport in 10 fully coupled GCMs in Experiment G1 of the Geoengineering Model Intercomparison Project, in which the solar constant is reduced to compensate for abruptly quadrupled CO2 concentrations. In this experiment, poleward MSE transport decreases relative to preindustrial conditions in all models, in contrast to the CMIP5 abrupt4xCO2 experiment, in which poleward MSE transport increases. The increase in poleward MSE transport under increased CO2 is due to latent heat transport, as specific humidity increases faster in the tropics than at the poles; this mechanism is not present under G1 conditions, so the reduction in dry static energy transport due to a weakened equator-to-pole temperature gradient leads to weaker energy transport overall. Changes in cross-equatorial MSE transport in G1, meanwhile, are anticorrelated with shifts in the ITCZ. The northward ITCZ shift in G1 is 0.14 degrees in the multi-model mean and ranges from -0.33 to 0.89 degrees between the models. We examine the specific forcing and feedback terms responsible for changes in MSE transport in G1 by running experiments with a moist energy balance model. This work will help identify the largest sources of uncertainty regarding ITCZ shifts under solar geoengineering, and will help improve our understanding of the reasons for the residual polar amplification that occurs in the G1 experiment.

  1. Gene expression of porcine blastocysts from gilts fed organic or inorganic selenium and pyridoxine.

    PubMed

    Dalto, B D; Tsoi, S; Audet, I; Dyck, M K; Foxcroft, G R; Matte, J J

    2015-01-01

    In this study, we determined how maternal dietary supplementation with pyridoxine combined with different sources of selenium (Se) affected global gene expression of porcine expanded blastocysts (PEB) during pregnancy. Eighteen gilts were randomly assigned to one of three experimental diets (n=6 per treatment): i) a basal diet without supplemental Se or pyridoxine (CONT); ii) CONT+0.3 mg/kg of Na-selenite and 10 mg/kg of HCl-pyridoxine (MSeB610); and iii) CONT+0.3 mg/kg of Se-enriched yeast and 10 mg/kg of HCl-pyridoxine (OSeB610). All gilts were inseminated at their fifth post-pubertal estrus and killed 5 days later for embryo harvesting. A porcine embryo-specific microarray was used to detect differential gene expression between MSeB610 vs CONT, OSeB610 vs CONT, and OSeB610 vs MSeB610. CONT gilts had lower whole-blood Se and erythrocyte pyridoxal-5-P concentrations than supplemented gilts (P<0.05). No treatment effect was observed on blood plasma Se-glutathione peroxidase activity (P=0.57). There were 10, 247, and 96 differentially expressed genes for MSeB610 vs CONT, OSeB610 vs CONT, and OSeB610 vs MSeB610, respectively. No specific biological process was associated with MSeB610 vs CONT. However, for OSeB610 vs CONT, upregulated genes were related to global protein synthesis but not to selenoproteins. The stimulation of some genes related to the monooxygenase and thioredoxin families was confirmed by quantitative real-time RT-PCR. In conclusion, OSeB610 affects PEB metabolism more markedly than MSeB610. Neither Se source with pyridoxine influenced the Se-glutathione peroxidase metabolic pathway in the PEB, but OSeB610 selectively stimulated genes involved in antioxidant defense. © 2015 Society for Reproduction and Fertility.

  2. Development of LRFD resistance factors for mechanically stabilized earth (MSE) walls : [technical summary].

    DOT National Transportation Integrated Search

    2013-12-01

    Bridge approach embankments and many other : transportation-related applications make use of : reinforced earth retaining structures. Mechanically : Stabilized Earth (MSE) walls are designed under : the Load and Resistance Factor Design (LRFD) : meth...

  3. Performance assessment of MSE abutment walls in Indiana : final report.

    DOT National Transportation Integrated Search

    2017-05-01

    This report presents a numerical investigation of the behavior of steel strip-reinforced mechanically stabilized earth (MSE) direct bridge abutments under static loading. Finite element simulations were performed using an advanced two-surface boundin...

  4. MYB transcription factor gene involved in sex determination in Asparagus officinalis.

    PubMed

    Murase, Kohji; Shigenobu, Shuji; Fujii, Sota; Ueda, Kazuki; Murata, Takanori; Sakamoto, Ai; Wada, Yuko; Yamaguchi, Katsushi; Osakabe, Yuriko; Osakabe, Keishi; Kanno, Akira; Ozaki, Yukio; Takayama, Seiji

    2017-01-01

    Dioecy is a plant mating system in which individuals of a species are either male or female. Although many flowering plants evolved independently from hermaphroditism to dioecy, the molecular mechanism underlying this transition remains largely unknown. Sex determination in the dioecious plant Asparagus officinalis is controlled by X and Y chromosomes; the male and female karyotypes are XY and XX, respectively. Transcriptome analysis of A. officinalis buds showed that a MYB-like gene, Male Specific Expression 1 (MSE1), is specifically expressed in males. MSE1 exhibits tight linkage with the Y chromosome, specific expression in early anther development and loss of function on the X chromosome. Knockout of the MSE1 orthologue in Arabidopsis induces male sterility. Thus, MSE1 acts in sex determination in A. officinalis. © 2016 Molecular Biology Society of Japan and John Wiley & Sons Australia, Ltd.

  5. Modelling of the batch biosorption system: study on exchange of protons with cell wall-bound mineral ions.

    PubMed

    Mishra, Vishal

    2015-01-01

    The interchange of protons with cell wall-bound calcium and magnesium ions at the solution/bacterial cell surface interface in a biosorption system was studied at various proton concentrations. A mathematical model establishing the correlation between the concentration of protons and the active sites was developed and optimized. A sporadic limited-residence-time reactor was used to titrate the calcium and magnesium ions at each data point. The accuracy of the proposed mathematical model was estimated using error functions such as the nonlinear regression coefficient, the adjusted nonlinear regression coefficient, the chi-square test, the P-test, and the F-test. The values of the chi-square test (0.042-0.017), P-test (<0.001-0.04), sum of square errors (0.061-0.016), root mean square error (0.01-0.04), and F-test (2.22-19.92) reported here indicate the suitability of the model over a wide range of proton concentrations. The zeta potential of the bacterium surface at various proton concentrations was observed to validate the denaturation of active sites.
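
    The error functions named in this abstract are simple residual statistics. A generic sketch of how such a battery of fit metrics is computed (the observed/predicted values are hypothetical, and the formulas are the standard textbook ones, not the authors' code):

        import numpy as np

        def fit_metrics(y_obs, y_pred, n_params):
            """Common goodness-of-fit functions used to judge a sorption model."""
            resid = y_obs - y_pred
            n = y_obs.size
            sse  = np.sum(resid**2)                 # sum of square errors
            rmse = np.sqrt(sse / n)                 # root mean square error
            chi2 = np.sum(resid**2 / y_pred)        # chi-square statistic
            ss_tot = np.sum((y_obs - y_obs.mean())**2)
            r2   = 1 - sse / ss_tot                 # nonlinear regression coefficient
            r2_adj = 1 - (1 - r2) * (n - 1) / (n - n_params - 1)
            return dict(SSE=sse, RMSE=rmse, chi2=chi2, R2=r2, R2_adj=r2_adj)

        # Hypothetical observed vs. model-predicted bound-ion concentrations
        y_obs  = np.array([0.82, 0.74, 0.61, 0.50, 0.38])
        y_pred = np.array([0.80, 0.72, 0.63, 0.49, 0.40])
        print(fit_metrics(y_obs, y_pred, n_params=2))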

  6. Simple Forest Canopy Thermal Exitance Model

    NASA Technical Reports Server (NTRS)

    Smith, J. A.; Goltz, S. M.

    1999-01-01

    We describe a model to calculate brightness temperature and surface energy balance for a forest canopy system. The model is an extension of an earlier vegetation-only model by inclusion of a simple soil layer. The root mean square error in brightness temperature for a dense forest canopy was 2.5 °C. Surface energy balance predictions were also in good agreement. The corresponding root mean square errors for net radiation, latent heat, and sensible heat were 38.9, 30.7, and 41.4 W/sq m, respectively.

  7. Bayesian demosaicing using Gaussian scale mixture priors with local adaptivity in the dual tree complex wavelet packet transform domain

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Aelterman, Jan; Luong, Hiep; Pizurica, Aleksandra; Philips, Wilfried

    2013-02-01

    In digital cameras and mobile phones, there is an ongoing trend to increase image resolution, decrease sensor size, and use shorter exposure times. Because smaller sensors inherently lead to more noise and worse spatial resolution, digital post-processing techniques are required to resolve many of the resulting artifacts. Color filter arrays (CFAs), which use alternating patterns of color filters, are very popular for price and power-consumption reasons. However, color filter arrays require a post-processing technique such as demosaicing to recover full-resolution RGB images. Recently, there has been interest in techniques that perform the demosaicing and denoising jointly. This has the advantage that both can be performed optimally (e.g., in the MSE sense) for the considered noise model, while avoiding the artifacts introduced when demosaicing and denoising are applied sequentially. In this paper, we continue the line of research on wavelet-based demosaicing techniques. These approaches are computationally simple and well suited for combination with denoising. We derive Bayesian minimum mean squared error (MMSE) joint demosaicing and denoising rules in the complex wavelet packet domain, taking local adaptivity into account. As an image model, we use Gaussian scale mixtures, thereby taking advantage of the directionality of the complex wavelets. Our results show that this technique is well capable of reconstructing fine details in the image while removing all of the noise, at a relatively low computational cost. In particular, the complete reconstruction (including color correction, white balancing, etc.) of a 12-megapixel RAW image takes 3.5 s on a recent mid-range GPU.
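
    A minimal sketch of the MMSE idea underlying such estimators, assuming a plain Gaussian prior (the Gaussian scale mixtures of the paper make this shrinkage locally adaptive; the signal and noise variances below are assumptions):

        import numpy as np

        rng = np.random.default_rng(1)
        sigma_x, sigma_n = 1.0, 0.5             # assumed signal and noise stds
        x = rng.normal(0, sigma_x, 100000)      # "clean" wavelet coefficients
        y = x + rng.normal(0, sigma_n, x.size)  # noisy observations

        # For a Gaussian prior, the MMSE estimate is linear shrinkage of y
        shrink = sigma_x**2 / (sigma_x**2 + sigma_n**2)
        x_hat = shrink * y

        print("MSE raw :", np.mean((y - x)**2))      # ~ sigma_n^2
        print("MSE MMSE:", np.mean((x_hat - x)**2))  # ~ shrink * sigma_n^2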

  8. Stochastic approach to data analysis in fluorescence correlation spectroscopy.

    PubMed

    Rao, Ramachandra; Langoju, Rajesh; Gösch, Michael; Rigler, Per; Serov, Alexandre; Lasser, Theo

    2006-09-21

    Fluorescence correlation spectroscopy (FCS) has emerged as a powerful technique for measuring low concentrations of fluorescent molecules and their diffusion constants. In FCS, the experimental data are conventionally fitted using standard local search techniques, for example the Marquardt-Levenberg (ML) algorithm. A prerequisite for this class of algorithms is sound knowledge of the behavior of the fit parameters and, in most cases, good initial guesses for accurate fitting; otherwise fitting artifacts arise. For known fit models and with user experience about the behavior of the fit parameters, these local search algorithms work extremely well. However, for heterogeneous systems, or where automated data analysis is a prerequisite, there is a need for a procedure that treats FCS data fitting as a black box and generates reliable, accurate fit parameters for the chosen model. We present a computational approach to analyze FCS data by means of a stochastic algorithm for global search called PGSL, an acronym for Probabilistic Global Search Lausanne. This algorithm does not require any initial guesses and performs the fitting by searching for solutions through global sampling. It is flexible and, at the same time, computationally faster for multiparameter evaluations. We present a performance study of PGSL for two-component-with-triplet fits, along with a statistical study and a goodness-of-fit criterion for PGSL. The robustness of PGSL for parameter estimation on noisy experimental data is also verified. We further extend the scope of PGSL through a hybrid analysis wherein the output of PGSL is fed as the initial guess to ML. Reliability studies show that PGSL, and the hybrid combination of both, perform better than ML for various thresholds of the mean-squared error (MSE).
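
    PGSL itself is a specific published algorithm; the sketch below only illustrates the two-stage "global sampling, then local refinement" idea on a simplified one-component FCS model. The model form, parameter ranges, and noise level are assumptions, and plain random sampling stands in for PGSL's probabilistic search.

        import numpy as np
        from scipy.optimize import least_squares

        def g_model(p, tau):
            """One-component 3D diffusion FCS autocorrelation (simplified)."""
            N, tauD = p
            return 1.0/N / (1.0 + tau/tauD)

        rng = np.random.default_rng(2)
        tau = np.logspace(-6, 0, 100)                  # lag times (s)
        g_obs = g_model([5.0, 1e-3], tau) + rng.normal(0, 0.002, tau.size)

        # Stage 1: global sampling, no initial guess needed
        cands = np.column_stack([rng.uniform(0.1, 50, 2000),      # N
                                 10**rng.uniform(-6, 0, 2000)])   # tauD
        mses = [np.mean((g_model(p, tau) - g_obs)**2) for p in cands]
        p0 = cands[int(np.argmin(mses))]

        # Stage 2: local least-squares refinement (the "hybrid" step)
        sol = least_squares(lambda p: g_model(p, tau) - g_obs, p0)
        print("estimated N, tauD:", sol.x)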

  9. QualComp: a new lossy compressor for quality scores based on rate distortion theory

    PubMed Central

    2013-01-01

    Background Next Generation Sequencing technologies have revolutionized many fields in biology by reducing the time and cost required for sequencing. As a result, large amounts of sequencing data are being generated. A typical sequencing data file may occupy tens or even hundreds of gigabytes of disk space, prohibitively large for many users. This data consists of both the nucleotide sequences and per-base quality scores that indicate the level of confidence in the readout of these sequences. Quality scores account for about half of the required disk space in the commonly used FASTQ format (before compression), and therefore the compression of the quality scores can significantly reduce storage requirements and speed up analysis and transmission of sequencing data. Results In this paper, we present a new scheme for the lossy compression of the quality scores, to address the problem of storage. Our framework allows the user to specify the rate (bits per quality score) prior to compression, independent of the data to be compressed. Our algorithm can work at any rate, unlike other lossy compression algorithms. We envisage our algorithm as being part of a more general compression scheme that works with the entire FASTQ file. Numerical experiments show that we can achieve a better mean squared error (MSE) for small rates (bits per quality score) than other lossy compression schemes. For the organism PhiX, whose assembled genome is known and assumed to be correct, we show that it is possible to achieve a significant reduction in size with little compromise in performance on downstream applications (e.g., alignment). Conclusions QualComp is an open source software package, written in C and freely available for download at https://sourceforge.net/projects/qualcomp. PMID:23758828
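
    QualComp's actual codec applies rate-distortion theory with statistical modeling of the scores; the sketch below only demonstrates the basic rate-versus-MSE trade-off using a naive uniform quantizer on mock Phred scores (the score range and rates are assumptions).

        import numpy as np

        def quantize(q, rate, qmin=0, qmax=41):
            """Uniform scalar quantizer: 'rate' bits per quality score."""
            levels = 2**rate
            step = (qmax - qmin + 1) / levels
            idx = np.clip(((q - qmin) / step).astype(int), 0, levels - 1)
            return qmin + (idx + 0.5) * step           # reconstruction values

        rng = np.random.default_rng(3)
        q = rng.integers(0, 42, size=100000)           # mock Phred quality scores

        for rate in (1, 2, 3):
            mse = np.mean((q - quantize(q, rate))**2)
            print(f"{rate} bit(s)/score -> MSE = {mse:.2f}")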

  10. Determining Fuzzy Membership for Sentiment Classification: A Three-Layer Sentiment Propagation Model

    PubMed Central

    Zhao, Chuanjun; Wang, Suge; Li, Deyu

    2016-01-01

    Enormous quantities of review documents exist in forums, blogs, twitter accounts, and shopping web sites. Analysis of the sentiment information hidden in these review documents is very useful for consumers and manufacturers. The sentiment orientation and sentiment intensity of a review can be described in more detail by using a sentiment score than by using bipolar sentiment polarity. Existing methods for calculating review sentiment scores frequently use a sentiment lexicon or the locations of features in a sentence, a paragraph, and a document. In order to achieve more accurate sentiment scores of review documents, a three-layer sentiment propagation model (TLSPM) is proposed that uses three kinds of interrelations, those among documents, topics, and words. First, we use nine relationship pairwise matrices between documents, topics, and words. In TLSPM, we suppose that sentiment neighbors tend to have the same sentiment polarity and similar sentiment intensity in the sentiment propagation network. Then, we implement the sentiment propagation processes among the documents, topics, and words in turn. Finally, we can obtain the steady sentiment scores of documents by a continuous iteration process. Intuition might suggest that documents with strong sentiment intensity make larger contributions to classification than those with weak sentiment intensity. Therefore, we use the fuzzy membership of documents obtained by TLSPM as the weight of the text to train a fuzzy support vector machine model (FSVM). As compared with a support vector machine (SVM) and four other fuzzy membership determination methods, the results show that FSVM trained with TLSPM can enhance the effectiveness of sentiment classification. In addition, FSVM trained with TLSPM can reduce the mean square error (MSE) on seven sentiment rating prediction data sets. PMID:27846225

  11. Artificial neural network (ANN) method for modeling of sunset yellow dye adsorption using zinc oxide nanorods loaded on activated carbon: Kinetic and isotherm study.

    PubMed

    Maghsoudi, M; Ghaedi, M; Zinali, A; Ghaedi, A M; Habibi, M H

    2015-01-05

    In this research, ZnO nanorods loaded on activated carbon (ZnO-NRs-AC) were synthesized by a simple, low-cost, and nontoxic procedure. Characterization and identification were completed by techniques such as SEM and XRD analysis. A three-layer artificial neural network (ANN) model was applied for accurate prediction of the dye removal percentage from aqueous solution by ZnO-NRs-AC, based on 270 experimental data points. The network was trained using the experimental data obtained at optimum pH, with different ZnO-NRs-AC amounts (0.005-0.015 g) and 5-40 mg/L of sunset yellow dye over contact times of 0.5-30 min. The ANN model used the Levenberg-Marquardt algorithm (LMA), with a linear transfer function (purelin) at the output layer and a tangent sigmoid transfer function (tansig) in the hidden layer with 6 neurons. A minimum mean squared error (MSE) of 0.0008 and a coefficient of determination (R(2)) of 0.998 were found for the prediction and modeling of SY removal. The influence of parameters including adsorbent amount, initial dye concentration, pH, and contact time on the sunset yellow (SY) removal percentage was investigated, and optimal experimental conditions were ascertained: pH 2.0, 10 min contact time, and an adsorbent dose of 0.015 g. Equilibrium data fitted well with the Langmuir model, with a maximum adsorption capacity of 142.85 mg/g for 0.005 g of adsorbent. The adsorption of sunset yellow followed the pseudo-second-order rate equation. Copyright © 2014 Elsevier B.V. All rights reserved.
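
    A rough sketch of this kind of ANN regression, using scikit-learn on mock data. The 6-neuron tanh hidden layer mirrors the paper's topology, but scikit-learn has no Levenberg-Marquardt solver, so L-BFGS stands in; the synthetic response surface is invented purely for illustration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.metrics import mean_squared_error, r2_score

        rng = np.random.default_rng(4)
        # Mock inputs: adsorbent dose (g), dye conc. (mg/L), pH, contact time (min)
        X = np.column_stack([rng.uniform(0.005, 0.015, 270),
                             rng.uniform(5, 40, 270),
                             rng.uniform(2, 8, 270),
                             rng.uniform(0.5, 30, 270)])
        # Mock removal percentage (a stand-in response surface, not the real data)
        y = (100*(1 - np.exp(-80*X[:, 0])) - 0.4*X[:, 1] - 2*np.abs(X[:, 2] - 2)
             + 0.5*np.log1p(X[:, 3]) + rng.normal(0, 1, 270))

        net = MLPRegressor(hidden_layer_sizes=(6,), activation='tanh',
                           solver='lbfgs', max_iter=5000, random_state=0).fit(X, y)
        pred = net.predict(X)
        print("MSE:", mean_squared_error(y, pred), " R2:", r2_score(y, pred))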

  12. Artificial neural network (ANN) method for modeling of sunset yellow dye adsorption using zinc oxide nanorods loaded on activated carbon: Kinetic and isotherm study

    NASA Astrophysics Data System (ADS)

    Maghsoudi, M.; Ghaedi, M.; Zinali, A.; Ghaedi, A. M.; Habibi, M. H.

    2015-01-01

    In this research, ZnO nanorods loaded on activated carbon (ZnO-NRs-AC) were synthesized by a simple, low-cost, and nontoxic procedure. Characterization and identification were completed by techniques such as SEM and XRD analysis. A three-layer artificial neural network (ANN) model was applied for accurate prediction of the dye removal percentage from aqueous solution by ZnO-NRs-AC, based on 270 experimental data points. The network was trained using the experimental data obtained at optimum pH, with different ZnO-NRs-AC amounts (0.005-0.015 g) and 5-40 mg/L of sunset yellow dye over contact times of 0.5-30 min. The ANN model used the Levenberg-Marquardt algorithm (LMA), with a linear transfer function (purelin) at the output layer and a tangent sigmoid transfer function (tansig) in the hidden layer with 6 neurons. A minimum mean squared error (MSE) of 0.0008 and a coefficient of determination (R2) of 0.998 were found for the prediction and modeling of SY removal. The influence of parameters including adsorbent amount, initial dye concentration, pH, and contact time on the sunset yellow (SY) removal percentage was investigated, and optimal experimental conditions were ascertained: pH 2.0, 10 min contact time, and an adsorbent dose of 0.015 g. Equilibrium data fitted well with the Langmuir model, with a maximum adsorption capacity of 142.85 mg/g for 0.005 g of adsorbent. The adsorption of sunset yellow followed the pseudo-second-order rate equation.

  13. Comparison analysis between filtered back projection and algebraic reconstruction technique on microwave imaging

    NASA Astrophysics Data System (ADS)

    Ramadhan, Rifqi; Prabowo, Rian Gilang; Aprilliyani, Ria; Basari

    2018-02-01

    The number of victims of acute cancers and tumors grows each year, and cancer has become one of the leading causes of death worldwide. Cancer or tumor tissue cells are cells that grow abnormally and take over and damage the surrounding tissues. Cancers and tumors show no definite symptoms in their early stages and can attack tissues deep inside the body, where they cannot be identified by visual human observation. Therefore, an early detection system that is cheap, quick, simple, and portable is essential to anticipate the further development of a cancer or tumor. Among the available modalities, microwave imaging is considered a cheap, simple, and portable method. There are at least two simple image reconstruction algorithms, i.e., Filtered Back Projection (FBP) and the Algebraic Reconstruction Technique (ART), which have been adopted in some common modalities. In this paper, both algorithms are compared by reconstructing the image of an artificial tissue model (i.e., a phantom) that has two different dielectric distributions. We addressed two performance comparisons, namely qualitative and quantitative analysis. Qualitative analysis covers the smoothness of the image and the success in distinguishing dielectric differences by observing the image with the human eye. Quantitative analysis comprises histogram, Structural Similarity Index (SSIM), Mean Squared Error (MSE), and Peak Signal-to-Noise Ratio (PSNR) calculations. As a result, the quantitative parameters of FBP show better values than those of ART. However, ART is more capable of distinguishing two different dielectric values than FBP, owing to the higher contrast and wider grayscale distribution in ART.
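
    MSE and PSNR, two of the quantitative metrics used here, are straightforward to compute directly; in the sketch below the phantom and the two "reconstructions" are mock arrays standing in for the FBP and ART outputs, not the paper's images.

        import numpy as np

        def mse(a, b):
            return np.mean((a.astype(float) - b.astype(float))**2)

        def psnr(a, b, peak=255.0):
            m = mse(a, b)
            return np.inf if m == 0 else 10*np.log10(peak**2 / m)

        # Mock phantom and two mock reconstructions (stand-ins for FBP and ART)
        rng = np.random.default_rng(5)
        phantom = np.zeros((64, 64)); phantom[20:44, 20:44] = 200.0
        recon_fbp = phantom + rng.normal(0, 12, phantom.shape)
        recon_art = phantom + rng.normal(0, 18, phantom.shape)

        for name, img in [("FBP", recon_fbp), ("ART", recon_art)]:
            print(name, "MSE =", round(mse(phantom, img), 1),
                  "PSNR =", round(psnr(phantom, img), 2), "dB")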

  14. Accelerating simultaneous algebraic reconstruction technique with motion compensation using CUDA-enabled GPU.

    PubMed

    Pang, Wai-Man; Qin, Jing; Lu, Yuqiang; Xie, Yongming; Chui, Chee-Kong; Heng, Pheng-Ann

    2011-03-01

    To accelerate the simultaneous algebraic reconstruction technique (SART) with motion compensation for speedy, high-quality computed tomography reconstruction by exploiting CUDA-enabled GPUs. Two core techniques are proposed to fit SART into the CUDA architecture: (1) a ray-driven projection along with hardware trilinear interpolation, and (2) a voxel-driven back-projection that avoids redundant computation by using CUDA shared memory. We utilize the independence of each ray and each voxel in the two techniques to design CUDA kernels that represent a ray in the projection and a voxel in the back-projection, respectively; thus, significant parallelization and a performance boost can be achieved. For motion compensation, we rectify each ray's direction during the projection and back-projection stages based on a known motion vector field. Extensive experiments demonstrate that the proposed techniques provide faster reconstruction without compromising image quality. The processing rate is nearly 100 projections per second, about 150 times faster than a CPU-based SART. The reconstructed image is compared against ground truth visually and quantitatively by peak signal-to-noise ratio (PSNR) and line profiles. We further evaluate the reconstruction quality using quantitative metrics such as signal-to-noise ratio (SNR) and mean square error (MSE). All of these show that satisfactory results are achieved. The effects of major parameters such as the ray sampling interval and the relaxation parameter are also investigated in a series of experiments. A simulated dataset is used to test the effectiveness of our motion compensation technique; the results demonstrate that our reconstructed volume eliminates undesirable artifacts such as blurring. Our proposed method has the potential to present a 3D CT volume to physicians immediately after the projection data are acquired.

  15. Optimal sensor placement for spatial lattice structure based on genetic algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Gao, Wei-cheng; Sun, Yi; Xu, Min-jian

    2008-10-01

    Optimal sensor placement plays a key role in the structural health monitoring of spatial lattice structures. This paper considers the problem of locating sensors on a spatial lattice structure with the aim of maximizing the data information so that the structural dynamic behavior can be fully characterized. Based on the criterion of optimal sensor placement for modal tests, an improved genetic algorithm is introduced to find the optimal placement of sensors. The modal strain energy (MSE) and the modal assurance criterion (MAC) are taken as fitness functions, so that three placement designs are produced. A decimal two-dimensional array coding method, instead of binary coding, is proposed to encode the solution, and a forced mutation operator is introduced for when identical genes appear during the crossover procedure. A computational simulation of a 12-bay plain truss model demonstrates the feasibility of the three optimization approaches. The optimal sensor placements obtained with the improved genetic algorithm are compared with those obtained by an existing genetic algorithm using binary coding. Further, a comparison criterion based on the mean square error between the finite element method (FEM) mode shapes and the Guyan-expansion mode shapes identified by the data-driven stochastic subspace identification (SSI-DATA) method is employed to demonstrate the advantages of the different fitness functions. The results show that the innovations proposed here for the genetic algorithm enlarge the gene storage and improve the convergence of the algorithm. More importantly, all three optimal sensor placement methods provide reliable results and accurately identify the vibration characteristics of the 12-bay plain truss model.

  16. A Computer-Aided Diagnosis System for Measuring Carotid Artery Intima-Media Thickness (IMT) Using Quaternion Vectors.

    PubMed

    Kutbay, Uğurhan; Hardalaç, Fırat; Akbulut, Mehmet; Akaslan, Ünsal; Serhatlıoğlu, Selami

    2016-06-01

    This study investigates adjustable-distance fuzzy c-means segmentation of carotid Doppler images, together with quaternion-based convolution filters and saliency-mapping procedures. We developed imaging software to simplify the measurement of carotid artery intima-media thickness (IMT) on saliency-mapping images. Specialists evaluated the resulting images and compared them with the saliency-mapping images. We conducted imaging studies of 25 carotid Doppler images obtained by the Department of Cardiology at Fırat University. After applying fuzzy c-means segmentation and quaternion-based convolution to all Doppler images, we obtained a representation that can be analyzed easily by doctors using a bottom-up saliency model. These methods were applied to 25 carotid Doppler images and then interpreted by specialists. Color-filtering methods were used to obtain the carotid color images. Saliency mapping was performed on the resulting images, and the carotid artery IMT was detected and interpreted on the images from both methods as well as on the raw images, as shown in the Results. The results were also evaluated using the mean square error (MSE) against the raw IMT images, and the best-performing method was Quaternion-Based Saliency Mapping (QBSM), with MSEs of 0.0014 and 0.000191 mm² for the artery lumen diameters and the plaque diameters in the carotid arteries, respectively. We found that computer-based image processing methods applied to carotid Doppler images can aid doctors in their decision-making process. We developed software that can ease the measurement of carotid IMT for cardiologists and help them evaluate their findings.

  17. The spatial return level of aggregated hourly extreme rainfall in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Shaffie, Mardhiyyah; Eli, Annazirin; Wan Zin, Wan Zawiah; Jemain, Abdul Aziz

    2015-07-01

    This paper ascertains the spatial pattern of extreme rainfall distribution in Peninsular Malaysia at several short time intervals on an hourly basis. The research is motivated by the historical record of extreme rainfall in Peninsular Malaysia, where many hydrological disasters occur within short time periods. The hourly aggregation periods considered are 1, 2, 3, 6, 12, and 24 h. Many previous hydrological studies dealt with daily rainfall data; this study therefore enables comparison between the estimates from daily and hourly rainfall analyses, so as to identify the impact of extreme rainfall at shorter time scales. Return levels for the time aggregations considered are also computed. Parameter estimation using the L-moment method was conducted for four probability distributions, namely the generalized extreme value (GEV), generalized logistic (GLO), generalized Pareto (GPA), and Pearson type III (PE3) distributions. Aided by the L-moment diagram test and the mean square error (MSE) test, GLO was found to be the most appropriate distribution to represent the extreme rainfall data. For most return periods (10, 50, and 100 years), the spatial patterns revealed that the rainfall distribution across the peninsula differs between 1- and 24-h extreme rainfalls. The outcomes of this study provide additional information on patterns of extreme rainfall in Malaysia that may not be detected at a higher time scale such as daily, so that appropriate measures for shorter time scales of extreme rainfall can be planned. The implementation of such measures would help the authorities reduce the impact of disastrous natural events.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu Xiaoying; Ho, Shirley; Trac, Hy

    We investigate machine learning (ML) techniques for predicting the number of galaxies (N{sub gal}) that occupy a halo, given the halo's properties. These types of mappings are crucial for constructing the mock galaxy catalogs necessary for analyses of large-scale structure. The ML techniques proposed here distinguish themselves from traditional halo occupation distribution (HOD) modeling as they do not assume a prescribed relationship between halo properties and N{sub gal}. In addition, our ML approaches are only dependent on parent halo properties (like HOD methods), which is advantageous over subhalo-based approaches, as identifying subhalos correctly is difficult. We test two algorithms: support vector machines (SVM) and k-nearest-neighbor (kNN) regression. We take galaxies and halos from the Millennium simulation and predict N{sub gal} by training our algorithms on the following six halo properties: number of particles, M{sub 200}, {sigma}{sub v}, v{sub max}, half-mass radius, and spin. For Millennium, our predicted N{sub gal} values have a mean-squared error (MSE) of {approx}0.16 for both SVM and kNN. Our predictions match the overall distribution of halos reasonably well and the galaxy correlation function at large scales to {approx}5%-10%. In addition, we demonstrate a feature selection algorithm to isolate the halo parameters that are most predictive, a useful technique for understanding the mapping between halo properties and N{sub gal}. Lastly, we investigate these ML-based approaches in making mock catalogs for different galaxy subpopulations (e.g., blue, red, high M{sub star}, low M{sub star}). Given its non-parametric nature as well as its powerful predictive and feature selection capabilities, ML offers an interesting alternative for creating mock catalogs.
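
    A toy version of the kNN branch of this approach, on an entirely synthetic halo catalog (the property scalings and the occupation relation below are invented stand-ins, not the Millennium data):

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(6)
        n = 5000
        # Mock halo properties (a subset of the six used in the record)
        logM = rng.uniform(11, 15, n)
        X = np.column_stack([logM,
                             10**(0.33*logM) * rng.lognormal(0, 0.1, n),  # sigma_v
                             10**(0.30*logM) * rng.lognormal(0, 0.1, n),  # v_max
                             rng.lognormal(0, 0.3, n),                    # radius
                             rng.uniform(0, 0.1, n)])                     # spin
        # Mock occupation: rises with halo mass (a toy HOD, for illustration only)
        y = np.maximum(0, (logM - 11.5) * 2 + rng.normal(0, 0.4, n))

        Xs = StandardScaler().fit_transform(X)
        knn = KNeighborsRegressor(n_neighbors=10).fit(Xs[:4000], y[:4000])
        print("held-out MSE:", mean_squared_error(y[4000:], knn.predict(Xs[4000:])))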

  19. Using the Moist Static Energy Budget to Understand Storm Track Shifts across a Range of Timescales

    NASA Astrophysics Data System (ADS)

    Barpanda, P.; Shaw, T.

    2017-12-01

    Storm tracks shift meridionally in response to forcing across a range of time scales. Here we formulate a moist static energy (MSE) framework for storm track position and use it to understand storm track shifts in response to seasonal insolation, El Niño minus La Niña conditions, and direct (increased CO2 over land) and indirect (increased sea surface temperature) effects of increased CO2. Two methods (linearized Taylor series and imposed MSE flux divergence) are developed to quantify storm track shifts and decompose them into contributions from net energy (MSE input to the atmosphere minus atmospheric storage) and MSE flux divergence by the mean meridional circulation and stationary eddies. Net energy is not a dominant contribution across the time scales considered. The stationary eddy contribution dominates the storm-track shift in response to seasonal insolation, El Niño minus La Niña conditions, and CO2 direct effect in the Northern Hemisphere, whereas the mean meridional circulation contribution dominates the shift in response to CO2 indirect effect during northern winter and in the Southern Hemisphere during May and October. Overall, the MSE framework shows the seasonal storm-track shift in the Northern Hemisphere is connected to the stationary eddy MSE flux evolution. Furthermore, the equatorward storm-track shift during northern winter in response to El Niño minus La Niña conditions involves a different regime than the poleward shift in response to increased CO2 even though the tropical upper troposphere warms in both cases.

  20. Neurophysiological Basis of Multi-Scale Entropy of Brain Complexity and Its Relationship With Functional Connectivity.

    PubMed

    Wang, Danny J J; Jann, Kay; Fan, Chang; Qiao, Yang; Zang, Yu-Feng; Lu, Hanbing; Yang, Yihong

    2018-01-01

    Recently, non-linear statistical measures such as multi-scale entropy (MSE) have been introduced as indices of the complexity of electrophysiological and fMRI time series across multiple time scales. In this work, we investigated the neurophysiological underpinnings of the complexity (MSE) of electrophysiological and fMRI signals and their relations to functional connectivity (FC). MSE and FC analyses were performed on simulated data using a neural-mass-model-based brain network model with the Brain Dynamics Toolbox, on animal models with concurrent recording of fMRI and electrophysiology in conjunction with pharmacological manipulations, and on resting-state fMRI data from the Human Connectome Project. Our results show that the complexity of regional electrophysiological and fMRI signals is positively correlated with network FC. The associations between MSE and FC depend on the temporal scales or frequencies, with stronger associations between MSE and FC at lower temporal frequencies. Our results from theoretical modeling, animal experiments, and human fMRI indicate that (1) regional neural complexity and network FC may be two related aspects of the brain's information processing: the more complex the regional neural activity, the higher the FC this region has with other brain regions; and (2) MSE at high and low frequencies may represent local and distributed information processing across brain regions. Based on the literature and our data, we propose that the complexity of regional neural signals may serve as an index of the brain's capacity for information processing: increased complexity may indicate greater transition or exploration between different states of brain networks, and thereby a greater propensity for information processing.
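
    For reference, multi-scale entropy is coarse-graining followed by sample entropy at each scale. A naive O(n²) sketch follows; the embedding dimension m and tolerance r are the commonly used defaults, chosen here as assumptions, and the input is white noise standing in for an EEG/fMRI series.

        import numpy as np

        def sample_entropy(x, m=2, r_frac=0.2):
            """Sample entropy of a 1-D series (naive O(n^2) implementation)."""
            r = r_frac * np.std(x)
            def count(mm):
                temp = np.array([x[i:i+mm] for i in range(len(x) - mm + 1)])
                d = np.max(np.abs(temp[:, None] - temp[None, :]), axis=2)
                return np.sum(d <= r) - temp.shape[0]   # exclude self-matches
            B, A = count(m), count(m + 1)
            return -np.log(A / B) if A > 0 and B > 0 else np.nan

        def multiscale_entropy(x, scales=(1, 2, 3, 4, 5)):
            """Coarse-grain the series at each scale, then take sample entropy."""
            out = []
            for s in scales:
                n = len(x) // s
                coarse = x[:n*s].reshape(n, s).mean(axis=1)
                out.append(sample_entropy(coarse))
            return out

        rng = np.random.default_rng(7)
        signal = rng.normal(size=1000)      # stand-in for an EEG/fMRI series
        print(multiscale_entropy(signal))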

  1. MSE wall void repair effect on corrosion of reinforcement - phase 2 : specialty fill materials.

    DOT National Transportation Integrated Search

    2015-08-01

    This project provided information and recommendations for material selection for best : corrosion control of reinforcement in mechanically stabilized earth (MSE) walls with void repairs. The : investigation consisted of small- and large-scale experim...

  2. Modeling and analysis to quantify MSE wall behavior and performance.

    DOT National Transportation Integrated Search

    2009-08-01

    To better understand potential sources of adverse performance of mechanically stabilized earth (MSE) walls, a suite of analytical models was studied using the computer program FLAC, a numerical modeling computer program widely used in geotechnical en...

  3. Assessing corrosion of MSE wall reinforcement.

    DOT National Transportation Integrated Search

    2010-09-01

    The primary objective of this study was to extract reinforcement coupons from select MSE walls and document the extent of corrosion. In doing this, a baseline has been established against which coupons extracted in the future can be compared. A secon...

  4. Smooth empirical Bayes estimation of observation error variances in linear systems

    NASA Technical Reports Server (NTRS)

    Martz, H. F., Jr.; Lian, M. W.

    1972-01-01

    A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.

  5. Modeling Multiplicative Error Variance: An Example Predicting Tree Diameter from Stump Dimensions in Baldcypress

    Treesearch

    Bernard R. Parresol

    1993-01-01

    In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...

  6. Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Nagchaudhuri, Abhijit

    1998-01-01

    This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and the error sensor. The result is a set of adaptive transversal filter weights tuned to that transfer function. The Filtered-X LMS algorithm, an extension of LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science-instrument pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
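
    A minimal LMS system-identification sketch in the spirit described (identifying a transfer function as transversal-filter weights). The tap count, step size, and noise level are assumptions, and the "plant" is a random FIR filter rather than a spacecraft servo path.

        import numpy as np

        rng = np.random.default_rng(8)
        n_taps, mu, N = 8, 0.01, 5000
        h_true = rng.normal(0, 0.5, n_taps)       # unknown path to identify

        x = rng.normal(size=N)                    # excitation / reference signal
        d = np.convolve(x, h_true)[:N] + rng.normal(0, 0.01, N)  # measured output

        w = np.zeros(n_taps)                      # adaptive transversal filter weights
        for k in range(n_taps - 1, N):
            xk = x[k - n_taps + 1 : k + 1][::-1]  # newest sample first
            e = d[k] - w @ xk                     # instantaneous error
            w += 2 * mu * e * xk                  # LMS weight update

        print("weight-error norm:", np.linalg.norm(w - h_true))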

  7. A partial least squares based spectrum normalization method for uncertainty reduction for laser-induced breakdown spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou

    2013-10-01

    A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
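
    A schematic of how PLS regression can correct shot-to-shot fluctuation, on entirely synthetic "spectra" (the fluctuation model, line intensities, and concentration range are invented; this is not the authors' standardization model):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(9)
        n_shots, n_lines = 200, 12
        conc = rng.uniform(50, 70, n_shots)            # mock Cu concentration (%)
        # Mock multi-line intensities: proportional to concentration, scaled by a
        # shot-to-shot plasma fluctuation factor (the noise PLS must correct for)
        fluct = rng.lognormal(0, 0.15, n_shots)
        lines = np.outer(conc * fluct, rng.uniform(0.5, 2.0, n_lines))
        lines += rng.normal(0, 0.5, lines.shape)

        pls = PLSRegression(n_components=3).fit(lines[:150], conc[:150])
        pred = pls.predict(lines[150:]).ravel()
        print("RMSEP:", np.sqrt(mean_squared_error(conc[150:], pred)))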

  8. Super-linear Precision in Simple Neural Population Codes

    NASA Astrophysics Data System (ADS)

    Schwab, David; Fiete, Ila

    2015-03-01

    A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher Information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arises because it gives, through the Cramer-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well-known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly when considering the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
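
    The bound referred to here, in its usual form: for any unbiased estimator θ̂ of the stimulus θ from the population response r,

        \mathrm{MSE}(\hat\theta) \;=\; \mathbb{E}\big[(\hat\theta-\theta)^2\big]
        \;\ge\; \frac{1}{I(\theta)},
        \qquad
        I(\theta) \;=\; \mathbb{E}\!\left[\Big(\partial_\theta \log p(\mathbf{r}\mid\theta)\Big)^{2}\right].

    As the abstract notes, when firing is sparse no estimator may come close to attaining this bound, which is why the authors minimize the mean-squared error directly instead of maximizing I(θ).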

  9. Geospatial distribution modeling and determining suitability of groundwater quality for irrigation purpose using geospatial methods and water quality index (WQI) in Northern Ethiopia

    NASA Astrophysics Data System (ADS)

    Gidey, Amanuel

    2018-06-01

    Determining the suitability and vulnerability of groundwater quality for irrigation use is a key early warning and first aid for the careful management of groundwater resources, to diminish impacts on irrigation. This study was conducted to determine the overall suitability of groundwater quality for irrigation use and to generate spatial distribution maps in the Elala catchment, Northern Ethiopia. Thirty-nine groundwater samples were collected to analyze and map the water quality variables. Atomic absorption spectrophotometry, ultraviolet spectrophotometry, titration, and calculation methods were used for the laboratory groundwater quality analysis. ArcGIS, geospatial analysis tools, semivariogram model types, and interpolation methods were used to generate the geospatial distribution maps. Twelve and eight water quality variables were used to produce the weighted overlay and irrigation water quality index models, respectively. Root-mean-square error, mean square error, absolute square error, mean error, root-mean-square standardized error, and measured versus predicted values were used for cross-validation. The overall weighted overlay model result showed that 146 km2 of the area is highly suitable, 135 km2 moderately suitable, and 60 km2 unsuitable for irrigation use. The irrigation water quality index results show 10.26% of samples with no restriction, 23.08% with low restriction, 20.51% with moderate restriction, 15.38% with high restriction, and 30.76% with severe restriction for irrigation use. GIS and the irrigation water quality index are better methods for irrigation water resources management, to achieve full-yield irrigation production, to improve food security and sustain it over the long term, and to avoid increasing environmental problems for future generations.

  10. Methods for estimating the magnitude and frequency of peak streamflows at ungaged sites in and near the Oklahoma Panhandle

    USGS Publications Warehouse

    Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.

    2015-09-28

    Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all percentage of annual exceedance probabilities and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use for stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.

  11. Development of LRFD resistance factors for mechanically stabilized earth (MSE) walls.

    DOT National Transportation Integrated Search

    2013-12-01

    Over 100 centrifuge tests were conducted to assess Load and Resistance Factor : Design (LRFD) resistance factors for external stability of Mechanically Stabilized Earth (MSE) walls : founded on granular soils. In the case of sliding stability, the te...

  12. Design parameters and methodology for mechanically stabilized earth (MSE) walls.

    DOT National Transportation Integrated Search

    2014-10-01

    Since its appearance in 1970s, mechanically stabilized earth (MSE) walls have become a majority among all types of retaining walls due to their economics and satisfactory performance. The Texas Department of Transportation (TxDOT) has primarily adopt...

  13. Of Perspectives, Issues, and Politics in Materials Technology

    NASA Astrophysics Data System (ADS)

    Promisel, Nathan E.

    1985-12-01

    Recognizing the pervasive importance of materials science and engineering (MSE) to practically every facet of man’s life, this lecture takes a broad view of the origin and technical trends and achievements in MSE, briefly reviewing its history and relationship to society over many millennia, to the present day, with specific examples. Major emphasis, however, is placed on modern MSE as related to current national issues, using as illustrations of the latter natural resources, industry and the economy, research and development, education, and technology transfer. The discussion of these areas leads to consideration of the role of the Federal Government and the importance of and need for a coherent national policy to deal with critical issues, many of which are listed herein. Some important steps by the Government fostering high level coordination as well as cooperation among government, industry, and academe are cited. Having thus illustrated the pervasive and vital impact of MSE on society, and its current esteemed recognition and position of influence, the lecture concludes that in this period of global change — social, economic, and technological — there is a challenge to MSE to respond beneficially to societal needs more than ever before. The opportunity and mechanisms now exist. Greater participation in the public and political arenas, with mutual education, is indicated.

  14. Safety factor profiles from spectral motional Stark effect for ITER applications

    NASA Astrophysics Data System (ADS)

    Ko, Jinseok; Chung, Jinil; Wi, Han Min

    2017-10-01

    Depositions on the first mirror and multiple reflections on the other mirrors in the labyrinth of the optical system in the motional Stark effect (MSE) diagnostic for ITER are regarded as one of the main obstacles to overcome. One of the alternatives to the present-day conventional photoelastic-modulation-based MSE principles is the spectroscopic analyses on the motional Stark emissions where either the ratios among individual Stark multiplets or the amount of the Stark split are measured based on precise and accurate atomic data and models to ultimately provide the critical internal constraints in the magnetic equilibrium reconstruction. Equipped with the PEM-based conventional MSE hardware since 2015, the KSTAR MSE diagnostic system is capable of investigating the feasibility of the spectroscopic MSE approach particularly via comparative studies with the PEM approach. Available atomic data and models are used to analyze the beam emission spectra with a high-spectral-resolution spectrometer with a patent-pending dispersion calibration technology. Experimental validation on the atomic data and models is discussed in association with the effect of the existence of mirrors, the Faraday rotation in the relay optics media, and the background polarized light on the measured spectra. Work supported by the Ministry of Science, ICT and Future Planning, Korea.

  15. [NIR Assignment of Magnolol by 2D-COS Technology and Model Application in Huoxiangzhengqi Oral Liquid].

    PubMed

    Pei, Yan-ling; Wu, Zhi-sheng; Shi, Xin-yuan; Pan, Xiao-ning; Peng, Yan-fang; Qiao, Yan-jiang

    2015-08-01

    Near-infrared (NIR) spectroscopic assignment of magnolol was performed using a deuterated chloroform solvent and two-dimensional correlation spectroscopy (2D-COS). According to the synchronous spectra of deuterated chloroform and magnolol, 1365~1455, 1600~1720, 2000~2181, and 2275~2465 nm are characteristic absorption regions of magnolol. Relating these to the structure of magnolol, 1440 nm corresponds to the stretching vibration of the phenolic O-H group, 1679 nm to the stretching vibration of the aryl group and of the methyl group attached to it, 2117, 2304, 2339, and 2370 nm to combinations of the stretching, bending, and deformation vibrations of aryl C-H, and 2445 nm to the bending vibration of the methyl group linked to the aryl group; these bands are attributed to the characteristics of magnolol. Huoxiangzhengqi Oral Liquid was used to study magnolol: the characteristic band from spectral assignment and the bands selected by interval Partial Least Squares (iPLS) and Synergy interval Partial Least Squares (SiPLS) were used to establish Partial Least Squares (PLS) quantitative models. The coefficients of determination Rcal(2) and Rpre(2) were greater than 0.99, and the Root Mean Square Error of Calibration (RMSEC), Root Mean Square Error of Cross Validation (RMSECV), and Root Mean Square Error of Prediction (RMSEP) were very small. This indicates that the characteristic band from spectral assignment gives the same results as chemometric band selection in the PLS model. The work provides a reference for NIR spectral assignment of chemical constituents in Chinese Materia Medica and for the interpretation of NIR band selection.

  16. Brain Connectivity Alterations Are Associated with the Development of Dementia in Parkinson's Disease.

    PubMed

    Bertrand, Josie-Anne; McIntosh, Anthony R; Postuma, Ronald B; Kovacevic, Natasha; Latreille, Véronique; Panisset, Michel; Chouinard, Sylvain; Gagnon, Jean-François

    2016-04-01

    Dementia affects a high proportion of Parkinson's disease (PD) patients and poses a burden on caregivers and healthcare services. Electroencephalography (EEG) is a common, noninvasive, and inexpensive technique that can easily be used in clinical settings to identify brain functional abnormalities. Only a few studies have identified EEG abnormalities that can predict which PD patients are at higher risk of dementia. Brain connectivity EEG measures, such as multiscale entropy (MSE) and phase-locking value (PLV) analyses, may be more informative and sensitive to the brain alterations leading to dementia than previously used methods. This study followed 62 dementia-free PD patients for a mean of 3.4 years to identify cerebral alterations associated with dementia. Baseline resting-state EEG of patients who developed dementia (N = 18) was compared to that of patients who remained dementia-free (N = 44) and of 37 healthy subjects. MSE and PLV analyses were performed. Partial least squares statistical analysis revealed group differences associated with the development of dementia. Patients who developed dementia showed higher signal complexity and lower PLVs at low frequencies (mainly in the delta band) than patients who remained dementia-free and controls. Conversely, both patient groups showed lower signal variability and higher PLVs at high frequencies (mainly in the gamma band) compared to controls, with the strongest effect in patients who developed dementia. These findings suggest that specific disruptions of brain communication can be measured before PD patients develop dementia, providing a potential new marker to identify the patients at highest risk of developing dementia, who are the best candidates for neuroprotective trials.
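
    The abstract does not spell out the PLV computation; a minimal sketch of the standard definition (band-pass filter, Hilbert phases, magnitude of the mean phase-difference vector), with synthetic two-channel data standing in for EEG, follows.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def plv(x, y, fs, band):
            """Phase-locking value between two signals in a frequency band:
            PLV = | mean exp(i * (phi_x - phi_y)) |, which is near 0 for
            independent phases and 1 for perfect phase locking."""
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)],
                          btype="band")
            phase_x = np.angle(hilbert(filtfilt(b, a, x)))
            phase_y = np.angle(hilbert(filtfilt(b, a, y)))
            return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

        # Illustrative two-channel example: a shared delta-band (1-4 Hz)
        # rhythm plus independent noise on each "electrode".
        fs = 256
        t = np.arange(0, 10, 1 / fs)
        rng = np.random.default_rng(1)
        shared = np.sin(2 * np.pi * 2.5 * t)
        ch1 = shared + 0.5 * rng.standard_normal(t.size)
        ch2 = shared + 0.5 * rng.standard_normal(t.size)

        print(f"delta-band PLV = {plv(ch1, ch2, fs, (1.0, 4.0)):.2f}")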

  17. Interaction between drilled shaft and mechanically stabilized earth (MSE) wall : technical report.

    DOT National Transportation Integrated Search

    2017-04-01

    Drilled shafts are being constructed within the reinforced zone of mechanically stabilized earth (MSE) walls, especially in the case of overpass bridges where the drilled shafts carry the bridge deck or traffic signs. The interaction between the drill...

  18. Pullout resistance of mechanically stabilized earth wall steel strip reinforcement in uniform aggregate.

    DOT National Transportation Integrated Search

    2015-11-01

    A wide range of reinforcement-backfill combinations have been used in mechanically stabilized earth (MSE) walls. Steel strips are one type of reinforcement used to stabilize aggregate backfill through anchorage. In the current MSE wall design, pull...

  19. 0-6716 : design parameters and methodology for mechanically stabilized earth (MSE) walls.

    DOT National Transportation Integrated Search

    2013-08-01

    Since their appearance in the 1970s, mechanically stabilized earth (MSE) walls have become a majority among all types of retaining walls due to their economics and satisfactory performance. The Texas Department of Transportation (TxDOT) has p...

  20. Evaluation of geofabric in undercut on MSE wall stability.

    DOT National Transportation Integrated Search

    2011-04-01

    Compaction of granular base materials at sites with fine-grained native soils often causes unwanted material loss due to penetration at the base. In 2007, ODOT began placing geotextile fabrics in the undercut of MSE walls at the interface of the ...